API · KernelFunctions.jl

API Library

Functions

The KernelFunctions API comprises the following four functions.

KernelFunctions.kernelmatrix – Function
kernelmatrix(κ::Kernel, x::AbstractVector)

Compute the kernel κ for each pair of inputs in x. Returns a matrix of size (length(x), length(x)) satisfying kernelmatrix(κ, x)[p, q] == κ(x[p], x[q]).

kernelmatrix(κ::Kernel, x::AbstractVector, y::AbstractVector)

Compute the kernel κ for each pair of inputs in x and y. Returns a matrix of size (length(x), length(y)) satisfying kernelmatrix(κ, x, y)[p, q] == κ(x[p], y[q]).

kernelmatrix(κ::Kernel, X::AbstractMatrix; obsdim)
kernelmatrix(κ::Kernel, X::AbstractMatrix, Y::AbstractMatrix; obsdim)

If obsdim=1, equivalent to kernelmatrix(κ, RowVecs(X)) and kernelmatrix(κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix(κ, ColVecs(X)) and kernelmatrix(κ, ColVecs(X), ColVecs(Y)), respectively.

See also: ColVecs, RowVecs

source
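
For illustration, a minimal usage sketch; SqExponentialKernel is an assumed example kernel, and any other Kernel works the same way:

using KernelFunctions

k = SqExponentialKernel()
x = rand(5)      # five scalar inputs
X = rand(3, 5)   # five 3-dimensional inputs, stored as columns

K = kernelmatrix(k, x)            # 5×5, with K[p, q] == k(x[p], x[q])
K2 = kernelmatrix(k, ColVecs(X))  # the columns of X are the inputs
K3 = kernelmatrix(k, X; obsdim=2) # equivalent to the ColVecs call above
K2 == K3                          # true
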
KernelFunctions.kernelmatrix! – Function
kernelmatrix!(K::AbstractMatrix, κ::Kernel, x::AbstractVector)
 kernelmatrix!(K::AbstractMatrix, κ::Kernel, x::AbstractVector, y::AbstractVector)

In-place version of kernelmatrix where pre-allocated matrix K will be overwritten with the kernel matrix.

kernelmatrix!(K::AbstractMatrix, κ::Kernel, X::AbstractMatrix; obsdim)
kernelmatrix!(
    K::AbstractMatrix,
    κ::Kernel,
    X::AbstractMatrix,
    Y::AbstractMatrix;
    obsdim,
)

If obsdim=1, equivalent to kernelmatrix!(K, κ, RowVecs(X)) and kernelmatrix!(K, κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix!(K, κ, ColVecs(X)) and kernelmatrix!(K, κ, ColVecs(X), ColVecs(Y)), respectively.

See also: ColVecs, RowVecs

source
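
For illustration, a minimal sketch of the in-place variant, again assuming the SqExponentialKernel example kernel:

using KernelFunctions

k = SqExponentialKernel()
x, y = rand(4), rand(6)
K = Matrix{Float64}(undef, 4, 6)  # pre-allocated output buffer
kernelmatrix!(K, k, x, y)         # overwrites K with the kernel matrix
K == kernelmatrix(k, x, y)        # true
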
KernelFunctions.kernelmatrix_diag – Function
kernelmatrix_diag(κ::Kernel, x::AbstractVector)

Compute the diagonal of kernelmatrix(κ, x) efficiently.

kernelmatrix_diag(κ::Kernel, x::AbstractVector, y::AbstractVector)

Compute the diagonal of kernelmatrix(κ, x, y) efficiently. Requires that x and y are the same length.

kernelmatrix_diag(κ::Kernel, X::AbstractMatrix; obsdim)
kernelmatrix_diag(κ::Kernel, X::AbstractMatrix, Y::AbstractMatrix; obsdim)

If obsdim=1, equivalent to kernelmatrix_diag(κ, RowVecs(X)) and kernelmatrix_diag(κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix_diag(κ, ColVecs(X)) and kernelmatrix_diag(κ, ColVecs(X), ColVecs(Y)), respectively.

See also: ColVecs, RowVecs

source
KernelFunctions.kernelmatrix_diag! – Function
kernelmatrix_diag!(K::AbstractVector, κ::Kernel, x::AbstractVector)
 kernelmatrix_diag!(K::AbstractVector, κ::Kernel, x::AbstractVector, y::AbstractVector)

In-place version of kernelmatrix_diag.

kernelmatrix_diag!(K::AbstractVector, κ::Kernel, X::AbstractMatrix; obsdim)
kernelmatrix_diag!(
    K::AbstractVector,
    κ::Kernel,
    X::AbstractMatrix,
    Y::AbstractMatrix;
    obsdim
)

If obsdim=1, equivalent to kernelmatrix_diag!(K, κ, RowVecs(X)) and kernelmatrix_diag!(K, κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix_diag!(K, κ, ColVecs(X)) and kernelmatrix_diag!(K, κ, ColVecs(X), ColVecs(Y)), respectively.

See also: ColVecs, RowVecs

source
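
For illustration, a sketch relating the diagonal functions to kernelmatrix; Matern32Kernel is an assumed example kernel:

using KernelFunctions, LinearAlgebra

k = Matern32Kernel()
x = rand(10)
kernelmatrix_diag(k, x) ≈ diag(kernelmatrix(k, x))  # true, but avoids forming the full matrix

d = Vector{Float64}(undef, 10)
kernelmatrix_diag!(d, k, x)  # in-place variant, overwrites d
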

Input Types

The above API operates on collections of inputs. All collections of inputs in KernelFunctions.jl are represented as AbstractVectors. To understand this choice, please see the design notes on collections of inputs. The length of any such AbstractVector is equal to the number of inputs in the collection. For example, this means that

size(kernelmatrix(k, x)) == (length(x), length(x))

is always true for any Kernel k and AbstractVector x.

Univariate Inputs

If each input to your kernel is Real-valued, then any AbstractVector{<:Real} is a valid representation for a collection of inputs. More generally, it's completely fine to represent a collection of inputs of type T as, for example, a Vector{T}. However, this may not be the most efficient way to represent a collection of inputs. See Vector-Valued Inputs for an example.

Vector-Valued Inputs

We recommend that collections of vector-valued inputs are stored in an AbstractMatrix{<:Real} when possible, and wrapped inside a ColVecs or RowVecs to make their interpretation clear:

KernelFunctions.ColVecs – Type
ColVecs(X::AbstractMatrix)

A lightweight wrapper for an AbstractMatrix which interprets it as a vector-of-vectors, in which each column of X represents a single vector.

That is, by writing x = ColVecs(X), you are saying "x is a vector-of-vectors, each of which has length size(X, 1). The total number of vectors is size(X, 2)."

Phrased differently, ColVecs(X) says that X should be interpreted as a vector of horizontally-concatenated column-vectors, hence the name ColVecs.

julia> X = randn(2, 5);
 
 julia> x = ColVecs(X);
 
 julia> x[3] == X[:, 3]
 true

ColVecs is related to RowVecs via transposition:

julia> X = randn(2, 5);
 
 julia> ColVecs(X) == RowVecs(X')
true
source
KernelFunctions.RowVecs – Type
RowVecs(X::AbstractMatrix)

A lightweight wrapper for an AbstractMatrix which interprets it as a vector-of-vectors, in which each row of X represents a single vector.

That is, by writing x = RowVecs(X), you are saying "x is a vector-of-vectors, each of which has length size(X, 2). The total number of vectors is size(X, 1)."

Phrased differently, RowVecs(X) says that X should be interpreted as a vector of vertically-concatenated row-vectors, hence the name RowVecs.

Internally, the data continues to be represented as an AbstractMatrix, so using this type does not introduce any kind of performance penalty.

julia> X = randn(5, 2);
 
 julia> x = RowVecs(X);
 
 julia> x[3] == X[3, :]
 true

RowVecs is related to ColVecs via transposition:

julia> X = randn(5, 2);
 
 julia> RowVecs(X) == ColVecs(X')
true
source
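
For illustration, a sketch showing that the two wrappers describe the same collection of inputs; SqExponentialKernel is an assumed example kernel:

using KernelFunctions

k = SqExponentialKernel()
X = randn(10, 2)  # ten 2-dimensional inputs, one per row

K_rows = kernelmatrix(k, RowVecs(X))   # rows of X as inputs
K_cols = kernelmatrix(k, ColVecs(X'))  # the same inputs, via the transpose
K_rows ≈ K_cols                        # true
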

These types are specialised upon to ensure good performance e.g. when computing Euclidean distances between pairs of elements. The benefit of using this representation, rather than using a Vector{Vector{<:Real}}, is that optimised matrix-matrix multiplication functionality can be utilised when computing pairwise distances between inputs, which are needed for kernelmatrix computation.

Inputs for Multiple Outputs

KernelFunctions.jl views multi-output GPs as GPs on an extended input domain. For an explanation of this design choice, see the design notes on multi-output GPs.

An input to a multi-output Kernel should be a Tuple{T, Int}, whose first element specifies a location in the domain of the multi-output GP, and whose second element specifies which output the input corresponds to. The type of collections of inputs for multi-output GPs is therefore AbstractVector{<:Tuple{T, Int}}.

KernelFunctions.jl provides the following helper functions to reduce the cognitive load associated with working with multi-output kernels by dealing with transforming data from the formats in which it is commonly found into the format required by KernelFunctions. The intention is that users can pass their data to these functions, and use the returned values throughout their code, without having to worry further about correctly formatting their data for KernelFunctions' sake:

KernelFunctions.prepare_isotopic_multi_output_data – Method
prepare_isotopic_multi_output_data(x::AbstractVector, y::ColVecs)

Utility functionality to convert a collection of N = length(x) inputs x, and a vector-of-vectors y (efficiently represented by a ColVecs) into a format suitable for use with multi-output kernels.

y[n] is the vector-valued output corresponding to the input x[n]. Consequently, it is necessary that length(x) == length(y).

For example, if outputs are initially stored in a num_outputs × N matrix:

julia> x = [1.0, 2.0, 3.0];
 
 julia> Y = [1.1 2.1 3.1; 1.2 2.2 3.2]
 2×3 Matrix{Float64}:
 1.1  2.1  3.1
 1.2  2.2  3.2

julia> inputs, outputs = prepare_isotopic_multi_output_data(x, ColVecs(Y));

julia> outputs
6-element Vector{Float64}:
 1.1
 1.2
  2.1
  2.2
  3.1
 3.2

See also prepare_heterotopic_multi_output_data.

source
KernelFunctions.prepare_isotopic_multi_output_data – Method
prepare_isotopic_multi_output_data(x::AbstractVector, y::RowVecs)

Utility functionality to convert a collection of N = length(x) inputs x and output vectors y (efficiently represented by a RowVecs) into a format suitable for use with multi-output kernels.

y[n] is the vector-valued output corresponding to the input x[n]. Consequently, it is necessary that length(x) == length(y).

For example, if outputs are initially stored in an N × num_outputs matrix:

julia> x = [1.0, 2.0, 3.0];
 
 julia> Y = [1.1 1.2; 2.1 2.2; 3.1 3.2]
 3×2 Matrix{Float64}:
 1.1  1.2
 2.1  2.2
 3.1  3.2

julia> inputs, outputs = prepare_isotopic_multi_output_data(x, RowVecs(Y));

julia> outputs
6-element Vector{Float64}:
 1.1
 2.1
  3.1
  1.2
  2.2
 3.2

See also prepare_heterotopic_multi_output_data.

source
KernelFunctions.prepare_heterotopic_multi_output_data – Function
prepare_heterotopic_multi_output_data(
     x::AbstractVector, y::AbstractVector{<:Real}, output_indices::AbstractVector{Int},
 )

Utility functionality to convert a collection of inputs x, observations y, and output_indices into a format suitable for use with multi-output kernels. Handles the situation in which only one (or a subset) of the outputs is observed at each feature. Ensures that all arguments are compatible with one another, and returns a vector of inputs and a vector of outputs.

y[n] should be the observed value associated with output output_indices[n] at feature x[n].

julia> x = [1.0, 2.0, 3.0];
 
julia> y = [-1.0, 0.0, 1.0];

julia> output_indices = [3, 2, 1];

julia> inputs, outputs = prepare_heterotopic_multi_output_data(x, y, output_indices);

julia> outputs
 3-element Vector{Float64}:
  -1.0
   0.0
  1.0

See also prepare_isotopic_multi_output_data.

source

The input types returned by prepare_isotopic_multi_output_data can also be constructed manually:

KernelFunctions.MOInput – Type
MOInput(x::AbstractVector, out_dim::Integer)

A data type to accommodate modelling multi-dimensional output data. MOInput(x, out_dim) has length length(x) * out_dim.

julia> x = [1, 2, 3];
 
 julia> MOInput(x, 2)
 6-element KernelFunctions.MOInputIsotopicByOutputs{Int64, Vector{Int64}, Int64}:
 (1, 1)
 (2, 1)
  (3, 1)
  (1, 2)
  (2, 2)
 (3, 2)

As shown above, an MOInput represents a vector of tuples. The first length(x) elements represent the inputs for the first output, the second length(x) elements represent the inputs for the second output, etc. See Inputs for Multiple Outputs in the docs for more info.

MOInput will be deprecated in version 0.11 in favour of MOInputIsotopicByOutputs, and removed in version 0.12.

source

As with ColVecs and RowVecs for vector-valued input spaces, this type enables specialised implementations of e.g. kernelmatrix for MOInputs in some situations.
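
For illustration, a minimal sketch combining MOInput with a multi-output kernel; IndependentMOKernel wrapping an assumed Matern32Kernel is one concrete choice:

using KernelFunctions

x = range(0.0, 1.0; length=4)
mo_x = MOInput(x, 2)                       # 8 (input, output-index) pairs
k = IndependentMOKernel(Matern32Kernel())  # the same kernel for both outputs
K = kernelmatrix(k, mo_x)
size(K) == (8, 8)                          # true
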

To find out more about the background, read this review of kernels for vector-valued functions.

Generic Utilities

KernelFunctions also provides miscellaneous utility functions.

KernelFunctions.nystrom – Function
nystrom(k::Kernel, X::AbstractVector, S::AbstractVector{<:Integer})

Compute a factorization of a Nystrom approximation of the square kernel matrix of data vector X with respect to kernel k, using indices S. Returns a NystromFact struct which stores a Nystrom factorization satisfying:

\[\mathbf{K} \approx \mathbf{C}^{\intercal}\mathbf{W}\mathbf{C}\]

source
nystrom(k::Kernel, X::AbstractVector, r::Real)

Compute a factorization of a Nystrom approximation of the square kernel matrix of data vector X with respect to kernel k using a sample ratio of r. Returns a NystromFact struct which stores a Nystrom factorization satisfying:

\[\mathbf{K} \approx \mathbf{C}^{\intercal}\mathbf{W}\mathbf{C}\]

source
nystrom(k::Kernel, X::AbstractMatrix, S::AbstractVector{<:Integer}; obsdim)

If obsdim=1, equivalent to nystrom(k, RowVecs(X), S). If obsdim=2, equivalent to nystrom(k, ColVecs(X), S).

See also: ColVecs, RowVecs

source
nystrom(k::Kernel, X::AbstractMatrix, r::Real; obsdim)

If obsdim=1, equivalent to nystrom(k, RowVecs(X), r). If obsdim=2, equivalent to nystrom(k, ColVecs(X), r).

See also: ColVecs, RowVecs

source
KernelFunctions.NystromFact – Type
NystromFact

Type for storing a Nystrom factorization. The factorization contains two fields: W and C, two matrices satisfying:

\[\mathbf{K} \approx \mathbf{C}^{\intercal}\mathbf{W}\mathbf{C}\]

source
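
For illustration, a sketch of building and using a Nystrom factorization; SqExponentialKernel is an assumed example kernel, and W and C are the fields documented above:

using KernelFunctions

k = SqExponentialKernel()
x = randn(100)
F = nystrom(k, x, 0.1)       # use roughly 10% of the points as landmarks
K_approx = F.C' * F.W * F.C  # the approximation K ≈ Cᵀ W C
size(K_approx) == (100, 100) # true
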

Conditional Utilities

To keep the dependencies of KernelFunctions lean, some functionality is only available if specific other packages are explicitly loaded (using).

Kronecker.jl

https://github.com/MichielStock/Kronecker.jl

KernelFunctions.kronecker_kernelmatrix – Function
kronecker_kernelmatrix(
     k::Union{IndependentMOKernel,IntrinsicCoregionMOKernel}, x::MOI, y::MOI
) where {MOI<:IsotopicMOInputsUnion}

Requires Kronecker.jl: Computes the kernelmatrix for the IndependentMOKernel and the IntrinsicCoregionMOKernel, but returns a lazy Kronecker product. This object can be inverted or decomposed very efficiently. See also kernelmatrix.

source
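
For illustration, a sketch assuming a single-collection method kronecker_kernelmatrix(k, x) mirroring the two-collection signature shown above:

using KernelFunctions, Kronecker

x = MOInput(range(0.0, 1.0; length=5), 3)
k = IndependentMOKernel(SqExponentialKernel())
K_lazy = kronecker_kernelmatrix(k, x)  # lazy Kronecker product
Matrix(K_lazy) ≈ kernelmatrix(k, x)    # true, but K_lazy is far cheaper to invert
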
KernelFunctions.kernelkronmat – Function
kernelkronmat(κ::Kernel, X::AbstractVector{<:Real}, dims::Int) -> KroneckerPower

Return a KroneckerPower matrix on the D-dimensional input grid constructed by $\otimes_{i=1}^D X$, where D is given by dims.

Warning

Requires Kronecker.jl and for iskroncompatible(κ) to return true.

source
kernelkronmat(κ::Kernel, X::AbstractVector{<:AbstractVector}) -> KroneckerProduct

Returns a KroneckerProduct matrix on the grid built with the collection of vectors $\{X_i\}_{i=1}^D$: $\otimes_{i=1}^D X_i$.

Warning

Requires Kronecker.jl and for iskroncompatible(κ) to return true.

source

PDMats.jl

https://github.com/JuliaStats/PDMats.jl

KernelFunctions.kernelpdmat – Function
kernelpdmat(k::Kernel, X::AbstractVector)

Compute a positive-definite matrix in the form of a PDMat matrix (see PDMats.jl), with the Cholesky decomposition precomputed. The algorithm adds a diagonal "nugget" term to the kernel matrix which is increased until positive definiteness is achieved. The algorithm gives up with an error if the nugget becomes larger than 1% of the largest value in the kernel matrix.

source
kernelpdmat(k::Kernel, X::AbstractMatrix; obsdim)

If obsdim=1, equivalent to kernelpdmat(k, RowVecs(X)). If obsdim=2, equivalent to kernelpdmat(k, ColVecs(X)).

See also: ColVecs, RowVecs

source
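
For illustration, a minimal sketch; SqExponentialKernel is an assumed example kernel, and PDMats.jl must be loaded:

using KernelFunctions, PDMats, LinearAlgebra

k = SqExponentialKernel()
X = randn(2, 20)
P = kernelpdmat(k, ColVecs(X))  # PDMat with the Cholesky factor precomputed
P isa PDMat                     # true
logdet(P)                       # cheap, using the stored Cholesky factor
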
Design · KernelFunctions.jl

Rather than requiring collections of inputs to be AbstractVectors, one could instead accept an AbstractMatrix together with an obsdim keyword argument stating which dimension of the matrix indexes the observations. Under that convention, a kernelmatrix implementation for a sum of two kernels looks like this:

function kernelmatrix(k::KernelSum, x::AbstractMatrix; obsdim::Int)
    return kernelmatrix(k.kernels[1], x; obsdim=obsdim) +
        kernelmatrix(k.kernels[2], x; obsdim=obsdim)
end

While this prevents this package from having to pre-specify a convention, it doesn't resolve the length issue, or the issue of representing collections of inputs which aren't immediately represented as vectors. Moreover, it complicates the internals; in contrast, consider what this function looks like with an AbstractVector:

function kernelmatrix(k::KernelSum, x::AbstractVector)
     return kernelmatrix(k.kernels[1], x) + kernelmatrix(k.kernels[2], x)
end

This code is clearer (less visual noise), and has removed a possible bug – if the implementer of kernelmatrix forgets to pass the obsdim kwarg into each subsequent kernelmatrix call, it's possible to get the wrong answer.

This being said, we do support matrix-valued inputs – see Why We Have Support for Both.

AbstractVectors

Requiring all collections of inputs to be AbstractVectors resolves all of these problems, and ensures that the data is self-describing to the extent that KernelFunctions.jl requires.

Firstly, the question of how to interpret the columns and rows of a matrix of inputs is resolved. Users must wrap matrices which represent collections of inputs in either a ColVecs or RowVecs, both of which have clearly defined semantics which are hard to confuse.

By design, there is also no discrepancy between the number of inputs in the collection, and the length function – the length of a ColVecs, RowVecs, or Vector{<:Real} is equal to the number of inputs.

There is no loss of performance.

A collection of N Real-valued inputs can be represented by an AbstractVector{<:Real} of length N, rather than needing to use an AbstractMatrix{<:Real} of size either N x 1 or 1 x N. The same can be said for any other input type T, and new subtypes of AbstractVector can be added if particularly efficient ways exist to store collections of inputs of type T. A good example of this in practice is using Tuple{S, Int}, for some input type S, as the Inputs for Multiple Outputs.

This approach can also lead to clearer user code. A user need only wrap their inputs in a ColVecs or RowVecs once in their code, and this specification is automatically re-used everywhere in their code. In this sense, it is straightforward to write code in such a way that there is one unique source of "truth" about the way in which a particular data set should be interpreted. Conversely, the obsdim resolution requires that the obsdim keyword argument is passed around with the data every single time that you use it.

The benefits of the AbstractVector approach are likely most strongly felt when writing a substantial amount of code on top of KernelFunctions.jl – in the same way that using AbstractVectors inside KernelFunctions.jl removes the need for large amounts of keyword argument propagation, the same will be true of other code.

Why We Have Support for Both

In short: many people like matrices, and are familiar with obsdim-style keyword arguments.

All internals are implemented using AbstractVectors though, and the obsdim interface is just a thin layer of utility functionality which sits on top of this. To avoid confusion and silent errors, we do not favour a specific convention (rows or columns) but instead it is necessary to specify the obsdim keyword argument explicitly.

Kernels for Multiple-Outputs

There are two equally-valid perspectives on multi-output kernels: they can either be treated as matrix-valued kernels, or as standard kernels on an extended input domain. Each of these perspectives is convenient in different circumstances, but the latter greatly simplifies the incorporation of multi-output kernels in KernelFunctions.

More concretely, let k_mat be a matrix-valued kernel, mapping pairs of inputs of type T to matrices of size P x P to describe the covariance between P outputs. Given inputs x and y of type T, and integers p and q, we can always find an equivalent standard kernel k mapping from pairs of inputs of type Tuple{T, Int} to the Reals as follows:

k((x, p), (y, q)) = k_mat(x, y)[p, q]
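
For illustration, a sketch of this equivalence using IndependentMOKernel, whose matrix-valued counterpart is diagonal; the wrapped SqExponentialKernel is an assumed choice:

using KernelFunctions

k = IndependentMOKernel(SqExponentialKernel())

k((0.3, 1), (0.5, 1))  # covariance between output 1 at 0.3 and output 1 at 0.5
k((0.3, 1), (0.5, 2))  # 0.0, since distinct outputs are independent
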

This ability to treat multi-output kernels as single-output kernels is very helpful, as it means that there is no need to introduce additional concepts into the API of KernelFunctions.jl, just additional kernels! This in turn simplifies downstream code as they don't need to "know" about the existence of multi-output kernels in addition to standard kernels. For example, GP libraries built on top of KernelFunctions.jl just need to know about Kernels, and they get multi-output kernels, and hence multi-output GPs, for free.

Where there is the need to specialise implementations for multi-output kernels, this is done in an encapsulated manner – parts of KernelFunctions that have nothing to do with multi-output kernels know nothing about the existence of multi-output kernels.

Gaussian process priors

We can now visualize a kernel and show samples from a Gaussian process with a given kernel:

plot(visualize(SqExponentialKernel()); size=(800, 210), bottommargin=5mm, topmargin=5mm)
[Figure: prior samples drawn from a Gaussian process with the SqExponentialKernel]

Kernel comparison

This also allows us to compare different kernels:

kernels = [
    Matern12Kernel(),
    ...
]

[Figure: grid of plots comparing prior samples for each kernel]
Package and system information

  [31c24e10] Distributions v0.25.107
  [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`
  [98b081ad] Literate v2.16.1
  [91a5bcdd] Plots v1.40.2
  [37e2e46d] LinearAlgebra
  [9a3f8284] Random

System information

Julia Version 1.10.2
Commit bd47eca2c8a (2024-03-01 10:14 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
Environment:
  JULIA_DEBUG = Documenter
  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src

This page was generated using Literate.jl.
"\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n" ], "image/svg+xml": [ "\n", "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - 
"\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - 
"\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", 
+ "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n" ] }, @@ -5796,7 +5796,7 @@ " [31c24e10] Distributions v0.25.107\n", " [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`\n", " [98b081ad] Literate v2.16.1\n", - " [91a5bcdd] Plots v1.40.1\n", + " [91a5bcdd] Plots v1.40.2\n", " [37e2e46d] LinearAlgebra\n", " [9a3f8284] Random\n", "\n", @@ -5807,8 +5807,8 @@ "
\n", "System information (click to expand)\n", "
\n",
-    "Julia Version 1.10.0\n",
-    "Commit 3120989f39b (2023-12-25 18:01 UTC)\n",
+    "Julia Version 1.10.2\n",
+    "Commit bd47eca2c8a (2024-03-01 10:14 UTC)\n",
     "Build Info:\n",
     "  Official https://julialang.org/ release\n",
     "Platform Info:\n",
@@ -5817,7 +5817,7 @@
     "  WORD_SIZE: 64\n",
     "  LIBM: libopenlibm\n",
     "  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\n",
-    "  Threads: 1 on 4 virtual cores\n",
+    "Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)\n",
     "Environment:\n",
     "  JULIA_DEBUG = Documenter\n",
     "  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src\n",
@@ -5842,11 +5842,11 @@
    "file_extension": ".jl",
    "mimetype": "application/julia",
    "name": "julia",
-   "version": "1.10.0"
+   "version": "1.10.2"
   },
   "kernelspec": {
    "name": "julia-1.10",
-   "display_name": "Julia 1.10.0",
+   "display_name": "Julia 1.10.2",
    "language": "julia"
   }
  },
diff --git a/dev/examples/kernel-ridge-regression/Manifest.toml b/dev/examples/kernel-ridge-regression/Manifest.toml
index 35c200dba..a4f97291a 100644
--- a/dev/examples/kernel-ridge-regression/Manifest.toml
+++ b/dev/examples/kernel-ridge-regression/Manifest.toml
@@ -1,6 +1,6 @@
 # This file is machine-generated - editing it directly is not advised
 
-julia_version = "1.10.0"
+julia_version = "1.10.2"
 manifest_format = "2.0"
 project_hash = "871a60b57cfc97ea19ecb86f8d3c3aac749bf4ef"
 
@@ -39,9 +39,9 @@ version = "0.5.1"
 
 [[deps.ChainRulesCore]]
 deps = ["Compat", "LinearAlgebra"]
-git-tree-sha1 = "ad25e7d21ce10e01de973cdc68ad0f850a953c52"
+git-tree-sha1 = "575cd02e080939a33b6df6c5853d14924c08e35b"
 uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
-version = "1.21.1"
+version = "1.23.0"
 weakdeps = ["SparseArrays"]
 
     [deps.ChainRulesCore.extensions]
@@ -83,9 +83,9 @@ version = "0.12.10"
 
 [[deps.Compat]]
 deps = ["TOML", "UUIDs"]
-git-tree-sha1 = "75bd5b6fc5089df449b5d35fa501c846c9b6549b"
+git-tree-sha1 = "c955881e3c981181362ae4088b35995446298b80"
 uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
-version = "4.12.0"
+version = "4.14.0"
 weakdeps = ["Dates", "LinearAlgebra"]
 
     [deps.Compat.extensions]
@@ -94,7 +94,7 @@ weakdeps = ["Dates", "LinearAlgebra"]
 [[deps.CompilerSupportLibraries_jll]]
 deps = ["Artifacts", "Libdl"]
 uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
-version = "1.0.5+1"
+version = "1.1.0+0"
 
 [[deps.CompositionsBase]]
 git-tree-sha1 = "802bb88cd69dfd1509f6670416bd4434015693ad"
@@ -109,9 +109,9 @@ version = "0.1.2"
 
 [[deps.ConcurrentUtilities]]
 deps = ["Serialization", "Sockets"]
-git-tree-sha1 = "8cfa272e8bdedfa88b6aefbbca7c19f1befac519"
+git-tree-sha1 = "9c4708e3ed2b799e6124b5673a712dda0b596a9b"
 uuid = "f0e56b4a-5159-44fe-b623-3e5288b988bb"
-version = "2.3.0"
+version = "2.3.1"
 
 [[deps.Contour]]
 git-tree-sha1 = "d05d9e7b7aedff4e5b51a029dced05cfb6125781"
@@ -125,9 +125,9 @@ version = "1.16.0"
 
 [[deps.DataStructures]]
 deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
-git-tree-sha1 = "ac67408d9ddf207de5cfa9a97e114352430f01ed"
+git-tree-sha1 = "0f4b5d62a88d8f59003e43c25a8a90de9eb76317"
 uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
-version = "0.18.16"
+version = "0.18.18"
 
 [[deps.Dates]]
 deps = ["Printf"]
@@ -240,11 +240,10 @@ git-tree-sha1 = "21efd19106a55620a188615da6d3d06cd7f6ee03"
 uuid = "a3f928ae-7b40-5064-980b-68af3947d34b"
 version = "2.13.93+0"
 
-[[deps.Formatting]]
-deps = ["Printf"]
-git-tree-sha1 = "8339d61043228fdd3eb658d86c926cb282ae72a8"
-uuid = "59287772-0a20-5a39-b81b-1366585eb4c0"
-version = "0.4.2"
+[[deps.Format]]
+git-tree-sha1 = "f3cf88025f6d03c194d73f5d13fee9004a108329"
+uuid = "1fa38f19-a742-5d3f-a2b9-30dd87b9d5f8"
+version = "1.3.6"
 
 [[deps.FreeType2_jll]]
 deps = ["Artifacts", "Bzip2_jll", "JLLWrappers", "Libdl", "Zlib_jll"]
@@ -272,15 +271,15 @@ version = "3.3.9+0"
 
 [[deps.GR]]
 deps = ["Artifacts", "Base64", "DelimitedFiles", "Downloads", "GR_jll", "HTTP", "JSON", "Libdl", "LinearAlgebra", "Pkg", "Preferences", "Printf", "Random", "Serialization", "Sockets", "TOML", "Tar", "Test", "UUIDs", "p7zip_jll"]
-git-tree-sha1 = "3458564589be207fa6a77dbbf8b97674c9836aab"
+git-tree-sha1 = "3437ade7073682993e092ca570ad68a2aba26983"
 uuid = "28b8d3ca-fb5f-59d9-8090-bfdbd6d07a71"
-version = "0.73.2"
+version = "0.73.3"
 
 [[deps.GR_jll]]
 deps = ["Artifacts", "Bzip2_jll", "Cairo_jll", "FFMPEG_jll", "Fontconfig_jll", "FreeType2_jll", "GLFW_jll", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Libtiff_jll", "Pixman_jll", "Qt6Base_jll", "Zlib_jll", "libpng_jll"]
-git-tree-sha1 = "77f81da2964cc9fa7c0127f941e8bce37f7f1d70"
+git-tree-sha1 = "a96d5c713e6aa28c242b0d25c1347e258d6541ab"
 uuid = "d2c73de3-f751-5644-a686-071e5b155ba9"
-version = "0.73.2+0"
+version = "0.73.3+0"
 
 [[deps.Gettext_jll]]
 deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"]
@@ -307,9 +306,9 @@ version = "1.0.2"
 
 [[deps.HTTP]]
 deps = ["Base64", "CodecZlib", "ConcurrentUtilities", "Dates", "ExceptionUnwrapping", "Logging", "LoggingExtras", "MbedTLS", "NetworkOptions", "OpenSSL", "Random", "SimpleBufferStream", "Sockets", "URIs", "UUIDs"]
-git-tree-sha1 = "abbbb9ec3afd783a7cbd82ef01dcd088ea051398"
+git-tree-sha1 = "db864f2d91f68a5912937af80327d288ea1f3aee"
 uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3"
-version = "1.10.1"
+version = "1.10.3"
 
 [[deps.HarfBuzz_jll]]
 deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "Graphite2_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg"]
@@ -358,13 +357,13 @@ version = "0.21.4"
 
 [[deps.JpegTurbo_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
-git-tree-sha1 = "60b1194df0a3298f460063de985eae7b01bc011a"
+git-tree-sha1 = "3336abae9a713d2210bb57ab484b1e065edd7d23"
 uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8"
-version = "3.0.1+0"
+version = "3.0.2+0"
 
 [[deps.KernelFunctions]]
 deps = ["ChainRulesCore", "Compat", "CompositionsBase", "Distances", "FillArrays", "Functors", "IrrationalConstants", "LinearAlgebra", "LogExpFunctions", "Random", "Requires", "SpecialFunctions", "Statistics", "StatsBase", "TensorCore", "Test", "ZygoteRules"]
-git-tree-sha1 = "654dead9bd3313311b61087ab7a0cf2b7a935cb1"
+git-tree-sha1 = "9d578aa8af14cd36ba744bb46c53925bca7a65f6"
 repo-rev = "master"
 repo-url = "/home/runner/work/KernelFunctions.jl/KernelFunctions.jl"
 uuid = "ec8451be-7e33-11e9-00cf-bbf324bd1392"
@@ -400,10 +399,10 @@ uuid = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
 version = "1.3.1"
 
 [[deps.Latexify]]
-deps = ["Formatting", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "OrderedCollections", "Printf", "Requires"]
-git-tree-sha1 = "f428ae552340899a935973270b8d98e5a31c49fe"
+deps = ["Format", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "OrderedCollections", "Requires"]
+git-tree-sha1 = "cad560042a7cc108f5a4c24ea1431a9221f22c1b"
 uuid = "23fbe1c1-3f47-55db-b15f-69d7ec21a316"
-version = "0.16.1"
+version = "0.16.2"
 
     [deps.Latexify.extensions]
     DataFramesExt = "DataFrames"
@@ -483,10 +482,10 @@ uuid = "89763e89-9b03-5906-acba-b20f662cd828"
 version = "4.5.1+1"
 
 [[deps.Libuuid_jll]]
-deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
-git-tree-sha1 = "7f3efec06033682db852f8b3bc3c1d2b0a0ab066"
+deps = ["Artifacts", "JLLWrappers", "Libdl"]
+git-tree-sha1 = "0a04a1318df1bf510beb2562cf90fb0c386f58c4"
 uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700"
-version = "2.36.0+0"
+version = "2.39.3+1"
 
 [[deps.LinearAlgebra]]
 deps = ["Libdl", "OpenBLAS_jll", "libblastrampoline_jll"]
@@ -500,9 +499,9 @@ version = "2.16.1"
 
 [[deps.LogExpFunctions]]
 deps = ["DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
-git-tree-sha1 = "7d6dd4e9212aebaeed356de34ccf262a3cd415aa"
+git-tree-sha1 = "18144f3e9cbe9b15b070288eef858f71b291ce37"
 uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
-version = "0.3.26"
+version = "0.3.27"
 
     [deps.LogExpFunctions.extensions]
     LogExpFunctionsChainRulesCoreExt = "ChainRulesCore"
@@ -581,7 +580,7 @@ version = "1.3.5+1"
 [[deps.OpenBLAS_jll]]
 deps = ["Artifacts", "CompilerSupportLibraries_jll", "Libdl"]
 uuid = "4536629a-c528-5b80-bd46-f80d51c5b363"
-version = "0.3.23+2"
+version = "0.3.23+4"
 
 [[deps.OpenLibm_jll]]
 deps = ["Artifacts", "Libdl"]
@@ -590,9 +589,9 @@ version = "0.8.1+2"
 
 [[deps.OpenSSL]]
 deps = ["BitFlags", "Dates", "MozillaCACerts_jll", "OpenSSL_jll", "Sockets"]
-git-tree-sha1 = "51901a49222b09e3743c65b8847687ae5fc78eb2"
+git-tree-sha1 = "af81a32750ebc831ee28bdaaba6e1067decef51e"
 uuid = "4d8831e6-92b7-49fb-bdf8-b643e874388c"
-version = "1.4.1"
+version = "1.4.2"
 
 [[deps.OpenSSL_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
@@ -658,15 +657,15 @@ version = "3.1.0"
 
 [[deps.PlotUtils]]
 deps = ["ColorSchemes", "Colors", "Dates", "PrecompileTools", "Printf", "Random", "Reexport", "Statistics"]
-git-tree-sha1 = "862942baf5663da528f66d24996eb6da85218e76"
+git-tree-sha1 = "7b1a9df27f072ac4c9c7cbe5efb198489258d1f5"
 uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043"
-version = "1.4.0"
+version = "1.4.1"
 
 [[deps.Plots]]
 deps = ["Base64", "Contour", "Dates", "Downloads", "FFMPEG", "FixedPointNumbers", "GR", "JLFzf", "JSON", "LaTeXStrings", "Latexify", "LinearAlgebra", "Measures", "NaNMath", "Pkg", "PlotThemes", "PlotUtils", "PrecompileTools", "Printf", "REPL", "Random", "RecipesBase", "RecipesPipeline", "Reexport", "RelocatableFolders", "Requires", "Scratch", "Showoff", "SparseArrays", "Statistics", "StatsBase", "UUIDs", "UnicodeFun", "UnitfulLatexify", "Unzip"]
-git-tree-sha1 = "c4fa93d7d66acad8f6f4ff439576da9d2e890ee0"
+git-tree-sha1 = "3c403c6590dd93b36752634115e20137e79ab4df"
 uuid = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
-version = "1.40.1"
+version = "1.40.2"
 
     [deps.Plots.extensions]
     FileIOExt = "FileIO"
@@ -684,15 +683,15 @@ version = "1.40.1"
 
 [[deps.PrecompileTools]]
 deps = ["Preferences"]
-git-tree-sha1 = "03b4c25b43cb84cee5c90aa9b5ea0a78fd848d2f"
+git-tree-sha1 = "5aa36f7049a63a1528fe8f7c3f2113413ffd4e1f"
 uuid = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
-version = "1.2.0"
+version = "1.2.1"
 
 [[deps.Preferences]]
 deps = ["TOML"]
-git-tree-sha1 = "00805cd429dcb4870060ff49ef443486c262e38e"
+git-tree-sha1 = "9306f6085165d270f7e3db02af26a400d580f5c6"
 uuid = "21216c6a-2e73-6563-6e65-726566657250"
-version = "1.4.1"
+version = "1.4.3"
 
 [[deps.Printf]]
 deps = ["Unicode"]
@@ -826,9 +825,9 @@ version = "0.34.2"
 
 [[deps.StatsFuns]]
 deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"]
-git-tree-sha1 = "f625d686d5a88bcd2b15cd81f18f98186fdc0c9a"
+git-tree-sha1 = "cef0472124fab0695b58ca35a77c6fb942fdab8a"
 uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c"
-version = "1.3.0"
+version = "1.3.1"
 
     [deps.StatsFuns.extensions]
     StatsFunsChainRulesCoreExt = "ChainRulesCore"
@@ -868,9 +867,9 @@ deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
 uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
 
 [[deps.TranscodingStreams]]
-git-tree-sha1 = "54194d92959d8ebaa8e26227dbe3cdefcdcd594f"
+git-tree-sha1 = "3caa21522e7efac1ba21834a03734c57b4611c7e"
 uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa"
-version = "0.10.3"
+version = "0.10.4"
 weakdeps = ["Random", "Test"]
 
     [deps.TranscodingStreams.extensions]
@@ -939,9 +938,9 @@ version = "1.31.0+0"
 
 [[deps.XML2_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Zlib_jll"]
-git-tree-sha1 = "801cbe47eae69adc50f36c3caec4758d2650741b"
+git-tree-sha1 = "07e470dabc5a6a4254ffebc29a1b3fc01464e105"
 uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a"
-version = "2.12.2+0"
+version = "2.12.5+0"
 
 [[deps.XSLT_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"]
@@ -951,9 +950,9 @@ version = "1.1.34+0"
 
 [[deps.XZ_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
-git-tree-sha1 = "522b8414d40c4cbbab8dee346ac3a09f9768f25d"
+git-tree-sha1 = "37195dcb94a5970397ad425b95a9a26d0befce3a"
 uuid = "ffd25f8a-64ca-5728-b0f7-c24cf3aae800"
-version = "5.4.5+0"
+version = "5.6.0+0"
 
 [[deps.Xorg_libICE_jll]]
 deps = ["Libdl", "Pkg"]
@@ -1171,9 +1170,9 @@ version = "1.18.0+0"
 
 [[deps.libpng_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Zlib_jll"]
-git-tree-sha1 = "93284c28274d9e75218a416c65ec49d0e0fcdf3d"
+git-tree-sha1 = "d7015d2e18a5fd9a4f47de711837e980519781a4"
 uuid = "b53b4c65-9356-5827-b1ea-8c7a1a84506f"
-version = "1.6.40+0"
+version = "1.6.43+1"
 
 [[deps.libvorbis_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Ogg_jll", "Pkg"]
diff --git a/dev/examples/kernel-ridge-regression/index.html b/dev/examples/kernel-ridge-regression/index.html
index c3f6272a2..4d2e26915 100644
--- a/dev/examples/kernel-ridge-regression/index.html
+++ b/dev/examples/kernel-ridge-regression/index.html
@@ -22,75 +22,75 @@
 scatter!(x_train, y_train; seriescolor=1, label="observations")
[figure: scatter plot of the training observations; SVG diff hunks omitted]

Linear regression

For training inputs $\mathrm{X}=(\mathbf{x}_n)_{n=1}^N$ and observations $\mathbf{y}=(y_n)_{n=1}^N$, the linear regression weights $\mathbf{w}$ using the least-squares estimator are given by

\[\mathbf{w} = (\mathrm{X}^\top \mathrm{X})^{-1} \mathrm{X}^\top \mathbf{y}\]

We predict at test inputs $\mathbf{x}_*$ using

\[\hat{y}_* = \mathbf{x}_*^\top \mathbf{w}\]

This is implemented by linear_regression:

function linear_regression(X, y, Xstar)
     weights = (X' * X) \ (X' * y)
     return Xstar * weights
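
The hunk boundary cuts this listing off. As a hedged sketch, the completed function plus a toy invocation (the data below is hypothetical and not part of the original notebook) might look like:

function linear_regression(X, y, Xstar)
    # Solve the normal equations w = (XᵀX)⁻¹ Xᵀy with a backslash solve
    weights = (X' * X) \ (X' * y)
    # Predict at the test inputs: ŷ* = X* w
    return Xstar * weights
end

# Hypothetical toy check: recover the slope of a noiseless line
X = reshape(collect(-2.0:1.0:2.0), :, 1)
y = 3.0 .* vec(X)
linear_regression(X, y, reshape([0.5], 1, 1))   # ≈ [1.5]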
@@ -99,75 +99,75 @@ 

[figure: linear regression fit to the observations; SVG diff hunks omitted]

Featurization

We can improve the fit by including additional features, i.e. generalizing to $\tilde{\mathrm{X}} = (\phi(x_n))_{n=1}^N$, where $\phi(x)$ constructs a feature vector for each input $x$. Here we include powers of the input, $\phi(x) = (1, x, x^2, \dots, x^d)$:

function featurize_poly(x; degree=1)
     return repeat(x, 1, degree + 1) .^ (0:degree)'
 end
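
For a vector of inputs, each row of the returned matrix is $(1, x, x^2, \dots, x^d)$; for example (inputs chosen purely for illustration):

featurize_poly([1.0, 2.0, 3.0]; degree=2)
# 3×3 Matrix{Float64}:
#  1.0  1.0  1.0
#  1.0  2.0  4.0
#  1.0  3.0  9.0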
@@ -183,300 +183,300 @@ 

plot((featurized_fit_and_plot(degree) for degree in 1:4)...)

[figure: polynomial-feature fits for degrees 1 to 4; SVG diff hunks omitted]

Note that the fit becomes perfect when we include exactly as many orders in the features as we have in the underlying polynomial (4).

However, when increasing the number of features, we can quickly overfit to noise in the data set:

featurized_fit_and_plot(20)
[figure: degree-20 polynomial fit overfitting the noisy data; SVG diff hunks omitted]

Ridge regression

To counteract this unwanted behaviour, we can introduce regularization. This leads to ridge regression with $L_2$ regularization of the weights (Tikhonov regularization). Instead of the weights in linear regression,

\[\mathbf{w} = (\mathrm{X}^\top \mathrm{X})^{-1} \mathrm{X}^\top \mathbf{y}\]

we introduce the ridge parameter $\lambda$:

\[\mathbf{w} = (\mathrm{X}^\top \mathrm{X} + \lambda \mathbb{1})^{-1} \mathrm{X}^\top \mathbf{y}\]

As before, we predict at test inputs $\mathbf{x}_*$ using

\[\hat{y}_* = \mathbf{x}_*^\top \mathbf{w}\]

This is implemented by ridge_regression:

function ridge_regression(X, y, Xstar, lambda)
     weights = (X' * X + lambda * I) \ (X' * y)
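
The hunk also truncates this listing; by analogy with linear_regression above, the prediction step presumably follows. A sketch, assuming the same calling convention:

using LinearAlgebra  # provides the uniform scaling I

function ridge_regression(X, y, Xstar, lambda)
    # w = (XᵀX + λ𝟙)⁻¹ Xᵀy; the λ term penalizes large weights
    weights = (X' * X + lambda * I) \ (X' * y)
    # Prediction is unchanged from plain linear regression: ŷ* = X* w
    return Xstar * weights
end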
@@ -494,232 +494,232 @@ 

[figure: ridge-regression fits; SVG diff hunks omitted]

Kernel ridge regression

Instead of constructing the feature matrix explicitly, we can use kernels to replace inner products of feature vectors with a kernel evaluation: $\langle \phi(x), \phi(x') \rangle = k(x, x')$ or $\tilde{\mathrm{X}} \tilde{\mathrm{X}}^\top = \mathrm{K}$, where $\mathrm{K}_{ij} = k(x_i, x_j)$.

To apply this "kernel trick" to ridge regression, we can rewrite the ridge estimate for the weights

\[\mathbf{w} = (\mathrm{X}^\top \mathrm{X} + \lambda \mathbb{1})^{-1} \mathrm{X}^\top \mathbf{y}\]

using the matrix inversion lemma as

\[\mathbf{w} = \mathrm{X}^\top (\mathrm{X} \mathrm{X}^\top + \lambda \mathbb{1})^{-1} \mathbf{y}\]

where we can now replace the inner product with the kernel matrix,

\[\mathbf{w} = \mathrm{X}^\top (\mathrm{K} + \lambda \mathbb{1})^{-1} \mathbf{y}\]

And the prediction yields another inner product,

\[\hat{y}_* = \mathbf{x}_*^\top \mathbf{w} = \langle \mathbf{x}_*, \mathbf{w} \rangle = \mathbf{k}_* (\mathrm{K} + \lambda \mathbb{1})^{-1} \mathbf{y}\]

where $(\mathbf{k}_*)_n = k(x_*, x_n)$.

This is implemented by kernel_ridge_regression:

function kernel_ridge_regression(k, X, y, Xstar, lambda)
     K = kernelmatrix(k, X)
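
This listing is likewise cut short by the hunk. Following $\hat{y}_* = \mathbf{k}_* (\mathrm{K} + \lambda \mathbb{1})^{-1} \mathbf{y}$, the remainder plausibly builds the cross-kernel matrix and solves; a sketch in which the variable names beyond K are guesses:

using KernelFunctions, LinearAlgebra

function kernel_ridge_regression(k, X, y, Xstar, lambda)
    K = kernelmatrix(k, X)
    # k*: kernel evaluations between test and training inputs
    kstar = kernelmatrix(k, Xstar, X)
    # ŷ* = k* (K + λ𝟙)⁻¹ y, via a linear solve rather than an explicit inverse
    return kstar * ((K + lambda * I) \ y)
end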
@@ -739,300 +739,300 @@ 

[figure: kernel ridge regression fits; SVG diff hunks omitted]

However, we can now also use kernels that would have an infinite-dimensional feature expansion, such as the squared exponential kernel:

kernelized_fit_and_plot(SqExponentialKernel())
[figure: fit with the squared exponential kernel; SVG diff hunks omitted]
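
kernelized_fit_and_plot itself is defined earlier in the notebook and does not appear in this diff; a hypothetical reconstruction, in which the helper's signature, the data globals, and the λ default are all assumptions, could be:

using KernelFunctions, Plots

function kernelized_fit_and_plot(kernel; lambda=1e-4)
    # Assumes x_train, y_train, x_test are in scope from earlier cells
    y_pred = kernel_ridge_regression(kernel, x_train, y_train, x_test, lambda)
    scatter(x_train, y_train; label="observations")
    return plot!(x_test, y_pred; label="fit", title=string(nameof(typeof(kernel))))
end

kernelized_fit_and_plot(SqExponentialKernel())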
Package and system information
@@ -1043,7 +1043,7 @@
Package and system information
 [31c24e10] Distributions v0.25.107
 [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`
 [98b081ad] Literate v2.16.1
-[91a5bcdd] Plots v1.40.1
+[91a5bcdd] Plots v1.40.2
 [37e2e46d] LinearAlgebra

To reproduce this notebook's package environment, you can
@@ -1053,8 +1053,8 @@
Package and system information
System information (click to expand)
-Julia Version 1.10.0
-Commit 3120989f39b (2023-12-25 18:01 UTC)
+Julia Version 1.10.2
+Commit bd47eca2c8a (2024-03-01 10:14 UTC)
 Build Info:
   Official https://julialang.org/ release
 Platform Info:
@@ -1063,9 +1063,9 @@ 
Package and system information
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
- Threads: 1 on 4 virtual cores
+Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
Environment:
  JULIA_DEBUG = Documenter
  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src
-

This page was generated using Literate.jl.

+


This page was generated using Literate.jl.

diff --git a/dev/examples/kernel-ridge-regression/notebook.ipynb b/dev/examples/kernel-ridge-regression/notebook.ipynb
index 5dbcffc4a..e886fb473 100644
--- a/dev/examples/kernel-ridge-regression/notebook.ipynb
+++ b/dev/examples/kernel-ridge-regression/notebook.ipynb
[figure hunks omitted: regenerated SVG plot outputs with no recoverable text]
- "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n" ], "image/svg+xml": [ "\n", "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - 
"\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n" ] }, @@ -2177,140 +2177,140 @@ "\n", "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n" ], "image/svg+xml": [ "\n", "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", + "\n", "\n", - " \n", + " \n", " \n", " \n", "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - 
"\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", - "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", "\n" ] }, @@ -2337,7 +2337,7 @@ " [31c24e10] Distributions v0.25.107\n", " [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`\n", " [98b081ad] Literate v2.16.1\n", - " [91a5bcdd] Plots v1.40.1\n", + " [91a5bcdd] Plots v1.40.2\n", " [37e2e46d] LinearAlgebra\n", "\n", "To reproduce this notebook's package environment, you can\n", @@ -2347,8 +2347,8 @@ "
\n", "System information (click to expand)\n", "
\n",
-    "Julia Version 1.10.0\n",
-    "Commit 3120989f39b (2023-12-25 18:01 UTC)\n",
+    "Julia Version 1.10.2\n",
+    "Commit bd47eca2c8a (2024-03-01 10:14 UTC)\n",
     "Build Info:\n",
     "  Official https://julialang.org/ release\n",
     "Platform Info:\n",
@@ -2357,7 +2357,7 @@
     "  WORD_SIZE: 64\n",
     "  LIBM: libopenlibm\n",
     "  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\n",
-    "  Threads: 1 on 4 virtual cores\n",
+    "Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)\n",
     "Environment:\n",
     "  JULIA_DEBUG = Documenter\n",
     "  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src\n",
@@ -2382,11 +2382,11 @@
    "file_extension": ".jl",
    "mimetype": "application/julia",
    "name": "julia",
-   "version": "1.10.0"
+   "version": "1.10.2"
   },
   "kernelspec": {
    "name": "julia-1.10",
-   "display_name": "Julia 1.10.0",
+   "display_name": "Julia 1.10.2",
    "language": "julia"
   }
  },
diff --git a/dev/examples/support-vector-machine/Manifest.toml b/dev/examples/support-vector-machine/Manifest.toml
index ee878e5da..1c0977de2 100644
--- a/dev/examples/support-vector-machine/Manifest.toml
+++ b/dev/examples/support-vector-machine/Manifest.toml
@@ -1,6 +1,6 @@
 # This file is machine-generated - editing it directly is not advised
 
-julia_version = "1.10.0"
+julia_version = "1.10.2"
 manifest_format = "2.0"
 project_hash = "d0b26683c92b747389c4e23a24f873fcb30a1126"
 
@@ -39,9 +39,9 @@ version = "0.5.1"
 
 [[deps.ChainRulesCore]]
 deps = ["Compat", "LinearAlgebra"]
-git-tree-sha1 = "ad25e7d21ce10e01de973cdc68ad0f850a953c52"
+git-tree-sha1 = "575cd02e080939a33b6df6c5853d14924c08e35b"
 uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
-version = "1.21.1"
+version = "1.23.0"
 weakdeps = ["SparseArrays"]
 
     [deps.ChainRulesCore.extensions]
@@ -83,9 +83,9 @@ version = "0.12.10"
 
 [[deps.Compat]]
 deps = ["TOML", "UUIDs"]
-git-tree-sha1 = "75bd5b6fc5089df449b5d35fa501c846c9b6549b"
+git-tree-sha1 = "c955881e3c981181362ae4088b35995446298b80"
 uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
-version = "4.12.0"
+version = "4.14.0"
 weakdeps = ["Dates", "LinearAlgebra"]
 
     [deps.Compat.extensions]
@@ -94,7 +94,7 @@ weakdeps = ["Dates", "LinearAlgebra"]
 [[deps.CompilerSupportLibraries_jll]]
 deps = ["Artifacts", "Libdl"]
 uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
-version = "1.0.5+1"
+version = "1.1.0+0"
 
 [[deps.CompositionsBase]]
 git-tree-sha1 = "802bb88cd69dfd1509f6670416bd4434015693ad"
@@ -109,9 +109,9 @@ version = "0.1.2"
 
 [[deps.ConcurrentUtilities]]
 deps = ["Serialization", "Sockets"]
-git-tree-sha1 = "8cfa272e8bdedfa88b6aefbbca7c19f1befac519"
+git-tree-sha1 = "9c4708e3ed2b799e6124b5673a712dda0b596a9b"
 uuid = "f0e56b4a-5159-44fe-b623-3e5288b988bb"
-version = "2.3.0"
+version = "2.3.1"
 
 [[deps.Contour]]
 git-tree-sha1 = "d05d9e7b7aedff4e5b51a029dced05cfb6125781"
@@ -125,9 +125,9 @@ version = "1.16.0"
 
 [[deps.DataStructures]]
 deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
-git-tree-sha1 = "ac67408d9ddf207de5cfa9a97e114352430f01ed"
+git-tree-sha1 = "0f4b5d62a88d8f59003e43c25a8a90de9eb76317"
 uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
-version = "0.18.16"
+version = "0.18.18"
 
 [[deps.Dates]]
 deps = ["Printf"]
@@ -240,11 +240,10 @@ git-tree-sha1 = "21efd19106a55620a188615da6d3d06cd7f6ee03"
 uuid = "a3f928ae-7b40-5064-980b-68af3947d34b"
 version = "2.13.93+0"
 
-[[deps.Formatting]]
-deps = ["Printf"]
-git-tree-sha1 = "8339d61043228fdd3eb658d86c926cb282ae72a8"
-uuid = "59287772-0a20-5a39-b81b-1366585eb4c0"
-version = "0.4.2"
+[[deps.Format]]
+git-tree-sha1 = "f3cf88025f6d03c194d73f5d13fee9004a108329"
+uuid = "1fa38f19-a742-5d3f-a2b9-30dd87b9d5f8"
+version = "1.3.6"
 
 [[deps.FreeType2_jll]]
 deps = ["Artifacts", "Bzip2_jll", "JLLWrappers", "Libdl", "Zlib_jll"]
@@ -272,15 +271,15 @@ version = "3.3.9+0"
 
 [[deps.GR]]
 deps = ["Artifacts", "Base64", "DelimitedFiles", "Downloads", "GR_jll", "HTTP", "JSON", "Libdl", "LinearAlgebra", "Pkg", "Preferences", "Printf", "Random", "Serialization", "Sockets", "TOML", "Tar", "Test", "UUIDs", "p7zip_jll"]
-git-tree-sha1 = "3458564589be207fa6a77dbbf8b97674c9836aab"
+git-tree-sha1 = "3437ade7073682993e092ca570ad68a2aba26983"
 uuid = "28b8d3ca-fb5f-59d9-8090-bfdbd6d07a71"
-version = "0.73.2"
+version = "0.73.3"
 
 [[deps.GR_jll]]
 deps = ["Artifacts", "Bzip2_jll", "Cairo_jll", "FFMPEG_jll", "Fontconfig_jll", "FreeType2_jll", "GLFW_jll", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Libtiff_jll", "Pixman_jll", "Qt6Base_jll", "Zlib_jll", "libpng_jll"]
-git-tree-sha1 = "77f81da2964cc9fa7c0127f941e8bce37f7f1d70"
+git-tree-sha1 = "a96d5c713e6aa28c242b0d25c1347e258d6541ab"
 uuid = "d2c73de3-f751-5644-a686-071e5b155ba9"
-version = "0.73.2+0"
+version = "0.73.3+0"
 
 [[deps.Gettext_jll]]
 deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"]
@@ -307,9 +306,9 @@ version = "1.0.2"
 
 [[deps.HTTP]]
 deps = ["Base64", "CodecZlib", "ConcurrentUtilities", "Dates", "ExceptionUnwrapping", "Logging", "LoggingExtras", "MbedTLS", "NetworkOptions", "OpenSSL", "Random", "SimpleBufferStream", "Sockets", "URIs", "UUIDs"]
-git-tree-sha1 = "abbbb9ec3afd783a7cbd82ef01dcd088ea051398"
+git-tree-sha1 = "db864f2d91f68a5912937af80327d288ea1f3aee"
 uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3"
-version = "1.10.1"
+version = "1.10.3"
 
 [[deps.HarfBuzz_jll]]
 deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "Graphite2_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg"]
@@ -358,13 +357,13 @@ version = "0.21.4"
 
 [[deps.JpegTurbo_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
-git-tree-sha1 = "60b1194df0a3298f460063de985eae7b01bc011a"
+git-tree-sha1 = "3336abae9a713d2210bb57ab484b1e065edd7d23"
 uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8"
-version = "3.0.1+0"
+version = "3.0.2+0"
 
 [[deps.KernelFunctions]]
 deps = ["ChainRulesCore", "Compat", "CompositionsBase", "Distances", "FillArrays", "Functors", "IrrationalConstants", "LinearAlgebra", "LogExpFunctions", "Random", "Requires", "SpecialFunctions", "Statistics", "StatsBase", "TensorCore", "Test", "ZygoteRules"]
-git-tree-sha1 = "654dead9bd3313311b61087ab7a0cf2b7a935cb1"
+git-tree-sha1 = "9d578aa8af14cd36ba744bb46c53925bca7a65f6"
 repo-rev = "master"
 repo-url = "/home/runner/work/KernelFunctions.jl/KernelFunctions.jl"
 uuid = "ec8451be-7e33-11e9-00cf-bbf324bd1392"
@@ -412,10 +411,10 @@ uuid = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
 version = "1.3.1"
 
 [[deps.Latexify]]
-deps = ["Formatting", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "OrderedCollections", "Printf", "Requires"]
-git-tree-sha1 = "f428ae552340899a935973270b8d98e5a31c49fe"
+deps = ["Format", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "OrderedCollections", "Requires"]
+git-tree-sha1 = "cad560042a7cc108f5a4c24ea1431a9221f22c1b"
 uuid = "23fbe1c1-3f47-55db-b15f-69d7ec21a316"
-version = "0.16.1"
+version = "0.16.2"
 
     [deps.Latexify.extensions]
     DataFramesExt = "DataFrames"
@@ -495,10 +494,10 @@ uuid = "89763e89-9b03-5906-acba-b20f662cd828"
 version = "4.5.1+1"
 
 [[deps.Libuuid_jll]]
-deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
-git-tree-sha1 = "7f3efec06033682db852f8b3bc3c1d2b0a0ab066"
+deps = ["Artifacts", "JLLWrappers", "Libdl"]
+git-tree-sha1 = "0a04a1318df1bf510beb2562cf90fb0c386f58c4"
 uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700"
-version = "2.36.0+0"
+version = "2.39.3+1"
 
 [[deps.LinearAlgebra]]
 deps = ["Libdl", "OpenBLAS_jll", "libblastrampoline_jll"]
@@ -512,9 +511,9 @@ version = "2.16.1"
 
 [[deps.LogExpFunctions]]
 deps = ["DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
-git-tree-sha1 = "7d6dd4e9212aebaeed356de34ccf262a3cd415aa"
+git-tree-sha1 = "18144f3e9cbe9b15b070288eef858f71b291ce37"
 uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
-version = "0.3.26"
+version = "0.3.27"
 
     [deps.LogExpFunctions.extensions]
     LogExpFunctionsChainRulesCoreExt = "ChainRulesCore"
@@ -593,7 +592,7 @@ version = "1.3.5+1"
 [[deps.OpenBLAS_jll]]
 deps = ["Artifacts", "CompilerSupportLibraries_jll", "Libdl"]
 uuid = "4536629a-c528-5b80-bd46-f80d51c5b363"
-version = "0.3.23+2"
+version = "0.3.23+4"
 
 [[deps.OpenLibm_jll]]
 deps = ["Artifacts", "Libdl"]
@@ -602,9 +601,9 @@ version = "0.8.1+2"
 
 [[deps.OpenSSL]]
 deps = ["BitFlags", "Dates", "MozillaCACerts_jll", "OpenSSL_jll", "Sockets"]
-git-tree-sha1 = "51901a49222b09e3743c65b8847687ae5fc78eb2"
+git-tree-sha1 = "af81a32750ebc831ee28bdaaba6e1067decef51e"
 uuid = "4d8831e6-92b7-49fb-bdf8-b643e874388c"
-version = "1.4.1"
+version = "1.4.2"
 
 [[deps.OpenSSL_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
@@ -670,15 +669,15 @@ version = "3.1.0"
 
 [[deps.PlotUtils]]
 deps = ["ColorSchemes", "Colors", "Dates", "PrecompileTools", "Printf", "Random", "Reexport", "Statistics"]
-git-tree-sha1 = "862942baf5663da528f66d24996eb6da85218e76"
+git-tree-sha1 = "7b1a9df27f072ac4c9c7cbe5efb198489258d1f5"
 uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043"
-version = "1.4.0"
+version = "1.4.1"
 
 [[deps.Plots]]
 deps = ["Base64", "Contour", "Dates", "Downloads", "FFMPEG", "FixedPointNumbers", "GR", "JLFzf", "JSON", "LaTeXStrings", "Latexify", "LinearAlgebra", "Measures", "NaNMath", "Pkg", "PlotThemes", "PlotUtils", "PrecompileTools", "Printf", "REPL", "Random", "RecipesBase", "RecipesPipeline", "Reexport", "RelocatableFolders", "Requires", "Scratch", "Showoff", "SparseArrays", "Statistics", "StatsBase", "UUIDs", "UnicodeFun", "UnitfulLatexify", "Unzip"]
-git-tree-sha1 = "c4fa93d7d66acad8f6f4ff439576da9d2e890ee0"
+git-tree-sha1 = "3c403c6590dd93b36752634115e20137e79ab4df"
 uuid = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
-version = "1.40.1"
+version = "1.40.2"
 
     [deps.Plots.extensions]
     FileIOExt = "FileIO"
@@ -696,15 +695,15 @@ version = "1.40.1"
 
 [[deps.PrecompileTools]]
 deps = ["Preferences"]
-git-tree-sha1 = "03b4c25b43cb84cee5c90aa9b5ea0a78fd848d2f"
+git-tree-sha1 = "5aa36f7049a63a1528fe8f7c3f2113413ffd4e1f"
 uuid = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
-version = "1.2.0"
+version = "1.2.1"
 
 [[deps.Preferences]]
 deps = ["TOML"]
-git-tree-sha1 = "00805cd429dcb4870060ff49ef443486c262e38e"
+git-tree-sha1 = "9306f6085165d270f7e3db02af26a400d580f5c6"
 uuid = "21216c6a-2e73-6563-6e65-726566657250"
-version = "1.4.1"
+version = "1.4.3"
 
 [[deps.Printf]]
 deps = ["Unicode"]
@@ -844,9 +843,9 @@ version = "0.34.2"
 
 [[deps.StatsFuns]]
 deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"]
-git-tree-sha1 = "f625d686d5a88bcd2b15cd81f18f98186fdc0c9a"
+git-tree-sha1 = "cef0472124fab0695b58ca35a77c6fb942fdab8a"
 uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c"
-version = "1.3.0"
+version = "1.3.1"
 
     [deps.StatsFuns.extensions]
     StatsFunsChainRulesCoreExt = "ChainRulesCore"
@@ -886,9 +885,9 @@ deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
 uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
 
 [[deps.TranscodingStreams]]
-git-tree-sha1 = "54194d92959d8ebaa8e26227dbe3cdefcdcd594f"
+git-tree-sha1 = "3caa21522e7efac1ba21834a03734c57b4611c7e"
 uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa"
-version = "0.10.3"
+version = "0.10.4"
 weakdeps = ["Random", "Test"]
 
     [deps.TranscodingStreams.extensions]
@@ -957,9 +956,9 @@ version = "1.31.0+0"
 
 [[deps.XML2_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Zlib_jll"]
-git-tree-sha1 = "801cbe47eae69adc50f36c3caec4758d2650741b"
+git-tree-sha1 = "07e470dabc5a6a4254ffebc29a1b3fc01464e105"
 uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a"
-version = "2.12.2+0"
+version = "2.12.5+0"
 
 [[deps.XSLT_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"]
@@ -969,9 +968,9 @@ version = "1.1.34+0"
 
 [[deps.XZ_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
-git-tree-sha1 = "522b8414d40c4cbbab8dee346ac3a09f9768f25d"
+git-tree-sha1 = "37195dcb94a5970397ad425b95a9a26d0befce3a"
 uuid = "ffd25f8a-64ca-5728-b0f7-c24cf3aae800"
-version = "5.4.5+0"
+version = "5.6.0+0"
 
 [[deps.Xorg_libICE_jll]]
 deps = ["Libdl", "Pkg"]
@@ -1195,9 +1194,9 @@ version = "2.47.0+0"
 
 [[deps.libpng_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Zlib_jll"]
-git-tree-sha1 = "93284c28274d9e75218a416c65ec49d0e0fcdf3d"
+git-tree-sha1 = "d7015d2e18a5fd9a4f47de711837e980519781a4"
 uuid = "b53b4c65-9356-5827-b1ea-8c7a1a84506f"
-version = "1.6.40+0"
+version = "1.6.43+1"
 
 [[deps.libsvm_jll]]
 deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "LLVMOpenMP_jll", "Libdl", "Pkg"]
diff --git a/dev/examples/support-vector-machine/index.html b/dev/examples/support-vector-machine/index.html
index 242184232..eed0f3040 100644
--- a/dev/examples/support-vector-machine/index.html
+++ b/dev/examples/support-vector-machine/index.html
@@ -29,161 +29,161 @@
 scatter!(X2[:, 1], X2[:, 2]; color=:blue, label="training data: class 1")
[SVG diff noise elided: the scatter plot of the two training classes above was re-rendered with new auto-generated element IDs; the figure itself is unchanged.]
Package and system information
@@ -194,7 +194,7 @@
Package and system information
 [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`
 [b1bec4e5] LIBSVM v0.8.0
 [98b081ad] Literate v2.16.1
-[91a5bcdd] Plots v1.40.1
+[91a5bcdd] Plots v1.40.2
 [37e2e46d] LinearAlgebra

To reproduce this notebook's package environment, you can
@@ -204,8 +204,8 @@
Package and system information
System information (click to expand)
-Julia Version 1.10.0
-Commit 3120989f39b (2023-12-25 18:01 UTC)
+Julia Version 1.10.2
+Commit bd47eca2c8a (2024-03-01 10:14 UTC)
 Build Info:
   Official https://julialang.org/ release
 Platform Info:
@@ -214,9 +214,9 @@ 
Package and system information
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
- Threads: 1 on 4 virtual cores
+Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
Environment:
  JULIA_DEBUG = Documenter
  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src
-

This page was generated using Literate.jl.

+

This page was generated using Literate.jl.

diff --git a/dev/examples/support-vector-machine/notebook.ipynb b/dev/examples/support-vector-machine/notebook.ipynb
index ec3986948..a61c8779e 100644
--- a/dev/examples/support-vector-machine/notebook.ipynb
+++ b/dev/examples/support-vector-machine/notebook.ipynb
@@ -190,321 +190,321 @@
[SVG plot-output diff noise elided: the notebook's inline figures were regenerated with new auto-generated element IDs only.]
@@ -542,7 +542,7 @@
     " [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`\n",
     " [b1bec4e5] LIBSVM v0.8.0\n",
     " [98b081ad] Literate v2.16.1\n",
-    " [91a5bcdd] Plots v1.40.1\n",
+    " [91a5bcdd] Plots v1.40.2\n",
     " [37e2e46d] LinearAlgebra\n",
     "\n",
     "To reproduce this notebook's package environment, you can\n",
@@ -552,8 +552,8 @@
     "\n",
     "System information (click to expand)\n",
-    "Julia Version 1.10.0\n",
-    "Commit 3120989f39b (2023-12-25 18:01 UTC)\n",
+    "Julia Version 1.10.2\n",
+    "Commit bd47eca2c8a (2024-03-01 10:14 UTC)\n",
     "Build Info:\n",
     "  Official https://julialang.org/ release\n",
     "Platform Info:\n",
@@ -562,7 +562,7 @@
     "  WORD_SIZE: 64\n",
     "  LIBM: libopenlibm\n",
     "  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\n",
-    "  Threads: 1 on 4 virtual cores\n",
+    "Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)\n",
     "Environment:\n",
     "  JULIA_DEBUG = Documenter\n",
     "  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src\n",
@@ -587,11 +587,11 @@
    "file_extension": ".jl",
    "mimetype": "application/julia",
    "name": "julia",
-   "version": "1.10.0"
+   "version": "1.10.2"
   },
   "kernelspec": {
    "name": "julia-1.10",
-   "display_name": "Julia 1.10.0",
+   "display_name": "Julia 1.10.2",
    "language": "julia"
   }
  },
diff --git a/dev/examples/train-kernel-parameters/Manifest.toml b/dev/examples/train-kernel-parameters/Manifest.toml
index 30655e796..e5eaa4681 100644
--- a/dev/examples/train-kernel-parameters/Manifest.toml
+++ b/dev/examples/train-kernel-parameters/Manifest.toml
@@ -1,6 +1,6 @@
 # This file is machine-generated - editing it directly is not advised
 
-julia_version = "1.10.0"
+julia_version = "1.10.2"
 manifest_format = "2.0"
 project_hash = "f3af2d5178fe96f25b295696ea2c040e29f49bbf"
 
@@ -17,9 +17,9 @@ weakdeps = ["ChainRulesCore", "Test"]
 
 [[deps.Adapt]]
 deps = ["LinearAlgebra", "Requires"]
-git-tree-sha1 = "0fb305e0253fd4e833d486914367a2ee2c2e78d0"
+git-tree-sha1 = "cea4ac3f5b4bc4b3000aa55afb6e5626518948fa"
 uuid = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
-version = "4.0.1"
+version = "4.0.3"
 weakdeps = ["StaticArrays"]
 
     [deps.Adapt.extensions]
@@ -73,9 +73,9 @@ version = "0.1.1"
 
 [[deps.BenchmarkTools]]
 deps = ["JSON", "Logging", "Printf", "Profile", "Statistics", "UUIDs"]
-git-tree-sha1 = "f1f03a9fa24271160ed7e73051fba3c1a759b53f"
+git-tree-sha1 = "f1dff6729bc61f4d49e140da1af55dcd1ac97b2f"
 uuid = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
-version = "1.4.0"
+version = "1.5.0"
 
 [[deps.BitFlags]]
 git-tree-sha1 = "2dc09997850d68179b69dafb58ae806167a32b1b"
@@ -107,15 +107,15 @@ version = "0.5.1"
 
 [[deps.ChainRules]]
 deps = ["Adapt", "ChainRulesCore", "Compat", "Distributed", "GPUArraysCore", "IrrationalConstants", "LinearAlgebra", "Random", "RealDot", "SparseArrays", "SparseInverseSubset", "Statistics", "StructArrays", "SuiteSparse"]
-git-tree-sha1 = "213f001d1233fd3b8ef007f50c8cab29061917d8"
+git-tree-sha1 = "4e42872be98fa3343c4f8458cbda8c5c6a6fa97c"
 uuid = "082447d4-558c-5d27-93f4-14fc19e9eca2"
-version = "1.61.0"
+version = "1.63.0"
 
 [[deps.ChainRulesCore]]
 deps = ["Compat", "LinearAlgebra"]
-git-tree-sha1 = "ad25e7d21ce10e01de973cdc68ad0f850a953c52"
+git-tree-sha1 = "575cd02e080939a33b6df6c5853d14924c08e35b"
 uuid = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
-version = "1.21.1"
+version = "1.23.0"
 weakdeps = ["SparseArrays"]
 
     [deps.ChainRulesCore.extensions]
@@ -163,9 +163,9 @@ version = "0.3.0"
 
 [[deps.Compat]]
 deps = ["TOML", "UUIDs"]
-git-tree-sha1 = "75bd5b6fc5089df449b5d35fa501c846c9b6549b"
+git-tree-sha1 = "c955881e3c981181362ae4088b35995446298b80"
 uuid = "34da2185-b29b-5c13-b0c7-acf172513d20"
-version = "4.12.0"
+version = "4.14.0"
 weakdeps = ["Dates", "LinearAlgebra"]
 
     [deps.Compat.extensions]
@@ -174,7 +174,7 @@ weakdeps = ["Dates", "LinearAlgebra"]
 [[deps.CompilerSupportLibraries_jll]]
 deps = ["Artifacts", "Libdl"]
 uuid = "e66e0078-7015-5450-92f7-15fbd957f2ae"
-version = "1.0.5+1"
+version = "1.1.0+0"
 
 [[deps.CompositionsBase]]
 git-tree-sha1 = "802bb88cd69dfd1509f6670416bd4434015693ad"
@@ -187,9 +187,9 @@ weakdeps = ["InverseFunctions"]
 
 [[deps.ConcurrentUtilities]]
 deps = ["Serialization", "Sockets"]
-git-tree-sha1 = "8cfa272e8bdedfa88b6aefbbca7c19f1befac519"
+git-tree-sha1 = "9c4708e3ed2b799e6124b5673a712dda0b596a9b"
 uuid = "f0e56b4a-5159-44fe-b623-3e5288b988bb"
-version = "2.3.0"
+version = "2.3.1"
 
 [[deps.ConstructionBase]]
 deps = ["LinearAlgebra"]
@@ -223,9 +223,9 @@ version = "1.16.0"
 
 [[deps.DataStructures]]
 deps = ["Compat", "InteractiveUtils", "OrderedCollections"]
-git-tree-sha1 = "ac67408d9ddf207de5cfa9a97e114352430f01ed"
+git-tree-sha1 = "0f4b5d62a88d8f59003e43c25a8a90de9eb76317"
 uuid = "864edb3b-99cc-5e75-8d2d-829cb0a9cfe8"
-version = "0.18.16"
+version = "0.18.18"
 
 [[deps.DataValueInterfaces]]
 git-tree-sha1 = "bfc1187b79289637fa0ef6d4436ebdfe6905cbd6"
@@ -372,9 +372,9 @@ version = "0.8.4"
 
 [[deps.Flux]]
 deps = ["Adapt", "ChainRulesCore", "Compat", "Functors", "LinearAlgebra", "MLUtils", "MacroTools", "NNlib", "OneHotArrays", "Optimisers", "Preferences", "ProgressLogging", "Random", "Reexport", "SparseArrays", "SpecialFunctions", "Statistics", "Zygote"]
-git-tree-sha1 = "39a9e46b4e92d5b56c0712adeb507555a2327240"
+git-tree-sha1 = "5a626d6ef24ae0a8590c22dc12096fb65eb66325"
 uuid = "587475ba-b771-5e3f-ad9e-33799f191a9c"
-version = "0.14.11"
+version = "0.14.13"
 
     [deps.Flux.extensions]
     FluxAMDGPUExt = "AMDGPU"
@@ -394,11 +394,10 @@ git-tree-sha1 = "21efd19106a55620a188615da6d3d06cd7f6ee03"
 uuid = "a3f928ae-7b40-5064-980b-68af3947d34b"
 version = "2.13.93+0"
 
-[[deps.Formatting]]
-deps = ["Printf"]
-git-tree-sha1 = "8339d61043228fdd3eb658d86c926cb282ae72a8"
-uuid = "59287772-0a20-5a39-b81b-1366585eb4c0"
-version = "0.4.2"
+[[deps.Format]]
+git-tree-sha1 = "f3cf88025f6d03c194d73f5d13fee9004a108329"
+uuid = "1fa38f19-a742-5d3f-a2b9-30dd87b9d5f8"
+version = "1.3.6"
 
 [[deps.ForwardDiff]]
 deps = ["CommonSubexpressions", "DiffResults", "DiffRules", "LinearAlgebra", "LogExpFunctions", "NaNMath", "Preferences", "Printf", "Random", "SpecialFunctions"]
@@ -452,15 +451,15 @@ version = "0.1.6"
 
 [[deps.GR]]
 deps = ["Artifacts", "Base64", "DelimitedFiles", "Downloads", "GR_jll", "HTTP", "JSON", "Libdl", "LinearAlgebra", "Pkg", "Preferences", "Printf", "Random", "Serialization", "Sockets", "TOML", "Tar", "Test", "UUIDs", "p7zip_jll"]
-git-tree-sha1 = "3458564589be207fa6a77dbbf8b97674c9836aab"
+git-tree-sha1 = "3437ade7073682993e092ca570ad68a2aba26983"
 uuid = "28b8d3ca-fb5f-59d9-8090-bfdbd6d07a71"
-version = "0.73.2"
+version = "0.73.3"
 
 [[deps.GR_jll]]
 deps = ["Artifacts", "Bzip2_jll", "Cairo_jll", "FFMPEG_jll", "Fontconfig_jll", "FreeType2_jll", "GLFW_jll", "JLLWrappers", "JpegTurbo_jll", "Libdl", "Libtiff_jll", "Pixman_jll", "Qt6Base_jll", "Zlib_jll", "libpng_jll"]
-git-tree-sha1 = "77f81da2964cc9fa7c0127f941e8bce37f7f1d70"
+git-tree-sha1 = "a96d5c713e6aa28c242b0d25c1347e258d6541ab"
 uuid = "d2c73de3-f751-5644-a686-071e5b155ba9"
-version = "0.73.2+0"
+version = "0.73.3+0"
 
 [[deps.Gettext_jll]]
 deps = ["Artifacts", "CompilerSupportLibraries_jll", "JLLWrappers", "Libdl", "Libiconv_jll", "Pkg", "XML2_jll"]
@@ -487,9 +486,9 @@ version = "1.0.2"
 
 [[deps.HTTP]]
 deps = ["Base64", "CodecZlib", "ConcurrentUtilities", "Dates", "ExceptionUnwrapping", "Logging", "LoggingExtras", "MbedTLS", "NetworkOptions", "OpenSSL", "Random", "SimpleBufferStream", "Sockets", "URIs", "UUIDs"]
-git-tree-sha1 = "abbbb9ec3afd783a7cbd82ef01dcd088ea051398"
+git-tree-sha1 = "db864f2d91f68a5912937af80327d288ea1f3aee"
 uuid = "cd3eb016-35fb-5094-929b-558a96fad6f3"
-version = "1.10.1"
+version = "1.10.3"
 
 [[deps.HarfBuzz_jll]]
 deps = ["Artifacts", "Cairo_jll", "Fontconfig_jll", "FreeType2_jll", "Glib_jll", "Graphite2_jll", "JLLWrappers", "Libdl", "Libffi_jll", "Pkg"]
@@ -565,9 +564,9 @@ version = "0.21.4"
 
 [[deps.JpegTurbo_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
-git-tree-sha1 = "60b1194df0a3298f460063de985eae7b01bc011a"
+git-tree-sha1 = "3336abae9a713d2210bb57ab484b1e065edd7d23"
 uuid = "aacddb02-875f-59d6-b918-886e6ef4fbf8"
-version = "3.0.1+0"
+version = "3.0.2+0"
 
 [[deps.JuliaVariables]]
 deps = ["MLStyle", "NameResolution"]
@@ -577,9 +576,9 @@ version = "0.2.4"
 
 [[deps.KernelAbstractions]]
 deps = ["Adapt", "Atomix", "InteractiveUtils", "LinearAlgebra", "MacroTools", "PrecompileTools", "Requires", "SparseArrays", "StaticArrays", "UUIDs", "UnsafeAtomics", "UnsafeAtomicsLLVM"]
-git-tree-sha1 = "4e0cb2f5aad44dcfdc91088e85dee4ecb22c791c"
+git-tree-sha1 = "ed7167240f40e62d97c1f5f7735dea6de3cc5c49"
 uuid = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
-version = "0.9.16"
+version = "0.9.18"
 
     [deps.KernelAbstractions.extensions]
     EnzymeExt = "EnzymeCore"
@@ -589,7 +588,7 @@ version = "0.9.16"
 
 [[deps.KernelFunctions]]
 deps = ["ChainRulesCore", "Compat", "CompositionsBase", "Distances", "FillArrays", "Functors", "IrrationalConstants", "LinearAlgebra", "LogExpFunctions", "Random", "Requires", "SpecialFunctions", "Statistics", "StatsBase", "TensorCore", "Test", "ZygoteRules"]
-git-tree-sha1 = "654dead9bd3313311b61087ab7a0cf2b7a935cb1"
+git-tree-sha1 = "9d578aa8af14cd36ba744bb46c53925bca7a65f6"
 repo-rev = "master"
 repo-url = "/home/runner/work/KernelFunctions.jl/KernelFunctions.jl"
 uuid = "ec8451be-7e33-11e9-00cf-bbf324bd1392"
@@ -609,9 +608,9 @@ version = "3.0.0+1"
 
 [[deps.LLVM]]
 deps = ["CEnum", "LLVMExtra_jll", "Libdl", "Preferences", "Printf", "Requires", "Unicode"]
-git-tree-sha1 = "cb4619f7353fc62a1a22ffa3d7ed9791cfb47ad8"
+git-tree-sha1 = "ddab4d40513bce53c8e3157825e245224f74fae7"
 uuid = "929cbde3-209d-540e-8aea-75f648917ca0"
-version = "6.4.2"
+version = "6.6.0"
 
     [deps.LLVM.extensions]
     BFloat16sExt = "BFloat16s"
@@ -621,9 +620,9 @@ version = "6.4.2"
 
 [[deps.LLVMExtra_jll]]
 deps = ["Artifacts", "JLLWrappers", "LazyArtifacts", "Libdl", "TOML"]
-git-tree-sha1 = "98eaee04d96d973e79c25d49167668c5c8fb50e2"
+git-tree-sha1 = "88b916503aac4fb7f701bb625cd84ca5dd1677bc"
 uuid = "dad2f222-ce93-54a1-a47d-0025e8a3acab"
-version = "0.0.27+1"
+version = "0.0.29+0"
 
 [[deps.LLVMOpenMP_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
@@ -643,10 +642,10 @@ uuid = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
 version = "1.3.1"
 
 [[deps.Latexify]]
-deps = ["Formatting", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "OrderedCollections", "Printf", "Requires"]
-git-tree-sha1 = "f428ae552340899a935973270b8d98e5a31c49fe"
+deps = ["Format", "InteractiveUtils", "LaTeXStrings", "MacroTools", "Markdown", "OrderedCollections", "Requires"]
+git-tree-sha1 = "cad560042a7cc108f5a4c24ea1431a9221f22c1b"
 uuid = "23fbe1c1-3f47-55db-b15f-69d7ec21a316"
-version = "0.16.1"
+version = "0.16.2"
 
     [deps.Latexify.extensions]
     DataFramesExt = "DataFrames"
@@ -730,10 +729,10 @@ uuid = "89763e89-9b03-5906-acba-b20f662cd828"
 version = "4.5.1+1"
 
 [[deps.Libuuid_jll]]
-deps = ["Artifacts", "JLLWrappers", "Libdl", "Pkg"]
-git-tree-sha1 = "7f3efec06033682db852f8b3bc3c1d2b0a0ab066"
+deps = ["Artifacts", "JLLWrappers", "Libdl"]
+git-tree-sha1 = "0a04a1318df1bf510beb2562cf90fb0c386f58c4"
 uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700"
-version = "2.36.0+0"
+version = "2.39.3+1"
 
 [[deps.LinearAlgebra]]
 deps = ["Libdl", "OpenBLAS_jll", "libblastrampoline_jll"]
@@ -747,9 +746,9 @@ version = "2.16.1"
 
 [[deps.LogExpFunctions]]
 deps = ["DocStringExtensions", "IrrationalConstants", "LinearAlgebra"]
-git-tree-sha1 = "7d6dd4e9212aebaeed356de34ccf262a3cd415aa"
+git-tree-sha1 = "18144f3e9cbe9b15b070288eef858f71b291ce37"
 uuid = "2ab3a3ac-af41-5b50-aa03-7779005ae688"
-version = "0.3.26"
+version = "0.3.27"
 
     [deps.LogExpFunctions.extensions]
     LogExpFunctionsChainRulesCoreExt = "ChainRulesCore"
@@ -828,9 +827,9 @@ version = "2023.1.10"
 
 [[deps.NNlib]]
 deps = ["Adapt", "Atomix", "ChainRulesCore", "GPUArraysCore", "KernelAbstractions", "LinearAlgebra", "Pkg", "Random", "Requires", "Statistics"]
-git-tree-sha1 = "d2811b435d2f571bdfdfa644bb806a66b458e186"
+git-tree-sha1 = "877f15c331337d54cf24c797d5bcb2e48ce21221"
 uuid = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
-version = "0.9.11"
+version = "0.9.12"
 
     [deps.NNlib.extensions]
     NNlibAMDGPUExt = "AMDGPU"
@@ -875,7 +874,7 @@ version = "0.2.5"
 [[deps.OpenBLAS_jll]]
 deps = ["Artifacts", "CompilerSupportLibraries_jll", "Libdl"]
 uuid = "4536629a-c528-5b80-bd46-f80d51c5b363"
-version = "0.3.23+2"
+version = "0.3.23+4"
 
 [[deps.OpenLibm_jll]]
 deps = ["Artifacts", "Libdl"]
@@ -884,9 +883,9 @@ version = "0.8.1+2"
 
 [[deps.OpenSSL]]
 deps = ["BitFlags", "Dates", "MozillaCACerts_jll", "OpenSSL_jll", "Sockets"]
-git-tree-sha1 = "51901a49222b09e3743c65b8847687ae5fc78eb2"
+git-tree-sha1 = "af81a32750ebc831ee28bdaaba6e1067decef51e"
 uuid = "4d8831e6-92b7-49fb-bdf8-b643e874388c"
-version = "1.4.1"
+version = "1.4.2"
 
 [[deps.OpenSSL_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
@@ -964,15 +963,15 @@ version = "3.1.0"
 
 [[deps.PlotUtils]]
 deps = ["ColorSchemes", "Colors", "Dates", "PrecompileTools", "Printf", "Random", "Reexport", "Statistics"]
-git-tree-sha1 = "862942baf5663da528f66d24996eb6da85218e76"
+git-tree-sha1 = "7b1a9df27f072ac4c9c7cbe5efb198489258d1f5"
 uuid = "995b91a9-d308-5afd-9ec6-746e21dbc043"
-version = "1.4.0"
+version = "1.4.1"
 
 [[deps.Plots]]
 deps = ["Base64", "Contour", "Dates", "Downloads", "FFMPEG", "FixedPointNumbers", "GR", "JLFzf", "JSON", "LaTeXStrings", "Latexify", "LinearAlgebra", "Measures", "NaNMath", "Pkg", "PlotThemes", "PlotUtils", "PrecompileTools", "Printf", "REPL", "Random", "RecipesBase", "RecipesPipeline", "Reexport", "RelocatableFolders", "Requires", "Scratch", "Showoff", "SparseArrays", "Statistics", "StatsBase", "UUIDs", "UnicodeFun", "UnitfulLatexify", "Unzip"]
-git-tree-sha1 = "c4fa93d7d66acad8f6f4ff439576da9d2e890ee0"
+git-tree-sha1 = "3c403c6590dd93b36752634115e20137e79ab4df"
 uuid = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
-version = "1.40.1"
+version = "1.40.2"
 
     [deps.Plots.extensions]
     FileIOExt = "FileIO"
@@ -990,15 +989,15 @@ version = "1.40.1"
 
 [[deps.PrecompileTools]]
 deps = ["Preferences"]
-git-tree-sha1 = "03b4c25b43cb84cee5c90aa9b5ea0a78fd848d2f"
+git-tree-sha1 = "5aa36f7049a63a1528fe8f7c3f2113413ffd4e1f"
 uuid = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
-version = "1.2.0"
+version = "1.2.1"
 
 [[deps.Preferences]]
 deps = ["TOML"]
-git-tree-sha1 = "00805cd429dcb4870060ff49ef443486c262e38e"
+git-tree-sha1 = "9306f6085165d270f7e3db02af26a400d580f5c6"
 uuid = "21216c6a-2e73-6563-6e65-726566657250"
-version = "1.4.1"
+version = "1.4.3"
 
 [[deps.PrettyPrint]]
 git-tree-sha1 = "632eb4abab3449ab30c5e1afaa874f0b98b586e4"
@@ -1165,9 +1164,9 @@ version = "0.1.15"
 
 [[deps.StaticArrays]]
 deps = ["LinearAlgebra", "PrecompileTools", "Random", "StaticArraysCore"]
-git-tree-sha1 = "7b0e9c14c624e435076d19aea1e5cbdec2b9ca37"
+git-tree-sha1 = "bf074c045d3d5ffd956fa0a461da38a44685d6b2"
 uuid = "90137ffa-7385-5640-81b9-e52037218182"
-version = "1.9.2"
+version = "1.9.3"
 weakdeps = ["ChainRulesCore", "Statistics"]
 
     [deps.StaticArrays.extensions]
@@ -1198,9 +1197,9 @@ version = "0.34.2"
 
 [[deps.StatsFuns]]
 deps = ["HypergeometricFunctions", "IrrationalConstants", "LogExpFunctions", "Reexport", "Rmath", "SpecialFunctions"]
-git-tree-sha1 = "f625d686d5a88bcd2b15cd81f18f98186fdc0c9a"
+git-tree-sha1 = "cef0472124fab0695b58ca35a77c6fb942fdab8a"
 uuid = "4c63d2b9-4356-54db-8cca-17b64c39e42c"
-version = "1.3.0"
+version = "1.3.1"
 weakdeps = ["ChainRulesCore", "InverseFunctions"]
 
     [deps.StatsFuns.extensions]
@@ -1208,10 +1207,17 @@ weakdeps = ["ChainRulesCore", "InverseFunctions"]
     StatsFunsInverseFunctionsExt = "InverseFunctions"
 
 [[deps.StructArrays]]
-deps = ["Adapt", "ConstructionBase", "DataAPI", "GPUArraysCore", "StaticArraysCore", "Tables"]
-git-tree-sha1 = "1b0b1205a56dc288b71b1961d48e351520702e24"
+deps = ["ConstructionBase", "DataAPI", "Tables"]
+git-tree-sha1 = "f4dc295e983502292c4c3f951dbb4e985e35b3be"
 uuid = "09ab397b-f2b6-538f-b94a-2f83cf4a842a"
-version = "0.6.17"
+version = "0.6.18"
+weakdeps = ["Adapt", "GPUArraysCore", "SparseArrays", "StaticArrays"]
+
+    [deps.StructArrays.extensions]
+    StructArraysAdaptExt = "Adapt"
+    StructArraysGPUArraysCoreExt = "GPUArraysCore"
+    StructArraysSparseArraysExt = "SparseArrays"
+    StructArraysStaticArraysExt = "StaticArrays"
 
 [[deps.SuiteSparse]]
 deps = ["Libdl", "LinearAlgebra", "Serialization", "SparseArrays"]
@@ -1255,9 +1261,9 @@ deps = ["InteractiveUtils", "Logging", "Random", "Serialization"]
 uuid = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
 
 [[deps.TranscodingStreams]]
-git-tree-sha1 = "54194d92959d8ebaa8e26227dbe3cdefcdcd594f"
+git-tree-sha1 = "3caa21522e7efac1ba21834a03734c57b4611c7e"
 uuid = "3bb67fe8-82b1-5028-8e26-92a6c54297fa"
-version = "0.10.3"
+version = "0.10.4"
 weakdeps = ["Random", "Test"]
 
     [deps.TranscodingStreams.extensions]
@@ -1354,9 +1360,9 @@ version = "1.31.0+0"
 
 [[deps.XML2_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Libiconv_jll", "Zlib_jll"]
-git-tree-sha1 = "801cbe47eae69adc50f36c3caec4758d2650741b"
+git-tree-sha1 = "07e470dabc5a6a4254ffebc29a1b3fc01464e105"
 uuid = "02c8fc9c-b97f-50b9-bbe4-9be30ff0a78a"
-version = "2.12.2+0"
+version = "2.12.5+0"
 
 [[deps.XSLT_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Libgcrypt_jll", "Libgpg_error_jll", "Libiconv_jll", "Pkg", "XML2_jll", "Zlib_jll"]
@@ -1366,9 +1372,9 @@ version = "1.1.34+0"
 
 [[deps.XZ_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl"]
-git-tree-sha1 = "522b8414d40c4cbbab8dee346ac3a09f9768f25d"
+git-tree-sha1 = "37195dcb94a5970397ad425b95a9a26d0befce3a"
 uuid = "ffd25f8a-64ca-5728-b0f7-c24cf3aae800"
-version = "5.4.5+0"
+version = "5.6.0+0"
 
 [[deps.Xorg_libICE_jll]]
 deps = ["Libdl", "Pkg"]
@@ -1602,9 +1608,9 @@ version = "1.18.0+0"
 
 [[deps.libpng_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Zlib_jll"]
-git-tree-sha1 = "93284c28274d9e75218a416c65ec49d0e0fcdf3d"
+git-tree-sha1 = "d7015d2e18a5fd9a4f47de711837e980519781a4"
 uuid = "b53b4c65-9356-5827-b1ea-8c7a1a84506f"
-version = "1.6.40+0"
+version = "1.6.43+1"
 
 [[deps.libvorbis_jll]]
 deps = ["Artifacts", "JLLWrappers", "Libdl", "Ogg_jll", "Pkg"]
diff --git a/dev/examples/train-kernel-parameters/index.html b/dev/examples/train-kernel-parameters/index.html
index cd95e482a..de370e39f 100644
--- a/dev/examples/train-kernel-parameters/index.html
+++ b/dev/examples/train-kernel-parameters/index.html
@@ -17,108 +17,108 @@
 plot!(x_test, sinc; label="true function")
[SVG diff noise elided: the plot produced by the plot!(x_test, sinc; ...) call above was re-rendered with new auto-generated element IDs; the figure itself is unchanged.]

Manual Approach

The first option is to rebuild the parametrized kernel from a vector of parameters in each evaluation of the cost function. This is similar to the approach taken in Stheno.jl.

To train the kernel parameters via Zygote.jl, we need a function that builds a kernel from an array of parameters. A simple way to ensure that the kernel parameters stay positive is to optimize over the logarithm of the parameters.

function kernel_creator(θ)
     return (exp(θ[1]) * SqExponentialKernel() + exp(θ[2]) * Matern32Kernel()) ∘
            ScaleTransform(exp(θ[3]))
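The hunk above cuts off before the function's closing end. For reference, a self-contained sketch of the same construction with a small usage check; the θ values and input grid here are illustrative, not from the notebook:

using KernelFunctions

function kernel_creator(θ)
    return (exp(θ[1]) * SqExponentialKernel() + exp(θ[2]) * Matern32Kernel()) ∘
           ScaleTransform(exp(θ[3]))
end

θ = log.([1.1, 0.1, 0.01])      # log-parameters, so exp.(θ) is guaranteed positive
k = kernel_creator(θ)           # weighted sum of two kernels, composed with an input scaling
x = range(-3.0, 3.0; length=5)
K = kernelmatrix(k, x)          # 5×5 positive-semidefinite Gram matrix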
@@ -134,110 +134,110 @@ 

plot!(x_test, ŷ; label="prediction")

[SVG diff noise elided: the prediction plot above was re-rendered with new auto-generated element IDs; the figure itself is unchanged.]

We define the following loss:

function loss(θ)
     ŷ = f(x_train, x_train, y_train, θ)
     return norm(y_train - ŷ) + exp(θ[4]) * norm(ŷ)
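The regressor f called here is defined outside these hunks. A minimal sketch consistent with the calls above, assuming kernel ridge regression with exp(θ[4]) as the ridge term (a reconstruction, not necessarily the notebook's exact definition):

using KernelFunctions, LinearAlgebra

function f(x, x_train, y_train, θ)
    k = kernel_creator(θ[1:3])    # kernel built from the first three log-parameters
    return kernelmatrix(k, x, x_train) *
           ((kernelmatrix(k, x_train) + exp(θ[4]) * I) \ y_train)
end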
@@ -246,14 +246,14 @@ 

opt = Optimise.ADAGrad(0.5)
grads = only((Zygote.gradient(loss, θ)))
Optimise.update!(opt, θ, grads)
-end

BenchmarkTools.Trial: 6392 samples with 1 evaluation.
- Range (min … max):  635.570 μs …   4.961 ms  ┊ GC (min … max): 0.00% … 24.14%
- Time  (median):     715.959 μs               ┊ GC (median):    0.00%
- Time  (mean ± σ):   779.177 μs ± 237.663 μs  ┊ GC (mean ± σ):  6.31% ± 11.86%
+end
BenchmarkTools.Trial: 6169 samples with 1 evaluation.
+ Range (min … max):  661.998 μs …   9.628 ms  ┊ GC (min … max): 0.00% … 66.28%
+ Time  (median):     743.080 μs               ┊ GC (median):    0.00%
+ Time  (mean ± σ):   806.319 μs ± 267.902 μs  ┊ GC (mean ± σ):  6.30% ± 11.71%
 
-    ▅██▇▅▄▂                                         ▁ ▁         ▂
-  ▆█████████▇▆▃▃▄▁▄▆██▇▆▆▅▃▃▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▅▇███████▇▇▇▇▇ █
-  636 μs        Histogram: log(frequency) by time       1.75 ms <
+   ▁▂▇█▇▅▄▂                                              ▁ ▁    ▁
+  ▇█████████▆▅▅▆▁▃▃▃▅▄▇▆▅▅▁▃▃▁▁▁▁▃▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▆▆████████▇ █
+  662 μs        Histogram: log(frequency) by time       1.74 ms <
 
  Memory estimate: 2.98 MiB, allocs estimate: 1563.

Training the model

Setting an initial value and initializing the optimizer:

θ = log.(p0) # Initial vector
 opt = Optimise.ADAGrad(0.5)

Optimize

anim = Animation()
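The loop that fills this animation is cut off by the next hunk. A sketch of its usual shape, one ADAGrad step per recorded frame, using the names from the snippets above (the iteration count and output file name are illustrative):

for i in 1:15
    grads = only((Zygote.gradient(loss, θ)))   # gradient of the loss at the current log-parameters
    Optimise.update!(opt, θ, grads)            # one ADAGrad step, updating θ in place
    p = plot(x_test, f(x_test, x_train, y_train, θ); label="iteration $i")
    frame(anim, p)                             # record one frame
end
gif(anim, "training.gif")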
@@ -297,14 +297,14 @@ 

opt = Optimise.ADAGrad(0.5)
grads = (Zygote.gradient(loss ∘ unflatten, θ))[1]
Optimise.update!(opt, θ, grads)
-end

BenchmarkTools.Trial: 5448 samples with 1 evaluation.
- Range (min … max):  748.960 μs …   2.608 ms  ┊ GC (min … max): 0.00% … 47.88%
- Time  (median):     855.177 μs               ┊ GC (median):    0.00%
- Time  (mean ± σ):   914.518 μs ± 234.434 μs  ┊ GC (mean ± σ):  5.13% ± 10.86%
+end
BenchmarkTools.Trial: 5269 samples with 1 evaluation.
+ Range (min … max):  812.149 μs …   5.413 ms  ┊ GC (min … max): 0.00% … 20.33%
+ Time  (median):     885.848 μs               ┊ GC (median):    0.00%
+ Time  (mean ± σ):   945.969 μs ± 249.081 μs  ┊ GC (mean ± σ):  5.15% ± 10.95%
 
-   ▁▄▅▇█▇▅▃▁              ▁                                ▁▁   ▂
-  ▇██████████▆▇▅▅▃▃▃▁▄▁▅▆▇██▆▃▃▁▄▃▅▅▃▅▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅▁▇▇██████ █
-  749 μs        Histogram: log(frequency) by time       1.94 ms <
+  ▃▄▅█▇▅▄▂                                                 ▁    ▁
+  █████████▇▆▅▄▄▃▄▄▁▁▄▆▇▇▇▅▅▄▃▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▅███████ █
+  812 μs        Histogram: log(frequency) by time       1.98 ms <
 
  Memory estimate: 3.08 MiB, allocs estimate: 2228.
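The unflatten used in the benchmark above comes from ParameterHandling.jl; its construction sits outside these hunks. The usual pattern, sketched with illustrative names and initial values (the flattened variant of loss then composes as loss ∘ unflatten):

using ParameterHandling

raw_initial_params = (
    k1 = positive(1.1),          # illustrative: weight of the first kernel
    k2 = positive(0.1),          # illustrative: weight of the second kernel
    k3 = positive(0.01),         # illustrative: inverse lengthscale
    noise_var = positive(0.001), # illustrative: regularisation term
)
# flat_θ is an unconstrained Vector{Float64}; unflatten maps it back to the
# positively-constrained values, so gradients can be taken in the flat space.
flat_θ, unflatten = ParameterHandling.value_flatten(raw_initial_params)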

Training the model

Optimize

opt = Optimise.ADAGrad(0.5)
 for i in 1:15
@@ -330,14 +330,14 @@ 

end

Cost for one step

@benchmark let θt = θ[:], optt = Optimise.ADAGrad(0.5)
     grads = only((Zygote.gradient(loss, θt)))
     Optimise.update!(optt, θt, grads)
-end
BenchmarkTools.Trial: 6664 samples with 1 evaluation.
- Range (min … max):  642.943 μs …   3.631 ms  ┊ GC (min … max): 0.00% … 24.37%
- Time  (median):     704.498 μs               ┊ GC (median):    0.00%
- Time  (mean ± σ):   747.310 μs ± 213.925 μs  ┊ GC (mean ± σ):  5.10% ± 10.59%
+end
BenchmarkTools.Trial: 6311 samples with 1 evaluation.
+ Range (min … max):  662.188 μs …   6.425 ms  ┊ GC (min … max): 0.00% … 15.96%
+ Time  (median):     739.754 μs               ┊ GC (median):    0.00%
+ Time  (mean ± σ):   789.411 μs ± 245.120 μs  ┊ GC (mean ± σ):  5.41% ± 10.91%
 
-  ▅▆▇█▆▄▃▁                                                 ▁▁▁▁ ▂
-  ████████▇▅▄▆▄▁▁▁▁▁▁▁▁▁▁▃▁▁▁▄▃▃▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄▆█████ █
-  643 μs        Histogram: log(frequency) by time       1.76 ms <
+  ▄▅▅█▇▅▄▂                                                  ▁   ▁
+  █████████▆▃▄▄▁▁▄▆▆▅▅▃▁▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄▅▇██████ █
+  662 μs        Histogram: log(frequency) by time       1.91 ms <
 
  Memory estimate: 2.98 MiB, allocs estimate: 1558.

Training the model

The loss at our initial parameter values:

θ = log.([1.1, 0.1, 0.01, 0.001]) # Initial vector
 loss(θ)
2.613933959118708

Initialize optimizer

opt = Optimise.ADAGrad(0.5)

Optimize

for i in 1:15
@@ -349,16 +349,17 @@ 
Package and system information
Package information (click to expand)
 Status `~/work/KernelFunctions.jl/KernelFunctions.jl/examples/train-kernel-parameters/Project.toml`
-  [6e4b80f9] BenchmarkTools v1.4.0
+  [6e4b80f9] BenchmarkTools v1.5.0
   [31c24e10] Distributions v0.25.107
-  [587475ba] Flux v0.14.11
+  [587475ba] Flux v0.14.13
   [f6369f11] ForwardDiff v0.10.36
   [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`
   [98b081ad] Literate v2.16.1
-  [2412ca09] ParameterHandling v0.4.10
-  [91a5bcdd] Plots v1.40.1
+⌅ [2412ca09] ParameterHandling v0.4.10
+  [91a5bcdd] Plots v1.40.2
   [e88e6eb3] Zygote v0.6.69
   [37e2e46d] LinearAlgebra
+Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`
 
To reproduce this notebook's package environment, you can
@@ -367,8 +368,8 @@
Package and system information
System information (click to expand)
-Julia Version 1.10.0
-Commit 3120989f39b (2023-12-25 18:01 UTC)
+Julia Version 1.10.2
+Commit bd47eca2c8a (2024-03-01 10:14 UTC)
 Build Info:
   Official https://julialang.org/ release
 Platform Info:
@@ -377,9 +378,9 @@ 
Package and system information
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
-  Threads: 1 on 4 virtual cores
+Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
Environment:
  JULIA_DEBUG = Documenter
  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src
-

This page was generated using Literate.jl.

+

This page was generated using Literate.jl.

diff --git a/dev/examples/train-kernel-parameters/notebook.ipynb b/dev/examples/train-kernel-parameters/notebook.ipynb
index a620da226..90cfa077d 100644
--- a/dev/examples/train-kernel-parameters/notebook.ipynb
+++ b/dev/examples/train-kernel-parameters/notebook.ipynb
@@ -87,215 +87,215 @@
[inline SVG plot data regenerated; image markup omitted]
@@ -385,219 +385,219 @@
[inline SVG plot data regenerated; image markup omitted]
@@ -673,7 +673,7 @@
-    "text/plain": "BenchmarkTools.Trial: 6677 samples with 1 evaluation.\n Range (min … max): 645.198 μs … 5.774 ms ┊ GC (min … max): 0.00% … 19.96%\n Time (median): 706.031 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 745.997 μs ± 221.619 μs ┊ GC (mean ± σ): 4.73% ± 10.24%\n [histogram omitted]\n Memory estimate: 2.98 MiB, allocs estimate: 1559."
+    "text/plain": "BenchmarkTools.Trial: 6312 samples with 1 evaluation.\n Range (min … max): 672.769 μs … 6.500 ms ┊ GC (min … max): 0.00% … 16.30%\n Time (median): 744.803 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 789.237 μs ± 237.558 μs ┊ GC (mean ± σ): 4.86% ± 10.35%\n [histogram omitted]\n Memory estimate: 2.98 MiB, allocs estimate: 1559."
}, "metadata": {}, "execution_count": 9
@@ -884,7 +884,7 @@
-    "text/plain": "BenchmarkTools.Trial: 5506 samples with 1 evaluation.\n Range (min … max): 794.795 μs … 6.091 ms ┊ GC (min … max): 0.00% … 20.62%\n Time (median): 854.756 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 904.857 μs ± 257.476 μs ┊ GC (mean ± σ): 4.45% ± 9.83%\n [histogram omitted]\n Memory estimate: 3.08 MiB, allocs estimate: 2228."
+    "text/plain": "BenchmarkTools.Trial: 5319 samples with 1 evaluation.\n Range (min … max): 815.094 μs … 6.926 ms ┊ GC (min … max): 0.00% … 21.93%\n Time (median): 888.823 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 937.384 μs ± 254.710 μs ┊ GC (mean ± σ): 4.39% ± 9.98%\n [histogram omitted]\n Memory estimate: 3.08 MiB, allocs estimate: 2228."
}, "metadata": {}, "execution_count": 16
@@ -1049,7 +1049,7 @@
-    "text/plain": "BenchmarkTools.Trial: 6678 samples with 1 evaluation.\n Range (min … max): 651.439 μs … 4.987 ms ┊ GC (min … max): 0.00% … 23.01%\n Time (median): 706.492 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 745.705 μs ± 224.190 μs ┊ GC (mean ± σ): 4.56% ± 9.84%\n [histogram omitted]\n Memory estimate: 2.98 MiB, allocs estimate: 1558."
+    "text/plain": "BenchmarkTools.Trial: 6451 samples with 1 evaluation.\n Range (min … max): 667.058 μs … 6.439 ms ┊ GC (min … max): 0.00% … 19.28%\n Time (median): 735.005 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 772.224 μs ± 231.994 μs ┊ GC (mean ± σ): 4.52% ± 9.86%\n [histogram omitted]\n Memory estimate: 2.98 MiB, allocs estimate: 1558."
}, "metadata": {}, "execution_count": 22
@@ -1169,16 +1169,17 @@
    "Package information (click to expand)\n",
    "
\n",
     "Status `~/work/KernelFunctions.jl/KernelFunctions.jl/examples/train-kernel-parameters/Project.toml`\n",
-    "  [6e4b80f9] BenchmarkTools v1.4.0\n",
+    "  [6e4b80f9] BenchmarkTools v1.5.0\n",
     "  [31c24e10] Distributions v0.25.107\n",
-    "  [587475ba] Flux v0.14.11\n",
+    "  [587475ba] Flux v0.14.13\n",
     "  [f6369f11] ForwardDiff v0.10.36\n",
     "  [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`\n",
     "  [98b081ad] Literate v2.16.1\n",
-    "  [2412ca09] ParameterHandling v0.4.10\n",
-    "  [91a5bcdd] Plots v1.40.1\n",
+    "⌅ [2412ca09] ParameterHandling v0.4.10\n",
+    "  [91a5bcdd] Plots v1.40.2\n",
     "  [e88e6eb3] Zygote v0.6.69\n",
     "  [37e2e46d] LinearAlgebra\n",
+    "Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`\n",
     "
\n", "To reproduce this notebook's package environment, you can\n", "\n", @@ -1187,8 +1188,8 @@ "
\n", "System information (click to expand)\n", "
\n",
-    "Julia Version 1.10.0\n",
-    "Commit 3120989f39b (2023-12-25 18:01 UTC)\n",
+    "Julia Version 1.10.2\n",
+    "Commit bd47eca2c8a (2024-03-01 10:14 UTC)\n",
     "Build Info:\n",
     "  Official https://julialang.org/ release\n",
     "Platform Info:\n",
@@ -1197,7 +1198,7 @@
     "  WORD_SIZE: 64\n",
     "  LIBM: libopenlibm\n",
     "  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\n",
-    "  Threads: 1 on 4 virtual cores\n",
+    "Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)\n",
     "Environment:\n",
     "  JULIA_DEBUG = Documenter\n",
     "  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src\n",
@@ -1222,11 +1223,11 @@
    "file_extension": ".jl",
    "mimetype": "application/julia",
    "name": "julia",
-   "version": "1.10.0"
+   "version": "1.10.2"
   },
   "kernelspec": {
    "name": "julia-1.10",
-   "display_name": "Julia 1.10.0",
+   "display_name": "Julia 1.10.2",
    "language": "julia"
   }
  },
diff --git a/dev/index.html b/dev/index.html
index 373e9fa6d..f32698372 100644
--- a/dev/index.html
+++ b/dev/index.html
@@ -1,2 +1,2 @@
 
-Home · KernelFunctions.jl

KernelFunctions.jl

KernelFunctions.jl is a general purpose kernel package. It provides a flexible framework for creating kernel functions and manipulating them, and an extensive collection of implementations. The main goals of this package are:

  • Flexibility: operations between kernels should be fluid and easy without breaking, with a user-friendly API.
  • Plug-and-play: being model-agnostic; including the kernels before/after other steps should be straightforward. To interoperate well with generic packages for handling parameters like ParameterHandling.jl and FluxML's Functors.jl.
  • Automatic Differentiation compatibility: all kernel functions which ought to be differentiable using AD packages like ForwardDiff.jl or Zygote.jl should be.

This package replaces the now-defunct MLKernels.jl. It incorporates lots of excellent existing work from packages such as GaussianProcesses.jl, and is used in downstream packages such as AbstractGPs.jl, ApproximateGPs.jl, Stheno.jl, and AugmentedGaussianProcesses.jl.

See the User guide for a brief introduction.

+Home · KernelFunctions.jl

KernelFunctions.jl

KernelFunctions.jl is a general purpose kernel package. It provides a flexible framework for creating kernel functions and manipulating them, and an extensive collection of implementations. The main goals of this package are:

  • Flexibility: operations between kernels should be fluid and easy without breaking, with a user-friendly API.
  • Plug-and-play: being model-agnostic; including the kernels before/after other steps should be straightforward. To interoperate well with generic packages for handling parameters like ParameterHandling.jl and FluxML's Functors.jl.
  • Automatic Differentiation compatibility: all kernel functions which ought to be differentiable using AD packages like ForwardDiff.jl or Zygote.jl should be.

This package replaces the now-defunct MLKernels.jl. It incorporates lots of excellent existing work from packages such as GaussianProcesses.jl, and is used in downstream packages such as AbstractGPs.jl, ApproximateGPs.jl, Stheno.jl, and AugmentedGaussianProcesses.jl.

See the User guide for a brief introduction.

diff --git a/dev/kernels/index.html b/dev/kernels/index.html index 5be346d23..144ead481 100644 --- a/dev/kernels/index.html +++ b/dev/kernels/index.html @@ -1,20 +1,20 @@ -Kernel Functions · KernelFunctions.jl

Kernel Functions

Base Kernels

These are the basic kernels without any transformation of the data. They are the building blocks of KernelFunctions.

Constant Kernels

KernelFunctions.WhiteKernelType
WhiteKernel()

White noise kernel.

Definition

For inputs $x, x'$, the white noise kernel is defined as

\[k(x, x') = \delta(x, x').\]

source

Cosine Kernel

KernelFunctions.CosineKernelType
CosineKernel(; metric=Euclidean())

Cosine kernel with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the cosine kernel is defined as

\[k(x, x') = \cos(\pi d(x, x')).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

source

Exponential Kernels

KernelFunctions.ExponentialKernelType
ExponentialKernel(; metric=Euclidean())

Exponential kernel with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the exponential kernel is defined as

\[k(x, x') = \exp\big(- d(x, x')\big).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: GammaExponentialKernel

source
KernelFunctions.GibbsKernelType
GibbsKernel(; lengthscale)

Gibbs Kernel with lengthscale function lengthscale.

The Gibbs kernel is a non-stationary generalisation of the squared exponential kernel. The lengthscale parameter $l$ becomes a function of position $l(x)$.

Definition

For inputs $x, x'$, the Gibbs kernel with lengthscale function $l(\cdot)$ is defined as

\[k(x, x'; l) = \sqrt{\left(\frac{2 l(x) l(x')}{l(x)^2 + l(x')^2}\right)} -\quad \exp{\left(-\frac{(x - x')^2}{l(x)^2 + l(x')^2}\right)}.\]

For a constant function $l \equiv c$, one recovers the SqExponentialKernel with lengthscale c.

References

Mark N. Gibbs. "Bayesian Gaussian Processes for Regression and Classification." PhD thesis, 1997

Christopher J. Paciorek and Mark J. Schervish. "Nonstationary Covariance Functions for Gaussian Process Regression". NeurIPS, 2003

Sami Remes, Markus Heinonen, Samuel Kaski. "Non-Stationary Spectral Kernels". arXiv:1705.08736, 2017

Sami Remes, Markus Heinonen, Samuel Kaski. "Neural Non-Stationary Spectral Kernel". arXiv:1811.10978, 2018

source
KernelFunctions.SqExponentialKernelType
SqExponentialKernel(; metric=Euclidean())

Squared exponential kernel with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the squared exponential kernel is defined as

\[k(x, x') = \exp\bigg(- \frac{d(x, x')^2}{2}\bigg).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: GammaExponentialKernel

source
KernelFunctions.GammaExponentialKernelType
GammaExponentialKernel(; γ::Real=1.0, metric=Euclidean())

γ-exponential kernel with respect to the metric and with parameter γ.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the γ-exponential kernel[RW] with parameter $\gamma \in (0, 2]$ is defined as

\[k(x, x'; \gamma) = \exp\big(- d(x, x')^{\gamma}\big).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: ExponentialKernel, SqExponentialKernel

source

Exponentiated Kernel

KernelFunctions.ExponentiatedKernelType
ExponentiatedKernel()

Exponentiated kernel.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the exponentiated kernel is defined as

\[k(x, x') = \exp(x^\top x').\]

source

Fractional Brownian Motion Kernel

KernelFunctions.FBMKernelType
FBMKernel(; h::Real=0.5)

Fractional Brownian motion kernel with Hurst index h.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the fractional Brownian motion kernel with Hurst index $h \in [0,1]$ is defined as

\[k(x, x'; h) = \frac{\|x\|_2^{2h} + \|x'\|_2^{2h} - \|x - x'\|^{2h}}{2}.\]

source

Gabor Kernel

KernelFunctions.gaborkernelFunction
gaborkernel(;
+Kernel Functions · KernelFunctions.jl

Kernel Functions

Base Kernels

These are the basic kernels without any transformation of the data. They are the building blocks of KernelFunctions.

Constant Kernels

KernelFunctions.WhiteKernelType
WhiteKernel()

White noise kernel.

Definition

For inputs $x, x'$, the white noise kernel is defined as

\[k(x, x') = \delta(x, x').\]

source
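
As an illustrative check of this definition (not part of the original docstring), the kernel evaluates to one for identical inputs and to zero otherwise:

julia> WhiteKernel()(1.0, 1.0) == 1.0
true

julia> WhiteKernel()(1.0, 2.0) == 0.0
true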

Cosine Kernel

KernelFunctions.CosineKernelType
CosineKernel(; metric=Euclidean())

Cosine kernel with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the cosine kernel is defined as

\[k(x, x') = \cos(\pi d(x, x')).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

source
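
For example, with the default Euclidean metric one would expect:

julia> CosineKernel()(0.0, 0.25) ≈ cos(π * 0.25)
true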

Exponential Kernels

KernelFunctions.ExponentialKernelType
ExponentialKernel(; metric=Euclidean())

Exponential kernel with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the exponential kernel is defined as

\[k(x, x') = \exp\big(- d(x, x')\big).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: GammaExponentialKernel

source
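
A quick sanity check against the definition (default Euclidean metric assumed):

julia> ExponentialKernel()(0.0, 1.0) ≈ exp(-1)
true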
KernelFunctions.GibbsKernelType
GibbsKernel(; lengthscale)

Gibbs Kernel with lengthscale function lengthscale.

The Gibbs kernel is a non-stationary generalisation of the squared exponential kernel. The lengthscale parameter $l$ becomes a function of position $l(x)$.

Definition

For inputs $x, x'$, the Gibbs kernel with lengthscale function $l(\cdot)$ is defined as

\[k(x, x'; l) = \sqrt{\left(\frac{2 l(x) l(x')}{l(x)^2 + l(x')^2}\right)} +\quad \exp{\left(-\frac{(x - x')^2}{l(x)^2 + l(x')^2}\right)}.\]

For a constant function $l \equiv c$, one recovers the SqExponentialKernel with lengthscale c.

References

Mark N. Gibbs. "Bayesian Gaussian Processes for Regression and Classification." PhD thesis, 1997

Christopher J. Paciorek and Mark J. Schervish. "Nonstationary Covariance Functions for Gaussian Process Regression". NeurIPS, 2003

Sami Remes, Markus Heinonen, Samuel Kaski. "Non-Stationary Spectral Kernels". arXiv:1705.08736, 2017

Sami Remes, Markus Heinonen, Samuel Kaski. "Neural Non-Stationary Spectral Kernel". arXiv:1811.10978, 2018

source
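
A sketch of the constant-lengthscale special case mentioned above, here with $l \equiv 2$; the comparison kernel is the squared exponential with lengthscale 2, written as a ScaleTransform composition:

julia> k = GibbsKernel(; lengthscale=x -> 2.0);

julia> k(0.3, 1.3) ≈ (SqExponentialKernel() ∘ ScaleTransform(1/2))(0.3, 1.3)
true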
KernelFunctions.SqExponentialKernelType
SqExponentialKernel(; metric=Euclidean())

Squared exponential kernel with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the squared exponential kernel is defined as

\[k(x, x') = \exp\bigg(- \frac{d(x, x')^2}{2}\bigg).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: GammaExponentialKernel

source
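
For example, by the definition above one would expect:

julia> SqExponentialKernel()(0.0, 1.0) ≈ exp(-1/2)
true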
KernelFunctions.GammaExponentialKernelType
GammaExponentialKernel(; γ::Real=1.0, metric=Euclidean())

γ-exponential kernel with respect to the metric and with parameter γ.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the γ-exponential kernel[RW] with parameter $\gamma \in (0, 2]$ is defined as

\[k(x, x'; \gamma) = \exp\big(- d(x, x')^{\gamma}\big).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: ExponentialKernel, SqExponentialKernel

source
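
Illustratively, γ = 1 should recover the ExponentialKernel:

julia> GammaExponentialKernel(; γ=1.0)(0.0, 2.0) ≈ ExponentialKernel()(0.0, 2.0)
true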

Exponentiated Kernel

KernelFunctions.ExponentiatedKernelType
ExponentiatedKernel()

Exponentiated kernel.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the exponentiated kernel is defined as

\[k(x, x') = \exp(x^\top x').\]

source
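
For instance, with $x^\top x' = 2$:

julia> ExponentiatedKernel()([1.0, 0.0], [2.0, 1.0]) ≈ exp(2)
true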

Fractional Brownian Motion Kernel

KernelFunctions.FBMKernelType
FBMKernel(; h::Real=0.5)

Fractional Brownian motion kernel with Hurst index h.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the fractional Brownian motion kernel with Hurst index $h \in [0,1]$ is defined as

\[k(x, x'; h) = \frac{\|x\|_2^{2h} + \|x'\|_2^{2h} - \|x - x'\|^{2h}}{2}.\]

source
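
A small worked example with $h = 1/2$, where the formula reduces to $(\|x\|_2 + \|x'\|_2 - \|x - x'\|_2)/2$:

julia> FBMKernel(; h=0.5)([1.0], [2.0]) ≈ (1 + 2 - 1) / 2
true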

Gabor Kernel

KernelFunctions.gaborkernelFunction
gaborkernel(;
     sqexponential_transform=IdentityTransform(), cosine_tranform=IdentityTransform()
 )

Construct a Gabor kernel with transformations sqexponential_transform and cosine_transform of the inputs of the underlying squared exponential and cosine kernel, respectively.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the Gabor kernel with transformations $f$ and $g$ of the inputs to the squared exponential and cosine kernel, respectively, is defined as

\[k(x, x'; f, g) = \exp\bigg(- \frac{\| f(x) - f(x')\|_2^2}{2}\bigg) - \cos\big(\pi \|g(x) - g(x')\|_2 \big).\]

source

Matérn Kernels

KernelFunctions.MaternKernelType
MaternKernel(; ν::Real=1.5, metric=Euclidean())

Matérn kernel of order ν with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $\nu > 0$ is defined as

\[k(x,x';\nu) = \frac{2^{1-\nu}}{\Gamma(\nu)}\big(\sqrt{2\nu} d(x, x')\big) K_\nu\big(\sqrt{2\nu} d(x, x')\big),\]

where $\Gamma$ is the Gamma function and $K_{\nu}$ is the modified Bessel function of the second kind of order $\nu$. By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

A Gaussian process with a Matérn kernel is $\lceil \nu \rceil - 1$-times differentiable in the mean-square sense.

Note

Differentiation with respect to the order ν is not currently supported.

See also: Matern12Kernel, Matern32Kernel, Matern52Kernel

source
KernelFunctions.Matern32KernelType
Matern32Kernel(; metric=Euclidean())

Matérn kernel of order $3/2$ with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $3/2$ is given by

\[k(x, x') = \big(1 + \sqrt{3} d(x, x') \big) \exp\big(- \sqrt{3} d(x, x') \big).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: MaternKernel

source
KernelFunctions.Matern52KernelType
Matern52Kernel(; metric=Euclidean())

Matérn kernel of order $5/2$ with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $5/2$ is given by

\[k(x, x') = \bigg(1 + \sqrt{5} d(x, x') + \frac{5}{3} d(x, x')^2\bigg) - \exp\big(- \sqrt{5} d(x, x') \big).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: MaternKernel

source

Neural Network Kernel

KernelFunctions.NeuralNetworkKernelType
NeuralNetworkKernel()

Kernel of a Gaussian process obtained as the limit of a Bayesian neural network with a single hidden layer as the number of units goes to infinity.

Definition

Consider the single-layer Bayesian neural network $f \colon \mathbb{R}^d \to \mathbb{R}$ with $h$ hidden units defined by

\[f(x; b, v, u) = b + \sqrt{\frac{\pi}{2}} \sum_{i=1}^{h} v_i \mathrm{erf}\big(u_i^\top x\big),\]

where $\mathrm{erf}$ is the error function, and with prior distributions

\[\begin{aligned}
+ \cos\big(\pi \|g(x) - g(x')\|_2 \big).\]

source
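
With the default identity transforms, the definition reduces to $\exp(-\|x - x'\|_2^2/2) \cos(\pi \|x - x'\|_2)$, so one would expect:

julia> gaborkernel()([0.0], [1.0]) ≈ exp(-1/2) * cos(π)
true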

Matérn Kernels

KernelFunctions.MaternKernelType
MaternKernel(; ν::Real=1.5, metric=Euclidean())

Matérn kernel of order ν with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $\nu > 0$ is defined as

\[k(x,x';\nu) = \frac{2^{1-\nu}}{\Gamma(\nu)}\big(\sqrt{2\nu} d(x, x')\big) K_\nu\big(\sqrt{2\nu} d(x, x')\big),\]

where $\Gamma$ is the Gamma function and $K_{\nu}$ is the modified Bessel function of the second kind of order $\nu$. By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

A Gaussian process with a Matérn kernel is $\lceil \nu \rceil - 1$-times differentiable in the mean-square sense.

Note

Differentiation with respect to the order ν is not currently supported.

See also: Matern12Kernel, Matern32Kernel, Matern52Kernel

source
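
Since the general formula reduces to the closed-form Matérn 3/2 kernel at ν = 3/2, the following identity should hold (approximately, up to floating-point error in the Bessel function evaluation):

julia> MaternKernel(; ν=1.5)(0.0, 1.0) ≈ Matern32Kernel()(0.0, 1.0)
true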
KernelFunctions.Matern32KernelType
Matern32Kernel(; metric=Euclidean())

Matérn kernel of order $3/2$ with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $3/2$ is given by

\[k(x, x') = \big(1 + \sqrt{3} d(x, x') \big) \exp\big(- \sqrt{3} d(x, x') \big).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: MaternKernel

source
KernelFunctions.Matern52KernelType
Matern52Kernel(; metric=Euclidean())

Matérn kernel of order $5/2$ with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $5/2$ is given by

\[k(x, x') = \bigg(1 + \sqrt{5} d(x, x') + \frac{5}{3} d(x, x')^2\bigg) + \exp\big(- \sqrt{5} d(x, x') \big).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

See also: MaternKernel

source

Neural Network Kernel

KernelFunctions.NeuralNetworkKernelType
NeuralNetworkKernel()

Kernel of a Gaussian process obtained as the limit of a Bayesian neural network with a single hidden layer as the number of units goes to infinity.

Definition

Consider the single-layer Bayesian neural network $f \colon \mathbb{R}^d \to \mathbb{R}$ with $h$ hidden units defined by

\[f(x; b, v, u) = b + \sqrt{\frac{\pi}{2}} \sum_{i=1}^{h} v_i \mathrm{erf}\big(u_i^\top x\big),\]

where $\mathrm{erf}$ is the error function, and with prior distributions

\[\begin{aligned}
b &\sim \mathcal{N}(0, \sigma_b^2),\\
v &\sim \mathcal{N}(0, \sigma_v^2 \mathrm{I}_{h}/h),\\
u_i &\sim \mathcal{N}(0, \mathrm{I}_{d}/2) \qquad (i = 1,\ldots,h).
-\end{aligned}\]

As $h \to \infty$, the neural network converges to the Gaussian process

\[g(\cdot) \sim \mathcal{GP}\big(0, \sigma_b^2 + \sigma_v^2 k(\cdot, \cdot)\big),\]

where the neural network kernel $k$ is given by

\[k(x, x') = \arcsin\left(\frac{x^\top x'}{\sqrt{\big(1 + \|x\|^2_2\big) \big(1 + \|x'\|_2^2\big)}}\right)\]

for inputs $x, x' \in \mathbb{R}^d$.[CW]

source

Periodic Kernel

KernelFunctions.PeriodicKernelType
PeriodicKernel(; r::AbstractVector=ones(Float64, 1))

Periodic kernel with parameter r.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the periodic kernel with parameter $r_i > 0$ is defined[DM] as

\[k(x, x'; r) = \exp\bigg(- \frac{1}{2} \sum_{i=1}^d \bigg(\frac{\sin\big(\pi(x_i - x'_i)\big)}{r_i}\bigg)^2\bigg).\]

source

Piecewise Polynomial Kernel

KernelFunctions.PiecewisePolynomialKernelType
PiecewisePolynomialKernel(; dim::Int, degree::Int=0, metric=Euclidean())
+\end{aligned}\]

As $h \to \infty$, the neural network converges to the Gaussian process

\[g(\cdot) \sim \mathcal{GP}\big(0, \sigma_b^2 + \sigma_v^2 k(\cdot, \cdot)\big),\]

where the neural network kernel $k$ is given by

\[k(x, x') = \arcsin\left(\frac{x^\top x'}{\sqrt{\big(1 + \|x\|^2_2\big) \big(1 + \|x'\|_2^2\big)}}\right)\]

for inputs $x, x' \in \mathbb{R}^d$.[CW]

source
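
As an illustrative evaluation of the arcsine formula (for a unit-norm input compared with itself, $k(x, x) = \arcsin(1/2)$):

julia> NeuralNetworkKernel()([1.0, 0.0], [1.0, 0.0]) ≈ asin(1/2)
true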

Periodic Kernel

KernelFunctions.PeriodicKernelType
PeriodicKernel(; r::AbstractVector=ones(Float64, 1))

Periodic kernel with parameter r.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the periodic kernel with parameter $r_i > 0$ is defined[DM] as

\[k(x, x'; r) = \exp\bigg(- \frac{1}{2} \sum_{i=1}^d \bigg(\frac{\sin\big(\pi(x_i - x'_i)\big)}{r_i}\bigg)^2\bigg).\]

source
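
For example, with the default r = [1.0] and scalar inputs, $\sin(\pi \cdot 0.5) = 1$ gives:

julia> PeriodicKernel()(0.0, 0.5) ≈ exp(-1/2)
true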

Piecewise Polynomial Kernel

KernelFunctions.PiecewisePolynomialKernelType
PiecewisePolynomialKernel(; dim::Int, degree::Int=0, metric=Euclidean())
 PiecewisePolynomialKernel{degree}(; dim::Int, metric=Euclidean())

Piecewise polynomial kernel of degree degree for inputs of dimension dim with support in the unit ball with respect to the metric.

Definition

For inputs $x, x'$ of dimension $m$ and metric $d(\cdot, \cdot)$, the piecewise polynomial kernel of degree $v \in \{0,1,2,3\}$ is defined as

\[k(x, x'; v) = \max(1 - d(x, x'), 0)^{\alpha(v,m)} f_{v,m}(d(x, x')),\]

where $\alpha(v, m) = \lfloor \frac{m}{2}\rfloor + 2v + 1$ and $f_{v,m}$ are polynomials of degree $v$ given by

\[\begin{aligned}
f_{0,m}(r) &= 1, \\
f_{1,m}(r) &= 1 + (j + 1) r, \\
f_{2,m}(r) &= 1 + (j + 2) r + \big((j^2 + 4j + 3) / 3\big) r^2, \\
f_{3,m}(r) &= 1 + (j + 3) r + \big((6 j^2 + 36j + 45) / 15\big) r^2 + \big((j^3 + 9 j^2 + 23j + 15) / 15\big) r^3,
-\end{aligned}\]

where $j = \lfloor \frac{m}{2}\rfloor + v + 1$. By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

The kernel is $2v$ times continuously differentiable and the corresponding Gaussian process is hence $v$ times mean-square differentiable.

source

Polynomial Kernels

KernelFunctions.LinearKernelType
LinearKernel(; c::Real=0.0)

Linear kernel with constant offset c.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the linear kernel with constant offset $c \geq 0$ is defined as

\[k(x, x'; c) = x^\top x' + c.\]

See also: PolynomialKernel

source
KernelFunctions.PolynomialKernelType
PolynomialKernel(; degree::Int=2, c::Real=0.0)

Polynomial kernel of degree degree with constant offset c.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the polynomial kernel of degree $\nu \in \mathbb{N}$ with constant offset $c \geq 0$ is defined as

\[k(x, x'; c, \nu) = (x^\top x' + c)^\nu.\]

See also: LinearKernel

source

Rational Kernels

KernelFunctions.RationalKernelType
RationalKernel(; α::Real=2.0, metric=Euclidean())

Rational kernel with shape parameter α and given metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the rational kernel with shape parameter $\alpha > 0$ is defined as

\[k(x, x'; \alpha) = \bigg(1 + \frac{d(x, x')}{\alpha}\bigg)^{-\alpha}.\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

The ExponentialKernel is recovered in the limit as $\alpha \to \infty$.

See also: GammaRationalKernel

source
KernelFunctions.RationalQuadraticKernelType
RationalQuadraticKernel(; α::Real=2.0, metric=Euclidean())

Rational-quadratic kernel with respect to the metric and with shape parameter α.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the rational-quadratic kernel with shape parameter $\alpha > 0$ is defined as

\[k(x, x'; \alpha) = \bigg(1 + \frac{d(x, x')^2}{2\alpha}\bigg)^{-\alpha}.\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

The SqExponentialKernel is recovered in the limit as $\alpha \to \infty$.

See also: GammaRationalKernel

source
KernelFunctions.GammaRationalKernelType
GammaRationalKernel(; α::Real=2.0, γ::Real=1.0, metric=Euclidean())

γ-rational kernel with respect to the metric with shape parameters α and γ.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the γ-rational kernel with shape parameters $\alpha > 0$ and $\gamma \in (0, 2]$ is defined as

\[k(x, x'; \alpha, \gamma) = \bigg(1 + \frac{d(x, x')^{\gamma}}{\alpha}\bigg)^{-\alpha}.\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

The GammaExponentialKernel is recovered in the limit as $\alpha \to \infty$.

See also: RationalKernel, RationalQuadraticKernel

source

Spectral Mixture Kernels

KernelFunctions.spectral_mixture_kernelFunction
spectral_mixture_kernel(
+\end{aligned}\]

where $j = \lfloor \frac{m}{2}\rfloor + v + 1$. By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

The kernel is $2v$ times continuously differentiable and the corresponding Gaussian process is hence $v$ times mean-square differentiable.

source
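
A small worked example: for degree 0 and dim = 2 we have $\alpha(0, 2) = 2$ and $f_{0,m} \equiv 1$, so $k(x, x') = \max(1 - d(x, x'), 0)^2$:

julia> k = PiecewisePolynomialKernel(; dim=2, degree=0);

julia> k([0.0, 0.0], [0.5, 0.0]) ≈ 0.25
true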

Polynomial Kernels

KernelFunctions.LinearKernelType
LinearKernel(; c::Real=0.0)

Linear kernel with constant offset c.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the linear kernel with constant offset $c \geq 0$ is defined as

\[k(x, x'; c) = x^\top x' + c.\]

See also: PolynomialKernel

source
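
For instance, with $x^\top x' = 11$ and $c = 0.5$:

julia> LinearKernel(; c=0.5)([1.0, 2.0], [3.0, 4.0]) ≈ 11.5
true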
KernelFunctions.PolynomialKernelType
PolynomialKernel(; degree::Int=2, c::Real=0.0)

Polynomial kernel of degree degree with constant offset c.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the polynomial kernel of degree $\nu \in \mathbb{N}$ with constant offset $c \geq 0$ is defined as

\[k(x, x'; c, \nu) = (x^\top x' + c)^\nu.\]

See also: LinearKernel

source

Rational Kernels

KernelFunctions.RationalKernelType
RationalKernel(; α::Real=2.0, metric=Euclidean())

Rational kernel with shape parameter α and given metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the rational kernel with shape parameter $\alpha > 0$ is defined as

\[k(x, x'; \alpha) = \bigg(1 + \frac{d(x, x')}{\alpha}\bigg)^{-\alpha}.\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

The ExponentialKernel is recovered in the limit as $\alpha \to \infty$.

See also: GammaRationalKernel

source
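
A quick numerical check of the definition with $\alpha = 2$ and $d(x, x') = 1$:

julia> RationalKernel(; α=2.0)(0.0, 1.0) ≈ (1 + 1/2)^(-2)
true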
KernelFunctions.RationalQuadraticKernelType
RationalQuadraticKernel(; α::Real=2.0, metric=Euclidean())

Rational-quadratic kernel with respect to the metric and with shape parameter α.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the rational-quadratic kernel with shape parameter $\alpha > 0$ is defined as

\[k(x, x'; \alpha) = \bigg(1 + \frac{d(x, x')^2}{2\alpha}\bigg)^{-\alpha}.\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

The SqExponentialKernel is recovered in the limit as $\alpha \to \infty$.

See also: GammaRationalKernel

source
KernelFunctions.GammaRationalKernelType
GammaRationalKernel(; α::Real=2.0, γ::Real=1.0, metric=Euclidean())

γ-rational kernel with respect to the metric with shape parameters α and γ.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the γ-rational kernel with shape parameters $\alpha > 0$ and $\gamma \in (0, 2]$ is defined as

\[k(x, x'; \alpha, \gamma) = \bigg(1 + \frac{d(x, x')^{\gamma}}{\alpha}\bigg)^{-\alpha}.\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.

The GammaExponentialKernel is recovered in the limit as $\alpha \to \infty$.

See also: RationalKernel, RationalQuadraticKernel

source

Spectral Mixture Kernels

KernelFunctions.spectral_mixture_kernelFunction
spectral_mixture_kernel(
     h::Kernel=SqExponentialKernel(),
     αs::AbstractVector{<:Real},
     γs::AbstractMatrix{<:Real},
@@ -25,14 +25,14 @@
 [3] Covariance kernels for fast automatic pattern discovery and extrapolation
     with Gaussian processes, Andrew Gordon Wilson, PhD Thesis, January 2014.
     http://www.cs.cmu.edu/~andrewgw/andrewgwthesis.pdf
-[4] http://www.cs.cmu.edu/~andrewgw/pattern/.
source
KernelFunctions.spectral_mixture_product_kernelFunction
spectral_mixture_product_kernel(
     h::Kernel=SqExponentialKernel(),
     αs::AbstractMatrix{<:Real},
     γs::AbstractMatrix{<:Real},
     ωs::AbstractMatrix{<:Real},
 )

where αs are the weights of dimension (D, A), γs is the covariance matrix of dimension (D, A), and ωs are the mean vectors of dimension (D, A). Here, D is the input dimension and A is the number of spectral components.

Spectral Mixture Product (SMP) kernel. With enough components A, the SMP kernel can model any product kernel to arbitrary precision and is flexible even with a small number of components [1].

h is the kernel, which defaults to SqExponentialKernel if not specified.

\[\kappa(x, y) = \prod_{i=1}^{D} \sum_{k=1}^{A} \alpha_{i k} \, h\big(-(\gamma_{i k} t_i)^2\big) \cos(\omega_{i k} t_i), \qquad t_i = x_i - y_i.\]

References:

[1] GPatt: Fast Multidimensional Pattern Extrapolation with GPs,
     arXiv 1310.5288, 2013, by Andrew Gordon Wilson, Elad Gilboa,
-    Arye Nehorai and John P. Cunningham
source
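
A minimal usage sketch (the matrix shapes follow the (D, A) convention above; the random parameters are for illustration only):

julia> αs, γs, ωs = rand(3, 2), rand(3, 2), rand(3, 2);  # D = 3 input dimensions, A = 2 components

julia> k = spectral_mixture_product_kernel(SqExponentialKernel(), αs, γs, ωs);

julia> k(rand(3), rand(3)) isa Real
true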

Wiener Kernel

KernelFunctions.WienerKernelType
WienerKernel(; i::Int=0)
 WienerKernel{i}()

The i-times integrated Wiener process kernel function.

Definition

For inputs $x, x' \in \mathbb{R}^d$, the $i$-times integrated Wiener process kernel with $i \in \{-1, 0, 1, 2, 3\}$ is defined[SDH] as

\[k_i(x, x') = \begin{cases}
\delta(x, x') & \text{if } i=-1,\\
\min\big(\|x\|_2, \|x'\|_2\big) & \text{if } i=0,\\
@@ -47,13 +47,13 @@
r_1(t, t') &= 1,\\
r_2(t, t') &= t + t' - \frac{\min(t, t')}{2},\\
r_3(t, t') &= 5 \max(t, t')^2 + 2 tt' + 3 \min(t, t')^2.
-\end{aligned}\]

The WhiteKernel is recovered for $i = -1$.

source
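
Illustratively, for $i = 0$ the kernel is $\min(\|x\|_2, \|x'\|_2)$:

julia> WienerKernel(; i=0)([1.0], [2.0]) ≈ 1.0
true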

Composite Kernels

The modular design of KernelFunctions uses base kernels as building blocks for more complex kernels. There are a variety of composite kernels implemented, including those which transform the inputs to a wrapped kernel to implement length scales, scale the variance of a kernel, and sum or multiply collections of kernels together.

KernelFunctions.TransformedKernelType
TransformedKernel(k::Kernel, t::Transform)

Kernel derived from k for which inputs are transformed via a Transform t.

The preferred way to create kernels with input transformations is to use the composition operator or its alias compose instead of TransformedKernel directly since this allows optimized implementations for specific kernels and transformations.

See also: ∘, compose

source
Base.:∘Method
kernel ∘ transform
 ∘(kernel, transform)
 compose(kernel, transform)

Compose a kernel with a transformation transform of its inputs.

The prefix forms support chains of multiple transformations: ∘(kernel, transform1, transform2) = kernel ∘ transform1 ∘ transform2.

Definition

For inputs $x, x'$, the transformed kernel $\widetilde{k}$ derived from kernel $k$ by input transformation $t$ is defined as

\[\widetilde{k}(x, x'; k, t) = k\big(t(x), t(x')\big).\]

Examples

julia> (SqExponentialKernel() ∘ ScaleTransform(0.5))(0, 2) == exp(-0.5)
 true
 
 julia> ∘(ExponentialKernel(), ScaleTransform(2), ScaleTransform(0.5))(1, 2) == exp(-1)
-true

See also: TransformedKernel

source
KernelFunctions.ScaledKernelType
ScaledKernel(k::Kernel, σ²::Real=1.0)

Scaled kernel derived from k by multiplication with variance σ².

Definition

For inputs $x, x'$, the scaled kernel $\widetilde{k}$ derived from kernel $k$ by multiplication with variance $\sigma^2 > 0$ is defined as

\[\widetilde{k}(x, x'; k, \sigma^2) = \sigma^2 k(x, x').\]

source
KernelFunctions.KernelSumType
KernelSum <: Kernel

Create a sum of kernels. One can also use the operator +.

There are various ways in which you create a KernelSum:

The simplest way to specify a KernelSum would be to use the overloaded + operator. This is equivalent to creating a KernelSum by specifying the kernels as the arguments to the constructor.

julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);
+true

See also: TransformedKernel

source
KernelFunctions.ScaledKernelType
ScaledKernel(k::Kernel, σ²::Real=1.0)

Scaled kernel derived from k by multiplication with variance σ².

Definition

For inputs $x, x'$, the scaled kernel $\widetilde{k}$ derived from kernel $k$ by multiplication with variance $\sigma^2 > 0$ is defined as

\[\widetilde{k}(x, x'; k, \sigma^2) = \sigma^2 k(x, x').\]

source
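
For example, scaling the squared exponential kernel by $\sigma^2 = 2$:

julia> ScaledKernel(SqExponentialKernel(), 2.0)(0.0, 1.0) ≈ 2 * exp(-1/2)
true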
KernelFunctions.KernelSumType
KernelSum <: Kernel

Create a sum of kernels. One can also use the operator +.

There are various ways in which you create a KernelSum:

The simplest way to specify a KernelSum would be to use the overloaded + operator. This is equivalent to creating a KernelSum by specifying the kernels as the arguments to the constructor.

julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);
 
 julia> (k = k1 + k2) == KernelSum(k1, k2)
 true
@@ -66,7 +66,7 @@
 true
 
 julia> KernelSum([k1, k2]) == KernelSum((k1, k2)) == k1 + k2
-true
source
KernelFunctions.KernelProductType
KernelProduct <: Kernel

Create a product of kernels. One can also use the overloaded operator *.

There are various ways in which you create a KernelProduct:

The simplest way to specify a KernelProduct would be to use the overloaded * operator. This is equivalent to creating a KernelProduct by specifying the kernels as the arguments to the constructor.

julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);
+true
source
KernelFunctions.KernelProductType
KernelProduct <: Kernel

Create a product of kernels. One can also use the overloaded operator *.

There are various ways in which you create a KernelProduct:

The simplest way to specify a KernelProduct would be to use the overloaded * operator. This is equivalent to creating a KernelProduct by specifying the kernels as the arguments to the constructor.

julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);
 
 julia> (k = k1 * k2) == KernelProduct(k1, k2)
 true
@@ -79,7 +79,7 @@
 true
 
 julia> KernelProduct([k1, k2]) == KernelProduct((k1, k2)) == k1 * k2
true
source
KernelFunctions.KernelTensorProductType
KernelTensorProduct

Tensor product of kernels.

Definition

For inputs $x = (x_1, \ldots, x_n)$ and $x' = (x'_1, \ldots, x'_n)$, the tensor product of kernels $k_1, \ldots, k_n$ is defined as

\[k(x, x'; k_1, \ldots, k_n) = \Big(\bigotimes_{i=1}^n k_i\Big)(x, x') = \prod_{i=1}^n k_i(x_i, x'_i).\]

Construction

The simplest way to specify a KernelTensorProduct is to use the overloaded tensor operator or its alias ⊗ (which can be typed by \otimes<tab>).

julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5, 2);
 
 julia> kernelmatrix(k1 ⊗ k2, RowVecs(X)) == kernelmatrix(k1, X[:, 1]) .* kernelmatrix(k2, X[:, 2])
 true

You can also specify a KernelTensorProduct by providing kernels as individual arguments or as an iterable data structure such as a Tuple or a Vector. Using a tuple or individual arguments guarantees that KernelTensorProduct is concretely typed but might lead to large compilation times if the number of kernels is large.

julia> KernelTensorProduct(k1, k2) == k1 ⊗ k2
true

julia> KernelTensorProduct((k1, k2)) == k1 ⊗ k2
 true
 
 julia> KernelTensorProduct([k1, k2]) == k1 ⊗ k2
true
source
KernelFunctions.NormalizedKernelType
NormalizedKernel(k::Kernel)

A normalized kernel derived from k.

Definition

For inputs $x, x'$, the normalized kernel $\widetilde{k}$ derived from kernel $k$ is defined as

\[\widetilde{k}(x, x'; k) = \frac{k(x, x')}{\sqrt{k(x, x) k(x', x')}}.\]

source
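
For intuition (an illustrative sketch, not from the upstream docstring): a normalized kernel evaluates to one whenever both inputs coincide, regardless of the variance of the underlying kernel.

julia> k = NormalizedKernel(3.0 * SqExponentialKernel());

julia> k(0.5, 0.5) ≈ 1.0
true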

Multi-output Kernels

KernelFunctions.jl implements multi-output kernels as scalar kernels on an extended output domain. For more details, read the section on inputs for multi-output GPs.

For a function $f(x) \rightarrow y$, we denote the inputs by $x, x'$ and compute the covariance between the output components $y_{p}$ and $y_{p'}$. The total number of outputs is $m$.

KernelFunctions.IndependentMOKernelType
IndependentMOKernel(k::Kernel)

Kernel for multiple independent outputs with kernel k each.

Definition

For inputs $x, x'$ and output dimensions $p, p'$, the kernel $\widetilde{k}$ for independent outputs with kernel $k$ each is defined as

\[\widetilde{k}\big((x, p), (x', p')\big) = \begin{cases} k(x, x') & \text{if } p = p', \\ 0 & \text{otherwise}. \end{cases}\]

Mathematically, it is equivalent to a matrix-valued kernel defined as

\[\widetilde{K}(x, x') = \mathrm{diag}\big(k(x, x'), \ldots, k(x, x')\big) \in \mathbb{R}^{m \times m},\]

where $m$ is the number of outputs.

source
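
A small usage sketch (illustrative; it assumes the MOInput input type described in the section on inputs for multi-output GPs): wrapping 3 inputs for 2 outputs yields a 6 × 6 kernel matrix.

julia> k = IndependentMOKernel(SqExponentialKernel());

julia> x = MOInput(rand(3), 2);  # 3 inputs, 2 outputs

julia> size(kernelmatrix(k, x))
(6, 6)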
KernelFunctions.LatentFactorMOKernelType
LatentFactorMOKernel(g::AbstractVector{<:Kernel}, e::MOKernel, A::AbstractMatrix)

Kernel associated with the semiparametric latent factor model.

Definition

For inputs $x, x'$ and output dimensions $p_x, p_{x'}$, the kernel is defined as[STJ]

\[k\big((x, p_x), (x', p_{x'})\big) = \sum^{Q}_{q=1} A_{p_x q} g_q(x, x') A_{p_{x'} q} + e\big((x, p_x), (x', p_{x'})\big),\]

where $g_1, \ldots, g_Q$ are $Q$ kernels, one for each latent process, $e$ is a multi-output kernel for $m$ outputs, and $A$ is a matrix of weights for the kernels of size $m \times Q$.

source
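
A construction sketch (sizes chosen purely for illustration): with Q = 2 latent kernels and m = 3 outputs, the weight matrix A must be of size 3 × 2, and inputs take the (x, p) form used throughout this section.

julia> g = [SqExponentialKernel(), Matern32Kernel()];  # Q = 2 latent kernels

julia> e = IndependentMOKernel(WhiteKernel());  # multi-output kernel for the noise term

julia> A = rand(3, 2);  # m = 3 outputs, Q = 2 latent processes

julia> k = LatentFactorMOKernel(g, e, A);

julia> k((0.0, 1), (1.0, 3)) isa Real
true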
KernelFunctions.IntrinsicCoregionMOKernelType
IntrinsicCoregionMOKernel(; kernel::Kernel, B::AbstractMatrix)

Kernel associated with the intrinsic coregionalization model.

Definition

For inputs $x, x'$ and output dimensions $p, p'$, the kernel is defined as[ARL]

\[k\big((x, p), (x', p'); B, \tilde{k}\big) = B_{p, p'} \tilde{k}\big(x, x'\big),\]

where $B$ is a positive semidefinite matrix of size $m \times m$, with $m$ being the number of outputs, and $\tilde{k}$ is a scalar-valued kernel shared by the latent processes.

source
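
As an evaluation sketch (illustrative, using the (x, p) input form): the coregionalization matrix picks out the entry B[p, p'].

julia> B = [1.0 0.5; 0.5 1.0];  # positive semidefinite, m = 2 outputs

julia> k = IntrinsicCoregionMOKernel(; kernel=Matern32Kernel(), B=B);

julia> k((0.0, 1), (1.0, 2)) ≈ B[1, 2] * Matern32Kernel()(0.0, 1.0)
true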
KernelFunctions.LinearMixingModelKernelType
LinearMixingModelKernel(k::Kernel, H::AbstractMatrix)
LinearMixingModelKernel(Tk::AbstractVector{<:Kernel}, Th::AbstractMatrix)

Kernel associated with the linear mixing model, taking a vector of Q kernels and a Q × m mixing matrix H for a function with m outputs. Also accepts a single kernel k for use across all Q basis vectors.

Definition

For inputs $x, x'$ and output dimensions $p, p'$, the kernel is defined as[BPTHST]

\[k\big((x, p), (x', p')\big) = H_{:,p}^\top K(x, x') H_{:,p'},\]

where $K(x, x') = \mathrm{Diag}(k_1(x, x'), \ldots, k_Q(x, x'))$ is the diagonal matrix collecting the $Q$ kernels $k_1, \ldots, k_Q$, one for each latent process, and $H_{:,p}$ is the $p$-th column (corresponding to the $p$-th output) of the mixing matrix $H \in \mathbb{R}^{Q \times m}$, whose $Q$ rows are basis vectors spanning the $m$-dimensional output space of $f$.

source
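
A dimension-checking sketch (values are arbitrary): H must be Q × m, here with Q = 2 kernels and m = 3 outputs.

julia> H = rand(2, 3);  # Q = 2 latent processes, m = 3 outputs

julia> k = LinearMixingModelKernel([SqExponentialKernel(), Matern32Kernel()], H);

julia> k((0.0, 1), (1.0, 2)) isa Real
true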
@inline (dist::Delta)(a::AbstractArray, b::AbstractArray) = Distances._evaluate(dist, a, b)
@inline (dist::Delta)(a::Number, b::Number) = a == b
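
For orientation (a hedged sketch; Delta is an internal, unexported helper, so this is an implementation detail rather than stable API): such a pre-metric is what backs the white-noise kernel as a SimpleKernel.

julia> using KernelFunctions

julia> KernelFunctions.metric(WhiteKernel())
KernelFunctions.Delta()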

Custom Kernels

Creating your own kernel

KernelFunctions.jl already contains the most popular kernels, but you might want to make your own!

Here are a few ways, depending on how complicated your kernel is:

SimpleKernel for kernel functions depending on a metric

If your kernel function is of the form k(x, y) = f(d(x, y)), where d(x, y) is a PreMetric, you can construct your custom kernel by defining kappa and metric for your kernel. Here is, for example, how one can define the SqExponentialKernel again:

struct MyKernel <: KernelFunctions.SimpleKernel end

KernelFunctions.kappa(::MyKernel, d2::Real) = exp(-d2)
KernelFunctions.metric(::MyKernel) = SqEuclidean()

Kernel for more complex kernels

If your kernel does not satisfy such a representation, all you need to do is define (k::MyKernel)(x, y) and inherit from Kernel. For example, we recreate here the NeuralNetworkKernel:

struct MyKernel <: KernelFunctions.Kernel end

(::MyKernel)(x, y) = asin(dot(x, y) / sqrt((1 + sum(abs2, x)) * (1 + sum(abs2, y))))

Note that the fallback implementation of the base Kernel evaluation does not use Distances.jl and can therefore be a bit slower.
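
As a quick check (an illustrative sketch, not part of the original page; it assumes the second MyKernel definition above and needs dot from LinearAlgebra), the re-created kernel should agree with the built-in NeuralNetworkKernel:

julia> using KernelFunctions, LinearAlgebra

julia> x, y = [1.0, 0.5], [0.3, -0.2];

julia> MyKernel()(x, y) ≈ NeuralNetworkKernel()(x, y)
true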
Additional Options

Finally, there are additional functions you can define to bring in more features:

KernelFunctions.iskroncompatible(k::MyKernel): if your kernel factorizes in dimensions, you can declare your kernel as iskroncompatible(k) = true to use Kronecker methods.
KernelFunctions.dim(x::MyDataType): by default, the dimension of the inputs will only be checked for vectors of type AbstractVector{<:Real}. If you want to check the dimensionality of your inputs, dispatch the dim function on your datatype. Note that 0 is the default.
dim is called within KernelFunctions.validate_inputs(x::MyDataType, y::MyDataType), which can instead be directly overloaded if you want to run special checks for your input types.
kernelmatrix(k::MyKernel, ...): you can redefine the diverse kernelmatrix functions to eventually optimize the computations.
Base.print(io::IO, k::MyKernel): if you want to specialize the printing of your kernel.

KernelFunctions uses Functors.jl for specifying trainable kernel parameters in a way that is compatible with the Flux ML framework. You can use Functors.@functor if all fields of your kernel struct are trainable. Note that optimization algorithms in Flux are not compatible with scalar parameters (yet), and hence vector-valued parameters should be preferred.

import Functors

struct MyKernel{T} <: KernelFunctions.Kernel
    a::Vector{T}
end

Functors.@functor MyKernel

If only a subset of the fields are trainable, you have to specify explicitly how to (re)construct the kernel with modified parameter values by implementing Functors.functor(::Type{<:MyKernel}, x) for your kernel struct:

import Functors

struct MyKernel{T} <: KernelFunctions.Kernel
    n::Int
    a::Vector{T}
end

function Functors.functor(::Type{<:MyKernel}, x::MyKernel)
    function reconstruct_mykernel(xs)
        # keep field `n` of the original kernel and set `a` to (possibly different) `xs.a`
        return MyKernel(x.n, xs.a)
    end
    return (a = x.a,), reconstruct_mykernel
end

Train Kernel Parameters

Here we show a few ways to train (optimize) the kernel (hyper)parameters using the example of kernel-based regression with KernelFunctions.jl. All options are functionally identical, but differ a little in readability, dependencies, and computational cost.

We load KernelFunctions and some other packages. Note that while we use Zygote for automatic differentiation and Flux.Optimise for optimization, you should be able to replace them with your favourite autodiff framework or optimizer.

using KernelFunctions
using LinearAlgebra
using Distributions
using Plots
using BenchmarkTools
using Flux
using Flux: Optimise
using Zygote
using Random: seed!
seed!(42);

Data Generation

We generate a toy dataset in 1 dimension:

xmin, xmax = -3, 3  # Bounds of the data
N = 50  # Number of samples
x_train = rand(Uniform(xmin, xmax), N)  # sample the inputs
σ = 0.1
y_train = sinc.(x_train) + randn(N) * σ  # evaluate a function and add some noise
x_test = range(xmin - 0.1, xmax + 0.1; length=300)

Plot the data:

scatter(x_train, y_train; label="data")
plot!(x_test, sinc; label="true function")

(Plot: noisy training data and the true sinc function.)

Manual Approach

The first option is to rebuild the parametrized kernel from a vector of parameters in each evaluation of the cost function. This is similar to the approach taken in Stheno.jl.

To train the kernel parameters via Zygote.jl, we need to create a function that builds the kernel from an array. A simple way to ensure that the kernel parameters are positive is to optimize over the logarithm of the parameters.

function kernel_creator(θ)
    return (exp(θ[1]) * SqExponentialKernel() + exp(θ[2]) * Matern32Kernel()) ∘
           ScaleTransform(exp(θ[3]))
end

From theory we know the prediction for a test set x given the kernel parameters and normalization constant:

function f(x, x_train, y_train, θ)
    k = kernel_creator(θ[1:3])
    return kernelmatrix(k, x, x_train) *
           ((kernelmatrix(k, x_train) + exp(θ[4]) * I) \ y_train)
end

Let's look at our prediction. With starting parameters p0 (picked so we get the right local minimum for demonstration) we get:

p0 = [1.1, 0.1, 0.01, 0.001]
θ = log.(p0)
ŷ = f(x_test, x_train, y_train, θ)
scatter(x_train, y_train; label="data")
plot!(x_test, sinc; label="true function")
plot!(x_test, ŷ; label="prediction")

(Plot: data, true function, and the initial prediction.)

We define the following loss:

function loss(θ)
    ŷ = f(x_train, x_train, y_train, θ)
    return norm(y_train - ŷ) + exp(θ[4]) * norm(ŷ)
end

The loss with our starting point:

loss(θ)

2.613933959118708

Computational cost for one step:

@benchmark let
    θ = log.(p0)
    opt = Optimise.ADAGrad(0.5)
    grads = only(Zygote.gradient(loss, θ))
    Optimise.update!(opt, θ, grads)
end

BenchmarkTools.Trial: 6392 samples with 1 evaluation.
 Range (min … max):  635.570 μs … 4.961 ms   ┊ GC (min … max): 0.00% … 24.14%
 Time  (median):     715.959 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   779.177 μs ± 237.663 μs ┊ GC (mean ± σ):  6.31% ± 11.86%
 Memory estimate: 2.98 MiB, allocs estimate: 1563.

Training the model

Setting an initial value and initializing the optimizer:

θ = log.(p0)  # Initial vector
opt = Optimise.ADAGrad(0.5)

Optimize:

anim = Animation()
for i in 1:15
    grads = only(Zygote.gradient(loss, θ))
    Optimise.update!(opt, θ, grads)
    scatter(
        x_train, y_train; lab="data", title="i = $(i), Loss = $(round(loss(θ), digits = 4))"
    )
    plot!(x_test, sinc; lab="true function")
    plot!(x_test, f(x_test, x_train, y_train, θ); lab="Prediction", lw=3.0)
    frame(anim)
end
gif(anim, "train-kernel-param.gif"; show_msg=false, fps=15);

(Animation: the prediction improving over 15 optimization steps.)

Final loss:

loss(θ)

0.5241118228076058

Using ParameterHandling.jl

Alternatively, we can use the ParameterHandling.jl package to handle the requirement that all kernel parameters should be positive. The package also allows arbitrarily nesting named tuples that make the parameters more human readable, without having to remember their position in a flat vector.

using ParameterHandling

raw_initial_θ = (
    k1=positive(1.1), k2=positive(0.1), k3=positive(0.01), noise_var=positive(0.001)
)

flat_θ, unflatten = ParameterHandling.value_flatten(raw_initial_θ)

4-element Vector{Float64}:
  0.09531016625781467
 -2.3025852420056685
 -4.6051716761053205
 -6.907770180254354

We define a few relevant functions and note that, compared to the previous kernel_creator function, we do not need explicit exps.

function kernel_creator(θ)
    return (θ.k1 * SqExponentialKernel() + θ.k2 * Matern32Kernel()) ∘ ScaleTransform(θ.k3)
end

function f(x, x_train, y_train, θ)
    k = kernel_creator(θ)
    return kernelmatrix(k, x, x_train) *
           ((kernelmatrix(k, x_train) + θ.noise_var * I) \ y_train)
end

function loss(θ)
    ŷ = f(x_train, x_train, y_train, θ)
    return norm(y_train - ŷ) + θ.noise_var * norm(ŷ)
end

initial_θ = ParameterHandling.value(raw_initial_θ)

The loss at the initial parameter values:

(loss ∘ unflatten)(flat_θ)

2.613933959118708

Cost per step:

@benchmark let
    θ = flat_θ[:]
    opt = Optimise.ADAGrad(0.5)
    grads = (Zygote.gradient(loss ∘ unflatten, θ))[1]
    Optimise.update!(opt, θ, grads)
end

BenchmarkTools.Trial: 5448 samples with 1 evaluation.
 Range (min … max):  748.960 μs … 2.608 ms   ┊ GC (min … max): 0.00% … 47.88%
 Time  (median):     855.177 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   914.518 μs ± 234.434 μs ┊ GC (mean ± σ):  5.13% ± 10.86%
 Memory estimate: 3.08 MiB, allocs estimate: 2228.

Training the model

Optimize:

opt = Optimise.ADAGrad(0.5)
for i in 1:15
    grads = (Zygote.gradient(loss ∘ unflatten, flat_θ))[1]
    Optimise.update!(opt, flat_θ, grads)
end

Final loss:

(loss ∘ unflatten)(flat_θ)

0.524117624126251

Flux.destructure

If we don't want to write an explicit function to construct the kernel, we can alternatively use the Flux.destructure function. Again, we need to ensure that the parameters are positive. Note that the exp function is now part of the loss function, instead of part of the kernel construction.

We could also use ParameterHandling.jl here. To do so, one would remove the exps from the loss function below and call loss ∘ unflatten as above.

θ = [1.1, 0.1, 0.01, 0.001]

kernel = (θ[1] * SqExponentialKernel() + θ[2] * Matern32Kernel()) ∘ ScaleTransform(θ[3])

params, kernelc = Flux.destructure(kernel);

This returns the trainable params of the kernel and a function to reconstruct the kernel.

kernelc(params)

Sum of 2 kernels:
	Squared Exponential Kernel (metric = Distances.Euclidean(0.0))
			- σ² = 1.1
	Matern 3/2 Kernel (metric = Distances.Euclidean(0.0))
			- σ² = 0.1
	- Scale Transform (s = 0.01)

From theory we know the prediction for a test set x given the kernel parameters and normalization constant:

function f(x, x_train, y_train, θ)
    k = kernelc(θ[1:3])
    return kernelmatrix(k, x, x_train) * ((kernelmatrix(k, x_train) + θ[4] * I) \ y_train)
end

function loss(θ)
    ŷ = f(x_train, x_train, y_train, exp.(θ))
    return norm(y_train - ŷ) + exp(θ[4]) * norm(ŷ)
end

Cost for one step:

@benchmark let θt = θ[:], optt = Optimise.ADAGrad(0.5)
    grads = only(Zygote.gradient(loss, θt))
    Optimise.update!(optt, θt, grads)
end

BenchmarkTools.Trial: 6664 samples with 1 evaluation.
 Range (min … max):  642.943 μs … 3.631 ms   ┊ GC (min … max): 0.00% … 24.37%
 Time  (median):     704.498 μs              ┊ GC (median):    0.00%
 Time  (mean ± σ):   747.310 μs ± 213.925 μs ┊ GC (mean ± σ):  5.10% ± 10.59%
 Memory estimate: 2.98 MiB, allocs estimate: 1558.

Training the model

The loss at our initial parameter values:

θ = log.([1.1, 0.1, 0.01, 0.001])  # Initial vector
loss(θ)

2.613933959118708

Initialize the optimizer:

opt = Optimise.ADAGrad(0.5)

Optimize:

for i in 1:15
    grads = only(Zygote.gradient(loss, θ))
    Optimise.update!(opt, θ, grads)
end

Final loss:

loss(θ)

0.5241118228076058
Kernel Ridge Regression

Building on linear regression, we can fit non-linear data sets by introducing a feature space. In a higher-dimensional feature space, we can overfit the data; ridge regression introduces regularization to avoid this. In this notebook we show how we can use KernelFunctions.jl for kernel ridge regression.

# Loading and setup of required packages
using KernelFunctions
using LinearAlgebra
using Distributions

# Plotting
using Plots;
default(; lw=2.0, legendfontsize=11.0, ylims=(-150, 500));

using Random: seed!
seed!(42);

Toy data

Here we use a one-dimensional toy problem. We generate data using the fourth-order polynomial f(x) = (x+4)(x+1)(x-1)(x-3):

f_truth(x) = (x + 4) * (x + 1) * (x - 1) * (x - 3)

x_train = -5:0.5:5
x_test = -7:0.1:7

noise = rand(Uniform(-20, 20), length(x_train))
y_train = f_truth.(x_train) + noise
y_test = f_truth.(x_test)

plot(x_test, y_test; label=raw"$f(x)$")
scatter!(x_train, y_train; seriescolor=1, label="observations")

(Plot: the true function and the noisy observations.)

Linear regression

For training inputs $\mathrm{X} = (\mathbf{x}_n)_{n=1}^N$ and observations $\mathbf{y} = (y_n)_{n=1}^N$, the linear regression weights $\mathbf{w}$ using the least-squares estimator are given by

\[\mathbf{w} = (\mathrm{X}^\top \mathrm{X})^{-1} \mathrm{X}^\top \mathbf{y}.\]

We predict at test inputs $\mathbf{x}_*$ using

\[\hat{y}_* = \mathbf{x}_*^\top \mathbf{w}.\]

This is implemented by linear_regression:

function linear_regression(X, y, Xstar)
    weights = (X' * X) \ (X' * y)
    return Xstar * weights
end;

A linear regression fit to the above data set:

y_pred = linear_regression(x_train, y_train, x_test)
scatter(x_train, y_train; label="observations")
plot!(x_test, y_pred; label="linear fit")

(Plot: observations and the linear fit.)

Featurization

We can improve the fit by including additional features, i.e. generalizing to $\tilde{\mathrm{X}} = (\phi(x_n))_{n=1}^N$, where $\phi(x)$ constructs a feature vector for each input $x$. Here we include powers of the input, $\phi(x) = (1, x, x^2, \dots, x^d)$:

function featurize_poly(x; degree=1)
    return repeat(x, 1, degree + 1) .^ (0:degree)'
end

function featurized_fit_and_plot(degree)
    X = featurize_poly(x_train; degree=degree)
    Xstar = featurize_poly(x_test; degree=degree)
    y_pred = linear_regression(X, y_train, Xstar)
    scatter(x_train, y_train; legend=false, title="fit of order $degree")
    return plot!(x_test, y_pred)
end

plot((featurized_fit_and_plot(degree) for degree in 1:4)...)

(Plot: polynomial fits of order 1 to 4.)

Note that the fit becomes perfect when we include exactly as many orders in the features as we have in the underlying polynomial (4).

However, when increasing the number of features, we can quickly overfit to noise in the data set:

featurized_fit_and_plot(20)

(Plot: an overfitted polynomial fit of order 20.)

Ridge regression

To counteract this unwanted behaviour, we can introduce regularization. This leads to ridge regression with $L_2$ regularization of the weights (Tikhonov regularization). Instead of the weights in linear regression,

\[\mathbf{w} = (\mathrm{X}^\top \mathrm{X})^{-1} \mathrm{X}^\top \mathbf{y},\]

we introduce the ridge parameter $\lambda$:

\[\mathbf{w} = (\mathrm{X}^\top \mathrm{X} + \lambda \mathbb{1})^{-1} \mathrm{X}^\top \mathbf{y}.\]

As before, we predict at test inputs $\mathbf{x}_*$ using

\[\hat{y}_* = \mathbf{x}_*^\top \mathbf{w}.\]

This is implemented by ridge_regression:

function ridge_regression(X, y, Xstar, lambda)
    weights = (X' * X + lambda * I) \ (X' * y)
    return Xstar * weights
end

function regularized_fit_and_plot(degree, lambda)
    X = featurize_poly(x_train; degree=degree)
    Xstar = featurize_poly(x_test; degree=degree)
    y_pred = ridge_regression(X, y_train, Xstar, lambda)
    scatter(x_train, y_train; legend=false, title="\$\\lambda=$lambda\$")
    return plot!(x_test, y_pred)
end

plot((regularized_fit_and_plot(20, lambda) for lambda in (1e-3, 1e-2, 1e-1, 1))...)

(Plot: order-20 fits for different regularization strengths λ.)

Kernel ridge regression

Instead of constructing the feature matrix explicitly, we can use kernels to replace inner products of feature vectors with a kernel evaluation: $\langle \phi(x), \phi(x') \rangle = k(x, x')$ or $\tilde{\mathrm{X}} \tilde{\mathrm{X}}^\top = \mathrm{K}$, where $\mathrm{K}_{ij} = k(x_i, x_j)$.

To apply this "kernel trick" to ridge regression, we can rewrite the ridge estimate for the weights

\[\mathbf{w} = (\mathrm{X}^\top \mathrm{X} + \lambda \mathbb{1})^{-1} \mathrm{X}^\top \mathbf{y}\]

using the matrix inversion lemma as

\[\mathbf{w} = \mathrm{X}^\top (\mathrm{X} \mathrm{X}^\top + \lambda \mathbb{1})^{-1} \mathbf{y},\]

where we can now replace the inner product with the kernel matrix,

\[\mathbf{w} = \mathrm{X}^\top (\mathrm{K} + \lambda \mathbb{1})^{-1} \mathbf{y}.\]

And the prediction yields another inner product,

\[\hat{y}_* = \mathbf{x}_*^\top \mathbf{w} = \langle \mathbf{x}_*, \mathbf{w} \rangle = \mathbf{k}_* (\mathrm{K} + \lambda \mathbb{1})^{-1} \mathbf{y},\]

where $(\mathbf{k}_*)_n = k(x_*, x_n)$.

This is implemented by kernel_ridge_regression:

function kernel_ridge_regression(k, X, y, Xstar, lambda)
    K = kernelmatrix(k, X)
    kstar = kernelmatrix(k, Xstar, X)
    return kstar * ((K + lambda * I) \ y)
end;

Now, instead of explicitly constructing features, we can simply pass in a PolynomialKernel object:

function kernelized_fit_and_plot(kernel, lambda=1e-4)
    y_pred = kernel_ridge_regression(kernel, x_train, y_train, x_test, lambda)
    if kernel isa PolynomialKernel
        title = string("order ", kernel.degree)
    else
        title = string(nameof(typeof(kernel)))
    end
    scatter(x_train, y_train; label=nothing)
    return plot!(x_test, y_pred; label=nothing, title=title)
end

plot((kernelized_fit_and_plot(PolynomialKernel(; degree=degree, c=1)) for degree in 1:4)...)

(Plot: kernel ridge regression with polynomial kernels of order 1 to 4.)

However, we can now also use kernels that would have an infinite-dimensional feature expansion, such as the squared exponential kernel:

kernelized_fit_and_plot(SqExponentialKernel())

(Plot: kernel ridge regression with the squared exponential kernel.)
Kernel Functions

Base Kernels

These are the basic kernels without any transformation of the data. They are the building blocks of KernelFunctions.

Constant Kernels

KernelFunctions.ZeroKernelType
ZeroKernel()

Zero kernel.

Definition

For inputs $x, x'$, the zero kernel is defined as

\[k(x, x') = 0.\]

The output type depends on $x$ and $x'$.

See also: ConstantKernel

KernelFunctions.ConstantKernelType
ConstantKernel(; c::Real=1.0)

Kernel of constant value c.

Definition

For inputs $x, x'$, the kernel of constant value $c \geq 0$ is defined as

\[k(x, x') = c.\]

See also: ZeroKernel

KernelFunctions.WhiteKernelType
WhiteKernel()

White noise kernel.

Definition

For inputs $x, x'$, the white noise kernel is defined as

\[k(x, x') = \delta(x, x').\]

KernelFunctions.EyeKernelType
EyeKernel()

Alias of WhiteKernel.

Cosine Kernel

KernelFunctions.CosineKernelType
CosineKernel(; metric=Euclidean())

Cosine kernel with respect to the metric.

Definition

For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the cosine kernel is defined as

\[k(x, x') = \cos(\pi d(x, x')).\]

By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
metric=Euclidean())\n\nExponential kernel with respect to the metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the exponential kernel is defined as\n\nk(x x) = expbig(- d(x x)big)\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nSee also: GammaExponentialKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.GibbsKernel","page":"Kernel Functions","title":"KernelFunctions.GibbsKernel","text":"GibbsKernel(; lengthscale)\n\nGibbs Kernel with lengthscale function lengthscale.\n\nThe Gibbs kernel is a non-stationary generalisation of the squared exponential kernel. The lengthscale parameter l becomes a function of position l(x).\n\nDefinition\n\nFor inputs x x, the Gibbs kernel with lengthscale function l(cdot) is defined as\n\nk(x x l) = sqrtleft(frac2 l(x) l(x)l(x)^2 + l(x)^2right)\nquad expleft(-frac(x - x)^2l(x)^2 + l(x)^2right)\n\nFor a constant function l equiv c, one recovers the SqExponentialKernel with lengthscale c.\n\nReferences\n\nMark N. Gibbs. \"Bayesian Gaussian Processes for Regression and Classification.\" PhD thesis, 1997\n\nChristopher J. Paciorek and Mark J. Schervish. \"Nonstationary Covariance Functions for Gaussian Process Regression\". NeurIPS, 2003\n\nSami Remes, Markus Heinonen, Samuel Kaski. \"Non-Stationary Spectral Kernels\". arXiv:1705.08736, 2017\n\nSami Remes, Markus Heinonen, Samuel Kaski. \"Neural Non-Stationary Spectral Kernel\". arXiv:1811.10978, 2018\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.LaplacianKernel","page":"Kernel Functions","title":"KernelFunctions.LaplacianKernel","text":"LaplacianKernel()\n\nAlias of ExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.SqExponentialKernel","page":"Kernel Functions","title":"KernelFunctions.SqExponentialKernel","text":"SqExponentialKernel(; metric=Euclidean())\n\nSquared exponential kernel with respect to the metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the squared exponential kernel is defined as\n\nk(x x) = expbigg(- fracd(x x)^22bigg)\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nSee also: GammaExponentialKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.SEKernel","page":"Kernel Functions","title":"KernelFunctions.SEKernel","text":"SEKernel()\n\nAlias of SqExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.GaussianKernel","page":"Kernel Functions","title":"KernelFunctions.GaussianKernel","text":"GaussianKernel()\n\nAlias of SqExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.RBFKernel","page":"Kernel Functions","title":"KernelFunctions.RBFKernel","text":"RBFKernel()\n\nAlias of SqExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.GammaExponentialKernel","page":"Kernel Functions","title":"KernelFunctions.GammaExponentialKernel","text":"GammaExponentialKernel(; γ::Real=1.0, metric=Euclidean())\n\nγ-exponential kernel with respect to the metric and with parameter γ.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the γ-exponential kernel[RW] with parameter gamma in (0 2 is defined as\n\nk(x x gamma) = expbig(- d(x x)^gammabig)\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\n
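For instance, with γ=0.5 on scalar inputs one would expect:\n\njulia> k = GammaExponentialKernel(; γ=0.5);\n\njulia> k(0.0, 1.0) ≈ exp(-1)\ntrue\n\n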
See also: ExponentialKernel, SqExponentialKernel\n\n[RW]: C. E. Rasmussen & C. K. I. Williams (2006). Gaussian Processes for Machine Learning.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Exponentiated-Kernel","page":"Kernel Functions","title":"Exponentiated Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"ExponentiatedKernel","category":"page"},{"location":"kernels/#KernelFunctions.ExponentiatedKernel","page":"Kernel Functions","title":"KernelFunctions.ExponentiatedKernel","text":"ExponentiatedKernel()\n\nExponentiated kernel.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the exponentiated kernel is defined as\n\nk(x x) = exp(x^top x)\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Fractional-Brownian-Motion-Kernel","page":"Kernel Functions","title":"Fractional Brownian Motion Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"FBMKernel","category":"page"},{"location":"kernels/#KernelFunctions.FBMKernel","page":"Kernel Functions","title":"KernelFunctions.FBMKernel","text":"FBMKernel(; h::Real=0.5)\n\nFractional Brownian motion kernel with Hurst index h.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the fractional Brownian motion kernel with Hurst index h in 01 is defined as\n\nk(x x h) = fracx_2^2h + x_2^2h - x - x^2h2\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Gabor-Kernel","page":"Kernel Functions","title":"Gabor Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"gaborkernel","category":"page"},{"location":"kernels/#KernelFunctions.gaborkernel","page":"Kernel Functions","title":"KernelFunctions.gaborkernel","text":"gaborkernel(;\n sqexponential_transform=IdentityTransform(), cosine_transform=IdentityTransform()\n)\n\nConstruct a Gabor kernel with transformations sqexponential_transform and cosine_transform of the inputs of the underlying squared exponential and cosine kernel, respectively.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the Gabor kernel with transformations f and g of the inputs to the squared exponential and cosine kernel, respectively, is defined as\n\nk(x x f g) = expbigg(- frac f(x) - f(x)_2^22bigg)\n cosbig(pi g(x) - g(x)_2 big)\n\n\n\n\n\n","category":"function"},{"location":"kernels/#Matérn-Kernels","page":"Kernel Functions","title":"Matérn Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"MaternKernel\nMatern12Kernel\nMatern32Kernel\nMatern52Kernel","category":"page"},{"location":"kernels/#KernelFunctions.MaternKernel","page":"Kernel Functions","title":"KernelFunctions.MaternKernel","text":"MaternKernel(; ν::Real=1.5, metric=Euclidean())\n\nMatérn kernel of order ν with respect to the metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the Matérn kernel of order nu 0 is defined as\n\nk(xxnu) = frac2^1-nuGamma(nu)big(sqrt2nu d(x x)big) K_nubig(sqrt2nu d(x x)big)\n\nwhere Gamma is the Gamma function and K_nu is the modified Bessel function of the second kind of order nu. 
By default, d is the Euclidean metric d(x x) = x - x_2.\n\nA Gaussian process with a Matérn kernel is lceil nu rceil - 1-times differentiable in the mean-square sense.\n\nnote: Note\nDifferentiation with respect to the order ν is not currently supported.\n\nSee also: Matern12Kernel, Matern32Kernel, Matern52Kernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.Matern12Kernel","page":"Kernel Functions","title":"KernelFunctions.Matern12Kernel","text":"Matern12Kernel()\n\nAlias of ExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.Matern32Kernel","page":"Kernel Functions","title":"KernelFunctions.Matern32Kernel","text":"Matern32Kernel(; metric=Euclidean())\n\nMatérn kernel of order 32 with respect to the metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the Matérn kernel of order 32 is given by\n\nk(x x) = big(1 + sqrt3 d(x x) big) expbig(- sqrt3 d(x x) big)\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nSee also: MaternKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.Matern52Kernel","page":"Kernel Functions","title":"KernelFunctions.Matern52Kernel","text":"Matern52Kernel(; metric=Euclidean())\n\nMatérn kernel of order 52 with respect to the metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the Matérn kernel of order 52 is given by\n\nk(x x) = bigg(1 + sqrt5 d(x x) + frac53 d(x x)^2bigg)\n expbig(- sqrt5 d(x x) big)\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nSee also: MaternKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Neural-Network-Kernel","page":"Kernel Functions","title":"Neural Network Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"NeuralNetworkKernel","category":"page"},{"location":"kernels/#KernelFunctions.NeuralNetworkKernel","page":"Kernel Functions","title":"KernelFunctions.NeuralNetworkKernel","text":"NeuralNetworkKernel()\n\nKernel of a Gaussian process obtained as the limit of a Bayesian neural network with a single hidden layer as the number of units goes to infinity.\n\nDefinition\n\nConsider the single-layer Bayesian neural network f colon mathbbR^d to mathbbR with h hidden units defined by\n\nf(x b v u) = b + sqrtfracpi2 sum_i=1^h v_i mathrmerfbig(u_i^top xbig)\n\nwhere mathrmerf is the error function, and with prior distributions\n\nbeginaligned\nb sim mathcalN(0 sigma_b^2)\nv sim mathcalN(0 sigma_v^2 mathrmI_hh)\nu_i sim mathcalN(0 mathrmI_d2) qquad (i = 1ldotsh)\nendaligned\n\nAs h to infty, the neural network converges to the Gaussian process\n\ng(cdot) sim mathcalGPbig(0 sigma_b^2 + sigma_v^2 k(cdot cdot)big)\n\nwhere the neural network kernel k is given by\n\nk(x x) = arcsinleft(fracx^top xsqrtbig(1 + x^2_2big) big(1 + x_2^2big)right)\n\nfor inputs x x in mathbbR^d.[CW]\n\n[CW]: C. K. I. Williams (1998). 
Computation with infinite neural networks.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Periodic-Kernel","page":"Kernel Functions","title":"Periodic Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"PeriodicKernel\nPeriodicKernel(::DataType, ::Int)","category":"page"},{"location":"kernels/#KernelFunctions.PeriodicKernel","page":"Kernel Functions","title":"KernelFunctions.PeriodicKernel","text":"PeriodicKernel(; r::AbstractVector=ones(Float64, 1))\n\nPeriodic kernel with parameter r.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the periodic kernel with parameter r_i 0 is defined[DM] as\n\nk(x x r) = expbigg(- frac12 sum_i=1^d bigg(fracsinbig(pi(x_i - x_i)big)r_ibigg)^2bigg)\n\n[DM]: D. J. C. MacKay (1998). Introduction to Gaussian Processes.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.PeriodicKernel-Tuple{DataType, Int64}","page":"Kernel Functions","title":"KernelFunctions.PeriodicKernel","text":"PeriodicKernel([T=Float64, dims::Int=1])\n\nCreate a PeriodicKernel with parameter r=ones(T, dims).\n\n\n\n\n\n","category":"method"},{"location":"kernels/#Piecewise-Polynomial-Kernel","page":"Kernel Functions","title":"Piecewise Polynomial Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"PiecewisePolynomialKernel","category":"page"},{"location":"kernels/#KernelFunctions.PiecewisePolynomialKernel","page":"Kernel Functions","title":"KernelFunctions.PiecewisePolynomialKernel","text":"PiecewisePolynomialKernel(; dim::Int, degree::Int=0, metric=Euclidean())\nPiecewisePolynomialKernel{degree}(; dim::Int, metric=Euclidean())\n\nPiecewise polynomial kernel of degree degree for inputs of dimension dim with support in the unit ball with respect to the metric.\n\nDefinition\n\nFor inputs x x of dimension m and metric d(cdot cdot), the piecewise polynomial kernel of degree v in 0123 is defined as\n\nk(x x v) = max(1 - d(x x) 0)^alpha(vm) f_vm(d(x x))\n\nwhere alpha(v m) = lfloor fracm2rfloor + 2v + 1 and f_vm are polynomials of degree v given by\n\nbeginaligned\nf_0m(r) = 1 \nf_1m(r) = 1 + (j + 1) r \nf_2m(r) = 1 + (j + 2) r + big((j^2 + 4j + 3) 3big) r^2 \nf_3m(r) = 1 + (j + 3) r + big((6 j^2 + 36j + 45) 15big) r^2 + big((j^3 + 9 j^2 + 23j + 15) 15big) r^3\nendaligned\n\nwhere j = lfloor fracm2rfloor + v + 1. 
By default, d is the Euclidean metric d(x x) = x - x_2.\n\nThe kernel is 2v times continuously differentiable and the corresponding Gaussian process is hence v times mean-square differentiable.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Polynomial-Kernels","page":"Kernel Functions","title":"Polynomial Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"LinearKernel\nPolynomialKernel","category":"page"},{"location":"kernels/#KernelFunctions.LinearKernel","page":"Kernel Functions","title":"KernelFunctions.LinearKernel","text":"LinearKernel(; c::Real=0.0)\n\nLinear kernel with constant offset c.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the linear kernel with constant offset c geq 0 is defined as\n\nk(x x c) = x^top x + c\n\nSee also: PolynomialKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.PolynomialKernel","page":"Kernel Functions","title":"KernelFunctions.PolynomialKernel","text":"PolynomialKernel(; degree::Int=2, c::Real=0.0)\n\nPolynomial kernel of degree degree with constant offset c.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the polynomial kernel of degree nu in mathbbN with constant offset c geq 0 is defined as\n\nk(x x c nu) = (x^top x + c)^nu\n\nSee also: LinearKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Rational-Kernels","page":"Kernel Functions","title":"Rational Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"RationalKernel\nRationalQuadraticKernel\nGammaRationalKernel","category":"page"},{"location":"kernels/#KernelFunctions.RationalKernel","page":"Kernel Functions","title":"KernelFunctions.RationalKernel","text":"RationalKernel(; α::Real=2.0, metric=Euclidean())\n\nRational kernel with shape parameter α and given metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the rational kernel with shape parameter alpha 0 is defined as\n\nk(x x alpha) = bigg(1 + fracd(x x)alphabigg)^-alpha\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nThe ExponentialKernel is recovered in the limit as alpha to infty.\n\nSee also: GammaRationalKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.RationalQuadraticKernel","page":"Kernel Functions","title":"KernelFunctions.RationalQuadraticKernel","text":"RationalQuadraticKernel(; α::Real=2.0, metric=Euclidean())\n\nRational-quadratic kernel with respect to the metric and with shape parameter α.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the rational-quadratic kernel with shape parameter alpha 0 is defined as\n\nk(x x alpha) = bigg(1 + fracd(x x)^22alphabigg)^-alpha\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nThe SqExponentialKernel is recovered in the limit as alpha to infty.\n\nSee also: GammaRationalKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.GammaRationalKernel","page":"Kernel Functions","title":"KernelFunctions.GammaRationalKernel","text":"GammaRationalKernel(; α::Real=2.0, γ::Real=1.0, metric=Euclidean())\n\nγ-rational kernel with respect to the metric with shape parameters α and γ.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the γ-rational kernel with shape parameters alpha 0 and gamma in (0 2 is defined as\n\nk(x x alpha gamma) = bigg(1 + fracd(x x)^gammaalphabigg)^-alpha\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nThe GammaExponentialKernel is recovered in the limit as alpha to infty.\n\nSee 
also: RationalKernel, RationalQuadraticKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Spectral-Mixture-Kernels","page":"Kernel Functions","title":"Spectral Mixture Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"spectral_mixture_kernel\nspectral_mixture_product_kernel","category":"page"},{"location":"kernels/#KernelFunctions.spectral_mixture_kernel","page":"Kernel Functions","title":"KernelFunctions.spectral_mixture_kernel","text":"spectral_mixture_kernel(\n h::Kernel=SqExponentialKernel(),\n αs::AbstractVector{<:Real},\n γs::AbstractMatrix{<:Real},\n ωs::AbstractMatrix{<:Real},\n)\n\nwhere αs are the weights of dimension (A, ), γs is the matrix of covariances of dimension (D, A), and ωs is the matrix of mean vectors, also of dimension (D, A). Here, D is the input dimension and A is the number of spectral components.\n\nh is the kernel, which defaults to SqExponentialKernel if not specified.\n\nwarning: Warning\nIf you want to make sure that the constructor is type-stable, you should provide StaticArrays arguments: αs as a StaticVector, γs and ωs as StaticMatrix.\n\nGeneralised Spectral Mixture kernel function. This family of functions is dense in the family of stationary real-valued kernels with respect to pointwise convergence.[1]\n\n κ(x y) = αs (h(-(γs * t)^2) * cos(π * ωs * t)) t = x - y\n\nReferences:\n\n[1] Generalized Spectral Kernels, by Yves-Laurent Kom Samo and Stephen J. Roberts\n[2] SM: Gaussian Process Kernels for Pattern Discovery and Extrapolation,\n ICML, 2013, by Andrew Gordon Wilson and Ryan Prescott Adams.\n[3] Covariance kernels for fast automatic pattern discovery and extrapolation\n with Gaussian processes, Andrew Gordon Wilson, PhD Thesis, January 2014.\n http://www.cs.cmu.edu/~andrewgw/andrewgwthesis.pdf\n[4] http://www.cs.cmu.edu/~andrewgw/pattern/.\n\n\n\n\n\n","category":"function"},{"location":"kernels/#KernelFunctions.spectral_mixture_product_kernel","page":"Kernel Functions","title":"KernelFunctions.spectral_mixture_product_kernel","text":"spectral_mixture_product_kernel(\n h::Kernel=SqExponentialKernel(),\n αs::AbstractMatrix{<:Real},\n γs::AbstractMatrix{<:Real},\n ωs::AbstractMatrix{<:Real},\n)\n\nwhere αs are the weights of dimension (D, A), γs is the matrix of covariances of dimension (D, A), and ωs is the matrix of mean vectors, also of dimension (D, A). Here, D is the input dimension and A is the number of spectral components.\n\nSpectral Mixture Product Kernel. With enough components A, the SMP kernel can model any product kernel to arbitrary precision, and is flexible even with a small number of components.[1]\n\nh is the kernel, which defaults to SqExponentialKernel if not specified.\n\n κ(x y) = Πᵢ₌₁ᴷ Σ(αsᵢᵀ * (h(-(γsᵢᵀ * tᵢ)²) * cos(ωsᵢᵀ * tᵢ))) tᵢ = xᵢ - yᵢ\n\nReferences:\n\n[1] GPatt: Fast Multidimensional Pattern Extrapolation with GPs,\n arXiv 1310.5288, 2013, by Andrew Gordon Wilson, Elad Gilboa,\n Arye Nehorai and John P. 
Cunningham\n\n\n\n\n\n","category":"function"},{"location":"kernels/#Wiener-Kernel","page":"Kernel Functions","title":"Wiener Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"WienerKernel","category":"page"},{"location":"kernels/#KernelFunctions.WienerKernel","page":"Kernel Functions","title":"KernelFunctions.WienerKernel","text":"WienerKernel(; i::Int=0)\nWienerKernel{i}()\n\nThe i-times integrated Wiener process kernel function.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the i-times integrated Wiener process kernel with i in -1 0 1 2 3 is defined[SDH] as\n\nk_i(x x) = begincases\n delta(x x) textif i=-1\n minbig(x_2 x_2big) textif i=0\n a_i1^-1 minbig(x_2 x_2big)^2i + 1\n + a_i2^-1 x - x_2 r_ibig(x_2 x_2big) minbig(x_2 x_2big)^i + 1\n textotherwise\nendcases\n\nwhere the coefficients a are given by\n\na = beginbmatrix\n3 2 \n20 12 \n252 720\nendbmatrix\n\nand the functions r_i are defined as\n\nbeginaligned\nr_1(t t) = 1\nr_2(t t) = t + t - fracmin(t t)2\nr_3(t t) = 5 max(t t)^2 + 2 tt + 3 min(t t)^2\nendaligned\n\nThe WhiteKernel is recovered for i = -1.\n\n[SDH]: Schober, Duvenaud & Hennig (2014). Probabilistic ODE Solvers with Runge-Kutta Means.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Composite-Kernels","page":"Kernel Functions","title":"Composite Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"The modular design of KernelFunctions uses base kernels as building blocks for more complex kernels. There are a variety of composite kernels implemented, including those which transform the inputs to a wrapped kernel to implement length scales, scale the variance of a kernel, and sum or multiply collections of kernels together.","category":"page"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"TransformedKernel\n∘(::Kernel, ::Transform)\nScaledKernel\nKernelSum\nKernelProduct\nKernelTensorProduct\nNormalizedKernel","category":"page"},{"location":"kernels/#KernelFunctions.TransformedKernel","page":"Kernel Functions","title":"KernelFunctions.TransformedKernel","text":"TransformedKernel(k::Kernel, t::Transform)\n\nKernel derived from k for which inputs are transformed via a Transform t.\n\nThe preferred way to create kernels with input transformations is to use the composition operator ∘ or its alias compose instead of TransformedKernel directly since this allows optimized implementations for specific kernels and transformations.\n\nSee also: ∘\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Base.:∘-Tuple{Kernel, Transform}","page":"Kernel Functions","title":"Base.:∘","text":"kernel ∘ transform\n∘(kernel, transform)\ncompose(kernel, transform)\n\nCompose a kernel with a transformation transform of its inputs.\n\nThe prefix forms support chains of multiple transformations: ∘(kernel, transform1, transform2) = kernel ∘ transform1 ∘ transform2.\n\nDefinition\n\nFor inputs x x, the transformed kernel widetildek derived from kernel k by input transformation t is defined as\n\nwidetildek(x x k t) = kbig(t(x) t(x)big)\n\nExamples\n\njulia> (SqExponentialKernel() ∘ ScaleTransform(0.5))(0, 2) == exp(-0.5)\ntrue\n\njulia> ∘(ExponentialKernel(), ScaleTransform(2), ScaleTransform(0.5))(1, 2) == exp(-1)\ntrue\n\nSee also: TransformedKernel\n\n\n\n\n\n","category":"method"},{"location":"kernels/#KernelFunctions.ScaledKernel","page":"Kernel 
Functions","title":"KernelFunctions.ScaledKernel","text":"ScaledKernel(k::Kernel, σ²::Real=1.0)\n\nScaled kernel derived from k by multiplication with variance σ².\n\nDefinition\n\nFor inputs x x, the scaled kernel widetildek derived from kernel k by multiplication with variance sigma^2 0 is defined as\n\nwidetildek(x x k sigma^2) = sigma^2 k(x x)\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.KernelSum","page":"Kernel Functions","title":"KernelFunctions.KernelSum","text":"KernelSum <: Kernel\n\nCreate a sum of kernels. One can also use the operator +.\n\nThere are various ways in which you create a KernelSum:\n\nThe simplest way to specify a KernelSum would be to use the overloaded + operator. This is equivalent to creating a KernelSum by specifying the kernels as the arguments to the constructor. \n\njulia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);\n\njulia> (k = k1 + k2) == KernelSum(k1, k2)\ntrue\n\njulia> kernelmatrix(k1 + k2, X) == kernelmatrix(k1, X) .+ kernelmatrix(k2, X)\ntrue\n\njulia> kernelmatrix(k, X) == kernelmatrix(k1 + k2, X)\ntrue\n\nYou could also specify a KernelSum by providing a Tuple or a Vector of the kernels to be summed. We suggest you to use a Tuple when you have fewer components and a Vector when dealing with a large number of components.\n\njulia> KernelSum((k1, k2)) == k1 + k2\ntrue\n\njulia> KernelSum([k1, k2]) == KernelSum((k1, k2)) == k1 + k2\ntrue\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.KernelProduct","page":"Kernel Functions","title":"KernelFunctions.KernelProduct","text":"KernelProduct <: Kernel\n\nCreate a product of kernels. One can also use the overloaded operator *.\n\nThere are various ways in which you create a KernelProduct:\n\nThe simplest way to specify a KernelProduct would be to use the overloaded * operator. This is equivalent to creating a KernelProduct by specifying the kernels as the arguments to the constructor. \n\njulia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);\n\njulia> (k = k1 * k2) == KernelProduct(k1, k2)\ntrue\n\njulia> kernelmatrix(k1 * k2, X) == kernelmatrix(k1, X) .* kernelmatrix(k2, X)\ntrue\n\njulia> kernelmatrix(k, X) == kernelmatrix(k1 * k2, X)\ntrue\n\nYou could also specify a KernelProduct by providing a Tuple or a Vector of the kernels to be multiplied. We suggest you to use a Tuple when you have fewer components and a Vector when dealing with a large number of components.\n\njulia> KernelProduct((k1, k2)) == k1 * k2\ntrue\n\njulia> KernelProduct([k1, k2]) == KernelProduct((k1, k2)) == k1 * k2\ntrue\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.KernelTensorProduct","page":"Kernel Functions","title":"KernelFunctions.KernelTensorProduct","text":"KernelTensorProduct\n\nTensor product of kernels.\n\nDefinition\n\nFor inputs x = (x_1 ldots x_n) and x = (x_1 ldots x_n), the tensor product of kernels k_1 ldots k_n is defined as\n\nk(x x k_1 ldots k_n) = Big(bigotimes_i=1^n k_iBig)(x x) = prod_i=1^n k_i(x_i x_i)\n\nConstruction\n\nThe simplest way to specify a KernelTensorProduct is to use the overloaded tensor operator or its alias ⊗ (can be typed by \\otimes).\n\njulia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5, 2);\n\njulia> kernelmatrix(k1 ⊗ k2, RowVecs(X)) == kernelmatrix(k1, X[:, 1]) .* kernelmatrix(k2, X[:, 2])\ntrue\n\nYou can also specify a KernelTensorProduct by providing kernels as individual arguments or as an iterable data structure such as a Tuple or a Vector. 
Using a tuple or individual arguments guarantees that KernelTensorProduct is concretely typed but might lead to large compilation times if the number of kernels is large.\n\njulia> KernelTensorProduct(k1, k2) == k1 ⊗ k2\ntrue\n\njulia> KernelTensorProduct((k1, k2)) == k1 ⊗ k2\ntrue\n\njulia> KernelTensorProduct([k1, k2]) == k1 ⊗ k2\ntrue\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.NormalizedKernel","page":"Kernel Functions","title":"KernelFunctions.NormalizedKernel","text":"NormalizedKernel(k::Kernel)\n\nA normalized kernel derived from k.\n\nDefinition\n\nFor inputs x x, the normalized kernel widetildek derived from kernel k is defined as\n\nwidetildek(x x k) = frack(x x)sqrtk(x x) k(x x)\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Multi-output-Kernels","page":"Kernel Functions","title":"Multi-output Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"KernelFunctions implements multi-output kernels as scalar kernels on an extended input domain. For more details on this, read the section on inputs for multi-output GPs.","category":"page"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"For a function f(x) rightarrow y denote the inputs as x x, such that we compute the covariance between output components y_p and y_p. The total number of outputs is m.","category":"page"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"MOKernel\nIndependentMOKernel\nLatentFactorMOKernel\nIntrinsicCoregionMOKernel\nLinearMixingModelKernel","category":"page"},{"location":"kernels/#KernelFunctions.MOKernel","page":"Kernel Functions","title":"KernelFunctions.MOKernel","text":"MOKernel\n\nAbstract type for kernels with multiple outputs.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.IndependentMOKernel","page":"Kernel Functions","title":"KernelFunctions.IndependentMOKernel","text":"IndependentMOKernel(k::Kernel)\n\nKernel for multiple independent outputs, each with kernel k.\n\nDefinition\n\nFor inputs x x and output dimensions p p, the kernel widetildek for independent outputs with kernel k each is defined as\n\nwidetildekbig((x p) (x p)big) = begincases\n k(x x) textif p = p \n 0 textotherwise\nendcases\n\nMathematically, it is equivalent to a matrix-valued kernel defined as\n\nwidetildeK(x x) = mathrmdiagbig(k(x x) ldots k(x x)big) in mathbbR^m times m\n\nwhere m is the number of outputs.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.LatentFactorMOKernel","page":"Kernel Functions","title":"KernelFunctions.LatentFactorMOKernel","text":"LatentFactorMOKernel(g::AbstractVector{<:Kernel}, e::MOKernel, A::AbstractMatrix)\n\nKernel associated with the semiparametric latent factor model.\n\nDefinition\n\nFor inputs x x and output dimensions p_x p_x, the kernel is defined as[STJ]\n\nkbig((x p_x) (x p_x)big) = sum^Q_q=1 A_p_xqg_q(x x)A_p_xq\n + ebig((x p_x) (x p_x)big)\n\nwhere g_1 ldots g_Q are Q kernels, one for each latent process, e is a multi-output kernel for m outputs, and A is a matrix of weights for the kernels of size m times Q.\n\n
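A rough construction sketch (the sizes and kernels below are arbitrary illustrative choices):\n\njulia> g = [SqExponentialKernel(), Matern32Kernel()];  # Q = 2 latent kernels\n\njulia> A = rand(3, 2);  # m = 3 outputs, Q = 2\n\njulia> k = LatentFactorMOKernel(g, IndependentMOKernel(WhiteKernel()), A);\n\njulia> k((0.0, 1), (0.0, 1)) isa Real\ntrue\n\n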
[STJ]: M. Seeger, Y. Teh, & M. I. Jordan (2005). Semiparametric Latent Factor Models.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.IntrinsicCoregionMOKernel","page":"Kernel Functions","title":"KernelFunctions.IntrinsicCoregionMOKernel","text":"IntrinsicCoregionMOKernel(; kernel::Kernel, B::AbstractMatrix)\n\nKernel associated with the intrinsic coregionalization model.\n\nDefinition\n\nFor inputs x x and output dimensions p p, the kernel is defined as[ARL]\n\nkbig((x p) (x p) B tildekbig) = B_p p tildekbig(x xbig)\n\nwhere B is a positive semidefinite matrix of size m times m, with m being the number of outputs, and tildek is a scalar-valued kernel shared by the latent processes.\n\n[ARL]: M. Álvarez, L. Rosasco, & N. Lawrence (2012). Kernels for Vector-Valued Functions: a Review.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.LinearMixingModelKernel","page":"Kernel Functions","title":"KernelFunctions.LinearMixingModelKernel","text":"LinearMixingModelKernel(k::Kernel, H::AbstractMatrix)\nLinearMixingModelKernel(Tk::AbstractVector{<:Kernel}, Th::AbstractMatrix)\n\nKernel associated with the linear mixing model, taking a vector of Q kernels and a Q × m mixing matrix H for a function with m outputs. Also accepts a single kernel k for use across all Q basis vectors. \n\nDefinition\n\nFor inputs x x and output dimensions p p, the kernel is defined as[BPTHST]\n\nkbig((x p) (x p)big) = H_pK(x x)H_p\n\nwhere K(x x) = Diag(k_1(x x) k_Q(x x)) with zero off-diagonal entries. H_p is the p-th column (p-th output) of H in mathbbR^Q times m, representing Q basis vectors for the m dimensional output space of f. k_1 ldots k_Q are Q kernels, one for each latent process, and H is a mixing matrix of Q basis vectors spanning the output space.\n\n[BPTHST]: Wessel P. Bruinsma, Eric Perim, Will Tebbutt, J. Scott Hosking, Arno Solin, Richard E. Turner (2020). Scalable Exact Inference in Multi-Output Gaussian Processes.\n\n\n\n\n\n","category":"type"},{"location":"api/#API-Library","page":"API","title":"API Library","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"CurrentModule = KernelFunctions","category":"page"},{"location":"api/#Functions","page":"API","title":"Functions","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"The KernelFunctions API comprises the following four functions.","category":"page"},{"location":"api/","page":"API","title":"API","text":"kernelmatrix\nkernelmatrix!\nkernelmatrix_diag\nkernelmatrix_diag!","category":"page"},{"location":"api/#KernelFunctions.kernelmatrix","page":"API","title":"KernelFunctions.kernelmatrix","text":"kernelmatrix(κ::Kernel, x::AbstractVector)\n\nCompute the kernel κ for each pair of inputs in x. Returns a matrix of size (length(x), length(x)) satisfying kernelmatrix(κ, x)[p, q] == κ(x[p], x[q]).\n\nkernelmatrix(κ::Kernel, x::AbstractVector, y::AbstractVector)\n\nCompute the kernel κ for each pair of inputs in x and y. Returns a matrix of size (length(x), length(y)) satisfying kernelmatrix(κ, x, y)[p, q] == κ(x[p], y[q]).\n\n
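For instance, on scalar inputs one would expect (an illustrative sketch):\n\njulia> k = SqExponentialKernel();\n\njulia> x = [-1.0, 1.0];\n\njulia> kernelmatrix(k, x) ≈ [1.0 exp(-2.0); exp(-2.0) 1.0]\ntrue\n\n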
kernelmatrix(κ::Kernel, X::AbstractMatrix; obsdim)\nkernelmatrix(κ::Kernel, X::AbstractMatrix, Y::AbstractMatrix; obsdim)\n\nIf obsdim=1, equivalent to kernelmatrix(κ, RowVecs(X)) and kernelmatrix(κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix(κ, ColVecs(X)) and kernelmatrix(κ, ColVecs(X), ColVecs(Y)), respectively.\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.kernelmatrix!","page":"API","title":"KernelFunctions.kernelmatrix!","text":"kernelmatrix!(K::AbstractMatrix, κ::Kernel, x::AbstractVector)\nkernelmatrix!(K::AbstractMatrix, κ::Kernel, x::AbstractVector, y::AbstractVector)\n\nIn-place version of kernelmatrix where pre-allocated matrix K will be overwritten with the kernel matrix.\n\nkernelmatrix!(K::AbstractMatrix, κ::Kernel, X::AbstractMatrix; obsdim)\nkernelmatrix!(\n K::AbstractMatrix,\n κ::Kernel,\n X::AbstractMatrix,\n Y::AbstractMatrix;\n obsdim,\n)\n\nIf obsdim=1, equivalent to kernelmatrix!(K, κ, RowVecs(X)) and kernelmatrix!(K, κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix!(K, κ, ColVecs(X)) and kernelmatrix!(K, κ, ColVecs(X), ColVecs(Y)), respectively.\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.kernelmatrix_diag","page":"API","title":"KernelFunctions.kernelmatrix_diag","text":"kernelmatrix_diag(κ::Kernel, x::AbstractVector)\n\nCompute the diagonal of kernelmatrix(κ, x) efficiently.\n\nkernelmatrix_diag(κ::Kernel, x::AbstractVector, y::AbstractVector)\n\nCompute the diagonal of kernelmatrix(κ, x, y) efficiently. Requires that x and y are the same length.\n\nkernelmatrix_diag(κ::Kernel, X::AbstractMatrix; obsdim)\nkernelmatrix_diag(κ::Kernel, X::AbstractMatrix, Y::AbstractMatrix; obsdim)\n\nIf obsdim=1, equivalent to kernelmatrix_diag(κ, RowVecs(X)) and kernelmatrix_diag(κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix_diag(κ, ColVecs(X)) and kernelmatrix_diag(κ, ColVecs(X), ColVecs(Y)), respectively.\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.kernelmatrix_diag!","page":"API","title":"KernelFunctions.kernelmatrix_diag!","text":"kernelmatrix_diag!(K::AbstractVector, κ::Kernel, x::AbstractVector)\nkernelmatrix_diag!(K::AbstractVector, κ::Kernel, x::AbstractVector, y::AbstractVector)\n\nIn-place version of kernelmatrix_diag.\n\nkernelmatrix_diag!(K::AbstractVector, κ::Kernel, X::AbstractMatrix; obsdim)\nkernelmatrix_diag!(\n K::AbstractVector,\n κ::Kernel,\n X::AbstractMatrix,\n Y::AbstractMatrix;\n obsdim\n)\n\nIf obsdim=1, equivalent to kernelmatrix_diag!(K, κ, RowVecs(X)) and kernelmatrix_diag!(K, κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix_diag!(K, κ, ColVecs(X)) and kernelmatrix_diag!(K, κ, ColVecs(X), ColVecs(Y)), respectively.\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#Input-Types","page":"API","title":"Input Types","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"The above API operates on collections of inputs. All collections of inputs in KernelFunctions.jl are represented as AbstractVectors. To understand this choice, please see the design notes on collections of inputs. The length of any such AbstractVector is equal to the number of inputs in the collection. For example, this means that","category":"page"},{"location":"api/","page":"API","title":"API","text":"size(kernelmatrix(k, x)) == (length(x), length(x))","category":"page"},{"location":"api/","page":"API","title":"API","text":"is always true, for some Kernel k, and AbstractVector x.","category":"page"},
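{"location":"api/","page":"API","title":"API","text":"As a concrete sketch, wrapping a 3 × 10 matrix in a ColVecs yields a collection of 10 vector-valued inputs, so the kernel matrix should be 10 × 10:\n\njulia> k = Matern52Kernel();\n\njulia> x = ColVecs(randn(3, 10));\n\njulia> size(kernelmatrix(k, x)) == (10, 10)\ntrue","category":"page"},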
{"location":"api/#Univariate-Inputs","page":"API","title":"Univariate Inputs","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"If each input to your kernel is Real-valued, then any AbstractVector{<:Real} is a valid representation for a collection of inputs. More generally, it's completely fine to represent a collection of inputs of type T as, for example, a Vector{T}. However, this may not be the most efficient way to represent a collection of inputs. See Vector-Valued Inputs for an example.","category":"page"},{"location":"api/#Vector-Valued-Inputs","page":"API","title":"Vector-Valued Inputs","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"We recommend that collections of vector-valued inputs are stored in an AbstractMatrix{<:Real} when possible, and wrapped inside a ColVecs or RowVecs to make their interpretation clear:","category":"page"},{"location":"api/","page":"API","title":"API","text":"ColVecs\nRowVecs","category":"page"},{"location":"api/#KernelFunctions.ColVecs","page":"API","title":"KernelFunctions.ColVecs","text":"ColVecs(X::AbstractMatrix)\n\nA lightweight wrapper for an AbstractMatrix which interprets it as a vector-of-vectors, in which each column of X represents a single vector.\n\nThat is, by writing x = ColVecs(X), you are saying \"x is a vector-of-vectors, each of which has length size(X, 1). The total number of vectors is size(X, 2).\"\n\nPhrased differently, ColVecs(X) says that X should be interpreted as a vector of horizontally-concatenated column-vectors, hence the name ColVecs.\n\njulia> X = randn(2, 5);\n\njulia> x = ColVecs(X);\n\njulia> length(x) == 5\ntrue\n\njulia> X[:, 3] == x[3]\ntrue\n\nColVecs is related to RowVecs via transposition:\n\njulia> X = randn(2, 5);\n\njulia> ColVecs(X) == RowVecs(X')\ntrue\n\n\n\n\n\n","category":"type"},{"location":"api/#KernelFunctions.RowVecs","page":"API","title":"KernelFunctions.RowVecs","text":"RowVecs(X::AbstractMatrix)\n\nA lightweight wrapper for an AbstractMatrix which interprets it as a vector-of-vectors, in which each row of X represents a single vector.\n\nThat is, by writing x = RowVecs(X), you are saying \"x is a vector-of-vectors, each of which has length size(X, 2). The total number of vectors is size(X, 1).\"\n\nPhrased differently, RowVecs(X) says that X should be interpreted as a vector of vertically-concatenated row-vectors, hence the name RowVecs.\n\nInternally, the data continues to be represented as an AbstractMatrix, so using this type does not introduce any kind of performance penalty.\n\njulia> X = randn(5, 2);\n\njulia> x = RowVecs(X);\n\njulia> length(x) == 5\ntrue\n\njulia> X[3, :] == x[3]\ntrue\n\nRowVecs is related to ColVecs via transposition:\n\njulia> X = randn(5, 2);\n\njulia> RowVecs(X) == ColVecs(X')\ntrue\n\n\n\n\n\n","category":"type"},{"location":"api/","page":"API","title":"API","text":"These types are specialised upon to ensure good performance e.g. when computing Euclidean distances between pairs of elements. 
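For example, a ColVecs wrapper and an explicit vector-of-vectors should agree (a quick sketch):\n\njulia> X = randn(2, 5);\n\njulia> kernelmatrix(SEKernel(), ColVecs(X)) ≈ kernelmatrix(SEKernel(), [X[:, i] for i in 1:5])\ntrue\n\n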
The benefit of using this representation, rather than using a Vector{Vector{<:Real}}, is that optimised matrix-matrix multiplication functionality can be utilised when computing pairwise distances between inputs, which are needed for kernelmatrix computation.","category":"page"},{"location":"api/#Inputs-for-Multiple-Outputs","page":"API","title":"Inputs for Multiple Outputs","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"KernelFunctions.jl views multi-output GPs as GPs on an extended input domain. For an explanation of this design choice, see the design notes on multi-output GPs.","category":"page"},{"location":"api/","page":"API","title":"API","text":"An input to a multi-output Kernel should be a Tuple{T, Int}, whose first element specifies a location in the domain of the multi-output GP, and whose second element specifies which output the input corresponds to. The type of collections of inputs for multi-output GPs is therefore AbstractVector{<:Tuple{T, Int}}.","category":"page"},{"location":"api/","page":"API","title":"API","text":"KernelFunctions.jl provides the following helper functions, which reduce the cognitive load of working with multi-output kernels by transforming data from the formats in which it is commonly found into the format required by KernelFunctions. The intention is that users can pass their data to these functions, and use the returned values throughout their code, without having to worry further about correctly formatting their data for KernelFunctions' sake:","category":"page"},{"location":"api/","page":"API","title":"API","text":"prepare_isotopic_multi_output_data(x::AbstractVector, y::ColVecs)\nprepare_isotopic_multi_output_data(x::AbstractVector, y::RowVecs)\nprepare_heterotopic_multi_output_data","category":"page"},{"location":"api/#KernelFunctions.prepare_isotopic_multi_output_data-Tuple{AbstractVector, ColVecs}","page":"API","title":"KernelFunctions.prepare_isotopic_multi_output_data","text":"prepare_isotopic_multi_output_data(x::AbstractVector, y::ColVecs)\n\nUtility functionality to convert a collection of N = length(x) inputs x, and a vector-of-vectors y (efficiently represented by a ColVecs) into a format suitable for use with multi-output kernels.\n\ny[n] is the vector-valued output corresponding to the input x[n]. Consequently, it is necessary that length(x) == length(y).\n\nFor example, if outputs are initially stored in a num_outputs × N matrix:\n\njulia> x = [1.0, 2.0, 3.0];\n\njulia> Y = [1.1 2.1 3.1; 1.2 2.2 3.2]\n2×3 Matrix{Float64}:\n 1.1 2.1 3.1\n 1.2 2.2 3.2\n\njulia> inputs, outputs = prepare_isotopic_multi_output_data(x, ColVecs(Y));\n\njulia> inputs\n6-element KernelFunctions.MOInputIsotopicByFeatures{Float64, Vector{Float64}, Int64}:\n (1.0, 1)\n (1.0, 2)\n (2.0, 1)\n (2.0, 2)\n (3.0, 1)\n (3.0, 2)\n\njulia> outputs\n6-element Vector{Float64}:\n 1.1\n 1.2\n 2.1\n 2.2\n 3.1\n 3.2\n\nSee also prepare_heterotopic_multi_output_data.\n\n\n\n\n\n","category":"method"},{"location":"api/#KernelFunctions.prepare_isotopic_multi_output_data-Tuple{AbstractVector, RowVecs}","page":"API","title":"KernelFunctions.prepare_isotopic_multi_output_data","text":"prepare_isotopic_multi_output_data(x::AbstractVector, y::RowVecs)\n\nUtility functionality to convert a collection of N = length(x) inputs x and output vectors y (efficiently represented by a RowVecs) into a format suitable for use with multi-output kernels.\n\ny[n] is the vector-valued output corresponding to the input x[n]. 
Consequently, it is necessary that length(x) == length(y).\n\nFor example, if outputs are initially stored in an N × num_outputs matrix:\n\njulia> x = [1.0, 2.0, 3.0];\n\njulia> Y = [1.1 1.2; 2.1 2.2; 3.1 3.2]\n3×2 Matrix{Float64}:\n 1.1 1.2\n 2.1 2.2\n 3.1 3.2\n\njulia> inputs, outputs = prepare_isotopic_multi_output_data(x, RowVecs(Y));\n\njulia> inputs\n6-element KernelFunctions.MOInputIsotopicByOutputs{Float64, Vector{Float64}, Int64}:\n (1.0, 1)\n (2.0, 1)\n (3.0, 1)\n (1.0, 2)\n (2.0, 2)\n (3.0, 2)\n\njulia> outputs\n6-element Vector{Float64}:\n 1.1\n 2.1\n 3.1\n 1.2\n 2.2\n 3.2\n\nSee also prepare_heterotopic_multi_output_data.\n\n\n\n\n\n","category":"method"},{"location":"api/#KernelFunctions.prepare_heterotopic_multi_output_data","page":"API","title":"KernelFunctions.prepare_heterotopic_multi_output_data","text":"prepare_heterotopic_multi_output_data(\n x::AbstractVector, y::AbstractVector{<:Real}, output_indices::AbstractVector{Int},\n)\n\nUtility functionality to convert a collection of inputs x, observations y, and output_indices into a format suitable for use with multi-output kernels. Handles the situation in which only one output (or a subset of outputs) is observed at each feature. Ensures that all arguments are compatible with one another, and returns a vector of inputs and a vector of outputs.\n\ny[n] should be the observed value associated with output output_indices[n] at feature x[n].\n\njulia> x = [1.0, 2.0, 3.0];\n\njulia> y = [-1.0, 0.0, 1.0];\n\njulia> output_indices = [3, 2, 1];\n\njulia> inputs, outputs = prepare_heterotopic_multi_output_data(x, y, output_indices);\n\njulia> inputs\n3-element Vector{Tuple{Float64, Int64}}:\n (1.0, 3)\n (2.0, 2)\n (3.0, 1)\n\njulia> outputs\n3-element Vector{Float64}:\n -1.0\n 0.0\n 1.0\n\nSee also prepare_isotopic_multi_output_data.\n\n\n\n\n\n","category":"function"},{"location":"api/","page":"API","title":"API","text":"The input types returned by prepare_isotopic_multi_output_data can also be constructed manually:","category":"page"},{"location":"api/","page":"API","title":"API","text":"MOInput","category":"page"},{"location":"api/#KernelFunctions.MOInput","page":"API","title":"KernelFunctions.MOInput","text":"MOInput(x::AbstractVector, out_dim::Integer)\n\nA data type to accommodate modelling multi-dimensional output data. MOInput(x, out_dim) has length length(x) * out_dim.\n\njulia> x = [1, 2, 3];\n\njulia> MOInput(x, 2)\n6-element KernelFunctions.MOInputIsotopicByOutputs{Int64, Vector{Int64}, Int64}:\n (1, 1)\n (2, 1)\n (3, 1)\n (1, 2)\n (2, 2)\n (3, 2)\n\nAs shown above, an MOInput represents a vector of tuples. The first length(x) elements represent the inputs for the first output, the second length(x) elements represent the inputs for the second output, etc. See Inputs for Multiple Outputs in the docs for more info.\n\nMOInput will be deprecated in version 0.11 in favour of MOInputIsotopicByOutputs, and removed in version 0.12.\n\n\n\n\n\n","category":"type"},{"location":"api/","page":"API","title":"API","text":"As with ColVecs and RowVecs for vector-valued input spaces, this type enables specialised implementations of e.g. 
kernelmatrix for MOInputs in some situations.","category":"page"},{"location":"api/","page":"API","title":"API","text":"To find out more about the background, read this review of kernels for vector-valued functions.","category":"page"},{"location":"api/#Generic-Utilities","page":"API","title":"Generic Utilities","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"KernelFunctions also provides miscellaneous utility functions.","category":"page"},{"location":"api/","page":"API","title":"API","text":"nystrom\nNystromFact","category":"page"},{"location":"api/#KernelFunctions.nystrom","page":"API","title":"KernelFunctions.nystrom","text":"nystrom(k::Kernel, X::AbstractVector, S::AbstractVector{<:Integer})\n\nCompute a factorization of a Nystrom approximation of the square kernel matrix of data vector X with respect to kernel k, using indices S. Returns a NystromFact struct which stores a Nystrom factorization satisfying:\n\nmathbfK approx mathbfC^intercalmathbfWmathbfC\n\n\n\n\n\nnystrom(k::Kernel, X::AbstractVector, r::Real)\n\nCompute a factorization of a Nystrom approximation of the square kernel matrix of data vector X with respect to kernel k using a sample ratio of r. Returns a NystromFact struct which stores a Nystrom factorization satisfying:\n\nmathbfK approx mathbfC^intercalmathbfWmathbfC\n\n\n\n\n\nnystrom(k::Kernel, X::AbstractMatrix, S::AbstractVector{<:Integer}; obsdim)\n\nIf obsdim=1, equivalent to nystrom(k, RowVecs(X), S). If obsdim=2, equivalent to nystrom(k, ColVecs(X), S).\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\nnystrom(k::Kernel, X::AbstractMatrix, r::Real; obsdim)\n\nIf obsdim=1, equivalent to nystrom(k, RowVecs(X), r). If obsdim=2, equivalent to nystrom(k, ColVecs(X), r).\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.NystromFact","page":"API","title":"KernelFunctions.NystromFact","text":"NystromFact\n\nType for storing a Nystrom factorization. The factorization contains two fields: W and C, two matrices satisfying:\n\nmathbfK approx mathbfC^intercalmathbfWmathbfC\n\n\n\n\n\n","category":"type"},{"location":"api/#Conditional-Utilities","page":"API","title":"Conditional Utilities","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"To keep the dependencies of KernelFunctions lean, some functionality is only available if specific other packages are explicitly loaded (using).","category":"page"},{"location":"api/#Kronecker.jl","page":"API","title":"Kronecker.jl","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"https://github.com/MichielStock/Kronecker.jl","category":"page"},{"location":"api/","page":"API","title":"API","text":"kronecker_kernelmatrix\nkernelkronmat","category":"page"},{"location":"api/#KernelFunctions.kronecker_kernelmatrix","page":"API","title":"KernelFunctions.kronecker_kernelmatrix","text":"kronecker_kernelmatrix(\n k::Union{IndependentMOKernel,IntrinsicCoregionMOKernel}, x::MOI, y::MOI\n) where {MOI<:IsotopicMOInputsUnion}\n\nRequires Kronecker.jl: Computes the kernelmatrix for the IndependentMOKernel and the IntrinsicCoregionMOKernel, but returns a lazy kronecker product. This object can be very efficiently inverted or decomposed. 
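For instance, one might expect the following to hold (a sketch, with Kronecker.jl loaded):\n\njulia> using Kronecker, KernelFunctions\n\njulia> k = IndependentMOKernel(GaussianKernel());\n\njulia> x = MOInput(range(0.0, 1.0; length=4), 2);\n\njulia> K = kronecker_kernelmatrix(k, x, x);\n\njulia> size(K) == (8, 8)\ntrue\n\n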
See also kernelmatrix.\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.kernelkronmat","page":"API","title":"KernelFunctions.kernelkronmat","text":"kernelkronmat(κ::Kernel, X::AbstractVector{<:Real}, dims::Int) -> KroneckerPower\n\nReturn a KroneckerPower matrix on the D-dimensional input grid constructed by otimes_i=1^D X, where D is given by dims.\n\nwarning: Warning\nRequires Kronecker.jl and for iskroncompatible(κ) to return true.\n\n\n\n\n\nkernelkronmat(κ::Kernel, X::AbstractVector{<:AbstractVector}) -> KroneckerProduct\n\nReturns a KroneckerProduct matrix on the grid built with the collection of vectors X_i_i=1^D: otimes_i=1^D X_i.\n\nwarning: Warning\nRequires Kronecker.jl and for iskroncompatible(κ) to return true.\n\n\n\n\n\n","category":"function"},{"location":"api/#PDMats.jl","page":"API","title":"PDMats.jl","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"https://github.com/JuliaStats/PDMats.jl","category":"page"},{"location":"api/","page":"API","title":"API","text":"kernelpdmat","category":"page"},{"location":"api/#KernelFunctions.kernelpdmat","page":"API","title":"KernelFunctions.kernelpdmat","text":"kernelpdmat(k::Kernel, X::AbstractVector)\n\nCompute a positive-definite matrix in the form of a PDMat matrix (see PDMats.jl), with the Cholesky decomposition precomputed. The algorithm adds a diagonal \"nugget\" term to the kernel matrix which is increased until positive definiteness is achieved. The algorithm gives up with an error if the nugget becomes larger than 1% of the largest value in the kernel matrix.\n\n\n\n\n\nkernelpdmat(k::Kernel, X::AbstractMatrix; obsdim)\n\nIf obsdim=1, equivalent to kernelpdmat(k, RowVecs(X)). If obsdim=2, equivalent to kernelpdmat(k, ColVecs(X)).\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"design/#Design","page":"Design","title":"Design","text":"","category":"section"},{"location":"design/#why_abstract_vectors","page":"Design","title":"Why AbstractVectors Everywhere?","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"To understand the advantages of using AbstractVectors everywhere to represent collections of inputs, first consider the following properties that it is desirable for a collection of inputs to satisfy.","category":"page"},{"location":"design/#Unique-Ordering","page":"Design","title":"Unique Ordering","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"There must be a clearly-defined first, second, etc. element of an input collection. If this were not the case, it would not be possible to determine a unique mapping between a collection of inputs and the output of kernelmatrix, as it would not be clear what order the rows and columns of the output should appear in.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Moreover, ordering guarantees that if you permute the collection of inputs, the ordering of the rows and columns of the kernelmatrix is correspondingly permuted.","category":"page"},{"location":"design/#Generality","page":"Design","title":"Generality","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"There must be no restriction on the domain of the input. Collections of Reals, vectors, graphs, finite-dimensional domains, or really anything else that you fancy should be straightforwardly representable. 
Moreover, whichever input class is chosen should not prevent optimal performance from being obtained.","category":"page"},{"location":"design/#Unambiguously-Defined-Length","page":"Design","title":"Unambiguously-Defined Length","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"Knowing the length of a collection of inputs is important. For example, a well-defined length guarantees that the size of the output of kernelmatrix, and related functions, is predictable. It also makes it possible to perform internal error-checking that ensures that e.g. there are the same number of inputs in two collections of inputs.","category":"page"},{"location":"design/#AbstractMatrices-Do-Not-Cut-It","page":"Design","title":"AbstractMatrices Do Not Cut It","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"Notably, while AbstractMatrix objects are often used to represent collections of vector-valued inputs, they do not immediately satisfy these properties as it is unclear whether a matrix of size P x Q represents a collection of P Q-dimensional inputs (each row is an input), or Q P-dimensional inputs (each column is an input).","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Moreover, they occasionally add some aesthetic inconvenience. For example, a collection of Real-valued inputs, which might be straightforwardly represented as an AbstractVector{<:Real}, must be reshaped into a matrix.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"There are two commonly used ways to partly resolve these shortcomings:","category":"page"},{"location":"design/#Resolution-1:-Specify-a-Convention","page":"Design","title":"Resolution 1: Specify a Convention","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"One way that these shortcomings can be partly resolved is by specifying a convention that everyone adheres to regarding the interpretation of rows vs columns. However, opinions about the choice of convention are often surprisingly strongly held, and users regularly have to remind themselves which convention has been chosen. While this resolves the ordering problem, and in principle defines the \"length\" of a collection of inputs, AbstractMatrices already have a length defined in Julia, which would generally disagree with our internal notion of length. This isn't a show-stopper, but it isn't an especially clean situation.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"There is also the opportunity for some kinds of silent bugs. For example, if an input matrix happens to be square because the number of input dimensions is the same as the number of inputs, it would be hard to know whether the correct kernelmatrix has been computed. This kind of bug seems unlikely, but it exists regardless.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Finally, suppose that your inputs are some type T that is not simply a vector of real numbers, say a graph. In this situation, how should a collection of inputs be represented? 
An N x 1 or 1 x N matrix is the only obvious candidate, but the additional singular dimension seems somewhat redundant.","category":"page"},{"location":"design/#Resolution-2:-Always-Specify-An-obsdim-Argument","page":"Design","title":"Resolution 2: Always Specify An obsdim Argument","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"Another way to partly resolve these problems is to not commit to a convention, and instead to propagate some additional information through the codebase that specifies how the input data is to be interpreted. For example, a kernel k that represents the sum of two other kernels might implement kernelmatrix as follows:","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"function kernelmatrix(k::KernelSum, x::AbstractMatrix; obsdim=1)\n return kernelmatrix(k.kernels[1], x; obsdim=obsdim) +\n kernelmatrix(k.kernels[2], x; obsdim=obsdim)\nend","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"While this prevents this package from having to pre-specify a convention, it doesn't resolve the length issue, or the issue of representing collections of inputs which aren't immediately represented as vectors. Moreover, it complicates the internals; in contrast, consider what this function looks like with an AbstractVector:","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"function kernelmatrix(k::KernelSum, x::AbstractVector)\n return kernelmatrix(k.kernels[1], x) + kernelmatrix(k.kernels[2], x)\nend","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"This code is clearer (less visual noise), and has removed a possible bug – if the implementer of kernelmatrix forgets to pass the obsdim kwarg into each subsequent kernelmatrix call, it's possible to get the wrong answer.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"This being said, we do support matrix-valued inputs – see Why We Have Support for Both.","category":"page"},{"location":"design/#AbstractVectors","page":"Design","title":"AbstractVectors","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"Requiring all collections of inputs to be AbstractVectors resolves all of these problems, and ensures that the data is self-describing to the extent that KernelFunctions.jl requires.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Firstly, the question of how to interpret the columns and rows of a matrix of inputs is resolved. Users must wrap matrices which represent collections of inputs in either a ColVecs or RowVecs, both of which have clearly defined semantics which are hard to confuse.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"By design, there is also no discrepancy between the number of inputs in the collection, and the length function – the length of a ColVecs, RowVecs, or Vector{<:Real} is equal to the number of inputs.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"There is no loss of performance.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"A collection of N Real-valued inputs can be represented by an AbstractVector{<:Real} of length N, rather than needing to use an AbstractMatrix{<:Real} of size either N x 1 or 1 x N. 
,{"location":"design/","page":"Design","title":"Design","text":"A collection of N Real-valued inputs can be represented by an AbstractVector{<:Real} of length N, rather than needing to use an AbstractMatrix{<:Real} of size either N x 1 or 1 x N. The same can be said for any other input type T, and new subtypes of AbstractVector can be added if particularly efficient ways exist to store collections of inputs of type T. A good example of this in practice is using Tuple{S, Int}, for some input type S, as the Inputs for Multiple Outputs.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"This approach can also lead to clearer user code. A user need only wrap their inputs in a ColVecs or RowVecs once, and this specification is automatically re-used everywhere. In this sense, it is straightforward to write code in such a way that there is a single source of \"truth\" about how a particular data set should be interpreted. Conversely, the obsdim resolution requires that the obsdim keyword argument be passed around with the data every single time it is used.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"The benefits of the AbstractVector approach are likely most strongly felt when writing a substantial amount of code on top of KernelFunctions.jl: just as using AbstractVectors inside KernelFunctions.jl removes the need for large amounts of keyword-argument propagation, the same is true of code built on top of it.","category":"page"},{"location":"design/#Why-We-Have-Support-for-Both","page":"Design","title":"Why We Have Support for Both","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"In short: many people like matrices, and are familiar with obsdim-style keyword arguments.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"All internals are implemented using AbstractVectors, though, and the obsdim interface is just a thin layer of utility functionality which sits on top of this. To avoid confusion and silent errors, we do not favour a specific convention (rows or columns); instead, the obsdim keyword argument must be specified explicitly.","category":"page"},{"location":"design/#inputs_for_multiple_outputs","page":"Design","title":"Kernels for Multiple Outputs","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"There are two equally valid perspectives on multi-output kernels: they can be treated either as matrix-valued kernels, or as standard kernels on an extended input domain. Each of these perspectives is convenient in different circumstances, but the latter greatly simplifies the incorporation of multi-output kernels in KernelFunctions.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"More concretely, let k_mat be a matrix-valued kernel, mapping pairs of inputs of type T to matrices of size P x P which describe the covariance between P outputs. Given inputs x and y of type T, and integers p and q, we can always find an equivalent standard kernel k mapping from pairs of inputs of type Tuple{T, Int} to the Reals as follows:","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"k((x, p), (y, q)) = k_mat(x, y)[p, q]","category":"page"}
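,{"location":"design/","page":"Design","title":"Design","text":"As a minimal sketch of this equivalence, the following lifts a single-output kernel to extended inputs (x, p), assuming independent outputs. (ToyIndependentMOKernel is a hypothetical name used for illustration only; KernelFunctions.jl ships its own multi-output kernels.)","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"using KernelFunctions\n\n# Hypothetical kernel on extended inputs of type Tuple{T, Int}.\nstruct ToyIndependentMOKernel{K<:Kernel} <: Kernel\n    kernel::K\nend\n\nfunction (k::ToyIndependentMOKernel)(xp::Tuple, yq::Tuple)\n    x, p = xp\n    y, q = yq\n    # Independent outputs: zero covariance between distinct outputs.\n    return p == q ? k.kernel(x, y) : 0.0\nend\n\nk = ToyIndependentMOKernel(SqExponentialKernel())\nk((1.0, 1), (1.2, 1))  # covariance of output 1 at x = 1.0 with output 1 at y = 1.2\nk((1.0, 1), (1.2, 2))  # 0.0","category":"page"}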
,{"location":"design/","page":"Design","title":"Design","text":"This ability to treat multi-output kernels as single-output kernels is very helpful, as it means that there is no need to introduce additional concepts into the API of KernelFunctions.jl, just additional kernels! This in turn simplifies downstream code, which doesn't need to \"know\" about the existence of multi-output kernels in addition to standard kernels. For example, GP libraries built on top of KernelFunctions.jl just need to know about Kernels, and they get multi-output kernels, and hence multi-output GPs, for free.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Where there is a need to specialise implementations for multi-output kernels, this is done in an encapsulated manner: parts of KernelFunctions that have nothing to do with multi-output kernels know nothing about their existence.","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"EditURL = \"https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/blob/master/examples/support-vector-machine/script.jl\"","category":"page"},{"location":"examples/support-vector-machine/#Support-Vector-Machine","page":"Support Vector Machine","title":"Support Vector Machine","text":"","category":"section"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"In this notebook we show how you can use KernelFunctions.jl to generate kernel matrices for classification with a support vector machine, as implemented by LIBSVM.","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"using Distributions\nusing KernelFunctions\nusing LIBSVM\nusing LinearAlgebra\nusing Plots\nusing Random\n\n# Set seed\nRandom.seed!(1234);","category":"page"},{"location":"examples/support-vector-machine/#Generate-half-moon-dataset","page":"Support Vector Machine","title":"Generate half-moon dataset","text":"","category":"section"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"Number of samples per class:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"n1 = n2 = 50;","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"We generate data based on scikit-learn's sklearn.datasets.make_moons function:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"angle1 = range(0, π; length=n1)\nangle2 = range(0, π; length=n2)\nX1 = [cos.(angle1) sin.(angle1)] .+ 0.1 .* randn.()\nX2 = [1 .- cos.(angle2) 1 .- sin.(angle2) .- 0.5] .+ 0.1 .* randn.()\nX = [X1; X2]\nx_train = 
RowVecs(X)\ny_train = vcat(fill(-1, n1), fill(1, n2));","category":"page"},{"location":"examples/support-vector-machine/#Training","page":"Support Vector Machine","title":"Training","text":"","category":"section"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"We create a kernel function:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"k = SqExponentialKernel() ∘ ScaleTransform(1.5)","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"Squared Exponential Kernel (metric = Distances.Euclidean(0.0))\n\t- Scale Transform (s = 1.5)","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"LIBSVM can make use of a pre-computed kernel matrix. KernelFunctions.jl can be used to produce that using kernelmatrix:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"model = svmtrain(kernelmatrix(k, x_train), y_train; kernel=LIBSVM.Kernel.Precomputed)","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"LIBSVM.SVM{Int64, LIBSVM.Kernel.KERNEL}(LIBSVM.SVC, LIBSVM.Kernel.Precomputed, nothing, 100, 100, 2, [-1, 1], Int32[1, 2], Float64[], Int32[], LIBSVM.SupportVectors{Vector{Int64}, Matrix{Float64}}(23, Int32[11, 12], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], […23×23 kernel matrix between the support vectors elided…], Int32[1, 2, 3, 4, 6, 7, 24, 25, 30, 46, 47, 51, 52, 53, 54, 56, 58, 72, 74, 78, 86, 89, 99], LIBSVM.SVMNode[…]), 0.0, […], Float64[], Float64[], [-0.015075000482567661], 3, 0.01, 200.0, 0.001, 1.0, 0.5, 0.1, true, false)","category":"page"},{"location":"examples/support-vector-machine/#Prediction","page":"Support Vector Machine","title":"Prediction","text":"","category":"section"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"For evaluation, we create a 100×100 2D grid based on the extent of the training data:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"test_range = range(floor(Int, minimum(X)), ceil(Int, maximum(X)); length=100)\nx_test = ColVecs(mapreduce(collect, hcat, Iterators.product(test_range, test_range)));","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"Again, we pass the result of KernelFunctions.jl's kernelmatrix to LIBSVM:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"y_pred, _ = svmpredict(model, kernelmatrix(k, x_train, x_test));","category":"page"}
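,{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"Note that, for a precomputed kernel, svmpredict expects the kernel matrix between the training inputs and the inputs to predict at. As a quick sanity check (a hypothetical addition, not part of the original notebook), the same pattern recovers the training accuracy:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"using Statistics: mean\n\ny_train_pred, _ = svmpredict(model, kernelmatrix(k, x_train, x_train));\nmean(y_train_pred .== y_train)  # fraction of training points classified correctly","category":"page"}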
Machine","title":"Support Vector Machine","text":"\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"
,{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"Package and system information. Package environment (Project.toml): Distributions v0.25.107, KernelFunctions v0.10.63 (#master), LIBSVM v0.8.0, Literate v2.16.1, Plots v1.40.1, LinearAlgebra. The full Manifest.toml can be downloaded to reproduce this notebook's package environment. System: Julia 1.10.0 (official release) on Linux x86_64, AMD EPYC 7763, 1 thread on 4 virtual cores.
      ","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"This page was generated using Literate.jl.","category":"page"},{"location":"metrics/#Metrics","page":"Metrics","title":"Metrics","text":"","category":"section"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"SimpleKernel implementations rely on Distances.jl for efficiently computing the pairwise matrix. This requires a distance measure or metric, such as the commonly used SqEuclidean and Euclidean.","category":"page"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"The metric used by a given kernel type is specified as","category":"page"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"KernelFunctions.metric(::CustomKernel) = SqEuclidean()","category":"page"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"However, there are kernels that can be implemented efficiently using \"metrics\" that do not respect all the definitions expected by Distances.jl. For this reason, KernelFunctions.jl provides additional \"metrics\" such as DotProduct (langle x y rangle) and Delta (delta(xy)).","category":"page"},{"location":"metrics/#Adding-a-new-metric","page":"Metrics","title":"Adding a new metric","text":"","category":"section"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"If you want to create a new \"metric\" just implement the following:","category":"page"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"struct Delta <: Distances.PreMetric\nend\n\n@inline function Distances._evaluate(::Delta,a::AbstractVector{T},b::AbstractVector{T}) where {T}\n @boundscheck if length(a) != length(b)\n throw(DimensionMismatch(\"first array has length $(length(a)) which does not match the length of the second, $(length(b)).\"))\n end\n return a==b\nend\n\n@inline (dist::Delta)(a::AbstractArray,b::AbstractArray) = Distances._evaluate(dist,a,b)\n@inline (dist::Delta)(a::Number,b::Number) = a==b","category":"page"},{"location":"transform/#input_transforms","page":"Input Transforms","title":"Input Transforms","text":"","category":"section"},{"location":"transform/#Overview","page":"Input Transforms","title":"Overview","text":"","category":"section"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"Transforms are designed to change input data before passing it on to a kernel object.","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"It can be as standard as IdentityTransform returning the same input, or multiplying the data by a scalar with ScaleTransform or by a vector with ARDTransform. 
,{"location":"transform/#input_transforms","page":"Input Transforms","title":"Input Transforms","text":"","category":"section"},{"location":"transform/#Overview","page":"Input Transforms","title":"Overview","text":"","category":"section"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"Transforms are designed to change input data before passing it on to a kernel object.","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"A transform can be as simple as IdentityTransform, which returns the input unchanged, or can multiply the data by a scalar (ScaleTransform) or elementwise by a vector (ARDTransform). The more general FunctionTransform applies an arbitrary function to each input.","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"You can also create a pipeline of Transforms via ChainTransform, e.g.,","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"LinearTransform(rand(10, 5)) ∘ ScaleTransform(2.0)","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"A transformation t can be applied to a single input x with t(x) and to multiple inputs xs with map(t, xs).","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"Kernels can be coupled with input transformations with ∘ or its alias compose. By default this creates a TransformedKernel, but more optimized implementations exist for specific kernels and transformations.","category":"page"},{"location":"transform/#List-of-Input-Transforms","page":"Input Transforms","title":"List of Input Transforms","text":"","category":"section"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"Transform\nIdentityTransform\nScaleTransform\nARDTransform\nARDTransform(::Real, ::Integer)\nLinearTransform\nFunctionTransform\nSelectTransform\nChainTransform\nPeriodicTransform","category":"page"},{"location":"transform/#KernelFunctions.Transform","page":"Input Transforms","title":"KernelFunctions.Transform","text":"Transform\n\nAbstract type defining a transformation of the input.\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.IdentityTransform","page":"Input Transforms","title":"KernelFunctions.IdentityTransform","text":"IdentityTransform()\n\nTransformation that returns exactly the input.\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.ScaleTransform","page":"Input Transforms","title":"KernelFunctions.ScaleTransform","text":"ScaleTransform(l::Real)\n\nTransformation that multiplies the input elementwise with l.\n\nExamples\n\njulia> l = rand(); t = ScaleTransform(l); X = rand(100, 10);\n\njulia> map(t, ColVecs(X)) == ColVecs(l .* X)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.ARDTransform","page":"Input Transforms","title":"KernelFunctions.ARDTransform","text":"ARDTransform(v::AbstractVector)\n\nTransformation that multiplies the input elementwise by v.\n\nExamples\n\njulia> v = rand(10); t = ARDTransform(v); X = rand(10, 100);\n\njulia> map(t, ColVecs(X)) == ColVecs(v .* X)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.ARDTransform-Tuple{Real, Integer}","page":"Input Transforms","title":"KernelFunctions.ARDTransform","text":"ARDTransform(s::Real, dims::Integer)\n\nCreate an ARDTransform with vector fill(s, dims).\n\n\n\n\n\n","category":"method"},{"location":"transform/#KernelFunctions.LinearTransform","page":"Input Transforms","title":"KernelFunctions.LinearTransform","text":"LinearTransform(A::AbstractMatrix)\n\nLinear transformation of the input realised by the matrix A.\n\nThe second dimension of A must match the number of features of the target.\n\nExamples\n\njulia> A = rand(10, 5); t = LinearTransform(A); X = rand(5, 100);\n\njulia> map(t, ColVecs(X)) == ColVecs(A * X)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.FunctionTransform","page":"Input Transforms","title":"KernelFunctions.FunctionTransform","text":"FunctionTransform(f)\n\nTransformation 
that applies function f to the input.\n\nMake sure that f can act on an input. For instance, if the inputs are vectors, use f(x) = sin.(x) instead of f = sin.\n\nExamples\n\njulia> f(x) = sum(x); t = FunctionTransform(f); X = randn(100, 10);\n\njulia> map(t, ColVecs(X)) == ColVecs(sum(X; dims=1))\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.SelectTransform","page":"Input Transforms","title":"KernelFunctions.SelectTransform","text":"SelectTransform(dims)\n\nTransformation that selects the dimensions dims of the input.\n\nExamples\n\njulia> dims = [1, 3, 5, 6, 7]; t = SelectTransform(dims); X = rand(100, 10);\n\njulia> map(t, ColVecs(X)) == ColVecs(X[dims, :])\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.ChainTransform","page":"Input Transforms","title":"KernelFunctions.ChainTransform","text":"ChainTransform(transforms)\n\nTransformation that applies a chain of transformations ts to the input.\n\nThe transformation first(ts) is applied first.\n\nExamples\n\njulia> l = rand(); A = rand(3, 4); t1 = ScaleTransform(l); t2 = LinearTransform(A);\n\njulia> X = rand(4, 10);\n\njulia> map(ChainTransform([t1, t2]), ColVecs(X)) == ColVecs(A * (l .* X))\ntrue\n\njulia> map(t2 ∘ t1, ColVecs(X)) == ColVecs(A * (l .* X))\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.PeriodicTransform","page":"Input Transforms","title":"KernelFunctions.PeriodicTransform","text":"PeriodicTransform(f)\n\nTransformation that maps the input elementwise onto the unit circle with frequency f.\n\nSamples from a GP with a kernel with this transformation applied to the inputs will produce samples with frequency f.\n\nExamples\n\njulia> f = rand(); t = PeriodicTransform(f); x = rand();\n\njulia> t(x) == [sinpi(2 * f * x), cospi(2 * f * x)]\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#Convenience-functions","page":"Input Transforms","title":"Convenience functions","text":"","category":"section"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"with_lengthscale\nmedian_heuristic_transform","category":"page"},{"location":"transform/#KernelFunctions.with_lengthscale","page":"Input Transforms","title":"KernelFunctions.with_lengthscale","text":"with_lengthscale(kernel::Kernel, lengthscale::Real)\n\nConstruct a transformed kernel with lengthscale.\n\nExamples\n\njulia> kernel = with_lengthscale(SqExponentialKernel(), 2.5);\n\njulia> x = rand(2);\n\njulia> y = rand(2);\n\njulia> kernel(x, y) ≈ (SqExponentialKernel() ∘ ScaleTransform(0.4))(x, y)\ntrue\n\n\n\n\n\nwith_lengthscale(kernel::Kernel, lengthscales::AbstractVector{<:Real})\n\nConstruct a transformed \"ARD\" kernel with different lengthscales for each dimension.\n\nExamples\n\njulia> kernel = with_lengthscale(SqExponentialKernel(), [0.5, 2.5]);\n\njulia> x = rand(2);\n\njulia> y = rand(2);\n\njulia> kernel(x, y) ≈ (SqExponentialKernel() ∘ ARDTransform([2, 0.4]))(x, y)\ntrue\n\n\n\n\n\n","category":"function"},{"location":"transform/#KernelFunctions.median_heuristic_transform","page":"Input Transforms","title":"KernelFunctions.median_heuristic_transform","text":"median_heuristic_transform(distance, x::AbstractVector)\n\nCreate a ScaleTransform that divides the input elementwise by the median distance of the data points in x.\n\nThe distance has to support pairwise evaluation with KernelFunctions.pairwise. 
All PreMetrics of the package Distances.jl such as Euclidean satisfy this requirement automatically.\n\nExamples\n\njulia> using Distances, Statistics\n\njulia> x = ColVecs(rand(100, 10));\n\njulia> t = median_heuristic_transform(Euclidean(), x);\n\njulia> y = map(t, x);\n\njulia> median(euclidean(y[i], y[j]) for i in 1:10, j in 1:10 if i != j) ≈ 1\ntrue\n\n\n\n\n\n","category":"function"},{"location":"userguide/#User-guide","page":"User guide","title":"User guide","text":"","category":"section"},{"location":"userguide/#Kernel-Creation","page":"User guide","title":"Kernel Creation","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"To create a kernel object, choose one of the pre-implemented kernels, see Kernel Functions, or create your own, see Creating your own kernel. For example, a squared exponential kernel is created by","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":" k = SqExponentialKernel()","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"tip: How do I set the lengthscale(s)?\nInstead of having lengthscale(s) for each kernel, we use Transform objects which act on the inputs before passing them to the kernel. Note that transforms such as ScaleTransform and ARDTransform multiply the input by a scale factor, which corresponds to the inverse of the lengthscale. For example, a lengthscale of 0.5 is equivalent to premultiplying the input by 2.0, and you can create the corresponding kernel in either of the following equivalent ways:\n\nk = SqExponentialKernel() ∘ ScaleTransform(2.0)\nk = compose(SqExponentialKernel(), ScaleTransform(2.0))\n\nAlternatively, you can use the convenience function with_lengthscale:\n\nk = with_lengthscale(SqExponentialKernel(), 0.5)\n\nwith_lengthscale also works with vector-valued lengthscales for multi-dimensional inputs, and is equivalent to pre-composing with an ARDTransform:\n\nlength_scales = [1.0, 2.0]\nk = with_lengthscale(SqExponentialKernel(), length_scales)\nk = SqExponentialKernel() ∘ ARDTransform(1 ./ length_scales)\n\nCheck the Input Transforms page for more details.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"tip: How do I set the kernel variance?\nTo premultiply the kernel by a variance, you can use * with a scalar number:\n\nk = 3.0 * SqExponentialKernel()","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"tip: How do I use a Mahalanobis kernel?\nThe MahalanobisKernel(; P=P), defined by\n\nk(x, x′; P) = exp(-(x - x′)ᵀ P (x - x′))\n\nfor a positive definite matrix P = QᵀQ, was removed in 0.9. 
Instead you can use a squared exponential kernel together with a LinearTransform of the inputs:\n\nk = SqExponentialKernel() ∘ LinearTransform(sqrt(2) .* Q)\n\nAnalogously, you can combine other kernels such as the PiecewisePolynomialKernel with a LinearTransform of the inputs to obtain a kernel that is a function of the Mahalanobis distance between inputs.","category":"page"},{"location":"userguide/#Using-a-Kernel-Function","page":"User guide","title":"Using a Kernel Function","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"To evaluate the kernel function on two vectors you simply call the kernel object:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"k = SqExponentialKernel()\nx1 = rand(3)\nx2 = rand(3)\nk(x1, x2)","category":"page"},{"location":"userguide/#Creating-a-Kernel-Matrix","page":"User guide","title":"Creating a Kernel Matrix","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"Kernel matrices can be created via the kernelmatrix function or kernelmatrix_diag for only the diagonal. For example, for a collection of 10 Real-valued inputs:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"k = SqExponentialKernel()\nx = rand(10)\nkernelmatrix(k, x) # 10x10 matrix","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"If your inputs are multi-dimensional, it is common to represent them as a matrix. For example","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"X = rand(10, 5)","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"However, it is ambiguous whether this represents a collection of 10 5-dimensional row-vectors, or 5 10-dimensional column-vectors. 
Therefore, we require users to provide some more information.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"You can write RowVecs(X) to declare that X contains 10 5-dimensional row-vectors, or ColVecs(X) to declare that X contains 5 10-dimensional column-vectors, then","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"kernelmatrix(k, RowVecs(X)) # returns a 10×10 matrix -- each row of X treated as input\nkernelmatrix(k, ColVecs(X)) # returns a 5×5 matrix -- each column of X treated as input","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"This is the mechanism used throughout KernelFunctions.jl to handle multi-dimensional inputs.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"You can utilise the obsdim keyword argument if you prefer:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"kernelmatrix(k, X; obsdim=1) # same as RowVecs(X)\nkernelmatrix(k, X; obsdim=2) # same as ColVecs(X)","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"This is similar to the convention used in Distances.jl.","category":"page"},{"location":"userguide/#So-what-type-should-I-use-to-represent-a-collection-of-inputs?","page":"User guide","title":"So what type should I use to represent a collection of inputs?","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"The central assumption made by KernelFunctions.jl is that all collections of N inputs are represented by AbstractVectors of length N. Abstraction is then used to ensure that efficiency is retained, ColVecs and RowVecs being the most obvious examples of this.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"Concretely:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"For Real-valued inputs (scalars), a Vector{<:Real} is fine.\nFor vector-valued inputs, consider a ColVecs or RowVecs.\nFor a new input type, simply represent collections of inputs of this type as an AbstractVector.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"See Input Types and Design for a more thorough discussion of the considerations made when this design was adopted.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"The obsdim kwarg mentioned above is a special case for vector-valued inputs stored in a matrix. 
It is implemented as a lightweight wrapper that constructs either a RowVecs or ColVecs from your inputs, and passes this on.","category":"page"},{"location":"userguide/#Output-Types","page":"User guide","title":"Output Types","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"In addition to plain Matrix-like output, KernelFunctions.jl supports specific output types:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"For a positive-definite matrix object of type PDMat from PDMats.jl, you can call the following:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"using PDMats\nk = SqExponentialKernel()\nK = kernelpdmat(k, RowVecs(X)) # PDMat\nK = kernelpdmat(k, X; obsdim=1) # PDMat","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"It will create a matrix and in case of bad conditioning will add some diagonal noise until the matrix is considered positive-definite; it will then return a PDMat object. For this method to work in your code you need to include using PDMats first.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"For a Kronecker matrix, we rely on Kronecker.jl. Here are two examples:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"using Kronecker\nx = range(0, 1; length=10)\ny = range(0, 1; length=50)\nK = kernelkronmat(k, [x, y]) # Kronecker matrix\nK = kernelkronmat(k, x, 5) # Kronecker matrix","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"Make sure that k is a kernel compatible with such constructions (with iskroncompatible(k)). Both methods will return a Kronecker matrix. For those methods to work in your code you need to include using Kronecker first.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"For a Nystrom approximation: kernelmatrix(nystrom(k, X, ρ, obsdim=1)) where ρ is the fraction of data samples used in the approximation.","category":"page"},{"location":"userguide/#Composite-Kernels","page":"User guide","title":"Composite Kernels","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"Sums and products of kernels are also valid kernels. They can be created via KernelSum and KernelProduct or using simple operators + and *. For example:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"k1 = SqExponentialKernel()\nk2 = Matern32Kernel()\nk = 0.5 * k1 + 0.2 * k2 # KernelSum\nk = k1 * k2 # KernelProduct","category":"page"},{"location":"userguide/#Kernel-Parameters","page":"User guide","title":"Kernel Parameters","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"What if you want to differentiate through the kernel parameters? This is easy even in a highly nested structure such as:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"k = (\n 0.5 * SqExponentialKernel() * Matern12Kernel() +\n 0.2 * (LinearKernel() ∘ ScaleTransform(2.0) + PolynomialKernel())\n) ∘ ARDTransform([0.1, 0.5])","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"One can access the named tuple of trainable parameters via Functors.functor from Functors.jl. 
This means that in practice you can implicitly optimize the kernel parameters by calling:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"using Flux\nkernelparams = Flux.params(k)\nFlux.gradient(kernelparams) do\n    # ... some loss function on the kernel ...\nend","category":"page"}
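,{"location":"userguide/","page":"User guide","title":"User guide","text":"To make the placeholder concrete, here is a minimal end-to-end sketch (not from the manual) that differentiates a toy loss, the sum of all kernel matrix entries, with respect to the parameters of a smaller composite kernel:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"using KernelFunctions, Flux\n\nk = 3.0 * SqExponentialKernel() ∘ ScaleTransform(2.0)\nx = rand(5)\n\nkernelparams = Flux.params(k)\ngrads = Flux.gradient(kernelparams) do\n    sum(kernelmatrix(k, x))  # toy loss on the kernel\nend","category":"page"}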
,{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"EditURL = \"../../../../examples/gaussian-process-priors/script.jl\"","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"EditURL = \"https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/blob/master/examples/gaussian-process-priors/script.jl\"","category":"page"},{"location":"examples/gaussian-process-priors/#Gaussian-process-prior-samples","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"(Image: )","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"The kernels defined in this package can also be used to specify the covariance of a Gaussian process prior. A Gaussian process (GP) is defined by its mean function m(⋅) and its covariance function or kernel k(⋅, ⋅):","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"f ∼ GP(m(⋅), k(⋅, ⋅))","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"In this notebook we show how the choice of kernel affects the samples from a GP (with zero mean).","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"# Load required packages\nusing KernelFunctions, LinearAlgebra\nusing Plots, Plots.PlotMeasures\ndefault(; lw=1.0, legendfontsize=8.0)\nusing Random: seed!\nseed!(42); # reproducibility","category":"page"},{"location":"examples/gaussian-process-priors/#Evaluation-at-finite-set-of-points","page":"Gaussian process prior samples","title":"Evaluation at finite set of points","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"The function values 𝐟 = (f(x₁), …, f(x_N)) of the GP at a finite number N of points X = (x₁, …, x_N) follow a multivariate normal distribution 𝐟 ∼ MVN(𝐦, K) with mean vector 𝐦 and covariance matrix K, where","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"𝐦ᵢ = m(xᵢ)\nKᵢⱼ = k(xᵢ, xⱼ)","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"with 1 ≤ i, j ≤ N.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"We can visualize the infinite-dimensional GP by evaluating it on a fine grid to approximate the dense real line:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"num_inputs = 101\nxlim = (-5, 5)\nX = range(xlim...; length=num_inputs);","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"Given a kernel k, we can compute the kernel matrix as K = kernelmatrix(k, X).","category":"page"}
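,{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"For instance (a quick sketch, not part of the original notebook):","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"k = SqExponentialKernel()\nK = kernelmatrix(k, X)   # 101×101 covariance matrix over the grid\nK[1, 2] ≈ k(X[1], X[2])  # true, by the definition of kernelmatrix","category":"page"}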
,{"location":"examples/gaussian-process-priors/#Random-samples","page":"Gaussian process prior samples","title":"Random samples","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"To sample from the multivariate normal distribution p(𝐟) = MVN(0, K), we could make use of Distributions.jl and call rand(MvNormal(K)). Alternatively, we could use the AbstractGPs.jl package and construct a GP object which we evaluate at the points of interest and from which we can then sample: rand(GP(k)(X)).","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"Here, we will explicitly construct samples using the Cholesky factorization L = cholesky(K), with 𝐟 = L𝐯, where 𝐯 ∼ N(0, I) is a vector of standard-normal random variables.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"We will use the same randomness 𝐯 to generate comparable samples across different kernels.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"num_samples = 7\nv = randn(num_inputs, num_samples);","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"Mathematically, a kernel matrix is by definition positive semi-definite, but due to finite-precision inaccuracies, the computed kernel matrix might not be exactly positive definite. To avoid Cholesky errors, we add a small \"nugget\" term on the diagonal:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"function mvn_sample(K)\n    L = cholesky(K + 1e-6 * I)\n    f = L.L * v\n    return f\nend;","category":"page"},{"location":"examples/gaussian-process-priors/#Visualization","page":"Gaussian process prior samples","title":"Visualization","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"We now define a function that visualizes a kernel for us.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"function visualize(k::Kernel)\n    K = kernelmatrix(k, X)\n    f = mvn_sample(K)\n\n    p_kernel_2d = heatmap(\n        X,\n        X,\n        K;\n        yflip=true,\n        colorbar=false,\n        ylabel=string(nameof(typeof(k))),\n        ylim=xlim,\n        yticks=([xlim[1], 0, xlim[end]], [\"\\u22125\", raw\"$x'$\", \"5\"]),\n        vlim=(0, 1),\n        title=raw\"$k(x, x')$\",\n        aspect_ratio=:equal,\n        left_margin=5mm,\n    )\n\n    p_kernel_cut = plot(\n        X,\n        k.(X, 0.0);\n        title=string(raw\"$k(x, x_\\mathrm{ref})$\"),\n        label=raw\"$x_\\mathrm{ref}=0.0$\",\n        legend=:topleft,\n        foreground_color_legend=nothing,\n    )\n    plot!(X, k.(X, 1.5); label=raw\"$x_\\mathrm{ref}=1.5$\")\n\n    p_samples = plot(X, f; c=\"blue\", title=raw\"$f(x)$\", ylim=(-3, 3), label=nothing)\n\n    return plot(\n        p_kernel_2d,\n        p_kernel_cut,\n        p_samples;\n        layout=(1, 3),\n        xlabel=raw\"$x$\",\n        xlim=xlim,\n        xticks=collect(xlim),\n    )\nend;","category":"page"}
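,{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"Before plotting anything, here is a small numerical sketch (not part of the original notebook) of what mvn_sample produces, reusing the grid X and the shared randomness v from above:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"K = kernelmatrix(SqExponentialKernel(), X)\nf = mvn_sample(K)\nsize(f)  # (101, 7): one column per prior sample, all driven by the same v","category":"page"}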
,{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"We can now visualize a kernel and show samples from a Gaussian process with a given kernel:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"plot(visualize(SqExponentialKernel()); size=(800, 210), bottommargin=5mm, topmargin=5mm)","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"(Plot: heatmap of k(x, x′), slices k(x, x_ref) at x_ref = 0.0 and 1.5, and seven prior samples f(x) for the SqExponentialKernel.)","category":"page"},{"location":"examples/gaussian-process-priors/#Kernel-comparison","page":"Gaussian process prior samples","title":"Kernel comparison","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"This also allows us to compare different kernels:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"kernels = [\n    Matern12Kernel(),\n    Matern32Kernel(),\n    Matern52Kernel(),\n    SqExponentialKernel(),\n    WhiteKernel(),\n    ConstantKernel(),\n    LinearKernel(),\n    compose(PeriodicKernel(), ScaleTransform(0.2)),\n    NeuralNetworkKernel(),\n    GibbsKernel(; lengthscale=x -> sum(exp ∘ sin, x)),\n]\nplot(\n    [visualize(k) for k in kernels]...;\n    layout=(length(kernels), 1),\n    size=(800, 220 * length(kernels) + 100),\n)","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"(Plot: the same three-panel visualization repeated for each kernel in the list above, one row per kernel.)","category":"page"}
,{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"Package and system information. Package environment (Project.toml): Distributions v0.25.107, KernelFunctions v0.10.63 (#master), Literate v2.16.1, Plots v1.40.1, LinearAlgebra, Random. The full Manifest.toml can be downloaded to reproduce this notebook's package environment. System: Julia 1.10.0 (official release) on Linux x86_64, AMD EPYC 7763, 1 thread on 4 virtual cores.
      ","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"This page was generated using Literate.jl.","category":"page"},{"location":"#KernelFunctions.jl","page":"Home","title":"KernelFunctions.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"KernelFunctions.jl is a general purpose kernel package. It provides a flexible framework for creating kernel functions and manipulating them, and an extensive collection of implementations. The main goals of this package are:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Flexibility: operations between kernels should be fluid and easy without breaking, with a user-friendly API.\nPlug-and-play: being model-agnostic; including the kernels before/after other steps should be straightforward. To interoperate well with generic packages for handling parameters like ParameterHandling.jl and FluxML's Functors.jl.\nAutomatic Differentiation compatibility: all kernel functions which ought to be differentiable using AD packages like ForwardDiff.jl or Zygote.jl should be.","category":"page"},{"location":"","page":"Home","title":"Home","text":"This package replaces the now-defunct MLKernels.jl. It incorporates lots of excellent existing work from packages such as GaussianProcesses.jl, and is used in downstream packages such as AbstractGPs.jl, ApproximateGPs.jl, Stheno.jl, and AugmentedGaussianProcesses.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"See the User guide for a brief introduction.","category":"page"}] +[{"location":"create_kernel/#Custom-Kernels","page":"Custom Kernels","title":"Custom Kernels","text":"","category":"section"},{"location":"create_kernel/#Creating-your-own-kernel","page":"Custom Kernels","title":"Creating your own kernel","text":"","category":"section"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"KernelFunctions.jl contains the most popular kernels already but you might want to make your own!","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"Here are a few ways depending on how complicated your kernel is:","category":"page"},{"location":"create_kernel/#SimpleKernel-for-kernel-functions-depending-on-a-metric","page":"Custom Kernels","title":"SimpleKernel for kernel functions depending on a metric","text":"","category":"section"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"If your kernel function is of the form k(x, y) = f(d(x, y)) where d(x, y) is a PreMetric, you can construct your custom kernel by defining kappa and metric for your kernel. 
For example, here is how one could define the SqExponentialKernel again:","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"struct MyKernel <: KernelFunctions.SimpleKernel end\n\nKernelFunctions.kappa(::MyKernel, d2::Real) = exp(-d2)\nKernelFunctions.metric(::MyKernel) = SqEuclidean()","category":"page"},{"location":"create_kernel/#Kernel-for-more-complex-kernels","page":"Custom Kernels","title":"Kernel for more complex kernels","text":"","category":"section"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"If your kernel cannot be written in this form, all you need to do is define (k::MyKernel)(x, y) and inherit from Kernel. For example, we recreate here the NeuralNetworkKernel:","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"struct MyKernel <: KernelFunctions.Kernel end\n\n(::MyKernel)(x, y) = asin(dot(x, y) / sqrt((1 + sum(abs2, x)) * (1 + sum(abs2, y))))","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"Note that the fallback implementation of the base Kernel evaluation does not use Distances.jl and can therefore be a bit slower.","category":"page"},{"location":"create_kernel/#Additional-Options","page":"Custom Kernels","title":"Additional Options","text":"","category":"section"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"Finally, there are additional functions you can define to bring in more features:","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"KernelFunctions.iskroncompatible(k::MyKernel): if your kernel factorizes in dimensions, you can declare your kernel as iskroncompatible(k) = true to use Kronecker methods (see the sketch after this list).\nKernelFunctions.dim(x::MyDataType): by default the dimension of the inputs will only be checked for vectors of type AbstractVector{<:Real}. If you want to check the dimensionality of your inputs, dispatch the dim function on your datatype. Note that 0 is the default.\ndim is called within KernelFunctions.validate_inputs(x::MyDataType, y::MyDataType), which can instead be directly overloaded if you want to run special checks for your input types.\nkernelmatrix(k::MyKernel, ...): you can overload the various kernelmatrix functions to optimize the computations.\nBase.print(io::IO, k::MyKernel): if you want to specialize the printing of your kernel.","category":"page"}
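,{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"As a minimal sketch of two of these hooks (whether such declarations are correct depends on your kernel; MyKernel here is just the toy type from above):","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"# Declare that MyKernel factorizes across input dimensions,\n# enabling kernelkronmat and other Kronecker methods:\nKernelFunctions.iskroncompatible(::MyKernel) = true\n\n# Specialize how the kernel is printed:\nBase.print(io::IO, ::MyKernel) = print(io, \"MyKernel (custom kernel)\")","category":"page"}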
,{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"KernelFunctions uses Functors.jl for specifying trainable kernel parameters in a way that is compatible with the Flux ML framework. You can use Functors.@functor if all fields of your kernel struct are trainable. Note that optimization algorithms in Flux are not compatible with scalar parameters (yet), and hence vector-valued parameters should be preferred.","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"import Functors\n\nstruct MyKernel{T} <: KernelFunctions.Kernel\n    a::Vector{T}\nend\n\nFunctors.@functor MyKernel","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"If only a subset of the fields are trainable, you have to specify explicitly how to (re)construct the kernel with modified parameter values by implementing Functors.functor(::Type{<:MyKernel}, x) for your kernel struct:","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"import Functors\n\nstruct MyKernel{T} <: KernelFunctions.Kernel\n    n::Int\n    a::Vector{T}\nend\n\nfunction Functors.functor(::Type{<:MyKernel}, x::MyKernel)\n    function reconstruct_mykernel(xs)\n        # keep field `n` of the original kernel and set `a` to (possibly different) `xs.a`\n        return MyKernel(x.n, xs.a)\n    end\n    return (a = x.a,), reconstruct_mykernel\nend","category":"page"}
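,{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"To see what this buys you, here is a small usage sketch (not from the manual) of the partially trainable kernel above:","category":"page"},{"location":"create_kernel/","page":"Custom Kernels","title":"Custom Kernels","text":"k = MyKernel(3, [1.0, 2.0])\n\nps, re = Functors.functor(k)\nps                       # (a = [1.0, 2.0],) — only `a` is exposed as trainable\nre((a = [10.0, 20.0],))  # MyKernel(3, [10.0, 20.0]) — `n` is kept unchanged","category":"page"}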
,{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"EditURL = \"../../../../examples/train-kernel-parameters/script.jl\"","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"EditURL = \"https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/blob/master/examples/train-kernel-parameters/script.jl\"","category":"page"},{"location":"examples/train-kernel-parameters/#Train-Kernel-Parameters","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"","category":"section"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"(Image: )","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Here we show a few ways to train (optimize) the kernel (hyper)parameters, using kernel-based regression as the example. All options are functionally identical, but differ a little in readability, dependencies, and computational cost.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"We load KernelFunctions and some other packages. Note that while we use Zygote for automatic differentiation and Flux.Optimise for optimization, you should be able to replace them with your favourite autodiff framework or optimizer.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"using KernelFunctions\nusing LinearAlgebra\nusing Distributions\nusing Plots\nusing BenchmarkTools\nusing Flux\nusing Flux: Optimise\nusing Zygote\nusing Random: seed!\nseed!(42);","category":"page"},{"location":"examples/train-kernel-parameters/#Data-Generation","page":"Train Kernel Parameters","title":"Data Generation","text":"","category":"section"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"We generate a toy dataset in 1 dimension:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"xmin, xmax = -3, 3 # Bounds of the data\nN = 50 # Number of samples\nx_train = rand(Uniform(xmin, xmax), N) # sample the inputs\nσ = 0.1\ny_train = sinc.(x_train) + randn(N) * σ # evaluate a function and add some noise\nx_test = range(xmin - 0.1, xmax + 0.1; length=300)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Plot the data","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"scatter(x_train, y_train; label=\"data\")\nplot!(x_test, sinc; label=\"true function\")","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"(Plot: the noisy training data and the true sinc function.)","category":"page"},{"location":"examples/train-kernel-parameters/#Manual-Approach","page":"Train Kernel Parameters","title":"Manual Approach","text":"","category":"section"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"The first option is to rebuild the parametrized kernel from a vector of parameters in each evaluation of the cost function. This is similar to the approach taken in Stheno.jl.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"To train the kernel parameters via Zygote.jl, we need a function that creates a kernel from an array of parameters. 
A simple way to ensure that the kernel parameters are positive is to optimize over the logarithm of the parameters.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"function kernel_creator(θ)\n    return (exp(θ[1]) * SqExponentialKernel() + exp(θ[2]) * Matern32Kernel()) ∘\n           ScaleTransform(exp(θ[3]))\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"From theory, we know the prediction for a test set x given the kernel parameters and normalization constant:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"function f(x, x_train, y_train, θ)\n    k = kernel_creator(θ[1:3])\n    return kernelmatrix(k, x, x_train) *\n           ((kernelmatrix(k, x_train) + exp(θ[4]) * I) \\ y_train)\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Let's look at our prediction. With starting parameters p0 (picked so we get the right local minimum for demonstration) we get:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"p0 = [1.1, 0.1, 0.01, 0.001]\nθ = log.(p0)\nŷ = f(x_test, x_train, y_train, θ)\nscatter(x_train, y_train; label=\"data\")\nplot!(x_test, sinc; label=\"true function\")\nplot!(x_test, ŷ; label=\"prediction\")","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"(Plot: data, true function, and the initial prediction.)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"We define the following loss:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"function loss(θ)\n    ŷ = f(x_train, x_train, y_train, θ)\n    return norm(y_train - ŷ) + exp(θ[4]) * norm(ŷ)\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"The loss with our starting point:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"loss(θ)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"2.613933959118708","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Computational cost for one step:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"@benchmark let\n    θ = log.(p0)\n    opt = Optimise.ADAGrad(0.5)\n    grads = only((Zygote.gradient(loss, θ)))\n    Optimise.update!(opt, θ, grads)\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"BenchmarkTools.Trial: 6169 samples with 1 evaluation.\n Range (min … max): 661.998 μs … 
9.628 ms ┊ GC (min … max): 0.00% … 66.28%\n Time (median): 743.080 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 806.319 μs ± 267.902 μs ┊ GC (mean ± σ): 6.30% ± 11.71%\n\n ▁▂▇█▇▅▄▂ ▁ ▁ ▁\n ▇█████████▆▅▅▆▁▃▃▃▅▄▇▆▅▅▁▃▃▁▁▁▁▃▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▆▆████████▇ █\n 662 μs Histogram: log(frequency) by time 1.74 ms <\n\n Memory estimate: 2.98 MiB, allocs estimate: 1563.","category":"page"},{"location":"examples/train-kernel-parameters/#Training-the-model","page":"Train Kernel Parameters","title":"Training the model","text":"","category":"section"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Setting an initial value and initializing the optimizer:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"θ = log.(p0) # Initial vector\nopt = Optimise.ADAGrad(0.5)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Optimize","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"anim = Animation()\nfor i in 1:15\n grads = only((Zygote.gradient(loss, θ)))\n Optimise.update!(opt, θ, grads)\n scatter(\n x_train, y_train; lab=\"data\", title=\"i = $(i), Loss = $(round(loss(θ), digits = 4))\"\n )\n plot!(x_test, sinc; lab=\"true function\")\n plot!(x_test, f(x_test, x_train, y_train, θ); lab=\"Prediction\", lw=3.0)\n frame(anim)\nend\ngif(anim, \"train-kernel-param.gif\"; show_msg=false, fps=15);","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"(Image: )","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Final loss","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"loss(θ)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"0.5241118228076058","category":"page"},{"location":"examples/train-kernel-parameters/#Using-ParameterHandling.jl","page":"Train Kernel Parameters","title":"Using ParameterHandling.jl","text":"","category":"section"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Alternatively, we can use the ParameterHandling.jl package to handle the requirement that all kernel parameters should be positive. 
The package also allows arbitrarily nesting named tuples that make the parameters more human readable, without having to remember their position in a flat vector.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"using ParameterHandling\n\nraw_initial_θ = (\n k1=positive(1.1), k2=positive(0.1), k3=positive(0.01), noise_var=positive(0.001)\n)\n\nflat_θ, unflatten = ParameterHandling.value_flatten(raw_initial_θ)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"4-element Vector{Float64}:\n 0.09531016625781467\n -2.3025852420056685\n -4.6051716761053205\n -6.907770180254354","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"We define a few relevant functions and note that compared to the previous kernel_creator function, we do not need explicit exps.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"function kernel_creator(θ)\n return (θ.k1 * SqExponentialKernel() + θ.k2 * Matern32Kernel()) ∘ ScaleTransform(θ.k3)\nend\n\nfunction f(x, x_train, y_train, θ)\n k = kernel_creator(θ)\n return kernelmatrix(k, x, x_train) *\n ((kernelmatrix(k, x_train) + θ.noise_var * I) \\ y_train)\nend\n\nfunction loss(θ)\n ŷ = f(x_train, x_train, y_train, θ)\n return norm(y_train - ŷ) + θ.noise_var * norm(ŷ)\nend\n\ninitial_θ = ParameterHandling.value(raw_initial_θ)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"The loss at the initial parameter values:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"(loss ∘ unflatten)(flat_θ)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"2.613933959118708","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Cost per step","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"@benchmark let\n θ = flat_θ[:]\n opt = Optimise.ADAGrad(0.5)\n grads = (Zygote.gradient(loss ∘ unflatten, θ))[1]\n Optimise.update!(opt, θ, grads)\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"BenchmarkTools.Trial: 5269 samples with 1 evaluation.\n Range (min … max): 812.149 μs … 5.413 ms ┊ GC (min … max): 0.00% … 20.33%\n Time (median): 885.848 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 945.969 μs ± 249.081 μs ┊ GC (mean ± σ): 5.15% ± 10.95%\n\n ▃▄▅█▇▅▄▂ ▁ ▁\n █████████▇▆▅▄▄▃▄▄▁▁▄▆▇▇▇▅▅▄▃▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▅███████ █\n 812 μs Histogram: log(frequency) by time 1.98 ms <\n\n Memory estimate: 3.08 MiB, allocs estimate: 2228.","category":"page"},{"location":"examples/train-kernel-parameters/#Training-the-model-2","page":"Train Kernel Parameters","title":"Training the model","text":"","category":"section"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel 
Parameters","text":"Optimize","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"opt = Optimise.ADAGrad(0.5)\nfor i in 1:15\n grads = (Zygote.gradient(loss ∘ unflatten, flat_θ))[1]\n Optimise.update!(opt, flat_θ, grads)\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Final loss","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"(loss ∘ unflatten)(flat_θ)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"0.524117624126251","category":"page"},{"location":"examples/train-kernel-parameters/#Flux.destructure","page":"Train Kernel Parameters","title":"Flux.destructure","text":"","category":"section"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"If we don't want to write an explicit function to construct the kernel, we can alternatively use the Flux.destructure function. Again, we need to ensure that the parameters are positive. Note that the exp function is now part of the loss function, instead of part of the kernel construction.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"We could also use ParameterHandling.jl here. To do so, one would remove the exps from the loss function below and call loss ∘ unflatten as above.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"θ = [1.1, 0.1, 0.01, 0.001]\n\nkernel = (θ[1] * SqExponentialKernel() + θ[2] * Matern32Kernel()) ∘ ScaleTransform(θ[3])\n\nparams, kernelc = Flux.destructure(kernel);","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"This returns the trainable params of the kernel and a function to reconstruct the kernel.","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"kernelc(params)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Sum of 2 kernels:\n\tSquared Exponential Kernel (metric = Distances.Euclidean(0.0))\n\t\t\t- σ² = 1.1\n\tMatern 3/2 Kernel (metric = Distances.Euclidean(0.0))\n\t\t\t- σ² = 0.1\n\t- Scale Transform (s = 0.01)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"From theory we know the prediction for a test set x given the kernel parameters and normalization constant","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"function f(x, x_train, y_train, θ)\n k = kernelc(θ[1:3])\n return kernelmatrix(k, x, x_train) * ((kernelmatrix(k, x_train) + (θ[4]) * I) \\ y_train)\nend\n\nfunction loss(θ)\n ŷ = f(x_train, x_train, y_train, exp.(θ))\n return norm(y_train - ŷ) + exp(θ[4]) * norm(ŷ)\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Cost for 
one step","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"@benchmark let θt = θ[:], optt = Optimise.ADAGrad(0.5)\n grads = only((Zygote.gradient(loss, θt)))\n Optimise.update!(optt, θt, grads)\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"BenchmarkTools.Trial: 6311 samples with 1 evaluation.\n Range (min … max): 662.188 μs … 6.425 ms ┊ GC (min … max): 0.00% … 15.96%\n Time (median): 739.754 μs ┊ GC (median): 0.00%\n Time (mean ± σ): 789.411 μs ± 245.120 μs ┊ GC (mean ± σ): 5.41% ± 10.91%\n\n ▄▅▅█▇▅▄▂ ▁ ▁\n █████████▆▃▄▄▁▁▄▆▆▅▅▃▁▁▁▁▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄▅▇██████ █\n 662 μs Histogram: log(frequency) by time 1.91 ms <\n\n Memory estimate: 2.98 MiB, allocs estimate: 1558.","category":"page"},{"location":"examples/train-kernel-parameters/#Training-the-model-3","page":"Train Kernel Parameters","title":"Training the model","text":"","category":"section"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"The loss at our initial parameter values:","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"θ = log.([1.1, 0.1, 0.01, 0.001]) # Initial vector\nloss(θ)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"2.613933959118708","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Initialize optimizer","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"opt = Optimise.ADAGrad(0.5)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Optimize","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"for i in 1:15\n grads = only((Zygote.gradient(loss, θ)))\n Optimise.update!(opt, θ, grads)\nend","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"Final loss","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"loss(θ)","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"0.5241118228076058","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"
Package and system information\n\nPackage information:\n\nStatus `~/work/KernelFunctions.jl/KernelFunctions.jl/examples/train-kernel-parameters/Project.toml`\n  [6e4b80f9] BenchmarkTools v1.5.0\n  [31c24e10] Distributions v0.25.107\n  [587475ba] Flux v0.14.13\n  [f6369f11] ForwardDiff v0.10.36\n  [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`\n  [98b081ad] Literate v2.16.1\n⌅ [2412ca09] ParameterHandling v0.4.10\n  [91a5bcdd] Plots v1.40.2\n  [e88e6eb3] Zygote v0.6.69\n  [37e2e46d] LinearAlgebra\nInfo Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated`\n\nTo reproduce this notebook's package environment, you can download the full Manifest.toml.\n\nSystem information:\n\nJulia Version 1.10.2\nCommit bd47eca2c8a (2024-03-01 10:14 UTC)\nBuild Info:\n  Official https://julialang.org/ release\nPlatform Info:\n  OS: Linux (x86_64-linux-gnu)\n  CPU: 4 × AMD EPYC 7763 64-Core Processor\n  WORD_SIZE: 64\n  LIBM: libopenlibm\n  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\nThreads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)\nEnvironment:\n  JULIA_DEBUG = Documenter\n  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src
      ","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"","category":"page"},{"location":"examples/train-kernel-parameters/","page":"Train Kernel Parameters","title":"Train Kernel Parameters","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"EditURL = \"../../../../examples/kernel-ridge-regression/script.jl\"","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"EditURL = \"https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/blob/master/examples/kernel-ridge-regression/script.jl\"","category":"page"},{"location":"examples/kernel-ridge-regression/#Kernel-Ridge-Regression","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"","category":"section"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"(Image: )","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"Building on linear regression, we can fit non-linear data sets by introducing a feature space. In a higher-dimensional feature space, we can overfit the data; ridge regression introduces regularization to avoid this. In this notebook we show how we can use KernelFunctions.jl for kernel ridge regression.","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"# Loading and setup of required packages\nusing KernelFunctions\nusing LinearAlgebra\nusing Distributions\n\n# Plotting\nusing Plots;\ndefault(; lw=2.0, legendfontsize=11.0, ylims=(-150, 500));\n\nusing Random: seed!\nseed!(42);","category":"page"},{"location":"examples/kernel-ridge-regression/#Toy-data","page":"Kernel Ridge Regression","title":"Toy data","text":"","category":"section"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"Here we use a one-dimensional toy problem. 
We generate data using the fourth-order polynomial f(x) = (x+4)(x+1)(x-1)(x-3):","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"f_truth(x) = (x + 4) * (x + 1) * (x - 1) * (x - 3)\n\nx_train = -5:0.5:5\nx_test = -7:0.1:7\n\nnoise = rand(Uniform(-20, 20), length(x_train))\ny_train = f_truth.(x_train) + noise\ny_test = f_truth.(x_test)\n\nplot(x_test, y_test; label=raw\"$f(x)$\")\nscatter!(x_train, y_train; seriescolor=1, label=\"observations\")","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"(Plot: the true function f(x) together with the noisy observations.)","category":"page"},{"location":"examples/kernel-ridge-regression/#Linear-regression","page":"Kernel Ridge Regression","title":"Linear regression","text":"","category":"section"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"For training inputs \mathrm{X} = (\mathbf{x}_n)_{n=1}^N and observations \mathbf{y} = (y_n)_{n=1}^N, the linear regression weights \mathbf{w} using the least-squares estimator are given by","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\mathbf{w} = (\mathrm{X}^\top \mathrm{X})^{-1} \mathrm{X}^\top \mathbf{y}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"We predict at test inputs \mathbf{x}_* using","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\hat{y}_* = \mathbf{x}_*^\top \mathbf{w}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"This is implemented by linear_regression:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"function linear_regression(X, y, Xstar)\n    weights = (X' * X) \\ (X' * y)\n    return Xstar * weights\nend;","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"A linear regression fit to the above data set:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"y_pred = linear_regression(x_train, y_train, x_test)\nscatter(x_train, y_train; label=\"observations\")\nplot!(x_test, y_pred; label=\"linear fit\")","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"(Plot: a straight-line fit to the observations.)","category":"page"},{"location":"examples/kernel-ridge-regression/#Featurization","page":"Kernel Ridge Regression","title":"Featurization","text":"","category":"section"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"We can improve the fit by including additional features, i.e. generalizing to \tilde{\mathrm{X}} = (\phi(x_n))_{n=1}^N, where \phi(x) constructs a feature vector for each input x. Here we include powers of the input, \phi(x) = (1, x, x^2, \dots, x^d):","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"function featurize_poly(x; degree=1)\n    return repeat(x, 1, degree + 1) .^ (0:degree)'\nend\n\nfunction featurized_fit_and_plot(degree)\n    X = featurize_poly(x_train; degree=degree)\n    Xstar = featurize_poly(x_test; degree=degree)\n    y_pred = linear_regression(X, y_train, Xstar)\n    scatter(x_train, y_train; legend=false, title=\"fit of order $degree\")\n    return plot!(x_test, y_pred)\nend\n\nplot((featurized_fit_and_plot(degree) for degree in 1:4)...)","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"(Plot: polynomial feature fits of order 1 to 4.)","category":"page"}
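,{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"To make the feature map concrete, here is a quick check (not part of the original notebook) of what featurize_poly returns for a single input:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"julia> featurize_poly([2.0]; degree=3)  # φ(2) = (1, 2, 2², 2³)\n1×4 Matrix{Float64}:\n 1.0  2.0  4.0  8.0","category":"page"}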
,{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"Note that the fit becomes perfect when we include exactly as many orders in the features as we have in the underlying polynomial (4).","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"However, when increasing the number of features, we can quickly overfit to noise in the data set:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"featurized_fit_and_plot(20)","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"(Plot: an overfitted polynomial fit of order 20.)","category":"page"},{"location":"examples/kernel-ridge-regression/#Ridge-regression","page":"Kernel Ridge Regression","title":"Ridge regression","text":"","category":"section"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"To counteract this unwanted behaviour, we can introduce regularization. This leads to ridge regression with L_2 regularization of the weights (Tikhonov regularization). Instead of the weights in linear regression,","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\mathbf{w} = (\mathrm{X}^\top \mathrm{X})^{-1} \mathrm{X}^\top \mathbf{y}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"we introduce the ridge parameter \lambda:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\mathbf{w} = (\mathrm{X}^\top \mathrm{X} + \lambda \mathbb{1})^{-1} \mathrm{X}^\top \mathbf{y}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"As before, we predict at test inputs \mathbf{x}_* using","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\hat{y}_* = \mathbf{x}_*^\top \mathbf{w}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"This is implemented by ridge_regression:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"function ridge_regression(X, y, Xstar, lambda)\n    weights = (X' * X + lambda * I) \\ (X' * y)\n    return Xstar * weights\nend\n\nfunction regularized_fit_and_plot(degree, lambda)\n    X = featurize_poly(x_train; degree=degree)\n    Xstar = featurize_poly(x_test; degree=degree)\n    y_pred = ridge_regression(X, y_train, Xstar, lambda)\n    scatter(x_train, y_train; legend=false, title=\"\\$\\\\lambda=$lambda\\$\")\n    return plot!(x_test, y_pred)\nend\n\nplot((regularized_fit_and_plot(20, lambda) for lambda in (1e-3, 1e-2, 1e-1, 1))...)","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"(Plot: order-20 ridge fits for λ = 1e-3, 1e-2, 1e-1, 1.)","category":"page"},{"location":"examples/kernel-ridge-regression/#Kernel-ridge-regression","page":"Kernel Ridge Regression","title":"Kernel ridge regression","text":"","category":"section"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"Instead of constructing the feature matrix explicitly, we can use kernels to replace inner products of feature vectors with a kernel evaluation: \langle \phi(x), \phi(x') \rangle = k(x, x'), or \tilde{\mathrm{X}} \tilde{\mathrm{X}}^\top = \mathrm{K}, where \mathrm{K}_{ij} = k(x_i, x_j).","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"To apply this \"kernel trick\" to ridge regression, we can rewrite the ridge estimate for the weights","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\mathbf{w} = (\mathrm{X}^\top \mathrm{X} + \lambda \mathbb{1})^{-1} \mathrm{X}^\top \mathbf{y}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"using the matrix inversion lemma as","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\mathbf{w} = \mathrm{X}^\top (\mathrm{X} \mathrm{X}^\top + \lambda \mathbb{1})^{-1} \mathbf{y}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"where we can now replace the inner product with the kernel matrix,","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\mathbf{w} = \mathrm{X}^\top (\mathrm{K} + \lambda \mathbb{1})^{-1} \mathbf{y}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"And the prediction yields another inner product,","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\hat{y}_* = \mathbf{x}_*^\top \mathbf{w} = \langle \mathbf{x}_*, \mathbf{w} \rangle = \mathbf{k}_* (\mathrm{K} + \lambda \mathbb{1})^{-1} \mathbf{y}","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"where (\mathbf{k}_*)_n = k(x_*, x_n).","category":"page"}
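,{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"Before implementing this, a quick numerical sanity check of the matrix-inversion-lemma step (not part of the original notebook; it reuses featurize_poly and y_train from above). Both forms of the estimator should agree up to floating-point error:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"X = featurize_poly(x_train; degree=4)\nλ = 1e-1\n\n# primal form: solve a (degree+1)×(degree+1) system\nw_primal = (X' * X + λ * I) \\ (X' * y_train)\n\n# dual form from the matrix inversion lemma: solve an N×N system\nw_dual = X' * ((X * X' + λ * I) \\ y_train)\n\nisapprox(w_primal, w_dual; rtol=1e-4)  # true (up to floating-point tolerance)","category":"page"}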
,{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"This is implemented by kernel_ridge_regression:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"function kernel_ridge_regression(k, X, y, Xstar, lambda)\n    K = kernelmatrix(k, X)\n    kstar = kernelmatrix(k, Xstar, X)\n    return kstar * ((K + lambda * I) \\ y)\nend;","category":"page"}
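,{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"As a quick consistency check (again not part of the original notebook): with a LinearKernel with zero offset, whose feature map is the identity, kernel ridge regression reproduces plain ridge regression on the raw inputs:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"# raw scalar inputs as single-column feature matrices\nx_mat = reshape(collect(x_train), :, 1)\nxstar_mat = reshape(collect(x_test), :, 1)\n\nkernel_ridge_regression(LinearKernel(), x_train, y_train, x_test, 1e-4) ≈\n    ridge_regression(x_mat, y_train, xstar_mat, 1e-4)  # true","category":"page"}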
mathbfy","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"using the matrix inversion lemma as","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"mathbfw = mathrmX^top (mathrmX mathrmX^top + lambda mathbb1)^-1 mathbfy","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"where we can now replace the inner product with the kernel matrix,","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"mathbfw = mathrmX^top (mathrmK + lambda mathbb1)^-1 mathbfy","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"And the prediction yields another inner product,","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"haty_* = mathbfx_*^top mathbfw = langle mathbfx_* mathbfw rangle = mathbfk_* (mathrmK + lambda mathbb1)^-1 mathbfy","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"where (mathbfk_*)_n = k(x_* x_n).","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"This is implemented by kernel_ridge_regression:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"function kernel_ridge_regression(k, X, y, Xstar, lambda)\n K = kernelmatrix(k, X)\n kstar = kernelmatrix(k, Xstar, X)\n return kstar * ((K + lambda * I) \\ y)\nend;","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"Now, instead of explicitly constructing features, we can simply pass in a PolynomialKernel object:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"function kernelized_fit_and_plot(kernel, lambda=1e-4)\n y_pred = kernel_ridge_regression(kernel, x_train, y_train, x_test, lambda)\n if kernel isa PolynomialKernel\n title = string(\"order \", kernel.degree)\n else\n title = string(nameof(typeof(kernel)))\n end\n scatter(x_train, y_train; label=nothing)\n return plot!(x_test, y_pred; label=nothing, title=title)\nend\n\nplot((kernelized_fit_and_plot(PolynomialKernel(; degree=degree, c=1)) for degree in 1:4)...)","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge 
Regression","text":"However, we can now also use kernels that would have an infinite-dimensional feature expansion, such as the squared exponential kernel:","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"kernelized_fit_and_plot(SqExponentialKernel())","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"
,{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"Package and system information\n\nPackage information:\n\nStatus `~/work/KernelFunctions.jl/KernelFunctions.jl/examples/kernel-ridge-regression/Project.toml`\n  [31c24e10] Distributions v0.25.107\n  [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`\n  [98b081ad] Literate v2.16.1\n  [91a5bcdd] Plots v1.40.2\n  [37e2e46d] LinearAlgebra\n\nTo reproduce this notebook's package environment, you can download the full Manifest.toml.\n\nSystem information:\n\nJulia Version 1.10.2\nCommit bd47eca2c8a (2024-03-01 10:14 UTC)\nBuild Info:\n  Official https://julialang.org/ release\nPlatform Info:\n  OS: Linux (x86_64-linux-gnu)\n  CPU: 4 × AMD EPYC 7763 64-Core Processor\n  WORD_SIZE: 64\n  LIBM: libopenlibm\n  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)\nThreads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)\nEnvironment:\n  JULIA_DEBUG = Documenter\n  JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src
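\n\nTo instantiate the downloaded environment, a minimal sketch (assuming Project.toml and Manifest.toml sit in the current directory) is:\n\nusing Pkg\nPkg.activate(\".\")  # activate the downloaded project\nPkg.instantiate()  # install the exact package versions from the Manifest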
      ","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"","category":"page"},{"location":"examples/kernel-ridge-regression/","page":"Kernel Ridge Regression","title":"Kernel Ridge Regression","text":"This page was generated using Literate.jl.","category":"page"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":" CurrentModule = KernelFunctions","category":"page"},{"location":"kernels/#Kernel-Functions","page":"Kernel Functions","title":"Kernel Functions","text":"","category":"section"},{"location":"kernels/#base_kernels","page":"Kernel Functions","title":"Base Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"These are the basic kernels without any transformation of the data. They are the building blocks of KernelFunctions.","category":"page"},{"location":"kernels/#Constant-Kernels","page":"Kernel Functions","title":"Constant Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"ZeroKernel\nConstantKernel\nWhiteKernel\nEyeKernel","category":"page"},{"location":"kernels/#KernelFunctions.ZeroKernel","page":"Kernel Functions","title":"KernelFunctions.ZeroKernel","text":"ZeroKernel()\n\nZero kernel.\n\nDefinition\n\nFor inputs x x, the zero kernel is defined as\n\nk(x x) = 0\n\nThe output type depends on x and x.\n\nSee also: ConstantKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.ConstantKernel","page":"Kernel Functions","title":"KernelFunctions.ConstantKernel","text":"ConstantKernel(; c::Real=1.0)\n\nKernel of constant value c.\n\nDefinition\n\nFor inputs x x, the kernel of constant value c geq 0 is defined as\n\nk(x x) = c\n\nSee also: ZeroKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.WhiteKernel","page":"Kernel Functions","title":"KernelFunctions.WhiteKernel","text":"WhiteKernel()\n\nWhite noise kernel.\n\nDefinition\n\nFor inputs x x, the white noise kernel is defined as\n\nk(x x) = delta(x x)\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.EyeKernel","page":"Kernel Functions","title":"KernelFunctions.EyeKernel","text":"EyeKernel()\n\nAlias of WhiteKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Cosine-Kernel","page":"Kernel Functions","title":"Cosine Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"CosineKernel","category":"page"},{"location":"kernels/#KernelFunctions.CosineKernel","page":"Kernel Functions","title":"KernelFunctions.CosineKernel","text":"CosineKernel(; metric=Euclidean())\n\nCosine kernel with respect to the metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the cosine kernel is defined as\n\nk(x x) = cos(pi d(x x))\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Exponential-Kernels","page":"Kernel Functions","title":"Exponential Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"ExponentialKernel\nGibbsKernel\nLaplacianKernel\nSqExponentialKernel\nSEKernel\nGaussianKernel\nRBFKernel\nGammaExponentialKernel","category":"page"},{"location":"kernels/#KernelFunctions.ExponentialKernel","page":"Kernel Functions","title":"KernelFunctions.ExponentialKernel","text":"ExponentialKernel(; 
metric=Euclidean())\n\nExponential kernel with respect to the metric.\n\nDefinition\n\nFor inputs x, x' and metric d(⋅, ⋅), the exponential kernel is defined as\n\nk(x, x') = exp(-d(x, x'))\n\nBy default, d is the Euclidean metric d(x, x') = ‖x - x'‖₂.\n\nSee also: GammaExponentialKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.GibbsKernel","page":"Kernel Functions","title":"KernelFunctions.GibbsKernel","text":"GibbsKernel(; lengthscale)\n\nGibbs Kernel with lengthscale function lengthscale.\n\nThe Gibbs kernel is a non-stationary generalisation of the squared exponential kernel. The lengthscale parameter l becomes a function of position l(x).\n\nDefinition\n\nFor inputs x, x', the Gibbs kernel with lengthscale function l(⋅) is defined as\n\nk(x, x'; l) = sqrt((2 l(x) l(x')) / (l(x)² + l(x')²)) exp(-(x - x')² / (l(x)² + l(x')²))\n\nFor a constant function l ≡ c, one recovers the SqExponentialKernel with lengthscale c.\n\nReferences\n\nMark N. Gibbs. \"Bayesian Gaussian Processes for Regression and Classification.\" PhD thesis, 1997\n\nChristopher J. Paciorek and Mark J. Schervish. \"Nonstationary Covariance Functions for Gaussian Process Regression\". NeurIPS, 2003\n\nSami Remes, Markus Heinonen, Samuel Kaski. \"Non-Stationary Spectral Kernels\". arXiv:1705.08736, 2017\n\nSami Remes, Markus Heinonen, Samuel Kaski. \"Neural Non-Stationary Spectral Kernel\". arXiv:1811.10978, 2018\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.LaplacianKernel","page":"Kernel Functions","title":"KernelFunctions.LaplacianKernel","text":"LaplacianKernel()\n\nAlias of ExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.SqExponentialKernel","page":"Kernel Functions","title":"KernelFunctions.SqExponentialKernel","text":"SqExponentialKernel(; metric=Euclidean())\n\nSquared exponential kernel with respect to the metric.\n\nDefinition\n\nFor inputs x, x' and metric d(⋅, ⋅), the squared exponential kernel is defined as\n\nk(x, x') = exp(-d(x, x')² / 2)\n\nBy default, d is the Euclidean metric d(x, x') = ‖x - x'‖₂.\n\nSee also: GammaExponentialKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.SEKernel","page":"Kernel Functions","title":"KernelFunctions.SEKernel","text":"SEKernel()\n\nAlias of SqExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.GaussianKernel","page":"Kernel Functions","title":"KernelFunctions.GaussianKernel","text":"GaussianKernel()\n\nAlias of SqExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.RBFKernel","page":"Kernel Functions","title":"KernelFunctions.RBFKernel","text":"RBFKernel()\n\nAlias of SqExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.GammaExponentialKernel","page":"Kernel Functions","title":"KernelFunctions.GammaExponentialKernel","text":"GammaExponentialKernel(; γ::Real=1.0, metric=Euclidean())\n\nγ-exponential kernel with respect to the metric and with parameter γ.\n\nDefinition\n\nFor inputs x, x' and metric d(⋅, ⋅), the γ-exponential kernel[RW] with parameter γ ∈ (0, 2] is defined as\n\nk(x, x'; γ) = exp(-d(x, x')^γ)\n\nBy default, d is the Euclidean metric d(x, x') = ‖x - x'‖₂.\n\nSee also: ExponentialKernel, SqExponentialKernel\n\n[RW]: C. E. Rasmussen & C. K. I. Williams (2006). 
Gaussian Processes for Machine Learning.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Exponentiated-Kernel","page":"Kernel Functions","title":"Exponentiated Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"ExponentiatedKernel","category":"page"},{"location":"kernels/#KernelFunctions.ExponentiatedKernel","page":"Kernel Functions","title":"KernelFunctions.ExponentiatedKernel","text":"ExponentiatedKernel()\n\nExponentiated kernel.\n\nDefinition\n\nFor inputs x, x' ∈ ℝᵈ, the exponentiated kernel is defined as\n\nk(x, x') = exp(xᵀx')\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Fractional-Brownian-Motion-Kernel","page":"Kernel Functions","title":"Fractional Brownian Motion Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"FBMKernel","category":"page"},{"location":"kernels/#KernelFunctions.FBMKernel","page":"Kernel Functions","title":"KernelFunctions.FBMKernel","text":"FBMKernel(; h::Real=0.5)\n\nFractional Brownian motion kernel with Hurst index h.\n\nDefinition\n\nFor inputs x, x' ∈ ℝᵈ, the fractional Brownian motion kernel with Hurst index h ∈ [0, 1] is defined as\n\nk(x, x'; h) = (‖x‖₂^{2h} + ‖x'‖₂^{2h} - ‖x - x'‖₂^{2h}) / 2\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Gabor-Kernel","page":"Kernel Functions","title":"Gabor Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"gaborkernel","category":"page"},{"location":"kernels/#KernelFunctions.gaborkernel","page":"Kernel Functions","title":"KernelFunctions.gaborkernel","text":"gaborkernel(;\n    sqexponential_transform=IdentityTransform(), cosine_transform=IdentityTransform()\n)\n\nConstruct a Gabor kernel with transformations sqexponential_transform and cosine_transform of the inputs of the underlying squared exponential and cosine kernel, respectively.\n\nDefinition\n\nFor inputs x, x' ∈ ℝᵈ, the Gabor kernel with transformations f and g of the inputs to the squared exponential and cosine kernel, respectively, is defined as\n\nk(x, x'; f, g) = exp(-‖f(x) - f(x')‖₂² / 2) cos(π ‖g(x) - g(x')‖₂)\n\n\n\n\n\n","category":"function"},{"location":"kernels/#Matérn-Kernels","page":"Kernel Functions","title":"Matérn Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"MaternKernel\nMatern12Kernel\nMatern32Kernel\nMatern52Kernel","category":"page"},{"location":"kernels/#KernelFunctions.MaternKernel","page":"Kernel Functions","title":"KernelFunctions.MaternKernel","text":"MaternKernel(; ν::Real=1.5, metric=Euclidean())\n\nMatérn kernel of order ν with respect to the metric.\n\nDefinition\n\nFor inputs x, x' and metric d(⋅, ⋅), the Matérn kernel of order ν > 0 is defined as\n\nk(x, x'; ν) = (2^{1-ν} / Γ(ν)) (√(2ν) d(x, x'))^ν K_ν(√(2ν) d(x, x'))\n\nwhere Γ is the Gamma function and K_ν is the modified Bessel function of the second kind of order ν. 
By default, d is the Euclidean metric d(x x) = x - x_2.\n\nA Gaussian process with a Matérn kernel is lceil nu rceil - 1-times differentiable in the mean-square sense.\n\nnote: Note\nDifferentiation with respect to the order ν is not currently supported.\n\nSee also: Matern12Kernel, Matern32Kernel, Matern52Kernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.Matern12Kernel","page":"Kernel Functions","title":"KernelFunctions.Matern12Kernel","text":"Matern12Kernel()\n\nAlias of ExponentialKernel.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.Matern32Kernel","page":"Kernel Functions","title":"KernelFunctions.Matern32Kernel","text":"Matern32Kernel(; metric=Euclidean())\n\nMatérn kernel of order 32 with respect to the metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the Matérn kernel of order 32 is given by\n\nk(x x) = big(1 + sqrt3 d(x x) big) expbig(- sqrt3 d(x x) big)\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nSee also: MaternKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.Matern52Kernel","page":"Kernel Functions","title":"KernelFunctions.Matern52Kernel","text":"Matern52Kernel(; metric=Euclidean())\n\nMatérn kernel of order 52 with respect to the metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the Matérn kernel of order 52 is given by\n\nk(x x) = bigg(1 + sqrt5 d(x x) + frac53 d(x x)^2bigg)\n expbig(- sqrt5 d(x x) big)\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nSee also: MaternKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Neural-Network-Kernel","page":"Kernel Functions","title":"Neural Network Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"NeuralNetworkKernel","category":"page"},{"location":"kernels/#KernelFunctions.NeuralNetworkKernel","page":"Kernel Functions","title":"KernelFunctions.NeuralNetworkKernel","text":"NeuralNetworkKernel()\n\nKernel of a Gaussian process obtained as the limit of a Bayesian neural network with a single hidden layer as the number of units goes to infinity.\n\nDefinition\n\nConsider the single-layer Bayesian neural network f colon mathbbR^d to mathbbR with h hidden units defined by\n\nf(x b v u) = b + sqrtfracpi2 sum_i=1^h v_i mathrmerfbig(u_i^top xbig)\n\nwhere mathrmerf is the error function, and with prior distributions\n\nbeginaligned\nb sim mathcalN(0 sigma_b^2)\nv sim mathcalN(0 sigma_v^2 mathrmI_hh)\nu_i sim mathcalN(0 mathrmI_d2) qquad (i = 1ldotsh)\nendaligned\n\nAs h to infty, the neural network converges to the Gaussian process\n\ng(cdot) sim mathcalGPbig(0 sigma_b^2 + sigma_v^2 k(cdot cdot)big)\n\nwhere the neural network kernel k is given by\n\nk(x x) = arcsinleft(fracx^top xsqrtbig(1 + x^2_2big) big(1 + x_2^2big)right)\n\nfor inputs x x in mathbbR^d.[CW]\n\n[CW]: C. K. I. Williams (1998). 
Computation with infinite neural networks.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Periodic-Kernel","page":"Kernel Functions","title":"Periodic Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"PeriodicKernel\nPeriodicKernel(::DataType, ::Int)","category":"page"},{"location":"kernels/#KernelFunctions.PeriodicKernel","page":"Kernel Functions","title":"KernelFunctions.PeriodicKernel","text":"PeriodicKernel(; r::AbstractVector=ones(Float64, 1))\n\nPeriodic kernel with parameter r.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the periodic kernel with parameter r_i 0 is defined[DM] as\n\nk(x x r) = expbigg(- frac12 sum_i=1^d bigg(fracsinbig(pi(x_i - x_i)big)r_ibigg)^2bigg)\n\n[DM]: D. J. C. MacKay (1998). Introduction to Gaussian Processes.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.PeriodicKernel-Tuple{DataType, Int64}","page":"Kernel Functions","title":"KernelFunctions.PeriodicKernel","text":"PeriodicKernel([T=Float64, dims::Int=1])\n\nCreate a PeriodicKernel with parameter r=ones(T, dims).\n\n\n\n\n\n","category":"method"},{"location":"kernels/#Piecewise-Polynomial-Kernel","page":"Kernel Functions","title":"Piecewise Polynomial Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"PiecewisePolynomialKernel","category":"page"},{"location":"kernels/#KernelFunctions.PiecewisePolynomialKernel","page":"Kernel Functions","title":"KernelFunctions.PiecewisePolynomialKernel","text":"PiecewisePolynomialKernel(; dim::Int, degree::Int=0, metric=Euclidean())\nPiecewisePolynomialKernel{degree}(; dim::Int, metric=Euclidean())\n\nPiecewise polynomial kernel of degree degree for inputs of dimension dim with support in the unit ball with respect to the metric.\n\nDefinition\n\nFor inputs x x of dimension m and metric d(cdot cdot), the piecewise polynomial kernel of degree v in 0123 is defined as\n\nk(x x v) = max(1 - d(x x) 0)^alpha(vm) f_vm(d(x x))\n\nwhere alpha(v m) = lfloor fracm2rfloor + 2v + 1 and f_vm are polynomials of degree v given by\n\nbeginaligned\nf_0m(r) = 1 \nf_1m(r) = 1 + (j + 1) r \nf_2m(r) = 1 + (j + 2) r + big((j^2 + 4j + 3) 3big) r^2 \nf_3m(r) = 1 + (j + 3) r + big((6 j^2 + 36j + 45) 15big) r^2 + big((j^3 + 9 j^2 + 23j + 15) 15big) r^3\nendaligned\n\nwhere j = lfloor fracm2rfloor + v + 1. 
By default, d is the Euclidean metric d(x x) = x - x_2.\n\nThe kernel is 2v times continuously differentiable and the corresponding Gaussian process is hence v times mean-square differentiable.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Polynomial-Kernels","page":"Kernel Functions","title":"Polynomial Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"LinearKernel\nPolynomialKernel","category":"page"},{"location":"kernels/#KernelFunctions.LinearKernel","page":"Kernel Functions","title":"KernelFunctions.LinearKernel","text":"LinearKernel(; c::Real=0.0)\n\nLinear kernel with constant offset c.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the linear kernel with constant offset c geq 0 is defined as\n\nk(x x c) = x^top x + c\n\nSee also: PolynomialKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.PolynomialKernel","page":"Kernel Functions","title":"KernelFunctions.PolynomialKernel","text":"PolynomialKernel(; degree::Int=2, c::Real=0.0)\n\nPolynomial kernel of degree degree with constant offset c.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the polynomial kernel of degree nu in mathbbN with constant offset c geq 0 is defined as\n\nk(x x c nu) = (x^top x + c)^nu\n\nSee also: LinearKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Rational-Kernels","page":"Kernel Functions","title":"Rational Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"RationalKernel\nRationalQuadraticKernel\nGammaRationalKernel","category":"page"},{"location":"kernels/#KernelFunctions.RationalKernel","page":"Kernel Functions","title":"KernelFunctions.RationalKernel","text":"RationalKernel(; α::Real=2.0, metric=Euclidean())\n\nRational kernel with shape parameter α and given metric.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the rational kernel with shape parameter alpha 0 is defined as\n\nk(x x alpha) = bigg(1 + fracd(x x)alphabigg)^-alpha\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nThe ExponentialKernel is recovered in the limit as alpha to infty.\n\nSee also: GammaRationalKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.RationalQuadraticKernel","page":"Kernel Functions","title":"KernelFunctions.RationalQuadraticKernel","text":"RationalQuadraticKernel(; α::Real=2.0, metric=Euclidean())\n\nRational-quadratic kernel with respect to the metric and with shape parameter α.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the rational-quadratic kernel with shape parameter alpha 0 is defined as\n\nk(x x alpha) = bigg(1 + fracd(x x)^22alphabigg)^-alpha\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nThe SqExponentialKernel is recovered in the limit as alpha to infty.\n\nSee also: GammaRationalKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.GammaRationalKernel","page":"Kernel Functions","title":"KernelFunctions.GammaRationalKernel","text":"GammaRationalKernel(; α::Real=2.0, γ::Real=1.0, metric=Euclidean())\n\nγ-rational kernel with respect to the metric with shape parameters α and γ.\n\nDefinition\n\nFor inputs x x and metric d(cdot cdot), the γ-rational kernel with shape parameters alpha 0 and gamma in (0 2 is defined as\n\nk(x x alpha gamma) = bigg(1 + fracd(x x)^gammaalphabigg)^-alpha\n\nBy default, d is the Euclidean metric d(x x) = x - x_2.\n\nThe GammaExponentialKernel is recovered in the limit as alpha to infty.\n\nSee 
also: RationalKernel, RationalQuadraticKernel\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Spectral-Mixture-Kernels","page":"Kernel Functions","title":"Spectral Mixture Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"spectral_mixture_kernel\nspectral_mixture_product_kernel","category":"page"},{"location":"kernels/#KernelFunctions.spectral_mixture_kernel","page":"Kernel Functions","title":"KernelFunctions.spectral_mixture_kernel","text":"spectral_mixture_kernel(\n    h::Kernel=SqExponentialKernel(),\n    αs::AbstractVector{<:Real},\n    γs::AbstractMatrix{<:Real},\n    ωs::AbstractMatrix{<:Real},\n)\n\nwhere αs is the vector of weights of dimension (A,), γs is the matrix of covariances of dimension (D, A), and ωs is the matrix of mean vectors of dimension (D, A). Here, D is the input dimension and A is the number of spectral components.\n\nh is the kernel, which defaults to SqExponentialKernel if not specified.\n\nwarning: Warning\nIf you want to make sure that the constructor is type-stable, you should provide StaticArrays arguments: αs as a StaticVector, γs and ωs as StaticMatrix.\n\nGeneralised Spectral Mixture kernel function. This family of functions is dense in the family of stationary real-valued kernels with respect to pointwise convergence.[1]\n\n   κ(x, y) = αs' * (h(-(γs' * t).^2) .* cos(π .* ωs' * t)), t = x - y\n\nReferences:\n\n[1] Generalized Spectral Kernels, by Yves-Laurent Kom Samo and Stephen J. Roberts\n[2] SM: Gaussian Process Kernels for Pattern Discovery and Extrapolation,\n    ICML, 2013, by Andrew Gordon Wilson and Ryan Prescott Adams,\n[3] Covariance kernels for fast automatic pattern discovery and extrapolation\n    with Gaussian processes, Andrew Gordon Wilson, PhD Thesis, January 2014.\n    http://www.cs.cmu.edu/~andrewgw/andrewgwthesis.pdf\n[4] http://www.cs.cmu.edu/~andrewgw/pattern/.\n\n\n\n\n\n","category":"function"},{"location":"kernels/#KernelFunctions.spectral_mixture_product_kernel","page":"Kernel Functions","title":"KernelFunctions.spectral_mixture_product_kernel","text":"spectral_mixture_product_kernel(\n    h::Kernel=SqExponentialKernel(),\n    αs::AbstractMatrix{<:Real},\n    γs::AbstractMatrix{<:Real},\n    ωs::AbstractMatrix{<:Real},\n)\n\nwhere αs is the matrix of weights of dimension (D, A), γs is the matrix of covariances of dimension (D, A), and ωs is the matrix of mean vectors of dimension (D, A). Here, D is the input dimension and A is the number of spectral components.\n\nSpectral Mixture Product Kernel. With enough components A, the SMP kernel can model any product kernel to arbitrary precision, and is flexible even with a small number of components.[1]\n\nh is the kernel, which defaults to SqExponentialKernel if not specified.\n\n   κ(x, y) = Πᵢ₌₁ᴷ Σ(αsᵢᵀ .* (h(-(γsᵢᵀ * tᵢ)²) .* cos(ωsᵢᵀ * tᵢ))), tᵢ = xᵢ - yᵢ\n\nReferences:\n\n[1] GPatt: Fast Multidimensional Pattern Extrapolation with GPs,\n    arXiv 1310.5288, 2013, by Andrew Gordon Wilson, Elad Gilboa,\n    Arye Nehorai and John P. 
Cunningham\n\n\n\n\n\n","category":"function"},{"location":"kernels/#Wiener-Kernel","page":"Kernel Functions","title":"Wiener Kernel","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"WienerKernel","category":"page"},{"location":"kernels/#KernelFunctions.WienerKernel","page":"Kernel Functions","title":"KernelFunctions.WienerKernel","text":"WienerKernel(; i::Int=0)\nWienerKernel{i}()\n\nThe i-times integrated Wiener process kernel function.\n\nDefinition\n\nFor inputs x x in mathbbR^d, the i-times integrated Wiener process kernel with i in -1 0 1 2 3 is defined[SDH] as\n\nk_i(x x) = begincases\n delta(x x) textif i=-1\n minbig(x_2 x_2big) textif i=0\n a_i1^-1 minbig(x_2 x_2big)^2i + 1\n + a_i2^-1 x - x_2 r_ibig(x_2 x_2big) minbig(x_2 x_2big)^i + 1\n textotherwise\nendcases\n\nwhere the coefficients a are given by\n\na = beginbmatrix\n3 2 \n20 12 \n252 720\nendbmatrix\n\nand the functions r_i are defined as\n\nbeginaligned\nr_1(t t) = 1\nr_2(t t) = t + t - fracmin(t t)2\nr_3(t t) = 5 max(t t)^2 + 2 tt + 3 min(t t)^2\nendaligned\n\nThe WhiteKernel is recovered for i = -1.\n\n[SDH]: Schober, Duvenaud & Hennig (2014). Probabilistic ODE Solvers with Runge-Kutta Means.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Composite-Kernels","page":"Kernel Functions","title":"Composite Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"The modular design of KernelFunctions uses base kernels as building blocks for more complex kernels. There are a variety of composite kernels implemented, including those which transform the inputs to a wrapped kernel to implement length scales, scale the variance of a kernel, and sum or multiply collections of kernels together.","category":"page"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"TransformedKernel\n∘(::Kernel, ::Transform)\nScaledKernel\nKernelSum\nKernelProduct\nKernelTensorProduct\nNormalizedKernel","category":"page"},{"location":"kernels/#KernelFunctions.TransformedKernel","page":"Kernel Functions","title":"KernelFunctions.TransformedKernel","text":"TransformedKernel(k::Kernel, t::Transform)\n\nKernel derived from k for which inputs are transformed via a Transform t.\n\nThe preferred way to create kernels with input transformations is to use the composition operator ∘ or its alias compose instead of TransformedKernel directly since this allows optimized implementations for specific kernels and transformations.\n\nSee also: ∘\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Base.:∘-Tuple{Kernel, Transform}","page":"Kernel Functions","title":"Base.:∘","text":"kernel ∘ transform\n∘(kernel, transform)\ncompose(kernel, transform)\n\nCompose a kernel with a transformation transform of its inputs.\n\nThe prefix forms support chains of multiple transformations: ∘(kernel, transform1, transform2) = kernel ∘ transform1 ∘ transform2.\n\nDefinition\n\nFor inputs x x, the transformed kernel widetildek derived from kernel k by input transformation t is defined as\n\nwidetildek(x x k t) = kbig(t(x) t(x)big)\n\nExamples\n\njulia> (SqExponentialKernel() ∘ ScaleTransform(0.5))(0, 2) == exp(-0.5)\ntrue\n\njulia> ∘(ExponentialKernel(), ScaleTransform(2), ScaleTransform(0.5))(1, 2) == exp(-1)\ntrue\n\nSee also: TransformedKernel\n\n\n\n\n\n","category":"method"},{"location":"kernels/#KernelFunctions.ScaledKernel","page":"Kernel 
Functions","title":"KernelFunctions.ScaledKernel","text":"ScaledKernel(k::Kernel, σ²::Real=1.0)\n\nScaled kernel derived from k by multiplication with variance σ².\n\nDefinition\n\nFor inputs x x, the scaled kernel widetildek derived from kernel k by multiplication with variance sigma^2 0 is defined as\n\nwidetildek(x x k sigma^2) = sigma^2 k(x x)\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.KernelSum","page":"Kernel Functions","title":"KernelFunctions.KernelSum","text":"KernelSum <: Kernel\n\nCreate a sum of kernels. One can also use the operator +.\n\nThere are various ways in which you create a KernelSum:\n\nThe simplest way to specify a KernelSum would be to use the overloaded + operator. This is equivalent to creating a KernelSum by specifying the kernels as the arguments to the constructor. \n\njulia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);\n\njulia> (k = k1 + k2) == KernelSum(k1, k2)\ntrue\n\njulia> kernelmatrix(k1 + k2, X) == kernelmatrix(k1, X) .+ kernelmatrix(k2, X)\ntrue\n\njulia> kernelmatrix(k, X) == kernelmatrix(k1 + k2, X)\ntrue\n\nYou could also specify a KernelSum by providing a Tuple or a Vector of the kernels to be summed. We suggest you to use a Tuple when you have fewer components and a Vector when dealing with a large number of components.\n\njulia> KernelSum((k1, k2)) == k1 + k2\ntrue\n\njulia> KernelSum([k1, k2]) == KernelSum((k1, k2)) == k1 + k2\ntrue\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.KernelProduct","page":"Kernel Functions","title":"KernelFunctions.KernelProduct","text":"KernelProduct <: Kernel\n\nCreate a product of kernels. One can also use the overloaded operator *.\n\nThere are various ways in which you create a KernelProduct:\n\nThe simplest way to specify a KernelProduct would be to use the overloaded * operator. This is equivalent to creating a KernelProduct by specifying the kernels as the arguments to the constructor. \n\njulia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);\n\njulia> (k = k1 * k2) == KernelProduct(k1, k2)\ntrue\n\njulia> kernelmatrix(k1 * k2, X) == kernelmatrix(k1, X) .* kernelmatrix(k2, X)\ntrue\n\njulia> kernelmatrix(k, X) == kernelmatrix(k1 * k2, X)\ntrue\n\nYou could also specify a KernelProduct by providing a Tuple or a Vector of the kernels to be multiplied. We suggest you to use a Tuple when you have fewer components and a Vector when dealing with a large number of components.\n\njulia> KernelProduct((k1, k2)) == k1 * k2\ntrue\n\njulia> KernelProduct([k1, k2]) == KernelProduct((k1, k2)) == k1 * k2\ntrue\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.KernelTensorProduct","page":"Kernel Functions","title":"KernelFunctions.KernelTensorProduct","text":"KernelTensorProduct\n\nTensor product of kernels.\n\nDefinition\n\nFor inputs x = (x_1 ldots x_n) and x = (x_1 ldots x_n), the tensor product of kernels k_1 ldots k_n is defined as\n\nk(x x k_1 ldots k_n) = Big(bigotimes_i=1^n k_iBig)(x x) = prod_i=1^n k_i(x_i x_i)\n\nConstruction\n\nThe simplest way to specify a KernelTensorProduct is to use the overloaded tensor operator or its alias ⊗ (can be typed by \\otimes).\n\njulia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5, 2);\n\njulia> kernelmatrix(k1 ⊗ k2, RowVecs(X)) == kernelmatrix(k1, X[:, 1]) .* kernelmatrix(k2, X[:, 2])\ntrue\n\nYou can also specify a KernelTensorProduct by providing kernels as individual arguments or as an iterable data structure such as a Tuple or a Vector. 
Using a tuple or individual arguments guarantees that KernelTensorProduct is concretely typed but might lead to large compilation times if the number of kernels is large.\n\njulia> KernelTensorProduct(k1, k2) == k1 ⊗ k2\ntrue\n\njulia> KernelTensorProduct((k1, k2)) == k1 ⊗ k2\ntrue\n\njulia> KernelTensorProduct([k1, k2]) == k1 ⊗ k2\ntrue\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.NormalizedKernel","page":"Kernel Functions","title":"KernelFunctions.NormalizedKernel","text":"NormalizedKernel(k::Kernel)\n\nA normalized kernel derived from k.\n\nDefinition\n\nFor inputs x, x', the normalized kernel k̃ derived from kernel k is defined as\n\nk̃(x, x'; k) = k(x, x') / sqrt(k(x, x) k(x', x'))\n\n\n\n\n\n","category":"type"},{"location":"kernels/#Multi-output-Kernels","page":"Kernel Functions","title":"Multi-output Kernels","text":"","category":"section"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"KernelFunctions implements multi-output kernels as scalar kernels on an extended output domain. For more details on this read the section on inputs for multi-output GPs.","category":"page"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"For a function f(x) → y, denote the inputs as x, x', such that we compute the covariance between output components y_p and y_{p'}. The total number of outputs is m.","category":"page"},{"location":"kernels/","page":"Kernel Functions","title":"Kernel Functions","text":"MOKernel\nIndependentMOKernel\nLatentFactorMOKernel\nIntrinsicCoregionMOKernel\nLinearMixingModelKernel","category":"page"},{"location":"kernels/#KernelFunctions.MOKernel","page":"Kernel Functions","title":"KernelFunctions.MOKernel","text":"MOKernel\n\nAbstract type for kernels with multiple outputs.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.IndependentMOKernel","page":"Kernel Functions","title":"KernelFunctions.IndependentMOKernel","text":"IndependentMOKernel(k::Kernel)\n\nKernel for multiple independent outputs with kernel k each.\n\nDefinition\n\nFor inputs x, x' and output dimensions p, p', the kernel k̃ for independent outputs with kernel k each is defined as\n\nk̃((x, p), (x', p')) = k(x, x') if p = p', and 0 otherwise.\n\nMathematically, it is equivalent to a matrix-valued kernel defined as\n\nK̃(x, x') = diag(k(x, x'), …, k(x, x')) ∈ ℝ^{m×m},\n\nwhere m is the number of outputs.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.LatentFactorMOKernel","page":"Kernel Functions","title":"KernelFunctions.LatentFactorMOKernel","text":"LatentFactorMOKernel(g::AbstractVector{<:Kernel}, e::MOKernel, A::AbstractMatrix)\n\nKernel associated with the semiparametric latent factor model.\n\nDefinition\n\nFor inputs x, x' and output dimensions p_x, p_{x'}, the kernel is defined as[STJ]\n\nk((x, p_x), (x', p_{x'})) = Σ_{q=1}^Q A_{p_x, q} g_q(x, x') A_{p_{x'}, q} + e((x, p_x), (x', p_{x'}))\n\nwhere g_1, …, g_Q are Q kernels, one for each latent process, e is a multi-output kernel for m outputs, and A is a matrix of weights for the kernels of size m × Q.\n\n[STJ]: M. Seeger, Y. Teh, & M. I. Jordan (2005). 
Semiparametric Latent Factor Models.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.IntrinsicCoregionMOKernel","page":"Kernel Functions","title":"KernelFunctions.IntrinsicCoregionMOKernel","text":"IntrinsicCoregionMOKernel(; kernel::Kernel, B::AbstractMatrix)\n\nKernel associated with the intrinsic coregionalization model.\n\nDefinition\n\nFor inputs x x and output dimensions p p, the kernel is defined as[ARL]\n\nkbig((x p) (x p) B tildekbig) = B_p p tildekbig(x xbig)\n\nwhere B is a positive semidefinite matrix of size m times m, with m being the number of outputs, and tildek is a scalar-valued kernel shared by the latent processes.\n\n[ARL]: M. Álvarez, L. Rosasco, & N. Lawrence (2012). Kernels for Vector-Valued Functions: a Review.\n\n\n\n\n\n","category":"type"},{"location":"kernels/#KernelFunctions.LinearMixingModelKernel","page":"Kernel Functions","title":"KernelFunctions.LinearMixingModelKernel","text":"LinearMixingModelKernel(k::Kernel, H::AbstractMatrix)\nLinearMixingModelKernel(Tk::AbstractVector{<:Kernel},Th::AbstractMatrix)\n\nKernel associated with the linear mixing model, taking a vector of Q kernels and a Q × m mixing matrix H for a function with m outputs. Also accepts a single kernel k for use across all Q basis vectors. \n\nDefinition\n\nFor inputs x x and output dimensions p p, the kernel is defined as[BPTHST]\n\nkbig((x p) (x p)big) = H_pK(x x)H_p\n\nwhere K(x x) = Diag(k_1(x x) k_Q(x x)) with zero off-diagonal entries. H_p is the p-th column (p-th output) of H in mathbbR^Q times m representing Q basis vectors for the m dimensional output space of f. k_1 ldots k_Q are Q kernels, one for each latent process, H is a mixing matrix of Q basis vectors spanning the output space.\n\n[BPTHST]: Wessel P. Bruinsma, Eric Perim, Will Tebbutt, J. Scott Hosking, Arno Solin, Richard E. Turner (2020). Scalable Exact Inference in Multi-Output Gaussian Processes.\n\n\n\n\n\n","category":"type"},{"location":"api/#API-Library","page":"API","title":"API Library","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"CurrentModule = KernelFunctions","category":"page"},{"location":"api/#Functions","page":"API","title":"Functions","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"The KernelFunctions API comprises the following four functions.","category":"page"},{"location":"api/","page":"API","title":"API","text":"kernelmatrix\nkernelmatrix!\nkernelmatrix_diag\nkernelmatrix_diag!","category":"page"},{"location":"api/#KernelFunctions.kernelmatrix","page":"API","title":"KernelFunctions.kernelmatrix","text":"kernelmatrix(κ::Kernel, x::AbstractVector)\n\nCompute the kernel κ for each pair of inputs in x. Returns a matrix of size (length(x), length(x)) satisfying kernelmatrix(κ, x)[p, q] == κ(x[p], x[q]).\n\nkernelmatrix(κ::Kernel, x::AbstractVector, y::AbstractVector)\n\nCompute the kernel κ for each pair of inputs in x and y. Returns a matrix of size (length(x), length(y)) satisfying kernelmatrix(κ, x, y)[p, q] == κ(x[p], y[q]).\n\nkernelmatrix(κ::Kernel, X::AbstractMatrix; obsdim)\nkernelmatrix(κ::Kernel, X::AbstractMatrix, Y::AbstractMatrix; obsdim)\n\nIf obsdim=1, equivalent to kernelmatrix(κ, RowVecs(X)) and kernelmatrix(κ, RowVecs(X), RowVecs(Y)), respectively. 
If obsdim=2, equivalent to kernelmatrix(κ, ColVecs(X)) and kernelmatrix(κ, ColVecs(X), ColVecs(Y)), respectively.\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.kernelmatrix!","page":"API","title":"KernelFunctions.kernelmatrix!","text":"kernelmatrix!(K::AbstractMatrix, κ::Kernel, x::AbstractVector)\nkernelmatrix!(K::AbstractMatrix, κ::Kernel, x::AbstractVector, y::AbstractVector)\n\nIn-place version of kernelmatrix where pre-allocated matrix K will be overwritten with the kernel matrix.\n\nkernelmatrix!(K::AbstractMatrix, κ::Kernel, X::AbstractMatrix; obsdim)\nkernelmatrix!(\n K::AbstractMatrix,\n κ::Kernel,\n X::AbstractMatrix,\n Y::AbstractMatrix;\n obsdim,\n)\n\nIf obsdim=1, equivalent to kernelmatrix!(K, κ, RowVecs(X)) and kernelmatrix!(K, κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix!(K, κ, ColVecs(X)) and kernelmatrix!(K, κ, ColVecs(X), ColVecs(Y)), respectively.\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.kernelmatrix_diag","page":"API","title":"KernelFunctions.kernelmatrix_diag","text":"kernelmatrix_diag(κ::Kernel, x::AbstractVector)\n\nCompute the diagonal of kernelmatrix(κ, x) efficiently.\n\nkernelmatrix_diag(κ::Kernel, x::AbstractVector, y::AbstractVector)\n\nCompute the diagonal of kernelmatrix(κ, x, y) efficiently. Requires that x and y are the same length.\n\nkernelmatrix_diag(κ::Kernel, X::AbstractMatrix; obsdim)\nkernelmatrix_diag(κ::Kernel, X::AbstractMatrix, Y::AbstractMatrix; obsdim)\n\nIf obsdim=1, equivalent to kernelmatrix_diag(κ, RowVecs(X)) and kernelmatrix_diag(κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix_diag(κ, ColVecs(X)) and kernelmatrix_diag(κ, ColVecs(X), ColVecs(Y)), respectively.\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.kernelmatrix_diag!","page":"API","title":"KernelFunctions.kernelmatrix_diag!","text":"kernelmatrix_diag!(K::AbstractVector, κ::Kernel, x::AbstractVector)\nkernelmatrix_diag!(K::AbstractVector, κ::Kernel, x::AbstractVector, y::AbstractVector)\n\nIn-place version of kernelmatrix_diag.\n\nkernelmatrix_diag!(K::AbstractVector, κ::Kernel, X::AbstractMatrix; obsdim)\nkernelmatrix_diag!(\n K::AbstractVector,\n κ::Kernel,\n X::AbstractMatrix,\n Y::AbstractMatrix;\n obsdim\n)\n\nIf obsdim=1, equivalent to kernelmatrix_diag!(K, κ, RowVecs(X)) and kernelmatrix_diag!(K, κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix_diag!(K, κ, ColVecs(X)) and kernelmatrix_diag!(K, κ, ColVecs(X), ColVecs(Y)), respectively.\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#Input-Types","page":"API","title":"Input Types","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"The above API operates on collections of inputs. All collections of inputs in KernelFunctions.jl are represented as AbstractVectors. To understand this choice, please see the design notes on collections of inputs. The length of any such AbstractVector is equal to the number of inputs in the collection. 
For example, this means that","category":"page"},{"location":"api/","page":"API","title":"API","text":"size(kernelmatrix(k, x)) == (length(x), length(x))","category":"page"},{"location":"api/","page":"API","title":"API","text":"is always true, for any Kernel k and AbstractVector x.","category":"page"},{"location":"api/#Univariate-Inputs","page":"API","title":"Univariate Inputs","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"If each input to your kernel is Real-valued, then any AbstractVector{<:Real} is a valid representation for a collection of inputs. More generally, it's completely fine to represent a collection of inputs of type T as, for example, a Vector{T}. However, this may not be the most efficient way to represent a collection of inputs. See Vector-Valued Inputs for an example.","category":"page"},{"location":"api/#Vector-Valued-Inputs","page":"API","title":"Vector-Valued Inputs","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"We recommend that collections of vector-valued inputs are stored in an AbstractMatrix{<:Real} when possible, and wrapped inside a ColVecs or RowVecs to make their interpretation clear:","category":"page"},{"location":"api/","page":"API","title":"API","text":"ColVecs\nRowVecs","category":"page"},{"location":"api/#KernelFunctions.ColVecs","page":"API","title":"KernelFunctions.ColVecs","text":"ColVecs(X::AbstractMatrix)\n\nA lightweight wrapper for an AbstractMatrix which interprets it as a vector-of-vectors, in which each column of X represents a single vector.\n\nThat is, by writing x = ColVecs(X), you are saying \"x is a vector-of-vectors, each of which has length size(X, 1). The total number of vectors is size(X, 2).\"\n\nPhrased differently, ColVecs(X) says that X should be interpreted as a vector of horizontally-concatenated column-vectors, hence the name ColVecs.\n\njulia> X = randn(2, 5);\n\njulia> x = ColVecs(X);\n\njulia> length(x) == 5\ntrue\n\njulia> X[:, 3] == x[3]\ntrue\n\nColVecs is related to RowVecs via transposition:\n\njulia> X = randn(2, 5);\n\njulia> ColVecs(X) == RowVecs(X')\ntrue\n\n\n\n\n\n","category":"type"},{"location":"api/#KernelFunctions.RowVecs","page":"API","title":"KernelFunctions.RowVecs","text":"RowVecs(X::AbstractMatrix)\n\nA lightweight wrapper for an AbstractMatrix which interprets it as a vector-of-vectors, in which each row of X represents a single vector.\n\nThat is, by writing x = RowVecs(X), you are saying \"x is a vector-of-vectors, each of which has length size(X, 2). The total number of vectors is size(X, 1).\"\n\nPhrased differently, RowVecs(X) says that X should be interpreted as a vector of vertically-concatenated row-vectors, hence the name RowVecs.\n\nInternally, the data continues to be represented as an AbstractMatrix, so using this type does not introduce any kind of performance penalty.\n\njulia> X = randn(5, 2);\n\njulia> x = RowVecs(X);\n\njulia> length(x) == 5\ntrue\n\njulia> X[3, :] == x[3]\ntrue\n\nRowVecs is related to ColVecs via transposition:\n\njulia> X = randn(5, 2);\n\njulia> RowVecs(X) == ColVecs(X')\ntrue\n\n\n\n\n\n","category":"type"},{"location":"api/","page":"API","title":"API","text":"These types are specialised upon to ensure good performance, e.g. when computing Euclidean distances between pairs of elements. 
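For example, a minimal sketch of how the two wrappers interpret the same matrix (the choice of SqExponentialKernel is arbitrary; any Kernel behaves the same way):\n\njulia> using KernelFunctions\n\njulia> k = SqExponentialKernel();\n\njulia> X = randn(2, 5);  # store 5 two-dimensional inputs as columns\n\njulia> size(kernelmatrix(k, ColVecs(X)))  # 5 inputs, one entry per pair\n(5, 5)\n\njulia> size(kernelmatrix(k, RowVecs(X)))  # same matrix read as 2 five-dimensional inputs\n(2, 2)\n\n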
The benefit of using this representation, rather than using a Vector{Vector{<:Real}}, is that optimised matrix-matrix multiplication functionality can be utilised when computing pairwise distances between inputs, which are needed for kernelmatrix computation.","category":"page"},{"location":"api/#Inputs-for-Multiple-Outputs","page":"API","title":"Inputs for Multiple Outputs","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"KernelFunctions.jl views multi-output GPs as GPs on an extended input domain. For an explanation of this design choice, see the design notes on multi-output GPs.","category":"page"},{"location":"api/","page":"API","title":"API","text":"An input to a multi-output Kernel should be a Tuple{T, Int}, whose first element specifies a location in the domain of the multi-output GP, and whose second element specifies which output the input corresponds to. The type of collections of inputs for multi-output GPs is therefore AbstractVector{<:Tuple{T, Int}}.","category":"page"},{"location":"api/","page":"API","title":"API","text":"KernelFunctions.jl provides the following helper functions to reduce the cognitive load associated with working with multi-output kernels, by transforming data from the formats in which it is commonly found into the format required by KernelFunctions. The intention is that users can pass their data to these functions, and use the returned values throughout their code, without having to worry further about correctly formatting their data for KernelFunctions' sake:","category":"page"},{"location":"api/","page":"API","title":"API","text":"prepare_isotopic_multi_output_data(x::AbstractVector, y::ColVecs)\nprepare_isotopic_multi_output_data(x::AbstractVector, y::RowVecs)\nprepare_heterotopic_multi_output_data","category":"page"},{"location":"api/#KernelFunctions.prepare_isotopic_multi_output_data-Tuple{AbstractVector, ColVecs}","page":"API","title":"KernelFunctions.prepare_isotopic_multi_output_data","text":"prepare_isotopic_multi_output_data(x::AbstractVector, y::ColVecs)\n\nUtility functionality to convert a collection of N = length(x) inputs x, and a vector-of-vectors y (efficiently represented by a ColVecs) into a format suitable for use with multi-output kernels.\n\ny[n] is the vector-valued output corresponding to the input x[n]. Consequently, it is necessary that length(x) == length(y).\n\nFor example, if outputs are initially stored in a num_outputs × N matrix:\n\njulia> x = [1.0, 2.0, 3.0];\n\njulia> Y = [1.1 2.1 3.1; 1.2 2.2 3.2]\n2×3 Matrix{Float64}:\n 1.1 2.1 3.1\n 1.2 2.2 3.2\n\njulia> inputs, outputs = prepare_isotopic_multi_output_data(x, ColVecs(Y));\n\njulia> inputs\n6-element KernelFunctions.MOInputIsotopicByFeatures{Float64, Vector{Float64}, Int64}:\n (1.0, 1)\n (1.0, 2)\n (2.0, 1)\n (2.0, 2)\n (3.0, 1)\n (3.0, 2)\n\njulia> outputs\n6-element Vector{Float64}:\n 1.1\n 1.2\n 2.1\n 2.2\n 3.1\n 3.2\n\nSee also prepare_heterotopic_multi_output_data.\n\n\n\n\n\n","category":"method"},{"location":"api/#KernelFunctions.prepare_isotopic_multi_output_data-Tuple{AbstractVector, RowVecs}","page":"API","title":"KernelFunctions.prepare_isotopic_multi_output_data","text":"prepare_isotopic_multi_output_data(x::AbstractVector, y::RowVecs)\n\nUtility functionality to convert a collection of N = length(x) inputs x and output vectors y (efficiently represented by a RowVecs) into a format suitable for use with multi-output kernels.\n\ny[n] is the vector-valued output corresponding to the input x[n]. 
Consequently, it is necessary that length(x) == length(y).\n\nFor example, if outputs are initially stored in an N × num_outputs matrix:\n\njulia> x = [1.0, 2.0, 3.0];\n\njulia> Y = [1.1 1.2; 2.1 2.2; 3.1 3.2]\n3×2 Matrix{Float64}:\n 1.1 1.2\n 2.1 2.2\n 3.1 3.2\n\njulia> inputs, outputs = prepare_isotopic_multi_output_data(x, RowVecs(Y));\n\njulia> inputs\n6-element KernelFunctions.MOInputIsotopicByOutputs{Float64, Vector{Float64}, Int64}:\n (1.0, 1)\n (2.0, 1)\n (3.0, 1)\n (1.0, 2)\n (2.0, 2)\n (3.0, 2)\n\njulia> outputs\n6-element Vector{Float64}:\n 1.1\n 2.1\n 3.1\n 1.2\n 2.2\n 3.2\n\nSee also prepare_heterotopic_multi_output_data.\n\n\n\n\n\n","category":"method"},{"location":"api/#KernelFunctions.prepare_heterotopic_multi_output_data","page":"API","title":"KernelFunctions.prepare_heterotopic_multi_output_data","text":"prepare_heterotopic_multi_output_data(\n x::AbstractVector, y::AbstractVector{<:Real}, output_indices::AbstractVector{Int},\n)\n\nUtility functionality to convert a collection of inputs x, observations y, and output_indices into a format suitable for use with multi-output kernels. Handles the situation in which only one output (or a subset of outputs) is observed at each feature. Ensures that all arguments are compatible with one another, and returns a vector of inputs and a vector of outputs.\n\ny[n] should be the observed value associated with output output_indices[n] at feature x[n].\n\njulia> x = [1.0, 2.0, 3.0];\n\njulia> y = [-1.0, 0.0, 1.0];\n\njulia> output_indices = [3, 2, 1];\n\njulia> inputs, outputs = prepare_heterotopic_multi_output_data(x, y, output_indices);\n\njulia> inputs\n3-element Vector{Tuple{Float64, Int64}}:\n (1.0, 3)\n (2.0, 2)\n (3.0, 1)\n\njulia> outputs\n3-element Vector{Float64}:\n -1.0\n 0.0\n 1.0\n\nSee also prepare_isotopic_multi_output_data.\n\n\n\n\n\n","category":"function"},{"location":"api/","page":"API","title":"API","text":"The input types returned by prepare_isotopic_multi_output_data can also be constructed manually:","category":"page"},{"location":"api/","page":"API","title":"API","text":"MOInput","category":"page"},{"location":"api/#KernelFunctions.MOInput","page":"API","title":"KernelFunctions.MOInput","text":"MOInput(x::AbstractVector, out_dim::Integer)\n\nA data type to accommodate modelling multi-dimensional output data. MOInput(x, out_dim) has length length(x) * out_dim.\n\njulia> x = [1, 2, 3];\n\njulia> MOInput(x, 2)\n6-element KernelFunctions.MOInputIsotopicByOutputs{Int64, Vector{Int64}, Int64}:\n (1, 1)\n (2, 1)\n (3, 1)\n (1, 2)\n (2, 2)\n (3, 2)\n\nAs shown above, an MOInput represents a vector of tuples. The first length(x) elements represent the inputs for the first output, the second length(x) elements represent the inputs for the second output, etc. See Inputs for Multiple Outputs in the docs for more info.\n\nMOInput will be deprecated in version 0.11 in favour of MOInputIsotopicByOutputs, and removed in version 0.12.\n\n\n\n\n\n","category":"type"},{"location":"api/","page":"API","title":"API","text":"As with ColVecs and RowVecs for vector-valued input spaces, this type enables specialised implementations of e.g. 
kernelmatrix for MOInputs in some situations.","category":"page"},{"location":"api/","page":"API","title":"API","text":"To find out more about the background, read this review of kernels for vector-valued functions.","category":"page"},{"location":"api/#Generic-Utilities","page":"API","title":"Generic Utilities","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"KernelFunctions also provides miscellaneous utility functions.","category":"page"},{"location":"api/","page":"API","title":"API","text":"nystrom\nNystromFact","category":"page"},{"location":"api/#KernelFunctions.nystrom","page":"API","title":"KernelFunctions.nystrom","text":"nystrom(k::Kernel, X::AbstractVector, S::AbstractVector{<:Integer})\n\nCompute a factorization of a Nystrom approximation of the square kernel matrix of data vector X with respect to kernel k, using indices S. Returns a NystromFact struct which stores a Nystrom factorization satisfying:\n\n\\\mathbf{K} \\\approx \\\mathbf{C}^\\\intercal \\\mathbf{W} \\\mathbf{C}\n\n\n\n\n\nnystrom(k::Kernel, X::AbstractVector, r::Real)\n\nCompute a factorization of a Nystrom approximation of the square kernel matrix of data vector X with respect to kernel k using a sample ratio of r. Returns a NystromFact struct which stores a Nystrom factorization satisfying:\n\n\\\mathbf{K} \\\approx \\\mathbf{C}^\\\intercal \\\mathbf{W} \\\mathbf{C}\n\n\n\n\n\nnystrom(k::Kernel, X::AbstractMatrix, S::AbstractVector{<:Integer}; obsdim)\n\nIf obsdim=1, equivalent to nystrom(k, RowVecs(X), S). If obsdim=2, equivalent to nystrom(k, ColVecs(X), S).\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\nnystrom(k::Kernel, X::AbstractMatrix, r::Real; obsdim)\n\nIf obsdim=1, equivalent to nystrom(k, RowVecs(X), r). If obsdim=2, equivalent to nystrom(k, ColVecs(X), r).\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.NystromFact","page":"API","title":"KernelFunctions.NystromFact","text":"NystromFact\n\nType for storing a Nystrom factorization. The factorization contains two fields: W and C, two matrices satisfying:\n\n\\\mathbf{K} \\\approx \\\mathbf{C}^\\\intercal \\\mathbf{W} \\\mathbf{C}\n\n\n\n\n\n","category":"type"},{"location":"api/#Conditional-Utilities","page":"API","title":"Conditional Utilities","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"To keep the dependencies of KernelFunctions lean, some functionality is only available if specific other packages are explicitly loaded (using).","category":"page"},{"location":"api/#Kronecker.jl","page":"API","title":"Kronecker.jl","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"https://github.com/MichielStock/Kronecker.jl","category":"page"},{"location":"api/","page":"API","title":"API","text":"kronecker_kernelmatrix\nkernelkronmat","category":"page"},{"location":"api/#KernelFunctions.kronecker_kernelmatrix","page":"API","title":"KernelFunctions.kronecker_kernelmatrix","text":"kronecker_kernelmatrix(\n k::Union{IndependentMOKernel,IntrinsicCoregionMOKernel}, x::MOI, y::MOI\n) where {MOI<:IsotopicMOInputsUnion}\n\nRequires Kronecker.jl: Computes the kernelmatrix for the IndependentMOKernel and the IntrinsicCoregionMOKernel, but returns a lazy Kronecker product. This object can be very efficiently inverted or decomposed. 
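For instance, a minimal sketch, assuming Kronecker.jl is loaded and using made-up data for illustration:\n\njulia> using Kronecker, KernelFunctions\n\njulia> k = IndependentMOKernel(SqExponentialKernel());\n\njulia> x = MOInput(randn(4), 2);  # 4 features × 2 outputs = 8 extended inputs\n\njulia> K = kronecker_kernelmatrix(k, x, x);  # lazy Kronecker product, not a dense Matrix\n\njulia> size(K)\n(8, 8)\n\n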
See also kernelmatrix.\n\n\n\n\n\n","category":"function"},{"location":"api/#KernelFunctions.kernelkronmat","page":"API","title":"KernelFunctions.kernelkronmat","text":"kernelkronmat(κ::Kernel, X::AbstractVector{<:Real}, dims::Int) -> KroneckerPower\n\nReturn a KroneckerPower matrix on the D-dimensional input grid constructed by \\\otimes_{i=1}^D X, where D is given by dims.\n\nwarning: Warning\nRequires Kronecker.jl and for iskroncompatible(κ) to return true.\n\n\n\n\n\nkernelkronmat(κ::Kernel, X::AbstractVector{<:AbstractVector}) -> KroneckerProduct\n\nReturn a KroneckerProduct matrix on the grid built with the collection of vectors \\\{X_i\\\}_{i=1}^D: \\\otimes_{i=1}^D X_i.\n\nwarning: Warning\nRequires Kronecker.jl and for iskroncompatible(κ) to return true.\n\n\n\n\n\n","category":"function"},{"location":"api/#PDMats.jl","page":"API","title":"PDMats.jl","text":"","category":"section"},{"location":"api/","page":"API","title":"API","text":"https://github.com/JuliaStats/PDMats.jl","category":"page"},{"location":"api/","page":"API","title":"API","text":"kernelpdmat","category":"page"},{"location":"api/#KernelFunctions.kernelpdmat","page":"API","title":"KernelFunctions.kernelpdmat","text":"kernelpdmat(k::Kernel, X::AbstractVector)\n\nCompute a positive-definite matrix in the form of a PDMat matrix (see PDMats.jl), with the Cholesky decomposition precomputed. The algorithm adds a diagonal \"nugget\" term to the kernel matrix which is increased until positive definiteness is achieved. The algorithm gives up with an error if the nugget becomes larger than 1% of the largest value in the kernel matrix.\n\n\n\n\n\nkernelpdmat(k::Kernel, X::AbstractMatrix; obsdim)\n\nIf obsdim=1, equivalent to kernelpdmat(k, RowVecs(X)). If obsdim=2, equivalent to kernelpdmat(k, ColVecs(X)).\n\nSee also: ColVecs, RowVecs\n\n\n\n\n\n","category":"function"},{"location":"design/#Design","page":"Design","title":"Design","text":"","category":"section"},{"location":"design/#why_abstract_vectors","page":"Design","title":"Why AbstractVectors Everywhere?","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"To understand the advantages of using AbstractVectors everywhere to represent collections of inputs, first consider the following properties that a collection of inputs should ideally satisfy.","category":"page"},{"location":"design/#Unique-Ordering","page":"Design","title":"Unique Ordering","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"There must be a clearly-defined first, second, etc. element of an input collection. If this were not the case, it would not be possible to determine a unique mapping between a collection of inputs and the output of kernelmatrix, as it would not be clear what order the rows and columns of the output should appear in.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Moreover, ordering guarantees that if you permute the collection of inputs, the rows and columns of the kernelmatrix are correspondingly permuted.","category":"page"},{"location":"design/#Generality","page":"Design","title":"Generality","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"There must be no restriction on the domain of the input. Collections of Reals, vectors, graphs, finite-dimensional domains, or really anything else that you fancy should be straightforwardly representable. 
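As a sketch of this generality (LengthKernel below is a made-up toy kernel, not part of the library, and relies on the generic pairwise fallback of kernelmatrix): a Vector{String} is a perfectly valid input collection.\n\njulia> using KernelFunctions\n\njulia> struct LengthKernel <: Kernel end  # toy kernel comparing string lengths\n\njulia> (::LengthKernel)(x::String, y::String) = exp(-abs(length(x) - length(y)));\n\njulia> kernelmatrix(LengthKernel(), [\"a\", \"ab\", \"abc\"])\n3×3 Matrix{Float64}:\n 1.0 0.367879 0.135335\n 0.367879 1.0 0.367879\n 0.135335 0.367879 1.0\n\n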
Moreover, the choice of input class should not prevent optimal performance from being obtained.","category":"page"},{"location":"design/#Unambiguously-Defined-Length","page":"Design","title":"Unambiguously-Defined Length","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"Knowing the length of a collection of inputs is important. For example, a well-defined length guarantees that the size of the output of kernelmatrix, and related functions, are predictable. It also makes it possible to perform internal error-checking that ensures that e.g. there are the same number of inputs in two collections of inputs.","category":"page"},{"location":"design/#AbstractMatrices-Do-Not-Cut-It","page":"Design","title":"AbstractMatrices Do Not Cut It","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"Notably, while AbstractMatrix objects are often used to represent collections of vector-valued inputs, they do not immediately satisfy these properties as it is unclear whether a matrix of size P x Q represents a collection of P Q-dimensional inputs (each row is an input), or Q P-dimensional inputs (each column is an input).","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Moreover, they occasionally add some aesthetic inconvenience. For example, a collection of Real-valued inputs, which might be straightforwardly represented as an AbstractVector{<:Real}, must be reshaped into a matrix.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"There are two commonly used ways to partly resolve these shortcomings:","category":"page"},{"location":"design/#Resolution-1:-Specify-a-Convention","page":"Design","title":"Resolution 1: Specify a Convention","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"One way that these shortcomings can be partly resolved is by specifying a convention that everyone adheres to regarding the interpretation of rows vs columns. However, opinions about the choice of convention are often surprisingly strongly held, and users regularly have to remind themselves which convention has been chosen. While this resolves the ordering problem, and in principle defines the \"length\" of a collection of inputs, AbstractMatrixes already have a length defined in Julia, which would generally disagree with our internal notion of length. This isn't a show-stopper, but it isn't an especially clean situation.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"There is also the opportunity for some kinds of silent bugs. For example, if an input matrix happens to be square because the number of input dimensions is the same as the number of inputs, it would be hard to know whether the correct kernelmatrix has been computed. This kind of bug seems unlikely, but it exists regardless.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Finally, suppose that your inputs are some type T that is not simply a vector of real numbers, say a graph. In this situation, how should a collection of inputs be represented? 
An N x 1 or 1 x N matrix is the only obvious candidate, but the additional singular dimension seems somewhat redundant.","category":"page"},{"location":"design/#Resolution-2:-Always-Specify-An-obsdim-Argument","page":"Design","title":"Resolution 2: Always Specify An obsdim Argument","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"Another way to partly resolve these problems is to not commit to a convention, and instead to propagate some additional information through the codebase that specifies how the input data is to be interpreted. For example, a kernel k that represents the sum of two other kernels might implement kernelmatrix as follows:","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"function kernelmatrix(k::KernelSum, x::AbstractMatrix; obsdim=1)\n return kernelmatrix(k.kernels[1], x; obsdim=obsdim) +\n kernelmatrix(k.kernels[2], x; obsdim=obsdim)\nend","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"While this prevents this package from having to pre-specify a convention, it doesn't resolve the length issue, or the issue of representing collections of inputs which aren't immediately represented as vectors. Moreover, it complicates the internals; in contrast, consider what this function looks like with an AbstractVector:","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"function kernelmatrix(k::KernelSum, x::AbstractVector)\n return kernelmatrix(k.kernels[1], x) + kernelmatrix(k.kernels[2], x)\nend","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"This code is clearer (less visual noise), and has removed a possible bug – if the implementer of kernelmatrix forgets to pass the obsdim kwarg into each subsequent kernelmatrix call, it's possible to get the wrong answer.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"This being said, we do support matrix-valued inputs – see Why We Have Support for Both.","category":"page"},{"location":"design/#AbstractVectors","page":"Design","title":"AbstractVectors","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"Requiring all collections of inputs to be AbstractVectors resolves all of these problems, and ensures that the data is self-describing to the extent that KernelFunctions.jl requires.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Firstly, the question of how to interpret the columns and rows of a matrix of inputs is resolved. Users must wrap matrices which represent collections of inputs in either a ColVecs or RowVecs, both of which have clearly defined semantics which are hard to confuse.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"By design, there is also no discrepancy between the number of inputs in the collection, and the length function – the length of a ColVecs, RowVecs, or Vector{<:Real} is equal to the number of inputs.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"There is no loss of performance.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"A collection of N Real-valued inputs can be represented by an AbstractVector{<:Real} of length N, rather than needing to use an AbstractMatrix{<:Real} of size either N x 1 or 1 x N. 
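Concretely, a small sketch (the choice of Matern52Kernel is arbitrary):\n\njulia> using KernelFunctions\n\njulia> x = randn(6);  # 6 one-dimensional inputs, no reshaping into a matrix required\n\njulia> size(kernelmatrix(Matern52Kernel(), x))\n(6, 6)\n\n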
The same can be said for any other input type T, and new subtypes of AbstractVector can be added if particularly efficient ways exist to store collections of inputs of type T. A good example of this in practice is using Tuple{S, Int}, for some input type S, as the Inputs for Multiple Outputs.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"This approach can also lead to clearer user code. A user need only wrap their inputs in a ColVecs or RowVecs once, and this specification is automatically re-used everywhere in their code. In this sense, it is straightforward to write code in such a way that there is one unique source of \"truth\" about the way in which a particular data set should be interpreted. Conversely, the obsdim resolution requires that the obsdim keyword argument is passed around with the data every single time that you use it.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"The benefits of the AbstractVector approach are likely most strongly felt when writing a substantial amount of code on top of KernelFunctions.jl – in the same way that using AbstractVectors inside KernelFunctions.jl removes the need for large amounts of keyword argument propagation, the same will be true of other code.","category":"page"},{"location":"design/#Why-We-Have-Support-for-Both","page":"Design","title":"Why We Have Support for Both","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"In short: many people like matrices, and are familiar with obsdim-style keyword arguments.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"All internals are implemented using AbstractVectors though, and the obsdim interface is just a thin layer of utility functionality which sits on top of this. To avoid confusion and silent errors, we do not favour a specific convention (rows or columns) but instead require the obsdim keyword argument to be specified explicitly.","category":"page"},{"location":"design/#inputs_for_multiple_outputs","page":"Design","title":"Kernels for Multiple-Outputs","text":"","category":"section"},{"location":"design/","page":"Design","title":"Design","text":"There are two equally-valid perspectives on multi-output kernels: they can either be treated as matrix-valued kernels, or standard kernels on an extended input domain. Each of these perspectives is convenient in different circumstances, but the latter greatly simplifies the incorporation of multi-output kernels in KernelFunctions.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"More concretely, let k_mat be a matrix-valued kernel, mapping pairs of inputs of type T to matrices of size P x P to describe the covariance between P outputs. Given inputs x and y of type T, and integers p and q, we can always find an equivalent standard kernel k mapping from pairs of inputs of type Tuple{T, Int} to the Reals as follows:","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"k((x, p), (y, q)) = k_mat(x, y)[p, q]","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"This ability to treat multi-output kernels as single-output kernels is very helpful, as it means that there is no need to introduce additional concepts into the API of KernelFunctions.jl, just additional kernels! 
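For example, a brief sketch with IndependentMOKernel, evaluating the scalar kernel directly on extended inputs of the form (x, p):\n\njulia> using KernelFunctions\n\njulia> k = IndependentMOKernel(SqExponentialKernel());\n\njulia> k((1.0, 1), (2.0, 1))  # same output index: the base kernel value exp(-0.5)\n0.6065306597126334\n\njulia> k((1.0, 1), (2.0, 2))  # different outputs are independent\n0.0\n\n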
This in turn simplifies downstream code, as it doesn't need to \"know\" about the existence of multi-output kernels in addition to standard kernels. For example, GP libraries built on top of KernelFunctions.jl just need to know about Kernels, and they get multi-output kernels, and hence multi-output GPs, for free.","category":"page"},{"location":"design/","page":"Design","title":"Design","text":"Where there is a need to specialise implementations for multi-output kernels, this is done in an encapsulated manner – parts of KernelFunctions that have nothing to do with multi-output kernels know nothing about their existence.","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"EditURL = \"../../../../examples/support-vector-machine/script.jl\"","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"EditURL = \"https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/blob/master/examples/support-vector-machine/script.jl\"","category":"page"},{"location":"examples/support-vector-machine/#Support-Vector-Machine","page":"Support Vector Machine","title":"Support Vector Machine","text":"","category":"section"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"In this notebook we show how you can use KernelFunctions.jl to generate kernel matrices for classification with a support vector machine, as implemented by LIBSVM.","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"using Distributions\nusing KernelFunctions\nusing LIBSVM\nusing LinearAlgebra\nusing Plots\nusing Random\n\n# Set seed\nRandom.seed!(1234);","category":"page"},{"location":"examples/support-vector-machine/#Generate-half-moon-dataset","page":"Support Vector Machine","title":"Generate half-moon dataset","text":"","category":"section"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"Number of samples per class:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"n1 = n2 = 50;","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"We generate data based on SciKit-Learn's sklearn.datasets.make_moons function:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"angle1 = range(0, π; length=n1)\nangle2 = range(0, π; length=n2)\nX1 = [cos.(angle1) sin.(angle1)] .+ 0.1 .* randn.()\nX2 = [1 .- cos.(angle2) 1 .- sin.(angle2) .- 0.5] .+ 0.1 .* randn.()\nX = [X1; X2]\nx_train = 
RowVecs(X)\ny_train = vcat(fill(-1, n1), fill(1, n2));","category":"page"},{"location":"examples/support-vector-machine/#Training","page":"Support Vector Machine","title":"Training","text":"","category":"section"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"We create a kernel function:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"k = SqExponentialKernel() ∘ ScaleTransform(1.5)","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"Squared Exponential Kernel (metric = Distances.Euclidean(0.0))\n\t- Scale Transform (s = 1.5)","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"LIBSVM can make use of a pre-computed kernel matrix. KernelFunctions.jl can be used to produce that using kernelmatrix:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"model = svmtrain(kernelmatrix(k, x_train), y_train; kernel=LIBSVM.Kernel.Precomputed)","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"LIBSVM.SVM{Int64, LIBSVM.Kernel.KERNEL}(LIBSVM.SVC, LIBSVM.Kernel.Precomputed, nothing, 100, 100, 2, [-1, 1], Int32[1, 2], Float64[], Int32[], LIBSVM.SupportVectors{Vector{Int64}, Matrix{Float64}}(23, Int32[11, 12], [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1.0 0.8982223633317491 … ; … ]) … 
0.07694096980958384 0.10184805385603908 0.3794525394526092 0.3203312365762359 0.4963074588528718 0.4773711871808398 0.4516508142874447 0.5236420497371012 0.9644741236400529 0.8060189579870737 0.5597466310187597 0.32836661338632694 0.2646093759094215 0.0677441637131662; 0.8181308100490108 0.5561243868737823 0.698575060593573 0.5324654697166794 0.45752368437996377 0.33230408198860933 0.14048366252196495 0.09759286203453564 0.06262356218316165 0.06053539611033345 0.08069598037760621 0.3368883734358375 0.29138104057505876 0.45023153710385133 0.4313705725132026 0.40132171704812075 0.4761351054919458 0.9870428644119942 0.8596029953159898 0.6251306019773151 0.38047255238126104 0.306893210606778 0.0820423232658685; 0.7328677140374111 0.45686249352211705 0.5790469862711584 0.4222181599919014 0.33080357486099415 0.2432499454983849 0.07720430058897658 0.05106405047245464 0.032093401151864966 0.0365474112100408 0.05215633420045098 0.22181457534510465 0.18089489184494278 0.3111666840414051 0.2960670894164659 0.27828806542602247 0.3336621538628581 0.9586604604379003 0.9112627551291514 0.6526801185016419 0.37822360606526256 0.2823655882683267 0.0674470136857995; 0.7515232704337115 0.4757416292999305 0.5968371635299031 0.43944272397447637 0.34313013101024226 0.2559666012514232 0.07813514522593748 0.051658187653842315 0.03172311639963259 0.0347198448127321 0.0493745953085397 0.21905996675855813 0.18130230551737894 0.3085046198554848 0.2932786599324534 0.273623488478172 0.33068669985787785 0.9653564796429168 0.9249527424633506 0.6753413167030635 0.3987464830705314 0.2999086114297358 0.07359809262134936; 0.8766234308310106 0.6220352391872924 0.750503533504958 0.5892979561473187 0.4883331096338951 0.3775897748372698 0.12677413631301512 0.08688786848645763 0.05039216793847928 0.04297162933120202 0.057385150231952155 0.2868166055620744 0.26003030092553525 0.39412877854863043 0.37562956856679103 0.3396761689366942 0.4175584225095954 1.0 0.9191899116391927 0.7213092819491199 0.46889441622459455 0.3828925347230843 0.11144275482456456; 0.8119268400431263 0.5616029351132952 0.6474473854693175 0.5069931386383051 0.3661982883577541 0.311649120820824 0.05829641266312954 0.03742883021724568 0.018577266239422643 0.01567128213557683 0.0223986923163361 0.14511479304623873 0.13244337225859604 0.21687298617558284 0.2038675593583631 0.17921730237397668 0.23341261540835537 0.9285566477571606 0.9982284285344054 0.8549662123185122 0.57919180548843 0.4463677903452444 0.1306883528369306; 0.82681374480545 0.5857825177211226 0.6647402210580929 0.5284801502144095 0.3798584854063081 0.33049550236819253 0.05828369828510033 0.03740368096386031 0.01791524354832784 0.01420754067753986 0.02019180375858439 0.1396173432997934 0.1303827813666727 0.2098767698470255 0.19703163333480644 0.17122729619266394 0.22574366435124235 0.9191899116391927 1.0 0.8822702532119937 0.6152795204486762 0.47978346763635193 0.14650071169191572; 0.822809521622991 0.5948037025016687 0.661623292177756 0.5335057440538382 0.37607346176075984 0.33764855167756436 0.05284701060910418 0.033662036456081526 0.015324720335777151 0.011444005832743251 0.016295443067505327 0.12330208096295307 0.11778021084065203 0.18808039767468007 0.17608616190280213 0.1508937925165944 0.2025242052804488 0.8905990321407163 0.9966670165629197 0.9131581140097302 0.6576167422401595 0.5153533855760121 0.16390625359706507; 0.7332380005310724 0.47966685651323304 0.561270991827678 0.4262991752111288 0.2962414470348876 0.24973189593616663 0.04268972320232483 0.026811837976605572 0.013507495536566362 
0.012805825713678201 0.018952874629094556 0.11843861804971419 0.1036060543552713 0.18023196881388742 0.16904117283919085 0.1504765142358632 0.1952861714325238 0.8869877129551653 0.9842217337887395 0.8203423350576758 0.5328025632598246 0.39324760863940933 0.10628991718440418; 0.7187809376310704 0.4761254396371267 0.54656074185958 0.4194008209483607 0.28353726603906537 0.24690043097609538 0.03654000044570406 0.022681143697437862 0.010842333019539772 0.009820791891003388 0.014650008345521303 0.10032334124866865 0.08930018079843059 0.15556684998216727 0.14542078103482645 0.12776685893780718 0.16895268664252294 0.8526318330670217 0.9806401970911786 0.8475881317911022 0.5659917900792554 0.416984564845228 0.11622851300686755; 0.8144257681324784 0.6973386670166851 0.6926170128782051 0.6217604813241744 0.4217137127807958 0.4440238046305932 0.04500739970678052 0.028523064291676215 0.010043369678747171 0.00483919981495718 0.006632129374597611 0.0802050018418445 0.09015777074260106 0.12780803517686787 0.11850074321536054 0.09352156540595309 0.13708488052338838 0.7213092819491199 0.8822702532119937 1.0 0.8855923329934225 0.7530367236790085 0.3226608405615575; 0.7464895770624135 0.5594351352421899 0.5895776094001356 0.4895311220674548 0.319643609141359 0.3143758003701468 0.03288525229499247 0.020274680144724787 0.007964199195053414 0.005222443768633631 0.0075982008115135 0.07437307723826476 0.0746522813603369 0.11956373319655095 0.11089845782781486 0.09158557641127746 0.12957243004933744 0.7640400790614297 0.9450363171851921 0.9591969772942559 0.7472623695905949 0.5830550774121601 0.20244287264478314; 0.7530030245919059 0.5431234673379363 0.5891717554326777 0.47712932171407574 0.3166487512948445 0.2985208761222979 0.035649105448995015 0.022080553461329762 0.009286847786499046 0.006751278027436788 0.009875494227672841 0.08544232307820761 0.08252777611615113 0.13536175637433384 0.1259284644875634 0.10617928562160212 0.14670596019757695 0.8076447950677335 0.9703716409266376 0.932762917865407 0.690834745158495 0.5304385201337295 0.17171039308680663; 0.8235336782281178 0.6405111214096829 0.6751170957730195 0.5708263832967825 0.3913225084959072 0.3799338589125705 0.04683919106442962 0.02961614926842436 0.011883757732761544 0.00729943919314527 0.010273385080068998 0.09769000743415807 0.10036147536710764 0.1529345825839084 0.1424119148518332 0.11739714058688706 0.1645958816308224 0.8155975950934103 0.9611942517618203 0.9745530337256978 0.7718025112426161 0.6243230838122039 0.22676040918673873; 0.6867468433693716 0.5331666327402476 0.5401499281722351 0.4602101153093634 0.2874857014122235 0.3013099698294809 0.02441925668731399 0.014789893132036517 0.005242128466115685 0.003076690211929651 0.00451111846293758 0.05293690792581018 0.05540759589202033 0.08787213190071044 0.08102581610167775 0.06510209996936658 0.09551563939482735 0.6693717151437683 0.8805139185030549 0.962976949860865 0.7977752137717908 0.625906121249835 0.23426112680791764; 0.6123022959216866 0.4999976772262982 0.48269268480676464 0.42595719513313823 0.25449585634974026 0.28756933949188684 0.017618908610842147 0.010493756806911569 0.0032937887074633177 0.001666429247984283 0.0024505350619450157 0.03569336809287439 0.03950235279527152 0.061294039927021746 0.056157172390876364 0.04361457849901721 0.06678097895617935 0.5585565589403414 0.784685636948393 0.9400073084858939 0.8422729086934179 0.6700290694351737 0.27733472967577766; 0.5208071432762769 0.44790727672851793 0.4112620733348449 0.37629227912890906 0.2144655272958558 0.26209976266649593 
0.011972269254734097 0.007004322353126643 0.001936938649234545 0.0008408093661474044 0.0012409452628291877 0.022638140748971483 0.026538216827193314 0.040289711699862236 0.036665355400374564 0.02748985316984334 0.04400815817355934 0.44374622631342764 0.6680658291966547 0.8797756865237958 0.8549882792013699 0.6895556341694858 0.3166405198571644; 0.5891681247974004 0.6090837294623026 0.5248133057027008 0.5327362038278587 0.33368944124026373 0.43261895730019134 0.022886397413172065 0.01418017863278169 0.0035688318479617353 0.0010351877552663635 0.0013814109028705312 0.030641695234509592 0.041619482122225035 0.0524656862959539 0.047934392864311254 0.03415916876832351 0.05624683422303738 0.4144251498547713 0.5845546024608902 0.8657575960256906 0.9832623417700396 0.9050029006040501 0.5474453725428048; 0.6772129558340088 0.7178826818314505 0.6277007998632926 0.6430231195027034 0.4303538604829861 0.5378606847561948 0.03699408218865754 0.023677093828563235 0.006325305381042625 0.0017871776252900312 0.002298515218977774 0.04629682347632554 0.06341517666632882 0.07623734052074496 0.07008717427360553 0.05035919518075388 0.08106449463694666 0.46889441622459455 0.6152795204486762 0.8855923329934225 1.0 0.95317462877265 0.5874035637208949; 0.5749644296822503 0.5614640934267493 0.49299286294186273 0.48468047798417707 0.2942832944957974 0.3752064819757136 0.018804501447141315 0.011435890401526988 0.002955760433948493 0.0009704067285291062 0.0013371056870898458 0.028038292943887058 0.036400349970850336 0.04864456022600958 0.044380183342761476 0.032110546803158134 0.052463579032817904 0.42901781989141297 0.6188663970979027 0.885220316479004 0.9589037954214876 0.8466889875934079 0.4705785731761298; 0.5966298504595813 0.6497959021929185 0.5508610734329186 0.575651052751428 0.37181328597797036 0.48854896269750847 0.02743297932990087 0.01730727969785368 0.004264146632423846 0.0011057799083934747 0.0014316397886228248 0.03325744456483746 0.04705515434685252 0.056183095043217876 0.051409730819183534 0.036153416730729984 0.0598890375403915 0.3978048409442719 0.5482579229273747 0.8366872356964472 0.9911306477477079 0.9482157541926657 0.6192712131547955; 0.6360692761241019 0.7710611517712889 0.6433503699452944 0.7109544049100224 0.5091151748567635 0.6649412466468505 0.05063099709508791 0.033770552150204226 0.00840984174401409 0.0017700462041020729 0.0021163880760639107 0.04928910656810396 0.07518254013006463 0.07896775479712795 0.07279694039680448 0.05038867912666139 0.08284201786267843 0.3828925347230843 0.47978346763635193 0.7530367236790085 0.95317462877265 1.0 0.7536774603025823; 0.37477044321888536 0.4349863927333212 0.3388839022817558 0.3720494346085627 0.21519711025421284 0.3253138317147909 0.010037093974181666 0.0060467417410972725 0.0011875118732063588 0.0002480707283038141 0.0003300424159388177 0.011407470827613422 0.017521330331462935 0.020814669795918655 0.018784268442022023 0.012463171657650072 0.02238095595512163 0.227469653973088 0.3632368032599243 0.6505486498358429 0.8743314602327298 0.8366314316745298 0.6323552576352361; 0.4308494365065684 0.5521961142878209 0.42568510936624304 0.49172463488628315 0.3159269411000278 0.47114743317643165 0.020125652113509692 0.012821784701747635 0.0025763216834118296 0.0004459216390885389 0.000548863514666901 0.01860102653889555 0.03056557154533914 0.03212075732520414 0.029217027826914762 0.01915472861237837 0.03399621120406532 0.23833695351973763 0.3445536527228856 0.6283027956928183 0.8935450448439365 0.9340508327639085 0.803367738440086; 0.42588556793724963 
0.514599887489171 0.40225454811321515 0.45008699593865603 0.2764547431869833 0.4095893081877711 0.015610214582617616 0.009697779434859725 0.001968411270910264 0.0003845846633845169 0.0004920266487864112 0.016205332099098156 0.025484432603544913 0.028598027606397586 0.025939220913629255 0.017212264615019613 0.030497065232210633 0.24978921540415677 0.3752433713507336 0.6671808968998799 0.9094166321017612 0.9070024503194692 0.717969244338302; 0.25616724404676827 0.36064432234093263 0.25458348363076183 0.31381508570267086 0.18628923825270452 0.3185209800604343 0.0082263007895151 0.005088772835099111 0.0008198400207983819 0.00010788314848899607 0.0001332981971132782 0.006693772956053487 0.012191493624789688 0.01228286410458982 0.011044429533844548 0.0068049702629956145 0.0130512812361402 0.1253743856343511 0.2039176585333789 0.43910102741499013 0.7211374530045427 0.7744022012346895 0.802098239373476; 0.28531440235914857 0.4002858555284273 0.28636704091166787 0.3513821641026911 0.21409246587444314 0.35723120432949906 0.010439773572454236 0.006534374708804699 0.001095215285484182 0.00014740693870887804 0.0001802894669450021 0.00845972297584198 0.015272137647549196 0.015263798894214076 0.013764458115931638 0.008556389143525862 0.016176721587662174 0.1409025065904775 0.22200076169575395 0.46602649035313587 0.7524428671746313 0.8131717721925134 0.8296434593833473; 0.24903410779923643 0.35904933061176014 0.25127407578055966 0.31395393231584934 0.18847617044543946 0.3249998991514615 0.008516438259839807 0.005305086528612752 0.0008432117182715994 0.00010526431950490199 0.0001284332102931466 0.006629678280122173 0.012309626926329458 0.012112089324700176 0.010895282626047198 0.006667249932579161 0.012840082356890036 0.11887527798719272 0.1919477286751715 0.4200001386101343 0.7041663984132576 0.76897079297106 0.8256707173813529; 0.22886067841582372 0.3877508069941566 0.26235980762663824 0.35736249399182723 0.2450825591516219 0.42670313988850206 0.01598520519681239 0.010717359719015747 0.0016793924477414128 0.00015125353768022373 0.00016529025327248037 0.009017807932986543 0.018843636469128917 0.015426968275166236 0.013999905415617844 0.008311948797036561 0.01600287954319619 0.09324670458766982 0.13395130511607745 0.31152426355295504 0.5837246484964872 0.7272861670149322 0.9767715590737112; 0.22999157462853717 0.3739192895482388 0.2539677229913929 0.3387311467563276 0.22207608922889335 0.3892663265674966 0.012660670510976866 0.008288158348560032 0.0012841413529425093 0.00012498125252143175 0.00014115406860773277 0.007787904576648366 0.015827422324756663 0.013643282830861742 0.012338925297899555 0.007359598121597874 0.01424797224163647 0.09737079522136055 0.14601439689786624 0.337413917584164 0.6179492443333177 0.741809464150622 0.9445687130630891; 0.17472537202867305 0.31946950253651746 0.20757504481875824 0.2960646945670436 0.2043837708525398 0.37471801359443757 0.012869683364298182 0.00870439065575205 0.001255146163388405 9.40099767528817e-5 9.993772188331276e-5 0.006372183619931549 0.014253099520981844 0.0109507207417646 0.009919585760279565 0.005713849249271425 0.011310824574965789 0.06534582815086555 0.09496735574229902 0.23825024277808274 0.48407885194721434 0.6304863404013172 0.9601419182825064; 0.27226866397259536 0.4654568122945319 0.32400415670963956 0.44057810112560447 0.32622076287640006 0.5362623212460318 0.028066326109481347 0.0195077208623248 0.0033555198691232182 0.00030801400193521903 0.00032463677682729514 0.015144618580948671 0.03137762242865076 0.02470799941856937 0.022599211149285328 
0.01366097735931823 0.02541251142320079 0.11144275482456456 0.14650071169191572 0.3226608405615575 0.5874035637208949 0.7536774603025823 1.0; 0.1378713016583555 0.2643436673110578 0.16623288617478832 0.24439157750945995 0.16654595332868682 0.319866938036741 0.009632633551903613 0.0064994006534456195 0.0008695399375500576 5.753081953910816e-5 6.056131084754415e-5 0.004425777296702749 0.010366726569778452 0.007709894663229764 0.006963352996732379 0.003917093375501153 0.007956455372461188 0.04872180791505365 0.07277779799754423 0.19415214531611902 0.41848453319317924 0.5573025470555464 0.9233874841127124], Int32[1, 2, 3, 4, 6, 7, 24, 25, 30, 46, 47, 51, 52, 53, 54, 56, 58, 72, 74, 78, 86, 89, 99], LIBSVM.SVMNode[LIBSVM.SVMNode(0, 1.0), LIBSVM.SVMNode(0, 2.0), LIBSVM.SVMNode(0, 3.0), LIBSVM.SVMNode(0, 4.0), LIBSVM.SVMNode(0, 6.0), LIBSVM.SVMNode(0, 7.0), LIBSVM.SVMNode(0, 24.0), LIBSVM.SVMNode(0, 25.0), LIBSVM.SVMNode(0, 30.0), LIBSVM.SVMNode(0, 46.0), LIBSVM.SVMNode(0, 47.0), LIBSVM.SVMNode(0, 51.0), LIBSVM.SVMNode(0, 52.0), LIBSVM.SVMNode(0, 53.0), LIBSVM.SVMNode(0, 54.0), LIBSVM.SVMNode(0, 56.0), LIBSVM.SVMNode(0, 58.0), LIBSVM.SVMNode(0, 72.0), LIBSVM.SVMNode(0, 74.0), LIBSVM.SVMNode(0, 78.0), LIBSVM.SVMNode(0, 86.0), LIBSVM.SVMNode(0, 89.0), LIBSVM.SVMNode(0, 99.0)]), 0.0, [1.0; 1.0; 1.0; 1.0; 0.4545873718969774; 0.36172853884920114; 1.0; 1.0; 0.9976825435225717; 1.0; 1.0; -1.0; -1.0; -1.0; -1.0; -0.5005315477488701; -0.21806563021962358; -1.0; -0.3833339180359196; -1.0; -0.7120673582643366; -1.0; -1.0;;], Float64[], Float64[], [-0.015075000482567661], 3, 0.01, 200.0, 0.001, 1.0, 0.5, 0.1, true, false)","category":"page"},{"location":"examples/support-vector-machine/#Prediction","page":"Support Vector Machine","title":"Prediction","text":"","category":"section"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"For evaluation, we create a 100×100 2D grid based on the extent of the training data:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"test_range = range(floor(Int, minimum(X)), ceil(Int, maximum(X)); length=100)\nx_test = ColVecs(mapreduce(collect, hcat, Iterators.product(test_range, test_range)));","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"Again, we pass the result of KernelFunctions.jl's kernelmatrix to LIBSVM:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"y_pred, _ = svmpredict(model, kernelmatrix(k, x_train, x_test));","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"We can see that the kernelized, non-linear classification successfully separates the two classes in the training data:","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"plot(; lim=extrema(test_range), aspect_ratio=1)\ncontourf!(\n test_range,\n test_range,\n y_pred;\n levels=1,\n color=cgrad(:redsblues),\n alpha=0.7,\n colorbar_title=\"prediction\",\n)\nscatter!(X1[:, 1], X1[:, 2]; color=:red, label=\"training data: class –1\")\nscatter!(X2[:, 1], X2[:, 2]; color=:blue, label=\"training data: class 1\")","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector 
Machine","title":"Support Vector Machine","text":"\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n\n\n\n\n\n\n\n\n\n","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"
      Package and system information

      Package information
      Status `~/work/KernelFunctions.jl/KernelFunctions.jl/examples/support-vector-machine/Project.toml`
        [31c24e10] Distributions v0.25.107
        [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`
        [b1bec4e5] LIBSVM v0.8.0
        [98b081ad] Literate v2.16.1
        [91a5bcdd] Plots v1.40.2
        [37e2e46d] LinearAlgebra

      To reproduce this notebook's package environment, you can download the full Manifest.toml.

      System information
      Julia Version 1.10.2
      Commit bd47eca2c8a (2024-03-01 10:14 UTC)
      Build Info:
        Official https://julialang.org/ release
      Platform Info:
        OS: Linux (x86_64-linux-gnu)
        CPU: 4 × AMD EPYC 7763 64-Core Processor
        WORD_SIZE: 64
        LIBM: libopenlibm
        LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
      Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
      Environment:
        JULIA_DEBUG = Documenter
        JULIA_LOAD_PATH = :/home/runner/.julia/packages/JuliaGPsDocs/7M86H/src
      ","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"","category":"page"},{"location":"examples/support-vector-machine/","page":"Support Vector Machine","title":"Support Vector Machine","text":"This page was generated using Literate.jl.","category":"page"},{"location":"metrics/#Metrics","page":"Metrics","title":"Metrics","text":"","category":"section"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"SimpleKernel implementations rely on Distances.jl for efficiently computing the pairwise matrix. This requires a distance measure or metric, such as the commonly used SqEuclidean and Euclidean.","category":"page"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"The metric used by a given kernel type is specified as","category":"page"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"KernelFunctions.metric(::CustomKernel) = SqEuclidean()","category":"page"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"However, there are kernels that can be implemented efficiently using \"metrics\" that do not respect all the definitions expected by Distances.jl. For this reason, KernelFunctions.jl provides additional \"metrics\" such as DotProduct (langle x y rangle) and Delta (delta(xy)).","category":"page"},{"location":"metrics/#Adding-a-new-metric","page":"Metrics","title":"Adding a new metric","text":"","category":"section"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"If you want to create a new \"metric\" just implement the following:","category":"page"},{"location":"metrics/","page":"Metrics","title":"Metrics","text":"struct Delta <: Distances.PreMetric\nend\n\n@inline function Distances._evaluate(::Delta,a::AbstractVector{T},b::AbstractVector{T}) where {T}\n @boundscheck if length(a) != length(b)\n throw(DimensionMismatch(\"first array has length $(length(a)) which does not match the length of the second, $(length(b)).\"))\n end\n return a==b\nend\n\n@inline (dist::Delta)(a::AbstractArray,b::AbstractArray) = Distances._evaluate(dist,a,b)\n@inline (dist::Delta)(a::Number,b::Number) = a==b","category":"page"},{"location":"transform/#input_transforms","page":"Input Transforms","title":"Input Transforms","text":"","category":"section"},{"location":"transform/#Overview","page":"Input Transforms","title":"Overview","text":"","category":"section"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"Transforms are designed to change input data before passing it on to a kernel object.","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"It can be as standard as IdentityTransform returning the same input, or multiplying the data by a scalar with ScaleTransform or by a vector with ARDTransform. 
There is also a more general FunctionTransform that applies an arbitrary function to each input.","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"You can also create a pipeline of Transforms via ChainTransform, e.g.,","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"LinearTransform(rand(10, 5)) ∘ ScaleTransform(2.0)","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"A transformation t can be applied to a single input x with t(x) and to multiple inputs xs with map(t, xs).","category":"page"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"Kernels can be coupled with input transformations with ∘ or its alias compose. This falls back to creating a TransformedKernel but allows more optimized implementations for specific kernels and transformations.","category":"page"},{"location":"transform/#List-of-Input-Transforms","page":"Input Transforms","title":"List of Input Transforms","text":"","category":"section"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"Transform\nIdentityTransform\nScaleTransform\nARDTransform\nARDTransform(::Real, ::Integer)\nLinearTransform\nFunctionTransform\nSelectTransform\nChainTransform\nPeriodicTransform","category":"page"},{"location":"transform/#KernelFunctions.Transform","page":"Input Transforms","title":"KernelFunctions.Transform","text":"Transform\n\nAbstract type defining a transformation of the input.\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.IdentityTransform","page":"Input Transforms","title":"KernelFunctions.IdentityTransform","text":"IdentityTransform()\n\nTransformation that returns exactly the input.\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.ScaleTransform","page":"Input Transforms","title":"KernelFunctions.ScaleTransform","text":"ScaleTransform(l::Real)\n\nTransformation that multiplies the input elementwise with l.\n\nExamples\n\njulia> l = rand(); t = ScaleTransform(l); X = rand(100, 10);\n\njulia> map(t, ColVecs(X)) == ColVecs(l .* X)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.ARDTransform","page":"Input Transforms","title":"KernelFunctions.ARDTransform","text":"ARDTransform(v::AbstractVector)\n\nTransformation that multiplies the input elementwise by v.\n\nExamples\n\njulia> v = rand(10); t = ARDTransform(v); X = rand(10, 100);\n\njulia> map(t, ColVecs(X)) == ColVecs(v .* X)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.ARDTransform-Tuple{Real, Integer}","page":"Input Transforms","title":"KernelFunctions.ARDTransform","text":"ARDTransform(s::Real, dims::Integer)\n\nCreate an ARDTransform with vector fill(s, dims).\n\n\n\n\n\n","category":"method"},{"location":"transform/#KernelFunctions.LinearTransform","page":"Input Transforms","title":"KernelFunctions.LinearTransform","text":"LinearTransform(A::AbstractMatrix)\n\nLinear transformation of the input realised by the matrix A.\n\nThe second dimension of A must match the number of features of the target.\n\nExamples\n\njulia> A = rand(10, 5); t = LinearTransform(A); X = rand(5, 100);\n\njulia> map(t, ColVecs(X)) == ColVecs(A * X)\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.FunctionTransform","page":"Input Transforms","title":"KernelFunctions.FunctionTransform","text":"FunctionTransform(f)\n\nTransformation 
that applies function f to the input.\n\nMake sure that f can act on an input. For instance, if the inputs are vectors, use f(x) = sin.(x) instead of f = sin.\n\nExamples\n\njulia> f(x) = sum(x); t = FunctionTransform(f); X = randn(100, 10);\n\njulia> map(t, ColVecs(X)) == ColVecs(sum(X; dims=1))\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.SelectTransform","page":"Input Transforms","title":"KernelFunctions.SelectTransform","text":"SelectTransform(dims)\n\nTransformation that selects the dimensions dims of the input.\n\nExamples\n\njulia> dims = [1, 3, 5, 6, 7]; t = SelectTransform(dims); X = rand(100, 10);\n\njulia> map(t, ColVecs(X)) == ColVecs(X[dims, :])\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.ChainTransform","page":"Input Transforms","title":"KernelFunctions.ChainTransform","text":"ChainTransform(transforms)\n\nTransformation that applies a chain of transformations ts to the input.\n\nThe transformation first(ts) is applied first.\n\nExamples\n\njulia> l = rand(); A = rand(3, 4); t1 = ScaleTransform(l); t2 = LinearTransform(A);\n\njulia> X = rand(4, 10);\n\njulia> map(ChainTransform([t1, t2]), ColVecs(X)) == ColVecs(A * (l .* X))\ntrue\n\njulia> map(t2 ∘ t1, ColVecs(X)) == ColVecs(A * (l .* X))\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#KernelFunctions.PeriodicTransform","page":"Input Transforms","title":"KernelFunctions.PeriodicTransform","text":"PeriodicTransform(f)\n\nTransformation that maps the input elementwise onto the unit circle with frequency f.\n\nSamples from a GP with a kernel with this transformation applied to the inputs will produce samples with frequency f.\n\nExamples\n\njulia> f = rand(); t = PeriodicTransform(f); x = rand();\n\njulia> t(x) == [sinpi(2 * f * x), cospi(2 * f * x)]\ntrue\n\n\n\n\n\n","category":"type"},{"location":"transform/#Convenience-functions","page":"Input Transforms","title":"Convenience functions","text":"","category":"section"},{"location":"transform/","page":"Input Transforms","title":"Input Transforms","text":"with_lengthscale\nmedian_heuristic_transform","category":"page"},{"location":"transform/#KernelFunctions.with_lengthscale","page":"Input Transforms","title":"KernelFunctions.with_lengthscale","text":"with_lengthscale(kernel::Kernel, lengthscale::Real)\n\nConstruct a transformed kernel with lengthscale.\n\nExamples\n\njulia> kernel = with_lengthscale(SqExponentialKernel(), 2.5);\n\njulia> x = rand(2);\n\njulia> y = rand(2);\n\njulia> kernel(x, y) ≈ (SqExponentialKernel() ∘ ScaleTransform(0.4))(x, y)\ntrue\n\n\n\n\n\nwith_lengthscale(kernel::Kernel, lengthscales::AbstractVector{<:Real})\n\nConstruct a transformed \"ARD\" kernel with different lengthscales for each dimension.\n\nExamples\n\njulia> kernel = with_lengthscale(SqExponentialKernel(), [0.5, 2.5]);\n\njulia> x = rand(2);\n\njulia> y = rand(2);\n\njulia> kernel(x, y) ≈ (SqExponentialKernel() ∘ ARDTransform([2, 0.4]))(x, y)\ntrue\n\n\n\n\n\n","category":"function"},{"location":"transform/#KernelFunctions.median_heuristic_transform","page":"Input Transforms","title":"KernelFunctions.median_heuristic_transform","text":"median_heuristic_transform(distance, x::AbstractVector)\n\nCreate a ScaleTransform that divides the input elementwise by the median distance of the data points in x.\n\nThe distance has to support pairwise evaluation with KernelFunctions.pairwise. 
All PreMetrics of the package Distances.jl, such as Euclidean, satisfy this requirement automatically.\n\nExamples\n\njulia> using Distances, Statistics\n\njulia> x = ColVecs(rand(100, 10));\n\njulia> t = median_heuristic_transform(Euclidean(), x);\n\njulia> y = map(t, x);\n\njulia> median(euclidean(y[i], y[j]) for i in 1:10, j in 1:10 if i != j) ≈ 1\ntrue\n\n\n\n\n\n","category":"function"},{"location":"userguide/#User-guide","page":"User guide","title":"User guide","text":"","category":"section"},{"location":"userguide/#Kernel-Creation","page":"User guide","title":"Kernel Creation","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"To create a kernel object, choose one of the pre-implemented kernels (see Kernel Functions) or create your own (see Creating your own kernel). For example, a squared exponential kernel is created by","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":" k = SqExponentialKernel()","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"tip: How do I set the lengthscale(s)?\nInstead of having lengthscale(s) for each kernel, we use Transform objects which act on the inputs before passing them to the kernel. Note that transforms such as ScaleTransform and ARDTransform multiply the input by a scale factor, which corresponds to the inverse of the lengthscale. For example, a lengthscale of 0.5 is equivalent to premultiplying the input by 2.0, and you can create the corresponding kernel in either of the following equivalent ways:\nk = SqExponentialKernel() ∘ ScaleTransform(2.0)\nk = compose(SqExponentialKernel(), ScaleTransform(2.0))\nAlternatively, you can use the convenience function with_lengthscale:\nk = with_lengthscale(SqExponentialKernel(), 0.5)\nwith_lengthscale also works with vector-valued lengthscales for multiple-dimensional inputs, and is equivalent to pre-composing with an ARDTransform:\nlength_scales = [1.0, 2.0]\nk = with_lengthscale(SqExponentialKernel(), length_scales)\nk = SqExponentialKernel() ∘ ARDTransform(1 ./ length_scales)\nCheck the Input Transforms page for more details.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"tip: How do I set the kernel variance?\nTo premultiply the kernel by a variance, you can use * with a scalar number:\nk = 3.0 * SqExponentialKernel()","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"tip: How do I use a Mahalanobis kernel?\nThe MahalanobisKernel(; P=P), defined by\n$k(x, x'; P) = \exp\big(-(x - x')^\top P (x - x')\big)$\nfor a positive definite matrix $P = Q^\top Q$, was removed in 0.9. 
Instead you can use a squared exponential kernel together with a LinearTransform of the inputs:\nk = SqExponentialKernel() ∘ LinearTransform(sqrt(2) .* Q)\nAnalogously, you can combine other kernels such as the PiecewisePolynomialKernel with a LinearTransform of the inputs to obtain a kernel that is a function of the Mahalanobis distance between inputs.","category":"page"},{"location":"userguide/#Using-a-Kernel-Function","page":"User guide","title":"Using a Kernel Function","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"To evaluate the kernel function on two vectors, you simply call the kernel object:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"k = SqExponentialKernel()\nx1 = rand(3)\nx2 = rand(3)\nk(x1, x2)","category":"page"},{"location":"userguide/#Creating-a-Kernel-Matrix","page":"User guide","title":"Creating a Kernel Matrix","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"Kernel matrices can be created via the kernelmatrix function, or kernelmatrix_diag for only the diagonal. For example, for a collection of 10 Real-valued inputs:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"k = SqExponentialKernel()\nx = rand(10)\nkernelmatrix(k, x) # 10×10 matrix","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"If your inputs are multi-dimensional, it is common to represent them as a matrix. For example","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"X = rand(10, 5)","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"However, it is ambiguous whether this represents a collection of 10 5-dimensional row-vectors, or 5 10-dimensional column-vectors. 
Therefore, we require users to provide some more information.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"You can write RowVecs(X) to declare that X contains 10 5-dimensional row-vectors, or ColVecs(X) to declare that X contains 5 10-dimensional column-vectors, then","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"kernelmatrix(k, RowVecs(X)) # returns a 10×10 matrix -- each row of X treated as input\nkernelmatrix(k, ColVecs(X)) # returns a 5×5 matrix -- each column of X treated as input","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"This is the mechanism used throughout KernelFunctions.jl to handle multi-dimensional inputs.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"You can utilise the obsdim keyword argument if you prefer:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"kernelmatrix(k, X; obsdim=1) # same as RowVecs(X)\nkernelmatrix(k, X; obsdim=2) # same as ColVecs(X)","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"This is similar to the convention used in Distances.jl.","category":"page"},{"location":"userguide/#So-what-type-should-I-use-to-represent-a-collection-of-inputs?","page":"User guide","title":"So what type should I use to represent a collection of inputs?","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"The central assumption made by KernelFunctions.jl is that all collections of N inputs are represented by AbstractVectors of length N. Abstraction is then used to ensure that efficiency is retained, ColVecs and RowVecs being the most obvious examples of this.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"Concretely:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"For Real-valued inputs (scalars), a Vector{<:Real} is fine.\nFor vector-valued inputs, consider a ColVecs or RowVecs.\nFor a new input type, simply represent collections of inputs of this type as an AbstractVector.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"See Input Types and Design for a more thorough discussion of the considerations made when this design was adopted.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"The obsdim kwarg mentioned above is a special case for vector-valued inputs stored in a matrix. 
It is implemented as a lightweight wrapper that constructs either a RowVecs or ColVecs from your inputs, and passes this on.","category":"page"},{"location":"userguide/#Output-Types","page":"User guide","title":"Output Types","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"In addition to plain Matrix-like output, KernelFunctions.jl supports specific output types:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"For a positive-definite matrix object of type PDMat from PDMats.jl, you can call the following:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"using PDMats\nk = SqExponentialKernel()\nK = kernelpdmat(k, RowVecs(X)) # PDMat\nK = kernelpdmat(k, X; obsdim=1) # PDMat","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"It will create a matrix and in case of bad conditioning will add some diagonal noise until the matrix is considered positive-definite; it will then return a PDMat object. For this method to work in your code you need to include using PDMats first.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"For a Kronecker matrix, we rely on Kronecker.jl. Here are two examples:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"using Kronecker\nx = range(0, 1; length=10)\ny = range(0, 1; length=50)\nK = kernelkronmat(k, [x, y]) # Kronecker matrix\nK = kernelkronmat(k, x, 5) # Kronecker matrix","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"Make sure that k is a kernel compatible with such constructions (with iskroncompatible(k)). Both methods will return a Kronecker matrix. For those methods to work in your code you need to include using Kronecker first.","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"For a Nystrom approximation: kernelmatrix(nystrom(k, X, ρ, obsdim=1)) where ρ is the fraction of data samples used in the approximation.","category":"page"},{"location":"userguide/#Composite-Kernels","page":"User guide","title":"Composite Kernels","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"Sums and products of kernels are also valid kernels. They can be created via KernelSum and KernelProduct or using simple operators + and *. For example:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"k1 = SqExponentialKernel()\nk2 = Matern32Kernel()\nk = 0.5 * k1 + 0.2 * k2 # KernelSum\nk = k1 * k2 # KernelProduct","category":"page"},{"location":"userguide/#Kernel-Parameters","page":"User guide","title":"Kernel Parameters","text":"","category":"section"},{"location":"userguide/","page":"User guide","title":"User guide","text":"What if you want to differentiate through the kernel parameters? This is easy even in a highly nested structure such as:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"k = (\n 0.5 * SqExponentialKernel() * Matern12Kernel() +\n 0.2 * (LinearKernel() ∘ ScaleTransform(2.0) + PolynomialKernel())\n) ∘ ARDTransform([0.1, 0.5])","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"One can access the named tuple of trainable parameters via Functors.functor from Functors.jl. 
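As a brief sketch of what this looks like (assuming Functors.jl is loaded; the children returned depend on the kernel's structure):\n\nusing KernelFunctions, Functors\nk = 2.0 * (SqExponentialKernel() ∘ ScaleTransform(3.0))\nparams, reconstruct = Functors.functor(k)  # named tuple of children, plus a rebuilder\nk2 = reconstruct(params)                   # rebuilds an equivalent kernel\n\n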
This means that in practice you can implicitly optimize the kernel parameters by calling:","category":"page"},{"location":"userguide/","page":"User guide","title":"User guide","text":"using Flux\nkernelparams = Flux.params(k)\nFlux.gradient(kernelparams) do\n # ... some loss function on the kernel ....\nend","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"EditURL = \"../../../../examples/gaussian-process-priors/script.jl\"","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"EditURL = \"https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/blob/master/examples/gaussian-process-priors/script.jl\"","category":"page"},{"location":"examples/gaussian-process-priors/#Gaussian-process-prior-samples","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"(Image: )","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"You are seeing the HTML output generated by Documenter.jl and Literate.jl from the Julia source file. The corresponding notebook can be viewed in nbviewer.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"The kernels defined in this package can also be used to specify the covariance of a Gaussian process prior. 
A Gaussian process (GP) is defined by its mean function $m(\cdot)$ and its covariance function or kernel $k(\cdot, \cdot)$:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"$f \sim \mathcal{GP}\big(m(\cdot), k(\cdot, \cdot)\big)$","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"In this notebook we show how the choice of kernel affects the samples from a GP (with zero mean).","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"# Load required packages\nusing KernelFunctions, LinearAlgebra\nusing Plots, Plots.PlotMeasures\ndefault(; lw=1.0, legendfontsize=8.0)\nusing Random: seed!\nseed!(42); # reproducibility","category":"page"},{"location":"examples/gaussian-process-priors/#Evaluation-at-finite-set-of-points","page":"Gaussian process prior samples","title":"Evaluation at finite set of points","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"The function values $\mathbf{f} = \{f(x_n)\}_{n=1}^N$ of the GP at a finite number $N$ of points $X = \{x_n\}_{n=1}^N$ follow a multivariate normal distribution $\mathbf{f} \sim \mathcal{MVN}(\mathbf{m}, \mathrm{K})$ with mean vector $\mathbf{m}$ and covariance matrix $\mathrm{K}$, where","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"$$\begin{aligned}\n\mathbf{m}_i &= m(x_i), \\\\\n\mathrm{K}_{ij} &= k(x_i, x_j),\n\end{aligned}$$","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"with $1 \le i, j \le N$.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"We can visualize the infinite-dimensional GP by evaluating it on a fine grid to approximate the dense real line:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"num_inputs = 101\nxlim = (-5, 5)\nX = range(xlim...; length=num_inputs);","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"Given a kernel k, we can compute the kernel matrix as K = kernelmatrix(k, X).","category":"page"},{"location":"examples/gaussian-process-priors/#Random-samples","page":"Gaussian process prior samples","title":"Random samples","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"To sample from the multivariate normal distribution $p(\mathbf{f}) = \mathcal{MVN}(0, \mathrm{K})$, we could make use of Distributions.jl and call rand(MvNormal(K)). 
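For example, a small sketch (assumes Distributions.jl; fresh names Xg, Kg, fg avoid clobbering the notebook's variables, and jitter is added in the same spirit as the \"nugget\" below):\n\nusing Distributions, KernelFunctions, LinearAlgebra\nXg = range(-5, 5; length=101)                            # a grid like the one above\nKg = kernelmatrix(SqExponentialKernel(), Xg) + 1e-6 * I  # jitter for positive definiteness\nfg = rand(MvNormal(zeros(length(Xg)), Kg))               # one zero-mean GP draw on the grid\n\n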
Alternatively, we could use the AbstractGPs.jl package and construct a GP object which we evaluate at the points of interest and from which we can then sample: rand(GP(k)(X)).","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"Here, we will explicitly construct samples using the Cholesky factorization $\mathrm{L} = \operatorname{cholesky}(\mathrm{K})$, with $\mathbf{f} = \mathrm{L} \mathbf{v}$, where $\mathbf{v} \sim \mathcal{N}(0, \mathbf{I})$ is a vector of standard-normal random variables.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"We will use the same randomness $\mathbf{v}$ to generate comparable samples across different kernels.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"num_samples = 7\nv = randn(num_inputs, num_samples);","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"Mathematically, a kernel matrix is by definition positive semi-definite, but due to finite-precision inaccuracies, the computed kernel matrix might not be exactly positive definite. To avoid Cholesky errors, we add a small \"nugget\" term on the diagonal:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"function mvn_sample(K)\n L = cholesky(K + 1e-6 * I)\n f = L.L * v\n return f\nend;","category":"page"},{"location":"examples/gaussian-process-priors/#Visualization","page":"Gaussian process prior samples","title":"Visualization","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"We now define a function that visualizes a kernel for us.","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"function visualize(k::Kernel)\n K = kernelmatrix(k, X)\n f = mvn_sample(K)\n\n p_kernel_2d = heatmap(\n X,\n X,\n K;\n yflip=true,\n colorbar=false,\n ylabel=string(nameof(typeof(k))),\n ylim=xlim,\n yticks=([xlim[1], 0, xlim[end]], [\"\\u22125\", raw\"$x'$\", \"5\"]),\n vlim=(0, 1),\n title=raw\"$k(x, x')$\",\n aspect_ratio=:equal,\n left_margin=5mm,\n )\n\n p_kernel_cut = plot(\n X,\n k.(X, 0.0);\n title=string(raw\"$k(x, x_\\mathrm{ref})$\"),\n label=raw\"$x_\\mathrm{ref}=0.0$\",\n legend=:topleft,\n foreground_color_legend=nothing,\n )\n plot!(X, k.(X, 1.5); label=raw\"$x_\\mathrm{ref}=1.5$\")\n\n p_samples = plot(X, f; c=\"blue\", title=raw\"$f(x)$\", ylim=(-3, 3), label=nothing)\n\n return plot(\n p_kernel_2d,\n p_kernel_cut,\n p_samples;\n layout=(1, 3),\n xlabel=raw\"$x$\",\n xlim=xlim,\n xticks=collect(xlim),\n )\nend;","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"We can now visualize a kernel and show samples from a Gaussian process with a given kernel:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"plot(visualize(SqExponentialKernel()); size=(800, 210), bottommargin=5mm, 
topmargin=5mm)","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"(Plot: kernel matrix heatmap, kernel slices at two reference points, and prior samples for the squared exponential kernel.)","category":"page"},{"location":"examples/gaussian-process-priors/#Kernel-comparison","page":"Gaussian process prior samples","title":"Kernel comparison","text":"","category":"section"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"This also allows us to compare different kernels:","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"kernels = [\n Matern12Kernel(),\n Matern32Kernel(),\n Matern52Kernel(),\n SqExponentialKernel(),\n WhiteKernel(),\n ConstantKernel(),\n LinearKernel(),\n compose(PeriodicKernel(), ScaleTransform(0.2)),\n NeuralNetworkKernel(),\n GibbsKernel(; lengthscale=x -> sum(exp ∘ sin, x)),\n]\nplot(\n [visualize(k) for k in kernels]...;\n layout=(length(kernels), 1),\n size=(800, 220 * length(kernels) + 100),\n)","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"(Plot: one row per kernel, each showing the kernel matrix, kernel slices, and prior samples.)","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"
      \n
Package and system information

Package information:

Status `~/work/KernelFunctions.jl/KernelFunctions.jl/examples/gaussian-process-priors/Project.toml`
  [31c24e10] Distributions v0.25.107
  [ec8451be] KernelFunctions v0.10.63 `/home/runner/work/KernelFunctions.jl/KernelFunctions.jl#master`
  [98b081ad] Literate v2.16.1
  [91a5bcdd] Plots v1.40.2
  [37e2e46d] LinearAlgebra
  [9a3f8284] Random

To reproduce this notebook's package environment, download the full Manifest.toml.

System information:

Julia Version 1.10.2
Commit bd47eca2c8a (2024-03-01 10:14 UTC)
Build Info: Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 4 × AMD EPYC 7763 64-Core Processor
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
      ","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"","category":"page"},{"location":"examples/gaussian-process-priors/","page":"Gaussian process prior samples","title":"Gaussian process prior samples","text":"This page was generated using Literate.jl.","category":"page"},{"location":"#KernelFunctions.jl","page":"Home","title":"KernelFunctions.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"KernelFunctions.jl is a general purpose kernel package. It provides a flexible framework for creating kernel functions and manipulating them, and an extensive collection of implementations. The main goals of this package are:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Flexibility: operations between kernels should be fluid and easy without breaking, with a user-friendly API.\nPlug-and-play: being model-agnostic; including the kernels before/after other steps should be straightforward. To interoperate well with generic packages for handling parameters like ParameterHandling.jl and FluxML's Functors.jl.\nAutomatic Differentiation compatibility: all kernel functions which ought to be differentiable using AD packages like ForwardDiff.jl or Zygote.jl should be.","category":"page"},{"location":"","page":"Home","title":"Home","text":"This package replaces the now-defunct MLKernels.jl. It incorporates lots of excellent existing work from packages such as GaussianProcesses.jl, and is used in downstream packages such as AbstractGPs.jl, ApproximateGPs.jl, Stheno.jl, and AugmentedGaussianProcesses.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"See the User guide for a brief introduction.","category":"page"}] } diff --git a/dev/transform/index.html b/dev/transform/index.html index eddecd424..3e4192e68 100644 --- a/dev/transform/index.html +++ b/dev/transform/index.html @@ -1,20 +1,20 @@ -Input Transforms · KernelFunctions.jl


      Input Transforms

      Overview

      Transforms are designed to change input data before passing it on to a kernel object.

A transform can be as simple as IdentityTransform, which returns its input unchanged; it can multiply the data by a scalar (ScaleTransform) or elementwise by a vector (ARDTransform). The more general FunctionTransform applies an arbitrary function to each input.

You can also create a pipeline of Transforms via ChainTransform or its operator form ∘, e.g.,

LinearTransform(rand(10, 5)) ∘ ScaleTransform(2.0)

      A transformation t can be applied to a single input x with t(x) and to multiple inputs xs with map(t, xs).
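For instance (a minimal sketch of ours, not from the original docs; it assumes KernelFunctions is loaded and relies on ScaleTransform acting on a scalar as plain multiplication, per its docstring below):

julia> t = ScaleTransform(3.0);

julia> t(2.0)
6.0

julia> map(t, [1.0, 2.0]) == [3.0, 6.0]
true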

Kernels can be coupled with input transformations via ∘ or its alias compose. This falls back to creating a TransformedKernel, but allows more optimized implementations for specific kernels and transformations.
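A quick check of this composition (again a sketch of ours; the numbers are arbitrary):

julia> k = SqExponentialKernel() ∘ ScaleTransform(2.0);

julia> k isa KernelFunctions.TransformedKernel
true

julia> k(1.0, 2.0) ≈ SqExponentialKernel()(2.0 * 1.0, 2.0 * 2.0)
true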

      List of Input Transforms

      KernelFunctions.ScaleTransformType
      ScaleTransform(l::Real)

Transformation that multiplies the input elementwise by l.

      Examples

      julia> l = rand(); t = ScaleTransform(l); X = rand(100, 10);
       
       julia> map(t, ColVecs(X)) == ColVecs(l .* X)
true
      source
      KernelFunctions.ARDTransformType
      ARDTransform(v::AbstractVector)

      Transformation that multiplies the input elementwise by v.

      Examples

      julia> v = rand(10); t = ARDTransform(v); X = rand(10, 100);
       
       julia> map(t, ColVecs(X)) == ColVecs(v .* X)
true
      source
      KernelFunctions.LinearTransformType
      LinearTransform(A::AbstractMatrix)

      Linear transformation of the input realised by the matrix A.

The second dimension of A must match the number of features of the input, i.e. size(A, 2) == length(x) for an input vector x.

      Examples

      julia> A = rand(10, 5); t = LinearTransform(A); X = rand(5, 100);
       
       julia> map(t, ColVecs(X)) == ColVecs(A * X)
true
      source
      KernelFunctions.FunctionTransformType
      FunctionTransform(f)

      Transformation that applies function f to the input.

      Make sure that f can act on an input. For instance, if the inputs are vectors, use f(x) = sin.(x) instead of f = sin.

      Examples

      julia> f(x) = sum(x); t = FunctionTransform(f); X = randn(100, 10);
       
       julia> map(t, ColVecs(X)) == ColVecs(sum(X; dims=1))
true
      source
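Following the note in the docstring above, an elementwise sine transform would be written with a broadcasting closure. A small sketch of ours, in the same style as the docstring example:

julia> t = FunctionTransform(x -> sin.(x)); X = rand(3, 5);

julia> map(t, ColVecs(X)) == ColVecs(sin.(X))
true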
      KernelFunctions.SelectTransformType
      SelectTransform(dims)

      Transformation that selects the dimensions dims of the input.

      Examples

      julia> dims = [1, 3, 5, 6, 7]; t = SelectTransform(dims); X = rand(100, 10);
       
       julia> map(t, ColVecs(X)) == ColVecs(X[dims, :])
true
      source
      KernelFunctions.ChainTransformType
      ChainTransform(transforms)

Transformation that applies a chain of transformations transforms to the input.

The transformation first(transforms) is applied first.

      Examples

      julia> l = rand(); A = rand(3, 4); t1 = ScaleTransform(l); t2 = LinearTransform(A);
       
       julia> X = rand(4, 10);
       
julia> map(ChainTransform([t1, t2]), ColVecs(X)) == ColVecs(A * (l .* X))
true
       
       julia> map(t2 ∘ t1, ColVecs(X)) == ColVecs(A * (l .* X))
true
      source
      KernelFunctions.PeriodicTransformType
      PeriodicTransform(f)

      Transformation that maps the input elementwise onto the unit circle with frequency f.

Samples from a GP whose kernel has this transformation applied to its inputs will be periodic with frequency f.

      Examples

      julia> f = rand(); t = PeriodicTransform(f); x = rand();
       
       julia> t(x) == [sinpi(2 * f * x), cospi(2 * f * x)]
true
      source
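The frequency claim above can be checked directly: since the transform has period 1/f, any kernel composed with it is periodic with that period. A small sketch of ours (values are arbitrary):

julia> f = 2.0; k = SqExponentialKernel() ∘ PeriodicTransform(f);

julia> k(0.3 + 1/f, 0.4) ≈ k(0.3, 0.4)
true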

      Convenience functions

      KernelFunctions.with_lengthscaleFunction
      with_lengthscale(kernel::Kernel, lengthscale::Real)

Construct a transformed kernel with lengthscale lengthscale. This is equivalent to kernel ∘ ScaleTransform(inv(lengthscale)), as the example below illustrates.

      Examples

      julia> kernel = with_lengthscale(SqExponentialKernel(), 2.5);
       
       julia> x = rand(2);
       
       julia> y = rand(2);
       
       julia> kernel(x, y) ≈ (SqExponentialKernel() ∘ ScaleTransform(0.4))(x, y)
true
      source
      with_lengthscale(kernel::Kernel, lengthscales::AbstractVector{<:Real})

      Construct a transformed "ARD" kernel with different lengthscales for each dimension.

      Examples

      julia> kernel = with_lengthscale(SqExponentialKernel(), [0.5, 2.5]);
       
       julia> x = rand(2);
       
       julia> y = rand(2);
       
       julia> kernel(x, y) ≈ (SqExponentialKernel() ∘ ARDTransform([2, 0.4]))(x, y)
true
      source
      KernelFunctions.median_heuristic_transformFunction
      median_heuristic_transform(distance, x::AbstractVector)

      Create a ScaleTransform that divides the input elementwise by the median distance of the data points in x.

The distance has to support pairwise evaluation with KernelFunctions.pairwise. All PreMetrics from the Distances.jl package, such as Euclidean, satisfy this requirement automatically.

      Examples

      julia> using Distances, Statistics
       
       julia> x = ColVecs(rand(100, 10));
       
julia> t = median_heuristic_transform(Euclidean(), x);

       julia> y = map(t, x);
       
       julia> median(euclidean(y[i], y[j]) for i in 1:10, j in 1:10 if i != j) ≈ 1
true
      source
diff --git a/dev/userguide/index.html b/dev/userguide/index.html

kernelparams = Flux.params(k)
Flux.gradient(kernelparams) do
    # ... some loss function on the kernel ...
end
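The loss body is elided in the user guide snippet above. As a purely hypothetical stand-in (our placeholder, not the guide's), any differentiable scalar function of the kernel can go there, for example the sum of a kernel matrix over some inputs x, assuming the trainable kernel k from the user guide and that KernelFunctions and Flux are loaded:

x = rand(5)
kernelparams = Flux.params(k)
grads = Flux.gradient(kernelparams) do
    sum(kernelmatrix(k, x))  # placeholder loss: differentiates through the kernel's parameters
end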