diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index d3bd6e6e3..421b5e82c 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-01-24T01:54:27","documenter_version":"1.2.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-01-25T11:14:55","documenter_version":"1.2.1"}} \ No newline at end of file diff --git a/dev/caesar_framework/index.html b/dev/caesar_framework/index.html index 69281139e..266238cc2 100644 --- a/dev/caesar_framework/index.html +++ b/dev/caesar_framework/index.html @@ -1,2 +1,2 @@ -Pkg Framework · Caesar.jl

The Caesar Framework

The Caesar.jl package is an "umbrella" framework around other dedicated algorithmic packages. While most of the packages are implemented in native Julia (JuliaPro), a few dependencies are wrapped C libraries. Note that C/C++ can be incorporated with zero overhead, as was done with AprilTags.jl.

FAQ: Why use Julia?

AMP / IIF / RoME

Robot Motion Estimate (RoME.jl) can operate in the conventional SLAM manner, using local memory (dictionaries), or alternatively distribute over a persisted DistributedFactorGraphs.jl through common serialization and graph storage/database technologies; see this article as an example [1.3]. A variety of 2D plotting, 3D visualization, serialization, middleware, and analysis tools come standard, as provided by the associated packages. RoME.jl combines reference frame transformations and robotics SLAM tools around the back-end solver provided by IncrementalInference.jl.

Details about the accompanying packages:

  • IncrementalInference.jl supplies the algebraic logic for factor graph inference with the Bayes tree, and itself depends on several packages.
  • RoME.jl introduces nodes and factors that are useful to robotic navigation.
  • ApproxManifoldProducts.jl provides on-manifold belief product operations.

Visualization (Arena.jl/RoMEPlotting.jl)

Caesar visualization (plotting of results, graphs, and data) is provided by 2D and 3D packages respectively:

  • RoMEPlotting.jl is a set of scripts that provides MATLAB-style plotting of factor graph beliefs, mostly supporting 2D visualization with some support for projections of 3D;
  • Arena.jl package, which is a collection of 3D visualization tools.

Multilanguage Interops: NavAbility.io SDKs and APIs

The Caesar framework is not limited to direct Julia use. Check out www.NavAbility.io, or contact directly at info@navability.io, for more details. Also see the community multi-language page.

diff --git a/dev/concepts/2d_plotting/index.html b/dev/concepts/2d_plotting/index.html index 0859704b6..7c8c01acf 100644 --- a/dev/concepts/2d_plotting/index.html +++ b/dev/concepts/2d_plotting/index.html @@ -232,4 +232,4 @@

plAll = plotBelief(fg, [:x0; :x1], levels=3)
 # plAll |> PNG("/tmp/testX1.png",20cm,15cm)

-

Note

The functions hstack and vstack are provided through the Gadfly package and allow the user to build a nearly arbitrary composition of plots.

Please see KernelDensityEstimatePlotting package source for more features.

diff --git a/dev/concepts/arena_visualizations/index.html b/dev/concepts/arena_visualizations/index.html index b816a037d..b37bbea78 100644 --- a/dev/concepts/arena_visualizations/index.html +++ b/dev/concepts/arena_visualizations/index.html @@ -35,4 +35,4 @@ vc = startdefaultvisualization() visualize(fg, vc, drawlandms=true, densitymeshes=[:l1;:x2]) visualizeDensityMesh!(vc, fg, :l1) -# visualizeallposes!(vc, fg, drawlandms=false)

For more information see JuliaRobotics/MeshCat.jl.

2nd Generation 3D Viewer (VTK / Director)

Note

This code is obsolete

Previous versions used the much larger VTK-based Director, available via the DrakeVisualizer.jl package. This requires the following preinstalled packages:

    sudo apt-get install libvtk5-qt4-dev python-vtk

1st Generation MIT LCM Collections viewer

This code has been removed.

diff --git a/dev/concepts/available_varfacs/index.html b/dev/concepts/available_varfacs/index.html index 201f2a4a2..cb892cdd7 100644 --- a/dev/concepts/available_varfacs/index.html +++ b/dev/concepts/available_varfacs/index.html @@ -26,4 +26,4 @@ M = SE(3) p0 = identity_element(M) -δ(x,y,z,α,β,γ) = vee(M, p0, log(M, p0, zRz))source

<!– PartialPose3XYYaw –> <!– PartialPriorRollPitchZ –>

Extending Caesar with New Variables and Factors

A question that frequently arises is how to design custom variables and factors to solve a specific type of graph. One strength of Caesar is the ability to incorporate new variables and factors at will. Please refer to Adding Factors for more information on creating your own factors.

diff --git a/dev/concepts/building_graphs/index.html b/dev/concepts/building_graphs/index.html index f7941f6a1..3c51eea1c 100644 --- a/dev/concepts/building_graphs/index.html +++ b/dev/concepts/building_graphs/index.html @@ -53,4 +53,4 @@ end

[OPTIONAL] Understanding Internal Factor Naming Convention

The factor name used by Caesar is automatically generated from the connected variable labels:

addFactor!(fg, [:x0; :x1],...)

will create a factor with name :x0x1f1

Were you to add another factor between :x0 and :x1:

addFactor!(fg, [:x0; :x1],...)

will create a second factor with the name :x0x1f2.
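The naming scheme above can be sketched in plain Julia; `makeFactorName` is a hypothetical helper for illustration only, not part of Caesar's API:

```julia
# Hypothetical helper (not Caesar's API) illustrating the automatic naming
# scheme: concatenate the variable labels and append an incrementing
# per-pair factor counter, e.g. :x0x1f1, :x0x1f2.
function makeFactorName(counts::Dict{String,Int}, labels::Vector{Symbol})
    base = join(string.(labels))        # "x0x1"
    n = get(counts, base, 0) + 1        # next index for this variable pair
    counts[base] = n
    return Symbol(base, "f", n)         # :x0x1f1
end

counts = Dict{String,Int}()
name1 = makeFactorName(counts, [:x0, :x1])   # :x0x1f1
name2 = makeFactorName(counts, [:x0, :x1])   # :x0x1f2
```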

Adding Tags

It is possible to add tags to variables and factors that make later graph management tasks easier, e.g.:

addVariable!(fg, :l7_3, Pose2, tags=[:APRILTAG; :LANDMARK])

Drawing the Factor Graph

Once you have a graph, you can visualize the graph as follows (beware though if the fg object is large):

# requires `sudo apt-get install graphviz`
 drawGraph(fg, show=true)

By setting show=true, the application evince will be called to show the fg.pdf file that was created using GraphViz. A GraphPlot.jl visualization engine is also available.

using GraphPlot
 plotDFG(fg)
IncrementalInference.drawGraphFunction
drawGraph(fgl; viewerapp, filepath, engine, show)
-

Draw and show the factor graph <:AbstractDFG via system graphviz and xdot app.

Notes

  • Requires system install on Linux of sudo apt-get install xdot
  • Should not be calling outside programs.
  • Need long term solution
  • DFG's toDotFile a better solution – view with xdot application.
  • also try engine={"sfdp","fdp","dot","twopi","circo","neato"}

Notes:

  • Calls external system application xdot to read the .dot file format
    • toDot(fg,file=...); @async run(`xdot file.dot`)

Related

drawGraphCliq, drawTree, printCliqSummary, spyCliqMat

source

For more details, see the DFG docs on Drawing Graphs.

When to Instantiate Poses (i.e. new Variables in Factor Graph)

Consider a robot traversing some area while exploring, localizing, and wanting to find strong loop-closure features for consistent mapping. The creation of new poses and landmark variables is a trade-off in computational complexity and marginalization errors made during factor graph construction. Common triggers for new poses are:

Computation will progress faster if poses and landmarks are very sparse. To extract the benefit of dense reconstructions, one approach is to use the factor graph as a sparse index of the general progression of the trajectory, and use additional processing of dense sensor data for high-fidelity map reconstructions. Either interpolations, or better, direct reconstructions from inertial data can be used for dense reconstruction.

For completeness, one could also re-project the most meaningful sensor measurements taken between pose epochs as though they were measured from the pose epoch. This approach essentially marginalizes the local dead-reckoning drift errors into the local interpose re-projections, but helps keep the pose count low.
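The re-projection idea can be illustrated with a minimal sketch, assuming simple SE(2) (x, y, θ) pose composition; `composeSE2` and the increment values are illustrative, not Caesar internals:

```julia
# Illustrative only: compose a chain of small SE(2) odometry increments
# (x, y, theta) into one equivalent interpose delta, so that a single
# factor can summarize the dead reckoning between two pose epochs.
function composeSE2(a, b)
    x, y, t = a
    dx, dy, dt = b
    (x + cos(t)*dx - sin(t)*dy,   # rotate increment into the world frame
     y + sin(t)*dx + cos(t)*dy,
     t + dt)
end

increments = [(0.1, 0.0, 0.01) for _ in 1:10]   # high-rate odometry deltas
delta = reduce(composeSE2, increments; init=(0.0, 0.0, 0.0))
# delta now summarizes all ten increments as one interpose measurement
```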

In addition, see Fixed-lag Solving for manually limiting the number of fluid variables during inference to a user-desired count.

Which Variables and Factors to use

See the next page on available variables and factors

diff --git a/dev/concepts/compile_binary/index.html b/dev/concepts/compile_binary/index.html index a4d863a0a..eddc8114c 100644 --- a/dev/concepts/compile_binary/index.html +++ b/dev/concepts/compile_binary/index.html @@ -1,2 +1,2 @@ -Compile Binaries · Caesar.jl

Compile Binaries

Broader Julia ecosystem work on compiling shared libraries and images is hosted by PackageCompiler.jl, see documentation there.

Compiling RoME.so

A default RoME system image script, compileRoME/compileRoMESysimage.jl, can be used to reduce the "time-to-first-plot".

To use RoME with the newly created sysimage, start julia with:

julia -O3 -J ~/.julia/dev/RoME/compileRoME/RoMESysimage.so

This should dramatically cut down the JIT compilation load time of the included packages. More packages or functions can be added to the binary, depending on the application. Furthermore, full executable binaries can easily be made with PackageCompiler.jl.
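For reference, a sysimage along these lines can be built directly with PackageCompiler.jl's create_sysimage; the output path and precompile script below are illustrative, and the snippet assumes PackageCompiler and RoME are installed:

```julia
# Build-script sketch (paths illustrative): create a custom sysimage
# containing RoME, optionally precompiling the calls exercised by a
# given script into the image.
using PackageCompiler

create_sysimage(
    [:RoME];
    sysimage_path = "RoMESysimage.so",
    # optional: a script whose calls get precompiled into the image
    precompile_execution_file = "compileRoME/compileRoMESysimage.jl",
)
```

The resulting image is then loaded with `julia -J RoMESysimage.so`, as shown above.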

More Info

Note

Also see this Julia Binaries Blog. More on discourse. Also see new brute-force sysimg work at Fezzik.jl.

Note

Contents of a previous blog post this AOT vs JIT compiling blog post has been wrapped into PackageCompiler.jl.

diff --git a/dev/concepts/concepts/index.html b/dev/concepts/concepts/index.html index 396627f64..0ff646f54 100644 --- a/dev/concepts/concepts/index.html +++ b/dev/concepts/concepts/index.html @@ -1,2 +1,2 @@ -Initial Concepts · Caesar.jl

Graph Concepts

Factor graphs are bipartite, consisting of variables and factors which are connected by edges to form a graph structure. The terminology of nodes is reserved for actually storing the data on some graph-oriented technology.

What are Variables and Factors

Variables, denoted as the larger nodes in the figure below, represent state variables of interest such as vehicle or landmark positions, sensor calibration parameters, and more. Variables are likely hidden values which are not directly observed, but which we want to estimate from observed data and at least some minimal algebraic structure from probabilistic measurement models.

Factors, the smaller nodes in the figure, represent the algebraic interaction between particular variables, which is captured through edges. Factors must adhere to the limits of probabilistic models – for example conditional likelihoods capture the likelihood correlations between variables; while priors (unary to one variable) represent absolute information to be introduced. A heterogeneous factor graph illustration is shown below; also see a broader discussion linked on the literature page.

factorgraphexample

Assuming factors are constructed from statistically independent measurements (i.e. no direct correlations between measurements other than the known algebraic model that might connect them), we can use the probabilistic chain rule to write the inference operation down (unnormalized):

\[P(\Theta | Z) \propto P(Z | \Theta) P(\Theta)\]

This unnormalized "Bayes rule" is a consequence of two ideas, namely the probabilistic chain rule where Theta represents all variables and Z represents all measurements or data

\[P(\Theta , Z) = P(Z | \Theta) P(\Theta)\]

or similarly,

\[P(\Theta, Z) = P(\Theta | Z) P(Z).\]

The inference objective is to invert this system, so as to find the states given the product between all the likelihood models (based on the data):

\[P(\Theta | Z) \propto \prod_i P(Z_i | \Theta_i) \prod_j P(\Theta_j)\]

We use the uncorrelated measurement process assumption that measurements Z are independent given the constructed algebraic model.
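As a tiny numeric illustration of this product (a toy discrete example, not Caesar code), consider one variable with three candidate states and two independent measurements:

```julia
# Toy discrete illustration of P(Theta|Z) ∝ Π P(Z_i|Theta) Π P(Theta):
# one variable with three candidate states, two independent measurements.
prior       = [0.2, 0.5, 0.3]
likelihood1 = [0.9, 0.1, 0.1]     # P(z1 | theta) for each state
likelihood2 = [0.8, 0.3, 0.1]     # P(z2 | theta) for each state

unnorm    = prior .* likelihood1 .* likelihood2   # unnormalized posterior
posterior = unnorm ./ sum(unnorm)                 # normalize only at the end
```

Even though state 2 had the highest prior, the two measurements jointly favor state 1.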

Note

Strictly speaking, factors are actually "observed variables" that are stochastically "fixed" and not free for estimation in the conventional SLAM perspective. Waving hands over the fact that factors encode both the algebraic model and the observed measurement values provides a perspective on learning structure of a problem, including more mundane operations such as sensor calibration or learning of channel transfer models.

diff --git a/dev/concepts/dataassociation/index.html b/dev/concepts/dataassociation/index.html index 8615c22e4..eddd9128b 100644 --- a/dev/concepts/dataassociation/index.html +++ b/dev/concepts/dataassociation/index.html @@ -33,4 +33,4 @@ KDE.BallTreeDensity <: IIF.SamplableBelief Distribution.Rayleigh <: IIF.SamplableBelief Distribution.Uniform <: IIF.SamplableBelief -Distribution.MvNormal <: IIF.SamplableBelief

One of the more exotic examples is to natively represent Synthetic Aperture Sonar (SAS) as a deeply non-Gaussian factor in the factor graph. See Synthetic Aperture Sonar SLAM. Also see the full AUV stack using a single reference beacon and Towards Real-Time Underwater Acoustic Navigation.

Null Hypothesis

Sometimes there is basic uncertainty about whether a measurement is at all valid. Note that the above examples (multihypo and Mixture) still accept that a certain association definitely exists. A null hypothesis models the situation in which a factor might be completely bogus, in which case it should be ignored. The underlying mechanics of this approach are not entirely straightforward since removing one or more factors essentially changes the structure of the graph. That said, IncrementalInference.jl employs a reasonable stand-in solution that does not require changing the graph structure and can simply be included for any factor.

addFactor!(fg, [:x7;:l13], Pose2Point2Range(...), nullhypo=0.1)

This keyword indicates to the solver that there is a 10% chance that this factor is not valid.
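The probability interpretation of nullhypo can be sketched in plain Julia; `sampleWithNullhypo` is a hypothetical stand-in for the solver's internal mechanism, shown only to convey the idea:

```julia
using Random

# Sketch (not Caesar internals): with nullhypo=0.1, a sample from the
# factor is treated as uninformative 10% of the time, falling back to an
# "ignore" outcome instead of restructuring the graph.
function sampleWithNullhypo(rng, proposal, fallback; nullhypo=0.1)
    rand(rng) < nullhypo ? fallback : proposal
end

rng = MersenneTwister(42)
draws = [sampleWithNullhypo(rng, :factor, :ignore) for _ in 1:10_000]
frac_ignored = count(==(:ignore), draws) / length(draws)   # ≈ 0.1
```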

Note

An entirely separate page is reserved for incorporating Flux neural network models into Caesar.jl as highly plastic and trainable (i.e. learnable) factors.

diff --git a/dev/concepts/entry_data/index.html b/dev/concepts/entry_data/index.html index 6fbcf0797..83c0d5132 100644 --- a/dev/concepts/entry_data/index.html +++ b/dev/concepts/entry_data/index.html @@ -64,4 +64,4 @@ addData!(dfg,:default_folder_store,:x0,:nnModel, mdlBytes, mimeType="application/bson/octet-stream", - description="BSON.@load PipeBuffer(readBytes) model")

Experimental Features

Loading images is a relatively common task, hence a convenience function has been developed; when using ImageMagick, try Caesar.fetchDataImage.

diff --git a/dev/concepts/flux_factors/index.html b/dev/concepts/flux_factors/index.html index 11089fcf7..5ac6b1556 100644 --- a/dev/concepts/flux_factors/index.html +++ b/dev/concepts/flux_factors/index.html @@ -1,2 +1,2 @@ -Flux (NN) Factors · Caesar.jl
diff --git a/dev/concepts/interacting_fgs/index.html b/dev/concepts/interacting_fgs/index.html index e7c9a0ae3..67cb151b7 100644 --- a/dev/concepts/interacting_fgs/index.html +++ b/dev/concepts/interacting_fgs/index.html @@ -54,4 +54,4 @@

Return the number of dimensions this factor vertex fc influences.

DevNotes

source
DistributedFactorGraphs.getManifoldFunction
getManifold(_)
 

Interface function to return the <:ManifoldsBase.AbstractManifold object of variableType<:InferenceVariable.

source
getManifold(mkd)
 getManifold(mkd, asPartial)
-

Return the manifold on which this ManifoldKernelDensity is defined.

DevNotes

  • TODO currently ignores the .partial aspect (captured in parameter L)
source
diff --git a/dev/concepts/mmisam_alg/index.html b/dev/concepts/mmisam_alg/index.html index 76b40e1cb..89788f755 100644 --- a/dev/concepts/mmisam_alg/index.html +++ b/dev/concepts/mmisam_alg/index.html @@ -1,2 +1,2 @@ -Non-Gaussian Algorithm · Caesar.jl

Multimodal incremental Smoothing and Mapping Algorithm

Note

Major refactoring of the documentation is under way (2020Q1). Much of the previous text has been repositioned and is being improved. See references for details and check back here for updates in the coming weeks.

Caesar.jl uses an approximate sum-product inference algorithm (mmiSAM); documentation of how it works is forthcoming. Until then, see the related literature for more details.

Joint Probability

General Factor Graph – i.e. non-Gaussian and multi-modal

mmfgbt

Inference on Bayes/Junction/Elimination Tree

See tree solve video here.

Bayes/Junction tree example

The algorithm combats the so-called curse of dimensionality on the basis of eight principles outlined in the thesis work "Multimodal and Inertial Sensor Solutions to Navigation-type Factor Graphs".

Chapman-Kolmogorov (Belief Propagation / Sum-product)

The main computational effort is to focus compute cycles on dominant modes exhibited by the data, by dropping low-likelihood modes (although not indefinitely) without sacrificing the accuracy of individual major features.

D. Fourie, A. T. Espinoza, M. Kaess, and J. J. Leonard, “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, submitted, under review.

Focussing Computation on Tree

Link to new dedicated Bayes tree pages. The following sections describe different elements of clique recycling.

Incremental Updates

Recycling computations similar to iSAM2, with option to complete future downward pass.

Fixed-Lag operation (out-marginalization)

Allows the user to place (likely) computational limits on message passing. Also supports mixed-priority solving.

Federated Tree Solution (Multi session/agent)

Tentatively see the multisession page.

Clique State Machine

The CSM is used to govern the inference process within a clique. A FunctionalStateMachine.jl implementation is used to allow for initialization / incremental-recycling / fixed-lag solving, and will soon support federated branch solving as well as unidirectional message passing for fixed-lead operations. See the following video for an auto-generated (using csmAnimate) concurrent clique solving example.

Sequential Nested Gibbs Method

Current default inference method. See [Fourie et al., IROS 2016]

Convolution Approximation (Quasi-Deterministic)

Convolution operations are used to implement the numerical computation of the probabilistic chain rule:

\[P(A, B) = P(A | B)P(B)\]

Proposal distributions are computed by means of an (analytical or numerical, i.e. "algebraic") factor which defines a residual function:

\[\delta : S \times \Eta \rightarrow \mathcal{R}\]

where $S \times \Eta$ is the domain such that $\theta_i \in S, \, \eta \sim P(\Eta)$, and $P(\cdot)$ is a probability.
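A concrete instance of such a residual, for a 2D range measurement with additive noise, might look as follows (illustrative only; the solver's actual factor signature differs):

```julia
using LinearAlgebra

# Residual for a 2D range measurement z between a pose and a landmark:
# the measured range plus noise sample eta, minus the predicted range.
# A zero residual means the noisy measurement is consistent with the states.
function rangeResidual(pose, landmark, z, eta)
    (z + eta) - norm(landmark .- pose)
end

pose     = (0.0, 0.0)
landmark = (3.0, 4.0)                     # true range is 5.0
rangeResidual(pose, landmark, 5.0, 0.0)   # ≈ 0.0 at the true states
```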

A more detailed description is on the convolutional computations page.

Stochastic Product Approx of Infinite Functionals

See mixed-manifold products presented in the literature section.

writing in progress

Mixture Parametric Method

Work In Progress – deferred for progress on full functional methods, but likely to have Gaussian legacy algorithm with mixture model expansion added in the near future.

Chapman-Kolmogorov

Work in progress

diff --git a/dev/concepts/multilang/index.html b/dev/concepts/multilang/index.html index 271bf5c61..555936147 100644 --- a/dev/concepts/multilang/index.html +++ b/dev/concepts/multilang/index.html @@ -10,4 +10,4 @@ # Start the server over ZMQ start(zmqConfig) -# give the server a minute to start up ...

The current tests are a good place to see examples of the interfacing functions. Feel free to change the ZMQ interface to any of the ZMQ-supported modes of data transport, such as Interprocess Communication (IPC) instead of TCP.
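For instance, switching between TCP and IPC transports amounts to changing the endpoint string when binding the socket. The sketch below uses ZMQ.jl directly, independent of Caesar's wrapper; the endpoint addresses are illustrative.

```julia
using ZMQ

ctx = Context()
sock = Socket(ctx, REP)

# TCP transport (works across machines):
# bind(sock, "tcp://*:5555")

# IPC transport (same machine, POSIX only, typically lower latency):
bind(sock, "ipc:///tmp/caesar_server.ipc")

close(sock)
close(ctx)
```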

Alternative Methods

Interfacing from languages like Python may also be achieved using PyCall.jl, although little work has been done in the Caesar.jl framework to explore this path. Julia is itself interactive/dynamic and has plenty of line-by-line and Integrated Development Environment support, so consider trying Julia for your application.


Multisession Operation

Having all the data consolidated in a factor graph allows us to do something we find really exciting: reason against data for different robots, different robot sessions, even different users. Of course, this is all optional, and must be explicitly configured, but if enabled, current inference solutions can make use of historical data to continually improve their solutions.

Consider a single robot working in a common environment that has driven around the same area a number of times and has identified a landmark that is (probably) the same. We can automatically close the loop and use the information from the prior data to improve our current solution. This is called a multisession solve.

To perform a multisession solve, you need to specify that a session is part of a common environment, e.g. 'lab'. A user then requests a multisession solve (manually for the moment), and this creates relationships between common landmarks. The collective information is used to produce a consensus on the shared landmarks. A chain of session solves is then created, and the information is propagated into the individual sessions, improving their results.

Steps in Multisession Solve

The following steps are performed by the user:

  • Indicate which sessions are part of a common environment - this is done via GraffSDK when the session is created
  • Request a multisession solve

Upon request, the solver performs the following actions:

  • Updates the common existing multisession landmarks with any new information (propagation from session to common information)
  • Builds common landmarks for any new sessions or updated data
  • Solves the common, multisession graph
  • Propagates the common consensus result to the individual sessions
  • Freezes all the session landmarks so that the session solving does not update the consensus result
  • Requests session solves for all the updated sessions

Note the current approach is well positioned to transition to the "Federated Bayes (Junction) Tree" multisession solving method, and will be updated accordingly in due course. The Federated method will allow faster multi-session solving times by avoiding the current iterated approach.

Example

Consider three sessions which exist in the same, shared environment. In this environment, during each session the robot identified the same l0 landmark, as shown in the below figure. Independent Sessions

If we examine this in terms of the estimates of the actual landmarks, we have three independent densities (blue, green, and orange) giving measures of l0 located at (20, 0):

Independent densities

Now we trigger a multisession solve. For each landmark that is seen in multiple sessions, we produce a common landmark (which we call a prime landmark) and link it to the session landmarks via factors, all denoted in black outline.

Linked landmarks

A multisession solve is then performed, producing a common estimate for each common (prime) landmark. In terms of densities, this is a single answer for the disparate information, as shown in red in the below figure (for a slightly different dataset):

Prime density

This information is then propagated back to the individual session landmarks, giving one common density for each landmark. As above, our green, blue, and orange individual densities are now all updated to match the consensus shown in black:

Prime density

The session landmarks are then frozen, and individual session solves are triggered to propagate the information back into the sessions. Until the federated upgrade is completed, the above process is iterated a few times to allow information to cross-propagate through all sessions. The federated tree solution requires only a single iteration up and down the federated Bayes (Junction) tree.

Next Steps

This provides an initial implementation for stitching data from multiple sessions, robots, and users. In the short term, we may trigger this automatically for any shared environments. Multisession solving, along with other automated techniques for additional measurement discovery in data, allows the system to 'dream', i.e. distilling succinct information from large volumes of heterogeneous sensor data.

In the medium term we will extend this functionality to operate in the Bayes tree, which we call 'federated solving', so that we perform the operation using cached results of subtrees.

# helper function
MyThreadSafeFactor(z) = MyThreadSafeFactor(z, [MyInplaceMem(0) for i in 1:Threads.nthreads()])
# in residual function just use `thr_inplace = cfo.factor.inplace[Threads.threadid()]`
Note

Beyond the cases discussed above, other features in the IncrementalInference.jl code base (especially regarding the Bayes tree) are already multithreaded.

Factor Caching (In-place operations)

In-place memory operations for factors can have a significant performance improvement. See the Cache and Stash section for more details.
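The per-thread pattern from the MyThreadSafeFactor fragment above can be fleshed out as a self-contained sketch; the struct and field names here are illustrative, not part of any package API.

```julia
# per-thread scratch memory: each thread mutates only its own buffer
mutable struct MyInplaceMem
    buf::Vector{Float64}
end
MyInplaceMem(n::Int) = MyInplaceMem(zeros(n))

struct MyThreadSafeFactor
    z::Float64
    inplace::Vector{MyInplaceMem}
end

# helper: one scratch buffer per thread avoids data races in the hot loop
MyThreadSafeFactor(z) = MyThreadSafeFactor(z, [MyInplaceMem(3) for _ in 1:Threads.nthreads()])

f = MyThreadSafeFactor(0.5)
# inside a residual function, each thread indexes its own preallocated memory:
thr_inplace = f.inplace[Threads.threadid()]
thr_inplace.buf .= 0.0   # reuse in place, no allocation in the hot loop
```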

initAll!(dfg, solveKey; _parametricInit, solvable, N)

Perform graphinit over all variables with solvable=1 (default).

See also: ensureSolvable!, (EXPERIMENTAL 'treeinit')


Using Incremental Updates (Clique Recycling I)

One of the major features of the MM-iSAMv2 algorithm (implemented by IncrementalInference.jl) is reducing computational load by recycling and marginalizing different (usually older) parts of the factor graph. In order to utilize the benefits of recycling, the previous Bayes (Junction) tree should also be provided as input (see fixed-lag examples for more details):

tree = solveTree!(fg, tree)

Using Clique out-marginalization (Clique Recycling II)

When building systems with limited computational resources, out-marginalization of cliques on the Bayes tree can be used. This approach limits the number of variables that are inferred on each solution of the graph. This method is also a complement to the above Incremental Recycling; the two methods can work in tandem. There is a default setting for a FIFO out-marginalization strategy (with some additional tricks):

defaultFixedLagOnTree!(fg, 50, limitfixeddown=true)

This call will keep the latest 50 variables fluid for inference during Bayes tree inference. The keyword limitfixeddown=true in this case will also prevent downward message passing on the Bayes tree from propagating into the out-marginalized branches on the tree. A later page in this documentation will discuss how the inference algorithm and Bayes tree aspects are put together.

Synchronizing Over a Factor Graph

When adding Variables and Factors, use solvable=0 to disable the new fragments until they are ready for inference, for example:

addVariable!(fg, :x45, Pose2, solvable=0)
 newfct = addFactor!(fg, [:x11,:x12], Pose2Pose2, solvable=0)

These parts of the factor graph can simply be activated for solving:

setSolvable!(fg, :x45, 1)
setSolvable!(fg, newfct.label, 1)
# continue regular use, e.g.
mfc = MyFactor(...)
addFactor!(fg, [:a;:b], mfc)
# ...

In-Place vs. In-Line Cache

Depending on your particular bent, one of two different cache models might be more appealing. The design of preambleCache does not preclude either design option, and actually promotes the use of either depending on the situation at hand. The purpose of preambleCache is to provide an opportunity for caching when working with factors in the factor graph, rather than dictate one design over the other.

CalcFactor.cache::T

One likely use of the preambleCache function is for in-place memory allocation for solver hot-loop operations. Consider for example a getSample or factor residual calculation that is memory intensive. The best way to improve performance is to remove any memory allocations during the hot loop. For this reason the CalcFactor object has a cache::T field which will have exactly the type ::T that is returned by the user's preambleCache dispatch override. To use the cache in the factor getSample or residual functions, simply access the calcfactor.cache field.
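The pattern can be illustrated without any package dependencies. Below, FakeCalcFactor stands in for CalcFactor and make_cache for a preambleCache override; both names are hypothetical and show only the hot-loop usage of a typed cache field.

```julia
# stand-in for CalcFactor{T}: the cache field carries whatever type the
# preambleCache-style builder returned
struct FakeCalcFactor{T}
    cache::T
end

# stand-in for a preambleCache override: build preallocated scratch once
make_cache() = (; resid = zeros(3))

cfo = FakeCalcFactor(make_cache())

# hot loop: mutate the cached buffer in place rather than allocating
function residual!(cfo, z, x1, x2)
    r = cfo.cache.resid
    @. r = z - (x2 - x1)
    return r
end

residual!(cfo, [1.0, 2.0, 3.0], zeros(3), ones(3))   # reuses cfo.cache.resid
```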

Pulling Data from Stores

The Caesar.jl framework supports various data store designs. Some of these data stores are likely best suited for in-line caching design. Values can be retrieved from a data store during the preambleCache step, irrespective of where the data is stored.

If the user chooses to store weird and wonderful caching links to alternative hardware via the described caching, go forth and be productive! Consider sharing enhancements back to the public repositories.

Stash Serialization

Note

Stashing uses Additional (Large) Data storage and retrieval following starved graph design considerations.

Some applications use graph factors with large memory requirements during computation. Often, it is not efficient/performant to store large data blobs directly within the graph when persisted. Caesar.jl therefore supports a concept called stashing (similar to starved graphs), where particular operationally important data is stored separately from the graph and can then be retrieved during the preambleCache step, a.k.a. unstashing.

Deserialize-only Stash Design (i.e. unstashing)

Presently, we recommend following a deserialize-only design. This is where factor graphs are reconstituted from some persisted storage into computable form in memory, a.k.a. loadDFG. During the load steps, factors are added to the destination graph using addFactor! calls, which in turn call preambleCache for each factor.

Therefore, factors which are persisted using the 'stash' methodology are only fully reconstructed after the preambleCache step, and the user is responsible for defining the preambleCache override for a particular factor. The desired stashed data should also already be available in said data store before the factor graph is loaded.

Caesar.jl does have factors that can use the stash design, but they are currently only available as experimental features. Specifically, see the ScatterAlignPose2 factor code.

Modifying the overall Caesar.jl code for both read and write stashing might be considered in future work but is not in the current roadmap.

Notes

Please see existing issues, or open a new one, for specific questions not yet covered here. You can also reach out via Slack, or contact NavAbility.io for help.

user@...$ julia -e "println(\"one more time.\")"
one more time.
user@...$ julia -e "println(\"...testing...\")"
...testing...
Note

When searching for Julia related help online, use the phrase 'julialang' instead of just 'julia'. For example, search for 'julialang workflow tips' or 'julialang performance tips'. Also, see FAQ - Why are first runs slow?, which is due to Just-In-Time/Pre compiling and caching.

Next Steps

Although Caesar is Julia-based, it provides multi-language support with a ZMQ interface. This is discussed in Caesar Multi-Language Support. Caesar.jl also supports various visualizations and plots by using Arena, RoMEPlotting, and Director. This is discussed in Visualization with Arena.jl and RoMEPlotting.jl.


On-Manifold Operations

Caesar.jl and its related libraries have adopted JuliaManifolds/Manifolds.jl as the foundation for developing the algebraic operations used.

The Community has been developing high quality documentation for Manifolds.jl, and we encourage the interested reader to learn and use everything available there.

Separate Manifold Beliefs Page

See building a Manifold Kernel Density for more information.

Why Manifolds.jl

There is much to be said about how and why Manifolds.jl is the right decision for building a next-gen factor graph solver. We believe the future will show that mathematicians are way ahead of the curve, and that adopting a manifold approach will pretty much be the only way to develop the required mathematical operations in Caesar.jl for the foreseeable future.

Are Manifolds Difficult? No.

Do you need a math degree to be able to use Manifolds.jl? No, you don't, since Caesar.jl and related packages have already packaged many of the common functions and factors you need to get going.

This page is meant to open the door for readers to learn more about how things work under the hood, and empower the Community to rapidly develop upon existing work. This page is also intended to show that the Caesar.jl related packages are being developed with strong focus on consolidation, single definition functionality, and serious cross discipline considerations.

If you are looking for rapid help or more expertise on a particular issue, consider reaching out by opening Issues or connecting to the ongoing chats in the Slack Channel.

What Are Manifolds

If you are a newcomer to the term Manifold and want to learn more, fear not, even though your first search results might be somewhat disorienting.

The rest of this page is meant to introduce the basics, and point you to handy resources. Caesar.jl and NavAbility support open Community and are upstreaming improvements to Manifolds.jl, including code updates and documentation improvements.

'One Page' Summary of Manifolds

Imagine you have a sheet of paper and you draw with a pencil a short line segment on the page. Now draw a second line segment from the end of the first. That should be pretty easy on a flat surface, right?

When the piece of paper is lying flat on the table, you have a line in the Euclidean(2) manifold, and you can easily assign [x,y] coordinates to describe these lines or vectors. Note coordinates here is a precise technical term.

If you roll the paper into a cylinder... well, now you have line segments on a cylindrical manifold. The question is, how do you conduct mathematical operations concisely and consistently, independent of the shape of your manifold? And how do you 'unroll' the paper for simple computations on a locally flat surface?

How far can the math go before there just isn't a good recipe for writing down generic operations? Turns out a few smart people have been working to solve this and the keyword here is Manifold.

If you are drinking some coffee right now, then you are moving the cup in Euclidean(3) space, that is you assume the motion is in flat coordinates [x;y;z]. A more technical way to say that is that the Euclidean manifold has zero curvature.

What if you are concerned with the orientation of the cup too (so as not to spill the hot contents everywhere)? Then you might actually want to work on the SpecialEuclidean(3) manifold, that is, 3 degrees of translational freedom and 3 degrees of rotational freedom. You might have heard of Lie Groups and Lie Algebras; well, that is exactly it: Lie Groups are a special set of Group Manifolds, and the associated operations are already supported by JuliaManifolds/Manifolds.jl.

Things are a little easier for a robot traveling around on a flat 2D surface. If your robot is moving around with coordinates $[x,y,\theta]$, well then you are working with the coordinates of the SpecialEuclidean(2) manifold. There is more to say on how the coordinates $[x,y,\theta]$ get converted into the $\mathfrak{se}(2)$ Lie algebra, and that gets converted into a Lie Group element – i.e. $([x;y], \mathrm{RotMat}(\theta))$. More on that later.
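Assuming Manifolds.jl is loaded, the coordinate-to-group round trip just described can be sketched as follows, using hat/vee for the coordinate conversion and the group exponential exp_lie with its inverse log_lie; the specific coordinate values are arbitrary.

```julia
using Manifolds

M = SpecialEuclidean(2)
e = identity_element(M)        # zero translation, identity rotation

c = [1.0, 2.0, pi/4]           # user coordinates [x, y, θ]
X = hat(M, e, c)               # coordinates → se(2) Lie algebra element
q = exp_lie(M, X)              # Lie algebra → Lie group element

c2 = vee(M, e, log_lie(M, q))  # round trip back to the same coordinates
```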

Perhaps you are interested in relativistic effects where time as the fourth dimension is of interest, well then the Minkowski space provides Group and Manifold constructs for that – actually Minkowski falls under the supported Lorentz Manifolds.

The point here is that the math for drawing line segments in each of these manifolds above is almost exactly the same, thanks to the abstractions that have already been developed. And, many more powerful constructs exist which will become more apparent as you continue to work with Manifolds.

7 Things to know First

As a robotics, navigation, or control person who wants to get started, you need to know what the following terms mean:

  • Q1) What are manifold points, tangent vectors, and user coordinates?
  • Q2) What does the logarithm map of a manifold do?
  • Q3) What does the exponential map of a manifold do?
  • Q4) What do the vee and hat operations do?
  • Q5) What is the difference between Riemannian and Group manifolds?
  • Q6) Is a retraction the same as the exponential map?
  • Q7) Is a projection the same as a logarithm map?

We know it sounds like a lot, but the point of this paragraph is that if you are able to answer these seven questions for yourself, then you will be empowered to venture into the math of manifolds much more easily. And everything will begin to make sense. A lot of sense, to the point that you might agree with our assessment that JuliaManifolds/Manifolds.jl is the right direction for the future.

Although you will be able to find many answers for these seven questions in many places, our answers are listed at the bottom of this page.

Manifold Tutorials

The rest of this page is devoted to showing you how to use the math, write your own code to do new things beyond what Caesar.jl can already do. If you are willing to share any contributions, please do so by opening pull requests against the related repos.

Using Manifolds in Factors

The best way to show this is to dive straight into a factor that actually uses a Manifolds mechanization, and RoME.Pose2Pose2 is a fairly straightforward example. This factor gets used for rigid transforms on a 2D plane, with coordinates $[x,y,\theta]$ as alluded to above.

A Tutorial on Rotations

Note

Work in progress, Upstream Tutorial

A Tutorial on 2D Rigid Transforms

Note

Work in progress, Upstream Tutorial

Existing Manifolds

The most popular Manifolds used in Caesar.jl related packages are:

Group Manifolds

Riemannian Manifolds (Work in progress)

Note

Caesar.jl encourages the JuliaManifolds approach to defining new manifolds, which can then readily be used for Caesar.jl related operations.

Creating a new Manifold

Manifolds.jl is designed to make it as easy as possible to define your own manifold and then get all the benefits of the Manifolds.jl ecosystem. Follow the documentation there to make your own manifold, which can then readily be used with all the features of both JuliaManifolds as well as the Caesar.jl related packages.

Answers to 7 Questions

Q1) What are Point, Tangents, Coordinates

A manifold $\mathcal{M}$ is a collection of points that together create the given space. Points are like round sprinkles on the donut. The representation of points will vary from manifold to manifold. Sometimes it is even possible to have different representations for the same point on a manifold. These are usually denoted as $p$.

A tangent vector (we prefer tangents for clarity) is a vector $X$ that emanates from a point on a manifold, tangential to the manifold curvature. The vector lives in the tangent space of the manifold, a locally flat region around a point, $X \in T_p \mathcal{M}$. On the donut, imagine a rod-shaped sprinkle stuck along the tangent of the surface at a particular point $p$. The tangent space is the collection of all possible tangents at $p$.

Coordinates are a user-defined property that uses the Euclidean nature of the tangent space at point $p$ to operate as a regular linear space. Coordinates are just a list of the independent coordinate dimensions of the tangent space values collected together. Read this part carefully, as it can easily be confused with a conventional tangent vector in a regular Euclidean space.

For example, a tangent vector to the Euclidean(2) manifold, at the origin point $(0,0)$, is what you are likely familiar with from school as a "vector" (not the coordinates, although that happens to be the same thing in the trivial case). For Euclidean space, a vector $[x,y]$ from point $p$ looks like the line segment between points $p$ and $q$ on the underlying manifold.

This trivial overlapping of "vectors" in the Euclidean Manifold, and in a tangent space around $p$, and coordinates for that tangent space, are no longer trivial when the manifold has curvature.

Q2) What is the Logarithm map

The logarithm X = logmap(M,p,q) computes, based at point $p$, the tangent vector $X$ in the tangent plane $T_p\mathcal{M}$ that points toward $q$. In other words, imagine a string following the curve of the manifold from $p$ to $q$; pick up that string from $q$ while holding $p$ firm, until the string is flat against the tangent space emanating from $p$. The logarithm is the opposite of the exponential map.

Multiple logmap interpretations exist, for example in the case of $SpecialEuclidean(N)$ there are multiple definitions for $\oplus$ and $\ominus$, see [2.15]. When using a library, it is worth testing how logmap and expmap are computed (away from the identity element for Groups).

Q3) What is the Exponential map

The exponential map does the opposite of the logarithm. Imagine a tangent vector $X$ emanating from point $p$. The length and direction of $X$ can be wrapped onto the curvature of the manifold to form a line on the manifold surface.

Q4) What does vee/hat do

vee is an operation that converts a tangent vector representation into a coordinate representation. For example Lie algebra elements are tangent vector elements, so vee([0 -w; w 0]) = w. And visa versa for hat(w) = [0 -w; w 0], which goes from coordinates to tangent vectors.

Q5) What Riemannian vs. Group Manifolds

Groups are mathematical structures which often fit well inside the manifold way of working. For example in robotics, Lie Groups are popular under SpecialEuclidean(N) <: AbstractGroupManifold. Groups also have a well defined action. Most prominently for our usage, groups are sets of points for which there exists an identity point. Riemannian manifolds are more general than Lie groups, specifically Riemannian manifolds do not have an identity point.

An easy example is that the Euclidean(N) manifold does not have an identity element, since what we know as $[0,0]$ is actually a coordinate base point for the local tangent space, and which just happens to look the same as the underlying Euclidean(N) manifold. The TranslationGroup(N) exists as an additional structure over the Euclidean(N) space which has a defined identity element as well as a defined operations on points.

Q6) Retraction vs. Exp map

Retractions are numerically efficient approximations to convert a tangent vector into a point on the manifold. The exponential map is the theoretically precise retraction, but may well be computationally expensive beyond the need for most applications.

Q7) Projection vs. Log map

The term projection can be somewhat ambiguous between references. In Manifolds.jl, projections either project a point in the embedding to a point on the manifold, or a vector from the embedding onto a tangent space at a certain point.

Confusion, can easily happen between cases where there is no ambient space around a particular manifold. Then the term projection may be moot.

In Manifolds.jl, an inverse retraction is an approximate logmap of a point up from the manifold onto a tangent space – i.e. not a projection. It is important not to confuse a point on the manifold as a point in the ambient space, when thinking about the term projection.

It is best to make sure you know which one is being used in any particular situation.

Note

For a slightly deeper dive into the relation between embedding, ambient space, and projections, see the background conversation here.

Using Manifolds.jl · Caesar.jl

On-Manifold Operations

Caesar.jl and its related libraries have adopted JuliaManifolds/Manifolds.jl as the foundation for developing the algebraic operations used.

The Community has been developing high quality documentation for Manifolds.jl, and we encourage the interested reader to learn and use everything available there.

Separate Manifold Beliefs Page

See building a Manifold Kernel Density for more information.

Why Manifolds.jl

There is much to be said about how and why Manifolds.jl is the right decision for building a next-gen factor graph solver. We believe the future will show that mathematicians are way ahead of the curve, and that adopting a manifold approach will pretty much be the only way to develop the required mathematical operations in Caesar.jl for the foreseeable future.

Are Manifolds Difficult? No.

Do you need a math degree to be able to use Manifolds.jl? No, you don't, since Caesar.jl and related packages have already packaged many of the common functions and factors you need to get going.

This page is meant to open the door for readers to learn more about how things work under the hood, and empower the Community to rapidly develop upon existing work. This page is also intended to show that the Caesar.jl related packages are being developed with strong focus on consolidation, single definition functionality, and serious cross discipline considerations.

If you are looking for rapid help or more expertise on a particular issue, consider reaching out by opening Issues or connecting to the ongoing chats in the Slack Channel.

What Are Manifolds

If you are a newcomer to the term Manifold and want to learn more, fear not even though your first search results might be somewhat disorienting.

The rest of this page is meant to introduce the basics, and point you to handy resources. Caesar.jl and NavAbility support open Community and are upstreaming improvements to Manifolds.jl, including code updates and documentation improvements.

'One Page' Summary of Manifolds

Imagine you have a sheet of paper and you draw with a pencil a short line segment on the page. Now draw a second line segment from the end of the first. That should be pretty easy on a flat surface, right?

When the piece of paper is lying flat on the table, you have a line in the Euclidean(2) manifold, and you can easily assign [x,y] coordinates to describe these lines or vectors. Note coordinates here is a precise technical term.

If you roll the paper into a cylinder... well, now you have line segments on a cylindrical manifold. The question is, how do you conduct mathematical operations concisely and consistently, independent of the shape of your manifold? And how do you 'unroll' the paper for simple computations on a locally flat surface?

How far can the math go before there just isn't a good recipe for writing down generic operations? Turns out a few smart people have been working to solve this and the keyword here is Manifold.

If you are drinking some coffee right now, then you are moving the cup in Euclidean(3) space, that is you assume the motion is in flat coordinates [x;y;z]. A more technical way to say that is that the Euclidean manifold has zero curvature.

What if you are concerned with the orientation of the cup too (so as to not spill the hot contents everywhere)? Then you might actually want to work on the SpecialEuclidean(3) manifold – that is, 3 degrees of translational freedom and 3 degrees of rotational freedom. You might have heard of Lie Groups and Lie Algebras; well, that is exactly it: Lie Groups are a special set of Group Manifolds, and the associated operations are already supported by JuliaManifolds/Manifolds.jl.

Things are a little easier for a robot traveling around on a flat 2D surface. If your robot is moving around with coordinates $[x,y,\theta]$, well then you are working with the coordinates of the SpecialEuclidean(2) manifold. There is more to say on how the coordinates $[x,y,\theta]$ get converted into the $\mathfrak{se}(2)$ Lie algebra, and that gets converted into a Lie Group element – i.e. $([x;y], \mathrm{RotMat}(\theta))$. More on that later.
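To make this concrete, here is a minimal sketch (assuming a recent Manifolds.jl; the internal point representation differs between package versions) of going from user coordinates $[x,y,\theta]$ to an $\mathfrak{se}(2)$ Lie algebra element, and from there to a Lie group element:

```julia
using Manifolds

G = SpecialEuclidean(2)
e = identity_element(G)

c = [1.0, 2.0, pi/4]   # user coordinates [x, y, θ]
X = hat(G, e, c)       # se(2) Lie algebra element (a tangent vector at identity)
p = exp(G, e, X)       # Lie group element, conceptually ([x; y], RotMat(θ))
```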

Perhaps you are interested in relativistic effects where time as the fourth dimension is of interest, well then the Minkowski space provides Group and Manifold constructs for that – actually Minkowski falls under the supported Lorentz Manifolds.

The point here is that the math for drawing line segments in each of these manifolds above is almost exactly the same, thanks to the abstractions that have already been developed. And, many more powerful constructs exist which will become more apparent as you continue to work with Manifolds.

7 Things to know First

As a robotics, navigation, or control person who wants to get started, you need to know what the following terms mean:

  • Q1) What are manifold points, tangent vectors, and user coordinates,
  • Q2) What does the logarithm map of a manifold do,
  • Q3) What does the exponential map of a manifold do,
  • Q4) What do the vee and hat operations do,
  • Q5) What is the difference between Riemannian or Group manifolds,
  • Q6) Is a retraction the same as the exponential map,
  • Q7) Is a projection the same as a logarithm map,

We know it sounds like a lot, but the point of this paragraph is that if you are able to answer these seven questions for yourself, then you will be empowered to venture into the math of manifolds much more easily. And everything will begin to make sense. A lot of sense, to the point that you might agree with our assessment that JuliaManifolds/Manifolds.jl is the right direction for the future.

Although you will be able to find many answers for these seven questions in many places, our answers are listed at the bottom of this page.

Manifold Tutorials

The rest of this page is devoted to showing you how to use the math and write your own code to do new things beyond what Caesar.jl can already do. If you are willing to share any contributions, please do so by opening pull requests against the related repos.

Using Manifolds in Factors

The best way to show this is to dive straight into a factor that actually uses a Manifolds mechanization, and RoME.Pose2Pose2 is a fairly straightforward example. This factor gets used for rigid transforms on a 2D plane, with coordinates $[x,y,\theta]$ as alluded to above.
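As a hedged sketch only (the function name and signature below are illustrative, not the actual RoME.Pose2Pose2 implementation), a relative-pose residual on a group manifold typically composes the measured transform onto one pose and measures the tangent-space discrepancy with the other:

```julia
using Manifolds

# Illustrative residual between poses p and q given a measured tangent X_meas;
# not the real RoME.Pose2Pose2 code, just the on-manifold pattern it follows.
function relative_pose_residual(G::SpecialEuclidean, X_meas, p, q)
    q_pred = compose(G, p, exp_lie(G, X_meas))   # predicted pose of q as seen from p
    return vee(G, q, log(G, q_pred, q))          # coordinates of the mismatch
end
```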

A Tutorial on Rotations

Note

Work in progress, Upstream Tutorial

A Tutorial on 2D Rigid Transforms

Note

Work in progress, Upstream Tutorial

Existing Manifolds

The most popular Manifolds used in Caesar.jl related packages are:

Group Manifolds

Riemannian Manifolds (Work in progress)

Note

Caesar.jl encourages the JuliaManifolds approach to defining new manifolds, and manifolds defined this way can readily be used for Caesar.jl related operations.

Creating a new Manifold

Manifolds.jl is designed to make it as easy as possible to define your own manifold and then get all the benefits of the wider ecosystem. Follow the documentation there to create your own manifold, which can then readily be used with all the features of both JuliaManifolds and the Caesar.jl related packages.

Answers to 7 Questions

Q1) What are Point, Tangents, Coordinates

A manifold $\mathcal{M}$ is a collection of points that together create the given space. Points are like round sprinkles on the donut. The representation of points will vary from manifold to manifold. Sometimes it is even possible to have different representations for the same point on a manifold. These are usually denoted as $p$.

A tangent vector (we prefer tangent for clarity) is a vector $X$ that emanates from a point on a manifold, tangential to the manifold curvature. Such a vector lives in the tangent space of the manifold, a locally flat region around a point, $X\in T_p \mathcal{M}$. On the donut, imagine a rod-shaped sprinkle stuck along the tangent of the surface at a particular point $p$. The tangent space is the collection of all possible tangents at $p$.

Coordinates are a user defined property that uses the Euclidean nature of the tangent space at point $p$ to operate as a regular linear space. Coordinates are just a list of the independent coordinate dimensions of the tangent space values collected together. Read this part carefully, as it can easily be confused with a conventional tangent vector in a regular Euclidean space.

For example, a tangent vector to the Euclidean(2) manifold at the origin point $(0,0)$ is what you are likely familiar with from school as a "vector" (not the coordinates, although the two happen to be the same thing in this trivial case). For Euclidean space, a vector from point $p$ of length $[x,y]$ looks like the line segment between points $p$ and $q$ on the underlying manifold.

This trivial overlap between "vectors" in the Euclidean manifold, vectors in a tangent space around $p$, and coordinates for that tangent space is no longer trivial when the manifold has curvature.

Q2) What is the Logarithm map

The logarithm X = logmap(M,p,q) computes the tangent vector $X$ in the tangent space $T_p\mathcal{M}$ at $p$ that points toward $q$. In other words, imagine a string following the curve of the manifold from $p$ to $q$; pick up that string at $q$ while holding $p$ firm, until the string lies flat against the tangent space emanating from $p$. The logarithm is the opposite of the exponential map.

Multiple logmap interpretations exist, for example in the case of $SpecialEuclidean(N)$ there are multiple definitions for $\oplus$ and $\ominus$, see [2.15]. When using a library, it is worth testing how logmap and expmap are computed (away from the identity element for Groups).
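A small sketch of such a test on the sphere (assuming Manifolds.jl is available) shows the logmap/expmap round trip, and that the tangent's length equals the geodesic distance:

```julia
using Manifolds, LinearAlgebra

M = Sphere(2)            # unit sphere embedded in R^3
p = [1.0, 0.0, 0.0]
q = [0.0, 1.0, 0.0]

X  = log(M, p, q)        # tangent at p "pointing toward" q
q2 = exp(M, p, X)        # wrapping X back onto the sphere recovers q
@assert isapprox(q2, q; atol=1e-12)
@assert isapprox(norm(X), distance(M, p, q))   # quarter great circle, π/2
```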

Q3) What is the Exponential map

The exponential map does the opposite of the logarithm. Imagine a tangent vector $X$ emanating from point $p$. The length and direction of $X$ can be wrapped onto the curvature of the manifold to form a curve on the manifold surface.

Q4) What does vee/hat do

vee is an operation that converts a tangent vector representation into a coordinate representation. For example, Lie algebra elements are tangent vector elements, so vee([0 -w; w 0]) = w. And vice versa, hat(w) = [0 -w; w 0] goes from coordinates to tangent vectors.
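For instance (a minimal sketch assuming Manifolds.jl), on SpecialOrthogonal(2) the single coordinate $w$ maps to and from the skew-symmetric algebra element:

```julia
using Manifolds

M = SpecialOrthogonal(2)
e = identity_element(M)    # 2×2 identity rotation matrix

w = 0.1
X = hat(M, e, [w])         # skew-symmetric [0 -w; w 0]
c = vee(M, e, X)           # back to the coordinate vector [w]
```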

Q5) Riemannian vs. Group Manifolds

Groups are mathematical structures which often fit well inside the manifold way of working. For example in robotics, Lie Groups are popular under SpecialEuclidean(N) <: AbstractGroupManifold. Groups also have a well defined action. Most prominently for our usage, groups are sets of points for which there exists an identity point. Riemannian manifolds are more general than Lie groups; specifically, a Riemannian manifold need not have an identity point.

An easy example is that the Euclidean(N) manifold does not have an identity element, since what we know as $[0,0]$ is actually a coordinate base point for the local tangent space, which just happens to look the same as the underlying Euclidean(N) manifold. The TranslationGroup(N) exists as an additional structure over the Euclidean(N) space which has a defined identity element as well as defined operations on points.
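A short sketch of the distinction (assuming Manifolds.jl):

```julia
using Manifolds

E = Euclidean(2)          # plain Riemannian manifold: points, but no group identity
T = TranslationGroup(2)   # same underlying space with group structure added

e = identity_element(T)                  # [0.0, 0.0] is now a bona fide identity
r = compose(T, [1.0, 2.0], [3.0, 4.0])   # the group operation, here vector addition
```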

Q6) Retraction vs. Exp map

Retractions are numerically efficient approximations to convert a tangent vector into a point on the manifold. The exponential map is the theoretically precise retraction, but may well be computationally expensive beyond the need for most applications.
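As an illustration (a sketch assuming Manifolds.jl, where Rotations(3) supports a QR-based retraction), the retraction agrees with the exponential map to first order:

```julia
using Manifolds, LinearAlgebra

M = Rotations(3)
p = Matrix(1.0I, 3, 3)
X = hat(M, p, [0.05, 0.0, 0.0])   # small tangent vector at p

q_exp = exp(M, p, X)                      # exact, but costs a matrix exponential
q_qr  = retract(M, p, X, QRRetraction())  # cheaper first-order approximation
err   = distance(M, q_exp, q_qr)          # small, and shrinks as X → 0
```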

Q7) Projection vs. Log map

The term projection can be somewhat ambiguous between references. In Manifolds.jl, projections either project a point in the embedding to a point on the manifold, or a vector from the embedding onto a tangent space at a certain point.

Confusion can easily arise in cases where there is no ambient space around a particular manifold; there the term projection may be moot.

In Manifolds.jl, an inverse retraction is an approximate logmap of a point up from the manifold onto a tangent space – i.e. not a projection. It is important not to confuse a point on the manifold with a point in the ambient space when thinking about the term projection.

It is best to make sure you know which one is being used in any particular situation.

Note

For a slightly deeper dive into the relation between embedding, ambient space, and projections, see the background conversation here.

diff --git a/dev/concepts/why_nongaussian/index.html b/dev/concepts/why_nongaussian/index.html index 3cabc4259..351e47e95 100644 --- a/dev/concepts/why_nongaussian/index.html +++ b/dev/concepts/why_nongaussian/index.html @@ -1,2 +1,2 @@ -Gaussian vs. Non-Gaussian · Caesar.jl

Why/Where does non-Gaussian data come from?

Gaussian error models in measurement or data cues will only be Gaussian (normally distributed) if all physics/decisions/systematic-errors/calibration/etc. have a correct algebraic model in all circumstances. Caesar.jl and MM-iSAMv2 are heavily focused on state-estimation from a plethora of heterogeneous data that may not yet have perfect algebraic models. Four major categories of non-Gaussian errors have thus far been considered:

  • Uncertain decisions (a.k.a. data association), such as a robot trying to decide if a navigation loop-closure can be deduced from a repeat observation of a similar object or measurement from current and past data. These issues are commonly also referred to as multi-hypothesis.
  • Underdetermined or underdefined systems where there are more variables than constraining measurements to fully define the system as a single mode, a.k.a. solution ambiguity. For example, in 2D consider two range measurements resulting in two possible locations through trilateration.
  • Nonlinearity. For example in 2D, consider a Pose2 odometry where the orientation is uncertain: The resulting belief of where a next pose might be (convolution with odometry factor) results in a banana-shaped curve, even though the entire process is driven by assumed Gaussian beliefs.
  • Physics of the measurement process. Many measurement processes exhibit non-Gaussian behaviour. For example, acoustic/radio time-of-flight measurements, using either pulse-train or matched filtering, result in an "energy intensity" over time/distance of what the range to a scattering-target/source might be, i.e. highly non-Gaussian.
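The trilateration ambiguity in the second bullet is easy to reproduce numerically. The beacon positions and ranges below are made-up illustrative values; the two circle intersections are equally valid hypotheses, hence a bimodal belief:

```julia
# Two beacons with known positions and range measurements (illustrative values).
b1, r1 = [0.0, 0.0], 5.0
b2, r2 = [8.0, 0.0], 5.0

# Standard two-circle intersection.
d    = hypot((b2 - b1)...)
a    = (r1^2 - r2^2 + d^2) / (2d)   # distance from b1 to the chord midpoint
h    = sqrt(r1^2 - a^2)             # half chord length
mid  = b1 + a .* (b2 - b1) ./ d
perp = [-(b2[2] - b1[2]), b2[1] - b1[1]] ./ d

p_plus  = mid + h .* perp   # first hypothesis,  [4.0, 3.0]
p_minus = mid - h .* perp   # second hypothesis, [4.0, -3.0]
```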

Next Steps

Quick links to related pages:

      diff --git a/dev/concepts/zero_install/index.html b/dev/concepts/zero_install/index.html index 2f51488fb..268fc4e49 100644 --- a/dev/concepts/zero_install/index.html +++ b/dev/concepts/zero_install/index.html @@ -1,2 +1,2 @@ -Zero Install Solution · Caesar.jl
      diff --git a/dev/dev/internal_fncs/index.html b/dev/dev/internal_fncs/index.html index 6cea952ea..db4e78f0c 100644 --- a/dev/dev/internal_fncs/index.html +++ b/dev/dev/internal_fncs/index.html @@ -40,4 +40,4 @@ P(A|B=B(.))

      Various Internal Function Docs

      IncrementalInference._solveCCWNumeric!Function
      _solveCCWNumeric!(ccwl; ...)
       _solveCCWNumeric!(ccwl, _slack; perturb)
      -

      Solve free variable x by root finding residual function fgr.usrfnc(res, x). This is the penultimate step before calling numerical operations to move actual estimates, which is done by an internally created lambda function.

      Notes

      • Assumes cpt_.p is already set to desired X decision variable dimensions and size.
      • Assumes only ccw.particleidx will be solved for
      • small random (off-manifold) perturbation used to prevent trivial solver cases, div by 0 etc.
        • perturb is necessary for NLsolve (obsolete) cases, and smaller than 1e-10 will result in test failure
      • Also incorporates the active hypo lookup

      DevNotes

      • TODO testshuffle is now obsolete, should be removed
      • TODO perhaps consolidate perturbation with inflation or nullhypo
      source
diff --git a/dev/dev/known_issues/index.html b/dev/dev/known_issues/index.html index 79bfdf323..78999a424 100644 --- a/dev/dev/known_issues/index.html +++ b/dev/dev/known_issues/index.html @@ -1,2 +1,2 @@ -Known Issue List · Caesar.jl

      Known Issues

      This page is used to list known issues:

      • Arena.jl is fairly behind on a number of updates and deprecations. Fixes for this are planned 2021Q2.
      • RoMEPlotting.jl main features like plotSLAM2D are working, but some of the other features are not fully up to date with recent changes in upstream packages. This too will be updated around Summer 2021.

      Features To Be Restored

      Install 3D Visualization Utils (e.g. Arena.jl)

      3D Visualizations are provided by Arena.jl as well as development package Amphitheater.jl. Please follow instructions on the Visualizations page for a variety of 3D utilities.

      Note

Arena.jl and Amphitheater.jl are currently being refactored as part of the broader DistributedFactorGraph migration; the features are in beta stage (1Q2020).

      Install the latest master branch version with

      (v1.5) pkg> add Arena#master

      Install "Just the ZMQ/ROS Runtime Solver" (Linux)

      Work in progress (see issue #278).

      diff --git a/dev/dev/wiki/index.html b/dev/dev/wiki/index.html index af95cd8c8..20403c795 100644 --- a/dev/dev/wiki/index.html +++ b/dev/dev/wiki/index.html @@ -1,2 +1,2 @@ -Wiki Pointers · Caesar.jl

      Developers Documentation

      High Level Requirements

      Wiki to formalize some of the overall objectives.

      Standardizing the API, verbNoun Definitions:

      The API derives from a set of standard definitions for verbs and Nouns, please see the developer wiki regarding these definitions.

      DistributedFactorGraphs.jl Docs

      These are more hardy developer docs, such as the lower level data management API etc.

      Design Wiki, Data and Architecture

      More developer zone material will be added here in the future, but for the time being check out the Caesar Wiki.

      Tree and CSM References

      Major upgrades to how the tree and CSM works is tracked in IIF issue 889.

      Coding Templates

We've started to organize useful coding templates that are not available elsewhere (such as JuliaDocs) in a more local developers wiki.

      Shortcuts for vscode IDE

      See wiki

      Parametric Solve Whiteboard

      https://github.com/JuliaRobotics/IncrementalInference.jl/wiki/Parametric-Solve-Whiteboard

      Early PoC work on Tree based Initialization

      https://github.com/JuliaRobotics/IncrementalInference.jl/wiki/Tree-Based-Initialization

      Wiki for variable ordering links.

      diff --git a/dev/examples/adding_variables_factors/index.html b/dev/examples/adding_variables_factors/index.html index d55324be6..0a7c62fe0 100644 --- a/dev/examples/adding_variables_factors/index.html +++ b/dev/examples/adding_variables_factors/index.html @@ -1,2 +1,2 @@ -Variable/Factor Considerations · Caesar.jl

      Variable/Factor Considerations

      A couple of important points:

      • You do not need to modify or insert your new code into Caesar/RoME/IncrementalInference source code libraries – they can be created and run anywhere on-the-fly!
      • As long as the factors exist in the working space when the solver is run, the factors are automatically used – this is possible due to Julia's multiple dispatch design
      • Caesar.jl is designed to allow you to add new variables and factors to your own independent repository and incorporate them at will at compile-time or even run-time
      • Residual function definitions for new factors types use a callable struct (a.k.a functor) architecture to simultaneously allow:
        • Multiple dispatch (i.e. 'polymorphic' behavior)
        • Meta-data and in-place memory storage for advanced and performant code
        • An outside callback implementation style
      • In most robotics scenarios, there is no need for new variables or factors:
        • Variables have various mechanisms that allow you to attach data to them, e.g. raw sensory data or identified April tags, so you do not need to create a new variable type just to store data
        • New variables are required only if you are representing a new state - TODO: Example of needed state
        • New factors are needed if:
          • You need to represent a constraint for a variable (known as a singleton) and that constraint type doesn't exist
          • You need to represent a constraint between two variables and that constraint type doesn't exist

      All factors inherit from one of the following types, depending on their function:

      • AbstractPrior is for priors (unary factors) that provide an absolute constraint for a single variable. A simple example of this is an absolute GPS prior, or equivalently a (0, 0, 0) starting location in a Pose2 scenario.
        • Requires: A getSample function
      • AbstractRelativeMinimize uses Optim.jl and is for relative factors that introduce an algebraic relationship between two or more variables. A simple example of this is an odometry factor between two pose variables, or a range factor indicating the range between a pose and another variable.
        • Requires: A getSample function and a residual function definition
        • The minimize suffix specifies that the residual function of this factor will be enforced by numerical minimization (find me the minimum of this function)
      • [NEW] AbstractManifoldMinimize uses Manopt.jl.

      How do you decide which to use?

      • If you are creating factors for world-frame information that will be tied to a single variable, inherit from <:AbstractPrior
        • GPS coordinates should be priors
      • If you are creating factors for local-frame relationships between variables, inherit from IIF.AbstractRelativeMinimize
        • Odometry and bearing deltas should be introduced as pairwise factors and should be local frame

      TBD: Users should start with IIF.AbstractRelativeMinimize, discuss why and when they should promote their factors to IIF.AbstractRelativeRoots.

      Note

      AbstractRelativeMinimize does not imply that the overall inference algorithm only minimizes an objective function. The MM-iSAM algorithm is built around fixed-point analysis. Minimization is used here to locally enforce the residual function.

      What you need to build in the new factor:

      • A struct for the factor itself
      • A sampler function to return measurements from the random distributions
      • If you are building a <:AbstractRelative you need to define a residual function to introduce the relative algebraic relationship between the variables
        • Minimization function should be lower-bounded and smooth
      • A packed type of the factor which must be named Packed[Factor name], and allows the factor to be packed/transmitted/unpacked
      • Serialization and deserialization methods
        • These are convert functions that pack and unpack the factor (which may be highly complex) into serialization-compatible formats
        • The factors are mostly comprised of distributions (of type SampleableBelief), and JSON3.jl is used for serialization.
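As a hedged sketch of the packing convention (the factor and field names here are hypothetical, for illustration only), the convert pair reduces a distribution-carrying factor to plain serializable fields and back:

```julia
using Distributions

# Hypothetical factor holding a range measurement belief.
struct MyRange
    Z::Normal
end

# Packed companion type, named Packed[Factor name] by convention,
# carrying only serialization-friendly plain fields.
struct PackedMyRange
    mu::Float64
    sigma::Float64
end

Base.convert(::Type{PackedMyRange}, f::MyRange) = PackedMyRange(mean(f.Z), std(f.Z))
Base.convert(::Type{MyRange}, p::PackedMyRange) = MyRange(Normal(p.mu, p.sigma))
```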
      +Variable/Factor Considerations · Caesar.jl

      Variable/Factor Considerations

      A couple of important points:

      • You do not need to modify or insert your new code into Caesar/RoME/IncrementalInference source code libraries – they can be created and run anywhere on-the-fly!
      • As long as the factors exist in the working space when the solver is run, the factors are automatically used – this is possible due to Julia's multiple dispatch design
      • Caesar.jl is designed to allow you to add new variables and factors to your own independent repository and incorporate them at will at compile-time or even run-time
      • Residual function definitions for new factors types use a callable struct (a.k.a functor) architecture to simultaneously allow:
        • Multiple dispatch (i.e. 'polymorphic' behavior)
        • Meta-data and in-place memory storage for advanced and performant code
        • An outside callback implementation style
      • In most robotics scenarios, there is no need for new variables or factors:
        • Variables have various mechanisms that allow you to attach data to them, e.g. raw sensory data or identified April tags, so you do not need to create a new variable type just to store data
        • New variables are required only if you are representing a new state - TODO: Example of needed state
        • New factors are needed if:
          • You need to represent a constraint for a variable (known as a singleton) and that constraint type doesn't exist
          • You need to represent a constraint between two variables and that constraint type doesn't exist

      All factors inherit from one of the following types, depending on their function:

      • AbstractPrior is for priors (unary factors) that provide an absolute constraint for a single variable. A simple example of this is an absolute GPS prior, or equivalently a (0, 0, 0) starting location in a Pose2 scenario.
        • Requires: A getSample function
      • AbstractRelativeMinimize uses Optim.jl and is for relative factors that introduce an algebraic relationship between two or more variables. A simple example of this is an odometry factor between two pose variables, or a range factor indicating the range between a pose and another variable.
        • Requires: A getSample function and a residual function definition
        • The minimize suffix specifies that the residual function of this factor will be enforced by numerical minimization (find me the minimum of this function)
      • [NEW] AbstractManifoldMinimize uses Manopt.jl.
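As a minimal sketch of the first case (the factor name ExamplePrior is hypothetical; only the getSample requirement of an AbstractPrior is shown):

```julia
using IncrementalInference, Distributions

# Hypothetical unary (prior) factor holding an absolute belief
# for a single variable.
struct ExamplePrior{T} <: AbstractPrior
    Z::T   # e.g. Normal(0.0, 1.0)
end

# AbstractPrior requires a getSample that draws one measurement
# from the stored belief.
IncrementalInference.getSample(cf::CalcFactor{<:ExamplePrior}) = rand(cf.factor.Z)
```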

      How do you decide which to use?

      • If you are creating factors for world-frame information that will be tied to a single variable, inherit from <:AbstractPrior
        • GPS coordinates should be priors
      • If you are creating factors for local-frame relationships between variables, inherit from IIF.AbstractRelativeMinimize
        • Odometry and bearing deltas should be introduced as pairwise factors and should be local frame

      TBD: Users should start with IIF.AbstractRelativeMinimize, discuss why and when they should promote their factors to IIF.AbstractRelativeRoots.

      Note

      AbstractRelativeMinimize does not imply that the overall inference algorithm only minimizes an objective function. The MM-iSAM algorithm is built around fixed-point analysis. Minimization is used here to locally enforce the residual function.

      What you need to build in the new factor:

      • A struct for the factor itself
      • A sampler function to return measurements from the random distributions
      • If you are building a <:AbstractRelative you need to define a residual function to introduce the relative algebraic relationship between the variables
        • Minimization function should be lower-bounded and smooth
      • A packed type of the factor which must be named Packed[Factor name], and allows the factor to be packed/transmitted/unpacked
      • Serialization and deserialization methods
        • These are convert functions that pack and unpack the factor (which may be highly complex) into serialization-compatible formats
        • The factors are mostly comprised of distributions (of type SampleableBelief), and JSON3.jl is used for serialization.
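A hedged sketch of the packing convention, assuming a hypothetical factor wrapping a single 1D Gaussian (real factors typically route distributions through dedicated serialization helpers; plain numeric fields are used here for brevity, and AbstractPackedFactor comes from DistributedFactorGraphs):

```julia
import Base: convert
using IncrementalInference, DistributedFactorGraphs, Distributions

# Hypothetical live factor wrapping a 1D Gaussian belief.
struct ExampleGaussianPrior <: AbstractPrior
    Z::Normal{Float64}
end

# Serialization companion, named Packed[Factor name] by convention.
struct PackedExampleGaussianPrior <: AbstractPackedFactor
    mu::Float64
    sigma::Float64
end

# Pack: reduce the distribution to plain JSON-compatible fields.
convert(::Type{PackedExampleGaussianPrior}, f::ExampleGaussianPrior) =
    PackedExampleGaussianPrior(mean(f.Z), std(f.Z))

# Unpack: rebuild the live factor from its packed form.
convert(::Type{ExampleGaussianPrior}, pf::PackedExampleGaussianPrior) =
    ExampleGaussianPrior(Normal(pf.mu, pf.sigma))
```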
      # and visualization
      plotKDE(fg, [:x0, :x1, :x2, :x3])

      The resulting posterior marginal beliefs over all the system variables are:


      It is important to note that although this tutorial ends with all marginal beliefs having a near-Gaussian, unimodal shape, the package supports multi-modal belief estimates during both the prediction and global inference processes. In fact, many of the same underlying inference functions are involved in both the automatic initialization process and the global multi-modal iSAM inference procedure. This concludes the ContinuousScalar tutorial particular to the IncrementalInference package.


      someBelief = manikde!(Position{1}, pts)

      # and build your new factor as an object
      myprior = MyPrior(someBelief)

      and add it to the existing factor graph from earlier, lets say:

      addFactor!(fg, [:x1], myprior)
      Note

      Variable types Position{1} and ContinuousEuclid{1} are algebraically equivalent.

      That's it, this factor is now part of the graph. This should be a solvable graph:

      solveGraph!(fg); # exact alias of solveTree!(fg)

      Later we will see how to ensure these new factors can be properly serialized to work with features like saveDFG and loadDFG. See What is CalcFactor for more details.

      See the next page on how to build your own Custom Relative Factor. Serialization of factors is also discussed in more detail at Standardized Factor Serialization.
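Pulling the snippets on this page together (the sample points below are hypothetical, and MyPrior is the factor type defined above), the whole flow is roughly:

```julia
using IncrementalInference

fg = initfg()
addVariable!(fg, :x1, Position{1})

# build a belief from sample points and wrap it in the new factor
pts = [[0.5 + 0.1 * randn()] for _ in 1:100]
someBelief = manikde!(Position{1}, pts)
myprior = MyPrior(someBelief)

addFactor!(fg, [:x1], myprior)
solveGraph!(fg)   # exact alias of solveTree!(fg)
```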


      tree = solveTree!(fg, tree)

      # redraw
      pl = drawPosesLandms(fg)

      test

      This concludes the Hexagonal 2D SLAM example.

      Interest: The Bayes (Junction) tree

      The Bayes (Junction) tree is used as an acyclic (loop-free) computational object, an exact algebraic refactorization of the factor graph, to perform the associated sum-product inference. The visual structure of the tree can be extracted by modifying the command tree = wipeBuildNewTree!(fg, drawpdf=true) to produce representations such as this in bt.pdf.

      exbt2d


      # Gadfly.draw(PDF("/tmp/testLocsAll.pdf", 20cm, 10cm),pl)
      pl = drawLandms(fg)
      # Gadfly.draw(PDF("/tmp/testAll.pdf", 20cm, 10cm),pl)

      testall

      This example used the default of N=200 particles per marginal belief. By increasing the number to N=300 throughout the test, many more modes and interesting features can be explored; we refer the reader to an alternative and longer discussion of the same example in Chapter 6 here.


        spine_t,
        kwargs...
      )

      Generate a canonical helix graph that expands along a spiral pattern, analogous to flower petals.

      Notes

      Related

      generateGraph_Helix2D!, generateGraph_Helix2DSlew!

      source



      # list the contents
      ls(fg2), lsf(fg2)
      # should see :x0 and :x0f1 listed

      q̂ = Manifolds.compose(M, p, exp(M, identity_element(M, p), X))
      Xc = vee(M, q, log(M, q, q̂))
      return Xc
      end

      It is recommended to leave the incoming types unrestricted. If you must define the types, make sure to allow sufficient dispatch freedom (i.e. dispatch to concrete types) and do not force operations to non-concrete types. Usage can be very case specific, so it is better to let Julia's type-inference automation do the hard work of inferring the concrete types.
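For instance (the factor name ExampleRelative is hypothetical, and a complete definition would also need getSample and, for manifold-aware solvers, getManifold), a residual functor can leave its argument types deliberately unrestricted:

```julia
using IncrementalInference

# Hypothetical scalar relative factor.
struct ExampleRelative{T} <: AbstractManifoldMinimize
    Z::T
end

# Unrestricted arguments: Julia's type inference supplies the
# concrete types at the call site, keeping dispatch flexible.
function (cf::CalcFactor{<:ExampleRelative})(X, p, q)
    # measured offset X versus the currently estimated offset q - p
    return X .- (q .- p)
end
```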

      Note

      At present (2021) the residual function should return the residual value as a coordinate (not as tangent vectors or manifold points). Work is ongoing, and future versions are likely to return residual values as manifold tangent vectors instead.

      Serialization

      Serialization of factors is also discussed in more detail at Standardized Factor Serialization.


        Pose2,
        SpecialEuclidean(2),
        ArrayPartition(MVector{2}(0.0,0.0), MMatrix{2,2}(1.0,0.0,0.0,1.0))
      )

      Here we used Manifolds.SpecialEuclidean(2) as the variable manifold, and the default data representation is similar to Manifolds.identity_element(SpecialEuclidean(2)), or Float32[1.0 0; 0 1], etc. In the example above, we used StaticArrays.MVector, StaticArrays.MMatrix for better performance, owing to better heap vs. stack memory management.

      DistributedFactorGraphs.@defVariable (Macro)
      @defVariable StructName manifolds<:ManifoldsBase.AbstractManifold

      A macro to create a new variable with name StructName and manifolds. Note that the manifolds is an object and must be a subtype of ManifoldsBase.AbstractManifold. See documentation in Manifolds.jl on making your own.

      Example:

      DFG.@defVariable Pose2 SpecialEuclidean(2) ArrayPartition([0;0.0],[1 0; 0 1.0])
      source
      Note

      Users can implement their own manifolds using the ManifoldsBase.jl API; and the tutorial. See JuliaManifolds/Manifolds.jl for general information.
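A hedged sketch (the name MyPoint3 is illustrative) of defining and immediately using a new variable type:

```julia
using DistributedFactorGraphs, IncrementalInference, Manifolds

# New 3D translation variable on the TranslationGroup(3) manifold,
# with the origin as the default point representation.
DFG.@defVariable MyPoint3 TranslationGroup(3) zeros(3)

# use it like any built-in variable type
fg = initfg()
addVariable!(fg, :pt1, MyPoint3)
```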



      Helper function to duplicate values from a special factor variable into standard factor and variable. Returns the name of the new factor.

      Notes:

      Related

      addVariable!, addFactor!

      source
      RoME.accumulateDiscreteLocalFrame! (Function)
      accumulateDiscreteLocalFrame!(mpp, DX, Qc; ...)
       accumulateDiscreteLocalFrame!(mpp, DX, Qc, dt; Fk, Gk, Phik)
       

      Advance an odometry factor as though integrating an ODE – i.e. $X_2 = X_1 ⊕ ΔX$. Accepts continuous domain process noise density Qc which is internally integrated to discrete process noise Qd. $DX$ is assumed to already be incrementally integrated before this function. See related accumulateContinuousLocalFrame! for fully continuous system propagation.

      Notes

      • This update stays in the same reference frame but updates the local vector as though accumulating measurement values over time.
      • Kalman filter would have used for noise propagation: $Pk1 = F*Pk*F' + Qdk$
      • From Chirikjian, Vol.II, 2012, p.35: Jacobian SE(2), Jr = [cθ sθ 0; -sθ cθ 0; 0 0 1] – i.e. dSE2/dX' = SE2([0;0;-θ])
      • DX = dX/dt*Dt
      • assumed process noise for {}^b Qc = {}^b [x;y;yaw] = [fwd; sideways; rotation.rate]

      Dev Notes

      • TODO many operations here can be done in-place.

      Related

      accumulateContinuousLocalFrame!, accumulateDiscreteReferenceFrame!, accumulateFactorMeans

      source
      IncrementalInference.accumulateFactorMeans (Function)
      accumulateFactorMeans(dfg, fctsyms; solveKey)

      Accumulate chains of binary factors, potentially starting from a prior, as a parametric mean value only.

      Notes

      • Not used during tree inference.
      • Expected uses are for user analysis of factors and estimates.
      • real-time dead reckoning chain prediction.
      • Returns mean value as coordinates

      DevNotes

      Related:

      approxConvBelief, solveFactorParametric, RoME.MutablePose2Pose2Gaussian

      source
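A usage sketch with hypothetical factor labels (a prior on :x0 followed by two odometry factors):

```julia
# accumulate the parametric mean along the chain prior -> odo -> odo,
# e.g. for a dead-reckoning prediction without a full solve
μ = accumulateFactorMeans(fg, [:x0f1; :x0x1f1; :x1x2f1])
```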
      RoME.MutablePose2Pose2Gaussian (Type)
      mutable struct MutablePose2Pose2Gaussian <: AbstractManifoldMinimize

      Specialized Pose2Pose2 factor type (Gaussian), which allows for rapid accumulation of odometry information as a branch on the factor graph.

      source

      Additional Notes

      This will be consolidated with text above:

      for a new dead reckon tether solution;



      Hexagonal 2D

      Batch Mode

      A simple 2D hexagonal robot trajectory example is expanded below using techniques developed in simultaneous localization and mapping (SLAM).

      Bayes Tree Fixed-Lag Solving - Hexagonal2D Revisited

      The hexagonal fixed-lag example shows how tree based clique recycling can be achieved. A further example is given in the real-world underwater example below.

      An Underdetermined Solution (a.k.a. SLAM-e-donut)

      This tutorial describes a range-only system (unforced multimodality) where there are always more variable dimensions than range measurements made; see the Underdetermined Example here. The error distribution over ranges could be nearly anything, but is restricted to Gaussian-only in this example to illustrate an alternative point – other examples show inference results where highly non-Gaussian error distributions are used.

      Multi-modal range only example (click here or image for full Vimeo):

      IMAGE ALT TEXT HERE

      Towards Real-Time Underwater Acoustic Navigation

      This example uses "dead reckon tethering" (DRT) to perform many of the common robot odometry and high-frequency pose update operations. These features are a staple and standard part of the distributed factor graph system.

      Click on image (or this link to Vimeo) for a video illustration:

      AUV SLAM

      Uncertain Data Associations, (forced multi-hypothesis)

      This example presents a novel multimodal solution to an otherwise intractable multihypothesis SLAM problem. This work spans the entire Victoria Park dataset, and resolves a solution over roughly 10000 variable dimensions with 2^1700 (yes, to the power 1700) theoretically possible modes. At the time of first solution in 2016, a full batch solution took around 3 hours to compute on a very spartan early implementation.


      Fractional multi-hypothesis assignments are made with addFactor!(..., multihypo=[1.0; 0.5; 0.5]), and similarly for tri-nary or higher multi-hypotheses.
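For example (hypothetical variable labels and measurement values), a single bearing-range measurement that may associate with landmark :l1 or :l2 with equal probability could be added as:

```julia
using RoME, Distributions

# the 1.0 is the pose's certain association; the two 0.5s split
# the association belief across the candidate landmarks
brFct = Pose2Point2BearingRange(Normal(0.3, 0.05), Normal(4.5, 0.3))
addFactor!(fg, [:x1; :l1; :l2], brFct, multihypo=[1.0; 0.5; 0.5])
```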

      Probabilistic Data Association (Uncertain loop closures)

      Example where the standard multihypothesis addFactor!(.., multihypo=[1.0;0.5;0.5]) interface is used. This is from the Kitti driving dataset. Video here. The data association and multihypothesis section discusses this feature in more detail.

      IMAGE ALT TEXT HERE

      Synthetic Aperture Sonar SLAM

      The full functional (approximate sum-product) inference approach can be used to natively embed single-hydrophone acoustic waveform data into highly non-Gaussian SAS factors – which implicitly perform beamforming/micro-location – for a simultaneous localization and mapping solution (image links to video). See the Raw Correlator Probability (Matched Filter) section for more details.

      IMAGE ALT TEXT HERE

      Marine Surface Vehicle with ROS

      Note

      See initial example here, and native ROS support section here.

      Simulated Ambiguous SONAR in 3D

      Intersection of ambiguous elevation angle from planar SONAR sensor:

      IMAGE ALT TEXT HERE

      Bi-modal belief

      IMAGE ALT TEXT HERE

      Multi-session Indoor Robot

      Multi-session Turtlebot example of the second floor in the Stata Center:

      Turtlebot Multi-session animation

      See the multisession information page for more details, as well as academic work:

      More Examples

      Please see examples folders for Caesar and RoME for more examples, with expanded documentation in the works.

      Adding Factors - Simple Factor Design

      Caesar can be extended with new variables and factors without changing the core code. An example of this design pattern is provided in this example.

      Defining New Variables and Factors

      Adding Factors - DynPose Factor

      Intermediate Example: Adding Dynamic Factors and Variables


        Guide.xlabel("Solving Iteration"),
        Guide.ylabel("Solving Time (seconds)"),
        Guide.manual_color_key("Legend", ["fixed", "batch"], ["green", "magenta"]))
      Gadfly.draw(PNG("results_comparison.png", 12cm, 15cm), plt)

      Results

      Warning

      Note these results are out of date, much improved performance is possible and work is in progress to improve the documentation around this feature.

      Preliminary results for the comparison can be seen below. However, this is just a start and we need to perform more testing. At the moment we are working on providing consistent results and further improving performance/flattening the fixed-lag time. It should be noted that the below graph is not to demonstrate the absolute solve time, but rather the relative behavior of full-graph solve vs. fixed-lag.

      Timing comparison of full solve vs. fixed-lag

      NOTE Work is underway (aka "Project Tree House") to reduce overhead computations that result in poorer fixed-lag solving times. We expect the fixed-lag performance to improve in the coming months (Written Nov 2018). Please file issues if a deeper discussion is required.

      Additional Example

      Work in progress; in the meantime see the following examples:

      https://github.com/JuliaRobotics/Caesar.jl/blob/master/examples/wheeled/racecar/apriltagandzed_slam.jl


      # the broadcast operators will automatically vectorize
      res = z .- (x1[1:2] .- x1[1:2])
      return res
      end

      0 1/var(s.range)]
      return meas, iΣ
      end

      The Factor

      The factor is evaluated in a cost function using the Mahalanobis distance and the measurement should therefore match the residual returned.

      Optimization

      IncrementalInference.solveGraphParametric! uses Optim.jl. The supported factors should have a gradient and Hessian available, and the solver therefore makes use of TwiceDifferentiable. Full control of Optim's setup is possible with keyword arguments.
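As a sketch mirroring the measurement/information-matrix pattern above, a hypothetical 1D Gaussian prior (the MyPrior1D name is illustrative) can expose its mean and information matrix via getMeasurementParametric:

```julia
import IncrementalInference: getMeasurementParametric
using IncrementalInference, Distributions

# Hypothetical 1D Gaussian prior factor.
struct MyPrior1D <: AbstractPrior
    Z::Normal{Float64}
end

# Parametric solving consumes the measurement mean and the
# information matrix iΣ (inverse covariance).
function getMeasurementParametric(s::MyPrior1D)
    meas = [mean(s.Z)]
    iΣ = reshape([1 / var(s.Z)], 1, 1)
    return meas, iΣ
end
```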


      # free AprilTags library memory
      freeDetector!(detector)

      DevNotes

      Related

      AprilTags.detect, PackedPose2AprilTag4Corners, generateCostAprilTagsPreimageCalib

      source

      Using Images.jl

      The Caesar.jl ecosystem supports use of the JuliaImages/Images.jl suite of packages. Please see the documentation there for the wealth of features implemented.

      Handy Notes

      Converting between images and PNG format:

      bytes = Caesar.toFormat(format"PNG", img)
      Note

      More details to follow.

      Images enables ScatterAlign

      See point cloud alignment page for details on ScatterAlignPose


      X_fix = readdlm(io1)
      X_mov = readdlm(io2)
      H, HX_mov, stat = Caesar._PCL.alignICP_Simple(X_fix, X_mov; verbose=true)

      Notes

      DevNotes

      See also: PointCloud

      source

      Visualizing Point Clouds

      See work in progress along with example code on the page 3D Visualization.


      rmsg = Caesar._PCL.toROSPointCloud2(wPC2);

      More Tools for Real-Time

      See tools such as

      ST = manageSolveTree!(robotslam.dfg, robotslam.solveSettings, dbg=false)
      RoME.manageSolveTree! (Function)
      manageSolveTree!(dfg, mss; dbg, timinglog, limitfixeddown)
       

      Asynchronous solver manager that can run concurrently while other Tasks are modifying a common distributed factor graph object.

      Notes

      • When adding Variables and Factors, use solvable=0 to disable the new fragments until ready for inference.
        • e.g. addVariable!(fg, :x45, Pose2, solvable=0)
        • These parts of the factor graph can simply be activated for solving setSolvable!(fg, :x45, 1)
      source

      for solving a factor graph while the middleware processes are modifying the graph. While documentation is being completed, see the code here: https://github.com/JuliaRobotics/RoME.jl/blob/a662d45e22ae4db2b6ee20410b00b75361294545/src/Slam.jl#L175-L288
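The solvable staging pattern from the notes above, sketched with hypothetical variable labels (the Pose2Pose2 noise model shown is illustrative):

```julia
using RoME, Distributions, LinearAlgebra

# stage new fragments without letting the solver consume them yet
addVariable!(fg, :x45, Pose2, solvable=0)
odo = Pose2Pose2(MvNormal([1.0; 0.0; 0.0], diagm([0.1, 0.1, 0.01])))
addFactor!(fg, [:x44; :x45], odo, solvable=0)

# once ready, activate the fragments for the next solve
setSolvable!(fg, :x45, 1)
```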

      To stop or trigger a new solve in the SLAM manager you can just use either of these

      RoME.stopManageSolveTree! (Function)
      stopManageSolveTree!(slam)
       

      Stops a manageSolveTree! session. Usually up to the user to do so as a SLAM process comes to completion.

      Related

      manageSolveTree!

      source
      RoME.triggerSolve! (Function)
      triggerSolve!(slam)

      Trigger a factor graph solveTree!(slam.dfg,...) after clearing the solvable buffer slam.?? (assuming the manageSolveTree! task is already running).

      Notes

      • Used in combination with manageSolveTree!
      source
      Note

      Native code for consuming rosbags also includes methods:

      RosbagSubscriber, loop!, getROSPyMsgTimestamp, nanosecond2datetime
      Note

      Additional notes about tricks that came up during development are kept in this wiki.

      Note

      See ongoing RobotOS.jl discussion on building a direct C++ interface and skipping PyCall.jl entirely: https://github.com/jdlangs/RobotOS.jl/issues/59



See Stack overflow on let or the Julia docs page on scoping. Also note it is good practice to use local-scope variables (i.e. inside a function) for performance reasons.
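For example, a top-level loop that would hit the scoping ambiguity can be wrapped in a let block:

```julia
# Without the let block, `s += i` at the global top level would error in
# Julia 1.0-1.4, because `s` inside the loop body is treated as a new local.
mysum = let s = 0
    for i in 1:3
        s += i   # `s` is local to the let block, so this just works
    end
    s
end  # let block
# mysum == 6
```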

      Note

This behaviour changes in Julia 1.5 back to the Julia 0.6 behaviour for interactive cases, and is therefore likely less of a problem in future versions. See Julia 1.5 Change Notes, ([#28789], [#33864]).

      How to Enable @debug Logging.jl

      https://stackoverflow.com/questions/53548681/how-to-enable-debugging-messages-in-juno-julia-editor

      Julia Images.jl Axis Convention

Julia Images.jl follows the common ::Array column-major (i.e. vertical-major) index convention.
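A quick illustration of the vertical-major convention with a plain array:

```julia
img = reshape(1:12, 3, 4)  # 3 rows (vertical extent) by 4 columns (horizontal)
img[2, 3]                  # index order is [vertical, horizontal]; here → 8
size(img)                  # (3, 4): the vertical dimension is listed first
```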

      How does JSON-Schema work?

      Caesar.jl intends to follow json-schema.org, see step-by-step guide here.

      How to get Julia memory allocation points?

      See discourse discussion.

      Increase Linux Open File Limit?

If you see the error "Open Files Limit", please follow these instructions on your local system. This is likely to happen when debugging code and a large number of files are stored in the general solution-specific logpath.



Perform computations required for the upward message passing during belief propagation on the Bayes (Junction) tree. This function is usually called via remote_call for multiprocess dispatch.

      Notes

      DevNotes

      source
      IncrementalInference.resetVariableAllInitializations!Function
      resetVariableAllInitializations!(fgl)

      Reset initialization flag on all variables in ::AbstractDFG.

      Notes

      • Numerical values remain, but inference will overwrite since init flags are now false.
      source


      Open Community

      Click here to go to the Caesar.jl Github repo:

      source

Caesar.jl is a community project to facilitate software technology development for localization and mapping from multiple sensor data streams, across multiple sessions and human / semi-autonomous / autonomous agents. This software is being developed with broadly Industry 4.0, Robotics, and Work of the Future in mind. Caesar.jl is an "umbrella package" to combine many other libraries from across the Julia package ecosystem.

      Commercial Products and Services

NavAbility.io builds products and services which simplify the administration and support of the Caesar.jl community; please reach out for any additional information (info@navability.io), or use the community links provided below.

Various mapping and localization solutions are possible for both commercial and R&D applications. We recommend taking a look at:

Follow this page to see the NavAbility Tutorials, which are zero-install and built around specific application examples.

      Origins and Ongoing Research

      Caesar.jl developed as a spin-out project from MIT's Computer Science and Artificial Intelligence Laboratory. See related works on the literature page. Many future directions are in the works – including fundamental research, implementation quality/performance, and system integration.

      Consider citing our work: CITATION.bib.

      Community, Issues, Comments, or Help

Post Issues, or Discussions, for community help. Maintainers can easily transfer Issues to the best-suited package location if necessary. The history of changes and ongoing work can also be seen via the Milestone pages (click through the badges here). You can also get in touch via Slack.

      Note

Please help improve this documentation; if something confuses you, chances are you're not alone. It's easy to do as you read along: just click on the "Edit on GitHub" link above, and then edit the files directly in your browser. Your changes will be vetted by developers before becoming permanent, so don't worry about whether you might say something wrong.

      JuliaRobotics Code of Conduct

      The Caesar.jl project is part of the JuliaRobotics organization and adheres to the JuliaRobotics code-of-conduct.

      Next Steps

      For installation steps, examples/tutorials, and concepts please refer to the following pages:


In VSCode, open the command palette by pressing Ctrl + Shift + p. There are a wealth of tips and tricks on how to use VSCode. See this JuliaCon presentation as a general introduction to 'piece-by-piece' code execution and much, much more. Working in one of the Julia IDEs like VS Code or Juno should feel something like this (Gif borrowed from DiffEqFlux.jl):


      There are a variety of useful packages in VSCode, such as GitLens, LiveShare, and Todo Browser as just a few highlights. These VSCode Extensions are independent of the already vast JuliaLang Package Ecosystem (see JuliaObserver.com).



      Introduction

Caesar is an open-source robotic software stack aimed at localization and mapping for robotics, using non-Gaussian graphical model state-estimation techniques. The factor graph method is well suited to combining heterogeneous and ambiguous sensor data streams. The focus is predominantly on geometric/spatial/semantic estimation tasks related to simultaneous localization and mapping (SLAM). The software is also highly extensible and well suited to a variety of estimation/filtering-type tasks, especially in non-Gaussian/multimodal settings. Check out a brief description on why non-Gaussian / multi-modal data processing needs arise.

      A Few Highlights

      Caesar.jl addresses numerous issues that arise in prior SLAM solutions, including:

      • Distributed Factor Graph representation deeply-coupled with an on-Manifold probabilistic algebra language;
      • Localization using different algorithms:
        • MM-iSAMv2
  • Parametric methods, including regular Gaussian or Max-Mixtures.
        • Other multi-parametric and non-Gaussian algorithms are presently being implemented.
      • Solving under-defined systems,
      • Inference with non-Gaussian measurements,
      • Standard features for natively handling ambiguous data association and multi-hypotheses,
        • Native multi-modal (hypothesis) representation in the factor-graph, see Data Association and Hypotheses:
        • Multi-modal and non-parametric representation of constraints;
        • Gaussian distributions are but one of the many representations of measurement error;
      • Simplifying bespoke factor development,
      • Centralized (or peer-to-peer decentralized) factor-graph persistence,
        • i.e. Federated multi-session/agent reduction.
      • Multi-CPU inference.
      • Out-of-library extendable for Custom New Variables and Factors;
      • Natively supports legacy Gaussian parametric and max-mixtures solutions;
      • Local in-memory solving on the device as well as database-driven centralized solving (micro-service architecture);
      • Natively support Clique Recycling (i.e. fixed-lag out-marginalization) for continuous operation as well as off-line batch solving, see more at Using Incremental Updates (Clique Recycling I);
      • Natively supports Dead Reckon Tethering;
      • Natively supports Federated multi-session/agent solving;
      • Native support for Entry=>Data blobs for storing large format data.
      • Middleware support, e.g. see the ROS Integration Page.
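As a small, hedged sketch of the workflow these features support (the names initfg and PriorPose2, and the noise model shown, follow common RoME.jl/IncrementalInference.jl usage and are assumptions, not verbatim from this page):

```julia
using RoME, LinearAlgebra

fg = initfg()                    # empty distributed factor graph
addVariable!(fg, :x0, Pose2)     # a single on-manifold Pose2 variable
# Anchor it with a prior; MvNormal is just one of many possible belief types.
addFactor!(fg, [:x0], PriorPose2(MvNormal(zeros(3), diagm([0.1, 0.1, 0.01]))))
solveTree!(fg)                   # run inference on the Bayes tree
```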
approxDeconv(fcto, ccw; N, measurement, retries)

      Inverse solve of predicted noise value and returns tuple of (newly calculated-predicted, and known measurements) values.

      Notes

      DevNotes

      Related

      approxDeconv, _solveCCWNumeric!

      source
      approxDeconv(dfg, fctsym; ...)
       approxDeconv(dfg, fctsym, solveKey; retries)

      Generalized deconvolution to find the predicted measurement values of the factor fctsym in dfg. Inverse solve of predicted noise value and returns tuple of (newly predicted, and known "measured" noise) values.

      Notes

      Related

      approxConvBelief, deconvSolveKey

      source

This feature is not yet as feature-rich as the approxConvBelief function, and requires further work to improve the consistency of the calculation, but it nonetheless exists and is useful in many applications.
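A hedged usage sketch (the factor graph handle fg and the factor label :x0x1f1 are hypothetical):

```julia
# Recover the predicted vs. known measurement values for one factor in fg.
pred, meas = approxDeconv(fg, :x0x1f1)
# `pred` holds the newly calculated-predicted values, `meas` the known measurements.
```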


spyCliqMat(tree, :x1) # provided by IncrementalInference
# or embedded in graphviz
drawTree(tree, imgs=true, show=true)

      Clique State Machine

The mmisam solver is based on a state machine design to handle the inter- and intra-clique operations during a variety of situations. Use of the clique state machine (CSM) makes debugging, development, verification, and modification of the algorithm much easier. Contact us for any support regarding modifications to the default algorithm. For pre-docs on working with CSM, please see IIF #443.

      STATUS of a Clique

CSM currently uses the following statuses for each of the cliques during the inference process.

      [:initialized;:upsolved;:marginalized;:downsolved;:uprecycled]

      Bayes Tree Legend (from IIF)

      The color legend for the refactored CSM from issue.



Alternatively, indirect measurements of the state variables should be modeled with the most sensible function

      \[y = h(\theta, \eta)\\ \delta_j(\theta_j, \eta_j) = \ominus h_j(\theta_j, \eta_j) \oplus y_j,\]

      which approximates the underlying (on-manifold) stochastics and physics of the process at hand. The measurement models can be used to project belief through a measurement function, and should be recognized as a standard representation for a Hidden Markov Model (HMM):
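In Euclidean coordinates the residual form above can be sketched as follows (a hypothetical scalar range measurement, not taken from this page):

```julia
# State θ = (x, y) observed indirectly through a range measurement y = h(θ) + η.
h(θ) = sqrt(θ[1]^2 + θ[2]^2)   # predicted range to the origin
δ(y, θ) = y - h(θ)             # Euclidean version of ⊖ h(θ, η) ⊕ y
δ(5.0, [3.0, 4.0])             # 0.0 when the prediction matches the measurement
```

The solver drives such residuals toward zero; on-manifold variables replace the plain subtraction with the ⊖/⊕ retraction operators.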


      Beyond Filtering

      Consider a multi-sensory system along with data transmission delays, variable sampling rates, etc.; when designing a filtering system to track one or multiple targets, it quickly becomes difficult to augment state vectors with the required state and measurement histories. In contrast, the factor graph as a language allows for heterogeneous data streams to be combined in a common inference framework, and is discussed further in the building distributed factor graphs section.

      Note

Factor graphs are constructed along with the evolution of time, which allows the mmisam inference algorithm to resolve variable marginal estimates both forward and backward in time. Conventional filtering only allows forward-backward "smoothing" as two separate processes. When inferring over a factor graph, all variables and factors are considered simultaneously according to the topological connectivity, irrespective of when and where measurements were made or communicated, as long as the factor graph (probabilistic model) captures the stochastics of the situation with sufficient accuracy.

TODO: Multi-modal (belief) vs. multi-hypothesis – see thesis work on multimodal solutions in the meantime.

      Note

      Mmisam allows for parametric, non-parametric, or intensity noise models which can be incorporated into any differentiable residual function.

      Anecdotal Example (EKF-SLAM / MSC-KF)

      WIP: Explain how this method is similar to EKF-SLAM and MSC-KF...



      Advanced Topics on Bayes Tree

      Definitions

      Squashing or collapsing the Bayes tree back into a 'flat' Bayes net, by chain rule:

\[p(x,y) = p(x|y)p(y) = p(y|x)p(x) \\ p(x,y,z) = p(x|y,z)p(y,z) = p(x,y|z)p(z) = p(x|y,z)p(y|z)p(z) \\ p(x,y,z) = p(x|y,z)p(y)p(z) \, \text{iff } y \text{ is independent of } z \text{, i.e. } p(y|z)=p(y)\]

Are cliques in the Bayes (Junction) tree densely connected?

Yes and no. From the chordal Bayes net's perspective (obtained through the elimination game in order to build the clique tree), the nodes of the Bayes tree are indeed fully connected subgraphs (they are called cliques after all!). From the perspective of the subgraph of the original factor graph induced by the clique's variables, cliques need not be fully connected, since we assume the factor graph is sparse and that no new information can be created out of nothing; hence each clique must be sparse. That said, the potential exists for the inference within a clique to become densely connected (experience full "fill-in"). See the paper on square-root-SAM, where the dense covariance matrix of a Kalman filter (EKF-SLAM) is related to the inverse square root (rectangular) matrix, whose structure is equivalent to the clique subgraph adjacency matrix.

Also remember that the intermediate Bayes net (which has densely connected cliques) hides the underlying tree structure – think of the Bayes net as looking at the tree from on top or below, thereby encoding the dense connectivity in the structure of the tree itself. All information below any clique of the tree is encoded in the upward marginal belief messages at that point (i.e. the densely connected aspects pertain lower down in the tree).

      LU/QR vs. Belief Propagation

LU/QR is a special case (parametric/linear) of more general belief propagation. The story, though, is more intricate: QR/LU assume that product-factors can be formed through the chain rule (using congruency), which is not that straightforward with general beliefs. In the general case we are almost forced to use belief propagation, which in turn implies special care is needed to describe the relationship between sparse factor graph fragments in cliques on the tree, and the more densely connected structure of the Bayes net.

      Bayes Tree vs Bayes Net

      The Bayes tree is a purely symbolic structure – i.e. special grouping of factors that all come from the factor graph joint product (product of independently sampled likelihood/conditional models):

      \[[\Theta | Z] \propto \prod_i \, [ Z_i=z_i | \Theta_i ]\]

A sparse factor graph problem can be squashed into a smaller dense problem of product-factor conditionals (from variable elimination). Therefore each product-factor (aka "smart factor" in other uses of the language) represents both the factors as well as the sequencing of cliques in that branch. This process repeats recursively from the root down to the leaves. The leaves of the tree have no further reduced product factors condensing child cliques below, and therefore sparse factor fragments can be computed to start the upward belief propagation process. More importantly, as belief propagation progresses up the tree, upward belief messages (on clique separators) capture the same structure as the densely connected Bayes net, but each clique in the Bayes tree still only contains sparse fragments from the original factor graph. The structure of the tree (combined parent-child relationships) encodes the same information as the product-factor conditionals!

      Initialization on the Tree

It is more challenging, but possible, to initialize all variables in a factor graph through belief propagation on the Bayes tree.

As a thought experiment: wouldn't it be awesome if we could compile the upsolve as a symbolic process only, and assign numerical values just once during a single downsolve procedure? The origin of this idea comes from the realization that a complete upsolve on the Bayes (Junction) tree is very nearly the same thing as finding good numerical initialization values for the factor graph. If the up-init-solve can be performed as a purely symbolic process, it would greatly simplify numerical computations by deferring them to the down solve alone.

      Trying to do initialization for real, we might want to replace up-init-symbolic operations with numerical equivalents. Either way, it would be worth knowing what the equivalent numerical operations of a full up-init-solve of an uninitialized factor graph would look like.

In general, if a clique cannot be initialized based on information from lower down in that branch of the tree, more information is needed from the parent. In the Gaussian (more accurately the congruent factor) case, all information lower down in the branch – i.e. the relationships between variables in the parent – can be summarized by a new conditional product-factor that is computed with the probabilistic chain rule. To restate, the process of squashing the Bayes tree branch back down into a Bayes net is effectively the chain rule process used in variable elimination.

      Note

Question: are cascading up and down solves required if you do not use eliminated factor conditionals in parent cliques?

      Gaussian-only special case

      Elimination of variables and factors using chain rule reduction is a special case of belief propagation, and thus far only the reduction of congruent beliefs (such as Gaussian) is known.

      These computations can be parallelized depending on the conditional independence structure of the Bayes tree – separate branches are effectively separate chain rule instances. This is precisely the same process exploited by multi-frontal QR matrix factorization.

      On the down solve the conditionals–-from eliminated chains of previously eliminated variables and factors–-can be used for inference directly in the parent.

See nodes x1 to x3 in IncrementalInference issue 464. That chain does not branch or provide additional prior information, so it is collapsed into one factor between x1 and x3 and solved in the root, after which the individual variables can be solved by inference.

      Note

Question: what does the Jacobian in the Gaussian-only case mean with regard to a symbolic upsolve?

      +p(x,y,z) = p(x|y,z)p(y)p(z) \, \text{iff y is independent of z,} \, also p(y|z)=p(y)\]

      Are cliques in the Bayes (Junction) tree densly connected?

      Yes and no. From the chordal Bayes net's perspective (obtained through the elimination game in order to build the clique tree), the nodes of the Bayes tree are indeed fully connected subgraphs (they are called cliques after all!). From the perspective of the subgraph of the original factor graph induced by the clique's variables, cliques need not be fully connected, since we are assuming the factor graph as sparse, and that no new information can be created out of nothing–-hence each clique must be sparse. That said, the potential exists for the inference within a clique to become densly connected (experience full "fill-in"). See the paper on square-root-SAM, where the connection between dense covariance matrix of a Kalman filter (EKF-SLAM) is actually related to the inverse square root (rectangular) matrix which structure equivalent to the clique subgraph adjacency matrix.

      Also remember that the intermediate Bayes net (which has densly connected cliques) hides the underlying tree structure – think of the Bayes net as looking at the tree from on top or below, thereby encoding the dense connectivity in the structure of the tree itself. All information below any clique of the tree is encoded in the upward marginal belief messages at that point (i.e. the densly connected aspects pertained lower down in the tree).

      LU/QR vs. Belief Propagation

      LU/QR is a special case (Parametric/Linear) of more general belief propagation. The story though is more intricate, where QR/LU assume that product-factors can be formed through the chain rule – using congruency – it is not that straight forward with general beliefs. In the general case we are almost forced to use belief propagation, which in turn implies special care is needed to describe the relationship between sparse factor graph fragments in cliques on the tree, and the more densely connected structure of the Bayes Net.

      Bayes Tree vs Bayes Net

      The Bayes tree is a purely symbolic structure – i.e. special grouping of factors that all come from the factor graph joint product (product of independently sampled likelihood/conditional models):

      \[[\Theta | Z] \propto \prod_i \, [ Z_i=z_i | \Theta_i ]\]

      A sparse factor graph problem can be squashed into smaller dense problem of product-factor conditionals (from variable elimination). Therefore each product-factor (aka "smart factor" in other uses of the language) represent both the factors as well as the sequencing of cliques in that branch. This process repeats recursively from the root down to the leaves. The leaves of the tree have no further reduced product factors condensing child cliques below, and therefore sparse factor fragments can be computed to start the upward belief propagation process. More importantly, as belief propagation progresses up the tree, upward belief messages (on clique separators) capture the same structure as the densely connected Bayes net but each clique in the Bayes tree still only contains sparse fragments from the original factor graph. The structure of the tree (combined parent-child relationships) encodes the same information as the product-factor conditionals!

      Initialization on the Tree

It is more challenging, but possible, to initialize all variables in a factor graph through belief propagation on the Bayes tree.

As a thought experiment: wouldn't it be awesome if we could compile the upsolve as a purely symbolic process, and only assign numerical values once during a single downsolve procedure? The idea originates from the realization that a complete upsolve on the Bayes (junction) tree is very nearly the same thing as finding good numerical initialization values for the factor graph. If the up-init-solve can be performed as a purely symbolic process, it would greatly simplify numerical computations by deferring them to the down solve alone.

To do initialization for real, we might want to replace up-init-symbolic operations with numerical equivalents. Either way, it is worth knowing what the equivalent numerical operations of a full up-init-solve of an uninitialized factor graph would look like.

In general, if a clique cannot be initialized from information lower down in that branch of the tree, more information is needed from the parent. In the Gaussian (more accurately the congruent-factor) case, all information lower down in the branch – i.e. the relationships between variables in the parent – can be summarized by a new conditional product-factor that is computed with the probabilistic chain rule. To restate: the process of squashing a Bayes tree branch back down into a Bayes net is effectively the chain rule process used in variable elimination.
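As a toy numerical analogue of the up-init idea (hypothetical helper, not the Caesar.jl API): a variable can be initialized once some factor connects it to already-initialized neighbours; a branch that never reaches that state has to wait for information from its parent clique.

```python
def init_order(factors, priors):
    """Greedy initialization sweep.

    factors: list of sets of variable names; priors: variables with
    unary priors (already initializable).  Returns the order in which
    variables become initializable, stalling if information is missing.
    """
    done, order = set(priors), list(priors)
    changed = True
    while changed:
        changed = False
        for f in factors:
            missing = [v for v in f if v not in done]
            if len(missing) == 1:          # enough info to initialize it
                done.add(missing[0])
                order.append(missing[0])
                changed = True
    return order

# Prior only on x1: the branch initializes upward along the chain.
print(init_order([{"x1", "x2"}, {"x2", "x3"}], ["x1"]))  # ['x1', 'x2', 'x3']
```

With no prior in the branch at all (e.g. `init_order([{"x1", "x2"}], [])` returns `[]`), the sweep stalls immediately, mirroring a clique that must wait for a downward message from its parent.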

      Note

Question: are cascading up and down solves required if you do not use eliminated factor conditionals in parent cliques?

      Gaussian-only special case

Elimination of variables and factors using chain-rule reduction is a special case of belief propagation; thus far, closed-form reduction is only known for congruent beliefs (such as Gaussians).

      These computations can be parallelized depending on the conditional independence structure of the Bayes tree – separate branches are effectively separate chain rule instances. This is precisely the same process exploited by multi-frontal QR matrix factorization.
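The parallelism claim can be checked with a small linear-Gaussian sketch (NumPy, not the Caesar.jl solver): two branches of the tree that share no variables form a block-diagonal least-squares problem, so factorizing the branches separately – as multifrontal QR does – gives the same answer as one big joint solve.

```python
import numpy as np

rng = np.random.default_rng(0)
A1, b1 = rng.normal(size=(5, 2)), rng.normal(size=5)   # branch 1 (2 variables)
A2, b2 = rng.normal(size=(6, 3)), rng.normal(size=6)   # branch 2 (3 variables)

# Joint block-diagonal system: no measurement couples the two branches.
A = np.block([[A1, np.zeros((5, 3))],
              [np.zeros((6, 2)), A2]])
b = np.concatenate([b1, b2])
x_joint = np.linalg.lstsq(A, b, rcond=None)[0]

# Per-branch solves -- these could run in parallel on separate workers.
x1 = np.linalg.lstsq(A1, b1, rcond=None)[0]
x2 = np.linalg.lstsq(A2, b2, rcond=None)[0]

assert np.allclose(x_joint, np.concatenate([x1, x2]))
```

The equality holds because the normal equations of a block-diagonal system decouple exactly along the blocks – i.e. along the conditionally independent branches of the tree.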

On the down solve, the conditionals – from chains of previously eliminated variables and factors – can be used for inference directly in the parent.

See nodes x1 to x3 in IncrementalInference issue 464. That segment does not branch or provide additional prior information, so it is collapsed into one factor between x1 and x3 and solved in the root, after which the individual variables can be solved by inference.
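The x1-to-x3 collapse can be illustrated with scalar linear-Gaussian odometry (an assumed toy model, not the IncrementalInference.jl code): two relative factors x2 = x1 + d12 and x3 = x2 + d23 with variances s12 and s23 collapse into a single factor x3 = x1 + (d12 + d23) with variance s12 + s23, because independent Gaussian steps compose by adding means and variances.

```python
def compose(f12, f23):
    """Collapse two relative Gaussian factors (offset, variance) into one."""
    (d12, s12), (d23, s23) = f12, f23
    return d12 + d23, s12 + s23   # chain rule for independent Gaussian steps

d13, s13 = compose((1.0, 0.25), (2.0, 0.25))
print(d13, s13)  # 3.0 0.5
```

Once x1 and x3 are solved in the root via the composed factor, the intermediate x2 is recovered by straightforward inference with the original pairwise factors.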

      Note

Question: what does the Jacobian in the Gaussian-only case mean with regard to a symbolic upsolve?

@show x0 = getKDEMax(getBelief(getVariable(fg, :x0)))
@show x1 = getKDEMax(getBelief(getVariable(fg, :x1)))
@show x2 = getKDEMax(getBelief(getVariable(fg, :x2)))

      Producing output:

x0 = getKDEMax(getBelief(getVariable(fg, :x0))) = [0.101503, -0.0273216, 9.86718, 9.91146]
x1 = getKDEMax(getBelief(getVariable(fg, :x1))) = [10.0087, 9.95139, 10.0622, 10.0195]
x2 = getKDEMax(getBelief(getVariable(fg, :x2))) = [19.9381, 19.9791, 10.0056, 9.92442]

      IncrementalInference.jl Defining Factors (Future API)

      We would like to remove the idx indexing from the residual function calls, since that is an unnecessary burden on the user. Instead, the package will use views and SubArray types to simplify the interface. Please contact author for more details (8 June 2018).

      Contributions

      Thanks to mc2922 for raising the catalyst issue and conversations that followed from JuliaRobotics/RoME.jl#60.


      Literature

      Newly created page to list related references and additional literature pertaining to this package.

      Direct References

      [1.1] Fourie, D., Leonard, J., Kaess, M.: "A Nonparametric Belief Solution to the Bayes Tree" IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), (2016).

      [1.2] Fourie, D.: "Multi-modal and Inertial Sensor Solutions for Navigation-type Factor Graphs", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2017.

      [1.3] Fourie, D., Claassens, S., Pillai, S., Mata, R., Leonard, J.: "SLAMinDB: Centralized graph databases for mobile robotics", IEEE Intl. Conf. on Robotics and Automation (ICRA), Singapore, 2017.

      [1.4] Cheung, M., Fourie, D., Rypkema, N., Vaz Teixeira, P., Schmidt, H., and Leonard, J.: "Non-Gaussian SLAM utilizing Synthetic Aperture Sonar", Intl. Conf. On Robotics and Automation (ICRA), IEEE, Montreal, 2019.

      [1.5] Doherty, K., Fourie, D., Leonard, J.: "Multimodal Semantic SLAM with Probabilistic Data Association", Intl. Conf. On Robotics and Automation (ICRA), IEEE, Montreal, 2019.

[1.6] Fourie, D., Vaz Teixeira, P., Leonard, J.: "Non-parametric Mixed-Manifold Products using Multiscale Kernel Densities", IEEE Intl. Conf. on Intelligent Robots and Systems (IROS), (2019).

      [1.7] Teixeira, P.N.V., Fourie, D., Kaess, M. and Leonard, J.J., 2019, September. "Dense, sonar-based reconstruction of underwater scenes". In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 8060-8066). IEEE.

      [1.8] Fourie, D., Leonard, J.: "Inertial Odometry with Retroactive Sensor Calibration", 2015-2019.

      [1.9] Koolen, T. and Deits, R., 2019. Julia for robotics: Simulation and real-time control in a high-level programming language. IEEE, Intl. Conference on Robotics and Automation, ICRA (2019).

      [1.10] Fourie, D., Espinoza, A. T., Kaess, M., and Leonard, J. J., “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, Oulu, Finland, Springer Publishing.

      [1.11] Fourie, D., Rypkema, N., Claassens, S., Vaz Teixeira, P., Fischell, E., and Leonard, J.J., "Towards Real-Time Non-Gaussian SLAM for Underdetermined Navigation", in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2020, Las Vegas, USA.

      [1.12] J. Terblanche, S. Claassens and D. Fourie, "Multimodal Navigation-Affordance Matching for SLAM," in IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7728-7735, Oct. 2021, doi: 10.1109/LRA.2021.3098788. Also presented at, IEEE 17th International Conference on Automation Science and Engineering, August 2021, Lyon, France.

      Important References

      [2.1] Kaess, Michael, et al. "iSAM2: Incremental smoothing and mapping using the Bayes tree" The International Journal of Robotics Research (2011): 0278364911430419.

      [2.2] Kaess, Michael, et al. "The Bayes tree: An algorithmic foundation for probabilistic robot mapping." Algorithmic Foundations of Robotics IX. Springer, Berlin, Heidelberg, 2010. 157-173.

      [2.3] Kschischang, Frank R., Brendan J. Frey, and Hans-Andrea Loeliger. "Factor graphs and the sum-product algorithm." IEEE Transactions on information theory 47.2 (2001): 498-519.

      [2.4] Dellaert, Frank, and Michael Kaess. "Factor graphs for robot perception." Foundations and Trends® in Robotics 6.1-2 (2017): 1-139.

      [2.5] Sudderth, E.B., Ihler, A.T., Isard, M., Freeman, W.T. and Willsky, A.S., 2010. "Nonparametric belief propagation." Communications of the ACM, 53(10), pp.95-103

      [2.6] Paskin, Mark A. "Thin junction tree filters for simultaneous localization and mapping." in Int. Joint Conf. on Artificial Intelligence. 2003.

[2.7] Farrell, J., and Barth, M.: "The Global Positioning System and Inertial Navigation." Vol. 61. New York: McGraw-Hill, 1999.

      [2.8] Zarchan, Paul, and Howard Musoff, eds. Fundamentals of Kalman filtering: a practical approach. American Institute of Aeronautics and Astronautics, Inc., 2013.

[2.9] Rypkema, N. R.: "Underwater & Out of Sight: Towards Ubiquity in Underwater Robotics", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2019.

      [2.10] Vaz Teixeira, P.: "Dense, Sonar-based Reconstruction of Underwater Scenes", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2019.

      [2.11] Hanebeck, Uwe D. "FLUX: Progressive State Estimation Based on Zakai-type Distributed Ordinary Differential Equations." arXiv preprint arXiv:1808.02825 (2018).

      [2.12] Muandet, Krikamol, et al. "Kernel mean embedding of distributions: A review and beyond." Foundations and Trends® in Machine Learning 10.1-2 (2017): 1-141.

      [2.13] Hsiao, M. and Kaess, M., 2019, May. "MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree". In 2019 International Conference on Robotics and Automation (ICRA) (pp. 1274-1280). IEEE.

      [2.14] Arnborg, S., Corneil, D.G. and Proskurowski, A., 1987. "Complexity of finding embeddings in a k-tree". SIAM Journal on Algebraic Discrete Methods, 8(2), pp.277-284.

      [2.15a] Sola, J., Deray, J. and Atchuthan, D., 2018. "A micro Lie theory for state estimation in robotics". arXiv preprint arXiv:1812.01537, and tech report. And cheatsheet w/ suspected typos.

[2.15b] Dellaert, F., 2012. Lie Groups for Beginners.

[2.15c] Eade, E., 2017. Lie Groups for 2D and 3D Transformations.

      [2.15d] Chirikjian, G.S., 2015. Partial bi-invariance of SE(3) metrics. Journal of Computing and Information Science in Engineering, 15(1).

      [2.15e] Pennec, X. and Lorenzi, M., 2020. Beyond Riemannian geometry: The affine connection setting for transformation groups. In Riemannian Geometric Statistics in Medical Image Analysis (pp. 169-229). Academic Press.

      [2.15f] Žefran, M., Kumar, V. and Croke, C., 1996, August. Choice of Riemannian metrics for rigid body kinematics. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 97584, p. V02BT02A030). American Society of Mechanical Engineers.

      [2.15g] Chirikjian, G.S. and Zhou, S., 1998. Metrics on motion and deformation of solid models.

      [2.16] Kaess, M. and Dellaert, F., 2009. Covariance recovery from a square root information matrix for data association. Robotics and autonomous systems, 57(12), pp.1198-1210.

      [2.17] Bishop, C.M., 2006. Pattern recognition and machine learning. New York: Springer. ISBN 978-0-387-31073-2.

      Additional References

      [3.1] Duits, Remco, Erik J. Bekkers, and Alexey Mashtakov. "Fourier Transform on the Homogeneous Space of 3D Positions and Orientations for Exact Solutions to Linear Parabolic and (Hypo-) Elliptic PDEs". arXiv preprint arXiv:1811.00363 (2018).

      [3.2] Mohamed, S., Rosca, M., Figurnov, M. and Mnih, A., 2019. "Monte carlo gradient estimation in machine learning". arXiv preprint arXiv:1906.10652.

      [3.3] Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., Skinner, D., Ramadhan, A., Edelman, A., "Universal Differential Equations for Scientific Machine Learning", Archive online, DOI: 2001.04385.

      [3.4] Boumal, Nicolas. An introduction to optimization on smooth manifolds. Available online, May, 2020.

[3.5] Relationship between the Hessian and Covariance Matrix for Gaussian Random Variables, John Wiley & Sons.

      [3.6] Pennec, Xavier. Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements, HAL Archive, 2011, Inria, France.

[3.7] Weber, P., Medina-Oliva, G., Simon, C., et al., 2012. Overview on Bayesian networks applications for dependability risk analysis and maintenance areas. Appl. Artif. Intell. 25 (4), 671-682. https://doi.org/10.1016/j.engappai.2010.06.002. Preprint PDF.

[3.8] Wang, H.R., Ye, L.T., Xu, X.Y., et al., 2010. Bayesian networks precipitation model based on hidden Markov analysis and its application. Sci. China Technol. Sci. 53 (2), 539-547. https://doi.org/10.1007/s11431-010-0034-3.

      [3.9] Mangelson, J.G., Dominic, D., Eustice, R.M. and Vasudevan, R., 2018, May. Pairwise consistent measurement set maximization for robust multi-robot map merging. In 2018 IEEE international conference on robotics and automation (ICRA) (pp. 2916-2923). IEEE.

      [3.10] Bourgeois, F. and Lassalle, J.C., 1971. An extension of the Munkres algorithm for the assignment problem to rectangular matrices. Communications of the ACM, 14(12), pp.802-804.

      Signal Processing (Beamforming and Channel Deconvolution)

      [4.1] Van Trees, H.L., 2004. Optimum array processing: Part IV of detection, estimation, and modulation theory. John Wiley & Sons.

      [4.2a] Dowling, D.R., 2013. "Acoustic Blind Deconvolution and Unconventional Nonlinear Beamforming in Shallow Ocean Environments". MICHIGAN UNIV ANN ARBOR DEPT OF MECHANICAL ENGINEERING.

      [4.2b] Hossein Abadi, S., 2013. "Blind deconvolution in multipath environments and extensions to remote source localization", paper, thesis.

      Contact or Tactile

      [5.1] Suresh, S., Bauza, M., Yu, K.T., Mangelson, J.G., Rodriguez, A. and Kaess, M., 2021, May. Tactile SLAM: Real-time inference of shape and pose from planar pushing. In 2021 IEEE International Conference on Robotics and Automation (ICRA) (pp. 11322-11328). IEEE.
