From 34a1a4c2fd403f9d2a65a6fe886c9865a266f4fe Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Thu, 22 Feb 2024 01:23:52 +0000 Subject: [PATCH] build based on 08ddbf2 --- dev/.documenter-siteinfo.json | 2 +- dev/caesar_framework/index.html | 2 +- dev/concepts/2d_plotting/index.html | 2 +- dev/concepts/arena_visualizations/index.html | 2 +- dev/concepts/available_varfacs/index.html | 2 +- dev/concepts/building_graphs/index.html | 2 +- dev/concepts/compile_binary/index.html | 2 +- dev/concepts/concepts/index.html | 2 +- dev/concepts/dataassociation/index.html | 2 +- dev/concepts/entry_data/index.html | 2 +- dev/concepts/flux_factors/index.html | 2 +- dev/concepts/interacting_fgs/index.html | 2 +- dev/concepts/mmisam_alg/index.html | 2 +- dev/concepts/multilang/index.html | 2 +- dev/concepts/multisession/index.html | 2 +- dev/concepts/parallel_processing/index.html | 2 +- dev/concepts/solving_graphs/index.html | 2 +- dev/concepts/stash_and_cache/index.html | 2 +- dev/concepts/using_julia/index.html | 2 +- dev/concepts/using_manifolds/index.html | 2 +- dev/concepts/why_nongaussian/index.html | 2 +- dev/concepts/zero_install/index.html | 2 +- dev/dev/internal_fncs/index.html | 2 +- dev/dev/known_issues/index.html | 2 +- dev/dev/wiki/index.html | 2 +- .../adding_variables_factors/index.html | 2 +- .../basic_continuousscalar/index.html | 2 +- dev/examples/basic_definingfactors/index.html | 2 +- dev/examples/basic_hexagonal2d/index.html | 2 +- dev/examples/basic_slamedonut/index.html | 2 +- dev/examples/canonical_graphs/index.html | 2 +- .../custom_factor_features/index.html | 2 +- .../custom_relative_factors/index.html | 2 +- dev/examples/custom_variables/index.html | 2 +- dev/examples/deadreckontether/index.html | 2 +- dev/examples/examples/index.html | 2 +- .../interm_fixedlag_hexagonal/index.html | 2 +- dev/examples/legacy_deffactors/index.html | 2 +- dev/examples/parametric_solve/index.html | 2 +- dev/examples/using_images/index.html | 2 +- dev/examples/using_pcl/index.html | 2 +- dev/examples/using_ros/index.html | 2 +- dev/faq/index.html | 2 +- dev/func_ref/index.html | 2 +- dev/index.html | 2 +- dev/install_viz/index.html | 2 +- dev/installation_environment/index.html | 35 ++++++++----------- dev/introduction/index.html | 2 +- dev/principles/approxConvDensities/index.html | 2 +- dev/principles/bayestreePrinciples/index.html | 2 +- .../filterCorrespondence/index.html | 2 +- .../initializingOnBayesTree/index.html | 2 +- dev/principles/interm_dynpose/index.html | 2 +- .../multiplyingDensities/index.html | 2 +- dev/refs/literature/index.html | 2 +- dev/search_index.js | 2 +- 56 files changed, 69 insertions(+), 76 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index d0d53eb39..1b83d0e54 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.1","generation_timestamp":"2024-02-21T23:53:59","documenter_version":"1.2.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.1","generation_timestamp":"2024-02-22T01:23:45","documenter_version":"1.2.1"}} \ No newline at end of file diff --git a/dev/caesar_framework/index.html b/dev/caesar_framework/index.html index b8b6f9fbd..969449b56 100644 --- a/dev/caesar_framework/index.html +++ b/dev/caesar_framework/index.html @@ -1,2 +1,2 @@ -Pkg Framework · Caesar.jl

The Caesar Framework

The Caesar.jl package is an "umbrella" framework around other dedicated algorithmic packages. While most of the packages are implemented in native Julia (JuliaPro), a few dependencies are wrapped C libraries. Note that C/C++ can be incorporated with zero overhead, such as was done with AprilTags.jl.

FAQ: Why use Julia?

AMP / IIF / RoME

Robot Motion Estimate (RoME.jl) can operate in the conventional SLAM manner, using local memory (dictionaries), or alternatively distribute over a persisted DistributedFactorGraph.jl through common serialization and graph storage/database technologies; see this article as an example [1.3]. A variety of 2D plotting, 3D visualization, serialization, middleware, and analysis tools come standard as provided by the associated packages. RoME.jl combines reference frame transformations and robotics SLAM tools around the back-end solver provided by IncrementalInference.jl.

Details about the accompanying packages:

  • IncrementalInference.jl supplies the algebraic logic for factor graph inference with Bayes tree and depends on several packages itself.
  • RoME.jl introduces nodes and factors that are useful to robotic navigation.
  • ApproxManifoldProducts.jl provides on-manifold belief product operations.

Visualization (Arena.jl/RoMEPlotting.jl)

Caesar visualization (plotting of results, graphs, and data) is provided by 2D and 3D packages respectively:

  • RoMEPlotting.jl is a set of scripts that provides MATLAB-style plotting of factor graph beliefs, mostly supporting 2D visualization with some support for projections of 3D;
  • the Arena.jl package, which is a collection of 3D visualization tools.

Multilanguage Interops: NavAbility.io SDKs and APIs

The Caesar framework is not limited to direct Julia use. Check out www.NavAbility.io, or contact them directly at info@navability.io, for more details. Also see the community multi-language page for details.

+Pkg Framework · Caesar.jl

The Caesar Framework

The Caesar.jl package is an "umbrella" framework around other dedicated algorithmic packages. While most of the packages are implemented in native Julia (JuliaPro), a few dependencies are wrapped C libraries. Note that C/C++ can be incorporated with zero overhead, such as was done with AprilTags.jl.

FAQ: Why use Julia?

AMP / IIF / RoME

Robot Motion Estimate (RoME.jl) can operate in the conventional SLAM manner, using local memory (dictionaries), or alternatively distribute over a persisted DistributedFactorGraph.jl through common serialization and graph storage/database technologies; see this article as an example [1.3]. A variety of 2D plotting, 3D visualization, serialization, middleware, and analysis tools come standard as provided by the associated packages. RoME.jl combines reference frame transformations and robotics SLAM tools around the back-end solver provided by IncrementalInference.jl.

Details about the accompanying packages:

  • IncrementalInference.jl supplies the algebraic logic for factor graph inference with Bayes tree and depends on several packages itself.
  • RoME.jl introduces nodes and factors that are useful to robotic navigation.
  • ApproxManifoldProducts.jl provides on-manifold belief product operations.

Visualization (Arena.jl/RoMEPlotting.jl)

Caesar visualization (plotting of results, graphs, and data) is provided by 2D and 3D packages respectively:

  • RoMEPlotting.jl is a set of scripts that provides MATLAB-style plotting of factor graph beliefs, mostly supporting 2D visualization with some support for projections of 3D;
  • the Arena.jl package, which is a collection of 3D visualization tools.

Multilanguage Interops: NavAbility.io SDKs and APIs

The Caesar framework is not limited to direct Julia use. Check out www.NavAbility.io, or contact them directly at info@navability.io, for more details. Also see the community multi-language page for details.

diff --git a/dev/concepts/2d_plotting/index.html b/dev/concepts/2d_plotting/index.html index e3d2a6eff..40c1736bb 100644 --- a/dev/concepts/2d_plotting/index.html +++ b/dev/concepts/2d_plotting/index.html @@ -232,4 +232,4 @@

plAll = plotBelief(fg, [:x0; :x1], levels=3)
 # plAll |> PNG("/tmp/testX1.png",20cm,15cm)

-

Note

The functions hstack and vstack are provided by the Gadfly package and allow the user to build a nearly arbitrary composition of plots.
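
As a rough sketch of how such a composition might look (assuming a factor graph fg that already contains variables :x0 and :x1; the output path is illustrative):

using RoMEPlotting, Gadfly

pl1 = plotBelief(fg, :x0)
pl2 = plotBelief(fg, :x1)

# place the two belief plots side by side and export to a PDF file
hstack(pl1, pl2) |> PDF("/tmp/beliefs.pdf", 20cm, 10cm)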

Please see KernelDensityEstimatePlotting package source for more features.

+

Note

The functions hstack and vstack are provided by the Gadfly package and allow the user to build a nearly arbitrary composition of plots.

Please see KernelDensityEstimatePlotting package source for more features.

diff --git a/dev/concepts/arena_visualizations/index.html b/dev/concepts/arena_visualizations/index.html index c166f12b6..43e9a99fb 100644 --- a/dev/concepts/arena_visualizations/index.html +++ b/dev/concepts/arena_visualizations/index.html @@ -70,4 +70,4 @@ vc = startdefaultvisualization() visualize(fg, vc, drawlandms=true, densitymeshes=[:l1;:x2]) visualizeDensityMesh!(vc, fg, :l1) -# visualizeallposes!(vc, fg, drawlandms=false)

For more information see JuliaRobotics/MeshCat.jl.

2nd Generation 3D Viewer (VTK / Director)

Note

This code is obsolete

Previous versions used the much larger VTK-based Director viewer, available via the DrakeVisualizer.jl package. It requires the following preinstalled system packages:

    sudo apt-get install libvtk5-qt4-dev python-vtk

1st Generation MIT LCM Collections viewer

This code has been removed.

+# visualizeallposes!(vc, fg, drawlandms=false)

For more information see JuliaRobotics/MeshCat.jl.

2nd Generation 3D Viewer (VTK / Director)

Note

This code is obsolete

Previous versions used the much larger VTK-based Director viewer, available via the DrakeVisualizer.jl package. It requires the following preinstalled system packages:

    sudo apt-get install libvtk5-qt4-dev python-vtk

1st Generation MIT LCM Collections viewer

This code has been removed.

diff --git a/dev/concepts/available_varfacs/index.html b/dev/concepts/available_varfacs/index.html index ade51cfae..c2d494abc 100644 --- a/dev/concepts/available_varfacs/index.html +++ b/dev/concepts/available_varfacs/index.html @@ -26,4 +26,4 @@ M = SE(3) p0 = identity_element(M) -δ(x,y,z,α,β,γ) = vee(M, p0, log(M, p0, zRz))source


Extending Caesar with New Variables and Factors

A question that frequently arises is how to design custom variables and factors to solve a specific type of graph. One strength of Caesar is the ability to incorporate new variables and factors at will. Please refer to Adding Factors for more information on creating your own factors.
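
As a taste of what such an extension looks like, here is a minimal sketch of a custom variable definition (assuming a recent IncrementalInference/DistributedFactorGraphs API; the name MyScalar is illustrative only):

using IncrementalInference, Manifolds

# define a new one-dimensional translation variable type
@defVariable MyScalar TranslationGroup(1) [0.0]

# use it like any built-in variable type
fg = initfg()
addVariable!(fg, :x0, MyScalar)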

+δ(x,y,z,α,β,γ) = vee(M, p0, log(M, p0, zRz))source


Extending Caesar with New Variables and Factors

A question that frequently arises is how to design custom variables and factors to solve a specific type of graph. One strength of Caesar is the ability to incorporate new variables and factors at will. Please refer to Adding Factors for more information on creating your own factors.

diff --git a/dev/concepts/building_graphs/index.html b/dev/concepts/building_graphs/index.html index 8d1ac5ce2..7d33512f5 100644 --- a/dev/concepts/building_graphs/index.html +++ b/dev/concepts/building_graphs/index.html @@ -53,4 +53,4 @@ end

[OPTIONAL] Understanding Internal Factor Naming Convention

The factor name used by Caesar is automatically generated from

addFactor!(fg, [:x0; :x1],...)

will create a factor with name :x0x1f1

If you were to add another factor between :x0 and :x1:

addFactor!(fg, [:x0; :x1],...)

will create a second factor with the name :x0x1f2.

Adding Tags

It is possible to add tags to variables and factors that make later graph management tasks easier, e.g.:

addVariable!(fg, :l7_3, Pose2, tags=[:APRILTAG; :LANDMARK])
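
Tagged elements can later be retrieved with the DistributedFactorGraphs listing functions, roughly as sketched below (tag names follow the example above):

# list all variable labels carrying the :LANDMARK tag
ls(fg, tags=[:LANDMARK])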

Drawing the Factor Graph

Once you have a graph, you can visualize the graph as follows (beware though if the fg object is large):

# requires `sudo apt-get install graphviz`
 drawGraph(fg, show=true)

By setting show=true, the application evince will be called to show the fg.pdf file that was created using GraphViz. A GraphPlot.jl visualization engine is also available.

using GraphPlot
 plotDFG(fg)
IncrementalInference.drawGraphFunction
drawGraph(fgl; viewerapp, filepath, engine, show)
-

Draw and show the factor graph <:AbstractDFG via system graphviz and xdot app.

Notes

  • Requires system install on Linux of sudo apt-get install xdot
  • Should not be calling outside programs.
  • Need long term solution
  • DFG's toDotFile a better solution – view with xdot application.
  • also try engine={"sfdp","fdp","dot","twopi","circo","neato"}

Notes:

  • Calls external system application xdot to read the .dot file format
    • toDot(fg,file=...); @async run(`xdot file.dot`)

Related

drawGraphCliq, drawTree, printCliqSummary, spyCliqMat

source

For more details, see the DFG docs on Drawing Graphs.

When to Instantiate Poses (i.e. new Variables in Factor Graph)

Consider a robot traversing some area while exploring, localizing, and wanting to find strong loop-closure features for consistent mapping. The creation of new poses and landmark variables is a trade-off in computational complexity and marginalization errors made during factor graph construction. Common triggers for new poses are:

Computation will progress faster if poses and landmarks are very sparse. To extract the benefit of dense reconstructions, one approach is to use the factor graph as a sparse index of the general progression of the trajectory, and to use additional processing of dense sensor data for high-fidelity map reconstructions. Either interpolation or, better, direct reconstruction from inertial data can be used for the dense reconstruction.

For completeness, one could also re-project the most meaningful measurements from sensor measurements between pose epochs as though measured from the pose epoch. This approach essentially marginalizes the local dead reckoning drift errors into the local interpose re-projections, but helps keep the pose count low.

In addition, see Fixed-lag Solving for manually limiting the number of fluid variables during inference to a user-desired count.

Which Variables and Factors to use

See the next page on available variables and factors

+

Draw and show the factor graph <:AbstractDFG via system graphviz and xdot app.

Notes

Notes:

Related

drawGraphCliq, drawTree, printCliqSummary, spyCliqMat

source

For more details, see the DFG docs on Drawing Graphs.

When to Instantiate Poses (i.e. new Variables in Factor Graph)

Consider a robot traversing some area while exploring, localizing, and wanting to find strong loop-closure features for consistent mapping. The creation of new poses and landmark variables is a trade-off in computational complexity and marginalization errors made during factor graph construction. Common triggers for new poses are:

Computation will progress faster if poses and landmarks are very sparse. To extract the benefit of dense reconstructions, one approach is to use the factor graph as a sparse index of the general progression of the trajectory, and to use additional processing of dense sensor data for high-fidelity map reconstructions. Either interpolation or, better, direct reconstruction from inertial data can be used for the dense reconstruction.

For completeness, one could also re-project the most meaningful measurements from sensor measurements between pose epochs as though measured from the pose epoch. This approach essentially marginalizes the local dead reckoning drift errors into the local interpose re-projections, but helps keep the pose count low.

In addition, see Fixed-lag Solving for manually limiting the number of fluid variables during inference to a user-desired count.

Which Variables and Factors to use

See the next page on available variables and factors

diff --git a/dev/concepts/compile_binary/index.html b/dev/concepts/compile_binary/index.html index cd99a09fb..105fd384e 100644 --- a/dev/concepts/compile_binary/index.html +++ b/dev/concepts/compile_binary/index.html @@ -1,2 +1,2 @@ -Compile Binaries · Caesar.jl

Compile Binaries

Broader Julia ecosystem work on compiling shared libraries and images is hosted by PackageCompiler.jl, see documentation there.

Compiling RoME.so

A default RoME system image script, compileRoME/compileRoMESysimage.jl, can be used to reduce the "time-to-first-plot".

To use RoME with the newly created sysimage, start julia with:

julia -O3 -J ~/.julia/dev/RoME/compileRoME/RoMESysimage.so

This should dramatically cut down on the JIT compilation load time of the included packages. More packages or functions can be added to the binary, depending on the application. Furthermore, full executable binaries can easily be made with PackageCompiler.jl.
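
A minimal sketch of building such a sysimage with PackageCompiler.jl is shown below; the exact keyword arguments depend on the installed PackageCompiler version, and the precompile script path is illustrative:

using PackageCompiler

create_sysimage(["RoME"];
  sysimage_path = "RoMESysimage.so",
  precompile_execution_file = "precompile_rome.jl")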

More Info

Note

Also see this Julia Binaries Blog and more on Discourse. Also see the new brute-force sysimage work at Fezzik.jl.

Note

Contents of a previous blog post this AOT vs JIT compiling blog post has been wrapped into PackageCompiler.jl.

+Compile Binaries · Caesar.jl

Compile Binaries

Broader Julia ecosystem work on compiling shared libraries and images is hosted by PackageCompiler.jl, see documentation there.

Compiling RoME.so

A default RoME system image script, compileRoME/compileRoMESysimage.jl, can be used to reduce the "time-to-first-plot".

To use RoME with the newly created sysimage, start julia with:

julia -O3 -J ~/.julia/dev/RoME/compileRoME/RoMESysimage.so

This should dramatically cut down on the JIT compilation load time of the included packages. More packages or functions can be added to the binary, depending on the application. Furthermore, full executable binaries can easily be made with PackageCompiler.jl.

More Info

Note

Also see this Julia Binaries Blog and more on Discourse. Also see the new brute-force sysimage work at Fezzik.jl.

Note

Contents of a previous blog post this AOT vs JIT compiling blog post has been wrapped into PackageCompiler.jl.

diff --git a/dev/concepts/concepts/index.html b/dev/concepts/concepts/index.html index a1f72a50c..1b230e9fc 100644 --- a/dev/concepts/concepts/index.html +++ b/dev/concepts/concepts/index.html @@ -1,2 +1,2 @@ -Initial Concepts · Caesar.jl

Graph Concepts

Factor graphs are bipartite graphs consisting of variables and factors, which are connected by edges to form a graph structure. The term "nodes" is reserved for the actual storage of the data on some graph-oriented technology.

What are Variables and Factors

Variables, denoted by the larger nodes in the figure below, represent state variables of interest such as vehicle or landmark positions, sensor calibration parameters, and more. Variables are likely hidden values which are not directly observed, but which we want to estimate from observed data and at least some minimal algebraic structure from probabilistic measurement models.

Factors, the smaller nodes in the figure, represent the algebraic interaction between particular variables, which is captured through edges. Factors must adhere to the limits of probabilistic models – for example conditional likelihoods capture the likelihood correlations between variables; while priors (unary to one variable) represent absolute information to be introduced. A heterogeneous factor graph illustration is shown below; also see a broader discussion linked on the literature page.

factorgraphexample

Assuming factors are constructed from statistically independent measurements (i.e. no direct correlations between measurements other than the known algebraic model that might connect them), we can use the probabilistic chain rule to write the inference operation down (unnormalized):

\[P(\Theta | Z) \propto P(Z | \Theta) P(\Theta)\]

This unnormalized "Bayes rule" is a consequence of two ideas, namely the probabilistic chain rule, where $\Theta$ represents all variables and $Z$ represents all measurements or data

\[P(\Theta , Z) = P(Z | \Theta) P(\Theta)\]

or similarly,

\[P(\Theta, Z) = P(\Theta | Z) P(Z).\]

The inference objective is to invert this system, so as to find the states given the product between all the likelihood models (based on the data):

\[P(\Theta | Z) \propto \prod_i P(Z_i | \Theta_i) \prod_j P(\Theta_j)\]

We use the uncorrelated measurement process assumption that measurements Z are independent given the constructed algebraic model.

Note

Strictly speaking, factors are actually "observed variables" that are stochastically "fixed" and not free for estimation in the conventional SLAM perspective. Waving hands over the fact that factors encode both the algebraic model and the observed measurement values provides a perspective on learning structure of a problem, including more mundane operations such as sensor calibration or learning of channel transfer models.

+Initial Concepts · Caesar.jl

Graph Concepts

Factor graphs are bipartite graphs consisting of variables and factors, which are connected by edges to form a graph structure. The term "nodes" is reserved for the actual storage of the data on some graph-oriented technology.

What are Variables and Factors

Variables, denoted by the larger nodes in the figure below, represent state variables of interest such as vehicle or landmark positions, sensor calibration parameters, and more. Variables are likely hidden values which are not directly observed, but which we want to estimate from observed data and at least some minimal algebraic structure from probabilistic measurement models.

Factors, the smaller nodes in the figure, represent the algebraic interaction between particular variables, which is captured through edges. Factors must adhere to the limits of probabilistic models – for example conditional likelihoods capture the likelihood correlations between variables; while priors (unary to one variable) represent absolute information to be introduced. A heterogeneous factor graph illustration is shown below; also see a broader discussion linked on the literature page.

factorgraphexample

Assuming factors are constructed from statistically independent measurements (i.e. no direct correlations between measurements other than the known algebraic model that might connect them), we can use the probabilistic chain rule to write the inference operation down (unnormalized):

\[P(\Theta | Z) \propto P(Z | \Theta) P(\Theta)\]

This unnormalized "Bayes rule" is a consequence of two ideas, namely the probabilistic chain rule, where $\Theta$ represents all variables and $Z$ represents all measurements or data

\[P(\Theta , Z) = P(Z | \Theta) P(\Theta)\]

or similarly,

\[P(\Theta, Z) = P(\Theta | Z) P(Z).\]

The inference objective is to invert this system, so as to find the states given the product between all the likelihood models (based on the data):

\[P(\Theta | Z) \propto \prod_i P(Z_i | \Theta_i) \prod_j P(\Theta_j)\]

We use the uncorrelated measurement process assumption that measurements Z are independent given the constructed algebraic model.

Note

Strictly speaking, factors are actually "observed variables" that are stochastically "fixed" and not free for estimation in the conventional SLAM perspective. Waving hands over the fact that factors encode both the algebraic model and the observed measurement values provides a perspective on learning structure of a problem, including more mundane operations such as sensor calibration or learning of channel transfer models.

diff --git a/dev/concepts/dataassociation/index.html b/dev/concepts/dataassociation/index.html index 685b96261..2b51b3b72 100644 --- a/dev/concepts/dataassociation/index.html +++ b/dev/concepts/dataassociation/index.html @@ -33,4 +33,4 @@ KDE.BallTreeDensity <: IIF.SamplableBelief Distribution.Rayleigh <: IIF.SamplableBelief Distribution.Uniform <: IIF.SamplableBelief -Distribution.MvNormal <: IIF.SamplableBelief

One of the more exotic examples is to natively represent Synthetic Aperture Sonar (SAS) as a deeply non-Gaussian factor in the factor graph. See Synthetic Aperture Sonar SLAM. Also see the full AUV stack using a single reference beacon and Towards Real-Time Underwater Acoustic Navigation.

Null Hypothesis

Sometimes there is basic uncertainty about whether a measurement is at all valid. Note that the above examples (multihypo and Mixture) still accept that a certain association definitely exists. A null hypothesis models the situation in which a factor might be completely bogus, in which case it should be ignored. The underlying mechanics of this approach are not entirely straightforward since removing one or more factors essentially changes the structure of the graph. That said, IncrementalInference.jl employs a reasonable stand-in solution that does not require changing the graph structure and can simply be included for any factor.

addFactor!(fg, [:x7;:l13], Pose2Point2Range(...), nullhypo=0.1)

This keyword indicates to the solver that there is a 10% chance that this factor is not valid.
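
For contrast with nullhypo, the multihypo keyword mentioned above splits fractional weights across candidate associations while keeping the factor itself valid; a rough sketch (the weights and labels are illustrative):

# the measurement definitely involves :x7, but the landmark association is
# uncertain between :l13 (80%) and :l14 (20%)
addFactor!(fg, [:x7; :l13; :l14], Pose2Point2Range(Normal(8.0, 1.0)),
           multihypo=[1.0; 0.8; 0.2])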

Note
+Distribution.MvNormal <: IIF.SamplableBelief

One of the more exotic examples is to natively represent Synthetic Aperture Sonar (SAS) as a deeply non-Gaussian factor in the factor graph. See Synthetic Aperture Sonar SLAM. Also see the full AUV stack using a single reference beacon and Towards Real-Time Underwater Acoustic Navigation.

Null Hypothesis

Sometimes there is basic uncertainty about whether a measurement is at all valid. Note that the above examples (multihypo and Mixture) still accept that a certain association definitely exists. A null hypothesis models the situation in which a factor might be completely bogus, in which case it should be ignored. The underlying mechanics of this approach are not entirely straightforward since removing one or more factors essentially changes the structure of the graph. That said, IncrementalInference.jl employs a reasonable stand-in solution that does not require changing the graph structure and can simply be included for any factor.

addFactor!(fg, [:x7;:l13], Pose2Point2Range(...), nullhypo=0.1)

This keyword indicates to the solver that there is a 10% chance that this factor is not valid.

Note
diff --git a/dev/concepts/entry_data/index.html b/dev/concepts/entry_data/index.html index 63fa1d512..6834b1cdf 100644 --- a/dev/concepts/entry_data/index.html +++ b/dev/concepts/entry_data/index.html @@ -64,4 +64,4 @@ addData!(dfg,:default_folder_store,:x0,:nnModel, mdlBytes, mimeType="application/bson/octet-stream", - description="BSON.@load PipeBuffer(readBytes) model")

Experimental Features

Loading images is a relatively common task, hence a convenience function has been developed; when using ImageMagick, try Caesar.fetchDataImage.
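
A rough usage sketch (assuming an image blob was previously stored against :x0 under the key :camImage; check the function's docstring for the exact signature):

using Caesar, ImageMagick

# retrieve and decode the stored image blob into an image object
img = Caesar.fetchDataImage(fg, :x0, :camImage)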

+ description="BSON.@load PipeBuffer(readBytes) model")

Experimental Features

Loading images is a relatively common task, hence a convenience function has been developed; when using ImageMagick, try Caesar.fetchDataImage.

diff --git a/dev/concepts/flux_factors/index.html b/dev/concepts/flux_factors/index.html index f03d7ad48..9d0a0062b 100644 --- a/dev/concepts/flux_factors/index.html +++ b/dev/concepts/flux_factors/index.html @@ -1,2 +1,2 @@ -Flux (NN) Factors · Caesar.jl
+Flux (NN) Factors · Caesar.jl
diff --git a/dev/concepts/interacting_fgs/index.html b/dev/concepts/interacting_fgs/index.html index 0bcf45ace..f8c595652 100644 --- a/dev/concepts/interacting_fgs/index.html +++ b/dev/concepts/interacting_fgs/index.html @@ -54,4 +54,4 @@

Return the number of dimensions this factor vertex fc influences.

DevNotes

source
DistributedFactorGraphs.getManifoldFunction
getManifold(_)
 

Interface function to return the <:ManifoldsBase.AbstractManifold object of variableType<:InferenceVariable.

source
getManifold(mkd)
 getManifold(mkd, asPartial)
-

Return the manifold on which this ManifoldKernelDensity is defined.

DevNotes

  • TODO currently ignores the .partial aspect (captured in parameter L)
source
+

Return the manifold on which this ManifoldKernelDensity is defined.

DevNotes

source diff --git a/dev/concepts/mmisam_alg/index.html b/dev/concepts/mmisam_alg/index.html index edd491dfc..015a23df0 100644 --- a/dev/concepts/mmisam_alg/index.html +++ b/dev/concepts/mmisam_alg/index.html @@ -1,2 +1,2 @@ -Non-Gaussian Algorithm · Caesar.jl

Multimodal incremental Smoothing and Mapping Algorithm

Note

Major refactoring of the documentation is under way (2020Q1). Much of the previous text has been repositioned and is being improved. See the references for details and check back here for updates in the coming weeks.

Caesar.jl uses an approximate sum-product inference algorithm (mmiSAM); a full description of how it works is still being written. Until then, see the related literature for more details.

Joint Probability

General Factor Graph – i.e. non-Gaussian and multi-modal

mmfgbt

Inference on Bayes/Junction/Elimination Tree

See tree solve video here.

Bayes/Junction tree example

The algorithm combats the so-called curse of dimensionality on the basis of eight principles outlined in the thesis work "Multimodal and Inertial Sensor Solutions to Navigation-type Factor Graphs".

Chapman-Kolmogorov (Belief Propagation / Sum-product)

The main computational effort is to focus compute cycles on the dominant modes exhibited by the data, dropping low-likelihood modes (although not indefinitely) without sacrificing the accuracy of individual major features.

D. Fourie, A. T. Espinoza, M. Kaess, and J. J. Leonard, “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, submitted, under review.

Focussing Computation on Tree

Link to new dedicated Bayes tree pages. The following sections describe different elements of clique recycling.

Incremental Updates

Computations are recycled in a manner similar to iSAM2, with the option to complete a future downward pass.

Fixed-Lag operation (out-marginalization)

Applies user-set (likely computational) limits on message passing, and also supports mixed-priority solving.

Federated Tree Solution (Multi session/agent)

Tentatively see the multisession page.

Clique State Machine

The CSM is used to govern the inference process within a clique. A FunctionalStateMachine.jl implementation is used to allow for initialization / incremental-recycling / fixed-lag solving, and will soon support federated branch solving as well as unidirectional message passing for fixed-lead operations. See the following video for an auto-generated (using csmAnimate) concurrent clique solving example.

Sequential Nested Gibbs Method

Current default inference method. See [Fourie et al., IROS 2016]

Convolution Approximation (Quasi-Deterministic)

Convolution operations are used to implement the numerical computation of the probabilistic chain rule:

\[P(A, B) = P(A | B)P(B)\]

Proposal distributions are computed by means of an (analytical or numerical, i.e. "algebraic") factor which defines a residual function:

\[\delta : S \times \mathrm{H} \rightarrow \mathbb{R}\]

where $S \times \mathrm{H}$ is the domain such that $\theta_i \in S, \, \eta \sim P(\mathrm{H})$, and $P(\cdot)$ is a probability.

A more detailed description is available on the convolutional computations page.
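
As a hands-on entry point, a single convolution can be computed directly; a minimal sketch assuming the IncrementalInference approxConvBelief interface and a graph fg that already contains a factor :x0x1f1:

using IncrementalInference

# propagate belief through factor :x0x1f1 to produce a proposal density on :x1
prop = approxConvBelief(fg, :x0x1f1, :x1)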

Stochastic Product Approx of Infinite Functionals

See mixed-manifold products presented in the literature section.

writing in progress

Mixture Parametric Method

Work In Progress – deferred for progress on full functional methods, but likely to have Gaussian legacy algorithm with mixture model expansion added in the near future.

Chapman-Kolmogorov

Work in progress

+Non-Gaussian Algorithm · Caesar.jl

Multimodal incremental Smoothing and Mapping Algorithm

Note

Major refactoring of the documentation is under way (2020Q1). Much of the previous text has been repositioned and is being improved. See the references for details and check back here for updates in the coming weeks.

Caesar.jl uses an approximate sum-product inference algorithm (mmiSAM); a full description of how it works is still being written. Until then, see the related literature for more details.

Joint Probability

General Factor Graph – i.e. non-Gaussian and multi-modal

mmfgbt

Inference on Bayes/Junction/Elimination Tree

See tree solve video here.

Bayes/Junction tree example

The algorithm combats the so-called curse of dimensionality on the basis of eight principles outlined in the thesis work "Multimodal and Inertial Sensor Solutions to Navigation-type Factor Graphs".

Chapman-Kolmogorov (Belief Propagation / Sum-product)

The main computational effort is to focus compute cycles on the dominant modes exhibited by the data, dropping low-likelihood modes (although not indefinitely) without sacrificing the accuracy of individual major features.

D. Fourie, A. T. Espinoza, M. Kaess, and J. J. Leonard, “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, submitted, under review.

Focussing Computation on Tree

Link to new dedicated Bayes tree pages. The following sections describe different elements of clique recycling.

Incremental Updates

Computations are recycled in a manner similar to iSAM2, with the option to complete a future downward pass.

Fixed-Lag operation (out-marginalization)

Applies user-set (likely computational) limits on message passing, and also supports mixed-priority solving.

Federated Tree Solution (Multi session/agent)

Tentatively see the multisession page.

Clique State Machine

The CSM is used to govern the inference process within a clique. A FunctionalStateMachine.jl implementation is used to allow for initialization / incremental-recycling / fixed-lag solving, and will soon support federated branch solving as well as unidirectional message passing for fixed-lead operations. See the following video for an auto-generated (using csmAnimate) concurrent clique solving example.

Sequential Nested Gibbs Method

Current default inference method. See [Fourie et al., IROS 2016]

Convolution Approximation (Quasi-Deterministic)

Convolution operations are used to implement the numerical computation of the probabilistic chain rule:

\[P(A, B) = P(A | B)P(B)\]

Proposal distributions are computed by means of an (analytical or numerical, i.e. "algebraic") factor which defines a residual function:

\[\delta : S \times \mathrm{H} \rightarrow \mathbb{R}\]

where $S \times \mathrm{H}$ is the domain such that $\theta_i \in S, \, \eta \sim P(\mathrm{H})$, and $P(\cdot)$ is a probability.

A more detailed description is available on the convolutional computations page.

Stochastic Product Approx of Infinite Functionals

See mixed-manifold products presented in the literature section.

writing in progress

Mixture Parametric Method

Work In Progress – deferred for progress on full functional methods, but likely to have Gaussian legacy algorithm with mixture model expansion added in the near future.

Chapman-Kolmogorov

Work in progress

diff --git a/dev/concepts/multilang/index.html b/dev/concepts/multilang/index.html index bb8601267..4b51aadac 100644 --- a/dev/concepts/multilang/index.html +++ b/dev/concepts/multilang/index.html @@ -10,4 +10,4 @@ # Start the server over ZMQ start(zmqConfig) -# give the server a minute to start up ...

The current tests are a good place to see some examples of the current interfacing functions. Feel free to change the ZMQ interface to any of the ZMQ-supported modes of data transport, such as Interprocess Communication (IPC) vs. TCP.

Alternative Methods

Interfacing from languages like Python may also be achieved using PyCall.jl although little work has been done in the Caesar.jl framework to explore this path. Julia is itself interactive/dynamic and has plenty of line-by-line and Integrated Development Environment support – consider trying Julia for your application.

+# give the server a minute to start up ...

The current tests are a good place to see some examples of the current interfacing functions. Feel free to change the ZMQ interface to any of the ZMQ-supported modes of data transport, such as Interprocess Communication (IPC) vs. TCP.

Alternative Methods

Interfacing from languages like Python may also be achieved using PyCall.jl although little work has been done in the Caesar.jl framework to explore this path. Julia is itself interactive/dynamic and has plenty of line-by-line and Integrated Development Environment support – consider trying Julia for your application.

diff --git a/dev/concepts/multisession/index.html b/dev/concepts/multisession/index.html index 28e732842..dc07be749 100644 --- a/dev/concepts/multisession/index.html +++ b/dev/concepts/multisession/index.html @@ -1,2 +1,2 @@ -Multi-session/agent Solving · Caesar.jl

Multisession Operation

Having all the data consolidated in a factor graph allows us to do something we find really exciting: reason against data for different robots, different robot sessions, even different users. Of course, this is all optional, and must be explicitly configured, but if enabled, current inference solutions can make use of historical data to continually improve their solutions.

Consider a single robot working in a common environment that has driven around the same area a number of times and has identified a landmark that is (probably) the same. We can automatically close the loop and use the information from the prior data to improve our current solution. This is called a multisession solve.

To perform a multisession solve, you need to specify that a session is part of a common environment, e.g. 'lab'. A user then requests a multisession solve (manually for the moment), and this creates relationships between common landmarks. The collective information is used to produce a consensus on the shared landmarks. A chain of session solves is then created, and the information is propagated into the individual sessions, improving their results.

Steps in Multisession Solve

The following steps are performed by the user:

  • Indicate which sessions are part of a common environment - this is done via GraffSDK when the session is created
  • Request a multisession solve

Upon request, the solver performs the following actions:

  • Updates the common existing multisession landmarks with any new information (propagation from session to common information)
  • Builds common landmarks for any new sessions or updated data
  • Solves the common, multisession graph
  • Propagates the common consensus result to the individual sessions
  • Freezes all the session landmarks so that the session solving does not update the consensus result
  • Requests session solves for all the updated sessions

Note the current approach is well positioned to transition to the "Federated Bayes (Junction) Tree" multisession solving method, and will be updated accordingly in due course. The Federated method will allow faster multi-session solving times by avoiding the current iterated approach.

Example

Consider three sessions which exist in the same, shared environment. In this environment, during each session the robot identified the same l0 landmark, as shown in the below figure. Independent Sessions

If we examine this in terms of the estimates of the actual landmarks, we have three independent densities (blue, green, and orange) giving measures of l0 located at (20, 0):

Independent densities

Now we trigger a multisession solve. For each landmark that is seen in multiple sessions, we produce a common landmark (which we call a prime landmark) and link it to the session landmarks via factors, all denoted in black outline.

Linked landmarks

A multisession solve is then performed which, for each common (prime) landmark, produces a common estimate. In terms of densities, this is a single answer for the disparate information, as shown in red in the figure below (for a slightly different dataset):

Prime density

This information is then propagated back to the individual session landmarks, giving one common density for each landmark. As above, our green, blue, and orange individual densities are now all updated to match the consensus shown in black:

Prime density

The session landmarks are then frozen, and individual session solves are triggered to propagate the information back into the sessions. Until the federated upgrade is completed, the above process is iterated a few times to allow information to cross-propagate through all sessions. The federated tree solution requires only a single iteration up and down the federated Bayes (Junction) tree.

Next Steps

This provides an initial implementation for stitching data from multiple sessions, robots, and users. In the short term, we may trigger this automatically for any shared environments. Multisession solving, along with other automated techniques for additional measurement discovery in data, allows the system to 'dream', i.e. distilling succinct information from the large volumes of heterogeneous sensor data.

In the medium future we will extend this functionality to operate in the Bayes tree, which we call 'federated solving', so that we perform the operation using cached results of subtrees.

+Multi-session/agent Solving · Caesar.jl

Multisession Operation

Having all the data consolidated in a factor graph allows us to do something we find really exciting: reason against data for different robots, different robot sessions, even different users. Of course, this is all optional, and must be explicitly configured, but if enabled, current inference solutions can make use of historical data to continually improve their solutions.

Consider a single robot working in a common environment that has driven around the same area a number of times and has identified a landmark that is (probably) the same. We can automatically close the loop and use the information from the prior data to improve our current solution. This is called a multisession solve.

To perform a multisession solve, you need to specify that a session is part of a common environment, e.g. 'lab'. A user then requests a multisession solve (manually for the moment), and this creates relationships between common landmarks. The collective information is used to produce a consensus on the shared landmarks. A chain of session solves is then created, and the information is propagated into the individual sessions, improving their results.

Steps in Multisession Solve

The following steps are performed by the user:

  • Indicate which sessions are part of a common environment - this is done via GraffSDK when the session is created
  • Request a multisession solve

Upon request, the solver performs the following actions:

  • Updates the common existing multisession landmarks with any new information (propagation from session to common information)
  • Builds common landmarks for any new sessions or updated data
  • Solves the common, multisession graph
  • Propagates the common consensus result to the individual sessions
  • Freezes all the session landmarks so that the session solving does not update the consensus result
  • Requests session solves for all the updated sessions

Note the current approach is well positioned to transition to the "Federated Bayes (Junction) Tree" multisession solving method, and will be updated accordingly in due course. The Federated method will allow faster multi-session solving times by avoiding the current iterated approach.

Example

Consider three sessions which exist in the same, shared environment. In this environment, during each session the robot identified the same l0 landmark, as shown in the below figure. Independent Sessions

If we examine this in terms of the estimates of the actual landmarks, we have three independent densities (blue, green, and orange) giving measures of l0 located at (20, 0):

Independent densities

Now we trigger a multisession solve. For each landmark that is seen in multiple sessions, we produce a common landmark (which we call a prime landmark) and link it to the session landmarks via factors, all denoted in black outline.

Linked landmarks

A multisession solve is then performed which, for each common (prime) landmark, produces a common estimate. In terms of densities, this is a single answer for the disparate information, as shown in red in the figure below (for a slightly different dataset):

Prime density

This information is then propagated back to the individual session landmarks, giving one common density for each landmark. As above, our green, blue, and orange individual densities are now all updated to match the consensus shown in black:

Prime density

The session landmarks are then frozen, and individual session solves are triggered to propagate the information back into the sessions. Until the federated upgrade is completed, the above process is iterated a few times to allow information to cross-propagate through all sessions. The federated tree solution requires only a single iteration up and down the federated Bayes (Junction) tree.

Next Steps

This provides an initial implementation for stitching data from multiple sessions, robots, and users. In the short term, we may trigger this automatically for any shared environments. Multisession solving, along with other automated techniques for additional measurement discovery in data, allows the system to 'dream', i.e. distilling succinct information from the large volumes of heterogeneous sensor data.

In the medium future we will extend this functionality to operate in the Bayes tree, which we call 'federated solving', so that we perform the operation using cached results of subtrees.

diff --git a/dev/concepts/parallel_processing/index.html b/dev/concepts/parallel_processing/index.html index bda4b290b..8b4d9bf37 100644 --- a/dev/concepts/parallel_processing/index.html +++ b/dev/concepts/parallel_processing/index.html @@ -17,4 +17,4 @@ # helper function MyThreadSafeFactor(z) = MyThreadSafeFactor(z, [MyInplaceMem(0) for i in 1:Threads.nthreads()]) -# in residual function just use `thr_inplace = cfo.factor.inplace[Threads.threadid()]`
Note

Beyond the cases discussed above, other features in the IncrementalInference.jl code base (especially regarding the Bayes tree) are already multithreaded.

Factor Caching (In-place operations)

In-place memory operations for factors can yield a significant performance improvement. See the Cache and Stash section for more details.

+# in residual function just use `thr_inplace = cfo.factor.inplace[Threads.threadid()]`
Note

Beyond the cases discussed above, other features in the IncrementalInference.jl code base (especially regarding the Bayes tree) are already multithreaded.

Factor Caching (In-place operations)

In-place memory operations for factors can yield a significant performance improvement. See the Cache and Stash section for more details.

diff --git a/dev/concepts/solving_graphs/index.html b/dev/concepts/solving_graphs/index.html index b9944f0c8..07c446113 100644 --- a/dev/concepts/solving_graphs/index.html +++ b/dev/concepts/solving_graphs/index.html @@ -53,4 +53,4 @@ initAll!(dfg, solveKey; _parametricInit, solvable, N)

Perform graphinit over all variables with solvable=1 (default).

See also: ensureSolvable!, (EXPERIMENTAL 'treeinit')

source

Using Incremental Updates (Clique Recycling I)

One of the major features of the MM-iSAMv2 algorithm (implemented by IncrementalInference.jl) is reducing the computational load by recycling and marginalizing different (usually older) parts of the factor graph. In order to utilize the benefits of recycling, the previous Bayes (Junction) tree should also be provided as input (see the fixed-lag examples for more details):

tree = solveTree!(fg, tree)

Using Clique out-marginalization (Clique Recycling II)

When building systems with limited computational resources, out-marginalization of cliques on the Bayes tree can be used. This approach limits the number of variables that are inferred on each solution of the graph. This method is also a complement to the Incremental Recycling above; the two methods can work in tandem. There is a default setting for a FIFO out-marginalization strategy (with some additional tricks):

defaultFixedLagOnTree!(fg, 50, limitfixeddown=true)

This call will keep the latest 50 variables fluid during Bayes tree inference. The keyword limitfixeddown=true in this case will also prevent downward message passing on the Bayes tree from propagating into the out-marginalized branches of the tree. A later page in this documentation discusses how the inference algorithm and Bayes tree aspects fit together.

Synchronizing Over a Factor Graph

When adding Variables and Factors, use solvable=0 to disable the new fragments until ready for inference, for example

addVariable!(fg, :x45, Pose2, solvable=0)
 newfct = addFactor!(fg, [:x11,:x12], Pose2Pose2, solvable=0)

These parts of the factor graph can simply be activated for solving:

setSolvable!(fg, :x45, 1)
-setSolvable!(fg, newfct.label, 1)
+setSolvable!(fg, newfct.label, 1) diff --git a/dev/concepts/stash_and_cache/index.html b/dev/concepts/stash_and_cache/index.html index c690cb8ff..2a9fe9602 100644 --- a/dev/concepts/stash_and_cache/index.html +++ b/dev/concepts/stash_and_cache/index.html @@ -7,4 +7,4 @@ # continue regular use, e.g. mfc = MyFactor(...) addFactor!(fg, [:a;:b], mfc) -# ... source

In-Place vs. In-Line Cache

Depending on your particular bent, two different cache models might be more appealing. The design of preambleCache does not preclude either design option, and actually promotes the use of either depending on the particular situation at hand. The purpose of preambleCache is to provide an opportunity for caching when working with factors in the factor graph, rather than to dictate one design over the other.

CalcFactor.cache::T

One likely use of the preambleCache function is in-place memory allocation for solver hot-loop operations. Consider, for example, a getSample or factor residual calculation that is memory intensive. The best way to improve performance is to remove any memory allocations during the hot loop. For this reason the CalcFactor object has a cache::T field which will have exactly the type ::T that is returned by the user's preambleCache dispatch override. To use it in the factor getSample or residual functions, simply access the calcfactor.cache field.
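
A minimal sketch of such an override for the MyFactor example above (the cache contents are hypothetical; the signature follows the preambleCache pattern described on this page):

using IncrementalInference
import IncrementalInference: preambleCache

struct MyFactorCache
  work::Vector{Float64}   # preallocated hot-loop working memory
end

function preambleCache(dfg::AbstractDFG, vars::AbstractVector{<:DFGVariable}, ::MyFactor)
  return MyFactorCache(zeros(3))
end

# inside getSample or the residual function, reuse cf.cache.work rather than allocating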

Pulling Data from Stores

The Caesar.jl framework supports various data store designs. Some of these data stores are likely best suited for in-line caching design. Values can be retrieved from a data store during the preambleCache step, irrespective of where the data is stored.

If the user chooses to store weird and wonderful caching links to alternative hardware via the described caching, go forth and be productive! Consider sharing enhancements back the public repositories.

Stash Serialization

Note

Stashing uses Additional (Large) Data storage and retrieval following starved graph design considerations.

Some applications use graph factors with large memory requirements during computation. Often, it is not efficient/performant to store large data blobs directly within the graph when persisted. Caesar.jl therefore supports a concept called stashing (similar to starved graphs), where particular operationally important data is stored separately from the graph and can then be retrieved during the preambleCache step, a.k.a. unstashing.

Deserialize-only Stash Design (i.e. unstashing)

Presently, we recommend following a deserialize-only design. This is where factor graphs are reconstituted from some persisted storage into computable form in memory, a.k.a. loadDFG. During the load steps, factors are added to the destination graph using addFactor! calls, which in turn call preambleCache for each factor.
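
For reference, a standard persist/restore round trip looks roughly like the sketch below (the folder path is illustrative); preambleCache runs for each factor during the load:

saveDFG("/tmp/myfg", fg)
fg2 = loadDFG("/tmp/myfg")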

Therefore, factors which are persisted using the 'stash' methodology are only fully reconstructed after the preambleCache step, and the user is responsible for defining the preambleCache override for a particular factor. The desired stashed data should also already be available in said data store before the factor graph is loaded.

Caesar.jl does have factors that can use the stash design, but are currently only available as experimental features. Specifically, see the ScatterAlignPose2 factor code.

Modifying the overall Caesar.jl code for both read and write stashing might be considered in future work but is not in the current roadmap.

Notes

Please see or open issues for specific questions not yet covered here. You can also reach out via Slack, or contact NavAbility.io for help.

+# ... source

In-Place vs. In-Line Cache

Depending on your particular bent, two different cache models might be more appealing. The design of preambleCache does not preclude either design option, and actually promotes the use of either depending on the particular situation at hand. The purpose of preambleCache is to provide an opportunity for caching when working with factors in the factor graph, rather than to dictate one design over the other.

CalcFactor.cache::T

One likely use of the preambleCache function is in-place memory allocation for solver hot-loop operations. Consider, for example, a getSample or factor residual calculation that is memory intensive. The best way to improve performance is to remove any memory allocations during the hot loop. For this reason the CalcFactor object has a cache::T field which will have exactly the type ::T that is returned by the user's preambleCache dispatch override. To use it in the factor getSample or residual functions, simply access the calcfactor.cache field.

Pulling Data from Stores

The Caesar.jl framework supports various data store designs. Some of these data stores are likely best suited for in-line caching design. Values can be retrieved from a data store during the preambleCache step, irrespective of where the data is stored.

If the user chooses to store weird and wonderful caching links to alternative hardware via the described caching, go forth and be productive! Consider sharing enhancements back the public repositories.

Stash Serialization

Note

Stashing uses Additional (Large) Data storage and retrieval following starved graph design considerations.

Some applications use graph factors with large memory requirements during computation. Often, it is not efficient/performant to store large data blobs directly within the graph when persisted. Caesar.jl therefore supports a concept called stashing (similar to starved graphs), where particular operationally important data is stored separately from the graph and can then be retrieved during the preambleCache step, a.k.a. unstashing.

Deserialize-only Stash Design (i.e. unstashing)

Presently, we recommend following a deserialize-only design. This is where factor graphs are reconstituted from some persisted storage into computable form in memory, a.k.a. loadDFG. During the load steps, factors are added to the destination graph using addFactor! calls, which in turn call preambleCache for each factor.

Therefore, factors which are persisted using the 'stash' methodology are only fully reconstructed after the preambleCache step, and the user is responsible for defining the preambleCache override for a particular factor. The desired stashed data should also already be available in said data store before the factor graph is loaded.

Caesar.jl does have factors that can use the stash design, but are currently only available as experimental features. Specifically, see the ScatterAlignPose2 factor code.

Modifying the overall Caesar.jl code for both read and write stashing might be considered in future work but is not in the current roadmap.

Notes

Please see or open issues for specific questions not yet covered here. You can also reach out via Slack, or contact NavAbility.io for help.

diff --git a/dev/concepts/using_julia/index.html b/dev/concepts/using_julia/index.html index 1db099303..9db9f4aab 100644 --- a/dev/concepts/using_julia/index.html +++ b/dev/concepts/using_julia/index.html @@ -48,4 +48,4 @@ user@...$ julia -e "println(\"one more time.\")" one more time. user@...$ julia -e "println(\"...testing...\")" -...testing...
Note

When searching for Julia related help online, use the phrase 'julialang' instead of just 'julia'. For example, search for 'julialang workflow tips' or 'julialang performance tips'. Also, see FAQ - Why are first runs slow?, which is due to Just-In-Time/Pre compiling and caching.

Next Steps

Although Caesar is Julia-based, it provides multi-language support with a ZMQ interface. This is discussed in Caesar Multi-Language Support. Caesar.jl also supports various visualizations and plots by using Arena, RoMEPlotting, and Director. This is discussed in Visualization with Arena.jl and RoMEPlotting.jl.

+...testing...
Note

When searching for Julia related help online, use the phrase 'julialang' instead of just 'julia'. For example, search for 'julialang workflow tips' or 'julialang performance tips'. Also, see FAQ - Why are first runs slow?, which is due to Just-In-Time/Pre compiling and caching.

Next Steps

Although Caesar is Julia-based, it provides multi-language support with a ZMQ interface. This is discussed in Caesar Multi-Language Support. Caesar.jl also supports various visualizations and plots by using Arena, RoMEPlotting, and Director. This is discussed in Visualization with Arena.jl and RoMEPlotting.jl.

diff --git a/dev/concepts/using_manifolds/index.html b/dev/concepts/using_manifolds/index.html index 9a06cc810..c82a22d4b 100644 --- a/dev/concepts/using_manifolds/index.html +++ b/dev/concepts/using_manifolds/index.html @@ -1,2 +1,2 @@ -Using Manifolds.jl · Caesar.jl

On-Manifold Operations

Caesar.jl and its related libraries have adopted JuliaManifolds/Manifolds.jl as the foundation for developing the algebraic operations used.

The Community has been developing high quality documentation for Manifolds.jl, and we encourage the interested reader to learn and use everything available there.

Separate Manifold Beliefs Page

See building a Manifold Kernel Density for more information.

Why Manifolds.jl

There is much to be said about how and why Manifolds.jl is the right decision for building a next-gen factor graph solver. We believe the future will show that mathematicians are way ahead of the curve, and that adopting a manifold approach will pretty much be the only way to develop the required mathematical operations in Caesar.jl for the foreseeable future.

Are Manifolds Difficult? No.

Do you need a math degree to be able to use Manifolds.jl? No, you don't, since Caesar.jl and related packages have already packaged many of the common functions and factors you need to get going.

This page is meant to open the door for readers to learn more about how things work under the hood, and empower the Community to rapidly develop upon existing work. This page is also intended to show that the Caesar.jl related packages are being developed with strong focus on consolidation, single definition functionality, and serious cross discipline considerations.

If you are looking for rapid help or more expertise on a particular issue, consider reaching out by opening Issues or connecting to the ongoing chats in the Slack Channel.

What Are Manifolds

If you are a newcomer to the term Manifold and want to learn more, fear not even though your first search results might be somewhat disorienting.

The rest of this page is meant to introduce the basics, and point you to handy resources. Caesar.jl and NavAbility support open Community and are upstreaming improvements to Manifolds.jl, including code updates and documentation improvements.

'One Page' Summary of Manifolds

Imagine you have a sheet of paper and you draw with a pencil a short line segment on the page. Now draw a second line segment from the end of the first. That should be pretty easy on a flat surface, right?

When the piece of paper is lying flat on the table, you have a line in the Euclidean(2) manifold, and you can easily assign [x,y] coordinates to describe these lines or vectors. Note coordinates here is a precise technical term.

If you roll the paper into a cylinder... well, now you have line segments on a cylindrical manifold. The question is, how do you conduct mathematical operations concisely and consistently independent of the shape of your manifold? And, how do you 'unroll' the paper for simple computations on a locally flat surface?

How far can the math go before there just isn't a good recipe for writing down generic operations? Turns out a few smart people have been working to solve this and the keyword here is Manifold.

If you are drinking some coffee right now, then you are moving the cup in Euclidean(3) space, that is you assume the motion is in flat coordinates [x;y;z]. A more technical way to say that is that the Euclidean manifold has zero curvature.

If you are also concerned with the orientation of the cup (so as not to spill the hot contents everywhere), then you might actually want to work on the SpecialEuclidean(3) manifold – that is, 3 degrees of translational freedom and 3 degrees of rotational freedom. You might have heard of Lie Groups and Lie Algebras; well, that is exactly it: Lie Groups are a special set of Group Manifolds and associated operations that are already supported by JuliaManifolds/Manifolds.jl.

Things are a little easier for a robot traveling around on a flat 2D surface. If your robot is moving around with coordinates $[x,y,\theta]$, well then you are working with the coordinates of the SpecialEuclidean(2) manifold. There is more to say on how the coordinates $[x,y,\theta]$ get converted into the $\mathfrak{se}(2)$ Lie algebra, and that gets converted into a Lie Group element – i.e. $([x;y], \mathrm{RotMat}(\theta))$. More on that later.
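
To make the conversion concrete, a minimal Manifolds.jl sketch (the numeric values are illustrative, and the exact point representation, e.g. ArrayPartition, may differ between Manifolds.jl versions):

using Manifolds

M = SpecialEuclidean(2)
e = identity_element(M)       # group identity, roughly ([0, 0], [1 0; 0 1])
c = [1.0, 2.0, pi/4]          # user coordinates [x, y, θ]
X = hat(M, e, c)              # tangent vector, i.e. an 𝔰𝔢(2) Lie algebra element
p = exp(M, e, X)              # group element of the form ([x; y], RotMat(θ))
vee(M, e, log(M, e, p))       # and back to coordinates ≈ [1.0, 2.0, pi/4]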

Perhaps you are interested in relativistic effects where time as the fourth dimension is of interest, well then the Minkowski space provides Group and Manifold constructs for that – actually Minkowski falls under the supported Lorentz Manifolds.

The point here is that the math for drawing line segments in each of these manifolds above is almost exactly the same, thanks to the abstractions that have already been developed. And, many more powerful constructs exist which will become more apparent as you continue to work with Manifolds.

7 Things to know First

As a robotics, navigation, or control person who wants to get started, you need to know what the following terms mean:

  • Q1) What are manifold points, tangent vectors, and user coordinates,
  • Q2) What does the logarithm map of a manifold do,
  • Q3) What does the exponential map of a manifold do,
  • Q4) What do the vee and hat operations do,
  • Q5) What is the difference between Riemannian or Group manifolds,
  • Q6) Is a retraction the same as the exponential map,
  • Q7) Is a projection the same as a logarithm map,

We know it sounds like a lot, but the point of this paragraph is that if you are able to answer these seven questions for yourself, then you will be empowered to venture into the math of manifolds much more easily. And, everything will begin to make sense. A lot of sense, to the point that you might agree with our assessment that JuliaManifolds/Manifolds.jl is the right direction for the future.

Although you will be able to find many answers for these seven questions in many places, our answers are listed at the bottom of this page.

Manifold Tutorials

The rest of this page is devoted to showing you how to use the math, write your own code to do new things beyond what Caesar.jl can already do. If you are willing to share any contributions, please do so by opening pull requests against the related repos.

Using Manifolds in Factors

The best way to show this is to dive straight into a factor that actually uses a Manifolds mechanization, and RoME.Pose2Pose2 is a fairly straightforward example. This factor is used for rigid transforms on a 2D plane, with coordinates $[x,y,\theta]$ as alluded to above.
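
A short sketch of how this factor is typically added to a graph (the variable labels and noise values below are illustrative):

using RoME, Distributions, LinearAlgebra

fg = initfg()
addVariable!(fg, :x0, Pose2)
addVariable!(fg, :x1, Pose2)

# anchor the graph with a world-frame prior on the first pose
addFactor!(fg, [:x0], PriorPose2(MvNormal(zeros(3), diagm([0.1, 0.1, 0.01].^2))))

# relative rigid transform measurement with coordinates [x, y, θ]
addFactor!(fg, [:x0, :x1], Pose2Pose2(MvNormal([1.0, 0.0, pi/6], diagm([0.1, 0.1, 0.05].^2))))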

A Tutorial on Rotations

Note

Work in progress, Upstream Tutorial

A Tutorial on 2D Rigid Transforms

Note

Work in progress, Upstream Tutorial

Existing Manifolds

The most popular Manifolds used in Caesar.jl related packages are:

Group Manifolds

Riemannian Manifolds (Work in progress)

Note

Caesar.jl encourages the JuliaManifolds approach to defining new manifolds, and can readily be used for Caesar.jl related operations.

Creating a new Manifold

The JuliaManifolds ecosystem (specifically the ManifoldsBase.jl interface package) is designed to make it as easy as possible to define your own manifold and then get all the benefits of Manifolds.jl. Follow the documentation there to make your own manifold, which can then readily be used with all the features of both JuliaManifolds as well as the Caesar.jl related packages.
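
As a rough sketch of the entry point only (the manifold name is illustrative, and a practically useful manifold would implement more of the ManifoldsBase.jl interface than shown here):

using ManifoldsBase

# a toy flat 2D manifold, just to show where the interface hooks in
struct MyFlatPlane <: AbstractManifold{ManifoldsBase.ℝ} end

ManifoldsBase.manifold_dimension(::MyFlatPlane) = 2
# in a flat space the exponential and logarithm maps reduce to vector addition/subtraction
ManifoldsBase.exp!(::MyFlatPlane, q, p, X) = (q .= p .+ X)
ManifoldsBase.log!(::MyFlatPlane, X, p, q) = (X .= q .- p)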

Answers to 7 Questions

Q1) What are Point, Tangents, Coordinates

A manifold $\mathcal{M}$ is a collection of points that together create the given space. Points are like round sprinkles on the donut. The representation of points will vary from manifold to manifold. Sometimes it is even possible to have different representations for the same point on a manifold. These are usually denoted as $p$.

A tangent vector (we prefer tangents for clarity) is a vector $X$ that emanates from a point on a manifold, tangential to the manifold curvature. The vector lives in the tangent space of the manifold, a locally flat region around a point, $X\in T_p \mathcal{M}$. On the donut, imagine a rod-shaped sprinkle stuck along the tangent of the surface at a particular point $p$. The tangent space is the collection of all possible tangents at $p$.

Coordinates are a user defined property that uses the Euclidean nature of the tangent space at point $p$ to operate as a regular linear space. Coordinates are just a list of the independent coordinate dimensions of the tangent space values collected together. Read this part carefully, as it can easily be confused with a conventional tangent vector in a regular Euclidean space.

For example, a tangent vector to the Euclidean(2) manifold, at the origin point $(0,0)$ is what you likely are familiar with from school as a "vector" (not the coordinates, although that happens to be the same thing in the trivial case). For Euclidean space, a vector from point $p$ of length $[x,y]$ looks like the line segment between points $p$ and $q$ on the underlying manifold.

This trivial overlapping of "vectors" in the Euclidean Manifold, and in a tangent space around $p$, and coordinates for that tangent space, are no longer trivial when the manifold has curvature.

Q2) What is the Logarithm map

The logarithm X = logmap(M,p,q) computes, based at point $p$, the tangent vector $X$ on the tangent plane $T_p\mathcal{M}$ from $p$ toward $q$. In other words, imagine a string following the curve of the manifold from $p$ to $q$; pick up that string from $q$ while holding $p$ firm, until the string lies flat against the tangent space emanating from $p$. The logarithm is the opposite of the exponential map.

Multiple logmap interpretations exist, for example in the case of $SpecialEuclidean(N)$ there are multiple definitions for $\oplus$ and $\ominus$, see [2.15]. When using a library, it is worth testing how logmap and expmap are computed (away from the identity element for Groups).

Q3) What is the Exponential map

The exponential map does the opposite of the logarithm. Imagine a tangent vector $X$ emanating from point $p$. The length and direction of $X$ can be wrapped onto the curvature of the manifold to form a line on the manifold surface.
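
A small sketch of both maps using Manifolds.jl, with the unit sphere as an easy-to-picture example:

using Manifolds

M = Sphere(2)                 # unit sphere embedded in R^3
p = [1.0, 0.0, 0.0]
q = [0.0, 1.0, 0.0]

X = log(M, p, q)              # tangent vector at p "pointing along the surface" toward q
q2 = exp(M, p, X)             # wrap X back onto the manifold; q2 ≈ q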

Q4) What does vee/hat do

vee is an operation that converts a tangent vector representation into a coordinate representation. For example, Lie algebra elements are tangent vector elements, so vee([0 -w; w 0]) = w. And vice versa for hat(w) = [0 -w; w 0], which goes from coordinates to tangent vectors.
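
For example, on the rotation group with Manifolds.jl (the coordinate value is illustrative):

using Manifolds

M = SpecialOrthogonal(2)
p = identity_element(M)       # 2×2 identity rotation matrix
X = hat(M, p, [0.1])          # coordinates -> tangent vector: [0.0 -0.1; 0.1 0.0]
c = vee(M, p, X)              # tangent vector -> coordinates: [0.1]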

Q5) What Riemannian vs. Group Manifolds

Groups are mathematical structures which often fit well inside the manifold way of working. For example, in robotics, Lie Groups are popular under SpecialEuclidean(N) <: AbstractGroupManifold. Groups also have a well defined action. Most prominently for our usage, groups are sets of points for which there exists an identity point. Riemannian manifolds are more general than Lie groups; specifically, Riemannian manifolds do not (in general) have an identity point.

An easy example is that the Euclidean(N) manifold does not have an identity element, since what we know as $[0,0]$ is actually a coordinate base point for the local tangent space, which just happens to look the same as the underlying Euclidean(N) manifold. The TranslationGroup(N) exists as an additional structure over the Euclidean(N) space, which has a defined identity element as well as defined operations on points.
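
A one-line way to see the distinction in Manifolds.jl (a sketch; returned representations may vary by version):

using Manifolds

identity_element(TranslationGroup(2))   # well defined, returns [0.0, 0.0]
# Euclidean(2) is a plain Riemannian manifold with no group structure,
# so it has no distinguished identity element to ask for.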

Q6) Retraction vs. Exp map

Retractions are numerically efficient approximations to convert a tangent vector into a point on the manifold. The exponential map is the theoretically precise retraction, but may well be computationally expensive beyond the need for most applications.
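
For instance, on the sphere a projection retraction is a cheap stand-in for the exact exponential map (a Manifolds.jl sketch):

using Manifolds

M = Sphere(2)
p = [1.0, 0.0, 0.0]
X = [0.0, 0.3, 0.0]                                   # a tangent vector at p

q_exact  = exp(M, p, X)                               # exact exponential map
q_approx = retract(M, p, X, ProjectionRetraction())   # approximate: project p + X back onto the sphere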

Q7) Projection vs. Log map

The term projection can be somewhat ambiguous between references. In Manifolds.jl, projections either project a point in the embedding to a point on the manifold, or a vector from the embedding onto a tangent space at a certain point.

Confusion can easily happen in cases where there is no ambient space around a particular manifold; then the term projection may be moot.

In Manifolds.jl, an inverse retraction is an approximate logmap of a point up from the manifold onto a tangent space – i.e. not a projection. It is important not to confuse a point on the manifold as a point in the ambient space, when thinking about the term projection.

It is best to make sure you know which one is being used in any particular situation.

Note

For a slightly deeper dive into the relation between embedding, ambient space, and projections, see the background conversation here.

Gaussian vs. Non-Gaussian · Caesar.jl

Why/Where does non-Gaussian data come from?

Gaussian error models in measurement or data cues will only be Gaussian (normally distributed) if all physics/decisions/systematic-errors/calibration/etc. have a correct algebraic model in all circumstances. Caesar.jl and MM-iSAMv2 are heavily focused on state-estimation from a plethora of heterogeneous data that may not yet have perfect algebraic models. Four major categories of non-Gaussian errors have thus far been considered (a short code sketch follows the list below):

  • Uncertain decisions (a.k.a. data association), such as a robot trying to decide if a navigation loop-closure can be deduced from a repeat observation of a similar object or measurement from current and past data. These issues are commonly also referred to as multi-hypothesis.
  • Underdetermined or underdefined systems where there are more variables than constraining measurements to fully define the system as a single mode–-a.k.a solution ambiguity. For example, in 2D consider two range measurements resulting in two possible locations through trilateration.
  • Nonlinearity. For example in 2D, consider a Pose2 odometry where the orientation is uncertain: The resulting belief of where a next pose might be (convolution with odometry factor) results in a banana shape curve, even though the entire process is driven by assumed Gaussian belief.
  • Physics of the measurement process. Many measurement processes exhibit non-Gaussian behaviour. For example, acoustic/radio time-of-flight measurements, using either pulse-train or matched filtering, result in an "energy intensity" over time/distance of what the range to a scattering-target/source might be–i.e. highly non-Gaussian.
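
A minimal sketch of how an explicitly multimodal belief can be introduced with IncrementalInference.jl (the component distributions and weights below are illustrative):

using IncrementalInference, Distributions

# a two-mode prior, e.g. representing an ambiguous data association or trilateration result
bimodal = Mixture(Prior, [Normal(-5.0, 1.0); Normal(5.0, 1.0)], [0.5; 0.5])

fg = initfg()
addVariable!(fg, :x0, ContinuousScalar)
addFactor!(fg, [:x0], bimodal)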

Next Steps

Quick links to related pages:

Zero Install Solution · Caesar.jl

      Various Internal Function Docs

IncrementalInference._solveCCWNumeric! – Function
      _solveCCWNumeric!(ccwl; ...)
       _solveCCWNumeric!(ccwl, _slack; perturb)

      Solve free variable x by root finding residual function fgr.usrfnc(res, x). This is the penultimate step before calling numerical operations to move actual estimates, which is done by an internally created lambda function.

      Notes

      • Assumes cpt_.p is already set to desired X decision variable dimensions and size.
      • Assumes only ccw.particleidx will be solved for
      • small random (off-manifold) perturbation used to prevent trivial solver cases, div by 0 etc.
        • perturb is necessary for NLsolve (obsolete) cases, and smaller than 1e-10 will result in test failure
      • Also incorporates the active hypo lookup

      DevNotes

      • TODO testshuffle is now obsolete, should be removed
      • TODO perhaps consolidate perturbation with inflation or nullhypo
      source

      Known Issues

      This page is used to list known issues:

      • Arena.jl is fairly behind on a number of updates and deprecations. Fixes for this are planned 2021Q2.
      • RoMEPlotting.jl main features like plotSLAM2D are working, but some of the other features are not fully up to date with recent changes in upstream packages. This too will be updated around Summer 2021.

      Features To Be Restored

      Install 3D Visualization Utils (e.g. Arena.jl)

      3D Visualizations are provided by Arena.jl as well as development package Amphitheater.jl. Please follow instructions on the Visualizations page for a variety of 3D utilities.

      Note

Arena.jl and Amphitheater.jl are currently being refactored as part of the broader DistributedFactorGraph migration; the features are in beta stage (1Q2020).

      Install the latest master branch version with

      (v1.5) pkg> add Arena#master

      Install "Just the ZMQ/ROS Runtime Solver" (Linux)

      Work in progress (see issue #278).

Wiki Pointers · Caesar.jl

      Developers Documentation

      High Level Requirements

      Wiki to formalize some of the overall objectives.

      Standardizing the API, verbNoun Definitions:

      The API derives from a set of standard definitions for verbs and Nouns, please see the developer wiki regarding these definitions.

      DistributedFactorGraphs.jl Docs

      These are more hardy developer docs, such as the lower level data management API etc.

      Design Wiki, Data and Architecture

      More developer zone material will be added here in the future, but for the time being check out the Caesar Wiki.

      Tree and CSM References

Major upgrades to how the tree and CSM work are tracked in IIF issue 889.

      Coding Templates

We've started to organize useful coding templates that are not available elsewhere (such as JuliaDocs) in a more local developers wiki.

      Shortcuts for vscode IDE

      See wiki

      Parametric Solve Whiteboard

      https://github.com/JuliaRobotics/IncrementalInference.jl/wiki/Parametric-Solve-Whiteboard

      Early PoC work on Tree based Initialization

      https://github.com/JuliaRobotics/IncrementalInference.jl/wiki/Tree-Based-Initialization

      Wiki for variable ordering links.


      Variable/Factor Considerations

      A couple of important points:

      • You do not need to modify or insert your new code into Caesar/RoME/IncrementalInference source code libraries – they can be created and run anywhere on-the-fly!
      • As long as the factors exist in the working space when the solver is run, the factors are automatically used – this is possible due to Julia's multiple dispatch design
      • Caesar.jl is designed to allow you to add new variables and factors to your own independent repository and incorporate them at will at compile-time or even run-time
      • Residual function definitions for new factors types use a callable struct (a.k.a functor) architecture to simultaneously allow:
        • Multiple dispatch (i.e. 'polymorphic' behavior)
        • Meta-data and in-place memory storage for advanced and performant code
        • An outside callback implementation style
      • In most robotics scenarios, there is no need for new variables or factors:
        • Variables have various mechanisms that allow you to attach data to them, e.g. raw sensory data or identified April tags, so you do not need to create a new variable type just to store data
        • New variables are required only if you are representing a new state - TODO: Example of needed state
        • New factors are needed if:
          • You need to represent a constraint for a variable (known as a singleton) and that constraint type doesn't exist
          • You need to represent a constraint between two variables and that constraint type doesn't exist

      All factors inherit from one of the following types, depending on their function:

      • AbstractPrior is for priors (unary factors) that provide an absolute constraint for a single variable. A simple example of this is an absolute GPS prior, or equivalently a (0, 0, 0) starting location in a Pose2 scenario.
        • Requires: A getSample function
      • AbstractRelativeMinimize uses Optim.jl and is for relative factors that introduce an algebraic relationship between two or more variables. A simple example of this is an odometry factor between two pose variables, or a range factor indicating the range between a pose and another variable.
        • Requires: A getSample function and a residual function definition
        • The minimize suffix specifies that the residual function of this factor will be enforced by numerical minimization (find me the minimum of this function)
      • [NEW] AbstractManifoldMinimize uses Manopt.jl.

      How do you decide which to use?

      • If you are creating factors for world-frame information that will be tied to a single variable, inherit from <:AbstractPrior
        • GPS coordinates should be priors
      • If you are creating factors for local-frame relationships between variables, inherit from IIF.AbstractRelativeMinimize
        • Odometry and bearing deltas should be introduced as pairwise factors and should be local frame

      TBD: Users should start with IIF.AbstractRelativeMinimize, discuss why and when they should promote their factors to IIF.AbstractRelativeRoots.

      Note

      AbstractRelativeMinimize does not imply that the overall inference algorithm only minimizes an objective function. The MM-iSAM algorithm is built around fixed-point analysis. Minimization is used here to locally enforce the residual function.

      What you need to build in the new factor:

      • A struct for the factor itself
• A sampler function to return measurements from the random distributions
      • If you are building a <:AbstractRelative you need to define a residual function to introduce the relative algebraic relationship between the variables
        • Minimization function should be lower-bounded and smooth
      • A packed type of the factor which must be named Packed[Factor name], and allows the factor to be packed/transmitted/unpacked
      • Serialization and deserialization methods
        • These are convert functions that pack and unpack the factor (which may be highly complex) into serialization-compatible formats
  • The factors are mostly comprised of distributions (of type SampleableBelief), and JSON3.jl is used for serialization.
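
Pulling the checklist together, a minimal hedged sketch of a new scalar relative factor (the name MyOffset is illustrative, the packed serialization companion type is omitted, and a reasonably recent IncrementalInference API is assumed):

using IncrementalInference, Distributions

# the factor struct simply wraps the measurement belief
struct MyOffset{T <: SamplableBelief} <: AbstractRelativeMinimize
  Z::T
end

# sampler: draw one measurement from the wrapped belief
IncrementalInference.getSample(cf::CalcFactor{<:MyOffset}) = rand(cf.factor.Z)

# residual: mismatch between the sampled offset z and the current estimates of x1, x2
(cf::CalcFactor{<:MyOffset})(z, x1, x2) = z .- (x2 .- x1)
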
# and visualization
plotKDE(fg, [:x0, :x1, :x2, :x3])

      The resulting posterior marginal beliefs over all the system variables are:

It is important to note that although this tutorial ends with all marginal beliefs having near-Gaussian shape and being unimodal, the package supports multi-modal belief estimates during both the prediction and global inference processes. In fact, many of the same underlying inference functions are involved with the automatic initialization process and the global multi-modal iSAM inference procedure. This concludes the ContinuousScalar tutorial particular to the IncrementalInference package.

someBelief = manikde!(Position{1}, pts)

# and build your new factor as an object
myprior = MyPrior(someBelief)

and add it to the existing factor graph from earlier, let's say:

      addFactor!(fg, [:x1], myprior)
      Note

Variable types Position{1} or ContinuousEuclid{1} are algebraically equivalent.

      That's it, this factor is now part of the graph. This should be a solvable graph:

      solveGraph!(fg); # exact alias of solveTree!(fg)

      Later we will see how to ensure these new factors can be properly serialized to work with features like saveDFG and loadDFG. See What is CalcFactor for more details.

      See the next page on how to build your own Custom Relative Factor. Serialization of factors is also discussed in more detail at Standardized Factor Serialization.

tree = solveTree!(fg, tree)

# redraw
pl = drawPosesLandms(fg)

      test

      This concludes the Hexagonal 2D SLAM example.

      Interest: The Bayes (Junction) tree

The Bayes (Junction) tree is used as an acyclic (no loops) computational object, an exact algebraic refactorization of the factor graph, to perform the associated sum-product inference. The visual structure of the tree can be extracted by modifying the command tree = wipeBuildNewTree!(fg, drawpdf=true) to produce representations such as this in bt.pdf.

      exbt2d

# Gadfly.draw(PDF("/tmp/testLocsAll.pdf", 20cm, 10cm),pl)

pl = drawLandms(fg)
# Gadfly.draw(PDF("/tmp/testAll.pdf", 20cm, 10cm),pl)

      testall

      This example used the default of N=200 particles per marginal belief. By increasing the number to N=300 throughout the test many more modes and interesting features can be explored, and we refer the reader to an alternative and longer discussion on the same example, in Chapter 6 here.
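
To experiment with the particle count for the graph built in this tutorial, the solver parameter can be raised before solving; a sketch assuming the standard SolverParams.N field:

# increase the number of particles/points per marginal belief, then re-solve
getSolverParams(fg).N = 300
solveGraph!(fg)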


Generate canonical helix graph that expands along a spiral pattern, analogous to flower petals.

      Notes

      Related

      generateGraph_Helix2D!, generateGraph_Helix2DSlew!

      source

# (tail of the custom relative factor's residual function)
    q̂ = Manifolds.compose(M, p, exp(M, identity_element(M, p), X))
    Xc = vee(M, q, log(M, q, q̂))
    return Xc
end

      It is recommended to leave the incoming types unrestricted. If you must define the types, make sure to allow sufficient dispatch freedom (i.e. dispatch to concrete types) and not force operations to "non-concrete" types. Usage can be very case specific, and hence better to let Julia type-inference automation do the hard work of inferring the concrete types.

      Note

      At present (2021) the residual function should return the residual value as a coordinate (not as tangent vectors or manifold points). Ongoing work is in progress, and likely to return residual values as manifold tangent vectors instead.

      Serialization

      Serialization of factors is also discussed in more detail at Standardized Factor Serialization.

@defVariable(
    Pose2,
    SpecialEuclidean(2),
    ArrayPartition(MVector{2}(0.0,0.0), MMatrix{2,2}(1.0,0.0,0.0,1.0))
)

      Here we used Manifolds.SpecialEuclidean(2) as the variable manifold, and the default data representation is similar to Manifolds.identity_element(SpecialEuclidean(2)), or Float32[1.0 0; 0 1], etc. In the example above, we used StaticArrays.MVector, StaticArrays.MMatrix for better performance, owing to better heap vs. stack memory management.

DistributedFactorGraphs.@defVariable – Macro
      @defVariable StructName manifolds<:ManifoldsBase.AbstractManifold

      A macro to create a new variable with name StructName and manifolds. Note that the manifolds is an object and must be a subtype of ManifoldsBase.AbstractManifold. See documentation in Manifolds.jl on making your own.

      Example:

      DFG.@defVariable Pose2 SpecialEuclidean(2) ArrayPartition([0;0.0],[1 0; 0 1.0])
      source
      Note

      Users can implement their own manifolds using the ManifoldsBase.jl API; and the tutorial. See JuliaManifolds/Manifolds.jl for general information.
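
As a further hedged sketch, defining and then immediately using a brand-new variable type (the name MyPoint3 is illustrative):

using DistributedFactorGraphs, IncrementalInference, Manifolds

# a new 3D translation-only variable on the TranslationGroup(3) manifold
@defVariable MyPoint3 TranslationGroup(3) [0.0; 0.0; 0.0]

fg = initfg()
addVariable!(fg, :p0, MyPoint3)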

Dead Reckon Tether · Caesar.jl

      Helper function to duplicate values from a special factor variable into standard factor and variable. Returns the name of the new factor.

      Notes:

      Related

      addVariable!, addFactor!

      source
RoME.accumulateDiscreteLocalFrame! – Function
      accumulateDiscreteLocalFrame!(mpp, DX, Qc; ...)
       accumulateDiscreteLocalFrame!(mpp, DX, Qc, dt; Fk, Gk, Phik)
       

      Advance an odometry factor as though integrating an ODE – i.e. $X_2 = X_1 ⊕ ΔX$. Accepts continuous domain process noise density Qc which is internally integrated to discrete process noise Qd. $DX$ is assumed to already be incrementally integrated before this function. See related accumulateContinuousLocalFrame! for fully continuous system propagation.

      Notes

      • This update stays in the same reference frame but updates the local vector as though accumulating measurement values over time.
      • Kalman filter would have used for noise propagation: $Pk1 = F*Pk*F' + Qdk$
      • From Chirikjian, Vol.II, 2012, p.35: Jacobian SE(2), Jr = [cθ sθ 0; -sθ cθ 0; 0 0 1] – i.e. dSE2/dX' = SE2([0;0;-θ])
      • DX = dX/dt*Dt
      • assumed process noise for {}^b Qc = {}^b [x;y;yaw] = [fwd; sideways; rotation.rate]

      Dev Notes

      • TODO many operations here can be done in-place.

      Related

      accumulateContinuousLocalFrame!, accumulateDiscreteReferenceFrame!, accumulateFactorMeans

      source
IncrementalInference.accumulateFactorMeans – Function
      accumulateFactorMeans(dfg, fctsyms; solveKey)

Accumulate chains of binary factors – potentially starting from a prior – as a parametric mean value only.

      Notes

      • Not used during tree inference.
      • Expected uses are for user analysis of factors and estimates.
      • real-time dead reckoning chain prediction.
      • Returns mean value as coordinates

      DevNotes

      Related:

      approxConvBelief, solveFactorParametric, RoME.MutablePose2Pose2Gaussian

      source
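
For example, chaining the parametric means of a prior and two odometry factors might look as follows (a sketch only; the factor labels are hypothetical and assumed to already exist in fg):

fctsyms = [:x0f1; :x0x1f1; :x1x2f1]          # hypothetical chain of factor labels
pred = accumulateFactorMeans(fg, fctsyms)    # returns the accumulated mean as coordinates
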
      RoME.MutablePose2Pose2GaussianType
      mutable struct MutablePose2Pose2Gaussian <: AbstractManifoldMinimize

      Specialized Pose2Pose2 factor type (Gaussian), which allows for rapid accumulation of odometry information as a branch on the factor graph.

      source

      Additional Notes

      This will be consolidated with text above:

      for a new dead reckon tether solution;

      diff --git a/dev/examples/examples/index.html b/dev/examples/examples/index.html index 473fb8dec..6c8bed107 100644 --- a/dev/examples/examples/index.html +++ b/dev/examples/examples/index.html @@ -6,4 +6,4 @@

      Hexagonal 2D

      Batch Mode

      A simple 2D hexagonal robot trajectory example is expanded below using techniques developed in simultaneous localization and mapping (SLAM).

      Bayes Tree Fixed-Lag Solving - Hexagonal2D Revisited

      The hexagonal fixed-lag example shows how tree based clique recycling can be achieved. A further example is given in the real-world underwater example below.

      An Underdetermined Solution (a.k.a. SLAM-e-donut)

This tutorial describes (unforced multimodality) a range-only system where there are always more variable dimensions than range measurements made; see the Underdetermined Example here. The error distribution over ranges could be nearly anything, but is restricted to Gaussian-only in this example to illustrate an alternative point – other examples show inference results where highly non-Gaussian error distributions are used.

      Multi-modal range only example (click here or image for full Vimeo):

      IMAGE ALT TEXT HERE

      Towards Real-Time Underwater Acoustic Navigation

This example uses "dead reckon tethering" (DRT) to perform many of the common robot odometry and high frequency pose update operations. These features are a staple and standard part of the distributed factor graph system.

      Click on image (or this link to Vimeo) for a video illustration:

      AUV SLAM

      Uncertain Data Associations, (forced multi-hypothesis)

This example presents a novel multimodal solution to an otherwise intractable multihypothesis SLAM problem. This work spans the entire Victoria Park dataset and resolves a solution over roughly 10,000 variable dimensions with 2^1700 (yes, two to the power 1700) theoretically possible modes. At the time of the first solution in 2016, a full batch solution took around 3 hours to compute on a very spartan early implementation.

Fractional multi-hypothesis assignments are made with addFactor!(..., multihypo=[1.0; 0.5; 0.5]); the same applies for tri-nary or higher multi-hypotheses.
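
A hedged sketch of such a fractional assignment, assuming a Pose2Point2BearingRange sighting that may belong to either of two existing landmarks :l1 or :l2 (labels and noise values are illustrative only):

using RoME, Distributions

p2br = Pose2Point2BearingRange(Normal(0.0, 0.03), Normal(20.0, 1.0))  # bearing, range
# full weight on the pose, and a 50/50 split between the two landmark hypotheses
addFactor!(fg, [:x1; :l1; :l2], p2br, multihypo=[1.0; 0.5; 0.5])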

      Probabilistic Data Association (Uncertain loop closures)

      Example where the standard multihypothesis addFactor!(.., multihypo=[1.0;0.5;0.5]) interface is used. This is from the Kitti driving dataset. Video here. The data association and multihypothesis section discusses this feature in more detail.

      IMAGE ALT TEXT HERE

      Synthetic Aperture Sonar SLAM

The full functional (approximate sum-product) inference approach can be used to natively embed single hydrophone acoustic waveform data into highly non-Gaussian SAS factors – which implicitly perform beamforming/micro-location – for a simultaneous localization and mapping solution (image links to video). See the Raw Correlator Probability (Matched Filter) Section for more details.

      IMAGE ALT TEXT HERE

      Marine Surface Vehicle with ROS

      Note

      See initial example here, and native ROS support section here.

      Simulated Ambiguous SONAR in 3D

      Intersection of ambiguous elevation angle from planar SONAR sensor:

      IMAGE ALT TEXT HERE

      Bi-modal belief

      IMAGE ALT TEXT HERE

      Multi-session Indoor Robot

      Multi-session Turtlebot example of the second floor in the Stata Center:

      Turtlebot Multi-session animation

      See the multisession information page for more details, as well as academic work:

      More Examples

      Please see examples folders for Caesar and RoME for more examples, with expanded documentation in the works.

      Adding Factors - Simple Factor Design

      Caesar can be extended with new variables and factors without changing the core code. An example of this design pattern is provided in this example.

      Defining New Variables and Factor

      Adding Factors - DynPose Factor

      Intermediate Example: Adding Dynamic Factors and Variables

      diff --git a/dev/examples/interm_fixedlag_hexagonal/index.html b/dev/examples/interm_fixedlag_hexagonal/index.html index d87a94bda..306885923 100644 --- a/dev/examples/interm_fixedlag_hexagonal/index.html +++ b/dev/examples/interm_fixedlag_hexagonal/index.html @@ -78,4 +78,4 @@ Guide.xlabel("Solving Iteration"), Guide.ylabel("Solving Time (seconds)"), Guide.manual_color_key("Legend", ["fixed", "batch"], ["green", "magenta"])) -Gadfly.draw(PNG("results_comparison.png", 12cm, 15cm), plt)

      Results

      Warning

Note these results are out of date; much improved performance is possible, and work is in progress to improve the documentation around this feature.

      Preliminary results for the comparison can be seen below. However, this is just a start and we need to perform more testing. At the moment we are working on providing consistent results and further improving performance/flattening the fixed-lag time. It should be noted that the below graph is not to demonstrate the absolute solve time, but rather the relative behavior of full-graph solve vs. fixed-lag.

      Timing comparison of full solve vs. fixed-lag

      NOTE Work is underway (aka "Project Tree House") to reduce overhead computations that result in poorer fixed-lag solving times. We expect the fixed-lag performance to improve in the coming months (Written Nov 2018). Please file issues if a deeper discussion is required.

      Additional Example

Work in progress; in the meantime see the following examples:

      https://github.com/JuliaRobotics/Caesar.jl/blob/master/examples/wheeled/racecar/apriltagandzed_slam.jl

      diff --git a/dev/examples/legacy_deffactors/index.html b/dev/examples/legacy_deffactors/index.html index 1e9768f21..e6b2bffcf 100644 --- a/dev/examples/legacy_deffactors/index.html +++ b/dev/examples/legacy_deffactors/index.html @@ -31,4 +31,4 @@ # the broadcast operators with automatically vectorize res = z .- (x1[1:2] .- x1[1:2]) return res -end +end diff --git a/dev/examples/parametric_solve/index.html b/dev/examples/parametric_solve/index.html index 80aef62a6..4ed796975 100644 --- a/dev/examples/parametric_solve/index.html +++ b/dev/examples/parametric_solve/index.html @@ -12,4 +12,4 @@ 0 1/var(s.range)] return meas, iΣ -end

      The Factor

      The factor is evaluated in a cost function using the Mahalanobis distance and the measurement should therefore match the residual returned.

      Optimization

IncrementalInference.solveGraphParametric! uses Optim.jl. Supported factors should have a gradient and Hessian available, and the solver therefore makes use of TwiceDifferentiable. Full control of Optim's setup is possible with keyword arguments.
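
A minimal sketch of invoking the parametric solver on an existing factor graph fg (the graph itself is assumed to have been built earlier):

# run the parametric (Gaussian) solve; Optim.jl performs the optimization internally
IncrementalInference.solveGraphParametric!(fg)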

      diff --git a/dev/examples/using_images/index.html b/dev/examples/using_images/index.html index 56093b2bb..0fcd11be5 100644 --- a/dev/examples/using_images/index.html +++ b/dev/examples/using_images/index.html @@ -28,4 +28,4 @@ end # free AprilTags library memory -freeDetector!(detector)

      DevNotes

      Related

      AprilTags.detect, PackedPose2AprilTag4Corners, generateCostAprilTagsPreimageCalib

      source

      Using Images.jl

The Caesar.jl ecosystem supports use of the JuliaImages/Images.jl suite of packages. Please see the documentation there for the wealth of features implemented.

      Handy Notes

      Converting between images and PNG format:

      bytes = Caesar.toFormat(format"PNG", img)
      Note

      More details to follow.

      Images enables ScatterAlign

      See point cloud alignment page for details on ScatterAlignPose

      diff --git a/dev/examples/using_pcl/index.html b/dev/examples/using_pcl/index.html index a3907493e..bf834edc9 100644 --- a/dev/examples/using_pcl/index.html +++ b/dev/examples/using_pcl/index.html @@ -36,4 +36,4 @@ X_fix = readdlm(io1) X_mov = readdlm(io2) -H, HX_mov, stat = Caesar._PCL.alignICP_Simple(X_fix, X_mov; verbose=true)

      Notes

      DevNotes

      See also: PointCloud

      source

      Visualizing Point Clouds

See work in progress, along with example code, on the page 3D Visualization.

      diff --git a/dev/examples/using_ros/index.html b/dev/examples/using_ros/index.html index 2f2f113f0..520ed627b 100644 --- a/dev/examples/using_ros/index.html +++ b/dev/examples/using_ros/index.html @@ -71,4 +71,4 @@ rmsg = Caesar._PCL.toROSPointCloud2(wPC2);

      More Tools for Real-Time

      See tools such as

      ST = manageSolveTree!(robotslam.dfg, robotslam.solveSettings, dbg=false)
      RoME.manageSolveTree!Function
      manageSolveTree!(dfg, mss; dbg, timinglog, limitfixeddown)
       

      Asynchronous solver manager that can run concurrently while other Tasks are modifying a common distributed factor graph object.

      Notes

      • When adding Variables and Factors, use solvable=0 to disable the new fragments until ready for inference.
        • e.g. addVariable!(fg, :x45, Pose2, solvable=0)
  • These parts of the factor graph can simply be activated for solving with setSolvable!(fg, :x45, 1)
      source
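
Putting the two bullets above together, the add-then-activate pattern looks roughly like this (the variable label :x45 is just an illustration):

# build new graph fragments without exposing them to the running solver yet
addVariable!(fg, :x45, Pose2, solvable=0)
# ... add the connecting factors, also with solvable=0 ...

# once the fragment is complete, allow the background solver to consume it
setSolvable!(fg, :x45, 1)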

for solving a factor graph while the middleware processes are modifying the graph. While documentation is being completed, see the code here: https://github.com/JuliaRobotics/RoME.jl/blob/a662d45e22ae4db2b6ee20410b00b75361294545/src/Slam.jl#L175-L288

      To stop or trigger a new solve in the SLAM manager you can just use either of these

      RoME.stopManageSolveTree!Function
      stopManageSolveTree!(slam)
       

      Stops a manageSolveTree! session. Usually up to the user to do so as a SLAM process comes to completion.

      Related

      manageSolveTree!

      source
      RoME.triggerSolve!Function
      triggerSolve!(slam)

      Trigger a factor graph solveTree!(slam.dfg,...) after clearing the solvable buffer slam.?? (assuming the manageSolveTree! task is already running).

      Notes

      • Used in combination with manageSolveTree!
      source
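
For example, assuming robotslam is the same SLAM container object that was passed to manageSolveTree! above:

triggerSolve!(robotslam)          # request a new solveTree! cycle after adding data
stopManageSolveTree!(robotslam)   # wind down the background solve manager when done
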
      Note

      Native code for consuming rosbags also includes methods:

      RosbagSubscriber, loop!, getROSPyMsgTimestamp, nanosecond2datetime
      Note

Additional notes about tricks that came up during development are kept in this wiki.

      Note

      See ongoing RobotOS.jl discussion on building a direct C++ interface and skipping PyCall.jl entirely: https://github.com/jdlangs/RobotOS.jl/issues/59

      diff --git a/dev/faq/index.html b/dev/faq/index.html index 0228eb521..9eeb69bce 100644 --- a/dev/faq/index.html +++ b/dev/faq/index.html @@ -13,4 +13,4 @@ ... # more stuff end -end # let block

See Stack overflow on let or the Julia docs page on scoping. Also note it is good practice to use locally scoped variables (i.e. inside a function) for performance reasons.

      Note

      This behaviour is going to change in Julia 1.5 back to what Julia 0.6 was in interactive cases, and therefore likely less of a problem in future versions. See Julia 1.5 Change Notes, ([#28789], [#33864]).

      How to Enable @debug Logging.jl

      https://stackoverflow.com/questions/53548681/how-to-enable-debugging-messages-in-juno-julia-editor
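
One stock-Julia way (not Caesar-specific) to switch on @debug output for a particular package is via the JULIA_DEBUG environment variable or a global logger:

# enable @debug messages from IncrementalInference for the current session
ENV["JULIA_DEBUG"] = "IncrementalInference"

# or install a logger that passes all Debug-level records
using Logging
global_logger(ConsoleLogger(stderr, Logging.Debug))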

      Julia Images.jl Axis Convention

Julia Images.jl follows the common ::Array column-major – i.e. vertical-major – index convention.
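
In other words, the first index runs down the rows (the vertical axis); a quick illustration:

using Images

img = rand(RGB{N0f8}, 480, 640)   # 480 rows high, 640 columns wide
px  = img[10, 20]                 # index as img[row, col], i.e. vertical index first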

      How does JSON-Schema work?

      Caesar.jl intends to follow json-schema.org, see step-by-step guide here.

      How to get Julia memory allocation points?

      See discourse discussion.

      Increase Linux Open File Limit?

If you see the error "Open Files Limit", please follow these instructions on your local system. This is likely to happen when debug code stores a large number of files in the general solution-specific logpath.

      diff --git a/dev/func_ref/index.html b/dev/func_ref/index.html index eafd01309..f27a9d6a8 100644 --- a/dev/func_ref/index.html +++ b/dev/func_ref/index.html @@ -177,4 +177,4 @@ logger )

Perform computations required for the upward message passing during belief propagation on the Bayes (Junction) tree. This function is usually called via remote_call for multiprocess dispatch.

      Notes

      DevNotes

      source
      IncrementalInference.resetVariableAllInitializations!Function
      resetVariableAllInitializations!(fgl)

      Reset initialization flag on all variables in ::AbstractDFG.

      Notes

      • Numerical values remain, but inference will overwrite since init flags are now false.
      source
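
Usage is simply (assuming fg is an existing factor graph object):

resetVariableAllInitializations!(fg)   # clear init flags; numerical values remain
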
diff --git a/dev/index.html b/dev/index.html index 6ef9587a6..3481ce206 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,4 +1,4 @@ Welcome · Caesar.jl

      Open Community

      Click here to go to the Caesar.jl Github repo:

      source

Caesar.jl is a community project to facilitate software technology development for localization and mapping from multiple sensor data, and multiple sessions or human / semi-autonomous / autonomous agents. This software is being developed with broadly Industry 4.0, Robotics, and Work of the Future in mind. Caesar.jl is an "umbrella package" to combine many other libraries from across the Julia package ecosystem.

      Commercial Products and Services

WhereWhen.ai's NavAbility products and services build upon, continually develop, and help administer the Caesar.jl suite of open-source libraries. Please reach out for any additional information (info@navability.io), or use the community links provided below.

      Various mapping and localization solutions are possible both for commercial and R&D. We recommend taking a look at:

      Follow this page to see the NavAbility Tutorials which are zero install and build around specific application examples.

      Origins and Ongoing Research

Caesar.jl was developed as a spin-out project from MIT's Computer Science and Artificial Intelligence Laboratory. See related works on the literature page. Many future directions are in the works, including fundamental research, implementation quality/performance, and system integration.

      Consider citing our work: CITATION.bib.

      Community, Issues, Comments, or Help

Post Issues or Discussions for community help. Maintainers can easily transfer Issues to the best suited package location if necessary. The history of changes and ongoing work can also be seen via the Milestone pages (click through the badges here). You can also get in touch via Slack.

      Note

      Please help improve this documentation–if something confuses you, chances are you're not alone. It's easy to do as you read along: just click on the "Edit on GitHub" link above, and then edit the files directly in your browser. Your changes will be vetted by developers before becoming permanent, so don't worry about whether you might say something wrong.

      JuliaRobotics Code of Conduct

      The Caesar.jl project is part of the JuliaRobotics organization and adheres to the JuliaRobotics code-of-conduct.

      Next Steps

      For installation steps, examples/tutorials, and concepts please refer to the following pages:

      diff --git a/dev/install_viz/index.html b/dev/install_viz/index.html index 622c112f6..26e30fbe3 100644 --- a/dev/install_viz/index.html +++ b/dev/install_viz/index.html @@ -1,2 +1,2 @@ -Installing Viz · Caesar.jl
      +Installing Viz · Caesar.jl
      diff --git a/dev/installation_environment/index.html b/dev/installation_environment/index.html index 46e559c0a..f862a5870 100644 --- a/dev/installation_environment/index.html +++ b/dev/installation_environment/index.html @@ -1,25 +1,18 @@ -Installation · Caesar.jl

      Install Caesar.jl

      Caesar.jl is one of the packages within the JuliaRobotics community, and adheres to the code-of-conduct.

      Possible System Dependencies

      The following (Linux) system packages are used by Caesar.jl:

      # Likely dependencies
      -sudo apt-get install hdf5-tools imagemagick
      -
      -# optional packages
      -sudo apt-get install graphviz xdot

      For ROS.org users, see at least one usage example at the ROS Direct page.

      Installing Julia Packages

      The philosophy around Julia packages are discussed at length in the Julia core documentation, where each Julia package relates to a git repository likely found on Github.com. Also see JuliaHub.com for dashboard-style representation of the broader Julia package ecosystem. To install a Julia package, simply open a julia REPL (equally the Julia REPL in VSCode) and type:

      ] # activate Pkg manager
      -(v1.6) pkg> add Caesar

      These are registered packages maintained by JuliaRegistries/General. Unregistered latest packages can also be installed with using only the Pkg.develop function:

      # Caesar is registered on JuliaRegistries/General
      -julia> ]
      -(v1.6) pkg> add Caesar
      -(v1.6) pkg> add Caesar#janes-awesome-fix-branch
      -(v1.6) pkg> add Caesar@v0.10.0
      -
      -# or alternatively your own local fork (just using old link as example)
      -(v1.6) pkg> add https://github.com/dehann/Caesar.jl

      See Pkg.jl for details and features regarding package management, development, version control, virtual environments and much more.

      Next Steps

      The sections hereafter describe Building, [Interacting], and Solving factor graphs. We also recommend reviewing the various examples available in the Examples section.

      New to Julia

      Installing the Julia Binary

      Although Julia (or JuliaPro) can be installed on a Linux computer using the apt package manager, we are striving for a fully local installation environment which is highly reproducible on a variety of platforms.

      The easiest method is–-via the terminal–-to download the desired version of Julia as a binary, extract, setup a symbolic link, and run:

      cd ~
      -mkdir -p .julia
      -cd .julia
      -wget https://julialang-s3.julialang.org/bin/linux/x64/1.6/julia-1.6.7-linux-x86_64.tar.gz
      -tar -xvf julia-1.6.7-linux-x86_64.tar.gz
      -rm julia-1.6.7-linux-x86_64.tar.gz
      -cd /usr/local/bin
      -sudo ln -s ~/.julia/julia-1.6.7/bin/julia julia
      Note

      Feel free to modify this setup as you see fit.

      This should allow any terminal or process on the computer to run the Julia REPL by type julia and testing with:

      VSCode IDE Environment

      VSCode IDE allows for interactive development of Julia code using the Julia Extension. After installing and running VSCode, install the Julia Language Support Extension:

      +Installation · Caesar.jl

      Install Caesar.jl

      Caesar.jl is one of the packages within the JuliaRobotics community, and adheres to the code-of-conduct.

      New to Julia

      Installing the Julia Binary

Although Julia (or JuliaPro) can be installed on Linux/Mac/Windows via a package manager, we prefer a highly reproducible and self-contained (local environment) install.

      The easiest method is–-via the terminal–-as described on the JuliaLang.org downloads page.

      Note

      Feel free to modify this setup as you see fit.

      VSCode IDE Environment

      VSCode IDE allows for interactive development of Julia code using the Julia Extension. After installing and running VSCode, install the Julia Language Support Extension:

      In VSCode, open the command pallette by pressing Ctrl + Shift + p. There are a wealth of tips and tricks on how to use VSCode. See this JuliaCon presentation for as a general introduction into 'piece-by-piece' code execution and much much more. Working in one of the Julia IDEs like VS Code or Juno should feel something like this (Gif borrowed from DiffEqFlux.jl):

      -

      There are a variety of useful packages in VSCode, such as GitLens, LiveShare, and Todo Browser as just a few highlights. These VSCode Extensions are independent of the already vast JuliaLang Package Ecosystem (see JuliaObserver.com).

      +

      There are a variety of useful packages in VSCode, such as GitLens, LiveShare, and Todo Browser as just a few highlights. These VSCode Extensions are independent of the already vast JuliaLang Package Ecosystem (see JuliaObserver.com).

      Note

      For ROS.org users, see at least one usage example at the ROS Direct page.

      Installing Julia Packages

      Vanilla Install

The philosophy around Julia packages is discussed at length in the Julia core documentation, where each Julia package relates to a git repository likely found on Github.com. Also see JuliaHub.com for a dashboard-style representation of the broader Julia package ecosystem. To install a Julia package, simply start a Julia REPL (equally the Julia REPL in VSCode) and then type:

      julia> ] # activate Pkg manager
      +(v___) pkg> add Caesar

      Version Control, Branches

These are registered packages maintained by JuliaRegistries/General. The latest unregistered or branch-specific versions can also be installed directly, or by using the Pkg.develop function:

      # Caesar is registered on JuliaRegistries/General
      +julia> ]
      +(v___) pkg> add Caesar
      +(v___) pkg> add Caesar#janes-awesome-fix-branch
      +(v___) pkg> add Caesar@v0.16
      +
      +# or alternatively your own local fork (just using old link as example)
      +(v___) pkg> add https://github.com/dehann/Caesar.jl

      Virtual Environments

      Note

Julia has native support for virtual environments and exact package manifests. See the Pkg.jl docs for more details and features regarding package management, development, version control, and virtual environments.
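
A brief sketch of the standard Pkg workflow for a project-specific environment (the folder name myproject is hypothetical):

julia> ]                           # enter the package manager
(v___) pkg> activate myproject     # create or switch to a local environment
(myproject) pkg> add Caesar        # dependency recorded in myproject/Project.toml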

      Next Steps

The sections hereafter describe Building, Interacting with, and Solving factor graphs. We also recommend reviewing the various examples available in the Examples section.

      Possible System Dependencies

The following (Linux) system packages have been required on some systems in the past, but likely do not have to be installed system-wide on newer versions of Julia:

      # Likely dependencies
      +sudo apt-get install hdf5-tools imagemagick
      +
      +# optional packages
      +sudo apt-get install graphviz xdot
      diff --git a/dev/introduction/index.html b/dev/introduction/index.html index 1dfb2f4c6..4daf4082a 100644 --- a/dev/introduction/index.html +++ b/dev/introduction/index.html @@ -1,2 +1,2 @@ -Introduction · Caesar.jl

      Introduction

Caesar is an open-source robotic software stack aimed at localization and mapping for robotics, using non-Gaussian graphical model state-estimation techniques. The factor graph method is well suited to combining heterogeneous and ambiguous sensor data streams. The focus is predominantly on geometric/spatial/semantic estimation tasks related to simultaneous localization and mapping (SLAM). The software is also highly extensible and well suited to a variety of estimation/filtering-type tasks — especially in non-Gaussian/multimodal settings. Check out a brief description on why non-Gaussian / multi-modal data processing needs arise.

      A Few Highlights

      Caesar.jl addresses numerous issues that arise in prior SLAM solutions, including:

      • Distributed Factor Graph representation deeply-coupled with an on-Manifold probabilistic algebra language;
      • Localization using different algorithms:
        • MM-iSAMv2
  • Parametric methods, including regular Gaussian or Max-Mixtures.
        • Other multi-parametric and non-Gaussian algorithms are presently being implemented.
      • Solving under-defined systems,
      • Inference with non-Gaussian measurements,
      • Standard features for natively handling ambiguous data association and multi-hypotheses,
        • Native multi-modal (hypothesis) representation in the factor-graph, see Data Association and Hypotheses:
        • Multi-modal and non-parametric representation of constraints;
        • Gaussian distributions are but one of the many representations of measurement error;
      • Simplifying bespoke factor development,
      • Centralized (or peer-to-peer decentralized) factor-graph persistence,
        • i.e. Federated multi-session/agent reduction.
      • Multi-CPU inference.
      • Out-of-library extendable for Custom New Variables and Factors;
      • Natively supports legacy Gaussian parametric and max-mixtures solutions;
      • Local in-memory solving on the device as well as database-driven centralized solving (micro-service architecture);
      • Natively support Clique Recycling (i.e. fixed-lag out-marginalization) for continuous operation as well as off-line batch solving, see more at Using Incremental Updates (Clique Recycling I);
      • Natively supports Dead Reckon Tethering;
      • Natively supports Federated multi-session/agent solving;
      • Native support for Entry=>Data blobs for storing large format data.
      • Middleware support, e.g. see the ROS Integration Page.
      diff --git a/dev/principles/approxConvDensities/index.html b/dev/principles/approxConvDensities/index.html index a0e030e5f..7010b9aa2 100644 --- a/dev/principles/approxConvDensities/index.html +++ b/dev/principles/approxConvDensities/index.html @@ -37,4 +37,4 @@ approxDeconv(fcto, ccw; N, measurement, retries)

      Inverse solve of predicted noise value and returns tuple of (newly calculated-predicted, and known measurements) values.

      Notes

      DevNotes

      Related

      approxDeconv, _solveCCWNumeric!

      source
      approxDeconv(dfg, fctsym; ...)
       approxDeconv(dfg, fctsym, solveKey; retries)

      Generalized deconvolution to find the predicted measurement values of the factor fctsym in dfg. Inverse solve of predicted noise value and returns tuple of (newly predicted, and known "measured" noise) values.

      Notes

      Related

      approxConvBelief, deconvSolveKey

      source

This feature is not yet as feature rich as the approxConvBelief function, and also requires further work to improve the consistency of the calculation – but it nonetheless exists and is useful in many applications.
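
A hedged usage sketch of the deconvolution above (the factor label is hypothetical and assumed to exist in fg):

pred, meas = approxDeconv(fg, :x0x1f1)   # (newly predicted, known "measured") values for the factor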

      diff --git a/dev/principles/bayestreePrinciples/index.html b/dev/principles/bayestreePrinciples/index.html index 14317e186..fc0a5836a 100644 --- a/dev/principles/bayestreePrinciples/index.html +++ b/dev/principles/bayestreePrinciples/index.html @@ -47,4 +47,4 @@ spyCliqMat(tree, :x1) # provided by IncrementalInference #or embedded in graphviz -drawTree(tree, imgs=true, show=true)

      Clique State Machine

The mmisam solver is based on a state machine design to handle the inter- and intra-clique operations during a variety of situations. Use of the clique state machine (CSM) makes debugging, development, verification, and modification of the algorithm much easier. Contact us for any support regarding modifications to the default algorithm. For pre-docs on working with CSM, please see IIF #443.

      STATUS of a Clique

CSM currently uses the following statuses for each of the cliques during the inference process.

      [:initialized;:upsolved;:marginalized;:downsolved;:uprecycled]

      Bayes Tree Legend (from IIF)

      The color legend for the refactored CSM from issue.

      diff --git a/dev/principles/filterCorrespondence/index.html b/dev/principles/filterCorrespondence/index.html index af4191532..0fcf35813 100644 --- a/dev/principles/filterCorrespondence/index.html +++ b/dev/principles/filterCorrespondence/index.html @@ -6,4 +6,4 @@

Alternatively, indirect measurements of the state variables should be modeled with the most sensible function

      \[y = h(\theta, \eta)\\ \delta_j(\theta_j, \eta_j) = \ominus h_j(\theta_j, \eta_j) \oplus y_j,\]

      which approximates the underlying (on-manifold) stochastics and physics of the process at hand. The measurement models can be used to project belief through a measurement function, and should be recognized as a standard representation for a Hidden Markov Model (HMM):

      Beyond Filtering

      Consider a multi-sensory system along with data transmission delays, variable sampling rates, etc.; when designing a filtering system to track one or multiple targets, it quickly becomes difficult to augment state vectors with the required state and measurement histories. In contrast, the factor graph as a language allows for heterogeneous data streams to be combined in a common inference framework, and is discussed further in the building distributed factor graphs section.

      Note

Factor graphs are constructed along with the evolution of time, which allows the mmisam inference algorithm to resolve variable marginal estimates both forward and backward in time. Conventional filtering only allows for forward-backward "smoothing" as two separate processes. When inferring over a factor graph, all variables and factors are considered simultaneously according to the topological connectivity, irrespective of when and where the measurements were made or communicated – as long as the factor graph (probabilistic model) captures the stochastics of the situation with sufficient accuracy.

      TODO: Multi-modal (belief) vs. multi-hypothesis – see thesis work on multimodal solutions in the mean time.

      Note

      Mmisam allows for parametric, non-parametric, or intensity noise models which can be incorporated into any differentiable residual function.

      Anecdotal Example (EKF-SLAM / MSC-KF)

      WIP: Explain how this method is similar to EKF-SLAM and MSC-KF...

      diff --git a/dev/principles/initializingOnBayesTree/index.html b/dev/principles/initializingOnBayesTree/index.html index 80171f09b..bccfa4e0a 100644 --- a/dev/principles/initializingOnBayesTree/index.html +++ b/dev/principles/initializingOnBayesTree/index.html @@ -1,4 +1,4 @@ Advanced Bayes Tree Topics · Caesar.jl

      Advanced Topics on Bayes Tree

      Definitions

      Squashing or collapsing the Bayes tree back into a 'flat' Bayes net, by chain rule:

\[p(x,y) = p(x|y)p(y) = p(y|x)p(x) \\ p(x,y,z) = p(x|y,z)p(y,z) = p(x,y|z)p(z) = p(x|y,z)p(y|z)p(z) \\ p(x,y,z) = p(x|y,z)p(y)p(z) \, \text{iff } y \text{ is independent of } z \text{, i.e. } p(y|z)=p(y)\]

Are cliques in the Bayes (Junction) tree densely connected?

Yes and no. From the chordal Bayes net's perspective (obtained through the elimination game in order to build the clique tree), the nodes of the Bayes tree are indeed fully connected subgraphs (they are called cliques after all!). From the perspective of the subgraph of the original factor graph induced by the clique's variables, cliques need not be fully connected, since we are assuming the factor graph is sparse, and that no new information can be created out of nothing – hence each clique must be sparse. That said, the potential exists for the inference within a clique to become densely connected (experience full "fill-in"). See the paper on square-root-SAM, where the dense covariance matrix of a Kalman filter (EKF-SLAM) is actually related to the inverse square root (rectangular) matrix, whose structure is equivalent to the clique subgraph adjacency matrix.

Also remember that the intermediate Bayes net (which has densely connected cliques) hides the underlying tree structure – think of the Bayes net as looking at the tree from on top or below, thereby encoding the dense connectivity in the structure of the tree itself. All information below any clique of the tree is encoded in the upward marginal belief messages at that point (i.e. the densely connected aspects pertain to what lies lower down in the tree).

      LU/QR vs. Belief Propagation

      LU/QR is a special case (Parametric/Linear) of more general belief propagation. The story though is more intricate, where QR/LU assume that product-factors can be formed through the chain rule – using congruency – it is not that straight forward with general beliefs. In the general case we are almost forced to use belief propagation, which in turn implies special care is needed to describe the relationship between sparse factor graph fragments in cliques on the tree, and the more densely connected structure of the Bayes Net.

      Bayes Tree vs Bayes Net

      The Bayes tree is a purely symbolic structure – i.e. special grouping of factors that all come from the factor graph joint product (product of independently sampled likelihood/conditional models):

      \[[\Theta | Z] \propto \prod_i \, [ Z_i=z_i | \Theta_i ]\]

A sparse factor graph problem can be squashed into a smaller dense problem of product-factor conditionals (from variable elimination). Therefore each product-factor (aka "smart factor" in other uses of the language) represents both the factors as well as the sequencing of cliques in that branch. This process repeats recursively from the root down to the leaves. The leaves of the tree have no further reduced product factors condensing child cliques below, and therefore sparse factor fragments can be computed to start the upward belief propagation process. More importantly, as belief propagation progresses up the tree, upward belief messages (on clique separators) capture the same structure as the densely connected Bayes net, but each clique in the Bayes tree still only contains sparse fragments from the original factor graph. The structure of the tree (combined parent-child relationships) encodes the same information as the product-factor conditionals!

      Initialization on the Tree

It is more challenging but possible to initialize all variables in a factor graph through belief propagation on the Bayes tree.

As a thought experiment: wouldn't it be awesome if we could compile the upsolve as a symbolic process only, and only assign numerical values once during a single downsolve procedure? The origin of this idea comes from the realization that a complete upsolve on the Bayes (Junction) tree is very nearly the same thing as finding good numerical initialization values for the factor graph. If the up-init-solve can be performed as a purely symbolic process, it would greatly simplify numerical computations by deferring them to the down solve alone.

      Trying to do initialization for real, we might want to replace up-init-symbolic operations with numerical equivalents. Either way, it would be worth knowing what the equivalent numerical operations of a full up-init-solve of an uninitialized factor graph would look like.

In general, if a clique cannot be initialized based on information from lower down in that branch of the tree, more information is needed from the parent. In the Gaussian (more accurately the congruent factor) case, all information lower down in the branch – i.e. the relationships between variables in the parent – can be summarized by a new conditional product-factor that is computed with the probabilistic chain rule. To restate, the process of squashing the Bayes tree branch back down into a Bayes net is effectively the chain rule process used in variable elimination.

      Note

Question: are cascading up and down solves required if you do not use eliminated factor conditionals in parent cliques?

      Gaussian-only special case

      Elimination of variables and factors using chain rule reduction is a special case of belief propagation, and thus far only the reduction of congruent beliefs (such as Gaussian) is known.

      These computations can be parallelized depending on the conditional independence structure of the Bayes tree – separate branches are effectively separate chain rule instances. This is precisely the same process exploited by multi-frontal QR matrix factorization.

      On the down solve the conditionals–-from eliminated chains of previously eliminated variables and factors–-can be used for inference directly in the parent.

See node x1 to x3 in IncrementalInference issue 464. It does not branch or provide additional prior information, so it is collapsed into one factor between x1 and x3, solved in the root, and the individual variable can then be solved by inference.

      Note

Question: what does the Jacobian in the Gaussian-only case mean with regard to a symbolic upsolve?

      +p(x,y,z) = p(x|y,z)p(y)p(z) \, \text{iff y is independent of z,} \, also p(y|z)=p(y)\]

      Are cliques in the Bayes (Junction) tree densly connected?

      Yes and no. From the chordal Bayes net's perspective (obtained through the elimination game in order to build the clique tree), the nodes of the Bayes tree are indeed fully connected subgraphs (they are called cliques after all!). From the perspective of the subgraph of the original factor graph induced by the clique's variables, cliques need not be fully connected, since we are assuming the factor graph as sparse, and that no new information can be created out of nothing–-hence each clique must be sparse. That said, the potential exists for the inference within a clique to become densly connected (experience full "fill-in"). See the paper on square-root-SAM, where the connection between dense covariance matrix of a Kalman filter (EKF-SLAM) is actually related to the inverse square root (rectangular) matrix which structure equivalent to the clique subgraph adjacency matrix.

      Also remember that the intermediate Bayes net (which has densly connected cliques) hides the underlying tree structure – think of the Bayes net as looking at the tree from on top or below, thereby encoding the dense connectivity in the structure of the tree itself. All information below any clique of the tree is encoded in the upward marginal belief messages at that point (i.e. the densly connected aspects pertained lower down in the tree).

      LU/QR vs. Belief Propagation

      LU/QR is a special case (Parametric/Linear) of more general belief propagation. The story though is more intricate, where QR/LU assume that product-factors can be formed through the chain rule – using congruency – it is not that straight forward with general beliefs. In the general case we are almost forced to use belief propagation, which in turn implies special care is needed to describe the relationship between sparse factor graph fragments in cliques on the tree, and the more densely connected structure of the Bayes Net.

      Bayes Tree vs Bayes Net

      The Bayes tree is a purely symbolic structure – i.e. special grouping of factors that all come from the factor graph joint product (product of independently sampled likelihood/conditional models):

      \[[\Theta | Z] \propto \prod_i \, [ Z_i=z_i | \Theta_i ]\]

      A sparse factor graph problem can be squashed into smaller dense problem of product-factor conditionals (from variable elimination). Therefore each product-factor (aka "smart factor" in other uses of the language) represent both the factors as well as the sequencing of cliques in that branch. This process repeats recursively from the root down to the leaves. The leaves of the tree have no further reduced product factors condensing child cliques below, and therefore sparse factor fragments can be computed to start the upward belief propagation process. More importantly, as belief propagation progresses up the tree, upward belief messages (on clique separators) capture the same structure as the densely connected Bayes net but each clique in the Bayes tree still only contains sparse fragments from the original factor graph. The structure of the tree (combined parent-child relationships) encodes the same information as the product-factor conditionals!

      Initialization on the Tree

      It more challenging but possible to initialize all variables in a factor graph through belief propagation on the Bayes tree.

      As a thought experiment: Wouldn't it be awesome if we could compile the upsolve as a symbolic process only, and only assign numerical values once during a single downsolve procedure. The origin of this idea comes from the realization that a complete upsolve on the Bayes (Junction) tree is very nearly the same thing finding good numerical initialization values for the factor graph. If the up-init-solve can be performed as a purely symbolic process, it would greatly simplify numerical computations by deferring them to the down solve alone.

      Trying to do initialization for real, we might want to replace up-init-symbolic operations with numerical equivalents. Either way, it would be worth knowing what the equivalent numerical operations of a full up-init-solve of an uninitialized factor graph would look like.

      In general, if a clique can not be initialized based on information from lower down in that branch of the tree; more information is need from the parent. In the Gaussian (more accurately the congruent factor) case, all information lower down in the branch–-i.e. the relationships between variables in parent–-can be summarized by a new conditional product-factor that is computed with the probabilistic chain rule. To restate, the process of squashing the Bayes tree branch back down into a Bayes net, is effectively the the chain rule process used in variable elimination.

      Note

Question: are cascading up and down solves required if you do not use eliminated factor conditionals in parent cliques?

      Gaussian-only special case

      Elimination of variables and factors using chain rule reduction is a special case of belief propagation, and thus far only the reduction of congruent beliefs (such as Gaussian) is known.

      These computations can be parallelized depending on the conditional independence structure of the Bayes tree – separate branches are effectively separate chain rule instances. This is precisely the same process exploited by multi-frontal QR matrix factorization.

On the down solve, the conditionals – from the chains of previously eliminated variables and factors – can be used for inference directly in the parent.
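
As a toy illustration of this Gaussian-only equivalence between chain-rule elimination and QR, the sketch below uses only Julia's standard LinearAlgebra; the variable ordering and measurement values are made up for the example and are not Caesar.jl API:

using LinearAlgebra

# whitened measurement Jacobian A and right-hand side b for a tiny linear chain
# (columns ordered x1, x2, x3 -- the elimination order)
A = [ 1.0  0.0  0.0;    # prior on x1
     -1.0  1.0  0.0;    # odometry x1 -> x2
      0.0 -1.0  1.0 ]   # odometry x2 -> x3
b = [ 0.0, 10.0, 10.0 ]

# QR factorization performs the chain-rule elimination: the rows of R encode the
# conditionals p(x1 | x2, x3), p(x2 | x3), p(x3) along one tree branch
F = qr(A)
R = F.R
d = F.Q' * b

# the "down solve" is back-substitution through those conditionals
x = R \ d
@show x    # ≈ [0.0, 10.0, 20.0]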

See nodes x1 to x3 in IncrementalInference issue 464. That chain does not branch or provide additional prior information, so it is collapsed into one factor between x1 and x3, solved in the root, and the intermediate variable can then be recovered by inference.

      Note

Question: what does the Jacobian in the Gaussian-only case mean with regard to a symbolic upsolve?

      diff --git a/dev/principles/interm_dynpose/index.html b/dev/principles/interm_dynpose/index.html index 0c5e74019..ff1d0881e 100644 --- a/dev/principles/interm_dynpose/index.html +++ b/dev/principles/interm_dynpose/index.html @@ -106,4 +106,4 @@ @show x1 = getKDEMax(getBelief(getVariable(fg, :x1))) @show x2 = getKDEMax(getBelief(getVariable(fg, :x2)))

      Producing output:

      x0 = getKDEMax(getBelief(getVariable(fg, :x0))) = [0.101503, -0.0273216, 9.86718, 9.91146]
       x1 = getKDEMax(getBelief(getVariable(fg, :x1))) = [10.0087, 9.95139, 10.0622, 10.0195]
      -x2 = getKDEMax(getBelief(getVariable(fg, :x2))) = [19.9381, 19.9791, 10.0056, 9.92442]

      IncrementalInference.jl Defining Factors (Future API)

We would like to remove the idx indexing from the residual function calls, since that is an unnecessary burden on the user. Instead, the package will use views and SubArray types to simplify the interface. Please contact the author for more details (8 June 2018).

      Contributions

      Thanks to mc2922 for raising the catalyst issue and conversations that followed from JuliaRobotics/RoME.jl#60.

      +x2 = getKDEMax(getBelief(getVariable(fg, :x2))) = [19.9381, 19.9791, 10.0056, 9.92442]

      diff --git a/dev/principles/multiplyingDensities/index.html b/dev/principles/multiplyingDensities/index.html index a463a10de..f038e0a04 100644 --- a/dev/principles/multiplyingDensities/index.html +++ b/dev/principles/multiplyingDensities/index.html @@ -86,4 +86,4 @@ prior = Factor('Prior', ['x0'], Normal(np.zeros(1,1)-3.0, np.eye(1)) ) e.AddFactor(prior) prior = Factor('Prior', ['x0'], Normal(np.zeros(1,1)+3.0, np.eye(1)) ) - e.AddFactor(prior) + e.AddFactor(prior) diff --git a/dev/refs/literature/index.html b/dev/refs/literature/index.html index 91ee122be..76294ce0b 100644 --- a/dev/refs/literature/index.html +++ b/dev/refs/literature/index.html @@ -1,2 +1,2 @@ -References · Caesar.jl

      Literature

      Newly created page to list related references and additional literature pertaining to this package.

      Direct References

      [1.1] Fourie, D., Leonard, J., Kaess, M.: "A Nonparametric Belief Solution to the Bayes Tree" IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), (2016).

      [1.2] Fourie, D.: "Multi-modal and Inertial Sensor Solutions for Navigation-type Factor Graphs", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2017.

      [1.3] Fourie, D., Claassens, S., Pillai, S., Mata, R., Leonard, J.: "SLAMinDB: Centralized graph databases for mobile robotics", IEEE Intl. Conf. on Robotics and Automation (ICRA), Singapore, 2017.

      [1.4] Cheung, M., Fourie, D., Rypkema, N., Vaz Teixeira, P., Schmidt, H., and Leonard, J.: "Non-Gaussian SLAM utilizing Synthetic Aperture Sonar", Intl. Conf. On Robotics and Automation (ICRA), IEEE, Montreal, 2019.

      [1.5] Doherty, K., Fourie, D., Leonard, J.: "Multimodal Semantic SLAM with Probabilistic Data Association", Intl. Conf. On Robotics and Automation (ICRA), IEEE, Montreal, 2019.

[1.6] Fourie, D., Vaz Teixeira, P., Leonard, J.: "Non-parametric Mixed-Manifold Products using Multiscale Kernel Densities", IEEE Intl. Conf. on Intelligent Robots and Systems (IROS), (2019).

      [1.7] Teixeira, P.N.V., Fourie, D., Kaess, M. and Leonard, J.J., 2019, September. "Dense, sonar-based reconstruction of underwater scenes". In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 8060-8066). IEEE.

      [1.8] Fourie, D., Leonard, J.: "Inertial Odometry with Retroactive Sensor Calibration", 2015-2019.

      [1.9] Koolen, T. and Deits, R., 2019. Julia for robotics: Simulation and real-time control in a high-level programming language. IEEE, Intl. Conference on Robotics and Automation, ICRA (2019).

      [1.10] Fourie, D., Espinoza, A. T., Kaess, M., and Leonard, J. J., “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, Oulu, Finland, Springer Publishing.

      [1.11] Fourie, D., Rypkema, N., Claassens, S., Vaz Teixeira, P., Fischell, E., and Leonard, J.J., "Towards Real-Time Non-Gaussian SLAM for Underdetermined Navigation", in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2020, Las Vegas, USA.

      [1.12] J. Terblanche, S. Claassens and D. Fourie, "Multimodal Navigation-Affordance Matching for SLAM," in IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7728-7735, Oct. 2021, doi: 10.1109/LRA.2021.3098788. Also presented at, IEEE 17th International Conference on Automation Science and Engineering, August 2021, Lyon, France.

      Important References

      [2.1] Kaess, Michael, et al. "iSAM2: Incremental smoothing and mapping using the Bayes tree" The International Journal of Robotics Research (2011): 0278364911430419.

      [2.2] Kaess, Michael, et al. "The Bayes tree: An algorithmic foundation for probabilistic robot mapping." Algorithmic Foundations of Robotics IX. Springer, Berlin, Heidelberg, 2010. 157-173.

      [2.3] Kschischang, Frank R., Brendan J. Frey, and Hans-Andrea Loeliger. "Factor graphs and the sum-product algorithm." IEEE Transactions on information theory 47.2 (2001): 498-519.

      [2.4] Dellaert, Frank, and Michael Kaess. "Factor graphs for robot perception." Foundations and Trends® in Robotics 6.1-2 (2017): 1-139.

      [2.5] Sudderth, E.B., Ihler, A.T., Isard, M., Freeman, W.T. and Willsky, A.S., 2010. "Nonparametric belief propagation." Communications of the ACM, 53(10), pp.95-103

      [2.6] Paskin, Mark A. "Thin junction tree filters for simultaneous localization and mapping." in Int. Joint Conf. on Artificial Intelligence. 2003.

[2.7] Farrell, J., and Barth, M.: "The Global Positioning System and Inertial Navigation." Vol. 61. New York: McGraw-Hill, 1999.

      [2.8] Zarchan, Paul, and Howard Musoff, eds. Fundamentals of Kalman filtering: a practical approach. American Institute of Aeronautics and Astronautics, Inc., 2013.

[2.9] Rypkema, N. R.: "Underwater & Out of Sight: Towards Ubiquity in Underwater Robotics", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2019.

      [2.10] Vaz Teixeira, P.: "Dense, Sonar-based Reconstruction of Underwater Scenes", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2019.

      [2.11] Hanebeck, Uwe D. "FLUX: Progressive State Estimation Based on Zakai-type Distributed Ordinary Differential Equations." arXiv preprint arXiv:1808.02825 (2018).

      [2.12] Muandet, Krikamol, et al. "Kernel mean embedding of distributions: A review and beyond." Foundations and Trends® in Machine Learning 10.1-2 (2017): 1-141.

      [2.13] Hsiao, M. and Kaess, M., 2019, May. "MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree". In 2019 International Conference on Robotics and Automation (ICRA) (pp. 1274-1280). IEEE.

      [2.14] Arnborg, S., Corneil, D.G. and Proskurowski, A., 1987. "Complexity of finding embeddings in a k-tree". SIAM Journal on Algebraic Discrete Methods, 8(2), pp.277-284.

      [2.15a] Sola, J., Deray, J. and Atchuthan, D., 2018. "A micro Lie theory for state estimation in robotics". arXiv preprint arXiv:1812.01537, and tech report. And cheatsheet w/ suspected typos.

[2.15b] Dellaert, F., 2012. Lie Groups for Beginners.

[2.15c] Eade, E., 2017. Lie Groups for 2D and 3D Transformations.

      [2.15d] Chirikjian, G.S., 2015. Partial bi-invariance of SE(3) metrics. Journal of Computing and Information Science in Engineering, 15(1).

      [2.15e] Pennec, X. and Lorenzi, M., 2020. Beyond Riemannian geometry: The affine connection setting for transformation groups. In Riemannian Geometric Statistics in Medical Image Analysis (pp. 169-229). Academic Press.

      [2.15f] Žefran, M., Kumar, V. and Croke, C., 1996, August. Choice of Riemannian metrics for rigid body kinematics. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 97584, p. V02BT02A030). American Society of Mechanical Engineers.

      [2.15g] Chirikjian, G.S. and Zhou, S., 1998. Metrics on motion and deformation of solid models.

      [2.16] Kaess, M. and Dellaert, F., 2009. Covariance recovery from a square root information matrix for data association. Robotics and autonomous systems, 57(12), pp.1198-1210.

      [2.17] Bishop, C.M., 2006. Pattern recognition and machine learning. New York: Springer. ISBN 978-0-387-31073-2.

      Additional References

      [3.1] Duits, Remco, Erik J. Bekkers, and Alexey Mashtakov. "Fourier Transform on the Homogeneous Space of 3D Positions and Orientations for Exact Solutions to Linear Parabolic and (Hypo-) Elliptic PDEs". arXiv preprint arXiv:1811.00363 (2018).

[3.2] Mohamed, S., Rosca, M., Figurnov, M. and Mnih, A., 2019. "Monte Carlo gradient estimation in machine learning". arXiv preprint arXiv:1906.10652.

      [3.3] Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., Skinner, D., Ramadhan, A., Edelman, A., "Universal Differential Equations for Scientific Machine Learning", Archive online, DOI: 2001.04385.

      [3.4] Boumal, Nicolas. An introduction to optimization on smooth manifolds. Available online, May, 2020.

[3.5] Relationship between the Hessian and Covariance Matrix for Gaussian Random Variables, John Wiley & Sons.

      [3.6] Pennec, Xavier. Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements, HAL Archive, 2011, Inria, France.

[3.7] Weber, P., Medina-Oliva, G., Simon, C., et al., 2012. Overview on Bayesian networks applications for dependability risk analysis and maintenance areas. Appl. Artif. Intell. 25 (4), 671-682. https://doi.org/10.1016/j.engappai.2010.06.002. Preprint PDF.

[3.8] Wang, H.R., Ye, L.T., Xu, X.Y., et al., 2010. Bayesian networks precipitation model based on hidden Markov analysis and its application. Sci. China Technol. Sci. 53 (2), 539-547. https://doi.org/10.1007/s11431-010-0034-3.

      [3.9] Mangelson, J.G., Dominic, D., Eustice, R.M. and Vasudevan, R., 2018, May. Pairwise consistent measurement set maximization for robust multi-robot map merging. In 2018 IEEE international conference on robotics and automation (ICRA) (pp. 2916-2923). IEEE.

      [3.10] Bourgeois, F. and Lassalle, J.C., 1971. An extension of the Munkres algorithm for the assignment problem to rectangular matrices. Communications of the ACM, 14(12), pp.802-804.

      Signal Processing (Beamforming and Channel Deconvolution)

      [4.1] Van Trees, H.L., 2004. Optimum array processing: Part IV of detection, estimation, and modulation theory. John Wiley & Sons.

      [4.2a] Dowling, D.R., 2013. "Acoustic Blind Deconvolution and Unconventional Nonlinear Beamforming in Shallow Ocean Environments". MICHIGAN UNIV ANN ARBOR DEPT OF MECHANICAL ENGINEERING.

      [4.2b] Hossein Abadi, S., 2013. "Blind deconvolution in multipath environments and extensions to remote source localization", paper, thesis.

      Contact or Tactile

      [5.1] Suresh, S., Bauza, M., Yu, K.T., Mangelson, J.G., Rodriguez, A. and Kaess, M., 2021, May. Tactile SLAM: Real-time inference of shape and pose from planar pushing. In 2021 IEEE International Conference on Robotics and Automation (ICRA) (pp. 11322-11328). IEEE.

      +References · Caesar.jl

      diff --git a/dev/search_index.js b/dev/search_index.js index 778c01b0b..dc41b7ca8 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"concepts/why_nongaussian/#why_nongaussian","page":"Gaussian vs. Non-Gaussian","title":"Why/Where does non-Gaussian data come from?","text":"","category":"section"},{"location":"concepts/why_nongaussian/","page":"Gaussian vs. Non-Gaussian","title":"Gaussian vs. Non-Gaussian","text":"Gaussian error models in measurement or data cues will only be Gaussian (normally distributed) if all physics/decisions/systematic-errors/calibration/etc. has a correct algebraic model in all circumstances. Caesar.jl and MM-iSAMv2 is heavily focussed on state-estimation from a plethora of heterogenous data that may not yet have perfect algebraic models. Four major categories of non-Gaussian errors have thus far been considered:","category":"page"},{"location":"concepts/why_nongaussian/","page":"Gaussian vs. Non-Gaussian","title":"Gaussian vs. Non-Gaussian","text":"Uncertain decisions (a.k.a. data association), such as a robot trying to decide if a navigation loop-closure can be deduced from a repeat observation of a similar object or measurement from current and past data. These issues are commonly also referred to as multi-hypothesis.\nUnderdetermined or underdefined systems where there are more variables than constraining measurements to fully define the system as a single mode–-a.k.a solution ambiguity. For example, in 2D consider two range measurements resulting in two possible locations through trilateration.\nNonlinearity. For example in 2D, consider a Pose2 odometry where the orientation is uncertain: The resulting belief of where a next pose might be (convolution with odometry factor) results in a banana shape curve, even though the entire process is driven by assumed Gaussian belief.\nPhysics of the measurement process. Many measurement processes exhibit non-Gaussian behaviour. For example, acoustic/radio time-of-flight measurements, using either pulse-train or matched filtering, result in an \"energy intensity\" over time/distance of what the range to a scattering-target/source might be–i.e. highly non-Gaussian.","category":"page"},{"location":"concepts/why_nongaussian/#Next-Steps","page":"Gaussian vs. Non-Gaussian","title":"Next Steps","text":"","category":"section"},{"location":"concepts/why_nongaussian/","page":"Gaussian vs. Non-Gaussian","title":"Gaussian vs. Non-Gaussian","text":"Quick links to related pages:","category":"page"},{"location":"concepts/why_nongaussian/","page":"Gaussian vs. Non-Gaussian","title":"Gaussian vs. Non-Gaussian","text":"Pages = [\n \"installation_environment.md\"\n \"concepts/concepts.md\"\n \"concepts/building_graphs.md\"\n \"concepts/2d_plotting.md\"\n]\nDepth = 1","category":"page"},{"location":"dev/known_issues/#Known-Issues","page":"Known Issue List","title":"Known Issues","text":"","category":"section"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"This page is used to list known issues:","category":"page"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"Arena.jl is fairly behind on a number of updates and deprecations. Fixes for this are planned 2021Q2.\nRoMEPlotting.jl main features like plotSLAM2D are working, but some of the other features are not fully up to date with recent changes in upstream packages. 
This too will be updated around Summer 2021.","category":"page"},{"location":"dev/known_issues/#Features-To-Be-Restored","page":"Known Issue List","title":"Features To Be Restored","text":"","category":"section"},{"location":"dev/known_issues/#Install-3D-Visualization-Utils-(e.g.-Arena.jl)","page":"Known Issue List","title":"Install 3D Visualization Utils (e.g. Arena.jl)","text":"","category":"section"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"3D Visualizations are provided by Arena.jl as well as development package Amphitheater.jl. Please follow instructions on the Visualizations page for a variety of 3D utilities.","category":"page"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"note: Note\nArena.jl and Amphitheater.jl are currently being refactored as part of the broader DistributedFactorGraph migration, the features are are in beta stage (1Q2020).","category":"page"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"Install the latest master branch version with","category":"page"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"(v1.5) pkg> add Arena#master","category":"page"},{"location":"dev/known_issues/#Install-\"Just-the-ZMQ/ROS-Runtime-Solver\"-(Linux)","page":"Known Issue List","title":"Install \"Just the ZMQ/ROS Runtime Solver\" (Linux)","text":"","category":"section"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"Work in progress (see issue #278).","category":"page"},{"location":"concepts/compile_binary/#compile_binaries","page":"Compile Binaries","title":"Compile Binaries","text":"","category":"section"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"Broader Julia ecosystem work on compiling shared libraries and images is hosted by PackageCompiler.jl, see documentation there.","category":"page"},{"location":"concepts/compile_binary/#Compiling-RoME.so","page":"Compile Binaries","title":"Compiling RoME.so","text":"","category":"section"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"A default RoME system image script can be used compileRoME/compileRoMESysimage.jl to reduce the \"time-to-first-plot\".","category":"page"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"To use RoME with the newly created sysimage, start julia with:","category":"page"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"julia -O3 -J ~/.julia/dev/RoME/compileRoME/RoMESysimage.so","category":"page"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"Which should dramatically cut down on the load time of the included package JIT compilation. More packages or functions can be added to the binary, depending on the application. Furthermore, full executable binaries can easily be made with PackageCompiler.jl.","category":"page"},{"location":"concepts/compile_binary/#More-Info","page":"Compile Binaries","title":"More Info","text":"","category":"section"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"note: Note\nAlso see this Julia Binaries Blog. More on discourse.. 
Also see new brute force sysimg work at Fezzik.jl.","category":"page"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"note: Note\nContents of a previous blog post this AOT vs JIT compiling blog post has been wrapped into PackageCompiler.jl.","category":"page"},{"location":"examples/examples/#examples_section","page":"Caesar Examples","title":"Examples","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"The following examples demonstrate the conceptual operation of Caesar, highlighting specific features of the framework and its use.","category":"page"},{"location":"examples/examples/#Continuous-Scalar","page":"Caesar Examples","title":"Continuous Scalar","text":"","category":"section"},{"location":"examples/examples/#Calculating-a-Square-Root-(Underdetermined)","page":"Caesar Examples","title":"Calculating a Square Root (Underdetermined)","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Probably the most minimal example that illustrates how factor graphs represent a mathematical framework is a reworking of the classic square root calculation.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"

      \n\n

      ","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"note: Note\nWIP, a combined type-definion and square root script is available as an example script. We're working to present the example without having to define any types.","category":"page"},{"location":"examples/examples/#Continuous-Scalar-with-Mixtures","page":"Caesar Examples","title":"Continuous Scalar with Mixtures","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"This abstract continuous scalar example illustrates how IncrementalInference.jl enables algebraic relations between stochastic variables, and how a final posterior belief estimate is calculated from several pieces of information.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"

      \n\n

      ","category":"page"},{"location":"examples/examples/#Hexagonal-2D","page":"Caesar Examples","title":"Hexagonal 2D","text":"","category":"section"},{"location":"examples/examples/#Batch-Mode","page":"Caesar Examples","title":"Batch Mode","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"A simple 2D hexagonal robot trajectory example is expanded below using techniques developed in simultaneous localization and mapping (SLAM).","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"

      \n\n

      ","category":"page"},{"location":"examples/examples/#Bayes-Tree-Fixed-Lag-Solving-Hexagonal2D-Revisited","page":"Caesar Examples","title":"Bayes Tree Fixed-Lag Solving - Hexagonal2D Revisited","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"The hexagonal fixed-lag example shows how tree based clique recycling can be achieved. A further example is given in the real-world underwater example below.","category":"page"},{"location":"examples/examples/#An-Underdetermined-Solution-(a.k.a.-SLAM-e-donut)","page":"Caesar Examples","title":"An Underdetermined Solution (a.k.a. SLAM-e-donut)","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"This tutorial describes (unforced multimodality) a range-only system where there are always more variable dimensions than range measurements made, see Underdeterminied Example here The error distribution over ranges could be nearly anything, but are restricted to Gaussian-only in this example to illustrate an alternative point – other examples show inference results where highly non-Gaussian error distributions are used.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Multi-modal range only example (click here or image for full Vimeo): ","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/#Towards-Real-Time-Underwater-Acoustic-Navigation","page":"Caesar Examples","title":"Towards Real-Time Underwater Acoustic Navigation","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"This example uses \"dead reckon tethering\" (DRT) to perform many of the common robot odometry and high frequency pose updated operations. These features are a staple and standard part of the distributed factor graph system.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Click on image (or this link to Vimeo) for a video illustration:","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"AUV","category":"page"},{"location":"examples/examples/#Uncertain-Data-Associations,-(forced-multi-hypothesis)","page":"Caesar Examples","title":"Uncertain Data Associations, (forced multi-hypothesis)","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"This example presents a novel multimodal solution to an otherwise intractible multihypothesis SLAM problem. This work spans the entire Victoria Park dataset, and resolves a solution over roughly 10000 variable dimensions with 2^1700 (yes to teh power 1700) theoretically possible modes. At the time of first solution in 2016, a full batch solution took around 3 hours to compute on a very spartan early implementation.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\n

      ","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"The fractional multi-hypothesis assignments addFactor!(..., multihypo=[1.0; 0.5;0.5]). Similarly for tri-nary or higher multi-hypotheses.","category":"page"},{"location":"examples/examples/#Probabilistic-Data-Association-(Uncertain-loop-closures)","page":"Caesar Examples","title":"Probabilistic Data Association (Uncertain loop closures)","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Example where the standard multihypothesis addFactor!(.., multihypo=[1.0;0.5;0.5]) interface is used. This is from the Kitti driving dataset. Video here. The data association and multihypothesis section discusses this feature in more detail.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/#Synthetic-Aperture-Sonar-SLAM","page":"Caesar Examples","title":"Synthetic Aperture Sonar SLAM","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"The full functional (approximate sum-product) inference approach can be used to natively imbed single hydrophone acoustic waveform data into highly non-Gaussian SAS factors–that implicitly perform beamforming/micro-location–-for a simultaneous localization and mapping solution (image links to video). See the Raw Correlator Probability (Matched Filter) Section for more details.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/#Marine-Surface-Vehicle-with-ROS","page":"Caesar Examples","title":"Marine Surface Vehicle with ROS","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"New marine surface vehicle code tutorial using ROS.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"note: Note\nSee initial example here, and native ROS support section here.","category":"page"},{"location":"examples/examples/#Simulated-Ambiguous-SONAR-in-3D","page":"Caesar Examples","title":"Simulated Ambiguous SONAR in 3D","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Intersection of ambiguous elevation angle from planar SONAR sensor: ","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Bi-modal belief","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/#Multi-session-Indoor-Robot","page":"Caesar Examples","title":"Multi-session Indoor Robot","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Multi-session Turtlebot example of the second floor in the Stata Center:","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"Turtlebot","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"See the multisession information 
page for more details, as well as academic work:","category":"page"},{"location":"examples/examples/#More-Examples","page":"Caesar Examples","title":"More Examples","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Please see examples folders for Caesar and RoME for more examples, with expanded documentation in the works.","category":"page"},{"location":"examples/examples/#Adding-Factors-Simple-Factor-Design","page":"Caesar Examples","title":"Adding Factors - Simple Factor Design","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Caesar can be extended with new variables and factors without changing the core code. An example of this design pattern is provided in this example.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Defining New Variables and Factor","category":"page"},{"location":"examples/examples/#Adding-Factors-DynPose-Factor","page":"Caesar Examples","title":"Adding Factors - DynPose Factor","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Intermediate Example: Adding Dynamic Factors and Variables","category":"page"},{"location":"concepts/using_julia/#Using-Julia","page":"Using Julia","title":"Using Julia","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"While Caesar.jl is accessible from various programming languages, this page describes how to use Julia, existing packages, multi-process and multi-threading features, and more. A wealth of general Julia resources are available in the Internet, see `www.julialang.org for more resources.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"If you are familar with Julia, feel free to skip over to the next page.","category":"page"},{"location":"concepts/using_julia/#Julia-REPL-and-Help","page":"Using Julia","title":"Julia REPL and Help","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Julia's documentation on the REPL can be found here. As a brief example, the REPL in a terminal looks as follows:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"$ julia -O3\n _\n _ _ _(_)_ | Documentation: https://docs.julialang.org\n (_) | (_) (_) |\n _ _ _| |_ __ _ | Type \"?\" for help, \"]?\" for Pkg help.\n | | | | | | |/ _` | |\n | | |_| | | | (_| | | Version 1.6.3 (2021-09-23)\n _/ |\\__'_|_|_|\\__'_| | Official https://julialang.org/ release\n|__/ |\n\njulia> ? 
# upon typing ?, the prompt changes (in place) to: help?>\n\nhelp?> string\nsearch: string String Cstring Cwstring RevString randstring bytestring SubString\n\n string(xs...)\n\n Create a string from any values using the print function.\n ...","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"The -O 3 is for level 3 code compilation optimization and is a useful habit for slightly faster execution, but slightly slower first run just-in-time compilation of any new function.","category":"page"},{"location":"concepts/using_julia/#Loading-Packages","page":"Using Julia","title":"Loading Packages","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Assuming you just loaded an empty REPL, or at the start of a script, or working inside the VSCode IDE, the first thing to do is load the necessary Julia packages. Caesar.jl is an umbrella package potentially covering over 100 Julia Packages. For this reason the particular parts of the code are broken up amongst more focussed vertical purpose library packages. Usually for Robotics either Caesar or less expansive RoME will do. Other non-Geometric sensor processing applications might build in the MM-iSAMv2, Bayes tree, and DistributedFactorGraph libraries. Any of these packages can be loaded as follows:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"# umbrella containing most functional packages including RoME\nusing Caesar\n# contains the IncrementalInference and other geometric manifold packages\nusing RoME\n# contains among others DistributedFactorGraphs.jl and ApproxManifoldProducts.jl\nusing IncrementalInference","category":"page"},{"location":"concepts/using_julia/#Optional-Package-Loading","page":"Using Julia","title":"Optional Package Loading","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Many of these packages have additional features that are not included by default. For example, the Flux.jl machine learning package will introduce several additional features when loaded, e.g.:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"julia> using Flux, RoME\n\n[ Info: IncrementalInference is adding Flux related functionality.\n[ Info: RoME is adding Flux related functionality.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"For completeness, so too with packages like Images.jl, RobotOS.jl, and others:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"using Caesar, Images","category":"page"},{"location":"concepts/using_julia/#Running-Unit-Tests-Locally","page":"Using Julia","title":"Running Unit Tests Locally","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Unit tests can further be performed for the upstream packages as follows – NOTE first time runs are slow since each new function call or package must first be precompiled. These test can take up to an hour and may have occasional stochastic failures in any one of the many tests being run. Thus far we have accepted occasional stochasticly driven numerical events–-e.g. a test event might result in 1.03 < 1–-rather than making tests so loose such that actual bugs are missed. 
Strictly speaking, we should repeat tests 10 times over with tighter tolerances, but that would require hundreds or thousands of cloud CI hours a week.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"juila> ] # activate Pkg manager\n\n# the multimodal incremental smoothing and mapping solver\n(v1.6) pkg> test IncrementalInference\n...\n# robotics related variables and factors to work with IncrementalInference -- can be used standalone SLAM system\n(v1.6) pkg> test RoME\n...\n# umbrella framework with interaction tools and more -- allows stand alone and server based solving\n(v1.6) pkg> test Caesar\n...","category":"page"},{"location":"concepts/using_julia/#Install-Repos-for-Development","page":"Using Julia","title":"Install Repos for Development","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Alternatively, the dev command:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"(v1.6) pkg> dev https://github.com/JuliaRobotics/Caesar.jl\n\n# Or fetching a local fork where you push access\n# (v1.6) pkg> dev https://github.com/dehann/Caesar.jl","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"warn: Warn\nDevelopment packages are NOT managed by Pkg.jl, so you have to manage this Git repo manually. Development packages can usually be found at, e.g. Caesar~/.julia/dev/Caesar","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"If you'd like to modify or contribute then feel free to fork the specific repo from JuliaRobotics, complete the work on branches in the fork as is normal with a Git workflow and then submit a PR back upstream. We try to keep PRs small, specific to a task and preempt large changes by first merging smaller non-breaking changes and finally do a small switch over PR. We also follow a backport onto release/vX.Y branch strategy with common main || master branch as the \"lobby\" for shared development into which individual single responsibility PRs are merged. 
Each PR, the main development lobby, and stable release/vX.Y branches are regularly tested through Continuous Integration at each of the repsective packages.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"note: Note\nBinary compilation and fast \"first-time-to-plot\" can be done through PackageCompiler.jl, see here for more details.","category":"page"},{"location":"concepts/using_julia/#Julia-Command-Examples","page":"Using Julia","title":"Julia Command Examples","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Run Julia in REPL (console) mode:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"$ julia\njulia> println(\"hello world\")\n\"hello world\"","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Maybe a script, or command:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"user@...$ echo \"println(\\\"hello again\\\")\" > myscript.jl\nuser@...$ julia myscript.jl\nhello again\nuser@...$ rm myscript.jl\n\nuser@...$ julia -e \"println(\\\"one more time.\\\")\"\none more time.\nuser@...$ julia -e \"println(\\\"...testing...\\\")\"\n...testing...","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"note: Note\nWhen searching for Julia related help online, use the phrase 'julialang' instead of just 'julia'. For example, search for 'julialang workflow tips' or 'julialang performance tips'. Also, see FAQ - Why are first runs slow?, which is due to Just-In-Time/Pre compiling and caching.","category":"page"},{"location":"concepts/using_julia/#Next-Steps","page":"Using Julia","title":"Next Steps","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Although Caesar is Julia-based, it provides multi-language support with a ZMQ interface. This is discussed in Caesar Multi-Language Support. Caesar.jl also supports various visualizations and plots by using Arena, RoMEPlotting, and Director. 
This is discussed in Visualization with Arena.jl and RoMEPlotting.jl.","category":"page"},{"location":"func_ref/#Additional-Function-Reference","page":"More Functions","title":"Additional Function Reference","text":"","category":"section"},{"location":"func_ref/","page":"More Functions","title":"More Functions","text":"Pages = [\n \"func_ref.md\"\n]\nDepth = 3","category":"page"},{"location":"func_ref/#RoME","page":"More Functions","title":"RoME","text":"","category":"section"},{"location":"func_ref/","page":"More Functions","title":"More Functions","text":"getRangeKDEMax2D\ninitFactorGraph!\naddOdoFG!","category":"page"},{"location":"func_ref/#RoME.getRangeKDEMax2D","page":"More Functions","title":"RoME.getRangeKDEMax2D","text":"getRangeKDEMax2D(fgl, vsym1, vsym2)\n\n\nCalculate the cartesian distance between two vertices in the graph using their symbol name, and by maximum belief point.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#RoME.initFactorGraph!","page":"More Functions","title":"RoME.initFactorGraph!","text":"initFactorGraph!(\n fg;\n P0,\n init,\n N,\n lbl,\n solvable,\n firstPoseType,\n labels\n)\n\n\nInitialize a factor graph object as Pose2, Pose3, or neither and returns variable and factor symbols as array.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#RoME.addOdoFG!","page":"More Functions","title":"RoME.addOdoFG!","text":"addOdoFG!(fg, n, DX, cov; N, solvable, labels)\n\n\nCreate a new variable node and insert odometry constraint factor between which will automatically increment latest pose symbol x for new node new node and constraint factor are returned as a tuple.\n\n\n\n\n\naddOdoFG!(fgl, odo; N, solvable, labels)\n\n\nCreate a new variable node and insert odometry constraint factor between which will automatically increment latest pose symbol x for new node new node and constraint factor are returned as a tuple.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference","page":"More Functions","title":"IncrementalInference","text":"","category":"section"},{"location":"func_ref/","page":"More Functions","title":"More Functions","text":"approxCliqMarginalUp!\nareCliqVariablesAllMarginalized\nattemptTreeSimilarClique\nchildCliqs\ncliqHistFilterTransitions\ncycleInitByVarOrder!\ndoautoinit!\ndrawCliqSubgraphUpMocking\nfifoFreeze!\nfilterHistAllToArray\nfmcmc!\ngetClique\ngetCliqAllVarIds\ngetCliqAssocMat\ngetCliqDepth\ngetCliqDownMsgsAfterDownSolve\ngetCliqFrontalVarIds\ngetCliqVarInitOrderUp\ngetCliqMat\ngetCliqSeparatorVarIds\ngetCliqSiblings\ngetCliqVarIdsPriors\ngetCliqVarSingletons\ngetParent\ngetTreeAllFrontalSyms\nhasClique\nisInitialized\nisMarginalized\nisTreeSolved\nisPartial\nlocalProduct\nmakeCsmMovie\nparentCliq\npredictVariableByFactor\nprintCliqHistorySummary\nresetCliqSolve!\nresetData!\nresetTreeCliquesForUpSolve!\nresetVariable!\nsetfreeze!\nsetValKDE!\nsetVariableInitialized!\nsolveCliqWithStateMachine!\ntransferUpdateSubGraph!\ntreeProductDwn\ntreeProductUp\nunfreezeVariablesAll!\ndontMarginalizeVariablesAll!\nupdateFGBT!\nupGibbsCliqueDensity\nresetVariableAllInitializations!","category":"page"},{"location":"func_ref/#IncrementalInference.approxCliqMarginalUp!","page":"More Functions","title":"IncrementalInference.approxCliqMarginalUp!","text":"approxCliqMarginalUp!(csmc; ...)\napproxCliqMarginalUp!(\n csmc,\n childmsgs;\n N,\n dbg,\n multiproc,\n logger,\n iters,\n drawpdf\n)\n\n\nApproximate Chapman-Kolmogorov transit integral and return separator marginals as messages to pass up the Bayes (Junction) 
tree, along with additional clique operation values for debugging.\n\nNotes\n\nonduplicate=true by default internally uses deepcopy of factor graph and Bayes tree, and does not update the given objects. Set false to update fgl and treel during compute.\n\nFuture\n\nTODO: internal function chain is too long and needs to be refactored for maintainability.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.areCliqVariablesAllMarginalized","page":"More Functions","title":"IncrementalInference.areCliqVariablesAllMarginalized","text":"areCliqVariablesAllMarginalized(subfg, cliq)\n\n\nReturn true if all variables in clique are considered marginalized (and initialized).\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.attemptTreeSimilarClique","page":"More Functions","title":"IncrementalInference.attemptTreeSimilarClique","text":"attemptTreeSimilarClique(othertree, seeksSimilar)\n\n\nSpecial internal function to try return the clique data if succesfully identified in othertree::AbstractBayesTree, based on contents of seeksSimilar::BayesTreeNodeData.\n\nNotes\n\nUsed to identify and skip similar cliques (i.e. recycle computations)\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.childCliqs","page":"More Functions","title":"IncrementalInference.childCliqs","text":"childCliqs(treel, cliq)\n\n\nReturn a vector of child cliques to cliq.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.cliqHistFilterTransitions","page":"More Functions","title":"IncrementalInference.cliqHistFilterTransitions","text":"cliqHistFilterTransitions(hist, nextfnc)\n\n\nReturn state machine transition steps from history such that the nextfnc::Function.\n\nRelated:\n\nprintCliqHistorySummary, filterHistAllToArray, sandboxCliqResolveStep\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.cycleInitByVarOrder!","page":"More Functions","title":"IncrementalInference.cycleInitByVarOrder!","text":"cycleInitByVarOrder!(subfg, varorder; solveKey, logger)\n\n\nCycle through var order and initialize variables as possible in subfg::AbstractDFG. Return true if something was updated.\n\nNotes:\n\nassumed subfg is a subgraph containing only the factors that can be used.\nincluding the required up or down messages\nintended for both up and down initialization operations.\n\nDev Notes\n\nShould monitor updates based on the number of inferred & solvable dimensions\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.doautoinit!","page":"More Functions","title":"IncrementalInference.doautoinit!","text":"doautoinit!(dfg, xi; solveKey, singles, N, logger)\n\n\nEXPERIMENTAL: initialize target variable xi based on connected factors in the factor graph fgl. Possibly called from addFactor!, or doCliqAutoInitUp! 
(?).\n\nNotes:\n\nSpecial carve out for multihypo cases, see issue 427.\n\nDevelopment Notes:\n\nTarget factor is first (singletons) or second (dim 2 pairwise) variable vertex in xi.\nTODO use DFG properly with local operations and DB update at end.\nTODO get faster version of isInitialized for database version.\nTODO: Persist this back if we want to here.\nTODO: init from just partials\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.drawCliqSubgraphUpMocking","page":"More Functions","title":"IncrementalInference.drawCliqSubgraphUpMocking","text":"drawCliqSubgraphUpMocking(\n fgl,\n treel,\n frontalSym;\n show,\n filepath,\n engine,\n viewerapp\n)\n\n\nConstruct (new) subgraph and draw the subgraph associated with clique frontalSym::Symbol.\n\nNotes\n\nSee drawGraphCliq/writeGraphPdf for details on keyword options.\n\nRelated\n\ndrawGraphCliq, spyCliqMat, drawTree, buildCliqSubgraphUp, buildSubgraphFromLabels!\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.fifoFreeze!","page":"More Functions","title":"IncrementalInference.fifoFreeze!","text":"fifoFreeze!(dfg)\n\n\nFreeze nodes that are older than the quasi fixed-lag length defined by fg.qfl, according to fg.fifo ordering.\n\nFuture:\n\nAllow different freezing strategies beyond fifo.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.filterHistAllToArray","page":"More Functions","title":"IncrementalInference.filterHistAllToArray","text":"filterHistAllToArray(tree, hists, frontals, nextfnc)\n\n\nReturn state machine transition steps from all cliq histories with transition nextfnc::Function.\n\nRelated:\n\nprintCliqHistorySummary, cliqHistFilterTransitions, sandboxCliqResolveStep\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.fmcmc!","page":"More Functions","title":"IncrementalInference.fmcmc!","text":"fmcmc!(fgl, cliq, fmsgs, lbls, solveKey, N, MCMCIter)\nfmcmc!(fgl, cliq, fmsgs, lbls, solveKey, N, MCMCIter, dbg)\nfmcmc!(\n fgl,\n cliq,\n fmsgs,\n lbls,\n solveKey,\n N,\n MCMCIter,\n dbg,\n logger\n)\nfmcmc!(\n fgl,\n cliq,\n fmsgs,\n lbls,\n solveKey,\n N,\n MCMCIter,\n dbg,\n logger,\n multithreaded\n)\n\n\nIterate successive approximations of clique marginal beliefs by means of the stipulated proposal convolutions and products of the functional objects for tree clique cliq.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getClique","page":"More Functions","title":"IncrementalInference.getClique","text":"getClique(tree, cId)\n\n\nReturn the TreeClique node object that represents a clique in the Bayes (Junction) tree, as defined by one of the frontal variables frt<:AbstractString.\n\nNotes\n\nFrontal variables only occur once in a clique per tree, therefore is a unique identifier.\n\nRelated:\n\ngetCliq, getTreeAllFrontalSyms\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqAllVarIds","page":"More Functions","title":"IncrementalInference.getCliqAllVarIds","text":"getCliqAllVarIds(cliq)\n\n\nGet all cliq variable ids::Symbol.\n\nRelated\n\ngetCliqVarIdsAll, getCliqFactorIdsAll, getCliqVarsWithFrontalNeighbors\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqAssocMat","page":"More Functions","title":"IncrementalInference.getCliqAssocMat","text":"getCliqAssocMat(cliq)\n\n\nReturn boolean matrix of factor by variable (row by column) associations within clique, corresponds to order presented by getCliqFactorIds 
and getCliqAllVarIds.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqDepth","page":"More Functions","title":"IncrementalInference.getCliqDepth","text":"getCliqDepth(tree, cliq)\n\n\nReturn depth in tree as ::Int, with root as depth=0.\n\nRelated\n\ngetCliq\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqDownMsgsAfterDownSolve","page":"More Functions","title":"IncrementalInference.getCliqDownMsgsAfterDownSolve","text":"getCliqDownMsgsAfterDownSolve(\n subdfg,\n cliq,\n solveKey;\n status,\n sender\n)\n\n\nReturn dictionary of down messages consisting of all frontal and separator beliefs of this clique.\n\nNotes:\n\nFetches numerical results from subdfg as dictated in cliq.\nreturn LikelihoodMessage\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqFrontalVarIds","page":"More Functions","title":"IncrementalInference.getCliqFrontalVarIds","text":"getCliqFrontalVarIds(cliqdata)\n\n\nGet the frontal variable IDs ::Int for a given clique in a Bayes (Junction) tree.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqVarInitOrderUp","page":"More Functions","title":"IncrementalInference.getCliqVarInitOrderUp","text":"getCliqVarInitOrderUp(subfg)\n\n\nReturn the most likely ordering for initializing factor (assuming up solve sequence).\n\nNotes:\n\nsorts id (label) for increasing number of connected factors using the clique subfg with messages already included.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqMat","page":"More Functions","title":"IncrementalInference.getCliqMat","text":"getCliqMat(cliq; showmsg)\n\n\nReturn boolean matrix of factor variable associations for a clique, optionally including (showmsg::Bool=true) the upward message singletons. Variable order corresponds to getCliqAllVarIds.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqSeparatorVarIds","page":"More Functions","title":"IncrementalInference.getCliqSeparatorVarIds","text":"getCliqSeparatorVarIds(cliqdata)\n\n\nGet cliq separator (a.k.a. conditional) variable ids::Symbol.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqSiblings","page":"More Functions","title":"IncrementalInference.getCliqSiblings","text":"getCliqSiblings(treel, cliq)\ngetCliqSiblings(treel, cliq, inclusive)\n\n\nReturn a vector of all siblings to a clique, which defaults to not inclusive the calling cliq.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqVarIdsPriors","page":"More Functions","title":"IncrementalInference.getCliqVarIdsPriors","text":"getCliqVarIdsPriors(cliq)\ngetCliqVarIdsPriors(cliq, allids)\ngetCliqVarIdsPriors(cliq, allids, partials)\n\n\nGet variable ids::Int with prior factors associated with this cliq.\n\nNotes:\n\ndoes not include any singleton messages from upward or downward message passing.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqVarSingletons","page":"More Functions","title":"IncrementalInference.getCliqVarSingletons","text":"getCliqVarSingletons(cliq)\ngetCliqVarSingletons(cliq, allids)\ngetCliqVarSingletons(cliq, allids, partials)\n\n\nGet cliq variable IDs with singleton factors – i.e. 
both in clique priors and up messages.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getParent","page":"More Functions","title":"IncrementalInference.getParent","text":"getParent(treel, afrontal)\n\n\nReturn cliq's parent clique.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getTreeAllFrontalSyms","page":"More Functions","title":"IncrementalInference.getTreeAllFrontalSyms","text":"getTreeAllFrontalSyms(_, tree)\n\n\nReturn one symbol (a frontal variable) from each clique in the ::BayesTree.\n\nNotes\n\nFrontal variables only occur once in a clique per tree, therefore is a unique identifier.\n\nRelated:\n\nwhichCliq, printCliqHistorySummary\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.hasClique","page":"More Functions","title":"IncrementalInference.hasClique","text":"hasClique(bt, frt)\n\n\nReturn boolean on whether the frontal variable frt::Symbol exists somewhere in the ::BayesTree.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#DistributedFactorGraphs.isInitialized","page":"More Functions","title":"DistributedFactorGraphs.isInitialized","text":"isInitialized(var)\nisInitialized(var, key)\n\n\nReturns state of variable data .initialized flag.\n\nNotes:\n\nused by both factor graph variable and Bayes tree clique logic.\n\n\n\n\n\nisInitialized(cliq)\n\n\nReturns state of Bayes tree clique .initialized flag.\n\nNotes:\n\nused by Bayes tree clique logic.\nsimilar method in DFG\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#DistributedFactorGraphs.isMarginalized","page":"More Functions","title":"DistributedFactorGraphs.isMarginalized","text":"isMarginalized(vert)\nisMarginalized(vert, solveKey)\n\n\nReturn ::Bool on whether this variable has been marginalized.\n\nNotes:\n\nVariableNodeData default solveKey=:default\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.isTreeSolved","page":"More Functions","title":"IncrementalInference.isTreeSolved","text":"isTreeSolved(treel; skipinitialized)\n\n\nReturn true or false depending on whether the tree has been fully initialized/solved/marginalized.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#ApproxManifoldProducts.isPartial","page":"More Functions","title":"ApproxManifoldProducts.isPartial","text":"isPartial(fcf)\n\n\nReturn ::Bool on whether factor is a partial constraint.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.localProduct","page":"More Functions","title":"IncrementalInference.localProduct","text":"localProduct(dfg, sym; solveKey, N, dbg, logger)\n\n\nUsing factor graph object dfg, project belief through connected factors (convolution with likelihood) to variable sym followed by a approximate functional product.\n\nReturn: product belief, full proposals, partial dimension proposals, labels\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.makeCsmMovie","page":"More Functions","title":"IncrementalInference.makeCsmMovie","text":"makeCsmMovie(fg, tree; ...)\nmakeCsmMovie(\n fg,\n tree,\n cliqs;\n assignhist,\n show,\n filename,\n frames\n)\n\n\nConvenience function to assign and make video of CSM state machine for cliqs.\n\nNotes\n\nProbably several teething issues still (lower priority).\nUse assignhist if solver params async was true, or errored.\n\nRelated\n\ncsmAnimate, printCliqHistorySummary\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.parentCliq","page":"More 
Functions","title":"IncrementalInference.parentCliq","text":"parentCliq(treel, cliq)\n\n\nReturn cliq's parent clique.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#RoME.predictVariableByFactor","page":"More Functions","title":"RoME.predictVariableByFactor","text":"predictVariableByFactor(dfg, targetsym, fct, prevars)\n\n\nMethod to compare current and predicted estimate on a variable, developed for testing a new factor before adding to the factor graph.\n\nNotes\n\nfct does not have to be in the factor graph – likely used to test beforehand.\nfunction is useful for detecting if multihypo should be used.\napproxConv will project the full belief estimate through some factor but must already be in factor graph.\n\nExample\n\n# fg already exists containing :x7 and :l3\npp = Pose2Point2BearingRange(Normal(0,0.1),Normal(10,1.0))\n# possible new measurement from :x7 to :l3\ncurr, pred = predictVariableByFactor(fg, :l3, pp, [:x7; :l3])\n# example of naive user defined test on fit score\nfitscore = minkld(curr, pred)\n# `multihypo` can be used as option between existing or new variables\n\nRelated\n\napproxConv\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.printCliqHistorySummary","page":"More Functions","title":"IncrementalInference.printCliqHistorySummary","text":"printCliqHistorySummary(fid, hist)\nprintCliqHistorySummary(fid, hist, cliqid)\n\n\nPrint a short summary of state machine history for a clique solve.\n\nRelated:\n\ngetTreeAllFrontalSyms, animateCliqStateMachines, printHistoryLine, printCliqHistorySequential\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetCliqSolve!","page":"More Functions","title":"IncrementalInference.resetCliqSolve!","text":"resetCliqSolve!(dfg, treel, cliq; solveKey)\n\n\nReset the state of all variables in a clique to not initialized.\n\nNotes\n\nresets numberical values to zeros.\n\nDev Notes\n\nTODO not all kde manifolds will initialize to zero.\nFIXME channels need to be consolidated\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetData!","page":"More Functions","title":"IncrementalInference.resetData!","text":"resetData!(vdata)\n\n\nPartial reset of basic data fields in ::VariableNodeData of ::FunctionNode structures.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetTreeCliquesForUpSolve!","page":"More Functions","title":"IncrementalInference.resetTreeCliquesForUpSolve!","text":"resetTreeCliquesForUpSolve!(treel)\n\n\nReset the Bayes (Junction) tree so that a new upsolve can be performed.\n\nNotes\n\nWill change previous clique status from DOWNSOLVED to INITIALIZED only.\nSets the color of tree clique to lightgreen.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetVariable!","page":"More Functions","title":"IncrementalInference.resetVariable!","text":"resetVariable!(varid; solveKey)\n\n\nReset the solve state of a variable to uninitialized/unsolved state.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.setfreeze!","page":"More Functions","title":"IncrementalInference.setfreeze!","text":"setfreeze!(dfg, sym)\n\n\nSet variable(s) sym of factor graph to be marginalized – i.e. 
not be updated by inference computation.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.setValKDE!","page":"More Functions","title":"IncrementalInference.setValKDE!","text":"setValKDE!(vd, pts, bws)\nsetValKDE!(vd, pts, bws, setinit)\nsetValKDE!(vd, pts, bws, setinit, ipc)\n\n\nSet the point centers and bandwidth parameters of a variable node, also set isInitialized=true if setinit::Bool=true (as per default).\n\nNotes\n\ninitialized is used for initial solve of factor graph where variables are not yet initialized.\ninferdim is used to identify if the initialized was only partial.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.setVariableInitialized!","page":"More Functions","title":"IncrementalInference.setVariableInitialized!","text":"setVariableInitialized!(varid, status)\n\n\nSet variable initialized status.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.solveCliqWithStateMachine!","page":"More Functions","title":"IncrementalInference.solveCliqWithStateMachine!","text":"solveCliqWithStateMachine!(\n dfg,\n tree,\n frontal;\n iters,\n downsolve,\n recordhistory,\n verbose,\n nextfnc,\n prevcsmc\n)\n\n\nStandalone state machine solution for a single clique.\n\nRelated:\n\ninitInferTreeUp!\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.transferUpdateSubGraph!","page":"More Functions","title":"IncrementalInference.transferUpdateSubGraph!","text":"transferUpdateSubGraph!(dest, src; ...)\ntransferUpdateSubGraph!(dest, src, syms; ...)\ntransferUpdateSubGraph!(\n dest,\n src,\n syms,\n logger;\n updatePPE,\n solveKey\n)\n\n\nTransfer contents of src::AbstractDFG variables syms::Vector{Symbol} to dest::AbstractDFG. Notes\n\nReads, dest := src, for all syms\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.treeProductDwn","page":"More Functions","title":"IncrementalInference.treeProductDwn","text":"treeProductDwn(fg, tree, cliq, sym; N, dbg)\n\n\nCalculate a fresh–-single step–-approximation to the variable sym in clique cliq as though during the downward message passing. The full inference algorithm may repeatedly calculate successive apprimxations to the variable based on the structure of variables, factors, and incoming messages to this clique. Which clique to be used is defined by frontal variable symbols (cliq in this case) – see getClique(...) for more details. The sym symbol indicates which symbol of this clique to be calculated. Note that the sym variable must appear in the clique where cliq is a frontal variable.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.treeProductUp","page":"More Functions","title":"IncrementalInference.treeProductUp","text":"treeProductUp(fg, tree, cliq, sym; N, dbg)\n\n\nCalculate a fresh (single step) approximation to the variable sym in clique cliq as though during the upward message passing. The full inference algorithm may repeatedly calculate successive apprimxations to the variables based on the structure of the clique, factors, and incoming messages. Which clique to be used is defined by frontal variable symbols (cliq in this case) – see getClique(...) for more details. The sym symbol indicates which symbol of this clique to be calculated. 
Note that the sym variable must appear in the clique where cliq is a frontal variable.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.unfreezeVariablesAll!","page":"More Functions","title":"IncrementalInference.unfreezeVariablesAll!","text":"unfreezeVariablesAll!(fgl)\n\n\nFree all variables from marginalization.\n\nRelated\n\ndontMarginalizeVariablesAll!\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.dontMarginalizeVariablesAll!","page":"More Functions","title":"IncrementalInference.dontMarginalizeVariablesAll!","text":"dontMarginalizeVariablesAll!(fgl)\n\n\nFree all variables from marginalization.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.updateFGBT!","page":"More Functions","title":"IncrementalInference.updateFGBT!","text":"updateFGBT!(fg, cliq, IDvals; dbg, fillcolor, logger)\n\n\nUpdate cliq cliqID in Bayes (Juction) tree bt according to contents of urt. Intended use is to update main clique after a upward belief propagation computation has been completed per clique.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.upGibbsCliqueDensity","page":"More Functions","title":"IncrementalInference.upGibbsCliqueDensity","text":"upGibbsCliqueDensity(dfg, cliq, solveKey, inmsgs)\nupGibbsCliqueDensity(dfg, cliq, solveKey, inmsgs, N)\nupGibbsCliqueDensity(dfg, cliq, solveKey, inmsgs, N, dbg)\nupGibbsCliqueDensity(\n dfg,\n cliq,\n solveKey,\n inmsgs,\n N,\n dbg,\n iters\n)\nupGibbsCliqueDensity(\n dfg,\n cliq,\n solveKey,\n inmsgs,\n N,\n dbg,\n iters,\n logger\n)\n\n\nPerform computations required for the upward message passing during belief propation on the Bayes (Junction) tree. This function is usually called as via remote_call for multiprocess dispatch.\n\nNotes\n\nfg factor graph,\ntree Bayes tree,\ncliq which cliq to perform the computation on,\nparent the parent clique to where the upward message will be sent,\nchildmsgs is for any incoming messages from child cliques.\n\nDevNotes\n\nFIXME total rewrite with AMP #41 and RoME #244 in mind\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetVariableAllInitializations!","page":"More Functions","title":"IncrementalInference.resetVariableAllInitializations!","text":"resetVariableAllInitializations!(fgl)\n\n\nReset initialization flag on all variables in ::AbstractDFG.\n\nNotes\n\nNumerical values remain, but inference will overwrite since init flags are now false.\n\n\n\n\n\n","category":"function"},{"location":"examples/parametric_solve/#Parametric-Solve-(Experimental)","page":"[DEV] Parametric Solve","title":"Parametric Solve (Experimental)","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"Note that parametric solve (i.e. conventional Gaussians) is currently supported as an experimental feature which might appear more buggy. Familiar parametric methods should become fully integrated and we invite comments or contributions from the community. 
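As a rough usage sketch only (assuming a small ContinuousScalar graph built with the standard Prior and LinearRelative factors; the exact keyword options of this experimental API may vary between versions):\n\nusing IncrementalInference\n\n# minimal illustrative graph (assumed for this sketch)\nfg = initfg()\naddVariable!(fg, :x0, ContinuousScalar)\naddVariable!(fg, :x1, ContinuousScalar)\naddFactor!(fg, [:x0], Prior(Normal(0.0, 0.1)))\naddFactor!(fg, [:x0; :x1], LinearRelative(Normal(10.0, 0.5)))\n\n# batch parametric solve (experimental), see solveGraphParametric! documented below\nIncrementalInference.solveGraphParametric!(fg)\n\nIf needed, initParametricFrom! (documented below) can first seed the parametric solver data from an existing nonparametric solution. 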
A great deal of effort has gone into finding the best abstractions to support multiple factor graph solving strategies.","category":"page"},{"location":"examples/parametric_solve/#Batch-Parametric","page":"[DEV] Parametric Solve","title":"Batch Parametric","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"solveGraphParametric\nIncrementalInference.solveGraphParametric!","category":"page"},{"location":"examples/parametric_solve/#IncrementalInference.solveGraphParametric","page":"[DEV] Parametric Solve","title":"IncrementalInference.solveGraphParametric","text":"solveGraphParametric(args; kwargs...)\n\n\nBatch parametric graph solve using Riemannian Levenberg Marquardt.\n\n\n\n\n\n","category":"function"},{"location":"examples/parametric_solve/#DistributedFactorGraphs.solveGraphParametric!","page":"[DEV] Parametric Solve","title":"DistributedFactorGraphs.solveGraphParametric!","text":"Standard parametric graph solution (Experimental).\n\n\n\n\n\n","category":"function"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"Initializing the parametric solve from existing values can be done with the help of:","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"initParametricFrom!","category":"page"},{"location":"examples/parametric_solve/#IncrementalInference.initParametricFrom!","page":"[DEV] Parametric Solve","title":"IncrementalInference.initParametricFrom!","text":"initParametricFrom!(fg; ...)\ninitParametricFrom!(fg, fromkey; parkey, onepoint, force)\n\n\nInitialize the parametric solver data from a different solution in fromkey.\n\nDevNotes\n\nTODO, keyword force not wired up yet.\n\n\n\n\n\n","category":"function"},{"location":"examples/parametric_solve/#parametric_factors","page":"[DEV] Parametric Solve","title":"Defining Factors to Support a Parametric Solution (Experimental)","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"Factor that supports a parametric solution, with supported distributions (such as Normal and MvNormal), can be used in a parametric batch solver solveGraphParametric. ","category":"page"},{"location":"examples/parametric_solve/#getMeasurementParametric","page":"[DEV] Parametric Solve","title":"getMeasurementParametric","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"Parameteric calculations require the mean and covariance from Gaussian measurement functions (factors) using the getMeasurementParametric getMeasurementParametric defaults to looking for a supported distribution in field .Z followed by .z. Therefore, if the factor uses this fieldname, getMeasurementParametric does not need to be extended. 
You can extend by simply implementing, for example, your own IncrementalInference.getMeasurementParametric(f::OtherFactor) = m.density.","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"For this example, the Z field will automatically be detected used by default for MyFactor from above.","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"struct MyFactor{T <: SamplableBelief} <: IIF.AbstractRelativeRoots\n Z::T\nend","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"An example of where implementing getMeasurementParametric is needed can be found in the RoME factor Pose2Point2BearingRange","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"import getMeasurementParametric\nfunction getMeasurementParametric(s::Pose2Point2BearingRange{<:Normal, <:Normal})\n\n meas = [mean(s.bearing), mean(s.range)]\n iΣ = [1/var(s.bearing) 0;\n 0 1/var(s.range)]\n\n return meas, iΣ\nend","category":"page"},{"location":"examples/parametric_solve/#The-Factor","page":"[DEV] Parametric Solve","title":"The Factor","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"The factor is evaluated in a cost function using the Mahalanobis distance and the measurement should therefore match the residual returned. ","category":"page"},{"location":"examples/parametric_solve/#Optimization","page":"[DEV] Parametric Solve","title":"Optimization","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"IncrementalInference.solveGraphParametric! uses Optim.jl. The factors that are supported should have a gradient and Hessian available/exists and therefore it makes use of TwiceDifferentiable. Full control of Optim's setup is possible with keyword arguments. ","category":"page"},{"location":"caesar_framework/#The-Caesar-Framework","page":"Pkg Framework","title":"The Caesar Framework","text":"","category":"section"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"The Caesar.jl package is an \"umbrella\" framework around other dedicated algorithmic packages. While most of the packages are implemented in native Julia (JuliaPro), a few dependencies are wrapped C libraries. Note that C/C++ can be incorporated with zero overhead, such as was done with AprilTags.jl.","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"FAQ: Why use Julia?","category":"page"},{"location":"caesar_framework/#AMP-/-IIF-/-RoME","page":"Pkg Framework","title":"AMP / IIF / RoME","text":"","category":"section"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"Robot motion estimate (RoME.jl) can operate in the conventional SLAM manner, using local memory (dictionaries), or alternatively distribute over a persisted DistributedFactorGraph.jl through common serialization and graph storage/database technologies, see this article as example [1.3]. A variety of 2D plotting, 3D visualization, serialization, middleware, and analysis tools come standard as provided by the associated packages. 
RoME.jl combines reference frame transformations and robotics SLAM tool around the back-end solver provides by IncrementalInference.jl.","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"Details about the accompanying packages:","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"IncrementalInference.jl supplies the algebraic logic for factor graph inference with Bayes tree and depends on several packages itself.\nRoME.jl introduces nodes and factors that are useful to robotic navigation.\nApproxManifoldProducts.jl provides on-manifold belief product operations.","category":"page"},{"location":"caesar_framework/#Visualization-(Arena.jl/RoMEPlotting.jl)","page":"Pkg Framework","title":"Visualization (Arena.jl/RoMEPlotting.jl)","text":"","category":"section"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"Caesar visualization (plotting of results, graphs, and data) is provided by 2D and 3D packages respectively:","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"RoMEPlotting.jl are a set of scripts that provide MATLAB style plotting of factor graph beliefs, mostly supporting 2D visualization with some support for projections of 3D;\nArena.jl package, which is a collection of 3D visualization tools.","category":"page"},{"location":"caesar_framework/#Multilanguage-Interops:-NavAbility.io-SDKs-and-APIs","page":"Pkg Framework","title":"Multilanguage Interops: NavAbility.io SDKs and APIs","text":"","category":"section"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"The Caesar framework is not limited to direct Julia use. Check out www.NavAbility.io, or contact directly at (info@navabiliyt.io), for more details. Also see the community multi-language page for details.","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"note: Note\nFAQ: Interop with other languages (not limited to Julia only)","category":"page"},{"location":"concepts/flux_factors/#Incorporating-Neural-Network-Factors","page":"Flux (NN) Factors","title":"Incorporating Neural Network Factors","text":"","category":"section"},{"location":"concepts/flux_factors/","page":"Flux (NN) Factors","title":"Flux (NN) Factors","text":"IncrementalInference.jl and RoME.jl has native support for using Neural Networks (via Flux.jl) as non-Gaussian factors. Documentation is forthcoming, but meanwhile see the following generic Flux.jl factor structure. Note also that a standard Mixture approach already exists too.","category":"page"},{"location":"examples/legacy_deffactors/#Relative-Factors-(Legacy)","page":"Legacy Factors","title":"Relative Factors (Legacy)","text":"","category":"section"},{"location":"examples/legacy_deffactors/#One-Dimension-Roots-Example","page":"Legacy Factors","title":"One Dimension Roots Example","text":"","category":"section"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"Previously we looked at adding a prior. This section demonstrates the first of two <:AbstractRelative factor types. These are factors that introduce only relative information between variables in the factor graph.","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"This example is on <:IIF.AbstractRelativeRoots. 
First, lets create the factor as before ","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"struct MyFactor{T <: SamplableBelief} <: IIF.AbstractRelativeRoots\n Z::T\nend\ngetSample(cfo::CalcFactor{<:MyFactor}, N::Int=1) = (reshape(rand(cfo.factor.Z,N) ,1,N), )\n\nfunction (cfo::CalcFactor{<:MyFactor})( measurement_z,\n x1,\n x2 )\n #\n res = measurement_z - (x2[1] - x1[1])\n return res\nend","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"The selection of <:IIF.AbstractRelativeRoots, akin to earlier <:AbstractPrior, instructs IIF to find the roots of the provided residual function. That is the one dimensional residual function, res[1] = measurement - prediction, is used during inference to approximate the convolution of conditional beliefs from the approximate beliefs of the connected variables in the factor graph.","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"Important aspects to note, <:IIF.AbstractRelativeRoots requires all elements length(res) (the factor measurement dimension) to have a feasible zero crossing solution. A two dimensional system will solve for variables where both res[1]==0 and res[2]==0.","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"note: Note\nAs of IncrementalInference v0.21, CalcResidual no longer takes a residual as input parameter and should return residual, see IIF#467.","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"note: Note\nMeasurements and variables passed in to the factor residual function do not have the same type as when constructing the factor graph. It is recommended to leave these incoming types unrestricted. If you must define the types, these either are (or will be) of element type relating to the manifold on which the measurement or variable beliefs reside. Probably a vector or manifolds type. Usage can be very case specific, and hence better to let Julia type-inference automation do the hard work for you. The ","category":"page"},{"location":"examples/legacy_deffactors/#Two-Dimension-Minimize-Example","page":"Legacy Factors","title":"Two Dimension Minimize Example","text":"","category":"section"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"The second type is <:IIF.AbstractRelativeMinimize which simply minimizes the residual vector of the user factor. 
This type is useful for partial constraint situations where the residual function is not guaranteed to have zero crossings in all dimensions, and the problem is converted into a minimization problem instead:","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"struct OtherFactor{T <: SamplableBelief} <: IIF.AbstractRelativeMinimize\n Z::T # assuming something 2 dimensional\n userdata::String # or whatever is necessary\nend\n\n# just illustrating some arbitrary second value in tuple of different size\ngetSample(cfo::CalcFactor{<:OtherFactor}, N::Int=1) = (rand(cfo.factor.Z,N), rand())\n\nfunction (cfo::CalcFactor{<:OtherFactor})( z,\n second_val,\n x1,\n x2 )\n #\n # @assert length(z) == 2\n # not doing anything with `second_val` but illustrating\n # not doing anything with `cfo.factor.userdata` either\n \n # the broadcast operators will automatically vectorize\n res = z .- (x2[1:2] .- x1[1:2])\n return res\nend","category":"page"},{"location":"principles/multiplyingDensities/#Principle:-Multiplying-Functions-(Python)","page":"Multiplying Functions (.py)","title":"Principle: Multiplying Functions (Python)","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"This example illustrates a central concept in Caesar.jl (and the multimodal-iSAM algorithm), whereby different probability belief functions are multiplied together. The true product between various likelihood beliefs is very complicated to compute, but good approximations exist. In addition, ZmqCaesar offers a ZMQ interface to the factor graph solution for multilanguage support. This example is a small subset that shows how to use the ZMQ infrastructure, but avoids the larger factor graph related calls.","category":"page"},{"location":"principles/multiplyingDensities/#Products-of-Infinite-Objects-(Functionals)","page":"Multiplying Functions (.py)","title":"Products of Infinite Objects (Functionals)","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Consider multiplying multiple belief density functions together, for example","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"f = f_1 × f_2 × f_3","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"which is a core operation required for solving the Chapman-Kolmogorov transit equations.","category":"page"},{"location":"principles/multiplyingDensities/#Direct-Julia-Calculation","page":"Multiplying Functions (.py)","title":"Direct Julia Calculation","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"The ApproxManifoldProducts.jl package (experimental) is meant to unify many on-manifold product operations, and can be called directly in Julia:","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"using ApproxManifoldProducts\n\nf1 = manikde!(ContinuousScalar, [randn()-3.0 for _ in 1:100])\nf2 = manikde!(ContinuousScalar, [randn()+3.0 for _ in 1:100])\n...\n\nf12 = 
manifoldProduct(ContinuousScalar, [f1;f2])","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Also see previous KernelDensityEstimate.jl.","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"To make Caesar.jl usable from other languages, a ZMQ server interface model has been developed which can also be used to test this principle functional product operation.","category":"page"},{"location":"principles/multiplyingDensities/#Not-Susceptible-to-Particle-Depletion","page":"Multiplying Functions (.py)","title":"Not Susceptible to Particle Depletion","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"The product process of say f1*f2 is not a importance sampling procedure that is commonly used in particle filtering, but instead a more advanced Bayesian inference process based on a wide variety of academic literature. The KernelDensityEstimate method is a stochastic method, what active research is looking into deterministic homotopy/continuation methods.","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"The easy example that demonstrates that particle depletion is avoided here, is where f1 and f2 are represented by well separated and evenly weighted samples – the Bayesian inference 'product' technique efficiently produces new (evenly weighted) samples for f12 somewhere in between f1 and f2, but clearly not overlapping the original population of samples used for f1 and f2. In contrast, conventional particle filtering measurement updates would have \"de-weighted\" particles of either input function and then be rejected during an eventual resampling step, thereby depleting the sample population.","category":"page"},{"location":"principles/multiplyingDensities/#Starting-the-ZMQ-server","page":"Multiplying Functions (.py)","title":"Starting the ZMQ server","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Caesar.jl provides a startup script for a default ZMQ instance. Start a server and allow precompilations to finish, as indicated by a printout message \"waiting to receive...\". 
More details here.","category":"page"},{"location":"principles/multiplyingDensities/#Functional-Products-via-Python","page":"Multiplying Functions (.py)","title":"Functional Products via Python","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Clone the Python GraffSDK.py code here and look at the product.py file.","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"import sys\nsys.path.append('..')\n\nimport numpy as np\nfrom graff.Endpoint import Endpoint\nfrom graff.Distribution.Normal import Normal\nfrom graff.Distribution.SampleWeights import SampleWeights\nfrom graff.Distribution.BallTreeDensity import BallTreeDensity\n\nfrom graff.Core import MultiplyDistributions\n\nimport matplotlib.pyplot as plt\n\nif __name__ == '__main__':\n e = Endpoint()\n\n e.Connect('tcp://192.168.0.102:5555')\n print(e.Status())\n\n N = 1000\n u1 = 0.0\n s1 = 10.0\n x1 = u1+s1*np.random.randn(N)\n\n u2 = 50.0\n s2 = 10.0\n x2 = u2+s2*np.random.randn(N)\n b1 = BallTreeDensity('Gaussian', np.ones(N), np.ones(N), x1)\n b2 = BallTreeDensity('Gaussian', np.ones(N), np.ones(N), x2)\n\n rep = MultiplyDistributions(e, [b1,b2])\n print(rep)\n x = np.array(rep['points'] )\n # plt.stem(x, np.ones(len(x)) )\n plt.hist(x, bins = int(len(x)/10.0), color= 'm')\n plt.hist(x1, bins = int(len(x)/10.0),color='r')\n plt.hist(x2, bins = int(len(x)/10.0),color='b')\n plt.show()\n\n e.Disconnect()","category":"page"},{"location":"principles/multiplyingDensities/#A-Basic-Factor-Graph-Product-Illustration","page":"Multiplying Functions (.py)","title":"A Basic Factor Graph Product Illustration","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Using the factor graph methodology, we can repeat the example by adding variable and two prior factors. This can be done directly in Julia (or via ZMQ in the further Python example below)","category":"page"},{"location":"principles/multiplyingDensities/#Products-of-Functions-(Factor-Graphs-in-Julia)","page":"Multiplying Functions (.py)","title":"Products of Functions (Factor Graphs in Julia)","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Directly in Julia:","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"using IncrementalInference\n\nfg = initfg()\n\naddVariable!(fg, :x0, ContinuousScalar)\naddFactor!(fg, [:x0], Prior(Normal(-3.0,1.0)))\naddFactor!(fg, [:x0], Prior(Normal(+3.0,1.0)))\n\nsolveTree!(fg)\n\n# plot the results\nusing KernelDensityEstimatePlotting\n\nplotKDE(getBelief(fg, :x0))","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Example figure:","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"

","category":"page"},{"location":"principles/multiplyingDensities/#Products-of-Functions-(Via-Python-and-ZmqCaesar)","page":"Multiplying Functions (.py)","title":"Products of Functions (Via Python and ZmqCaesar)","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"We repeat the example using Python and the ZMQ interface:","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"import sys\nsys.path.append('..')\n\nimport numpy as np\nfrom graff.Endpoint import Endpoint\nfrom graff.Distribution.Normal import Normal\nfrom graff.Distribution.SampleWeights import SampleWeights\nfrom graff.Distribution.BallTreeDensity import BallTreeDensity\n\nfrom graff.Core import MultiplyDistributions\n# NOTE: the Variable and Factor wrappers used below are assumed to be imported from the GraffSDK as well\n\n\nif __name__ == '__main__':\n \"\"\"\n\n \"\"\"\n e = Endpoint()\n\n e.Connect('tcp://127.0.0.1:5555')\n print(e.Status())\n\n # Add the first pose x0\n x0 = Variable('x0', 'ContinuousScalar')\n e.AddVariable(x0)\n\n # Add two unary Prior factors to x0, at -3.0 and +3.0 (cf. the Julia example above)\n prior = Factor('Prior', ['x0'], Normal(np.zeros((1,1))-3.0, np.eye(1)) )\n e.AddFactor(prior)\n prior = Factor('Prior', ['x0'], Normal(np.zeros((1,1))+3.0, np.eye(1)) )\n e.AddFactor(prior)","category":"page"},{"location":"examples/custom_relative_factors/#custom_relative_factor","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Required Brief description\nMyFactor struct Prior (<:AbstractPrior) or Relative (<:AbstractManifoldMinimize) factor definition\ngetManifold The manifold of the factor\n(cfo::CalcFactor{<:MyFactor}) Factor residual function\nOptional methods Brief description\ngetSample(cfo::CalcFactor{<:MyFactor}) Get a sample from the measurement model","category":"page"},{"location":"examples/custom_relative_factors/#Define-the-relative-struct","page":"Custom Relative Factor","title":"Define the relative struct","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Previously we looked at making a Custom Prior Factor. This section describes how to build relative factors. Relative factors introduce relative-only information between variables in the factor graph, and do not add any absolute information. For example, a rigid transform between two variables is a relative relationship, regardless of their common absolute position in the world.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Let's look at either the EuclidDistance or Pose2Pose2 factors as simple examples. First, create the uniquely named factor struct:","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"struct EuclidDistance{T <: IIF.SamplableBelief} <: IIF.AbstractManifoldMinimize\n Z::T\nend","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"New relative factors should either inherit from <:AbstractManifoldMinimize, <:AbstractRelativeMinimize, or <:AbstractRelativeRoots. These are all subtypes of <:AbstractRelative. 
There are only two abstract supertypes, <:AbstractPrior and <:AbstractRelative.","category":"page"},{"location":"examples/custom_relative_factors/#Summary-of-Sampling-Data-Representation","page":"Custom Relative Factor","title":"Summary of Sampling Data Representation","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Usage <:AbstractPrior <:AbstractRelative\ngetSample point p on Manifold tangent X at some p (e.g. identity)","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Usage \nsampleTangent tangent at point p or the identity element for groups\nrand / sample coordinates","category":"page"},{"location":"examples/custom_relative_factors/#Specialized-Dispatch-(getManifold,-getSample)","page":"Custom Relative Factor","title":"Specialized Dispatch (getManifold, getSample)","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Relative factors involve computation, and these computations must be performed on some manifold. Custom relative factors require that the getManifold function be overridden. Here two examples are given for reference:","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"# import override/specialize the multiple dispatch\nimport DistributedFactorGraphs: getManifold\n\n# two examples of existing functions in the standard libraries\nDFG.getManifold(::EuclidDistance) = Manifolds.TranslationGroup(1)\nDFG.getManifold(::Pose2Pose2) = Manifolds.SpecialEuclidean(2)","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Extending the getSample method for our EuclidDistance factor example is not required, since the default dispatch using field .Z <: SamplableBelief will already be able to sample the measurement – see Specialized getSample.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"One important note is that getSample for <:AbstractRelative factors should return measurement values as manifold tangent vectors – for computational efficiency reasons.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"If more advanced sampling is required, extend the getSample function. 
","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"function getSample(cf::CalcFactor{<:Pose2Pose2}) \n M = getManifold(cf.factor)\n ϵ = getPointIdentity(Pose2)\n X = sampleTangent(M, cf.factor.Z, ϵ)\n return X\nend","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"The return type for getSample is unrestricted, and will be passed to the residual function \"as-is\", but must return values representing a tangent vector for <:AbstractRelative","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"note: Note\nDefault dispatches in IncrementalInference will try use cf.factor.Z to samplePoint on manifold (for <:AbstractPrior) or sampleTangent (for <:AbstractRelative), which simplifies new factor definitions. If, however, you wish to build more complicated sampling processes, then simply define your own getSample(cf::CalcFactor{<:MyFactor}) function.","category":"page"},{"location":"examples/custom_relative_factors/#factor_residual_function","page":"Custom Relative Factor","title":"Factor Residual Function","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"The selection of <:IIF.AbstractManifoldMinimize, akin to earlier <:AbstractPrior, instructs IIF to find the minimum of the provided residual function. The residual function is used during inference to approximate the convolution of conditional beliefs from the approximate beliefs of the connected variables in the factor graph. Conceptually, the residual function is usually something akin to residual = measurement - prediction, but does not have to follow the exact recipe.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"The returned value (the factor measurement) from getSample will always be passed as the first argument (e.g. X) to the factor residual function. ","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"# first residual function example\n(cf::CalcFactor{<:EuclidDistance})(X, p, q) = X - norm(p .- q)\n\n# second residual function example\nfunction (cf::CalcFactor{<:Pose2Pose2})(X, p, q)\n M = getManifold(Pose2)\n q̂ = Manifolds.compose(M, p, exp(M, identity_element(M, p), X))\n Xc = vee(M, q, log(M, q, q̂))\n return Xc\nend","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"It is recommended to leave the incoming types unrestricted. If you must define the types, make sure to allow sufficient dispatch freedom (i.e. dispatch to concrete types) and not force operations to \"non-concrete\" types. Usage can be very case specific, and hence better to let Julia type-inference automation do the hard work of inferring the concrete types.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"note: Note\nAt present (2021) the residual function should return the residual value as a coordinate (not as tangent vectors or manifold points). 
Ongoing work is in progress, and likely to return residual values as manifold tangent vectors instead.","category":"page"},{"location":"examples/custom_relative_factors/#Serialization","page":"Custom Relative Factor","title":"Serialization","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Serialization of factors is also discussed in more detail at Standardized Factor Serialization.","category":"page"},{"location":"concepts/multilang/#Multilanguage-Interops","page":"Multi-Language Support","title":"Multilanguage Interops","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"The Caesar framework is not limited to direct Julia use. ","category":"page"},{"location":"concepts/multilang/#navabilitysdk","page":"Multi-Language Support","title":"NavAbility SDKs and APIs","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"The maintainers of Caesar.jl together with NavAbility.io are developing a standardized SDK / API for much easier multi-language / multi-access use of the solver features. The Documentation for the NavAbilitySDK's can be found here.","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Contact info@navability.io for more information.","category":"page"},{"location":"concepts/multilang/#Static,-Shared-Object-.so-Compilation","page":"Multi-Language Support","title":"Static, Shared Object .so Compilation","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"See Compiling Binaries.","category":"page"},{"location":"concepts/multilang/#ROS-Integration","page":"Multi-Language Support","title":"ROS Integration","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"See ROS Integration.","category":"page"},{"location":"concepts/multilang/#Python-Direct","page":"Multi-Language Support","title":"Python Direct","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"For completeness, another design pattern is to wrap Julia packages for direct access from python, see SciML/diffeqpy as example.","category":"page"},{"location":"concepts/multilang/#[OUTDATED]-ZMQ-Messaging-Interface","page":"Multi-Language Support","title":"[OUTDATED] ZMQ Messaging Interface","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Caesar.jl has a ZMQ messaging interface (interested can see code here) that allows users to interact with the solver code base in a variety of ways. 
The messaging interface is not meant to replace static .so library file compilation but rather provide a more versatile and flexible development strategy.","category":"page"},{"location":"concepts/multilang/#Starting-the-Caesar-ZMQ-Navigation-Server","page":"Multi-Language Support","title":"Starting the Caesar ZMQ Navigation Server","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Start the Caesar.ZmqCaesar server in a Julia session with a few process cores and full optimization:","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"julia -p4 -O3","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Then run the following commands, and note these steps have also been scripted here:","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"#import the required modules\nusing Caesar, Caesar.ZmqCaesar\n\n# create empty factor graph and config objects\nfg = initfg()\nconfig = Dict{String, String}()\nzmqConfig = ZmqServer(fg, config, true, \"tcp://*:5555\");\n\n# Start the server over ZMQ\nstart(zmqConfig)\n\n# give the server a minute to start up ...","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"The current tests are a good place to see some examples of the current interfacing functions. Feel free to change the ZMQ interface for to any of the ZMQ supported modes of data transport, such as Interprocess Communication (IPC) vs. TCP.","category":"page"},{"location":"concepts/multilang/#Alternative-Methods","page":"Multi-Language Support","title":"Alternative Methods","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Interfacing from languages like Python may also be achieved using PyCall.jl although little work has been done in the Caesar.jl framework to explore this path. 
Julia is itself interactive/dynamic and has plenty of line-by-line and Integrated Development Environment support – consider trying Julia for your application.","category":"page"},{"location":"dev/internal_fncs/#Good-to-know","page":"Internal Functions","title":"Good to know","text":"","category":"section"},{"location":"dev/internal_fncs/#Conditional-Multivariate-Normals","page":"Internal Functions","title":"Conditional Multivariate Normals","text":"","category":"section"},{"location":"dev/internal_fncs/","page":"Internal Functions","title":"Internal Functions","text":"using Distributions\nusing LinearAlgebra\n\n##\n\n# P(A|B)\n\n# build a random symmetric positive-definite joint covariance\nΣab = 0.2*randn(3,3)\nΣab += Σab'\nΣab += diagm([1.0;1.0;1.0])\n\nμ_ab = [10.0;0.0;-1.0]\nμ_1 = μ_ab[1:1]\nμ_2 = μ_ab[2:3]\n\nΣ_11 = Σab[1:1,1:1]\nΣ_12 = Σab[1:1,2:3]\nΣ_21 = Σab[2:3,1:1]\nΣ_22 = Σab[2:3,2:3]\n\n##\n\n# P(A|B) = P(A,B) / P(B)\nP_AB = MvNormal(μ_ab, Σab)   # joint\nP_B = MvNormal(μ_2, Σ_22)    # marginal over B (the evidence)\n\n# conditional mean and covariance via the Schur complement\nμ_(b) = μ_1 + Σ_12*Σ_22^(-1)*(b-μ_2)\nΣ_ = Σ_11 - Σ_12*Σ_22^(-1)*Σ_21\n\nP_AB_B(a,b) = pdf(P_AB, [a;b]) / pdf(P_B, b)\nP_A_B(a,b; mv = MvNormal(μ_(b), Σ_)) = pdf(mv, a)\n\n##\n\n# probability density: p(a) = P(A=a | B=b), both forms should agree\n@show P_A_B([1.;],[0.;0.])\n@show P_AB_B([1.;],[0.;0.])","category":"page"},{"location":"dev/internal_fncs/#Various-Internal-Function-Docs","page":"Internal Functions","title":"Various Internal Function Docs","text":"","category":"section"},{"location":"dev/internal_fncs/","page":"Internal Functions","title":"Internal Functions","text":"_solveCCWNumeric!","category":"page"},{"location":"dev/internal_fncs/#IncrementalInference._solveCCWNumeric!","page":"Internal Functions","title":"IncrementalInference._solveCCWNumeric!","text":"_solveCCWNumeric!(ccwl; ...)\n_solveCCWNumeric!(ccwl, _slack; perturb)\n\n\nSolve free variable x by root finding residual function fgr.usrfnc(res, x). This is the penultimate step before calling numerical operations to move actual estimates, which is done by an internally created lambda function.\n\nNotes\n\nAssumes cpt_.p is already set to desired X decision variable dimensions and size. 
\nAssumes only ccw.particleidx will be solved for.\nA small random (off-manifold) perturbation is used to prevent trivial solver cases, division by zero, etc.\nThe perturb is necessary for NLsolve (obsolete) cases, and values smaller than 1e-10 will result in test failure.\nAlso incorporates the active hypo lookup.\n\nDevNotes\n\nTODO testshuffle is now obsolete, should be removed\nTODO perhaps consolidate perturbation with inflation or nullhypo\n\n\n\n\n\n","category":"function"},{"location":"dev/wiki/#Developers-Documentation","page":"Wiki Pointers","title":"Developers Documentation","text":"","category":"section"},{"location":"dev/wiki/#High-Level-Requirements","page":"Wiki Pointers","title":"High Level Requirements","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"Wiki to formalize some of the overall objectives.","category":"page"},{"location":"dev/wiki/#Standardizing-the-API,-verbNoun-Definitions:","page":"Wiki Pointers","title":"Standardizing the API, verbNoun Definitions:","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"The API derives from a set of standard definitions for verbs and nouns; please see the developer wiki regarding these definitions.","category":"page"},{"location":"dev/wiki/#DistributedFactorGraphs.jl-Docs","page":"Wiki Pointers","title":"DistributedFactorGraphs.jl Docs","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"These are more in-depth developer docs, such as the lower-level data management API.","category":"page"},{"location":"dev/wiki/#Design-Wiki,-Data-and-Architecture","page":"Wiki Pointers","title":"Design Wiki, Data and Architecture","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"More developer zone material will be added here in the future, but for the time being check out the Caesar Wiki.","category":"page"},{"location":"dev/wiki/#Tree-and-CSM-References","page":"Wiki Pointers","title":"Tree and CSM References","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"Major upgrades to how the tree and CSM work are tracked in IIF issue 889.","category":"page"},{"location":"dev/wiki/#Coding-Templates","page":"Wiki Pointers","title":"Coding Templates","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"We've started to organize useful coding templates that are not available elsewhere (such as JuliaDocs) in a more local developers wiki:","category":"page"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"Coding Templates Wiki I\nCoding Templates Wiki II","category":"page"},{"location":"dev/wiki/#Shortcuts-for-vscode-IDE","page":"Wiki Pointers","title":"Shortcuts for vscode IDE","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"See wiki","category":"page"},{"location":"dev/wiki/#Parametric-Solve-Whiteboard","page":"Wiki Pointers","title":"Parametric Solve Whiteboard","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"https://github.com/JuliaRobotics/IncrementalInference.jl/wiki/Parametric-Solve-Whiteboard","category":"page"},{"location":"dev/wiki/#Early-PoC-work-on-Tree-based-Initialization","page":"Wiki Pointers","title":"Early PoC work on Tree based 
Initialization","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"https://github.com/JuliaRobotics/IncrementalInference.jl/wiki/Tree-Based-Initialization","category":"page"},{"location":"dev/wiki/#Variable-Ordering-Links","page":"Wiki Pointers","title":"Variable Ordering Links","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"Wiki for variable ordering links.","category":"page"},{"location":"examples/using_images/#images_and_fiducials","page":"Images and AprilTags","title":"Images and Fiducials","text":"","category":"section"},{"location":"examples/using_images/#AprilTags","page":"Images and AprilTags","title":"AprilTags","text":"","category":"section"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"One common use case in SLAM is AprilTags.jl. Please see that repo for documentation on detecting tags in images. Note that Caesar.jl has a few built-in tools for working with Images.jl too.","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"using AprilTags\nusing Images, Caesar","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"Which immediately enables a new factor specifically developed for using AprilTags in a factor graph:","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"Caesar.Pose2AprilTag4Corners","category":"page"},{"location":"examples/using_images/#Caesar.Pose2AprilTag4Corners","page":"Images and AprilTags","title":"Caesar.Pose2AprilTag4Corners","text":"struct Pose2AprilTag4Corners{T<:(SamplableBelief), F<:Function} <: AbstractManifoldMinimize\n\nSimplified constructor type to convert a 4-corner AprilTag detection into a Pose2Pose2 factor for use in 2D\n\nNotes\n\nCoordinate frames are:\nassume robotics body frame is xyz <==> fwd-lft-up\nassume AprilTags pose is xyz <==> rht-dwn-fwd\nassume camera frame is xyz <==> rht-dwn-fwd\nassume Images.jl frame is row-col <==> i-j <==> dwn-rht\nHelper constructor uses f_width, f_height, c_width, c_height, s to build K, \nsetting K will overrule f_width, f_height, c_width, c_height, s.\nFinding preimage from deconv measurement sample idx in place of MvNormal mean:\nsee generateCostAprilTagsPreimageCalib for details.\n\nExample\n\n# bring in the packages\nusing AprilTags, Caesar, FileIO\n\n# the size of the tag, i.e. the outer length of each side of the black square\ntaglength = 0.15\n\n# load the image\nimg = load(\"photo.jpg\")\n\n# the image size\nwidth, height = size(img)\n# auto-guess `f_width=height, c_width=round(Int,width/2), c_height=round(Int, height/2)`\n\ndetector = AprilTagDetector()\ntags = detector(img)\n\n# new factor graph with Pose2 `:x0` and a Prior.\nfg = generateGraph_ZeroPose(varType=Pose2)\n\n# use a construction helper to add factors to all the tags\nfor tag in tags\n tagSym = Symbol(\"tag$(tag.id)\")\n exists(fg, tagSym) ? 
nothing : addVariable!(fg, tagSym, Pose2)\n pat = Pose2AprilTag4Corners(corners=tag.p, homography=tag.H, taglength=taglength)\n addFactor!(fg, [:x0; tagSym], pat)\nend\n\n# free AprilTags library memory\nfreeDetector!(detector)\n\nDevNotes\n\nTODO IIF will get plumbing to combine many of preimage obj terms into single calibration search\n\nRelated\n\nAprilTags.detect, PackedPose2AprilTag4Corners, generateCostAprilTagsPreimageCalib\n\n\n\n\n\n","category":"type"},{"location":"examples/using_images/#Using-Images.jl","page":"Images and AprilTags","title":"Using Images.jl","text":"","category":"section"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"The Caesar.jl ecosystem supports use of the JuliaImages/Images.jl suite of packages. Please see the documentation there for the wealth of features implemented.","category":"page"},{"location":"examples/using_images/#Handy-Notes","page":"Images and AprilTags","title":"Handy Notes","text":"","category":"section"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"Converting between images and PNG format:","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"bytes = Caesar.toFormat(format\"PNG\", img)","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"note: Note\nMore details to follow.","category":"page"},{"location":"examples/using_images/#Images-enables-ScatterAlign","page":"Images and AprilTags","title":"Images enables ScatterAlign","text":"","category":"section"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"See the point cloud alignment page for details on ScatterAlignPose.","category":"page"},{"location":"concepts/interacting_fgs/#Factor-Graph-as-a-Whole","page":"Interact w Graphs","title":"Factor Graph as a Whole","text":"","category":"section"},{"location":"concepts/interacting_fgs/#Saving-and-Loading","page":"Interact w Graphs","title":"Saving and Loading","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Assuming some factor graph object has been constructed by hand or automation, it is often very useful to be able to store that factor graph to file for later loading, solving, analysis, etc. Caesar.jl provides such functionality through easy saving and loading. To save a factor graph, simply do:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"saveDFG(\"/somewhere/myfg\", fg)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"saveDFG","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.saveDFG","page":"Interact w Graphs","title":"DistributedFactorGraphs.saveDFG","text":"saveDFG(folder, dfg; saveMetadata)\n\n\nSave a DFG to a folder. Will create/overwrite folder if it exists.\n\nDevNotes:\n\nTODO remove compress kwarg.\n\nExample\n\nusing DistributedFactorGraphs, IncrementalInference\n# Create a DFG - can make one directly, e.g. GraphsDFG{NoSolverParams}() or use IIF:\ndfg = initfg()\n# ... 
Add stuff to graph using either IIF or DFG:\nv1 = addVariable!(dfg, :a, ContinuousScalar, tags = [:POSE], solvable=0)\n# Now save it:\nsaveDFG(dfg, \"/tmp/saveDFG.tar.gz\")\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Similarly, in the same or a new Julia context, you can load a factor graph object:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"# using Caesar\nfg_ = loadDFG(\"/somewhere/myfg\")","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"loadDFG\nloadDFG!","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.loadDFG","page":"Interact w Graphs","title":"DistributedFactorGraphs.loadDFG","text":"loadDFG(file)\n\n\nConvenience graph loader into a default LocalDFG.\n\nSee also: loadDFG!, saveDFG\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.loadDFG!","page":"Interact w Graphs","title":"DistributedFactorGraphs.loadDFG!","text":"loadDFG!(\n dfgLoadInto,\n dst;\n overwriteDFGMetadata,\n useDeprExtract\n)\n\n\nLoad a DFG from a saved folder.\n\nExample\n\nusing DistributedFactorGraphs, IncrementalInference\n# Create a DFG - can make one directly, e.g. GraphsDFG{NoSolverParams}() or use IIF:\ndfg = initfg()\n# Load the graph\nloadDFG!(dfg, \"/tmp/savedgraph.tar.gz\")\n# Use the DFG as you do normally.\nls(dfg)\n\nSee also: loadDFG, saveDFG\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"note: Note\nJulia natively provides a direct in-memory deepcopy function for making duplicate objects if you wish to keep a backup of the factor graph, e.g. fg2 = deepcopy(fg)","category":"page"},{"location":"concepts/interacting_fgs/#Adding-an-EntryData-Blob-store","page":"Interact w Graphs","title":"Adding an Entry=>Data Blob store","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"A later part of the documentation will show how to include an Entry=>Data blob store.","category":"page"},{"location":"concepts/interacting_fgs/#querying_graph","page":"Interact w Graphs","title":"Querying the Graph","text":"","category":"section"},{"location":"concepts/interacting_fgs/#List-Variables:","page":"Interact w Graphs","title":"List Variables:","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"A quick summary of the variables in the factor graph can be retrieved with:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"# List variables\nls(fg)\n# List factors attached to x0\nls(fg, :x0)\n# TODO: Provide an overview of getVal, getVert, getBW, getBelief, etc.","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"It is possible to filter the listing with a Regex string:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"ls(fg, r\"x\\d\")","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w 
Graphs","text":"ls","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.ls","page":"Interact w Graphs","title":"DistributedFactorGraphs.ls","text":"ls(dfg; ...)\nls(dfg, regexFilter; tags, solvable)\n\n\nList the DFGVariables in the DFG. Optionally specify a label regular expression to retrieves a subset of the variables. Tags is a list of any tags that a node must have (at least one match).\n\nNotes:\n\nReturns Vector{Symbol}\n\n\n\n\n\nls(dfg; ...)\nls(dfg, node; solvable)\n\n\nRetrieve a list of labels of the immediate neighbors around a given variable or factor.\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"unsorted = intersect(ls(fg, r\"x\"), ls(fg, Pose2)) # by regex\n\n# sorting in most natural way (as defined by DFG)\nsorted = sortDFG(unsorted)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"sortDFG","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.sortDFG","page":"Interact w Graphs","title":"DistributedFactorGraphs.sortDFG","text":"sortDFG(vars; by, kwargs...)\n\n\nConvenience wrapper for Base.sort. Sort variable (factor) lists in a meaningful way (by timestamp, label, etc), for example [:april;:x1_3;:x1_6;] Defaults to sorting by timestamp for variables and factors and using natural_lt for Symbols. See Base.sort for more detail.\n\nNotes\n\nNot fool proof, but does better than native sort.\n\nExample\n\nsortDFG(ls(dfg)) sortDFG(ls(dfg), by=getLabel, lt=natural_lt)\n\nRelated\n\nls, lsf\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#List-Factors:","page":"Interact w Graphs","title":"List Factors:","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"unsorted = lsf(fg)\nunsorted = ls(fg, Pose2Point2BearingRange)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"or using the tags (works for variables too):","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"lsf(fg, tags=[:APRILTAGS;])","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"lsf\nlsfPriors","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.lsf","page":"Interact w Graphs","title":"DistributedFactorGraphs.lsf","text":"lsf(dfg; ...)\nlsf(dfg, regexFilter; tags, solvable)\n\n\nList the DFGFactors in the DFG. 
Optionally specify a label regular expression to retrieve a subset of the factors.\n\nNotes\n\nReturn Vector{Symbol}\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.lsfPriors","page":"Interact w Graphs","title":"DistributedFactorGraphs.lsfPriors","text":"lsfPriors(dfg)\n\n\nReturn vector of prior factor symbol labels in factor graph dfg.\n\nNotes:\n\nReturns Vector{Symbol}\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"There are a variety of functions to query the factor graph; please refer to the Function Reference for details, and note that many functions still need to be added to this documentation.","category":"page"},{"location":"concepts/interacting_fgs/#Extracting-a-Subgraph","page":"Interact w Graphs","title":"Extracting a Subgraph","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Sometimes it is useful to make a deepcopy of a segment of the factor graph for some purpose:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"sfg = buildSubgraph(fg, [:x1;:x2;:l7], 1)","category":"page"},{"location":"concepts/interacting_fgs/#Extracting-Belief-Results-(and-PPE)","page":"Interact w Graphs","title":"Extracting Belief Results (and PPE)","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Once you have solved the graph, you can review the full marginal with:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"X0 = getBelief(fg, :x0)\n# Evaluate the marginal density function just for fun at [0.0, 0, 0].\nX0(zeros(3,1))","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"This object is currently a Kernel Density, which contains kernels at specific points on the associated manifold. These kernel locations can be retrieved with:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"X0pts = getPoints(X0)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getBelief","category":"page"},{"location":"concepts/interacting_fgs/#IncrementalInference.getBelief","page":"Interact w Graphs","title":"IncrementalInference.getBelief","text":"getBelief(vnd)\n\n\nGet a ManifoldKernelDensity estimate from variable node data.\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#Parametric-Point-Estimates-(PPE)","page":"Interact w Graphs","title":"Parametric Point Estimates (PPE)","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Since Caesar.jl is built around each variable's state being estimated as a total marginal posterior belief, it is often useful to get the equivalent parametric point estimate from the belief. 
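For intuition only, on a purely Euclidean variable such a point estimate could be approximated by hand from the belief samples; this is a rough sketch (the label :x1 is assumed here to be a Euclidean type such as Position{2}), not the method the library uses internally:\n\nusing Statistics\n\nX1 = getBelief(fg, :x1)                   # assumes :x1 is a Euclidean (e.g. Position{2}) variable\npts = getPoints(X1)                       # vector of kernel-center coordinate vectors\nnaivePPE = mean(hcat(pts...); dims=2)     # simple per-dimension arithmetic mean\n\nOn-manifold variables (e.g. Pose2) need a proper manifold mean instead, which is what the library computes for you. 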
Many of these computations are already done by the inference library and available via the various getPPE methods, e.g.:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getPPE(fg, :l3)\ngetPPESuggested(fg, :l5)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"There are values for mean, max, or hybrid combinations.","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getPPE\ncalcPPE","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.getPPE","page":"Interact w Graphs","title":"DistributedFactorGraphs.getPPE","text":"getPPE(vari)\ngetPPE(vari, solveKey)\n\n\nGet the parametric point estimate (PPE) for a variable in the factor graph.\n\nNotes\n\nDefaults on keywords solveKey and method\n\nRelated\n\ngetMeanPPE, getMaxPPE, getKDEMean, getKDEFit, getPPEs, getVariablePPEs\n\n\n\n\n\ngetPPE(v)\ngetPPE(v, ppekey)\n\n\nGet the parametric point estimate (PPE) for a variable in the factor graph for a given solve key.\n\nNotes\n\nDefaults on keywords solveKey and method\n\nRelated getPPEMean, getPPEMax, updatePPE!, mean(BeliefType)\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#IncrementalInference.calcPPE","page":"Interact w Graphs","title":"IncrementalInference.calcPPE","text":"calcPPE(var; ...)\ncalcPPE(var, varType; ppeType, solveKey, ppeKey)\n\n\nGet the ParametricPointEstimates (based on full marginal belief estimates) of a variable in the distributed factor graph. Calculate new Parametric Point Estimates for a given variable.\n\nDevNotes\n\nTODO update for manifold subgroups.\nTODO standardize after AMP3D\n\nRelated\n\ngetPPE, setPPE!, getVariablePPE\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#Getting-Many-Marginal-Samples","page":"Interact w Graphs","title":"Getting Many Marginal Samples","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"It is also possible to draw more samples from the above belief objects:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"pts = rand(X0, 200)","category":"page"},{"location":"concepts/interacting_fgs/#build_manikde","page":"Interact w Graphs","title":"Building On-Manifold KDEs","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"These kernel density belief objects can be constructed from points as follows:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"X0_ = manikde!(Pose2, pts)","category":"page"},{"location":"concepts/interacting_fgs/#Logging-Output-(Unique-Folder)","page":"Interact w Graphs","title":"Logging Output (Unique Folder)","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Each new factor graph is designated a unique folder in /tmp/caesar. This is usually used for debugging or large-scale test analysis. Sometimes it may be useful for the user to also use this temporary location. 
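For example, intermediate artifacts can be stashed in that same folder, as in this small sketch (the file name and the use of the Serialization stdlib are illustrative only; the logpath field is described just below):\n\nusing Serialization\n\nlogdir = getSolverParams(fg).logpath                              # unique folder for this graph\nserialize(joinpath(logdir, \"x0_belief.jls\"), getBelief(fg, :x0))  # illustrative file name 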
The location is stored in the SolverParams:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getSolverParams(fg).logpath","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"The functions of interest are:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getLogPath\njoinLogPath","category":"page"},{"location":"concepts/interacting_fgs/#IncrementalInference.getLogPath","page":"Interact w Graphs","title":"IncrementalInference.getLogPath","text":"getLogPath(opt)\n\n\nGet the folder location where debug and solver information is recorded for a particular factor graph.\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#IncrementalInference.joinLogPath","page":"Interact w Graphs","title":"IncrementalInference.joinLogPath","text":"joinLogPath(opt, str)\n\n\nAppend str onto factor graph log path as convenience function.\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"note: Note\nA useful tip for doing large-scale processing might be to reduce the number of write operations to the solid-state drive (which would otherwise go to the default location /tmp/caesar) by simply adding a symbolic link to a USB drive or SD card, perhaps similar to: cd /tmp\nmkdir -p /media/MYFLASHDRIVE/caesar\nln -s /media/MYFLASHDRIVE/caesar caesar","category":"page"},{"location":"concepts/interacting_fgs/#Other-Useful-Functions","page":"Interact w Graphs","title":"Other Useful Functions","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getFactorDim\ngetManifold","category":"page"},{"location":"concepts/interacting_fgs/#IncrementalInference.getFactorDim","page":"Interact w Graphs","title":"IncrementalInference.getFactorDim","text":"getFactorDim(w...) -> Any\n\n\nReturn the number of dimensions this factor vertex fc influences.\n\nDevNotes\n\nTODO document how this function handles partial dimensions\nCurrently a factor manifold is just what the measurement provides (i.e. bearing only would be dimension 1)\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.getManifold","page":"Interact w Graphs","title":"DistributedFactorGraphs.getManifold","text":"getManifold(_)\n\n\nInterface function to return the <:ManifoldsBase.AbstractManifold object of variableType<:InferenceVariable.\n\n\n\n\n\ngetManifold(mkd)\ngetManifold(mkd, asPartial)\n\n\nReturn the manifold on which this ManifoldKernelDensity is defined.\n\nDevNotes\n\nTODO currently ignores the .partial aspect (captured in parameter L)\n\n\n\n\n\n","category":"function"},{"location":"refs/literature/#Literature","page":"References","title":"Literature","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"Newly created page to list related references and additional literature pertaining to this package.","category":"page"},{"location":"refs/literature/#Direct-References","page":"References","title":"Direct References","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.1] Fourie, D., Leonard, J., Kaess, M.: \"A Nonparametric Belief Solution to the Bayes Tree\" IEEE/RSJ Intl. Conf. 
on Intelligent Robots and Systems (IROS), (2016).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.2] Fourie, D.: \"Multi-modal and Inertial Sensor Solutions for Navigation-type Factor Graphs\", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2017.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.3] Fourie, D., Claassens, S., Pillai, S., Mata, R., Leonard, J.: \"SLAMinDB: Centralized graph databases for mobile robotics\", IEEE Intl. Conf. on Robotics and Automation (ICRA), Singapore, 2017.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.4] Cheung, M., Fourie, D., Rypkema, N., Vaz Teixeira, P., Schmidt, H., and Leonard, J.: \"Non-Gaussian SLAM utilizing Synthetic Aperture Sonar\", Intl. Conf. On Robotics and Automation (ICRA), IEEE, Montreal, 2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.5] Doherty, K., Fourie, D., Leonard, J.: \"Multimodal Semantic SLAM with Probabilistic Data Association\", Intl. Conf. On Robotics and Automation (ICRA), IEEE, Montreal, 2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.6] Fourie, D., Vaz Teixeira, P., Leonard, J.: \"Non-parametric Mixed-Manifold Products using Multiscale Kernel Densities\", IEEE Intl. Conf. on Intelligent Robots and Systems (IROS), (2019),.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.7] Teixeira, P.N.V., Fourie, D., Kaess, M. and Leonard, J.J., 2019, September. \"Dense, sonar-based reconstruction of underwater scenes\". In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 8060-8066). IEEE.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.8] Fourie, D., Leonard, J.: \"Inertial Odometry with Retroactive Sensor Calibration\", 2015-2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.9] Koolen, T. and Deits, R., 2019. Julia for robotics: Simulation and real-time control in a high-level programming language. IEEE, Intl. Conference on Robotics and Automation, ICRA (2019).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.10] Fourie, D., Espinoza, A. T., Kaess, M., and Leonard, J. J., “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, Oulu, Finland, Springer Publishing.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.11] Fourie, D., Rypkema, N., Claassens, S., Vaz Teixeira, P., Fischell, E., and Leonard, J.J., \"Towards Real-Time Non-Gaussian SLAM for Underdetermined Navigation\", in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2020, Las Vegas, USA.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.12] J. Terblanche, S. Claassens and D. Fourie, \"Multimodal Navigation-Affordance Matching for SLAM,\" in IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7728-7735, Oct. 
2021, doi: 10.1109/LRA.2021.3098788. Also presented at, IEEE 17th International Conference on Automation Science and Engineering, August 2021, Lyon, France.","category":"page"},{"location":"refs/literature/#Important-References","page":"References","title":"Important References","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.1] Kaess, Michael, et al. \"iSAM2: Incremental smoothing and mapping using the Bayes tree\" The International Journal of Robotics Research (2011): 0278364911430419.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.2] Kaess, Michael, et al. \"The Bayes tree: An algorithmic foundation for probabilistic robot mapping.\" Algorithmic Foundations of Robotics IX. Springer, Berlin, Heidelberg, 2010. 157-173.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.3] Kschischang, Frank R., Brendan J. Frey, and Hans-Andrea Loeliger. \"Factor graphs and the sum-product algorithm.\" IEEE Transactions on information theory 47.2 (2001): 498-519.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.4] Dellaert, Frank, and Michael Kaess. \"Factor graphs for robot perception.\" Foundations and Trends® in Robotics 6.1-2 (2017): 1-139.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.5] Sudderth, E.B., Ihler, A.T., Isard, M., Freeman, W.T. and Willsky, A.S., 2010. \"Nonparametric belief propagation.\" Communications of the ACM, 53(10), pp.95-103","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.6] Paskin, Mark A. \"Thin junction tree filters for simultaneous localization and mapping.\" in Int. Joint Conf. on Artificial Intelligence. 2003.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.7] Farrell, J., and Matthew B.: \"The global positioning system and inertial navigation.\" Vol. 61. New York: Mcgraw-hill, 1999.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.8] Zarchan, Paul, and Howard Musoff, eds. Fundamentals of Kalman filtering: a practical approach. American Institute of Aeronautics and Astronautics, Inc., 2013.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.9] Rypkema, N. R.,: \"Underwater & Out of Sight: Towards Ubiquity in UnderwaterRobotics\", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.10] Vaz Teixeira, P.: \"Dense, Sonar-based Reconstruction of Underwater Scenes\", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.11] Hanebeck, Uwe D. 
\"FLUX: Progressive State Estimation Based on Zakai-type Distributed Ordinary Differential Equations.\" arXiv preprint arXiv:1808.02825 (2018).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.12] Muandet, Krikamol, et al. \"Kernel mean embedding of distributions: A review and beyond.\" Foundations and Trends® in Machine Learning 10.1-2 (2017): 1-141.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.13] Hsiao, M. and Kaess, M., 2019, May. \"MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree\". In 2019 International Conference on Robotics and Automation (ICRA) (pp. 1274-1280). IEEE.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.14] Arnborg, S., Corneil, D.G. and Proskurowski, A., 1987. \"Complexity of finding embeddings in a k-tree\". SIAM Journal on Algebraic Discrete Methods, 8(2), pp.277-284.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15a] Sola, J., Deray, J. and Atchuthan, D., 2018. \"A micro Lie theory for state estimation in robotics\". arXiv preprint arXiv:1812.01537, and tech report. And cheatsheet w/ suspected typos.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15b] Delleart F., 2012. Lie Groups for Beginners.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15c] Eade E., 2017 Lie Groups for 2D and 3D Transformations.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15d] Chirikjian, G.S., 2015. Partial bi-invariance of SE(3) metrics. Journal of Computing and Information Science in Engineering, 15(1).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15e] Pennec, X. and Lorenzi, M., 2020. Beyond Riemannian geometry: The affine connection setting for transformation groups. In Riemannian Geometric Statistics in Medical Image Analysis (pp. 169-229). Academic Press.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15f] Žefran, M., Kumar, V. and Croke, C., 1996, August. Choice of Riemannian metrics for rigid body kinematics. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 97584, p. V02BT02A030). American Society of Mechanical Engineers.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15g] Chirikjian, G.S. and Zhou, S., 1998. Metrics on motion and deformation of solid models.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.16] Kaess, M. and Dellaert, F., 2009. Covariance recovery from a square root information matrix for data association. Robotics and autonomous systems, 57(12), pp.1198-1210.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.17] Bishop, C.M., 2006. Pattern recognition and machine learning. New York: Springer. ISBN 978-0-387-31073-2.","category":"page"},{"location":"refs/literature/#Additional-References","page":"References","title":"Additional References","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.1] Duits, Remco, Erik J. Bekkers, and Alexey Mashtakov. 
\"Fourier Transform on the Homogeneous Space of 3D Positions and Orientations for Exact Solutions to Linear Parabolic and (Hypo-) Elliptic PDEs\". arXiv preprint arXiv:1811.00363 (2018).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.2] Mohamed, S., Rosca, M., Figurnov, M. and Mnih, A., 2019. \"Monte carlo gradient estimation in machine learning\". arXiv preprint arXiv:1906.10652.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.3] Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., Skinner, D., Ramadhan, A., Edelman, A., \"Universal Differential Equations for Scientific Machine Learning\", Archive online, DOI: 2001.04385.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.4] Boumal, Nicolas. An introduction to optimization on smooth manifolds. Available online, May, 2020.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.5] Relationship between the Hessianand Covariance Matrix forGaussian Random Variables, John Wiley & Sons","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.6] Pennec, Xavier. Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements, HAL Archive, 2011, Inria, France.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.7] Weber, P., Medina-Oliva, G., Simon, C., et al., 2012. Overview on Bayesian networks applications for dependability risk analysis and maintenance areas. Appl. Artif. Intell. 25 (4), 671e682. https://doi.org/10.1016/j.engappai.2010.06.002. Preprint PDF.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.8] Wang, H.R., Ye, L.T., Xu, X.Y., et al., 2010. Bayesian networks precipitation model based on hidden markov analysis and its application. Sci. China Technol. Sci. 53 (2), 539e547. https://doi.org/10.1007/s11431-010-0034-3.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.9] Mangelson, J.G., Dominic, D., Eustice, R.M. and Vasudevan, R., 2018, May. Pairwise consistent measurement set maximization for robust multi-robot map merging. In 2018 IEEE international conference on robotics and automation (ICRA) (pp. 2916-2923). IEEE.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.10] Bourgeois, F. and Lassalle, J.C., 1971. An extension of the Munkres algorithm for the assignment problem to rectangular matrices. Communications of the ACM, 14(12), pp.802-804.","category":"page"},{"location":"refs/literature/#Signal-Processing-(Beamforming-and-Channel-Deconvolution)","page":"References","title":"Signal Processing (Beamforming and Channel Deconvolution)","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[4.1] Van Trees, H.L., 2004. Optimum array processing: Part IV of detection, estimation, and modulation theory. John Wiley & Sons.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[4.2a] Dowling, D.R., 2013. \"Acoustic Blind Deconvolution and Unconventional Nonlinear Beamforming in Shallow Ocean Environments\". 
MICHIGAN UNIV ANN ARBOR DEPT OF MECHANICAL ENGINEERING.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[4.2b] Hossein Abadi, S., 2013. \"Blind deconvolution in multipath environments and extensions to remote source localization\", paper, thesis.","category":"page"},{"location":"refs/literature/#Contact-or-Tactile","page":"References","title":"Contact or Tactile","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[5.1] Suresh, S., Bauza, M., Yu, K.T., Mangelson, J.G., Rodriguez, A. and Kaess, M., 2021, May. Tactile SLAM: Real-time inference of shape and pose from planar pushing. In 2021 IEEE International Conference on Robotics and Automation (ICRA) (pp. 11322-11328). IEEE.","category":"page"},{"location":"introduction/#Introduction","page":"Introduction","title":"Introduction","text":"","category":"section"},{"location":"introduction/","page":"Introduction","title":"Introduction","text":"Caesar is an open-source software stack aimed at robotic localization and mapping, using non-Gaussian graphical model state-estimation techniques. The factor graph method is well suited to combining heterogeneous and ambiguous sensor data streams. The focus is predominantly on geometric/spatial/semantic estimation tasks related to simultaneous localization and mapping (SLAM). The software is also highly extensible and well suited to a variety of estimation/filtering-type tasks, especially in non-Gaussian/multimodal settings. Check out a brief description on why non-Gaussian / multi-modal data processing needs arise.","category":"page"},{"location":"introduction/#A-Few-Highlights","page":"Introduction","title":"A Few Highlights","text":"","category":"section"},{"location":"introduction/","page":"Introduction","title":"Introduction","text":"Caesar.jl addresses numerous issues that arise in prior SLAM solutions, including: ","category":"page"},{"location":"introduction/","page":"Introduction","title":"Introduction","text":"Distributed Factor Graph representation deeply-coupled with an on-Manifold probabilistic algebra language;\nLocalization using different algorithms:\nMM-iSAMv2\nParametric methods, including regular Gaussian or Max-Mixtures.\nOther multi-parametric and non-Gaussian algorithms are presently being implemented.\nSolving under-defined systems, \nInference with non-Gaussian measurements, \nStandard features for natively handling ambiguous data association and multi-hypotheses, \nNative multi-modal (hypothesis) representation in the factor-graph, see Data Association and Hypotheses:\nMulti-modal and non-parametric representation of constraints;\nGaussian distributions are but one of the many representations of measurement error;\nSimplifying bespoke factor development, \nCentralized (or peer-to-peer decentralized) factor-graph persistence, \ni.e. Federated multi-session/agent reduction.\nMulti-CPU inference.\nOut-of-library extendable for Custom New Variables and Factors;\nNatively supports legacy Gaussian parametric and max-mixtures solutions;\nLocal in-memory solving on the device as well as database-driven centralized solving (micro-service architecture);\nNatively supports Clique Recycling (i.e. 
fixed-lag out-marginalization) for continuous operation as well as off-line batch solving; see more at Using Incremental Updates (Clique Recycling I);\nNatively supports Dead Reckon Tethering;\nNatively supports Federated multi-session/agent solving;\nNative support for Entry=>Data blobs for storing large format data.\nMiddleware support, e.g. see the ROS Integration Page.","category":"page"},{"location":"concepts/available_varfacs/#variables_factors","page":"Variables/Factors","title":"Variables in Caesar.jl","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"You can check for the latest variable types by running the following in your terminal:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"using RoME, Caesar\n\nsubtypes(IIF.InferenceVariable)\n\n# variables already available\nIIF.getCurrentWorkspaceVariables()\n\n# factors already available\nIIF.getCurrentWorkspaceFactors()","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"The variables and factors in Caesar should be sufficient for a variety of robotic applications; however, users can easily extend the framework (without changing the core code). This can even be done out-of-library at runtime, after construction of a factor graph has already started! See Custom Variables and Custom Factors for more details.","category":"page"},{"location":"concepts/available_varfacs/#Basic-Variables","page":"Variables/Factors","title":"Basic Variables","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Default variables in IncrementalInference","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Position{N}","category":"page"},{"location":"concepts/available_varfacs/#IncrementalInference.Position","page":"Variables/Factors","title":"IncrementalInference.Position","text":"struct Position{N} <: InferenceVariable\n\nContinuous Euclidean variable of dimension N representing a Position in Cartesian space.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#2D-Variables","page":"Variables/Factors","title":"2D Variables","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"The current variable types are:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Point2\nPose2\nDynPoint2\nDynPose2","category":"page"},{"location":"concepts/available_varfacs/#RoME.Point2","page":"Variables/Factors","title":"RoME.Point2","text":"struct Point2 <: InferenceVariable\n\nXY Euclidean manifold variable node softtype.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2","page":"Variables/Factors","title":"RoME.Pose2","text":"struct Pose2 <: InferenceVariable\n\nPose2 is an SE(2) mechanization of two Euclidean translations and one Circular rotation, used for general 2D SLAM.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPoint2","page":"Variables/Factors","title":"RoME.DynPoint2","text":"struct DynPoint2 <: InferenceVariable\n\nDynamic point in 2D space with velocity components: x, y, dx/dt, 
dy/dt\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPose2","page":"Variables/Factors","title":"RoME.DynPose2","text":"struct DynPose2 <: InferenceVariable\n\nDynamic pose variable with velocity components: x, y, theta, dx/dt, dy/dt\n\nNote\n\nThe SE2E2_Manifold definition used currently is a hack to simplify the transition to Manifolds.jl, see #244 \nReplaced SE2E2_Manifold hack with ProductManifold(SpecialEuclidean(2), TranslationGroup(2)), confirm if it is correct.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#3D-Variables","page":"Variables/Factors","title":"3D Variables","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Point3\nPose3","category":"page"},{"location":"concepts/available_varfacs/#RoME.Point3","page":"Variables/Factors","title":"RoME.Point3","text":"struct Point3 <: InferenceVariable\n\nXYZ Euclidean manifold variable node softtype.\n\nExample\n\np3 = Point3()\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose3","page":"Variables/Factors","title":"RoME.Pose3","text":"struct Pose3 <: InferenceVariable\n\nPose3 is currently a Euler angle mechanization of three Euclidean translations and three Circular rotation.\n\nFuture:\n\nWork in progress on AMP3D for proper non-Euler angle on-manifold operations.\nTODO the AMP upgrade is aimed at resolving 3D to Quat/SE3/SP3 – current Euler angles will be replaced\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"note: Note\nPlease open an issue with JuliaRobotics/RoME.jl for specific requests, problems, or suggestions. Contributions are also welcome. There might be more variable types in Caesar/RoME/IIF not yet documented here.","category":"page"},{"location":"concepts/available_varfacs/#Factors-in-Caesar.jl","page":"Variables/Factors","title":"Factors in Caesar.jl","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"You can check for the latest factor types by running the following in your terminal:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"using RoME, Caesar\nprintln(\"- Singletons (priors): \")\nprintln.(sort(string.(subtypes(IIF.AbstractPrior))));\nprintln(\"- Pairwise (variable constraints): \")\nprintln.(sort(string.(subtypes(IIF.AbstractRelativeRoots))));\nprintln(\"- Pairwise (variable minimization constraints): \")\nprintln.(sort(string.(subtypes(IIF.AbstractRelativeMinimize))));","category":"page"},{"location":"concepts/available_varfacs/#Priors-(Absolute-Data)","page":"Variables/Factors","title":"Priors (Absolute Data)","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Defaults in IncrementalInference.jl:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Prior\nPartialPrior","category":"page"},{"location":"concepts/available_varfacs/#IncrementalInference.Prior","page":"Variables/Factors","title":"IncrementalInference.Prior","text":"struct Prior{T<:(SamplableBelief)} <: AbstractPrior\n\nDefault prior on all dimensions of a variable node in the factor graph. 
Prior is not recommended when non-Euclidean dimensions are used in variables.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#IncrementalInference.PartialPrior","page":"Variables/Factors","title":"IncrementalInference.PartialPrior","text":"struct PartialPrior{T<:(SamplableBelief), P<:Tuple} <: AbstractPrior\n\nPartial prior belief (absolute data) on any variable, given <:SamplableBelief and which dimensions of the intended variable.\n\nNotes\n\nIf using AMP.ManifoldKernelDensity, don't double partial. Only define the partial in this PartialPrior container. \nFuture TBD, consider using AMP.getManifoldPartial for more general abstraction.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Some of the most common priors (unary factors) in Caesar.jl/RoME.jl include:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"PriorPolar\nPriorPoint2\nPriorPose2\nPriorPoint3\nPriorPose3","category":"page"},{"location":"concepts/available_varfacs/#RoME.PriorPolar","page":"Variables/Factors","title":"RoME.PriorPolar","text":"struct PriorPolar{T1<:(SamplableBelief), T2<:(SamplableBelief)} <: AbstractPrior\n\nPrior belief on any Polar related variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPoint2","page":"Variables/Factors","title":"RoME.PriorPoint2","text":"struct PriorPoint2{T<:(SamplableBelief)} <: AbstractPrior\n\nDirection observation information of a Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPose2","page":"Variables/Factors","title":"RoME.PriorPose2","text":"struct PriorPose2{T<:(SamplableBelief)} <: AbstractPrior\n\nIntroduce direct observations on all dimensions of a Pose2 variable:\n\nExample:\n\nPriorPose2( MvNormal([10; 10; pi/6.0], Matrix(Diagonal([0.1;0.1;0.05].^2))) )\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPoint3","page":"Variables/Factors","title":"RoME.PriorPoint3","text":"struct PriorPoint3{T} <: AbstractPrior\n\nDirection observation information of a Point3 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPose3","page":"Variables/Factors","title":"RoME.PriorPose3","text":"struct PriorPose3{T<:(SamplableBelief)} <: AbstractPrior\n\nDirect observation information of Pose3 variable type.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#Relative-Likelihoods-(Relative-Data)","page":"Variables/Factors","title":"Relative Likelihoods (Relative Data)","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Defaults in IncrementalInference.jl:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"LinearRelative","category":"page"},{"location":"concepts/available_varfacs/#IncrementalInference.LinearRelative","page":"Variables/Factors","title":"IncrementalInference.LinearRelative","text":"struct LinearRelative{N, T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nDefault linear offset between two scalar variables.\n\nX_2 = X_1 + η_Z\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Existing n-ary factors in Caesar.jl/RoME.jl/IIF.jl 
include:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"PolarPolar\nPoint2Point2\nPose2Point2\nPose2Point2Bearing\nPose2Point2BearingRange\nPose2Point2Range\nPose2Pose2\nDynPoint2VelocityPrior\nDynPoint2DynPoint2\nVelPoint2VelPoint2\nPoint2Point2Velocity\nDynPose2VelocityPrior\nVelPose2VelPose2\nDynPose2Pose2\nPose3Pose3\nPriorPose3ZRP\nPose3Pose3XYYaw","category":"page"},{"location":"concepts/available_varfacs/#RoME.PolarPolar","page":"Variables/Factors","title":"RoME.PolarPolar","text":"struct PolarPolar{T1<:(SamplableBelief), T2<:(SamplableBelief)} <: AbstractRelativeMinimize\n\nLinear offset factor of IIF.SamplableBelief between two Polar variables.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Point2Point2","page":"Variables/Factors","title":"RoME.Point2Point2","text":"struct Point2Point2{D<:(SamplableBelief)} <: AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Point2","page":"Variables/Factors","title":"RoME.Pose2Point2","text":"struct Pose2Point2{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nBearing and Range constraint from a Pose2 to Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Point2Bearing","page":"Variables/Factors","title":"RoME.Pose2Point2Bearing","text":"struct Pose2Point2Bearing{B<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nSingle dimension bearing constraint from Pose2 to Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Point2BearingRange","page":"Variables/Factors","title":"RoME.Pose2Point2BearingRange","text":"mutable struct Pose2Point2BearingRange{B<:(SamplableBelief), R<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nBearing and Range constraint from a Pose2 to Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Point2Range","page":"Variables/Factors","title":"RoME.Pose2Point2Range","text":"struct Pose2Point2Range{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nRange only measurement from Pose2 to Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Pose2","page":"Variables/Factors","title":"RoME.Pose2Pose2","text":"struct Pose2Pose2{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nRigid transform between two Pose2's, assuming (x,y,theta).\n\nCalculated as:\n\nq̂ = exp_p(X_m)\nX = log_q(q̂)\nX^i = vee(q, X)\n\nwith: ℳ = SE(2) the Special Euclidean group,\np and q ∈ ℳ the two Pose2 points,\nthe measurement vector X_m ∈ T_p ℳ,\nand the error vector X ∈ T_q ℳ,\nX^i the coordinates of X.\n\nDevNotes\n\nMaybe with Manifolds.jl, {T <: IIF.SamplableBelief, S, R, P}\n\nRelated\n\nPose3Pose3, Point2Point2, MutablePose2Pose2Gaussian, DynPose2, IMUDeltaFactor\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPoint2VelocityPrior","page":"Variables/Factors","title":"RoME.DynPoint2VelocityPrior","text":"mutable struct DynPoint2VelocityPrior{T<:(SamplableBelief)} <: AbstractPrior\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPoint2DynPoint2","page":"Variables/Factors","title":"RoME.DynPoint2DynPoint2","text":"mutable struct DynPoint2DynPoint2{T<:(SamplableBelief)} <: 
AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.VelPoint2VelPoint2","page":"Variables/Factors","title":"RoME.VelPoint2VelPoint2","text":"mutable struct VelPoint2VelPoint2{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Point2Point2Velocity","page":"Variables/Factors","title":"RoME.Point2Point2Velocity","text":"mutable struct Point2Point2Velocity{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPose2VelocityPrior","page":"Variables/Factors","title":"RoME.DynPose2VelocityPrior","text":"mutable struct DynPose2VelocityPrior{T1, T2} <: AbstractPrior\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.VelPose2VelPose2","page":"Variables/Factors","title":"RoME.VelPose2VelPose2","text":"struct VelPose2VelPose2{T1<:(SamplableBelief), T2<:(SamplableBelief)} <: AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPose2Pose2","page":"Variables/Factors","title":"RoME.DynPose2Pose2","text":"mutable struct DynPose2Pose2{T<:(SamplableBelief)} <: AbstractRelativeMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose3Pose3","page":"Variables/Factors","title":"RoME.Pose3Pose3","text":"struct Pose3Pose3{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nRigid transform factor between two Pose3 compliant variables.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPose3ZRP","page":"Variables/Factors","title":"RoME.PriorPose3ZRP","text":"struct PriorPose3ZRP{T1<:(SamplableBelief), T2<:(SamplableBelief)} <: AbstractPrior\n\nPartial prior belief on Z, Roll, and Pitch of a Pose3.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose3Pose3XYYaw","page":"Variables/Factors","title":"RoME.Pose3Pose3XYYaw","text":"struct Pose3Pose3XYYaw{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nPartial factor between XY and Yaw of two Pose3 variables.\n\nwR2 = wR1*1R2 = wR1*(1Rψ*Rθ*Rϕ)\nwRz = wR1*1Rz\nzRz = wRz \\ wR(Δψ)\n\nM_R = SO(3)\nδ(α,β,γ) = vee(M_R, R_0, log(M_R, R_0, zRz))\n\nM = SE(3)\np0 = identity_element(M)\nδ(x,y,z,α,β,γ) = vee(M, p0, log(M, p0, zRz))\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":" ","category":"page"},{"location":"concepts/available_varfacs/#Extending-Caesar-with-New-Variables-and-Factors","page":"Variables/Factors","title":"Extending Caesar with New Variables and Factors","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"A question that frequently arises is how to design custom variables and factors to solve a specific type of graph. One strength of Caesar is the ability to incorporate new variables and factors at will. Please refer to Adding Factors for more information on creating your own factors.","category":"page"},{"location":"concepts/mmisam_alg/#Multimodal-incremental-Smoothing-and-Mapping-Algorithm","page":"Non-Gaussian Algorithm","title":"Multimodal incremental Smoothing and Mapping Algorithm","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"note: Note\nMajor refactoring of documentation under way 2020Q1. 
Much of the previous text has been repositioned and is being improved. See references for details and check back here for updates in the coming weeks.","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Caesar.jl uses an approximate sum-product inference algorithm (mmiSAM). A full description of how the algorithm works is still being added here; until then, see the related literature for more details.","category":"page"},{"location":"concepts/mmisam_alg/#Joint-Probability","page":"Non-Gaussian Algorithm","title":"Joint Probability","text":"","category":"section"},{"location":"concepts/mmisam_alg/#General-Factor-Graph-–-i.e.-non-Gaussian-and-multi-modal","page":"Non-Gaussian Algorithm","title":"General Factor Graph – i.e. non-Gaussian and multi-modal","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"(Image: mmfgbt)","category":"page"},{"location":"concepts/mmisam_alg/#Inference-on-Bayes/Junction/Elimination-Tree","page":"Non-Gaussian Algorithm","title":"Inference on Bayes/Junction/Elimination Tree","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"See tree solve video here.","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"\"Bayes/Junction","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"The algorithm combats the so-called curse-of-dimensionality on the basis of eight principles outlined in the thesis work \"Multimodal and Inertial Sensor Solutions to Navigation-type Factor Graphs\".","category":"page"},{"location":"concepts/mmisam_alg/#Chapman-Kolmogorov-(Belief-Propagation-/-Sum-product)","page":"Non-Gaussian Algorithm","title":"Chapman-Kolmogorov (Belief Propagation / Sum-product)","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"The main computational effort is to focus compute cycles on dominant modes exhibited by the data, by dropping low-likelihood modes (although not indefinitely) without sacrificing the accuracy of individual major features. ","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"D. Fourie, A. T. Espinoza, M. Kaess, and J. J. Leonard, “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, submitted, under review.","category":"page"},{"location":"concepts/mmisam_alg/#Focussing-Computation-on-Tree","page":"Non-Gaussian Algorithm","title":"Focussing Computation on Tree","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Link to new dedicated Bayes tree pages. 
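As a rough, illustrative sketch only (assuming the solveTree! usage shown elsewhere in these docs, and that solveTree! accepts a previously returned tree for recycling; the graph-growing step is left as a placeholder), the recycling described in the following sections is driven by handing the previous tree back into the next solve:

using RoME

fg = generateGraph_Hexagonal()   # any factor graph being built up incrementally
tree = solveTree!(fg)            # first full solve returns the Bayes tree

# ... add further variables and factors to fg here ...

tree = solveTree!(fg, tree)      # pass the old tree back so unaffected cliques can be recycled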
The following sections describe different elements of clique recycling.","category":"page"},{"location":"concepts/mmisam_alg/#Incremental-Updates","page":"Non-Gaussian Algorithm","title":"Incremental Updates","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Recycling computations similar to iSAM2, with option to complete future downward pass.","category":"page"},{"location":"concepts/mmisam_alg/#Fixed-Lag-operation-(out-marginalization)","page":"Non-Gaussian Algorithm","title":"Fixed-Lag operation (out-marginalization)","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Active user (likely) computational limits on message passing. Also mixed priority solving","category":"page"},{"location":"concepts/mmisam_alg/#Federated-Tree-Solution-(Multi-session/agent)","page":"Non-Gaussian Algorithm","title":"Federated Tree Solution (Multi session/agent)","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Tentatively see the multisession page.","category":"page"},{"location":"concepts/mmisam_alg/#Clique-State-Machine","page":"Non-Gaussian Algorithm","title":"Clique State Machine","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"The CSM is used to govern the inference process within a clique. A FunctionalStateMachine.jl implementation is used to allow for initialization / incremental-recycling / fixed-lag solving, and will soon support federated branch solving as well as unidirectional message passing for fixed-lead operations. See the following video for an auto-generated–-using csmAnimate–-concurrent clique solving example.","category":"page"},{"location":"concepts/mmisam_alg/#Sequential-Nested-Gibbs-Method","page":"Non-Gaussian Algorithm","title":"Sequential Nested Gibbs Method","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Current default inference method. See [Fourie et al., IROS 2016]","category":"page"},{"location":"concepts/mmisam_alg/#Convolution-Approximation-(Quasi-Deterministic)","page":"Non-Gaussian Algorithm","title":"Convolution Approximation (Quasi-Deterministic)","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Convolution operations are used to implement the numerical computation of the probabilistic chain rule:","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"P(A B) = P(A B)P(B)","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Proposal distributions are computed by means of (analytical or numerical – i.e. 
\"algebraic\") factor which defines a residual function:","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"delta S times Eta rightarrow mathcalR","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"where S times Eta is the domain such that theta_i in S eta sim P(Eta), and P(cdot) is a probability.","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Please follow, a more detailed description is on the convolutional computations page.","category":"page"},{"location":"concepts/mmisam_alg/#Stochastic-Product-Approx-of-Infinite-Functionals","page":"Non-Gaussian Algorithm","title":"Stochastic Product Approx of Infinite Functionals","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"See mixed-manifold products presented in the literature section.","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"writing in progress","category":"page"},{"location":"concepts/mmisam_alg/#Mixture-Parametric-Method","page":"Non-Gaussian Algorithm","title":"Mixture Parametric Method","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Work In Progress – deferred for progress on full functional methods, but likely to have Gaussian legacy algorithm with mixture model expansion added in the near future.","category":"page"},{"location":"concepts/mmisam_alg/#Chapman-Kolmogorov","page":"Non-Gaussian Algorithm","title":"Chapman-Kolmogorov","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Work in progress","category":"page"},{"location":"concepts/using_manifolds/#On-Manifold-Operations","page":"Using Manifolds.jl","title":"On-Manifold Operations","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Caesar.jl and libraries have adopted JuliaManifolds/Manifolds.jl as foundation for developing the algebraic operations used. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The Community has been developing high quality documentation for Manifolds.jl, and we encourage the interested reader to learn and use everything available there.","category":"page"},{"location":"concepts/using_manifolds/#Separate-Manifold-Beliefs-Page","page":"Using Manifolds.jl","title":"Separate Manifold Beliefs Page","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"See building a Manifold Kernel Density or for more information.","category":"page"},{"location":"concepts/using_manifolds/#Why-Manifolds.jl","page":"Using Manifolds.jl","title":"Why Manifolds.jl","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"There is much to be said about how and why Manifolds.jl is the right decision for building a next-gen factor graph solver. 
We believe the future will show that mathematicians are way ahead of the curve, and that adopting a manifold approach will pretty much be the only way to develop the required mathematical operations in Caesar.jl for the forseable future.","category":"page"},{"location":"concepts/using_manifolds/#Are-Manifolds-Difficult?-No.","page":"Using Manifolds.jl","title":"Are Manifolds Difficult? No.","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Do you need a math degree to be able to use Manifolds.jl? No you don't since Caesar.jl and related packages have already packaged many of the common functions and factors you need to get going. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"This page is meant to open the door for readers to learn more about how things work under the hood, and empower the Community to rapidly develop upon existing work. This page is also intended to show that the Caesar.jl related packages are being developed with strong focus on consolidation, single definition functionality, and serious cross discipline considerations.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"If you are looking for rapid help or more expertise on a particular issue, consider reaching out by opening Issues or connecting to the ongoing chats in the Slack Channel.","category":"page"},{"location":"concepts/using_manifolds/#What-Are-Manifolds","page":"Using Manifolds.jl","title":"What Are Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"If you are a newcomer to the term Manifold and want to learn more, fear not even though your first search results might be somewhat disorienting. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The rest of this page is meant to introduce the basics, and point you to handy resources. Caesar.jl and NavAbility support open Community and are upstreaming improvements to Manifolds.jl, including code updates and documentation improvements.","category":"page"},{"location":"concepts/using_manifolds/#'One-Page'-Summary-of-Manifolds","page":"Using Manifolds.jl","title":"'One Page' Summary of Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Imagine you have a sheet of paper and you draw with a pencil a short line segment on the page. Now draw a second line segment from the end of the first. That should be pretty easy on a flat surface, right?","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"When the piece of paper is lying flat on the table, you have a line in the Euclidean(2) manifold, and you can easily assign [x,y] coordinates to describe these lines or vectors. Note coordinates here is a precise technical term.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"If you roll the paper into a cyclinder... well now you have line segments on a cylindrical manifold. The question is, how to conduct mathematical operations concisely and consistently indepependent of the shape of your manifold? 
And, how to 'unroll' the paper for simple computations on a locally flat surface.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"How far can the math go before there just isn't a good recipe for writing down generic operations? Turns out a few smart people have been working to solve this and the keyword here is Manifold.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"If you are drinking some coffee right now, then you are moving the cup in Euclidean(3) space, that is you assume the motion is in flat coordinates [x;y;z]. A more technical way to say that is that the Euclidean manifold has zero curvature. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"What if you are concerned with the orientation of the cup too–-as in not spill the hot contents everywhere–-then you might actually want to work on the SpecialEuclidean(3) manifold – that is 3 degrees of translational freedom, and 3 degrees of rotational freedom. You might have heard of Lie Groups and Lie Algebras, well that is exactly it, Lie Groups are a special set of Group Manifolds and associated operations that are already supported by JuliaManifolds/Manifolds.jl.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Things are a little easier for a robot traveling around on a flat 2D surface. If your robot is moving around with coordinates xytheta, well then you are working with the coordinates of the SpecialEuclidean(2) manifold. There is more to say on how the coordinates xytheta get converted into the mathfrakse(2) Lie algebra, and that gets converted into a Lie Group element – i.e. (xy mathrmRotMat(theta)). More on that later.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Perhaps you are interested in relativistic effects where time as the fourth dimension is of interest, well then the Minkowski space provides Group and Manifold constructs for that – actually Minkowski falls under the supported Lorentz Manifolds.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The point here is that the math for drawing line segments in each of these manifolds above is almost exactly the same, thanks to the abstractions that have already been developed. 
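To make the SpecialEuclidean(2) discussion above concrete, here is a minimal sketch using generic Manifolds.jl operations (the numbers are arbitrary, and the exact exp/log convention used for a product group is worth checking, as noted under the questions below):

using Manifolds

M = SpecialEuclidean(2)
p = identity_element(M)     # a base point on the manifold (the group identity)
c = [1.0, 0.5, pi/6]        # user coordinates [x, y, theta]
X = hat(M, p, c)            # coordinates -> tangent vector (Lie algebra element)
q = exp(M, p, X)            # wrap the tangent vector onto the manifold to reach a new point
X_ = log(M, p, q)           # the logarithm recovers the tangent vector back at p
c_ = vee(M, p, X_)          # tangent vector -> coordinates, matching c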
And, many more powerful constructs exist which will become more apparent as you continue to work with Manifolds.","category":"page"},{"location":"concepts/using_manifolds/#7-Things-to-know-First","page":"Using Manifolds.jl","title":"7 Things to know First","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"As a robotics, navigation, or control person who wants to get started, you need to know what the following terms mean:","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Q1) What are manifold points, tangent vectors, and user coordinates,\nQ2) What does the logarithm map of a manifold do,\nQ3) What does the exponential map of a manifold do,\nQ4) What do the vee and hat operations do,\nQ5) What is the difference between Riemannian or Group manifolds,\nQ6) Is a retraction the same as the exponential map,\nQ7) Is a projection the same as a logarithm map,","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Know it sounds like a lot, but the point of this paragraph is that if you are able to answer these seven questions for yourself, then you will be empowered to venture into the math of manifolds much more easily. And, everything will begin to make sense. A lot of sense, to the point that you might agree with our assesment that JuliaManifolds/Manifolds.jl is the right direction for the future.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Although you will be able to find many answers for these seven questions in many places, our answers are listed at the bottom of this page.","category":"page"},{"location":"concepts/using_manifolds/#Manifold-Tutorials","page":"Using Manifolds.jl","title":"Manifold Tutorials","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The rest of this page is devoted to showing you how to use the math, write your own code to do new things beyond what Caesar.jl can already do. If you are willing to share any contributions, please do so by opening pull requests against the related repos.","category":"page"},{"location":"concepts/using_manifolds/#Using-Manifolds-in-Factors","page":"Using Manifolds.jl","title":"Using Manifolds in Factors","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The best way to show this is just dive straight into a factor that actually uses a Manifolds mechanization, and RoME.Pose2Pose2 is a fairly straight forward example. 
This factor gets used for rigid transforms on a 2D plane, with coordinates xytheta as alluded to above.","category":"page"},{"location":"concepts/using_manifolds/#A-Tutorial-on-Rotations","page":"Using Manifolds.jl","title":"A Tutorial on Rotations","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"note: Note\nWork in progress, Upstream Tutorial","category":"page"},{"location":"concepts/using_manifolds/#A-Tutorial-on-2D-Rigid-Transforms","page":"Using Manifolds.jl","title":"A Tutorial on 2D Rigid Transforms","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"note: Note\nWork in progress, Upstream Tutorial","category":"page"},{"location":"concepts/using_manifolds/#Existing-Manifolds","page":"Using Manifolds.jl","title":"Existing Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The most popular Manifolds used in Caesar.jl related packages are:","category":"page"},{"location":"concepts/using_manifolds/#Group-Manifolds","page":"Using Manifolds.jl","title":"Group Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"TranslationGroup(N) (future work will relax to Euclidean(N)).\nSpecialOrthogonal(N).\nSpecialEuclidean(N).\n_CircleEuclid LEGACY, TODO.\nAMP.SE2_E2 LEGACY, TODO.","category":"page"},{"location":"concepts/using_manifolds/#Riemannian-Manifolds-(Work-in-progress)","page":"Using Manifolds.jl","title":"Riemannian Manifolds (Work in progress)","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Sphere(N) WORK IN PROGRESS.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"note: Note\nCaesar.jl encourages the JuliaManifolds approach to defining new manifolds, and can readily be used for Caesar.jl related operations.","category":"page"},{"location":"concepts/using_manifolds/#Creating-a-new-Manifold","page":"Using Manifolds.jl","title":"Creating a new Manifold","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"JuliaManifolds.jl is designed to make it as easy as possible to define your own manifold and then get all the benefits of the Manifolds.jl ecosystem. Follow the documentation there to make your own manifold, which can then readily be used with all the features of both JuliaManifolds as well as the Caesar.jl related packages.","category":"page"},{"location":"concepts/using_manifolds/#seven_mani_answers","page":"Using Manifolds.jl","title":"Answers to 7 Questions","text":"","category":"section"},{"location":"concepts/using_manifolds/#Q1)-What-are-Point,-Tangents,-Coordinates","page":"Using Manifolds.jl","title":"Q1) What are Point, Tangents, Coordinates","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"A manifold mathcalM is a collection of points that together create the given space. Points are like round sprinkles on the donut. The representation of points will vary from manifold to manifold. Sometimes it is even possible to have different representations for the same point on a manifold. 
These are usually denoted as p.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"A tangent vector (we prefer tangents for clarity) is a vector X that emanates from a point on a manifold tangential to the manifold curvature. A vector lives in the tangent space of the manifold, a locally flat region around a point Xin T_p mathcalM. On the donut, imagine a rod-shaped sprinkle stuck along the tangent of the surface at a particular point p. The tangent space is the collection of all possible tangents at p. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Coordinates are a user-defined property that uses the Euclidean nature of the tangent space at point p to operate as a regular linear space. Coordinates are just a list of the independent coordinate dimensions of the tangent space values collected together. Read this part carefully, as it can easily be confused with a conventional tangent vector in a regular Euclidean space. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"For example, a tangent vector to the Euclidean(2) manifold, at the origin point (00) is what you likely are familiar with from school as a \"vector\" (not the coordinates, although that happens to be the same thing in the trivial case). For Euclidean space, a vector from point p of length xy looks like the line segment between points p and q on the underlying manifold. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"This trivial overlapping of \"vectors\" in the Euclidean Manifold, and in a tangent space around p, and coordinates for that tangent space, is no longer trivial when the manifold has curvature.","category":"page"},{"location":"concepts/using_manifolds/#Q2)-What-is-the-Logarithm-map","page":"Using Manifolds.jl","title":"Q2) What is the Logarithm map","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The logarithm X = logmap(M,p,q) computes, based at point p, the tangent vector X on the tangent plane T_pmathcalM from p. In other words, imagine a string following the curve of a manifold from p to q, pick up that string from q while holding p firm, until the string is flat against the tangent space emanating from p. The logarithm is the opposite of the exponential map. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Multiple logmap interpretations exist, for example in the case of SpecialEuclidean(N) there are multiple definitions for oplus and ominus, see [2.15]. When using a library, it is worth testing how logmap and expmap are computed (away from the identity element for Groups).","category":"page"},{"location":"concepts/using_manifolds/#Q3)-What-is-the-Exponential-map","page":"Using Manifolds.jl","title":"Q3) What is the Exponential map","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The exponential map does the opposite of the logarithm. Imagine a tangent vector X emanating from point p. 
The length and direction of X can be wrapped onto the curvature of the manifold to form a line on the manifold surface.","category":"page"},{"location":"concepts/using_manifolds/#Q4)-What-does-vee/hat-do","page":"Using Manifolds.jl","title":"Q4) What does vee/hat do","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"vee is an operation that converts a tangent vector representation into a coordinate representation. For example Lie algebra elements are tangent vector elements, so vee([0 -w; w 0]) = w. And visa versa for hat(w) = [0 -w; w 0], which goes from coordinates to tangent vectors.","category":"page"},{"location":"concepts/using_manifolds/#Q5)-What-Riemannian-vs.-Group-Manifolds","page":"Using Manifolds.jl","title":"Q5) What Riemannian vs. Group Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Groups are mathematical structures which often fit well inside the manifold way of working. For example in robotics, Lie Groups are popular under SpecialEuclidean(N) <: AbstractGroupManifold. Groups also have a well defined action. Most prominently for our usage, groups are sets of points for which there exists an identity point. Riemannian manifolds are more general than Lie groups, specifically Riemannian manifolds do not have an identity point. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"An easy example is that the Euclidean(N) manifold does not have an identity element, since what we know as 00 is actually a coordinate base point for the local tangent space, and which just happens to look the same as the underlying Euclidean(N) manifold. The TranslationGroup(N) exists as an additional structure over the Euclidean(N) space which has a defined identity element as well as a defined operations on points.","category":"page"},{"location":"concepts/using_manifolds/#Q6)-Retraction-vs.-Exp-map","page":"Using Manifolds.jl","title":"Q6) Retraction vs. Exp map","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Retractions are numerically efficient approximations to convert a tangent vector into a point on the manifold. The exponential map is the theoretically precise retraction, but may well be computationally expensive beyond the need for most applications.","category":"page"},{"location":"concepts/using_manifolds/#Q7)-Projection-vs.-Log-map","page":"Using Manifolds.jl","title":"Q7) Projection vs. Log map","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The term projection can be somewhat ambiguous between references. In Manifolds.jl, projections either project a point in the embedding to a point on the manifold, or a vector from the embedding onto a tangent space at a certain point. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Confusion, can easily happen between cases where there is no ambient space around a particular manifold. 
Then the term projection may be moot.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"In Manifolds.jl, an inverse retraction is an approximate logmap of a point up from the manifold onto a tangent space – i.e. not a projection. It is important not to confuse a point on the manifold as a point in the ambient space, when thinking about the term projection.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"It is best to make sure you know which one is being used in any particular situation.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"note: Note\nFor a slightly deeper dive into the relation between embedding, ambient space, and projections, see the background conversation here.","category":"page"},{"location":"examples/adding_variables_factors/#Variable/Factor-Considerations","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"","category":"section"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"A couple of important points:","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"You do not need to modify or insert your new code into Caesar/RoME/IncrementalInference source code libraries – they can be created and run anywhere on-the-fly!\nAs long as the factors exist in the working space when the solver is run, the factors are automatically used – this is possible due to Julia's multiple dispatch design\nCaesar.jl is designed to allow you to add new variables and factors to your own independent repository and incorporate them at will at compile-time or even run-time\nResidual function definitions for new factors types use a callable struct (a.k.a functor) architecture to simultaneously allow: \nMultiple dispatch (i.e. 'polymorphic' behavior)\nMeta-data and in-place memory storage for advanced and performant code\nAn outside callback implementation style\nIn most robotics scenarios, there is no need for new variables or factors:\nVariables have various mechanisms that allow you to attach data to them, e.g. raw sensory data or identified April tags, so you do not need to create a new variable type just to store data\nNew variables are required only if you are representing a new state - TODO: Example of needed state\nNew factors are needed if:\nYou need to represent a constraint for a variable (known as a singleton) and that constraint type doesn't exist\nYou need to represent a constraint between two variables and that constraint type doesn't exist","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"All factors inherit from one of the following types, depending on their function:","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"AbstractPrior is for priors (unary factors) that provide an absolute constraint for a single variable. 
A simple example of this is an absolute GPS prior, or equivalently a (0, 0, 0) starting location in a Pose2 scenario.\nRequires: A getSample function\nAbstractRelativeMinimize uses Optim.jl and is for relative factors that introduce an algebraic relationship between two or more variables. A simple example of this is an odometry factor between two pose variables, or a range factor indicating the range between a pose and another variable.\nRequires: A getSample function and a residual function definition\nThe minimize suffix specifies that the residual function of this factor will be enforced by numerical minimization (find me the minimum of this function)\n[NEW] AbstractManifoldMinimize uses Manopt.jl.","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"How do you decide which to use?","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"If you are creating factors for world-frame information that will be tied to a single variable, inherit from <:AbstractPrior\nGPS coordinates should be priors\nIf you are creating factors for local-frame relationships between variables, inherit from IIF.AbstractRelativeMinimize\nOdometry and bearing deltas should be introduced as pairwise factors and should be local frame","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"TBD: Users should start with IIF.AbstractRelativeMinimize, discuss why and when they should promote their factors to IIF.AbstractRelativeRoots.","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"note: Note\nAbstractRelativeMinimize does not imply that the overall inference algorithm only minimizes an objective function. The MM-iSAM algorithm is built around fixed-point analysis. 
Minimization is used here to locally enforce the residual function.","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"What you need to build in the new factor:","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"A struct for the factor itself\nA sampler function to return measurements from the random distributions\nIf you are building a <:AbstractRelative you need to define a residual function to introduce the relative algebraic relationship between the variables\nMinimization function should be lower-bounded and smooth\nA packed type of the factor which must be named Packed[Factor name], and allows the factor to be packed/transmitted/unpacked\nSerialization and deserialization methods\nThese are convert functions that pack and unpack the factor (which may be highly complex) into serialization-compatible formats\nAs the factors are mostly comprised of distributions (of type SamplableBelief), JSON3.jl is used for serialization.","category":"page"},{"location":"concepts/2d_plotting/#plotting_2d","page":"Plotting (2D)","title":"Plotting 2D","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Once the graph has been built, 2D plot visualizations are provided by RoMEPlotting.jl and KernelDensityEstimatePlotting.jl. These visualization tools are readily modifiable to highlight various aspects of mobile platform navigation.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"note: Note\nPlotting packages can be installed separately.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"The major 2D plotting functions provided by RoMEPlotting.jl and KernelDensityEstimatePlotting.jl are:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotSLAM2D,\nplotSLAM2DPoses,\nplotSLAM2DLandmarks,\nplotPose,\nplotBelief\nLEGACY plotKDE\nplotLocalProduct,\nPDF, PNG, SVG,\nhstack, vstack.","category":"page"},{"location":"concepts/2d_plotting/#Example-Plot-SLAM-2D","page":"Plotting (2D)","title":"Example Plot SLAM 2D","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"The simplest example for visualizing a 2D robot trajectory–-such as after first running the Hexagonal 2D SLAM example–-is shown below.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Assuming some fg<:AbstractDFG has been loaded/constructed:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"# load the plotting functionality\nusing RoME, RoMEPlotting\n\n# generate some factor graph with numerical values\nfg = generateGraph_Hexagonal()\nsolveTree!(fg)\n\n# or fg = loadDFG(\"somepath\")\n\n# slam2D plot\npl = plotSLAM2D(fg, drawhist=true, drawPoints=false)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotSLAM2D","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotSLAM2D","page":"Plotting (2D)","title":"RoMEPlotting.plotSLAM2D","text":"plotSLAM2D(\n fgl;\n solveKey,\n from,\n to,\n minnei,\n meanmax,\n posesPPE,\n landmsPPE,\n recalcPPEs,\n lbls,\n scale,\n x_off,\n y_off,\n drawTriads,\n dyadScale,\n levels,\n drawhist,\n MM,\n xmin,\n xmax,\n ymin,\n ymax,\n showmm,\n window,\n point_size,\n line_width,\n regexLandmark,\n regexPoses,\n variableList,\n manualColor,\n drawPoints,\n pointsColor,\n drawContour,\n drawEllipse,\n ellipseColor,\n title,\n aspect_ratio\n)\n\n\n2D plot of both poses and landmarks contained in factor graph. Assuming poses and landmarks are labeled :x1, :x2, ... and :l0, :l1, ..., respectively. The range of numbers to include can be controlled with from and to along with other keyword functionality for manipulating the plot.\n\nNotes\n\nAssumes :l1, :l2, ... for landmarks – \nCan increase default Gadfly plot size (for JSSVG in browser): Gadfly.set_default_plot_size(35cm,20cm).\nEnable or disable features such as the covariance ellipse with keyword drawEllipse=true.\n\nDevNotes\n\nTODO update to use e.g. tags=[:LANDMARK],\nTODO fix drawHist,\nTODO deprecate, showmm, spscale.\n\nExamples:\n\nfg = generateGraph_Hexagonal()\nplotSLAM2D(fg)\nplotSLAM2D(fg, drawPoints=false)\nplotSLAM2D(fg, contour=false, drawEllipse=true)\nplotSLAM2D(fg, contour=false, title=\"SLAM result 1\")\n\n# or load a factor graph\nfg_ = loadDFG(\"somewhere.tar.gz\")\nplotSLAM2D(fg_)\n\nRelated\n\nplotSLAM2DPoses, plotSLAM2DLandmarks, plotPose, plotBelief \n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#Plot-Covariance-Ellipse-and-Points","page":"Plotting (2D)","title":"Plot Covariance Ellipse and Points","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"While the Caesar.jl framework is focussed on non-Gaussian inference, it is frequently desirable to relate the results to a more familiar covariance ellipse, and native support for this exists:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotSLAM2D(fg, drawContour=false, drawEllipse=true, drawPoints=true)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/#Plot-Poses-or-Landmarks","page":"Plotting (2D)","title":"Plot Poses or Landmarks","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Lower down utility functions are used to plot poses and landmarks separately before joining the Gadfly layers.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotSLAM2DPoses\nplotSLAM2DLandmarks","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotSLAM2DPoses","page":"Plotting (2D)","title":"RoMEPlotting.plotSLAM2DPoses","text":"plotSLAM2DPoses(\n fg;\n solveKey,\n regexPoses,\n from,\n to,\n variableList,\n meanmax,\n ppe,\n recalcPPEs,\n lbls,\n scale,\n x_off,\n y_off,\n drawhist,\n spscale,\n dyadScale,\n drawTriads,\n drawContour,\n levels,\n contour,\n line_width,\n drawPoints,\n pointsColor,\n drawEllipse,\n ellipseColor,\n manualColor\n)\n\n\n2D plot of all poses, assuming poses are labeled from `::Symbol type :x0, :x1, ..., :xn. Use to and from to limit the range of numbers n to be drawn. The underlying histogram can be enabled or disabled, and the size of maximum-point belief estimate cursors can be controlled with spscale.\n\nFuture:\n\nRelax to user defined pose labeling scheme, for example :p1, :p2, ...\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotSLAM2DLandmarks","page":"Plotting (2D)","title":"RoMEPlotting.plotSLAM2DLandmarks","text":"plotSLAM2DLandmarks(\n fg;\n solveKey,\n regexLandmark,\n from,\n to,\n minnei,\n variableList,\n meanmax,\n ppe,\n recalcPPEs,\n lbls,\n showmm,\n scale,\n x_off,\n y_off,\n drawhist,\n drawContour,\n levels,\n contour,\n manualColor,\n c,\n MM,\n point_size,\n drawPoints,\n pointsColor,\n drawEllipse,\n ellipseColor,\n resampleGaussianFit\n)\n\n\n2D plot of landmarks, assuming :l1, :l2, ... :ln. Use from and to to control the range of landmarks n to include.\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#Plot-Belief-Density-Contour","page":"Plotting (2D)","title":"Plot Belief Density Contour","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"KernelDensityEstimatePlotting (as used in RoMEPlotting) provides an interface to visualize belief densities as counter plots. 
Something basic might be to just show all plane pairs of this variable marginal belief:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"# Draw the KDE for x0\nplotBelief(fg, :x0)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Plotting the marginal density over say variables (x,y) in a Pose2 would be:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotBelief(fg, :x1, dims=[1;2])","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"The following example better shows some of features (via Gadfly.jl):","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"# Draw the (x,y) marginal estimated belief contour for :x0, :x2, and Lx4\npl = plotBelief(fg, [:x0; :x2; :x4], c=[\"red\";\"green\";\"blue\"], levels=2, dims=[1;2])\n\n# add a few fun layers\npl3 = plotSLAM2DPoses(fg, regexPoses=r\"x\\d\", from=3, to=3, drawContour=false, drawEllipse=true)\npl5 = plotSLAM2DPoses(fg, regexPoses=r\"x\\d\", from=5, to=5, drawContour=false, drawEllipse=true, drawPoints=false)\npl_ = plotSLAM2DPoses(fg, drawContour=false, drawPoints=false, dyadScale=0.001, to=5)\nunion!(pl.layers, pl3.layers)\nunion!(pl.layers, pl5.layers)\nunion!(pl.layers, pl_.layers)\n\n# change the plotting coordinates\npl.coord = Coord.Cartesian(xmin=-10,xmax=20, ymin=-1, ymax=25)\n\n# save the plot to SVG and giving dedicated (although optional) sizing\npl |> SVG(\"/tmp/test.svg\", 25cm, 15cm)\n\n# also display the plot live\npl","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"See function documentation for more details on API features","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotBelief","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotBelief","page":"Plotting (2D)","title":"RoMEPlotting.plotBelief","text":"plotBelief(\n fgl,\n sym;\n solveKey,\n dims,\n title,\n levels,\n fill,\n layers,\n c,\n overlay\n)\n\n\nA peneric KDE plotting function that allows marginals of higher dimensional beliefs and various keyword options.\n\nExample for Position2:\n\n\np = manikde!(Position2, [randn(2) for _ in 1:100])\nq = manikde!(Position2, [randn(2).+[5;0] for _ in 1:100])\n\nplotBelief(p)\nplotBelief(p, dims=[1;2], levels=3)\nplotBelief(p, dims=[1])\n\nplotBelief([p;q])\nplotBelief([p;q], dims=[1;2], levels=3)\nplotBelief([p;q], dims=[1])\n\nExample for Pose2:\n\n# TODO\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#Save-Plot-to-Image","page":"Plotting (2D)","title":"Save Plot to Image","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"VSCode/Juno can set plot to be opened in a browser tab instead. For scripting use-cases you can also export the image:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"using Gadfly\n# can change the default plot size\n# Gadfly.set_default_plot_size(35cm, 30cm)\n\npl |> PDF(\"/tmp/test.pdf\", 20cm, 10cm) # or PNG, SVG","category":"page"},{"location":"concepts/2d_plotting/#Save-Plot-Object-To-File","page":"Plotting (2D)","title":"Save Plot Object To File","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"It is also possible to store the whole plot container to file using JLD2.jl:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"JLD2.@save \"/tmp/myplot.jld2\" pl\n\n# and loading elsewhere\nJLD2.@load \"/tmp/myplot.jld2\" pl","category":"page"},{"location":"concepts/2d_plotting/#Interactive-Plots,-Zoom,-Pan-(Gadfly.jl)","page":"Plotting (2D)","title":"Interactive Plots, Zoom, Pan (Gadfly.jl)","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"See the following two discussions on Interactive 2D plots:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Interactivity\nInteractive-SVGs","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"note: Note\nRed and Green dyad lines represent the visualization-only assumption of X-forward and Y-left direction of Pose2. 
The inference and manifold libraries surrounding Caesar.jl are agnostic to any particular choice of reference frame alignment, such as north east down (NED) or forward left up (common in mobile robotics).","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"note: Note\nAlso see Gadfly.jl notes about hstack and vstack to combine plots side by side or vertically.","category":"page"},{"location":"concepts/2d_plotting/#Plot-Pose-Individually","page":"Plotting (2D)","title":"Plot Pose Individually","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"It is also possible to plot the belief density of a Pose2 on-manifold:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotPose(fg, :x6)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotPose","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotPose","page":"Plotting (2D)","title":"RoMEPlotting.plotPose","text":"plotPose(, pp; ...)\nplotPose(\n ,\n pp,\n title;\n levels,\n c,\n legend,\n axis,\n scale,\n overlay,\n hdl\n)\n\n\nPlot pose belief as contour information on visually sensible manifolds.\n\nExample:\n\nfg = generateGraph_ZeroPose()\ninitAll!(fg);\nplotPose(fg, :x0)\n\nRelated\n\nplotSLAM2D, plotSLAM2DPoses, plotBelief, plotKDECircular\n\n\n\n\n\nplotPose(\n fgl,\n syms;\n solveKey,\n levels,\n c,\n axis,\n scale,\n show,\n filepath,\n app,\n hdl\n)\n\n\nExample: pl = plotPose(fg, [:x1; :x2; :x3])\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#Debug-With-Local-Graph-Product-Plot","page":"Plotting (2D)","title":"Debug With Local Graph Product Plot","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"One useful function is to check that data in the factor graph makes sense. While the full inference algorithm uses a Bayes (Junction) tree to assemble marginal belief estimates in an efficient manner, it is often useful for a straight forward graph based sanity check. The plotLocalProduct projects through approxConvBelief each of the factors connected to the target variable and plots the result. This example looks at the loop-closure point around :x0, which is also pinned down by the only prior in the canonical Hexagonal factor graph.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"@show ls(fg, :x0);\n# ls(fg, :x0) = [:x0f1, :x0x1f1, :x0l1f1]\n\npl = plotLocalProduct(fg, :x0, dims=[1;2], levels=1)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"While perhaps a little cluttered to read at first, this figure shows that a new calculation local to only the factor graph prod in greem matches well with the existing value curr in red in the fg from the earlier solveTree! call. These values are close to the prior prediction :x0f1 in blue (fairly trivial case), while the odometry :x0x1f1 and landmark sighting projection :x0l1f1 are also well in agreement.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotLocalProduct","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotLocalProduct","page":"Plotting (2D)","title":"RoMEPlotting.plotLocalProduct","text":"plotLocalProduct(\n fgl,\n lbl;\n solveKey,\n N,\n dims,\n levels,\n show,\n dirpath,\n mimetype,\n sidelength,\n title,\n xmin,\n xmax,\n ymin,\n ymax\n)\n\n\nPlot the proposal belief from neighboring factors to lbl in the factor graph (ignoring Bayes tree representation), and show with new product approximation for reference.\n\nDevNotes\n\nTODO, standardize around ::MIME=\"image/svg\", see JuliaRobotics/DistributedFactorGraphs.jl#640\n\n\n\n\n\nplotLocalProduct(fgl, lbl; N, dims)\n\n\nPlot the proposal belief from neighboring factors to lbl in the factor graph (ignoring Bayes tree representation), and show with new product approximation for reference. String version is obsolete and will be deprecated.\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#More-Detail-About-Density-Plotting","page":"Plotting (2D)","title":"More Detail About Density Plotting","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Multiple beliefs can be plotted at the same time, while setting levels=4 rather than the default value:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plX1 = plotBelief(fg, [:x0; :x1], dims=[1;2], levels=4)\n\n# plX1 |> PNG(\"/tmp/testX1.png\")","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"One dimensional (such as Θ) or a stack of all plane projections is also available:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plTh = plotBelief(fg, [:x0; :x1], dims=[3], levels=4)\n\n# plTh |> PNG(\"/tmp/testTh.png\")","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plAll = plotBelief(fg, [:x0; :x1], levels=3)\n# plAll |> PNG(\"/tmp/testX1.png\",20cm,15cm)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"note: Note\nThe functions hstack and vstack is provided through the Gadfly package and allows the user to build a near arbitrary composition of plots.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Please see KernelDensityEstimatePlotting package source for more features.","category":"page"},{"location":"principles/bayestreePrinciples/#Principle:-Bayes-tree-prototyping","page":"Bayes (Junction) tree","title":"Principle: Bayes tree prototyping","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"This page describes how to visualize, study, test, and compare Bayes (Junction) tree concepts with special regard for variable ordering.","category":"page"},{"location":"principles/bayestreePrinciples/#Why-a-Bayes-(Junction)-tree","page":"Bayes (Junction) tree","title":"Why a Bayes (Junction) tree","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The tree is algebraicly equivalent–-but acyclic–-structure to the factor graph: i.) Inference is easier on on acyclic graphs; ii.) We can exploit Smart Message Passing benefits (known from the full conditional independence structure encoded in the tree), since the tree represents the \"complete form\" when marginalizing each variable one at a time (also known as elimination game, marginalization, also related to smart factors). In loose terms, the Bayes (Junction) tree has implicit access to all Schur complements (if it parametric and linearized) of each variable to all others. Please see this page more information regarding advanced topics on the Bayes tree.","category":"page"},{"location":"principles/bayestreePrinciples/#What-is-a-Bayes-(Junction)-tree","page":"Bayes (Junction) tree","title":"What is a Bayes (Junction) tree","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The Bayes tree data structure is a rooted and directed Junction tree (maximal elimination clique tree). 
It allows for exact inference to be carried out by leveraging and exposing the variables' conditional independence and, very interestingly, can be directly associated with the sparsity pattern exhibited by a system's factorized upper triangular square root information matrix (see picture below).","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"(Image: graph and matrix analagos)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Following this matrix-graph parallel, the picture also shows what the associated matrix interpretation is for a factor graph (~first order expansion in the form of a measurement Jacobian) and its corresponding Markov random field (sparsity pattern corresponding to the information matrix).","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The procedure for obtaining the Bayes (Junction) tree is outlined in the figure shown below (factor graph to chrodal Bayes net via bipartite elimination game, and chordal Bayes net to Bayes tree via maximum cardinality search algorithm).","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"(Image: add the fg2net2tree outline)","category":"page"},{"location":"principles/bayestreePrinciples/#Constructing-a-Tree","page":"Bayes (Junction) tree","title":"Constructing a Tree","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"note: Note\nA visual illustration of factor graph to Bayes net to Bayes tree can be found in this PDF ","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Trees and factor graphs are separated in the implementation, allowing the user to construct multiple different trees from one factor graph except for a few temporary values in the factor graph.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"using IncrementalInference # RoME or Caesar will work too\n\n## construct a distributed factor graph object\nfg = generateGraph_Kaess()\n# add variables and factors\n# ...\n\n## build the tree\ntree = buildTreeReset!(fg)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The temporary values are reset from the distributed factor graph object fg<:AbstractDFG and a new tree is constructed. This buildTreeReset! 
call can be repeated as many times the user desires and results should be consistent for the same factor graph structure (regardless of numerical values contained within).","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"buildTreeReset!","category":"page"},{"location":"principles/bayestreePrinciples/#IncrementalInference.buildTreeReset!","page":"Bayes (Junction) tree","title":"IncrementalInference.buildTreeReset!","text":"buildTreeReset!(dfg; ...)\nbuildTreeReset!(\n dfg,\n eliminationOrder;\n ordering,\n drawpdf,\n show,\n filepath,\n viewerapp,\n imgs,\n ensureSolvable,\n eliminationConstraints\n)\n\n\nBuild a completely new Bayes (Junction) tree, after first wiping clean all temporary state in fg from a possibly pre-existing tree.\n\nDevNotes\n\nreplaces resetBuildTreeFromOrder!\n\nRelated:\n\nbuildTreeFromOrdering!, \n\n\n\n\n\n","category":"function"},{"location":"principles/bayestreePrinciples/#Variable-Ordering","page":"Bayes (Junction) tree","title":"Variable Ordering","text":"","category":"section"},{"location":"principles/bayestreePrinciples/#Getting-the-AMD-Variable-Ordering","page":"Bayes (Junction) tree","title":"Getting the AMD Variable Ordering","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The variable ordering is described as a ::Vector{Symbol}. Note the automated methods can be varied between AMD, CCOLAMD, and others.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"# get the automated variable elimination order\nvo = getEliminationOrder(fg)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"It is also possible to manually define the Variable Ordering","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"vo = [:x1; :l3; :x2; ...]","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"And then reset the factor graph and build a new tree","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"buildTreeReset!(fg, vo)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"note: Note\na list of variables or factors can be obtained through the ls and related functions, see Querying the Factor Graph.","category":"page"},{"location":"principles/bayestreePrinciples/#Interfacing-with-the-MM-iSAMv2-Solver","page":"Bayes (Junction) tree","title":"Interfacing with the MM-iSAMv2 Solver","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The following parmaters (set before calling solveTree!) 
will show the solution progress on the tree visualization:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"getSolverParams(fg).drawtree = true\ngetSolverParams(fg).showtree = true\n\n# asybc process will now draw and show the tree in linux\ntree = solveTree!(fg)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"note: Note\nSee the Solving Graphs section for more details on the solver.","category":"page"},{"location":"principles/bayestreePrinciples/#Get-the-Elimination-Order-Used","page":"Bayes (Junction) tree","title":"Get the Elimination Order Used","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The solver internally uses buildTreeReset! which sometimes requires the user extract the variable elimination order after the fact. This can be done with:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"getEliminationOrder","category":"page"},{"location":"principles/bayestreePrinciples/#IncrementalInference.getEliminationOrder","page":"Bayes (Junction) tree","title":"IncrementalInference.getEliminationOrder","text":"getEliminationOrder(dfg; ordering, solvable, constraints)\n\n\nDetermine the variable ordering used to construct both the Bayes Net and Bayes/Junction/Elimination tree.\n\nNotes\n\nHeuristic method – equivalent to QR or Cholesky.\nAre using Blas QR function to extract variable ordering.\nNOT USING SUITE SPARSE – which would requires commercial license.\nFor now A::Array{<:Number,2} as a dense matrix.\nColumns of A are system variables, rows are factors (without differentiating between partial or full factor).\ndefault is to use solvable=1 and ignore factors and variables that might be used for dead reckoning or similar.\n\nFuture\n\nTODO: A should be sparse data structure (when we exceed 10'000 var dims)\nTODO: Incidence matrix is rectagular and adjacency is the square.\n\n\n\n\n\ngetEliminationOrder(treel)\n\n\nReturn the variable elimination order stored in a tree object.\n\n\n\n\n\n","category":"function"},{"location":"principles/bayestreePrinciples/#Visualizing","page":"Bayes (Junction) tree","title":"Visualizing","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"IncrementalInference.jl includes functions for visualizing the Bayes tree, and uses outside packages such as GraphViz (standard) and Latex tools (experimental, optional) to do so. 
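{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Relating back to the Get the Elimination Order Used subsection above, a minimal sketch of recovering the ordering from an existing tree object (assuming the tree returned by buildTreeReset! or solveTree! earlier on this page):","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"# return the variable elimination order stored in the tree object\nvoUsed = getEliminationOrder(tree)","category":"page"},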
","category":"page"},{"location":"principles/bayestreePrinciples/#GraphViz","page":"Bayes (Junction) tree","title":"GraphViz","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"drawTree(tree, show=true) # , filepath=\"/tmp/caesar/mytree.pdf\"","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"drawTree","category":"page"},{"location":"principles/bayestreePrinciples/#IncrementalInference.drawTree","page":"Bayes (Junction) tree","title":"IncrementalInference.drawTree","text":"drawTree(\n treel;\n show,\n suffix,\n filepath,\n xlabels,\n dpi,\n viewerapp,\n imgs\n)\n\n\nDraw the Bayes (Junction) tree by means of graphviz .dot files. Ensure Linux packages are installed sudo apt-get install graphviz xdot.\n\nNotes\n\nxlabels is optional cliqid=>xlabel.\n\n\n\n\n\n","category":"function"},{"location":"principles/bayestreePrinciples/#Latex-Tikz-(Optional)","page":"Bayes (Junction) tree","title":"Latex Tikz (Optional)","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"EXPERIMENTAL, requiring special import.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"First make sure the following packages are installed on your system:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"$ sudo apt-get install texlive-pictures dot2tex\n$ pip install dot2tex","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Then in Julia you should be able to do:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"import IncrementalInference: generateTexTree\n\ngenerateTexTree(tree)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"An example Bayes (Junction) tree representation obtained through generateTexTree(tree) for the sample factor graph shown above can be seen in the following image.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"
      ","category":"page"},{"location":"principles/bayestreePrinciples/#Visualizing-Clique-Adjacency-Matrix","page":"Bayes (Junction) tree","title":"Visualizing Clique Adjacency Matrix","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"It is also possible to see the upward message passing variable/factor association matrix for each clique, requiring the Gadfly.jl package:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"using Gadfly\n\nspyCliqMat(tree, :x1) # provided by IncrementalInference\n\n#or embedded in graphviz\ndrawTree(tree, imgs=true, show=true)","category":"page"},{"location":"principles/bayestreePrinciples/#Clique-State-Machine","page":"Bayes (Junction) tree","title":"Clique State Machine","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The mmisam solver is based on a state machine design to handle the inter and intra clique operations during a variety of situations. Use of the clique state machine (CSM) makes debugging, development, verification, and modification of the algorithm real easy. Contact us for any support regarding modifications to the default algorithm. For pre-docs on working with CSM, please see IIF #443.","category":"page"},{"location":"principles/bayestreePrinciples/#STATUS-of-a-Clique","page":"Bayes (Junction) tree","title":"STATUS of a Clique","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"CSM currently uses the following statusses for each of the cliques during the inference process.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"[:initialized;:upsolved;:marginalized;:downsolved;:uprecycled]","category":"page"},{"location":"principles/bayestreePrinciples/#Bayes-Tree-Legend-(from-IIF)","page":"Bayes (Junction) tree","title":"Bayes Tree Legend (from IIF)","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The color legend for the refactored CSM from issue.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Blank / white – uninitialized or unprocessed,\nOrange – recycled clique upsolve solution from previous tree passed into solveTree! 
– TODO,\nBlue – fully marginalized clique that will not be updated during upsolve (maybe downsolved),\nLight blue – completed downsolve,\nGreen – trying to up initialize,\nDarkgreen – initUp some could up init,\nLightgreen – initUp no aditional variables could up init,\nOlive – trying to down initialize,\nSeagreen – initUp some could down init,\nKhaki – initUp no aditional variables could down init,\nBrown – initialized but not solved yet (likely child cliques that depend on downward autoinit msgs),\nLight red – completed upsolve,\nTomato – partial dimension upsolve but finished,\nRed – CPU working on clique's Chapman-Kolmogorov inference (up),\nMaroon – CPU working on clique's Chapman-Kolmogorov inference (down),\nRed – If finished cliques in red are in ERROR_STATUS","category":"page"},{"location":"concepts/entry_data/#section_data_entry_blob_store","page":"Entry=>Data Blob","title":"Additional (Large) Data","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"There are a variety of situations that require more data to be stored natively in the factor graph object. This page will showcase some of Entry=>Data features available.","category":"page"},{"location":"concepts/entry_data/#Adding-A-FolderStore","page":"Entry=>Data Blob","title":"Adding A FolderStore","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Caesar.jl (with DFG) supports storage and retrieval of larger data blobs by means of various database/datastore technologies. To get going, you can use a conventional FolderStore: ","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"# temporary location example\nstoreDir = joinpath(\"/tmp\",\"cjldata\")\ndatastore = FolderStore{Vector{UInt8}}(:default_folder_store, storeDir) \naddBlobStore!(fg, datastore)","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"note: Note\nThis example places the data folder in the .logpath location which defaults to /tmp/caesar/UNIQUEDATETIME. This is not a long term storage location since /tmp is periodically cleared by the operating system. 
Note that the data folder can be used in combination with loading and saving factor graph objects.","category":"page"},{"location":"concepts/entry_data/#Adding-Data-Blobs","page":"Entry=>Data Blob","title":"Adding Data Blobs","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Just showcasing a JSON Dict approach","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"using JSON2\nsomeDict = Dict(:name => \"Jane\", :data => randn(100))\naddData!(fg, :default_folder_store, :x1, :datalabel, Vector{UInt8}(JSON2.write( someDict )), mimeType=\"application/json/octet-stream\" )\n# see retrieval example below...","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"This approach allows the maximum flexibility, for example it is also possible to do:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"# from https://juliaimages.org/stable/install/\nusing TestImages, Images, ImageView\nimg = testimage(\"mandrill\")\nimshow(img)\n\n# TODO, convert to Vector{UInt8}\nusing ImageMagick, FileIO\n# convert image to PNG bytestream\nio = IOBuffer()\npngSm = Stream(format\"PNG\", io)\nsave(pngSm, img) # think FileIO is required for this\npngBytes = take!(io)\naddData!(fg, :default_folder_store, :x1, :testImage, pngBytes, mimeType=\"image/png\", description=\"mandrill test image\" )","category":"page"},{"location":"concepts/entry_data/#section_retrieve_data_blob","page":"Entry=>Data Blob","title":"Retrieving a Data Blob","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Data is stored as an Entry => Blob relationship, and the entries associated with a variable can be found via","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"julia> listDataEntries(fg, :x6)\n1-element Array{Symbol,1}:\n :JOYSTICK_CMD_VALS\n :testImage","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"And retrieved via:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"rawData = getData(fg, :x6, :JOYSTICK_CMD_VALS);\nimgEntry, imgBytes = getData(fg, :x1, :testImage)","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Looking at rawData in a bit more detail:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"julia> rawData[1]\nBlobStoreEntry(:JOYSTICK_CMD_VALS, UUID(\"d21fc841-6214-4196-a396-b1d5ef95be49\"), :default_folder_store, \"deeb3ed0cba6ffd149298de21c361af26a207e565e27a3cd3fa6c807b9aaa44d\", \"DefaultUser|DefaultRobot|Session_851d81|x6\", \"\", \"application/json/octet-stream\", TimeZones.ZonedDateTime(2020, 8, 15, 14, 26, 36, 397, tz\"UTC-04:00\"))\n\njulia> rawData[2]\n3362-element Array{UInt8,1}:\n 0x5b\n 0x5b\n 0x32\n#...","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"For :testImage the data was packed in a familiar image/png and can be converted backto bitmap (array) format:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"rgb = 
ImageMagick.readblob(imgBytes); # automatically detected as PNG format\n\nusing ImageView\nimshow(rgb)","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"In the other case where data was packed as \"application/json/octet-stream\":","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"myData = JSON2.read(IOBuffer(rawData[2]))\n\n# as example\njulia> myData[1]\n3-element Array{Any,1}:\n 2017\n 1532558043061497600\n (buttons = Any[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], axis = Any[0, 0.25026196241378784, 0, 0, 0, 0])","category":"page"},{"location":"concepts/entry_data/#Quick-Camera-Calibration-Storage-Example","page":"Entry=>Data Blob","title":"Quick Camera Calibration Storage Example","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Consider storing camera calibration data inside the factor graph tar.gz object for later use:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"fx = 341.4563903808594\nfy = 341.4563903808594\ncx = 329.19091796875\ncy = 196.3658447265625\n\nK = [-fx 0 cx;\n 0 fy cy]\n\n# Cheap way to include data as a Blob. Also see the more hacky `Smalldata` alternative for situations that make sense.\ncamCalib = Dict(:size=>size(K), :vecK=>vec(K))\naddData!(dfg,:default_folder_store,:x0,:camCalib,\n Vector{UInt8}(JSON2.write(camCalib)), mimeType=\"application/json/octet-stream\", \n description=\"reshape(camCalib[:vecK], camCalib[:size]...)\") ","category":"page"},{"location":"concepts/entry_data/#Working-with-Binary-Data-(BSON)","page":"Entry=>Data Blob","title":"Working with Binary Data (BSON)","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Sometime it's useful to store binary data. Let's combine the example of storing a Flux.jl Neural Network object using the existing BSON approach. Also see BSON wrangling snippets here.","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"note: Note\nWe will store binary data as Base64 encoded string to avoid other framing problems. 
See Julia Docs on Base64","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"# the object you wish to store as binary\nmodel = Chain(Dense(5,2), Dense(2,3))\n\nio = IOBuffer()\n\n# using BSON\nBSON.@save io model\n\n# get base64 binary\nmdlBytes = take!(io)\n\naddData!(dfg,:default_folder_store,:x0,:nnModel,\n mdlBytes, mimeType=\"application/bson/octet-stream\", \n description=\"BSON.@load PipeBuffer(readBytes) model\") ","category":"page"},{"location":"concepts/entry_data/#Experimental-Features","page":"Entry=>Data Blob","title":"Experimental Features","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Loading images is a relatively common task, hence a convenience function has been developed, when using ImageMagick try Caesar.fetchDataImage.","category":"page"},{"location":"installation_environment/#Install-Caesar.jl","page":"Installation","title":"Install Caesar.jl","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"Caesar.jl is one of the packages within the JuliaRobotics community, and adheres to the code-of-conduct.","category":"page"},{"location":"installation_environment/#Possible-System-Dependencies","page":"Installation","title":"Possible System Dependencies","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"The following (Linux) system packages are used by Caesar.jl:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"# Likely dependencies\nsudo apt-get install hdf5-tools imagemagick\n\n# optional packages\nsudo apt-get install graphviz xdot","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"For ROS.org users, see at least one usage example at the ROS Direct page.","category":"page"},{"location":"installation_environment/#Installing-Julia-Packages","page":"Installation","title":"Installing Julia Packages","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"The philosophy around Julia packages are discussed at length in the Julia core documentation, where each Julia package relates to a git repository likely found on Github.com. Also see JuliaHub.com for dashboard-style representation of the broader Julia package ecosystem. To install a Julia package, simply open a julia REPL (equally the Julia REPL in VSCode) and type:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"] # activate Pkg manager\n(v1.6) pkg> add Caesar","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"These are registered packages maintained by JuliaRegistries/General. 
Unregistered latest packages can also be installed with using only the Pkg.develop function:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"# Caesar is registered on JuliaRegistries/General\njulia> ]\n(v1.6) pkg> add Caesar\n(v1.6) pkg> add Caesar#janes-awesome-fix-branch\n(v1.6) pkg> add Caesar@v0.10.0\n\n# or alternatively your own local fork (just using old link as example)\n(v1.6) pkg> add https://github.com/dehann/Caesar.jl","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"See Pkg.jl for details and features regarding package management, development, version control, virtual environments and much more.","category":"page"},{"location":"installation_environment/#Next-Steps","page":"Installation","title":"Next Steps","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"The sections hereafter describe Building, [Interacting], and Solving factor graphs. We also recommend reviewing the various examples available in the Examples section. ","category":"page"},{"location":"installation_environment/#New-to-Julia","page":"Installation","title":"New to Julia","text":"","category":"section"},{"location":"installation_environment/#Installing-the-Julia-Binary","page":"Installation","title":"Installing the Julia Binary","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"Although Julia (or JuliaPro) can be installed on a Linux computer using the apt package manager, we are striving for a fully local installation environment which is highly reproducible on a variety of platforms.","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"The easiest method is–-via the terminal–-to download the desired version of Julia as a binary, extract, setup a symbolic link, and run:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"cd ~\nmkdir -p .julia\ncd .julia\nwget https://julialang-s3.julialang.org/bin/linux/x64/1.6/julia-1.6.7-linux-x86_64.tar.gz\ntar -xvf julia-1.6.7-linux-x86_64.tar.gz\nrm julia-1.6.7-linux-x86_64.tar.gz\ncd /usr/local/bin\nsudo ln -s ~/.julia/julia-1.6.7/bin/julia julia","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"note: Note\nFeel free to modify this setup as you see fit.","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"This should allow any terminal or process on the computer to run the Julia REPL by type julia and testing with:","category":"page"},{"location":"installation_environment/#VSCode-IDE-Environment","page":"Installation","title":"VSCode IDE Environment","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"VSCode IDE allows for interactive development of Julia code using the Julia Extension. After installing and running VSCode, install the Julia Language Support Extension:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"
      ","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"In VSCode, open the command pallette by pressing Ctrl + Shift + p. There are a wealth of tips and tricks on how to use VSCode. See this JuliaCon presentation for as a general introduction into 'piece-by-piece' code execution and much much more. Working in one of the Julia IDEs like VS Code or Juno should feel something like this (Gif borrowed from DiffEqFlux.jl):","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"
      ","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"There are a variety of useful packages in VSCode, such as GitLens, LiveShare, and Todo Browser as just a few highlights. These VSCode Extensions are independent of the already vast JuliaLang Package Ecosystem (see JuliaObserver.com).","category":"page"},{"location":"examples/custom_variables/#custom_variables","page":"Custom Variables","title":"Custom Variables","text":"","category":"section"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"In most scenarios, the existing variables and factors should be sufficient for most robotics applications. Caesar however, is extensible and allows you to easily incorporate your own variable and factor types for specialized applications. Let's look at creating custom variables first.","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"A handy macro helps define new variables:","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"@defVariable(\n MyVar,\n TranslationGroup(2),\n MVector{2}(0.0,0.0)\n)","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"First, we define the name MyVar, then the manifold on which the variable probability estimates exist (a simple Cartesian translation in two dimensions). The third parameter is a default point for your new variable.","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"This new variable is now ready to be added to a factor graph:","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"addVariable!(fg, :myvar1, MyVar)","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"Another good example to look at is RoME's Pose2 with 3 degrees of freedom: X Y translation and a rotation matrix using R(theta). Caesar.jl uses JuliaManifolds/Manifolds.jl for structuring numerical operations, we can use either the Manifolds.ProductRepr (or RecursiveArrayTools.ArrayPartition), to define manifold point types:","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"# already exists in RoME/src/factors/Pose2D.jl\n@defVariable(\n Pose2,\n SpecialEuclidean(2),\n ArrayPartition(MVector{2}(0.0,0.0), MMatrix{2,2}(1.0,0.0,0.0,1.0))\n)","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"Here we used Manifolds.SpecialEuclidean(2) as the variable manifold, and the default data representation is similar to Manifolds.identity_element(SpecialEuclidean(2)), or Float32[1.0 0; 0 1], etc. In the example above, we used StaticArrays.MVector, StaticArrays.MMatrix for better performance, owing to better heap vs. 
stack memory management.","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"@defVariable","category":"page"},{"location":"examples/custom_variables/#DistributedFactorGraphs.@defVariable","page":"Custom Variables","title":"DistributedFactorGraphs.@defVariable","text":"@defVariable StructName manifolds<:ManifoldsBase.AbstractManifold\n\nA macro to create a new variable with name StructName and manifolds. Note that the manifolds is an object and must be a subtype of ManifoldsBase.AbstractManifold. See documentation in Manifolds.jl on making your own. \n\nExample:\n\nDFG.@defVariable Pose2 SpecialEuclidean(2) ArrayPartition([0;0.0],[1 0; 0 1.0])\n\n\n\n\n\n","category":"macro"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"note: Note\nUsers can implement their own manifolds using the ManifoldsBase.jl API; and the tutorial. See JuliaManifolds/Manifolds.jl for general information.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"
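{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"As a quick sanity check after @defVariable, the manifold and dimension registered for the new type can be queried directly. A minimal sketch, assuming the MyVar definition from above (the expected values in the comments are assumptions based on TranslationGroup(2)):","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"using IncrementalInference\n\n# manifold registered for the new variable type (expected: TranslationGroup(2))\n@show getManifold(MyVar)\n\n# dimension of the variable (expected: 2)\n@show getDimension(MyVar)","category":"page"},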
      ","category":"page"},{"location":"#Open-Community","page":"Welcome","title":"Open Community","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"Click here to go to the Caesar.jl Github repo:","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"(Image: source)","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"Caesar.jl is a community project to facilate software technology development for localization and mapping from multiple sensor data, and multiple sessions or human / semi-autonomous / autonomous agents. This software is being developed with broadly Industry 4.0, Robotics, and Work of the Future in mind. Caesar.jl is an \"umbrella package\" to combine many other libraries from across the Julia package ecosystem. ","category":"page"},{"location":"#Commercial-Products-and-Services","page":"Welcome","title":"Commercial Products and Services","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"WhereWhen.ai's NavAbility products and services build upon, continually develop, and help administer the Caesar.jl suite of open-source libraries. Please reach out for any additional information (info@navability.io), or using the community links provided below.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"Various mapping and localization solutions are possible both for commercial and R&D. We recommend taking a look at:","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"The human-to-machine friendly NavAbility App interaction; and\nThe machine-to-machine friendly NavAbilitySDKs (Python, Julia, JS, etc.). Also see the SDK.py Docs.","category":"page"},{"location":"#NavAbility-Zero-Install-Tutorials","page":"Welcome","title":"NavAbility Zero Install Tutorials","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"Follow this page to see the NavAbility Tutorials which are zero install and build around specific application examples.","category":"page"},{"location":"#Origins-and-Ongoing-Research","page":"Welcome","title":"Origins and Ongoing Research","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"Caesar.jl developed as a spin-out project from MIT's Computer Science and Artificial Intelligence Laboratory. See related works on the literature page. Many future directions are in the works – including fundamental research, implementation quality/performance, and system integration.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"Consider citing our work: CITATION.bib.","category":"page"},{"location":"#Community,-Issues,-Comments,-or-Help","page":"Welcome","title":"Community, Issues, Comments, or Help","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"Post Issues, or Discussions for community help. Maintainers can easily transfer Issues to the best suited package location if necessary. Also see the history of changes and ongoing work can via the Milestone pages (click through badges here). You can also get in touch via Slack at (Image: ).","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"note: Note\nPlease help improve this documentation–if something confuses you, chances are you're not alone. It's easy to do as you read along: just click on the \"Edit on GitHub\" link above, and then edit the files directly in your browser. 
Your changes will be vetted by developers before becoming permanent, so don't worry about whether you might say something wrong.","category":"page"},{"location":"#JuliaRobotics-Code-of-Conduct","page":"Welcome","title":"JuliaRobotics Code of Conduct","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"The Caesar.jl project is part of the JuliaRobotics organization and adheres to the JuliaRobotics code-of-conduct.","category":"page"},{"location":"#Next-Steps","page":"Welcome","title":"Next Steps","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"For installation steps, examples/tutorials, and concepts please refer to the following pages:","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"Pages = [\n \"concepts/why_nongaussian.md\"\n \"installation_environment.md\"\n \"concepts/concepts.md\"\n \"concepts/building_graphs.md\"\n \"concepts/2d_plotting.md\"\n \"examples/examples.md\"\n]\nDepth = 1","category":"page"},{"location":"examples/deadreckontether/#Dead-Reckon-Tether","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Towards real-time location prediction and model-based target tracking. See brief description in this presentation.","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"\n
Towards Real-Time Non-Gaussian SLAM from Dehann on Vimeo.
      ","category":"page"},{"location":"examples/deadreckontether/#DRT-Functions","page":"Dead Reckon Tether","title":"DRT Functions","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Overview of related functions while this documentation is being expanded:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"addVariable!(fg, :drt_0, ..., solvable=0)\ndrec1 = MutablePose2Pose2Gaussian(...)\naddFactor!(dfg, [:x0; :drt_0], drec1, solvable=0, graphinit=false)\naccumulateDiscreteLocalFrame!\naccumulateFactorMeans\nduplicateToStandardFactorVariable","category":"page"},{"location":"examples/deadreckontether/#DRT-Construct","page":"Dead Reckon Tether","title":"DRT Construct","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"The idea is that the dead reckong tracking method is to update a single value based on high-rate sensor data. Perhaps 'particles' values can be propagated as a non-Gaussian prediction, depending on allowable compute resources, and for that see approxConvBelief. Some specialized plumbing has been built to facilitate rapid single value propagation using the factor graph. ","category":"page"},{"location":"examples/deadreckontether/#Suppress-w/-solvable","page":"Dead Reckon Tether","title":"Suppress w/ solvable","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"The construct uses regular addVariable! and addFactor! calls but with a few tweaks. The first is that some variables and factors should not be incorporated with the regular solveTree! call and can be achieved on a per node basis, e.g.:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"fg = initfg()\n\n# a regular variable and prior for solving in graph\naddVariable!(fg, :x0, Pose2) # default solvable=1\naddFactor!(fg, [:x0;], PriorPose2(MvNormal([0;0;0.0],diagm([0.1;0.1;0.01]))))\n\n# now add a variable that will not be included in solves\naddVariable!(fg, :drt0, Pose2, solvable=0)","category":"page"},{"location":"examples/deadreckontether/#A-Mutable-Factor","page":"Dead Reckon Tether","title":"A Mutable Factor","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"The next part is to add a factor that can be rapidly updated from sensor data, hence liberal use of the term 'Mutable':","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"drt0 = MutablePose2Pose2Gaussian(MvNormal([0;0;0.0],diagm([0.1;0.1;0.01])))\naddFactor!(dfg, [:x0; :drt0], drt0, solvable=0, graphinit=false)","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Notice that this factor is also set with solvable=0 to exclude it from the regular solving process. 
Also note the graphinit=false to prevent any immediate automated attempts to initialize the values to connected variables using this factor.","category":"page"},{"location":"examples/deadreckontether/#Sensor-rate-updates","page":"Dead Reckon Tether","title":"Sensor rate updates","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"The idea of a dead reckon tether is that the value in the factor can rapidly be updated without affecting any other regular part of the factor graph or simultaneous solving progress. Imagine new sensor data from wheel odometry or an IMU is available which is then used to 'mutate' the values in a DRT factor:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"# continuous Gaussian process noise Q\nQc = 0.001*diagm(ones(3))\n\n# accumulate a Pose2 delta odometry measurement segment onto existing value in drt0\naccumulateDiscreteLocalFrame!(drt0,[0.1;0;0.05],Qc)","category":"page"},{"location":"examples/deadreckontether/#Dead-Reckoned-Prediction","page":"Dead Reckon Tether","title":"Dead Reckoned Prediction","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Using the latest available inference result fg[:x0], the drt0 factor can be used to predict the single parameteric location of variable :drt0:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"# can happen concurrently with most other operations on fg, including `solveTree!`\npredictDRT0 = accumulateFactorMeans(fg, [:x0drt0f1;])","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Note also a convenience function uses similar plumbing for integrating odometry as well as any other DRT operations. Imagine a robot is driving from pose position 0 to 1, then the final pose trigger value in factor drt0 is the same value required to instantiate a new factor graph Pose2Pose2, and hence:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"# add new regular rigid transform (odometry) factor between pose variables \nduplicateToStandardFactorVariable(Pose2Pose2, drt0, fg, :x0, :x1)","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"warning: Warning\n(2021Q1) Some of these function names are likely to be better standardized in the future. Regular semver deprecation warnings will be used to simplify any potential updates that may occur. 
Please file issues at Caesar.jl if any problems arise.","category":"page"},{"location":"examples/deadreckontether/#Function-Reference","page":"Dead Reckon Tether","title":"Function Reference","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"duplicateToStandardFactorVariable\naccumulateDiscreteLocalFrame!\naccumulateFactorMeans\nMutablePose2Pose2Gaussian","category":"page"},{"location":"examples/deadreckontether/#RoME.duplicateToStandardFactorVariable","page":"Dead Reckon Tether","title":"RoME.duplicateToStandardFactorVariable","text":"duplicateToStandardFactorVariable(\n ,\n mpp,\n dfg,\n prevsym,\n newsym;\n solvable,\n graphinit,\n cov\n)\n\n\nHelper function to duplicate values from a special factor variable into standard factor and variable. Returns the name of the new factor.\n\nNotes:\n\nDeveloped for accumulating odometry in a MutablePosePose and then cloning out a standard PosePose and new variable.\nDoes not change the original MutablePosePose source factor or variable in any way.\nAssumes timestampe from mpp object.\n\nRelated\n\naddVariable!, addFactor!\n\n\n\n\n\n","category":"function"},{"location":"examples/deadreckontether/#RoME.accumulateDiscreteLocalFrame!","page":"Dead Reckon Tether","title":"RoME.accumulateDiscreteLocalFrame!","text":"accumulateDiscreteLocalFrame!(mpp, DX, Qc; ...)\naccumulateDiscreteLocalFrame!(mpp, DX, Qc, dt; Fk, Gk, Phik)\n\n\nAdvance an odometry factor as though integrating an ODE – i.e. X_2 = X_1 ΔX. Accepts continuous domain process noise density Qc which is internally integrated to discrete process noise Qd. DX is assumed to already be incrementally integrated before this function. See related accumulateContinuousLocalFrame! for fully continuous system propagation.\n\nNotes\n\nThis update stays in the same reference frame but updates the local vector as though accumulating measurement values over time.\nKalman filter would have used for noise propagation: Pk1 = F*Pk*F + Qdk\nFrom Chirikjian, Vol.II, 2012, p.35: Jacobian SE(2), Jr = [cθ sθ 0; -sθ cθ 0; 0 0 1] – i.e. 
dSE2/dX' = SE2([0;0;-θ])\nDX = dX/dt*Dt\nassumed process noise for {}^b Qc = {}^b [x;y;yaw] = [fwd; sideways; rotation.rate]\n\nDev Notes\n\nTODO many operations here can be done in-place.\n\nRelated\n\naccumulateContinuousLocalFrame!, accumulateDiscreteReferenceFrame!, accumulateFactorMeans\n\n\n\n\n\n","category":"function"},{"location":"examples/deadreckontether/#IncrementalInference.accumulateFactorMeans","page":"Dead Reckon Tether","title":"IncrementalInference.accumulateFactorMeans","text":"accumulateFactorMeans(dfg, fctsyms; solveKey)\n\n\nAccumulate chains of binary factors–-potentially starting from a prior–-as a parameteric mean value only.\n\nNotes\n\nNot used during tree inference.\nExpected uses are for user analysis of factors and estimates.\nreal-time dead reckoning chain prediction.\nReturns mean value as coordinates\n\nDevNotes\n\nTODO consolidate with similar approxConvBelief\nTODO compare consolidate with solveParametricConditionals\nTODO compare consolidate with solveFactorParametric\n\nRelated:\n\napproxConvBelief, solveFactorParametric, RoME.MutablePose2Pose2Gaussian\n\n\n\n\n\n","category":"function"},{"location":"examples/deadreckontether/#RoME.MutablePose2Pose2Gaussian","page":"Dead Reckon Tether","title":"RoME.MutablePose2Pose2Gaussian","text":"mutable struct MutablePose2Pose2Gaussian <: AbstractManifoldMinimize\n\nSpecialized Pose2Pose2 factor type (Gaussian), which allows for rapid accumulation of odometry information as a branch on the factor graph.\n\n\n\n\n\n","category":"type"},{"location":"examples/deadreckontether/#Additional-Notes","page":"Dead Reckon Tether","title":"Additional Notes","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"This will be consolidated with text above:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"regardless of slam solution going on in the background, you can then just call val = accumulateFactorMeans(fg, [:x0deadreckon_x0f1])","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"for a new dead reckon tether solution;","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"you can add as many tethers as you want. \nSo if you solving every 10 poses, you just add a new tether x0, x10, x20, x30...\nas the solves complete on previous segments, then you can just get the latest accumulateFactorMean","category":"page"},{"location":"examples/basic_continuousscalar/#Tutorials","page":"Canonical 1D Example","title":"Tutorials","text":"","category":"section"},{"location":"examples/basic_continuousscalar/#IncrementalInference.jl-ContinuousScalar","page":"Canonical 1D Example","title":"IncrementalInference.jl ContinuousScalar","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The application of this tutorial is presented in abstract from which the user is free to imagine any system of relationships: For example, a robot driving in a one dimensional world; or a time traveler making uncertain jumps forwards and backwards in time. The tutorial implicitly shows a multi-modal uncertainty can be introduced from non-Gaussian measurements, and then transmitted through the system. 
The tutorial also illustrates consensus through an additional piece of information, which reduces all stochastic variable marginal beliefs to unimodal only beliefs. This tutorial illustrates how algebraic relations (i.e. residual functions) between multiple stochastic variables are calculated, as well as the final posterior belief estimate, from several pieces of information. Lastly, the tutorial demonstrates how automatic initialization of variables works.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"This tutorial requires RoME.jl and RoMEPlotting packages be installed. In addition, the optional GraphViz package will allow easy visualization of the FactorGraph object structure.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"To start, the two major mathematical packages are brought into scope.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"using IncrementalInference","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"note: Note\nGuidelines for developing your own functions are discussed here in Adding Variables and Factors, and we note that mechanizations and manifolds required for robotic simultaneous localization and mapping (SLAM) has been tightly integrated with the expansion package RoME.jl.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The next step is to describe the inference problem with a graphical model with any of the existing concrete types that inherit from <: AbstractDFG. The first step is to create an empty factor graph object and start populating it with variable nodes. The variable nodes are identified by Symbols, namely :x0, :x1, :x2, :x3.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"# Start with an empty factor graph\nfg = initfg()\n\n# add the first node\naddVariable!(fg, :x0, ContinuousScalar)\n\n# this is unary (prior) factor and does not immediately trigger autoinit of :x0.\naddFactor!(fg, [:x0], Prior(Normal(0,1)))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Factor graphs are bipartite graphs with factors that act as mathematical structure between interacting variables. After adding node :x0, a singleton factor of type Prior (which was defined by the user earlier) is 'connected to' variable node :x0. This unary factor is taken as a Distributions.Normal distribution with zero mean and a standard devitation of 1. Graphviz can be used to visualize the factor graph structure, although the package is not installed by default – $ sudo apt-get install graphviz. 
Furthermore, the drawGraph member definition is given at the end of this tutorial, which allows the user to store the graph image in graphviz supported image types.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"drawGraph(fg, show=true)","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The two node factor graph is shown in the image below.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
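{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Besides drawing the graph, the variables and factors added so far can also be listed programmatically with the ls family of functions. A small sketch on the current graph; the factor label in the comment follows the usual auto-naming convention and is an assumption here:","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"@show ls(fg)   # expect [:x0]\n@show lsf(fg)  # expect something like [:x0f1]","category":"page"},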
      ","category":"page"},{"location":"examples/basic_continuousscalar/#Graph-based-Variable-Initialization","page":"Canonical 1D Example","title":"Graph-based Variable Initialization","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Automatic initialization of variables depend on how the factor graph model is constructed. This tutorial demonstrates this behavior by first showing that :x0 is not initialized:","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"@show isInitialized(fg, :x0) # false","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Why is :x0 not initialized? Since no other variable nodes have been 'connected to' (or depend) on :x0 and future intentions of the user are unknown, the initialization of :x0 is deferred until the latest possible moment. IncrementalInference.jl assumes that the user will generally populate new variable nodes with most of the associated factors before moving to the next variable. By delaying initialization of a new variable (say :x0) until a second newer uninitialized variable (say :x1) depends on :x0, the IncrementalInference algorithms hope to then initialize :x0 with the more information from previous and surrounding variables and factors. Also note that graph-based initialization of variables is a local operation based only on the neighboring nodes – global inference occurs over the entire graph and is shown later in this tutorial.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"By adding :x1 and connecting it through the LinearRelative and Normal distributed factor, the automatic initialization of :x0 is triggered.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addVariable!(fg, :x1, ContinuousScalar)\n# P(Z | :x1 - :x0 ) where Z ~ Normal(10,1)\naddFactor!(fg, [:x0, :x1], LinearRelative(Normal(10.0,1)))\n@show isInitialized(fg, :x0) # true","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Note that the automatic initialization of :x0 is aware that :x1 is not initialized and therefore only used the Prior(Normal(0,1)) unary factor to initialize the marginal belief estimate for :x0. The structure of the graph has now been updated to two variable nodes and two factors.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Global inference requires that the entire factor graph be initialized before the numerical belief computation algorithms can be performed. Notice how the new :x1 variable is not yet initialized:","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"@show isInitialized(fg, :x1) # false","category":"page"},{"location":"examples/basic_continuousscalar/#Visualizing-the-Variable-Probability-Belief","page":"Canonical 1D Example","title":"Visualizing the Variable Probability Belief","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The RoMEPlotting.jl package allows visualization (plotting) of the belief state over any of the variable nodes. Remember the first time executions are slow given required code compilation, and that future versions of these package will use more precompilation to reduce first execution running cost.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"using RoMEPlotting\n\nplotKDE(fg, :x0)","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
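When running headless or from a script, the same belief plot can be written to file rather than displayed; this mirrors the export pattern used later in the Hexagonal 2D example. A minimal sketch, assuming Gadfly is available through RoMEPlotting and that /tmp is writable:
pl = plotKDE(fg, :x0)
# export the figure to disk instead of rendering it interactively
pl |> Gadfly.PDF(\"/tmp/x0_belief.pdf\")  # or Gadfly.PNG(...)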
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"By forcing the initialization of :x1 and plotting its belief estimate,","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"initAll!(fg)\nplotKDE(fg, [:x0, :x1])","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"the predicted influence of the P(Z| X1 - X0) = LinearRelative(Normal(10, 1)) is shown by the red trace.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The red trace (predicted belief of :x1) is noting more than the approximated convolution of the current marginal belief of :x0 with the conditional belief described by P(Z | X1 - X0).","category":"page"},{"location":"examples/basic_continuousscalar/#Defining-A-Mixture-Relative-on-ContinuousScalar","page":"Canonical 1D Example","title":"Defining A Mixture Relative on ContinuousScalar","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Another ContinuousScalar variable :x2 is 'connected' to :x1 through a more complicated MixtureRelative likelihood function.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addVariable!(fg, :x2, ContinuousScalar)\nmmo = Mixture(LinearRelative, \n (hypo1=Rayleigh(3), hypo2=Uniform(30,55)), \n [0.4; 0.6])\naddFactor!(fg, [:x1, :x2], mmo)","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
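Before relying on the mixture inside the factor graph, its shape can be sanity-checked on its own with Distributions.jl. This is only an illustrative sketch of the same 40%/60% Rayleigh/Uniform combination; MixtureModel here is the plain Distributions.jl type, not the Caesar/IncrementalInference Mixture factor used above.
using Distributions, Statistics

# stand-alone 40%/60% mixture of the two hypotheses, for intuition only
mix = MixtureModel([Rayleigh(3.0), Uniform(30.0, 55.0)], [0.4, 0.6])
samples = rand(mix, 1000)  # draws land either in the low Rayleigh region or inside 30-55
@show mean(samples)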
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The mmo variable illustrates how a near arbitrary mixture probability distribution can be used as a conditional relationship between variable nodes in the factor graph. In this case, a 40%/60% balance of a Rayleigh and truncated Uniform distribution which acts as a multi-modal conditional belief. Interpret carefully what a conditional belief of this nature actually means.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Following the tutorial's practical example frameworks (robot navigation or time travel), this multi-modal belief implies that moving from one of the probable locations in :x1 to a location in :x2 by some processes defined by mmo=P(Z | X2, X1) is uncertain to the same 40%/60% ratio. In practical terms, collapsing (through observation of an event) the probabilistic likelihoods of the transition from :x1 to :x2 may result in the :x2 location being at either 15-20, or 40-65-ish units. The predicted belief over :x2 is illustrated by plotting the predicted belief (green trace), after forcing initialization.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"initAll!(fg)\nplotKDE(fg, [:x0, :x1, :x2])","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Adding one more variable :x3 through another LinearRelative(Normal(-50,1))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addVariable!(fg, :x3, ContinuousScalar)\naddFactor!(fg, [:x2, :x3], LinearRelative(Normal(-50, 1)))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"expands the factor graph to to four variables and four factors.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"This part of the tutorial shows how a unimodal likelihood (conditional belief) can transmit the bimodal belief currently contained in :x2.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"initAll!(fg)\nplotKDE(fg, [:x0, :x1, :x2, :x3])","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Notice the blue trace (:x3) is a shifted and slightly spread out version of the initialized belief on :x2, through the convolution with the conditional belief P(Z | X2, X3).","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Global inference over the entire factor graph has still not occurred, and will at this stage produce roughly similar results to the predicted beliefs shown above. Only by introducing more information into the factor graph can inference extract more precise marginal belief estimates for each of the variables. A final piece of information added to this graph is a factor directly relating :x3 with :x0.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addFactor!(fg, [:x3, :x0], LinearRelative(Normal(40, 1)))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Pay close attention to what this last factor means in terms of the probability density traces shown in the previous figure. The blue trace for :x3 has two major modes, one that overlaps with :x0, :x1 near 0 and a second mode further to the left at -40. The last factor introduces a shift LinearRelative(Normal(40,1)) which essentially aligns the left most mode of :x3 back onto :x0.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"This last factor forces a mode selection through consensus. By doing global inference, the new information obtained in :x3 will be equally propagated to :x2 where only one of the two modes will remain.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Global inference is achieved with local computation using two function calls, as follows.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"tree = solveTree!(fg)\n\n# and visualization\nplotKDE(fg, [:x0, :x1, :x2, :x3])","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The resulting posterior marginal beliefs over all the system variables are:","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
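Once solveTree! has completed, numerical summaries of each posterior can be queried without plotting. A minimal sketch follows, assuming the standard DFG/IncrementalInference accessors getPPE and getBelief (field names such as .suggested may differ between package versions).
# parametric point estimates for each variable after the solve
for lbl in [:x0, :x1, :x2, :x3]
  @show lbl, getPPE(fg, lbl).suggested
end

# full nonparametric belief object for :x2, for further analysis
bel_x2 = getBelief(fg, :x2)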
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"It is import to note that although this tutorial ends with all marginal beliefs having near Gaussian shape and are unimodal, that the package supports multi-modal belief estimates during both the prediction and global inference processes. In fact, many of the same underlying inference functions are involved with the automatic initialization process and the global multi-modal iSAM inference procedure. This concludes the ContinuousScalar tutorial particular to the IncrementalInference package.","category":"page"},{"location":"concepts/concepts/#Graph-Concepts","page":"Initial Concepts","title":"Graph Concepts","text":"","category":"section"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"Factor graphs are bipartite consisting of variables and factors, which are connected by edges to form a graph structure. The terminology of nodes is reserved for actually storing the data on some graph oriented technology.","category":"page"},{"location":"concepts/concepts/#What-are-Variables-and-Factors","page":"Initial Concepts","title":"What are Variables and Factors","text":"","category":"section"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"Variables, denoted as the larger nodes in the figur below, represent state variables of interest such as vehicle or landmark positions, sensor calibration parameters, and more. Variables are likely hidden values which are not directly observed, but we want to estimate them them from observed data and at least some minimal algebra structure from probabilistic measurement models.","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"Factors, the smaller nodes in the figure, represent the algebraic interaction between particular variables, which is captured through edges. Factors must adhere to the limits of probabilistic models – for example conditional likelihoods capture the likelihood correlations between variables; while priors (unary to one variable) represent absolute information to be introduced. A heterogeneous factor graph illustration is shown below; also see a broader discussion linked on the literature page.","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"(Image: factorgraphexample)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"We assume factors are constructed from statistically independent measurements (i.e. 
no direct correlations between measurements other than the known algebraic model that might connect them), then we can use the probabilistic chain rule to write the inference operation down (unnormalized):","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"P(Theta | Z) propto P(Z | Theta) P(Theta)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"This unnormalized \"Bayes rule\" is a consequence of two ideas, namely the probabilistic chain rule, where Theta represents all variables and Z represents all measurements or data","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"P(Theta, Z) = P(Z | Theta) P(Theta)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"or similarly,","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"P(Theta, Z) = P(Theta | Z) P(Z)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"The inference objective is to invert this system, so as to find the states given the product between all the likelihood models (based on the data):","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"P(Theta | Z) propto prod_i P(Z_i | Theta_i) prod_j P(Theta_j)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"We use the uncorrelated measurement process assumption that measurements Z are independent given the constructed algebraic model.","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"note: Note\nStrictly speaking, factors are actually \"observed variables\" that are stochastically \"fixed\" and not free for estimation in the conventional SLAM perspective. Waving hands over the fact that factors encode both the algebraic model and the observed measurement values provides a perspective on learning the structure of a problem, including more mundane operations such as sensor calibration or learning of channel transfer models.","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"note: Note\nWikipedia also provides a short overview of factor graphs.","category":"page"},{"location":"examples/using_pcl/#pointclouds_and_pcl","page":"Pointclouds and PCL","title":"Pointclouds and PCL Types","text":"","category":"section"},{"location":"examples/using_pcl/#Introduction-Caesar._PCL","page":"Pointclouds and PCL","title":"Introduction Caesar._PCL","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"A wide-ranging and well-used point cloud library exists called PCL, which is implemented in C++. To get access to many of those features and bridge the Caesar.jl suite of packages, the base PCL.PointCloud types have been implemented in Julia and reside under Caesar._PCL. 
The main types of interest:","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Caesar._PCL.PointCloud\nCaesar._PCL.PCLPointCloud2\nCaesar._PCL.PointXYZ\nCaesar._PCL.Header\nCaesar._PCL.PointField\nCaesar._PCL.FieldMapper","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"The PointCloud types use Colors.jl:","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"using Colors, Caesar\nusing StaticArrays\n\n# one point\nx,y,z,intens = 1f0,0,0,1\npt = Caesar._PCL.PointXYZ(;data=SA[x,y,z,intens])\n\n# etc.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"_PCL.PointCloud","category":"page"},{"location":"examples/using_pcl/#Caesar._PCL.PointCloud","page":"Pointclouds and PCL","title":"Caesar._PCL.PointCloud","text":"struct PointCloud{T<:Caesar._PCL.PointT, P, R}\n\nConvert a PCLPointCloud2 binary data blob into a Caesar._PCL.PointCloud{T} object using a field_map::Caesar._PCL.MsgFieldMap.\n\nUse PointCloud(::Caesar._PCL.PCLPointCloud2) directly or create you own MsgFieldMap:\n\nfield_map = Caesar._PCL.createMapping(msg.fields, field_map)\n\nNotes\n\nTested on Radar data with height z=constant for all points – i.e. 2D sweeping scan where .height=1.\n\nDevNotes\n\nTODO .PCLPointCloud2 convert not tested on regular 3D data from structured light or lidar yet, but current implementation should be close (or already working).\n\nReferences\n\nhttps://pointclouds.org/documentation/classpcl11pointcloud.html\n(seems older) https://docs.ros.org/en/hydro/api/pcl/html/conversions8hsource.html#l00123 \n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Conversion-with-ROS.PointCloud2","page":"Pointclouds and PCL","title":"Conversion with ROS.PointCloud2","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Strong integration between PCL and ROS predominantly through the message types","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"@rosimport std_msgs.msg: Header, @rosimport sensor_msgs.msg: PointField, @rosimport sensor_msgs.msg: PointCloud2.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"These have been integrated through conversions to equivalent Julian types already listed above. ROS conversions requires RobotOS.jl be loaded, see page on using ROS Direct.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"_PCL.PointXYZ\n_PCL.Header\n_PCL.PointField\n_PCL.FieldMapper\n_PCL.PCLPointCloud2","category":"page"},{"location":"examples/using_pcl/#Caesar._PCL.PointXYZ","page":"Pointclouds and PCL","title":"Caesar._PCL.PointXYZ","text":"struct PointXYZ{C<:Colorant, T<:Number} <: Caesar._PCL.PointT\n\nImmutable PointXYZ with color information. E.g. 
PointXYZ{RGB}, PointXYZ{Gray}, etc.\n\nAliases\n\nPointXYZRGB\nPointXYZRGBA\n\nSee \n\nhttps://pointclouds.org/documentation/structpcl11pointxyz.html\nhttps://pointclouds.org/documentation/point__types8hppsource.html\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar._PCL.Header","page":"Pointclouds and PCL","title":"Caesar._PCL.Header","text":"struct Header\n\nImmutable Header.\n\nSee https://pointclouds.org/documentation/structpcl11pclheader.html\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar._PCL.PointField","page":"Pointclouds and PCL","title":"Caesar._PCL.PointField","text":"struct PointField\n\nHow a point is stored in memory.\n\nhttps://pointclouds.org/documentation/structpcl11pclpoint_field.html\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar._PCL.FieldMapper","page":"Pointclouds and PCL","title":"Caesar._PCL.FieldMapper","text":"struct FieldMapper{T<:Caesar._PCL.PointT}\n\nWhich field values to store and how to map them to values during serialization.\n\nhttps://docs.ros.org/en/hydro/api/pcl/html/conversions8hsource.html#l00091\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar._PCL.PCLPointCloud2","page":"Pointclouds and PCL","title":"Caesar._PCL.PCLPointCloud2","text":"struct PCLPointCloud2\n\nImmutable point cloud type. Immutable for performance, computations are more frequent and intensive than anticipated frequency of constructing new clouds.\n\nReferences:\n\nhttps://pointclouds.org/documentation/structpcl11pclpoint_cloud2.html\nhttps://pointclouds.org/documentation/classpcl11pointcloud.html\nhttps://pointclouds.org/documentation/common2include2pcl2point__cloud8h_source.html\n\nSee also: Caesar._PCL.toROSPointCloud2\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Aligning-Point-Clouds","page":"Pointclouds and PCL","title":"Aligning Point Clouds","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Caesar.jl is currently growing support for two related point cloud alignment methods, namely:","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Continuous density function alignment ScatterAlignPose2, ScatterAlignPose3,\nTraditional Iterated Closest Point (with normals) alignICP_Simple.","category":"page"},{"location":"examples/using_pcl/#sec_scatter_align","page":"Pointclouds and PCL","title":"ScatterAlign for Pose2 and Pose3","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"These factors use minimum mean distance embeddings to cost the alignment between pointclouds and supports various other interesting function alignment cases. 
These functions require Images.jl, see page Using Images.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Caesar.ScatterAlign\nCaesar.ScatterAlignPose2\nCaesar.ScatterAlignPose3","category":"page"},{"location":"examples/using_pcl/#Caesar.ScatterAlign","page":"Pointclouds and PCL","title":"Caesar.ScatterAlign","text":"ScatterAlign{P,H1,H2} where {H1 <: Union{<:ManifoldKernelDensity, <:HeatmapGridDensity}, \n H2 <: Union{<:ManifoldKernelDensity, <:HeatmapGridDensity}}\n\nAlignment factor between point cloud populations, using either\n\na continuous density function cost: ApproxManifoldProducts.mmd, or\na conventional iterative closest point (ICP) algorithm (when .sample_count < 0).\n\nThis factor can support very large density clouds, with sample_count subsampling for individual alignments.\n\nKeyword Options:\n\nsample_count::Int = 100, number of subsamples to use during each alignment in getSample. \nValues greater than 0 use MMD alignment, while values less than 0 use ICP alignment.\nbw::Real, the bandwidth to use for mmd distance\nrescale::Real\nN::Int\ncvt::Function, convert function for image when using HeatmapGridDensity.\nuseStashing::Bool = false, to switch serialization strategy to using Stashing.\ndataEntry_cloud1::AbstractString = \"\", blob identifier used with stashing.\ndataEntry_cloud2::AbstractString = \"\", blob identifier used with stashing.\ndataStoreHint::AbstractString = \"\"\n\nExample\n\narp2 = ScatterAlignPose2(img1, img2, 2) # e.g. 2 meters/pixel \n\nNotes\n\nSupports two belief \"clouds\" as either\nManifoldKernelDensitys, or\nHeatmapGridDensitys.\nStanard cvt argument is lambda function to convert incoming images to user convention of image axes,\nGeography map default cvt flips image rows so that Pose2 +xy-axes corresponds to img[-x,+y]\ni.e. rows down is \"North\" and columns across from top left corner is \"East\".\nUse rescale to resize the incoming images for lower resolution (faster) correlations\nBoth images passed to the construct must have the same type some matrix of type T.\nExperimental support for Stashing based serialization.\n\nDevNotes:\n\nTODO Upgrade to use other information during alignment process, e.g. 
point normals for Pose3.\n\nSee also: ScatterAlignPose2, ScatterAlignPose3, overlayScanMatcher, Caesar._PCL.alignICP_Simple.\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar.ScatterAlignPose2","page":"Pointclouds and PCL","title":"Caesar.ScatterAlignPose2","text":"ScatterAlignPose2(im1::Matrix, im2::Matrix, domain; options...)\nScatterAlignPose2(; mkd1::ManifoldKernelDensity, mkd2::ManifoldKernelDensity, moreoptions...)\n\nSpecialization of ScatterAlign for Pose2.\n\nSee also: ScatterAlignPose3\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar.ScatterAlignPose3","page":"Pointclouds and PCL","title":"Caesar.ScatterAlignPose3","text":"ScatterAlignPose3(; cloud1=mkd1::ManifoldKernelDensity, \n cloud2=mkd2::ManifoldKernelDensity, \n moreoptions...)\n\nSpecialization of ScatterAlign for Pose3.\n\nSee also: ScatterAlignPose2\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"note: Note\nFuture work may include ScatterAlignPose2z, please open issues at Caesar.jl if this is of interest.","category":"page"},{"location":"examples/using_pcl/#Iterative-Closest-Point","page":"Pointclouds and PCL","title":"Iterative Closest Point","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Ongoing work is integrating ICP into a factor similar to ScatterAlign.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Caesar._PCL.alignICP_Simple","category":"page"},{"location":"examples/using_pcl/#Caesar._PCL.alignICP_Simple","page":"Pointclouds and PCL","title":"Caesar._PCL.alignICP_Simple","text":"alignICP_Simple(\n X_fix,\n X_mov;\n correspondences,\n neighbors,\n min_planarity,\n max_overlap_distance,\n min_change,\n max_iterations,\n verbose,\n H\n)\n\n\nAlign two point clouds using ICP (with normals).\n\nExample:\n\nusing Downloads, DelimitedFiles\nusing Colors, Caesar\n\n# get some test data (~50mb download)\nlidar1_url = \"https://github.com/JuliaRobotics/CaesarTestData.jl/raw/main/data/lidar/simpleICP/terrestrial_lidar1.xyz\"\nlidar2_url = \"https://github.com/JuliaRobotics/CaesarTestData.jl/raw/main/data/lidar/simpleICP/terrestrial_lidar2.xyz\"\nio1 = PipeBuffer()\nio2 = PipeBuffer()\nDownloads.download(lidar1_url, io1)\nDownloads.download(lidar2_url, io2)\n\nX_fix = readdlm(io1)\nX_mov = readdlm(io2)\n\nH, HX_mov, stat = Caesar._PCL.alignICP_Simple(X_fix, X_mov; verbose=true)\n\nNotes\n\nMostly consolidated with Caesar._PCL types.\nInternally uses Caesar._PCL._ICP_PointCloud which was created to help facilite consolidation of code:\nModified from www.github.com/pglira/simpleICP (July 2022).\nSee here for a brief example on Visualizing Point Clouds.\n\nDevNotes\n\nTODO switch rigid transfrom to Caesar._PCL.apply along with performance considerations, instead of current transform!.\n\nSee also: PointCloud\n\n\n\n\n\n","category":"function"},{"location":"examples/using_pcl/#Visualizing-Point-Clouds","page":"Pointclouds and PCL","title":"Visualizing Point Clouds","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"See work in progress on alng with example code on the page 3D Visualization.","category":"page"},{"location":"principles/initializingOnBayesTree/#Advanced-Topics-on-Bayes-Tree","page":"Advanced Bayes Tree Topics","title":"Advanced Topics on Bayes 
Tree","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/#Definitions","page":"Advanced Bayes Tree Topics","title":"Definitions","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Squashing or collapsing the Bayes tree back into a 'flat' Bayes net, by chain rule: ","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"p(xy) = p(xy)p(y) = p(yx)p(x) \np(xyz) = p(xyz)p(yz) = p(xyz)p(z) = p(xyz)p(yz)p(z) \np(xyz) = p(xyz)p(y)p(z) textiff y is independent of z also p(yz)=p(y)","category":"page"},{"location":"principles/initializingOnBayesTree/#Are-cliques-in-the-Bayes-(Junction)-tree-densly-connected?","page":"Advanced Bayes Tree Topics","title":"Are cliques in the Bayes (Junction) tree densly connected?","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Yes and no. From the chordal Bayes net's perspective (obtained through the elimination game in order to build the clique tree), the nodes of the Bayes tree are indeed fully connected subgraphs (they are called cliques after all!). From the perspective of the subgraph of the original factor graph induced by the clique's variables, cliques need not be fully connected, since we are assuming the factor graph as sparse, and that no new information can be created out of nothing–-hence each clique must be sparse. That said, the potential exists for the inference within a clique to become densly connected (experience full \"fill-in\"). See the paper on square-root-SAM, where the connection between dense covariance matrix of a Kalman filter (EKF-SLAM) is actually related to the inverse square root (rectangular) matrix which structure equivalent to the clique subgraph adjacency matrix. ","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Also remember that the intermediate Bayes net (which has densly connected cliques) hides the underlying tree structure – think of the Bayes net as looking at the tree from on top or below, thereby encoding the dense connectivity in the structure of the tree itself. All information below any clique of the tree is encoded in the upward marginal belief messages at that point (i.e. the densly connected aspects pertained lower down in the tree).","category":"page"},{"location":"principles/initializingOnBayesTree/#LU/QR-vs.-Belief-Propagation","page":"Advanced Bayes Tree Topics","title":"LU/QR vs. Belief Propagation","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"LU/QR is a special case (Parametric/Linear) of more general belief propagation. The story though is more intricate, where QR/LU assume that product-factors can be formed through the chain rule – using congruency – it is not that straight forward with general beliefs. 
In the general case we are almost forced to use belief propagation, which in turn implies special care is needed to describe the relationship between sparse factor graph fragments in cliques on the tree, and the more densely connected structure of the Bayes Net.","category":"page"},{"location":"principles/initializingOnBayesTree/#Bayes-Tree-vs-Bayes-Net","page":"Advanced Bayes Tree Topics","title":"Bayes Tree vs Bayes Net","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"The Bayes tree is a purely symbolic structure – i.e. special grouping of factors that all come from the factor graph joint product (product of independently sampled likelihood/conditional models):","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Theta Z propto prod_i Z_i=z_i Theta_i ","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"A sparse factor graph problem can be squashed into smaller dense problem of product-factor conditionals (from variable elimination). Therefore each product-factor (aka \"smart factor\" in other uses of the language) represent both the factors as well as the sequencing of cliques in that branch. This process repeats recursively from the root down to the leaves. The leaves of the tree have no further reduced product factors condensing child cliques below, and therefore sparse factor fragments can be computed to start the upward belief propagation process. More importantly, as belief propagation progresses up the tree, upward belief messages (on clique separators) capture the same structure as the densely connected Bayes net but each clique in the Bayes tree still only contains sparse fragments from the original factor graph. The structure of the tree (combined parent-child relationships) encodes the same information as the product-factor conditionals!","category":"page"},{"location":"principles/initializingOnBayesTree/#Initialization-on-the-Tree","page":"Advanced Bayes Tree Topics","title":"Initialization on the Tree","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"It more challenging but possible to initialize all variables in a factor graph through belief propagation on the Bayes tree.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"As a thought experiment: Wouldn't it be awesome if we could compile the upsolve as a symbolic process only, and only assign numerical values once during a single downsolve procedure. The origin of this idea comes from the realization that a complete upsolve on the Bayes (Junction) tree is very nearly the same thing finding good numerical initialization values for the factor graph. If the up-init-solve can be performed as a purely symbolic process, it would greatly simplify numerical computations by deferring them to the down solve alone.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Trying to do initialization for real, we might want to replace up-init-symbolic operations with numerical equivalents. 
Either way, it would be worth knowing what the equivalent numerical operations of a full up-init-solve of an uninitialized factor graph would look like.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"In general, if a clique can not be initialized based on information from lower down in that branch of the tree; more information is need from the parent. In the Gaussian (more accurately the congruent factor) case, all information lower down in the branch–-i.e. the relationships between variables in parent–-can be summarized by a new conditional product-factor that is computed with the probabilistic chain rule. To restate, the process of squashing the Bayes tree branch back down into a Bayes net, is effectively the the chain rule process used in variable elimination.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"note: Note\nQuestion, are cascading up and down solves are required if you do not use eliminated factor conditionals in parent cliques.","category":"page"},{"location":"principles/initializingOnBayesTree/#Gaussian-only-special-case","page":"Advanced Bayes Tree Topics","title":"Gaussian-only special case","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Elimination of variables and factors using chain rule reduction is a special case of belief propagation, and thus far only the reduction of congruent beliefs (such as Gaussian) is known.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"These computations can be parallelized depending on the conditional independence structure of the Bayes tree – separate branches are effectively separate chain rule instances. This is precisely the same process exploited by multi-frontal QR matrix factorization.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"On the down solve the conditionals–-from eliminated chains of previously eliminated variables and factors–-can be used for inference directly in the parent. ","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"See node x1 to x3 in IncrementalInference issue 464. It does not branch or provide additional prior information. so it is collapsed into one factor between x1 and x3, solved in the root and the individual variable can be solved by inference.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"note: Note\nQuestion, what does the Jacobian in Gaussian only case mean with regard to a symbolic upsolve?","category":"page"},{"location":"faq/#Frequently-Asked-Questions","page":"FAQ","title":"Frequently Asked Questions","text":"","category":"section"},{"location":"faq/#Factor-Graphs:-why-not-just-filter?","page":"FAQ","title":"Factor Graphs: why not just filter?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Why can't I just filter, or what is the connection with FGs? See the \"Principles\" section in the documentation. 
","category":"page"},{"location":"faq/#Why-worry-about-non-Gaussian-Probabilities","page":"FAQ","title":"Why worry about non-Gaussian Probabilities","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"The non-Gaussian/multimodal section in the docs is dedicated to precisely this question.","category":"page"},{"location":"faq/#Why-Julia","page":"FAQ","title":"Why Julia","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"The JuliaLang and (JuliaPro) is an open-source Just-In-Time (JIT) & optionally precompiled, strongly-typed, and high-performance programming language. The algorithmic code is implemented in Julia for many reasons, such as agile development, high level syntax, performance, type safety, multiple dispatch replacement for object oriented which exhibits several emergent properties, parallel computing, dynamic development, cross compilable (with gcc and clang) and foundational cross-platform (LLVM) technologies. See JuliaCon2018 highlights video. Julia can be thought of as either {C+, Mex (done right), or as a modern Fortran replacement}.","category":"page"},{"location":"faq/#Current-Julia-version?","page":"FAQ","title":"Current Julia version?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Caesar.jl and packages are currently targeting Julia version as per the local install page.","category":"page"},{"location":"faq/#Just-In-Time-Compiling-(i.e.-why-are-first-runs-slow?)","page":"FAQ","title":"Just-In-Time Compiling (i.e. why are first runs slow?)","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Julia uses just-in-time compilation (unless already pre-compiled) which takes additional time the first time a new function is called. Additional calls to a cached function are fast from the second call onwards since the static binary code is now cached and ready for use.","category":"page"},{"location":"faq/#How-does-garbage-collection-work?","page":"FAQ","title":"How does garbage collection work?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"A short description of Julia's garbage collection is described in Discourse here.","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"note: Note\nGarbage collection can be influenced in a few ways to allow more certainty about operational outcome, see the Julia Docs Garbage Collection Internal functions like enable, preserve, safepoint, etc.","category":"page"},{"location":"faq/#Using-Julia-in-real-time-systems?","page":"FAQ","title":"Using Julia in real-time systems?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"See the JuliaCon presentation by rdeits here.","category":"page"},{"location":"faq/#Can-Caesar.jl-be-used-in-other-languages-beyond-Julia?-Yes.","page":"FAQ","title":"Can Caesar.jl be used in other languages beyond Julia? Yes.","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"The Caesar.jl project is expressly focused on making this algorithmic code available to C/Fortran/C++/C#/Python/Java/JS. Julia itself offers many additional interops. ZMQ and HTTP/WebSockets are the standardized interfaces of choice, please see details at the multi-language section). 
Consider opening issues or getting in touch for more information.","category":"page"},{"location":"faq/#Can-Julia-Compile-Binaries-/-Shared-Libraries","page":"FAQ","title":"Can Julia Compile Binaries / Shared Libraries","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Yes, see the Compile Binaries Page.","category":"page"},{"location":"faq/#Can-Julia-be-Embedded-into-C/C","page":"FAQ","title":"Can Julia be Embedded into C/C++","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Yes, see the Julia embedding documentation page.","category":"page"},{"location":"faq/#ROS-Integration","page":"FAQ","title":"ROS Integration","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"ROS and ZMQ interfaces are closely related. Please see the ROS Integration Page for details on using ROS with Caesar.jl.","category":"page"},{"location":"faq/#Why-ZMQ-Middleware-Layer-(multilang)?","page":"FAQ","title":"Why ZMQ Middleware Layer (multilang)?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Zero Message Queue (ZMQ) is a widely used data transport layer used to build various other multiprocess middleware with wide support among other programming languages. Caesar.jl has on been used with a direct ZMQ type link, which is similar to a ROS workflow. Contributions are welcome for binding ZMQ endpoints for a non-ROS messaging interface.","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Note ZMQ work has been happening on and off based on behind the main priority on resolving abstractions with the DistributedFactorGraphs.jl framework. See ongoing work for the ZMQ interface.","category":"page"},{"location":"faq/#What-is-supersolve?","page":"FAQ","title":"What is supersolve?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"When multiple numerical values/solutions exists for the (or nearly) same factor graph – then solutions, including a reference solution (ground truth) can just be stacked in that variable. See and comment on a few cases here.","category":"page"},{"location":"faq/#Variable-Scope-in-For-loop-Error","page":"FAQ","title":"Variable Scope in For loop Error","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Julia wants you to be specific about global variables, and variables packed in a development script at top level are created as globals. Globals can be accessed using the global varname at the start of the context. When writing for loops (using Julia versions 0.7 through 1.3) stricter rules on global scoping applied. The purest way to ensure scope of variables are properly managed in the REPL or Juno script Main context is using the let syntax (not required post Julia 1.4).","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"fg = ...\ntree = solveTree!(fg)\n...\n# and then a loop here:\nlet tree=tree, fg=fg\nfor i 2:100\n # global tree, fg # forcing globals is the alternative\n # add variables and stuff\n ...\n # want to solve again\n tree = solveTree!(fg, tree)\n ...\n # more stuff\nend\nend # let block","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"See Stack overflow on let or the Julia docs page on scoping. Also note it is good practice to use local scope (i.e. 
inside a function) variables for performance reasons.","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"note: Note\nThis behaviour is going to change in Julia 1.5 back to what Julia 0.6 was in interactive cases, and therefore likely less of a problem in future versions. See Julia 1.5 Change Notes, ([#28789], [#33864]).","category":"page"},{"location":"faq/#How-to-Enable-@debug-Logging.jl","page":"FAQ","title":"How to Enable @debug Logging.jl","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"https://stackoverflow.com/questions/53548681/how-to-enable-debugging-messages-in-juno-julia-editor","category":"page"},{"location":"faq/#Julia-Images.jl-Axis-Convention","page":"FAQ","title":"Julia Images.jl Axis Convention","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Julia Images.jl follows the common `::Array column-major–-i.e. vertical-major–-index convention","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"That is img[vertical, horizontal]\nSee https://evizero.github.io/Augmentor.jl/images/#Vertical-Major-vs-Horizontal-Major-1 for more details.\nAlso, https://juliaimages.org/latest/pkgs/axes/#Names-and-locations","category":"page"},{"location":"faq/#How-does-JSON-Schema-work?","page":"FAQ","title":"How does JSON-Schema work?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Caesar.jl intends to follow json-schema.org, see step-by-step guide here.","category":"page"},{"location":"faq/#How-to-get-Julia-memory-allocation-points?","page":"FAQ","title":"How to get Julia memory allocation points?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"See discourse discussion.","category":"page"},{"location":"faq/#Increase-Linux-Open-File-Limit?","page":"FAQ","title":"Increase Linux Open File Limit?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"If you see the error \"Open Files Limit\", please follow these intructions on your local system. 
This is likely to happen when debug code and a large number of files are stored in the general solution specific logpath.","category":"page"},{"location":"examples/canonical_graphs/#Canonical-Graphs","page":"Canonical Generators","title":"Canonical Graphs","text":"","category":"section"},{"location":"examples/canonical_graphs/","page":"Canonical Generators","title":"Canonical Generators","text":"try tab-completion in the REPL:","category":"page"},{"location":"examples/canonical_graphs/","page":"Canonical Generators","title":"Canonical Generators","text":"IncrementalInference.generateGraph_Kaess\nIncrementalInference.generateGraph_TestSymbolic\nIncrementalInference.generateGraph_CaesarRing1D\nIncrementalInference.generateGraph_LineStep\nIncrementalInference.generateGraph_EuclidDistance\nRoME.generateGraph_Circle\nRoME.generateGraph_ZeroPose\nRoME.generateGraph_Hexagonal\nRoME.generateGraph_Beehive!\nRoME.generateGraph_Helix2D!\nRoME.generateGraph_Helix2DSlew!\nRoME.generateGraph_Helix2DSpiral!","category":"page"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_Kaess","page":"Canonical Generators","title":"IncrementalInference.generateGraph_Kaess","text":"generateGraph_Kaess(; graphinit)\n\n\nCanonical example from literature, Kaess, et al.: ISAM2, IJRR, 2011.\n\nNotes\n\nPaper variable ordering: p = [:l1;:l2;:x1;:x2;:x3]\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_TestSymbolic","page":"Canonical Generators","title":"IncrementalInference.generateGraph_TestSymbolic","text":"generateGraph_TestSymbolic(; graphinit)\n\n\nCanonical example introduced by Borglab.\n\nNotes\n\nKnown variable ordering: p = [:x1; :l3; :l1; :x5; :x2; :l2; :x4; :x3]\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_CaesarRing1D","page":"Canonical Generators","title":"IncrementalInference.generateGraph_CaesarRing1D","text":"generateGraph_CaesarRing1D(; graphinit)\n\n\nCanonical example introduced originally as Caesar Hex Example.\n\nNotes\n\nPaper variable ordering: p = [:x0;:x2;:x4;:x6;:x1;:l1;:x5;:x3;]\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_LineStep","page":"Canonical Generators","title":"IncrementalInference.generateGraph_LineStep","text":"generateGraph_LineStep(\n lineLength;\n poseEvery,\n landmarkEvery,\n posePriorsAt,\n landmarkPriorsAt,\n sightDistance,\n vardims,\n noisy,\n graphinit,\n σ_pose_prior,\n σ_lm_prior,\n σ_pose_pose,\n σ_pose_lm,\n solverParams\n)\n\n\nContinuous, linear scalar and multivariate test graph generation. Follows a line with the pose id equal to the ground truth.\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_EuclidDistance","page":"Canonical Generators","title":"IncrementalInference.generateGraph_EuclidDistance","text":"generateGraph_EuclidDistance(; ...)\ngenerateGraph_EuclidDistance(\n points;\n dist,\n σ_prior,\n σ_dist,\n N,\n graphinit\n)\n\n\nGenerate a EuclidDistance test graph where 1 landmark position is unknown. 
\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Circle","page":"Canonical Generators","title":"RoME.generateGraph_Circle","text":"generateGraph_Circle(; ...)\ngenerateGraph_Circle(\n poses;\n fg,\n offsetPoses,\n autoinit,\n graphinit,\n landmark,\n loopClosure,\n stopEarly,\n biasTurn,\n kappaOdo,\n cyclePoses\n)\n\n\nGenerate a canonical factor graph: driving in a circular pattern with one landmark.\n\nNotes\n\nPoses, :x0, :x1,... Pose2,\nOdometry, :x0x1f1, etc., Pose2Pose2 (Gaussian)\nOPTIONAL: 1 Landmark, :l1, Point2,\n2 Sightings, :x0l1f1, :x6l1f1, RangeBearing (Gaussian)\n\nExample\n\nusing RoME\n\nfg = generateGraph_Hexagonal()\ndrawGraph(fg, show=true)\n\nDevNotes\n\nTODO refactor to use new calcHelix_T.\n\nRelated\n\ngenerateGraph_Circle, generateGraph_Kaess, generateGraph_TwoPoseOdo\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_ZeroPose","page":"Canonical Generators","title":"RoME.generateGraph_ZeroPose","text":"generateGraph_ZeroPose(\n;\n varType,\n graphinit,\n solverParams,\n dfg,\n doRef,\n useMsgLikelihoods,\n label,\n priorType,\n μ0,\n Σ0,\n priorArgs,\n solvable,\n variableTags,\n factorTags,\n postpose_cb\n)\n\n\nGenerate a canonical factor graph with a Pose2 :x0 and MvNormal with covariance P0.\n\nNotes\n\nUse e.g. varType=Point2 to change from the default variable type Pose2.\nUse priorArgs::Tuple to override the default input arguments to priorType.\nUse callback postpose_cb(g::AbstractDFG,lastpose::Symbol) to call user operations after each pose step.\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Hexagonal","page":"Canonical Generators","title":"RoME.generateGraph_Hexagonal","text":"generateGraph_Hexagonal(\n;\n fg,\n landmark,\n loopClosure,\n N,\n autoinit,\n graphinit\n)\n\n\nGenerate a canonical factor graph: driving in a hexagonal circular pattern with one landmark.\n\nNotes\n\n7 Poses, :x0-:x6, Pose2,\n1 Landmark, :l1, Point2,\n6 Odometry, :x0x1f1, etc., Pose2Pose2 (Gaussian)\n2 Sightings, :x0l1f1, :x6l1f1, RangeBearing (Gaussian)\n\nExample\n\nusing RoME\n\nfg = generateGraph_Hexagonal()\ndrawGraph(fg, show=true)\n\nRelated\n\ngenerateGraph_Circle, generateGraph_Kaess, generateGraph_TwoPoseOdo, generateGraph_Boxes2D!\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Beehive!","page":"Canonical Generators","title":"RoME.generateGraph_Beehive!","text":"generateGraph_Beehive!(; ...)\ngenerateGraph_Beehive!(\n poseCountTarget;\n graphinit,\n dfg,\n useMsgLikelihoods,\n solvable,\n refKey,\n addLandmarks,\n landmarkSolvable,\n poseRegex,\n pose0,\n yaw0,\n μ0,\n postpose_cb,\n locality,\n atol\n)\n\n\nPretend a bee is walking in a hive where each step (pose) follows one edge of an imaginary honeycomb lattice, and at after each step a new direction left or right is stochastically chosen and the process repeats.\n\nNotes\n\nThe keyword locality=1 is a positive ::Real ∈ [0,∞) value, where higher numbers imply direction decisions are more sticky for multiple steps.\nUse keyword callback function postpose_cb = (fg, lastpose) -> ... 
to hook in your own features right after each new pose step.\n\nDevNotes\n\nTODO rewrite as a recursive generator function instead.\n\nSee also: generateGraph_Honeycomb!, generateGraph_Hexagonal, generateGraph_ZeroPose\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Helix2D!","page":"Canonical Generators","title":"RoME.generateGraph_Helix2D!","text":"generateGraph_Helix2D!(; ...)\ngenerateGraph_Helix2D!(\n numposes;\n posesperturn,\n graphinit,\n useMsgLikelihoods,\n solverParams,\n dfg,\n radius,\n spine_t,\n xr_t,\n yr_t,\n poseRegex,\n μ0,\n refKey,\n Qd,\n postpose_cb\n)\n\n\nGeneralized canonical graph generator function for helix patterns.\n\nNotes\n\nassumes poses are labeled according to r\"x\\d+\"\nGradient (i.e. angle) calculations are on the order of 1e-8.\nUse callback spine_t(t)::Complex to modify how the helix pattern is moved in x, y along the progression of t,\nSee related wrapper functions for convenient generators of helix patterns in 2D,\nReal valued xr_t(t) and yr_t(t) can be modified (and will override) complex valued spine_t instead.\nuse postpose_cb = (fg_, lastestpose) -> ... for additional user features after each new pose\ncan be used to grow a graph with repeated calls, but keyword parameters are assumed identical between calls.\n\nSee also: generateGraph_Helix2DSlew!, generateGraph_Helix2DSpiral!, generateGraph_Beehive!\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Helix2DSlew!","page":"Canonical Generators","title":"RoME.generateGraph_Helix2DSlew!","text":"generateGraph_Helix2DSlew!(; ...)\ngenerateGraph_Helix2DSlew!(\n numposes;\n slew_x,\n slew_y,\n spine_t,\n kwargs...\n)\n\n\nGenerate canonical slewed helix graph (like a flattened slinky).\n\nNotes\n\nUse slew_x and slew_y to pull the \"slinky\" out in different directions at constant rate.\nSee generalized helix generator for more details. \nDefaults are choosen to slew along x and have multple trajectory intersects between consecutive loops of the helix.\n\nRelated\n\ngenerateGraph_Helix2D!, generateGraph_Helix2DSpiral!\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Helix2DSpiral!","page":"Canonical Generators","title":"RoME.generateGraph_Helix2DSpiral!","text":"generateGraph_Helix2DSpiral!(; ...)\ngenerateGraph_Helix2DSpiral!(\n numposes;\n rate_r,\n rate_a,\n spine_t,\n kwargs...\n)\n\n\nGenerate canonical helix graph that expands along a spiral pattern, analogous flower petals.\n\nNotes\n\nThis function wraps the complex spine_t(t) function to generate the spiral pattern.\nrate_a and rate_r can be varied for different spiral behavior.\nSee generalized helix generator for more details. 
\nDefaults are choosen to slewto have multple trajectory intersects between consecutive loops of the helix and do a decent job of moving around coverage area with a relative balance of encircled area sizes.\n\nRelated \n\ngenerateGraph_Helix2D!, generateGraph_Helix2DSlew!\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/","page":"Canonical Generators","title":"Canonical Generators","text":"","category":"page"},{"location":"examples/basic_hexagonal2d/#Hexagonal-2D-SLAM-Example-(Local-Compute)","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM Example (Local Compute)","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"A simple 2D robot trajectory example is expanded below using techniques developed in simultaneous localization and mapping (SLAM). This example is available as a single script here.","category":"page"},{"location":"examples/basic_hexagonal2d/#Creating-the-Factor-Graph-with-Pose2","page":"Hexagonal 2D SLAM","title":"Creating the Factor Graph with Pose2","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"The first step is to load the required modules, and in our case we will add a few Julia processes to help with the compute later on. ","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# add more julia processes\nnprocs() < 4 ? addprocs(4-nprocs()) : nothing\n\n# tell Julia that you want to use these modules/namespaces\nusing RoME, Distributions, LinearAlgebra","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"After loading the RoME and Distributions modules, we construct a local factor graph object in memory:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# start with an empty factor graph object\nfg = initfg()\n\n# Add the first pose :x0\naddVariable!(fg, :x0, Pose2)\n\n# Add at a fixed location PriorPose2 to pin :x0 to a starting location\naddFactor!(fg, [:x0], PriorPose2(MvNormal(zeros(3), 0.01*Matrix(LinearAlgebra.I,3,3))) )","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"A factor graph object fg (of type <:AbstractDFG) has been constructed; the first pose :x0 has been added; and a prior factor setting the origin at [0,0,0] over variable node dimensions [x,y,θ] in the world frame. The type Pose2 is used to indicate what variable is stored in the node. Caesar.jl allows a little more freedom in how factor and variable nodes can be connected, while still allowing for type-assertion to occur.","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"NOTE Julia uses just-in-time compilation (unless pre-compiled) which is slow the first time a function is called but fast from the second call onwards, since the static function is now cached and ready for use.","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"The next 6 nodes are added with odometry in an counter-clockwise hexagonal manner. 
Note how variables are denoted with symbols, :x2 == Symbol(\"x2\"):","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# Drive around in a hexagon\nfor i in 0:5\n psym = Symbol(\"x$i\")\n nsym = Symbol(\"x$(i+1)\")\n addVariable!(fg, nsym, Pose2)\n pp = Pose2Pose2(MvNormal([10.0;0;pi/3], Matrix(Diagonal([0.1;0.1;0.1].^2))))\n addFactor!(fg, [psym;nsym], pp )\nend","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"At this point it would be good to see what the factor graph actually looks like:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"drawGraph(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"You should see the program evince open with this visual:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: exfg2d)","category":"page"},{"location":"examples/basic_hexagonal2d/#Performing-Inference","page":"Hexagonal 2D SLAM","title":"Performing Inference","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"Let's run the multimodal-incremental smoothing and mapping (mm-iSAM) solver against this fg object:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# perform inference, and remember first runs are slower owing to Julia's just-in-time compiling\ntree = solveTree!(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"This will take a couple of seconds (including first time compiling for all Julia processes). If you wanted to see the Bayes tree operations during solving, set the following parameters before calling the solver:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"getSolverParams(fg).drawtree = true\ngetSolverParams(fg).showtree = true","category":"page"},{"location":"examples/basic_hexagonal2d/#Some-Visualization-Plot","page":"Hexagonal 2D SLAM","title":"Some Visualization Plot","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"2D plots of the factor graph contents is provided by the RoMEPlotting package. 
See further discussion on visualizations and packages here.","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"## Inter-operating visualization packages for Caesar/RoME/IncrementalInference exist\nusing RoMEPlotting\n\n# For Juno/Jupyter style use\npl = drawPoses(fg)\n\n# For scripting use-cases you can export the image\npl |> Gadfly.PDF(\"/tmp/test.pdf\") # or PNG(...)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: test)","category":"page"},{"location":"examples/basic_hexagonal2d/#Adding-Landmarks-as-Point2","page":"Hexagonal 2D SLAM","title":"Adding Landmarks as Point2","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"Suppose some sensor detected a feature of interest with an associated range and bearing measurement. The new variable and measurement can be included into the factor graph as follows:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# Add landmarks with Bearing range measurements\naddVariable!(fg, :l1, Point2, tags=[:LANDMARK;])\np2br = Pose2Point2BearingRange(Normal(0,0.1),Normal(20.0,1.0))\naddFactor!(fg, [:x0; :l1], p2br)\n\n# Initialize :l1 numerical values but do not rerun solver\ninitAll!(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"NOTE The default behavior for initialization of variable nodes implies the last variable node added will not have any numerical values yet, please see ContinuousScalar Tutorial for deeper discussion on automatic initialization (autoinit). A slightly expanded plotting function will draw both poses and landmarks (and currently assumes labels starting with :x and :l respectively)–-notice the new landmark bottom right:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"drawPosesLandms(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: test)","category":"page"},{"location":"examples/basic_hexagonal2d/#One-type-of-Loop-Closure","page":"Hexagonal 2D SLAM","title":"One type of Loop-Closure","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"Loop-closures are a major part of SLAM based state estimation. 
One illustration is to take a second sighting of the same :l1 landmark from the last pose :x6; followed by repeating the inference and re-plotting the result–-notice the tighter confidences over all variables:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# Add landmarks with Bearing range measurements\np2br2 = Pose2Point2BearingRange(Normal(0,0.1),Normal(20.0,1.0))\naddFactor!(fg, [:x6; :l1], p2br2)\n\n# solve\ntree = solveTree!(fg, tree)\n\n# redraw\npl = drawPosesLandms(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: test)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"This concludes the Hexagonal 2D SLAM example.","category":"page"},{"location":"examples/basic_hexagonal2d/#Interest:-The-Bayes-(Junction)-tree","page":"Hexagonal 2D SLAM","title":"Interest: The Bayes (Junction) tree","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"The Bayes (Junction) tree is used as an acyclic (has no loops) computational object, an exact algebraic refactorizating of factor graph, to perform the associated sum-product inference. The visual structure of the tree can be extracted by modifying the command tree = wipeBuildNewTree!(fg, drawpdf=true) to produce representations such as this in bt.pdf.","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: exbt2d)","category":"page"},{"location":"concepts/multisession/#Multisession-Operation","page":"Multi-session/agent Solving","title":"Multisession Operation","text":"","category":"section"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Having all the data consolidated in a factor graph allows us to do something we find really exciting: reason against data for different robots, different robot sessions, even different users. Of course, this is all optional, and must be explicitly configured, but if enabled, current inference solutions can make use of historical data to continually improve their solutions.","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Consider a single robot working in a common environment that has driven around the same area a number of times and has identified a landmark that is (probably) the same. We can automatically close the loop and use the information from the prior data to improve our current solution. This is called a multisession solve.","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"To perform a multisession solve, you need to specify that a session is part of a common environment, e.g 'lab'. A user then requests a multisession solve (manually for the moment), and this creates relationships between common landmarks. The collective information is used to produce a consensus on the shared landmarks. 
A chain of session solves is then created, and the information is propagated into the individual sessions, improving their results.","category":"page"},{"location":"concepts/multisession/#Steps-in-Multisession-Solve","page":"Multi-session/agent Solving","title":"Steps in Multisession Solve","text":"","category":"section"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"The following steps are performed by the user:","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Indicate which sessions are part of a common environment - this is done via GraffSDK when the session is created\nRequest a multisession solve","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Upon request, the solver performs the following actions:","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Updates the common existing multisession landmarks with any new information (propagation from session to common information)\nBuilds common landmarks for any new sessions or updated data\nSolves the common, multisession graph\nPropagates the common consensus result to the individual sessions\nFreezes all the session landmarks so that the session solving does not update the consensus result\nRequests session solves for all the updated sessions","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Note the current approach is well positioned to transition to the \"Federated Bayes (Junction) Tree\" multisession solving method, and will be updated accordingly in due coarse. The Federated method will allow faster multi-session solving times by avoiding the current iterated approach.","category":"page"},{"location":"concepts/multisession/#Example","page":"Multi-session/agent Solving","title":"Example","text":"","category":"section"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Consider three sessions which exist in the same, shared environment. In this environment, during each session the robot identified the same l0 landmark, as shown in the below figure. (Image: Independent Sessions)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"If we examine this in terms of the estimates of the actual landmarks, we have three independent densities (blue, green, and orange) giving measures of l0 located at (20, 0):","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"(Image: Independent densities)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Now we trigger a multisession solve. 
For each landmark that is seen in multiple sessions, we produce a common landmark (which we call a prime landmark) and link it to the session landmarks via factors - all denoted in black outline.","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"(Image: Linked landmarks)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"A multisession solve is performed, in which we produce a common estimate for each common (prime) landmark. In terms of densities, this is a single answer for the disparate information, as shown in red in the below figure (for a slightly different dataset):","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"(Image: Prime density)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"This information is then propagated back to the individual session landmarks, giving one common density for each landmark. As above, our green, blue, and orange individual densities are now all updated to match the consensus shown in black:","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"(Image: Prime density)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"The session landmarks are then frozen, and individual session solves are triggered to propagate the information back into the sessions. Until the federated upgrade is completed, the above process is iterated a few times to allow information to cross propagate through all sessions. The federated tree solution requires only a single iteration up and down the federated Bayes (Junction) tree. ","category":"page"},{"location":"concepts/multisession/#Next-Steps","page":"Multi-session/agent Solving","title":"Next Steps","text":"","category":"section"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"This provides an initial implementation for stitching data from multiple sessions, robots, and users. In the short term, we may trigger this automatically for any shared environments. Multisession solving along with other automated techniques for additional measurement discovery in data allows the system to 'dream' – i.e. reducing the large volumes of heterogeneous sensor data to succinct information.","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"In the medium term we will extend this functionality to operate in the Bayes tree, which we call 'federated solving', so that we perform the operation using cached results of subtrees. ","category":"page"},{"location":"concepts/dataassociation/#data_multihypo","page":"Multi-Modal/Hypothesis","title":"Data Association and Hypotheses","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Ambiguous data and processing often produce complicated data association situations. In SLAM, loop-closures are a major source of concern when developing autonomous subsystems or behaviors. 
To illustrate this point, consider the two scenarios depicted below:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"
      ","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"In conventional parametric Gaussian-only systems an incorrect loop-closure can occur, resulting in highly unstable numerical solutions. The mm-iSAM algorithm was conceived to directly address these (and other related) issues by changing the fundamental manner in which the statistical inference is performed.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"The data association problem applies well beyond just loop-closures including (but not limited to) navigation-affordance matching and discrepancy detection, and indicates the versatility of the IncrementalInference.jl standardized multihypo interface. Note that much more is possible, however, the so-called single-fraction multihypo approach already yields significant benefits and simplicity.","category":"page"},{"location":"concepts/dataassociation/#section_multihypo","page":"Multi-Modal/Hypothesis","title":"Multihypothesis","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Consider for example a regular three variable factor [:pose;:landmark;:calib] that due to some decision has a triple association uncertainty about the middle variable. This fractional certainty can easily be modelled via:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"addFactor!(fg, [:p10, :l1_a,:l1_b,:l1_c, :c], PoseLandmCalib, multihypo=[1; 0.6;0.3;0.1; 1])","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Therefore, the user can \"partition\" certainty about one variable using any arbitrary n-ary factor. The 100% certain variables are indicated as 1, while the remaining uncertainties regarding the uncertain data association decision are grouped as positive fractions that sum to 1. In this example, the values 0.6,0.3,0.1 represent the confidence about the association between :p10 and either of :l1_a,:l1_b,:l1_c.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"A more classical binary multihypothesis example is illustated in the multimodal (non-Gaussian) factor graph below:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"
      ","category":"page"},{"location":"concepts/dataassociation/#Mixture-Models","page":"Multi-Modal/Hypothesis","title":"Mixture Models","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Mixture is a different kind of multi-modal modeling where different hypotheses of the measurement itself are unknown. It is possible to also model uncertain data associations as a Mixture(Prior,...) but this is a feature of factor graph modeling something different than data association uncertainty in n-ary factors: e.g. it is possible to use Mixture together with multihypo= and be sure to take the time to understand the different and how these concepts interact. The Caesar.jl solution is more general than simply allocating different mixtures to different association decisions. All these elements together can create quite the multi-modal soup. A practical example from SLAM is a loop-closure where a robot observes an object similar to one previously seen. The measurement observation is one thing (can maybe be a Mixture) and the association of this \"measurement\" with this or that variable is a multihypothesis selection.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"See the familiar RobotFourDoor.jl as example as a highly simplified case using priors where these elements effectively all the same thing. Again, Mixture is something different than multihypo= and the two can be used together.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"A mixture can be created from any existing prior or relative likelihood factor, for example:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"mlr = Mixture(LinearRelative, \n (correlator=AliasingScalarSampler(...), naive=Normal(0.5,5), lucky=Uniform(0,10)),\n [0.5;0.4;0.1])\n\naddFactor!(fg, [:x0;:x1], mlr)","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"See a example with Defining A Mixture Relative on ContinuousScalar for more details.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Mixture","category":"page"},{"location":"concepts/dataassociation/#IncrementalInference.Mixture","page":"Multi-Modal/Hypothesis","title":"IncrementalInference.Mixture","text":"struct Mixture{N, F<:AbstractFactor, S, T<:Tuple} <: AbstractFactor\n\nA Mixture object for use with either a <: AbstractPrior or <: AbstractRelative.\n\nNotes\n\nThe internal data representation is a ::NamedTuple, which allows total type-stability for all component types.\nVarious construction helpers can accept a variety of inputs, including <: AbstractArray and Tuple.\nN is the number of components used to make the mixture, so two bumps from two Normal components means N=2.\n\nDevNotes\n\nFIXME swap API order so Mixture of distibutions works like a distribtion, see Caesar.jl #808\nShould not have field mechanics.\nTODO on sampling see #1099 and #1094 and #1069 \n\nExample\n\n# prior factor\nmsp = Mixture(Prior, \n [Normal(0,0.1), Uniform(-pi/1,pi/2)],\n [0.5;0.5])\n\naddFactor!(fg, [:head], msp, tags=[:MAGNETOMETER;])\n\n# Or relative\nmlr = Mixture(LinearRelative, \n (correlator=AliasingScalarSampler(...), 
naive=Normal(0.5,5), lucky=Uniform(0,10)),\n [0.5;0.4;0.1])\n\naddFactor!(fg, [:x0;:x1], mlr)\n\n\n\n\n\n","category":"type"},{"location":"concepts/dataassociation/#Raw-Correlator-Probability-(Matched-Filter)","page":"Multi-Modal/Hypothesis","title":"Raw Correlator Probability (Matched Filter)","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Realistic measurement processes are based on physical process observations such as wave function interferometry or matched filtering correlation. This style of measurement is common in RADAR and SONAR systems, and can be directly incorporated in Caesar.jl since the measurement likelihood models need not be parametric. Here the raw correlator output from a sensor measurement can be directly modelled and included as part of the factor's algebraic likelihood probability function:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"# Building a samplable likelihood, using softmax to convert intensity-energy into a pseudo-probability\nrangeLikeli = AliasingScalarSampler(rangeIndex, Flux.softmax(correlatorIntensity))\n\n# or alternatively with existing samples similar to what a particle filter would have done\nrangeLikeli = manikde!(Euclid{1}, probPoints)\n\n# add the relative algebra, and remember you can construct your own highly non-linear factor\nrangeFct = Pose2Point2Range(rangeLikeli)\n\naddFactor!(fg, [:x8, :beacon_8], rangeFct)","category":"page"},{"location":"concepts/dataassociation/#Various-SamplableBelief-Distribution-Types","page":"Multi-Modal/Hypothesis","title":"Various SamplableBelief Distribution Types","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Also recognize that other features like multihypo= and Mixture can readily be combined with objects like the rangeFct shown above. These tricks are all possible due to the multiple dispatch magic of JuliaLang; more explicitly, the following code will all return true:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"IIF.AliasingScalarSampler <: IIF.SamplableBelief\nIIF.Mixture <: IIF.SamplableBelief\nKDE.BallTreeDensity <: IIF.SamplableBelief\nDistributions.Rayleigh <: IIF.SamplableBelief\nDistributions.Uniform <: IIF.SamplableBelief\nDistributions.MvNormal <: IIF.SamplableBelief","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"One of the more exotic examples is to natively represent Synthetic Aperture Sonar (SAS) as a deeply non-Gaussian factor in the factor graph. See Synthetic Aperture Sonar SLAM. Also see the full AUV stack using a single reference beacon and Towards Real-Time Underwater Acoustic Navigation.","category":"page"},{"location":"concepts/dataassociation/#Null-Hypothesis","page":"Multi-Modal/Hypothesis","title":"Null Hypothesis","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Sometimes there is basic uncertainty about whether a measurement is at all valid. Note that the above examples (multihypo and Mixture) still accept that a certain association definitely exists. 
A null hypothesis models the situation in which a factor might be completely bogus, in which case it should be ignored. The underlying mechanics of this approach are not entirely straightforward since removing one or more factors essentially changes the structure of the graph. That said, IncrementalInference.jl employs a reasonable stand-in solution that does not require changing the graph structure and can simply be included for any factor.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"addFactor!(fg, [:x7;:l13], Pose2Point2Range(...), nullhypo=0.1)","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"This keyword indicates to the solver that there is a 10% chance that this factor is not valid.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"note: Note\nAn entirely separate page is reserved for incorporating Flux neural network models into Caesar.jl as highly plastic and trainable (i.e. learnable) factors.","category":"page"},{"location":"concepts/stash_and_cache/#section_stash_and_cache","page":"Caching and Stashing","title":"EXPL Stash and Cache","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"warning: Warning\nStashing and Caching are new EXPERIMENTAL features (22Q2) and is not yet be fully integrated throughout the overall system. See Notes below for specific considerations.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Caching aims to improve in-place, memory, and communication bandwidth requirements for factor calculations and serialization.","category":"page"},{"location":"concepts/stash_and_cache/#Preamble-Cache","page":"Caching and Stashing","title":"Preamble Cache","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"The Caesar.jl framework has a standardized feature to preload or cache important data for factor calculations the first time a factor is created/loaded into a graph (i.e. during addFactor!). The preambleCache function runs just once before any computations are performed. A default dispatch for preambleCache returns nothing as a cache object that is later used in several places throughout the code.","category":"page"},{"location":"concepts/stash_and_cache/#Overriding-preambleCache","page":"Caching and Stashing","title":"Overriding preambleCache","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"A user may choose to override the dispatch for a particular factor's preambleCache and thereby return a more intricate/optimized cache object for later use. Any object can be returned, but we strongly recommend you return a type-stable object for best performance in production. Returning non-concrete types is allowed and likely faster for development, just remember to check type-stability before calling it a day.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Whatever object is returned by the preambleCache(dfg, vars, fnc) function is referenced and duplicated within the solver code. 
By design, use of the cache is expected to occur predominantly during factor sampling, factor residual calculations, and deserialization (i.e. unpacking) of previously persisted graph objects.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"The preambleCache function has access to the parent factor graph object as well as an ordered list of the DFGVariables attached to said factor. The user-created factor type object is passed as the third argument. The combination of these three objects allows the user much freedom with regard to where and how large data might be stored in the system.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"preambleCache","category":"page"},{"location":"concepts/stash_and_cache/#IncrementalInference.preambleCache","page":"Caching and Stashing","title":"IncrementalInference.preambleCache","text":"preambleCache(dfg, vars, usrfnc)\n\n\nOverload for specific factor preamble usage.\n\nNotes:\n\nSee https://github.com/JuliaRobotics/IncrementalInference.jl/issues/1462\n\nDevNotes\n\nIntegrate into CalcFactor\nAdd threading\n\nExample:\n\nimport IncrementalInference: preambleCache\n\npreambleCache(dfg::AbstractDFG, vars::AbstractVector{<:DFGVariable}, usrfnc::MyFactor) = MyFactorCache(randn(10))\n\n# continue regular use, e.g.\nmfc = MyFactor(...)\naddFactor!(fg, [:a;:b], mfc)\n# ... \n\n\n\n\n\n","category":"function"},{"location":"concepts/stash_and_cache/#In-Place-vs.-In-Line-Cache","page":"Caching and Stashing","title":"In-Place vs. In-Line Cache","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Depending on your particular bent, two different cache models might be more appealing. The design of preambleCache does not preclude either design option, and actually promotes use of either depending on the particular situation at hand. The purpose of preambleCache is to provide an opportunity for caching when working with factors in the factor graph rather than dictate one design over the other.","category":"page"},{"location":"concepts/stash_and_cache/#CalcFactor.cache::T","page":"Caching and Stashing","title":"CalcFactor.cache::T","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"One likely use of the preambleCache function is for in-place memory allocation for solver hot-loop operations. Consider for example a getSample or factor residual calculation that is memory intensive. The best way to improve performance is to remove any memory allocations during the hot-loop. For this reason the CalcFactor object has a cache::T field which will have exactly the type ::T that is returned by the user's preambleCache dispatch override. To use it in the factor getSample or residual functions, simply access the calcfactor.cache field (see the sketch further below).","category":"page"},{"location":"concepts/stash_and_cache/#Pulling-Data-from-Stores","page":"Caching and Stashing","title":"Pulling Data from Stores","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"The Caesar.jl framework supports various data store designs. Some of these data stores are likely best suited for an in-line caching design. 
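As a rough end-to-end illustration of the preambleCache and CalcFactor.cache mechanics described above, consider the following minimal sketch. The factor MyRangeFactor, its MyRangeCache struct, and the workspace size are hypothetical names invented for this example; only preambleCache, CalcFactor, getSample, and addFactor! belong to the documented interface.

using LinearAlgebra
using IncrementalInference
import IncrementalInference: preambleCache, getSample

# hypothetical cache object holding a preallocated workspace
struct MyRangeCache
  workspace::Vector{Float64}
end

# hypothetical relative factor with a samplable measurement belief Z
struct MyRangeFactor{T<:SamplableBelief} <: AbstractRelativeMinimize
  Z::T
end

# runs once per factor during addFactor! or loadDFG; the returned object
# later appears as cfo.cache in all sampling and residual calls
preambleCache(dfg::AbstractDFG, vars::AbstractVector{<:DFGVariable}, usrfnc::MyRangeFactor) = MyRangeCache(zeros(2))

# sampling depends only on the factor's own measurement model
getSample(cfo::CalcFactor{<:MyRangeFactor}) = rand(cfo.factor.Z)

# the residual hot-loop reuses the preallocated workspace, avoiding fresh allocations
function (cfo::CalcFactor{<:MyRangeFactor})(z, x1, x2)
  cfo.cache.workspace .= x2 .- x1
  return z - norm(cfo.cache.workspace)
end

The same pattern applies when unstashing: fetch any large blob data once inside the preambleCache override and keep only the lightweight, ready-to-use result in the returned cache object.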
Values can be retrieved from a data store during the preambleCache step, irrespective of where the data is stored. ","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"If the user chooses to store weird and wonderful caching links to alternative hardware via the described caching, go forth and be productive! Consider sharing enhancements back to the public repositories.","category":"page"},{"location":"concepts/stash_and_cache/#section_stash_unstash","page":"Caching and Stashing","title":"Stash Serialization","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"note: Note\nStashing uses Additional (Large) Data storage and retrieval following starved graph design considerations.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Some applications use graph factors with large memory requirements during computation. Often, it is not efficient/performant to store large data blobs directly within the graph when persisted. Caesar.jl therefore supports a concept called stashing (similar to starved graphs), where particular operationally important data is stored separately from the graph and can then be retrieved during the preambleCache step – a.k.a. unstashing.","category":"page"},{"location":"concepts/stash_and_cache/#Deserialize-only-Stash-Design-(i.e.-unstashing)","page":"Caching and Stashing","title":"Deserialize-only Stash Design (i.e. unstashing)","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Presently, we recommend following a deserialize-only design. This is where factor graphs are reconstituted from some persisted storage into computable form in memory, a.k.a. loadDFG. During the load steps, factors are added to the destination graph using the addFactor! calls, which in turn call preambleCache for each factor. ","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Therefore, factors which are persisted using the 'stash' methodology are only fully reconstructed after the preambleCache step, and the user is responsible for defining the preambleCache override for a particular factor. The desired stashed data should also already be available in said data store before the factor graph is loaded.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Caesar.jl does have factors that can use the stash design, but these are currently only available as experimental features. Specifically, see the ScatterAlignPose2 factor code.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Modifying the overall Caesar.jl code for both read and write stashing might be considered in future work but is not in the current roadmap.","category":"page"},{"location":"concepts/stash_and_cache/#stashcache_notes","page":"Caching and Stashing","title":"Notes","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Please see or open issues for specific questions not yet covered here. 
You can also reach out via Slack, or contact NavAbility.io for help.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Use caution in designing preambleCache for situations where multihypo= functionality is used. If factor memory is tied to specific variables, then the association ambiguities of multihypo situations at compute time must be considered. E.g. if you are storing images for two landmarks in two landmark variable hypotheses, then just remember that the user cache must track, during the sampling or residual calculations, which hypothesis is being used before using said data – we recommend using NamedTuple in your cache structure.\nIf using the Deserialize-Stash design, note that the appropriate data blob stores should already be attached to the destination factor graph object, else the preambleCache function will not be able to successfully access any of the getData functions you are likely to use to 'unstash' data.\nUsers can readily implement their own threading inside factor sampling and residual computations. Caching is not yet thread-safe for some internal solver side-by-side computations. Users can self-manage shared vs. separate memory for the Multithreaded Factor option, but we'd recommend reaching out to or getting involved with the threading redesign, see IIF 1094.","category":"page"},{"location":"concepts/building_graphs/#building_graphs","page":"Building Graphs","title":"Building Graphs","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Irrespective of your application - real-time robotics, batch processing of survey data, or really complex multi-hypothesis modeling - you're going to need to add factors and variables to a graph. This section discusses how to do that in Caesar.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"The following sections discuss the steps required to construct a graph and solve it:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Initializing the Factor Graph\nAdding Variables and Factors to the Graph\nSolving the Graph\nInforming the Solver About Ready Data","category":"page"},{"location":"concepts/building_graphs/#Familiar-Canonical-Factor-Graphs","page":"Building Graphs","title":"Familiar Canonical Factor Graphs","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"We start with a shortcut for quickly getting a small predefined canonical graph containing a few variables and factors: functions that generate a canonical factor graph object, useful for orientation, testing, learning, or validation. 
You can generate any of these factor graphs at any time; for example, when you quickly want to test some idea midway through building a more sophisticated fg, you might just do:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"fg_ = generateGraph_Hexagonal()","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"and then work with fg_ to try out something risky.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"note: Note\nSee the Canonical Graphs page for a more complete list of existing graph generators.","category":"page"},{"location":"concepts/building_graphs/#Building-a-new-Graph","page":"Building Graphs","title":"Building a new Graph","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"The first step is to model the data (using the most appropriate factors) among variables of interest. To start modeling, first create a distributed factor graph object:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# start with an empty factor graph object\nfg = initfg()","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"initfg","category":"page"},{"location":"concepts/building_graphs/#IncrementalInference.initfg","page":"Building Graphs","title":"IncrementalInference.initfg","text":"initfg(; ...)\ninitfg(dfg; sessionname, robotname, username, cloudgraph)\n\n\nInitialize an empty in-memory DistributedFactorGraph ::DistributedFactorGraph object.\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/#Variables","page":"Building Graphs","title":"Variables","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Variables (a.k.a. poses or states in navigation lingo) are created with the addVariable! function call.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# Add the first pose :x0\naddVariable!(fg, :x0, Pose2)\n# Add a few more poses\nfor i in 1:10\n addVariable!(fg, Symbol(\"x\",i), Pose2)\nend","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Variables contain a label, a data type (e.g. in 2D RoME.Point2 or RoME.Pose2). Note that variables are solved - i.e. they are the product, what you wish to calculate when the solver runs - so you don't provide any measurements when creating them.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addVariable!\ndeleteVariable!","category":"page"},{"location":"concepts/building_graphs/#DistributedFactorGraphs.addVariable!","page":"Building Graphs","title":"DistributedFactorGraphs.addVariable!","text":"addVariable!(dfg, variable)\n\n\nAdd a DFGVariable to a DFG.\n\n\n\n\n\naddVariable!(\n dfg,\n label,\n varTypeU;\n N,\n solvable,\n timestamp,\n nanosecondtime,\n dontmargin,\n tags,\n smalldata,\n checkduplicates,\n initsolvekeys\n)\n\n\nAdd a variable node label::Symbol to dfg::AbstractDFG, as varType<:InferenceVariable.\n\nNotes\n\nkeyword nanosecondtime is experimental and intended as the whole subsection portion – i.e. 
accurateTime = (timestamp MOD second) + Nanosecond\n\nExample\n\nfg = initfg()\naddVariable!(fg, :x0, Pose2)\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/#DistributedFactorGraphs.deleteVariable!","page":"Building Graphs","title":"DistributedFactorGraphs.deleteVariable!","text":"deleteVariable!(dfg, label)\n\n\nDelete a DFGVariable from the DFG using its label.\n\n\n\n\n\ndeleteVariable!(dfg, variable)\n\n\nDelete a referenced DFGVariable from the DFG.\n\nNotes\n\nReturns Tuple{AbstractDFGVariable, Vector{<:AbstractDFGFactor}}\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"The MM-iSAMv2 algorithm uses one of two approaches to automatically initialize variables, or can be initialized manually.","category":"page"},{"location":"concepts/building_graphs/#Factors","page":"Building Graphs","title":"Factors","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Factors are algebraic relationships between variables based on data cues such as sensor measurements. Examples of factors are absolute (pre-resolved) GPS readings (unary factors/priors) and odometry changes between pose variables. All factors encode a stochastic measurement (measurement + error), such as below, where a generic Prior belief is add to x0 (using the addFactor! call) as a normal distribution centered around [0,0,0].","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addFactor!\ndeleteFactor!","category":"page"},{"location":"concepts/building_graphs/#DistributedFactorGraphs.addFactor!","page":"Building Graphs","title":"DistributedFactorGraphs.addFactor!","text":"Add a DFGFactor to a DFG.\n\naddFactor!(dfg, factor)\n\n\n\n\n\n\naddFactor!(dfg, variables, factor)\n\n\n\n\n\n\naddFactor!(dfg, variableLabels, factor)\n\n\n\n\n\n\naddFactor!(\n dfg,\n Xi,\n usrfnc;\n multihypo,\n nullhypo,\n solvable,\n tags,\n timestamp,\n graphinit,\n suppressChecks,\n inflation,\n namestring,\n _blockRecursion\n)\n\n\nAdd factor with user defined type <:AbstractFactorto the factor graph object. Define whether the automatic initialization of variables should be performed. 
Use the order sensitive multihypo keyword argument to define if any variables are related to data association uncertainty.\n\nExperimental\n\ninflation, to better disperse kernels before convolution solve, see IIF #1051.\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/#DistributedFactorGraphs.deleteFactor!","page":"Building Graphs","title":"DistributedFactorGraphs.deleteFactor!","text":"deleteFactor!(dfg, label; suppressGetFactor)\n\n\nDelete a DFGFactor from the DFG using its label.\n\n\n\n\n\ndeleteFactor!(dfg, factor; suppressGetFactor)\n\n\nDelete the referenced DFGFactor from the DFG.\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/#Priors","page":"Building Graphs","title":"Priors","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# Add at a fixed location Prior to pin :x0 to a starting location (0,0,pi/6.0)\naddFactor!(fg, [:x0], PriorPose2( MvNormal([0; 0; pi/6.0], Matrix(Diagonal([0.1;0.1;0.05].^2)) )))","category":"page"},{"location":"concepts/building_graphs/#Factors-Between-Variables","page":"Building Graphs","title":"Factors Between Variables","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# Add odometry indicating a zigzag movement\nfor i in 1:10\n pp = Pose2Pose2(MvNormal([10.0;0; (i % 2 == 0 ? -pi/3 : pi/3)], Matrix(Diagonal([0.1;0.1;0.1].^2))))\n addFactor!(fg, [Symbol(\"x$(i-1)\"); Symbol(\"x$(i)\")], pp )\nend","category":"page"},{"location":"concepts/building_graphs/#[OPTIONAL]-Understanding-Internal-Factor-Naming-Convention","page":"Building Graphs","title":"[OPTIONAL] Understanding Internal Factor Naming Convention","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"The factor name used by Caesar is automatically generated from ","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addFactor!(fg, [:x0; :x1],...)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"will create a factor with name :x0x1f1","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Were you to add another factor between :x0 and :x1:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addFactor!(fg, [:x0; :x1],...)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"will create a second factor with the name :x0x1f2.","category":"page"},{"location":"concepts/building_graphs/#Adding-Tags","page":"Building Graphs","title":"Adding Tags","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"It is possible to add tags to variables and factors that make later graph management tasks easier, e.g.:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addVariable!(fg, :l7_3, Pose2, tags=[:APRILTAG; :LANDMARK])","category":"page"},{"location":"concepts/building_graphs/#Drawing-the-Factor-Graph","page":"Building Graphs","title":"Drawing the Factor Graph","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building 
Graphs","title":"Building Graphs","text":"Once you have a graph, you can visualize the graph as follows (beware though if the fg object is large):","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# requires `sudo apt-get install graphviz\ndrawGraph(fg, show=true)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"By setting show=true, the application evince will be called to show the fg.pdf file that was created using GraphViz. A GraphPlot.jl visualization engine is also available.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"using GraphPlot\nplotDFG(fg)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"drawGraph","category":"page"},{"location":"concepts/building_graphs/#IncrementalInference.drawGraph","page":"Building Graphs","title":"IncrementalInference.drawGraph","text":"drawGraph(fgl; viewerapp, filepath, engine, show)\n\n\nDraw and show the factor graph <:AbstractDFG via system graphviz and xdot app.\n\nNotes\n\nRequires system install on Linux of sudo apt-get install xdot\nShould not be calling outside programs.\nNeed long term solution\nDFG's toDotFile a better solution – view with xdot application.\nalso try engine={\"sfdp\",\"fdp\",\"dot\",\"twopi\",\"circo\",\"neato\"}\n\nNotes:\n\nCalls external system application xdot to read the .dot file format\ntoDot(fg,file=...); @async run(`xdot file.dot`)\n\nRelated\n\ndrawGraphCliq, drawTree, printCliqSummary, spyCliqMat\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"For more details, see the DFG docs on Drawing Graphs.","category":"page"},{"location":"concepts/building_graphs/#When-to-Instantiate-Poses-(i.e.-new-Variables-in-Factor-Graph)","page":"Building Graphs","title":"When to Instantiate Poses (i.e. new Variables in Factor Graph)","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Consider a robot traversing some area while exploring, localizing, and wanting to find strong loop-closure features for consistent mapping. The creation of new poses and landmark variables is a trade-off in computational complexity and marginalization errors made during factor graph construction. Common triggers for new poses are:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Time-based trigger (eg. new pose a second or 5 minutes if stationary)\nDistance traveled (eg. new pose every 0.5 meters)\nRotation angle (eg. new pose every 15 degrees)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Computation will progress faster if poses and landmarks are very sparse. To extract the benefit of dense reconstructions, one approach is to use the factor graph as sparse index in history about the general progression of the trajectory and use additional processing from dense sensor data for high-fidelity map reconstructions. 
Either interpolations or, better, direct reconstructions from inertial data can be used for dense reconstruction.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"For completeness, one could also re-project the most meaningful sensor measurements taken between pose epochs as though they were measured from the pose epoch. This approach essentially marginalizes the local dead reckoning drift errors into the local interpose re-projections, but helps keep the pose count low.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"In addition, see Fixed-lag Solving for manually limiting the number of fluid variables during inference to a user-desired count.","category":"page"},{"location":"concepts/building_graphs/#Which-Variables-and-Factors-to-use","page":"Building Graphs","title":"Which Variables and Factors to use","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"See the next page on available variables and factors","category":"page"},{"location":"examples/custom_factor_features/#Custom-Factor-Features","page":"Important Factor Features","title":"Custom Factor Features","text":"","category":"section"},{"location":"examples/custom_factor_features/#Contributing-back-to-the-Community","page":"Important Factor Features","title":"Contributing back to the Community","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"Consider contributing back: if you have developed variables and factors that may be useful to the community, please write up an issue in Caesar.jl or submit a PR to the relevant repo.","category":"page"},{"location":"examples/custom_factor_features/#whatiscalcfactor","page":"Important Factor Features","title":"What is CalcFactor","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"CalcFactor is part of the IIF interface to all factors. It contains metadata and other important bits of information that are useful in a wide swath of applications. As work requires more interesting features from the code base, it is likely that the cfo::CalcFactor object will contain such data. If not, please open an issue with Caesar.jl so that the necessary options may be added.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"The cfo object contains the field .factor::T which is the type of the user factor being used, e.g. myprior from the above example. That is cfo.factor::MyPrior. This is why getSample is using rand(cfo.factor.Z).","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"CalcFactor was introduced in IncrementalInference v0.20 to consolidate and standardize a variety of features that had previously been disparate and unwieldy.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"The MM-iSAMv2 algorithm relies on the Kolmogorov-Criteria as well as uncorrelated factor sampling. 
This means that when generating fresh samples for a factor, those samples should not depend on values of variables in the graph or independent volatile variables. That said, if you have a non-violating reason for using additional data in the factor sampling or residual calculation process, you can do so via the cf::CalcFactor interface.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"At present cf contains several main fields:","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"cf.factor::MyFactor the factor object as defined in the struct definition,\ncf.fullvariables, which can be used for large data blob retrieval such as used in Terrain Relative Navigation (TRN).\nAlso see Stashing and Caching\ncf.cache, which is user controlled via the preambleCache function, see the Cache Section.\ncf.manifold, for the manifold the factor operates on.\ncf._sampleIdx is the index of which computational sample is currently being calculated.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"IncrementalInference.CalcFactor","category":"page"},{"location":"examples/custom_factor_features/#IncrementalInference.CalcFactor","page":"Important Factor Features","title":"IncrementalInference.CalcFactor","text":"Residual function for MutablePose2Pose2Gaussian.\n\nRelated\n\nPose2Pose2, Pose3Pose3, InertialPose3, DynPose2Pose2, Point2Point2, VelPoint2VelPoint2\n\n\n\n\n\n","category":"type"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"tip: Tip\nMany factors already exist in IncrementalInference, RoME, and Caesar. Please see their src directories for more details.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"warning: Warning\nThe old .specialSampler framework has been replaced with the standardized ::CalcFactor interface. See http://www.github.com/JuliaRobotics/IIF.jl/issues/467 for details.","category":"page"},{"location":"examples/custom_factor_features/#Partial-Factors","page":"Important Factor Features","title":"Partial Factors","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"In some cases a factor only affects a partial set of dimensions of a variable. 
For example a magnetometer being added onto a Pose2 variable would look something like this:","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"struct MyMagnetoPrior{T<:SamplableBelief} <: AbstractPrior\n Z::T\n partial::Tuple{Int}\nend\n\n# define a helper constructor\nMyMagnetoPrior(z) = MyMagnetoPrior(z, (3,))\n\ngetSample(cfo::CalcFactor{<:MyMagnetoPrior}) = samplePoint(cfo.factor.Z)","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"Similarly for <:IIF.AbstractRelativeMinimize, and note that the Roots version currently does not support the .partial option.","category":"page"},{"location":"examples/custom_factor_features/#Factors-supporting-a-Parametric-Solution","page":"Important Factor Features","title":"Factors supporting a Parametric Solution","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"See the parametric solve section","category":"page"},{"location":"examples/custom_factor_features/#factor_serialization","page":"Important Factor Features","title":"Standardized Factor Serialization","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"To take advantage of features like DFG.saveDFG and DFG.loadDFG a user specified type should be able to serialize via JSON standards. The decision was taken to require bespoke factor types to always be converted into a JSON friendly struct which must be prefixed as type name with PackedMyPrior{T}. Similarly, the user must also overload Base.convert as follows:","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"# necessary for overloading Base.convert\nimport Base: convert\n\nstruct PackedMyPrior <: AbstractPackedFactor\n Z::String\nend\n\n# IIF provides convert methods for `SamplableBelief` types\nconvert(::Type{PackedMyPrior}, pr::MyPrior{<:SamplableBelief}) = PackedMyPrior(convert(PackedSamplableBelief, pr.Z))\nconvert(::Type{MyPrior}, pr::PackedMyPrior) = MyPrior(IIF.convert(SamplableBelief, pr.Z))","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"Now you should be able to saveDFG and loadDFG your own factor graph types to Caesar.jl / FileDFG standard .tar.gz format.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"fg = initfg()\naddVariable!(fg, :x0, ContinuousScalar)\naddFactor!(fg, [:x0], MyPrior(Normal()))\n\n# generate /tmp/myfg.tar.gz\nsaveDFG(\"/tmp/myfg\", fg)\n\n# test loading the .tar.gz (extension optional)\nfg2 = loadDFG(\"/tmp/myfg\")\n\n# list the contents\nls(fg2), lsf(fg2)\n# should see :x0 and :x0f1 listed","category":"page"},{"location":"examples/using_ros/#ros_direct","page":"ROS Middleware","title":"ROS Direct","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Since 2020, Caesar.jl has native support for ROS via the RobotOS.jl package. 
","category":"page"},{"location":"examples/using_ros/#Load-the-ROS-Environment-Variables","page":"ROS Middleware","title":"Load the ROS Environment Variables","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"The first thing to ensure is that the ROS environment variables are loaded before launching Julia, see \"1.5 Environment setup at ros.org\", something similar to:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"source /opt/ros/noetic/setup.bash","category":"page"},{"location":"examples/using_ros/#Setup-a-Catkin-Workspace","page":"ROS Middleware","title":"Setup a Catkin Workspace","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Assuming you have bespoke msg types, we suggest using a catkin workspace of choice, for example:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"mkdir -p ~/caesar_ws/src\ncd ~/caesar_ws/src\ngit clone https://github.com/pvazteixeira/caesar_ros","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Now build and configure your workspace","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"cd ~/caesar_ws\ncatkin_make\nsource devel/setup.sh","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"This last command is important, as you must have the workspace configuration in your environment when you run the julia process, so that you can import the service specifications.","category":"page"},{"location":"examples/using_ros/#RobotOS.jl-with-Correct-Python","page":"ROS Middleware","title":"RobotOS.jl with Correct Python","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"RobotOS.jl currently uses PyCall.jl to interface through the rospy system. 
After launching Julia, make sure that PyCall is using the correct Python binary on your local system.","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"# Assuming multiprocess will be used.\nusing Distributed\n# addprocs(4)\n\n# Prepare python version\nusing Pkg\nDistributed.@everywhere using Pkg\n\nDistributed.@everywhere begin\n ENV[\"PYTHON\"] = \"/usr/bin/python3\"\n Pkg.build(\"PyCall\")\nend\n\nusing PyCall\nDistributed.@everywhere using PyCall","category":"page"},{"location":"examples/using_ros/#Load-RobotOS.jl-along-with-Caesar.jl","page":"ROS Middleware","title":"Load RobotOS.jl along with Caesar.jl","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Caesar.jl has native by optional package tools relating to RobotOS.jl (leveraging Requires.jl):","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"using RobotOS\n\n@rosimport std_msgs.msg: Header\n@rosimport sensor_msgs.msg: PointCloud2\n\nrostypegen()\n\nusing Caesar\nDistributed.@everywhere using Colors, Caesar","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Colors.jl is added as a conditional requirement to get Caesar._PCL.PointCloud support (see PCL page here).","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nImports and type generation are necessary for RobotOS and Caesar to work properly.","category":"page"},{"location":"examples/using_ros/#Prepare-Any-Outer-Objects","page":"ROS Middleware","title":"Prepare Any Outer Objects","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Usually a factor graph or detectors, or some more common objects are required. For the example lets just say a basic SLAMWrapper containing a regular fg=initfg():","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"robotslam = SLAMWrapperLocal()","category":"page"},{"location":"examples/using_ros/#Example-Caesar.jl-ROS-Handler","page":"ROS Middleware","title":"Example Caesar.jl ROS Handler","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Some function will also be required to consume the ROS traffic on any particular topic, where for the example we assume extraneous data will only be fg_:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"function myHandler(msgdata, slam_::SLAMWrapperLocal)\n # show some header information\n @show \"myHandler\", msgdata[2].header.seq\n\n # do stuff\n # addVariable!(slam.dfg, ...)\n # addFactor!(slam.dfg, ...)\n #, etc.\n\n nothing\nend","category":"page"},{"location":"examples/using_ros/#Read-or-Write-Bagfile-Messages","page":"ROS Middleware","title":"Read or Write Bagfile Messages","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Assuming that you are working from a bagfile, the following code makes it easy to consume the bagfile directly. Alternatively, see RobotOS.jl for wiring up publishers and subscribers for live data. 
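For live data, a subscriber can be wired up directly with RobotOS.jl along these lines (a rough sketch only; the topic name, message type, and handler arguments here are assumptions, not part of the example above):

using RobotOS
init_node("caesar_example")

# for a live subscription the callback receives the message object directly,
# unlike the bagfile handler above which indexes msgdata[2]
function myLiveHandler(msg, slam_::SLAMWrapperLocal)
  @show msg.header.seq
  # addVariable!(slam_.dfg, ...), addFactor!(slam_.dfg, ...), etc.
  nothing
end

sub = Subscriber{sensor_msgs.msg.PointCloud2}("/zed/points", myLiveHandler, (robotslam,), queue_size=10)
spin()
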
Caesar.jl methods to consuming a bagfile are:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"# find the bagfile\nbagfile = joinpath(ENV[\"HOME\"],\"data/somedata.bag\")\n\n# open the file\nbagSubscriber = RosbagSubscriber(bagfile)\n\n# subscriber callbacks\nbagSubscriber(\"/zed/left/image_rect_color\", myHandler, robotslam)","category":"page"},{"location":"examples/using_ros/#Run-the-ROS-Loop","page":"ROS Middleware","title":"Run the ROS Loop","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Once everything is set up as you need, it's easy to loop over all the traffic in the bagfile (one message at a time):","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"maxloops = 1000\nrosloops = 0\nwhile loop!(bagSubscriber)\n # plumbing to limit the number of messages\n rosloops += 1\n if maxloops < rosloops\n @warn \"reached --msgloops limit of $rosloops\"\n break\n end\n # delay progress for whatever reason\n blockProgress(robotslam) # required to prevent duplicate solves occuring at the same time\nend","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nSee page on Synchronizing over the Graph","category":"page"},{"location":"examples/using_ros/#Write-Msgs-to-a-Bag","page":"ROS Middleware","title":"Write Msgs to a Bag","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Support is also provided for writing messages to bag files with Caesar.RosbagWriter:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"# Link with ROSbag infrastructure via rospy\nusing Pkg\nENV[\"PYTHON\"] = \"/usr/bin/python3\"\nPkg.build(\"PyCall\")\nusing PyCall\nusing RobotOS\n@rosimport std_msgs.msg: String\nrostypegen()\nusing Caesar\n\nbagwr = Caesar.RosbagWriter(\"/tmp/test.bag\")\ns = std_msgs.msg.StringMsg(\"test\")\nbagwr.write_message(\"/ch1\", s)\nbagwr.close()","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"This has been tested and use with much more complicated types such as the Caesar._PCL.PCLPointCloud2.","category":"page"},{"location":"examples/using_ros/#Additional-Notes","page":"ROS Middleware","title":"Additional Notes","text":"","category":"section"},{"location":"examples/using_ros/#ROS-Conversions,-e.g.-PCL","page":"ROS Middleware","title":"ROS Conversions, e.g. 
PCL","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"By loading RobotOS.jl, the Caesar module will also load additional functionality to convert some of the basic data types between ROS and PCL familiar types, for example PCLPointCloud2:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"wPC = Caesar._PCL.PointCloud()\nwPC2 = Caesar._PCL.PCLPointCloud2(wPC)\nrmsg = Caesar._PCL.toROSPointCloud2(wPC2);","category":"page"},{"location":"examples/using_ros/#More-Tools-for-Real-Time","page":"ROS Middleware","title":"More Tools for Real-Time","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"See tools such as ","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"ST = manageSolveTree!(robotslam.dfg, robotslam.solveSettings, dbg=false)","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"manageSolveTree!","category":"page"},{"location":"examples/using_ros/#RoME.manageSolveTree!","page":"ROS Middleware","title":"RoME.manageSolveTree!","text":"manageSolveTree!(dfg, mss; dbg, timinglog, limitfixeddown)\n\n\nAsynchronous solver manager that can run concurrently while other Tasks are modifying a common distributed factor graph object.\n\nNotes\n\nWhen adding Variables and Factors, use solvable=0 to disable the new fragments until ready for inference.\ne.g. addVariable!(fg, :x45, Pose2, solvable=0)\nThese parts of the factor graph can simply be activated for solving setSolvable!(fg, :x45, 1)\n\n\n\n\n\n","category":"function"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"for solving a factor graph while the middleware processes are modifying the graph, while documentation is being completed see the code here: https://github.com/JuliaRobotics/RoME.jl/blob/a662d45e22ae4db2b6ee20410b00b75361294545/src/Slam.jl#L175-L288","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"To stop or trigger a new solve in the SLAM manager you can just use either of these","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"stopManageSolveTree!\ntriggerSolve!","category":"page"},{"location":"examples/using_ros/#RoME.stopManageSolveTree!","page":"ROS Middleware","title":"RoME.stopManageSolveTree!","text":"stopManageSolveTree!(slam)\n\n\nStops a manageSolveTree! session. Usually up to the user to do so as a SLAM process comes to completion.\n\nRelated\n\nmanageSolveTree!\n\n\n\n\n\n","category":"function"},{"location":"examples/using_ros/#RoME.triggerSolve!","page":"ROS Middleware","title":"RoME.triggerSolve!","text":"triggerSolve!(slam)\n\n\nTrigger a factor graph solveTree!(slam.dfg,...) after clearing the solvable buffer slam.?? (assuming the manageSolveTree! 
task is already running).\n\nNotes\n\nUsed in combination with manageSolveTree!\n\n\n\n\n\n","category":"function"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nNative code for consuming rosbags also includes methods:RosbagSubscriber, loop!, getROSPyMsgTimestamp, nanosecond2datetime","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nAdditional notes about tricks that came up during development is kept in this wiki.","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nSee ongoing RobotOS.jl discussion on building a direct C++ interface and skipping PyCall.jl entirely: https://github.com/jdlangs/RobotOS.jl/issues/59","category":"page"},{"location":"examples/basic_slamedonut/#Range-only-SLAM,-Singular-–-i.e.-\"Under-Constrained\"","page":"Underconstrained Range-only","title":"Range only SLAM, Singular – i.e. \"Under-Constrained\"","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Keywords: underdetermined, under-constrained, range-only, singular","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"This tutorial describes a range-only system where there are always more variable dimensions than range measurements made. The error distribution over ranges could be nearly anything, but are restricted to Gaussian-only in this example to illustrate an alternative point – other examples show inference results where highly non-Gaussian error distributions are used.","category":"page"},{"location":"examples/basic_slamedonut/#Presentation-Style-Discussion","page":"Underconstrained Range-only","title":"Presentation Style Discussion","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"A presentation discussion of this example is available here:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"\n

      Towards Real-Time Non-Gaussian SLAM from Dehann on Vimeo.

      ","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"A script to recreate this example is provided in RoME/examples here. This singular range-only illustration:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"\n

      Multi-modal iSAM range and distance only example from Dehann on Vimeo.

      ","category":"page"},{"location":"examples/basic_slamedonut/#Quick-Install","page":"Underconstrained Range-only","title":"Quick Install","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"If you already have Julia 1.0 or above, alternatively see complete installation instructions here:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"julia> ]\n(v1.0) pkg> add RoME, Distributed\n(v1.0) pkg> add RoMEPlotting","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The Julia REPL/console is sufficient for this example (copy-paste from this page). Note that more involved work in Julia is simplified by using the Juno IDE.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Note A recent test (May 2019, IIF v0.6.0) showed a possible bug was introduced with one of the solver upgrades. THe figures shown on this example page are still, however, valid. Previous versions of the solver, such as IncrementalInference v0.4.x and v0.5.x, should still work as expected. Follow progress on issue 335 here as bug is being resolved. Previous versions of the solver can be installed with the package manager, for example: (v1.0) pkg> add IncrementalInference@v0.5.7. Please comment for further details.","category":"page"},{"location":"examples/basic_slamedonut/#Loading-The-Data","page":"Underconstrained Range-only","title":"Loading The Data","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Starting a Juno IDE or Julia REPL session, the ground truth positions for vehicle positions GTp and landmark positions GTl can be loaded into memory directly with these values:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"GTp = Dict{Symbol, Vector{Float64}}()\nGTp[:l100] = [0.0;0]\nGTp[:l101] = [50.0;0]\nGTp[:l102] = [100.0;0]\nGTp[:l103] = [100.0;50.0]\nGTp[:l104] = [100.0;100.0]\nGTp[:l105] = [50.0;100.0]\nGTp[:l106] = [0.0;100.0]\nGTp[:l107] = [0.0;50.0]\nGTp[:l108] = [0.0;-50.0]\nGTp[:l109] = [0.0;-100.0]\nGTp[:l110] = [50.0;-100.0]\nGTp[:l111] = [100.0;-100.0]\nGTp[:l112] = [100.0;-50.0]\n\nGTl = Dict{Symbol, Vector{Float64}}()\nGTl[:l1] = [10.0;30]\nGTl[:l2] = [30.0;-30]\nGTl[:l3] = [80.0;40]\nGTl[:l4] = [120.0;-50]","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE 1. that by using location indicators :l1, :l2, ... or :l100, :l101, ... is of practical benefit when visualizing with existing RoMEPlotting functions.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE 2. 
Landmarks must be in range before range measurements can be made to them.","category":"page"},{"location":"examples/basic_slamedonut/#Creating-the-Factor-Graph-with-Point2","page":"Underconstrained Range-only","title":"Creating the Factor Graph with Point2","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The first step is to load the required modules, and in our case we will add a few Julia processes to help with the compute later on. ","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# add more julia processes\nusing Distributed\nnprocs() < 4 ? addprocs(4-nprocs()) : nothing\n\n# tell Julia that you want to use these modules/namespaces\nusing RoME","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE Julia uses just-in-time compiling (unless pre-compiled), therefore the first call to a new function on a Julia process will be slow, but all following calls to the same functions will be as fast as the statically compiled code.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"This example exclusively uses Point2 variable node types, which have dimension 2 and represent [x, y] position estimates in the world frame.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Next construct the factor graph containing the first pose :l100 (without any knowledge of where it is) and three measured beacons/landmarks :l1,:l2,:l3 – with prior location knowledge for :l1 and :l2:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# create the factor graph object\nfg = initfg()\n\n# first pose with no initial estimate\naddVariable!(fg, :l100, Point2)\n\n# add three landmarks\naddVariable!(fg, :l1, Point2)\naddVariable!(fg, :l2, Point2)\naddVariable!(fg, :l3, Point2)\n\n# and put priors on :l1 and :l2\naddFactor!(fg, [:l1;], PriorPoint2(MvNormal(GTl[:l1], diagm(ones(2)))) )\naddFactor!(fg, [:l2;], PriorPoint2(MvNormal(GTl[:l2], diagm(ones(2)))) )","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The PriorPoint2 is assumed to be a multivariate normal distribution of covariance diagm(ones(2)). 
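Any other SamplableBelief could have been used in place of the MvNormal; purely for illustration (these covariance values are made up and this line is not part of the tutorial graph):

# an alternative, correlated Gaussian prior object for a Point2 variable
p = PriorPoint2(MvNormal([10.0; 30.0], [4.0 0.5; 0.5 4.0]))
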
Note the API PriorPoint2(::T) where T <: SamplableBelief = PriorPoint2{T} to accept distribution objects, discussed further in subsection Various SamplableBelief Distribution Types.","category":"page"},{"location":"examples/basic_slamedonut/#Adding-Range-Measurements-Between-Variables","page":"Underconstrained Range-only","title":"Adding Range Measurements Between Variables","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Next we connect the three range measurements from the vehicle location :l100 to the three beacons, respectively – and consider that the range measurements are completely relative between the vehicle and beacon position estimates:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# first range measurement\nrhoZ1 = norm(GTl[:l1]-GTp[:l100])\nppr = Point2Point2Range( Normal(rhoZ1, 2) )\naddFactor!(fg, [:l100;:l1], ppr)\n\n# second range measurement\nrhoZ2 = norm(GTl[:l2]-GTp[:l100])\nppr = Point2Point2Range( Normal(rhoZ2, 3.0) )\naddFactor!(fg, [:l100; :l2], ppr)\n\n# third range measurement\nrhoZ3 = norm(GTl[:l3]-GTp[:l100])\nppr = Point2Point2Range( Normal(rhoZ3, 3.0) )\naddFactor!(fg, [:l100; :l3], ppr)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"A ranging measurement standard deviation of 2.0 or 3.0 is used here, under a Gaussian measurement assumption. Again, any distribution could have been used. The factor graph should look as follows:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"drawGraph(fg) # show the factor graph","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: rangesonlyfirstfg)","category":"page"},{"location":"examples/basic_slamedonut/#Inference-and-Visualizations","page":"Underconstrained Range-only","title":"Inference and Visualizations","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"At this point we can call the solver and start interpreting the first results:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"tree = solveTree!(fg)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The factor graph figure above showed the structure between variables and factors. In order to see the numerical values contained in the factor graph, a set of tools is provided by the RoMEPlotting and KernelDensityEstimatePlotting packages. 
For more details, please see the dedicated visualization discussion here.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"First look at the two landmark positions :l1, :l2 at (10.0,30),(30.0,-30) respectively.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"using RoMEPlotting\n\nplotKDE(fg, [:l1;:l2], dims=[1;2])","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl1_2)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Similarly, the belief estimate for the first vehicle position :l100 is bi-modal, due to the intersection of two range measurements:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"plotKDE(fg, :l100, dims=[1;2], levels=6)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl100)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"An alternative plotting interface can also be used, that shows a histogram of desired elements instead:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"drawLandms(fg, from=1, to=101, contour=false, drawhist=true)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testlall)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Notice the ring of particles which represents the belief on the third beacon/landmark :l3, which was not constrained by a prior factor. Instead, the belief over the position of :l3 is being estimated simultaneous to estimating the vehicle position :l100.","category":"page"},{"location":"examples/basic_slamedonut/#Implicit-Growth-and-Decay-of-Modes-(i.e.-Hypotheses)","page":"Underconstrained Range-only","title":"Implicit Growth and Decay of Modes (i.e. Hypotheses)","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Next consider the vehicle moving a distance of 50 units–-and by design the direction of travel is not known–-to the next true position. The video above gives away the vehicle position with the cyan line, showing travel in the shape of a lower case 'e'. 
The following function handles (pseudo odometry) factors as range-only between positions and range-only measurement factors to beacons as the vehice travels.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"function vehicle_drives_to!(fgl::G, pos_sym::Symbol, GTp::Dict, GTl::Dict; measurelimit::R=150.0) where {G <: AbstractDFG, R <: Real}\n currvar = union(ls(fgl)...)\n prev_sym = Symbol(\"l$(maximum(Int[parse(Int,string(currvar[i])[2:end]) for i in 2:length(currvar)]))\")\n if !(pos_sym in currvar)\n println(\"Adding variable vertex $pos_sym, not yet in fgl<:AbstractDFG.\")\n addVariable!(fgl, pos_sym, Point2)\n @show rho = norm(GTp[prev_sym] - GTp[pos_sym])\n ppr = Point2Point2Range( Normal(rho, 3.0) )\n addFactor!(fgl, [prev_sym;pos_sym], ppr)\n else\n @warn \"Variable node $pos_sym already in the factor graph.\"\n end\n beacons = keys(GTl)\n for ll in beacons\n rho = norm(GTl[ll] - GTp[pos_sym])\n # Check for feasible measurements: vehicle within 150 units from the beacons/landmarks\n if rho < measurelimit\n ppr = Point2Point2Range( Normal(rho, 3.0) )\n if !(ll in currvar)\n println(\"Adding variable vertex $ll, not yet in fgl<:AbstractDFG.\")\n addVariable!(fgl, ll, Point2)\n end\n addFactor!(fgl, [pos_sym;ll], ppr)\n end\n end\n nothing\nend","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"After pasting (or running) this function in Julia, a new member definition vehicle_drives_to! can be used line any other function. Julia will handle the just-in-time compiling for the type specific function required and cach the static code for repeat executions.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE The exclamation mark at the end of the function name has no syntactic significance in Julia, since the full UTF8 character set is available for functions or variables. 
Instead, the exclamation serves as a Julia community convention to tell the caller that this function will modify the contents of at least some of the variables being passed into it – in this case the factor graph fg will be modified.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Now the actual driving event can be added to the factor graph:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"#drive to location :l101, then :l102\nvehicle_drives_to!(fg, :l101, GTp, GTl)\nvehicle_drives_to!(fg, :l102, GTp, GTl)\n\n# see the graph\ndrawGraph(fg, engine=\"neato\")","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE The distance traveled could be any combination of accrued direction and speeds, however, a straight line Gaussian error model is used to keep the visual presentation of this example as simple as possible.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The marginal posterior estimates are found by repeating inference over the factor graph, followed drawing all vehicle locations as a contour map:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# solve and show message passing on Bayes (Junction) tree\ngetSolverParams(fg).drawtree=true\ngetSolverParams(fg).showtree=true\ntree = solveTree!(fg)\n\n# draw all vehicle locations\npl = plotKDE(fg, [Symbol(\"l$(100+i)\") for i in 0:2], dims=[1;2])\n# Gadfly.draw(PDF(\"/tmp/testL100_102.pdf\", 20cm, 10cm),pl) # for storing image to disk\n\npl = plotKDE(fg, [:l3;:l4], dims=[1;2], levels=4)\n# Gadfly.draw(PNG(\"/tmp/testL3_4.png\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Notice how the vehicle positions have two hypotheses, one left to right and one diagonal right to bottom left – both are valid solutions!","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl100_102)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The two \"free\" beacons/landmarks :l3,:l4 still have several modes each, implying insufficient data to constrain either to a strong unimodal belief.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl3_4)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"\nvehicle_drives_to!(fg, :l103, GTp, GTl)\nvehicle_drives_to!(fg, :l104, GTp, GTl)\n\ntree = solveTree!(fg)\n\npl = plotKDE(fg, [Symbol(\"l$(100+i)\") for i in 0:4], dims=[1;2])\n# Gadfly.draw(PDF(\"/tmp/testL100_104.pdf\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Moving up to position :l104 still shows strong multiodality in the vehicle position 
estimates:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl100_105)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"vehicle_drives_to!(fg, :l105, GTp, GTl)\nvehicle_drives_to!(fg, :l106, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\nvehicle_drives_to!(fg, :l107, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\nvehicle_drives_to!(fg, :l108, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\npl = plotKDE(fg, [Symbol(\"l$(100+i)\") for i in 2:8], dims=[1;2], levels=6)\n# Gadfly.draw(PDF(\"/tmp/testL103_108.pdf\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Next we see a strong return to a single dominant mode in all vehicle position estimates, owing to the increased measurements to beacons/landmarks as well as more unimodal estimates in :l3, :l4 beacon/landmark positions.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"vehicle_drives_to!(fg, :l109, GTp, GTl)\nvehicle_drives_to!(fg, :l110, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\nvehicle_drives_to!(fg, :l111, GTp, GTl)\nvehicle_drives_to!(fg, :l112, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\npl = plotKDE(fg, [Symbol(\"l$(100+i)\") for i in 7:12], dims=[1;2])\n# Gadfly.draw(PDF(\"/tmp/testL106_112.pdf\", 20cm, 10cm),pl)\n\npl = plotKDE(fg, [:l1;:l2;:l3;:l4], dims=[1;2], levels=4)\n# Gadfly.draw(PDF(\"/tmp/testL1234.pdf\", 20cm, 10cm),pl)\n\npl = drawLandms(fg, from=100)\n# Gadfly.draw(PDF(\"/tmp/testLocsAll.pdf\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Several location belief estimates exhibit multimodality as the trajectory progresses (not shown), but collapses and finally collapses to a stable set of dominant position estimates.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl106_112)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Landmark estimates are also stable at one estimate:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl1234)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"In addition, the SLAM 2D landmark visualization can be re-used to plot more information at once:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# pl = drawLandms(fg, from=100, to=200)\n# Gadfly.draw(PDF(\"/tmp/testLocsAll.pdf\", 20cm, 10cm),pl)\n\npl = drawLandms(fg)\n# Gadfly.draw(PDF(\"/tmp/testAll.pdf\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testall)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"This example used the default of N=200 
particles per marginal belief. By increasing the number to N=300 throughout the test many more modes and interesting features can be explored, and we refer the reader to an alternative and longer discussion on the same example, in Chapter 6 here.","category":"page"},{"location":"install_viz/#Install-Visualization-Tools","page":"Installing Viz","title":"Install Visualization Tools","text":"","category":"section"},{"location":"install_viz/#2D/3D-Plotting,-Arena.jl","page":"Installing Viz","title":"2D/3D Plotting, Arena.jl","text":"","category":"section"},{"location":"install_viz/","page":"Installing Viz","title":"Installing Viz","text":"pkg> add Arena","category":"page"},{"location":"install_viz/#2D-Plotting,-RoMEPlotting.jl","page":"Installing Viz","title":"2D Plotting, RoMEPlotting.jl","text":"","category":"section"},{"location":"install_viz/","page":"Installing Viz","title":"Installing Viz","text":"note: Note\n24Q1: Plotting is being consolidated into Arena.jl and RoMEPlotting.jl will become obsolete.","category":"page"},{"location":"install_viz/","page":"Installing Viz","title":"Installing Viz","text":"RoMEPlotting.jl (2D) and Arena.jl (3D) are optional visualization packages:","category":"page"},{"location":"install_viz/","page":"Installing Viz","title":"Installing Viz","text":"pkg> add RoMEPlotting","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#fixedlag_solving","page":"Fixed-Lag Solving 2D","title":"Hexagonal 2D with Fixed-Lag Solving","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"note: Note\nThis feature has recently been updated and the documentation below needs to be updated. The new interface is greatly simplified from the example below. The results presented below are also out of date, new performance figures are expected to be faster (2Q2020). ","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"This example provides an overview of how to enable it and the benefits of using fixed-lag solving. The objective is to provide a near-constant solve time for ever-growing graphs by only recalculating the most recent portion. Think of this as a placeholder, as we develop the solution this tutorial will be updated to demonstrate how that is achieved.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Example-Code","page":"Fixed-Lag Solving 2D","title":"Example Code","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"The complete code for this example can be found in the fixed-lag branch of RoME: Hexagonal Fixed-Lag Example.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Introduction","page":"Fixed-Lag Solving 2D","title":"Introduction","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Fixed-lag solving is enabled when creating the factor-graph. Users provide a window–-the quasi fixed-lag constant (QFL)–-which defines how many of the most-recent variables should be calculated. Any other variables are 'frozen.' 
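As the note above mentions, the interface has since been simplified; in more recent versions quasi fixed-lag solving is typically switched on through the solver parameters (a short sketch using the same isfixedlag and qfl settings that appear later in this example):

fg = initfg()
getSolverParams(fg).isfixedlag = true   # enable quasi fixed-lag operation
getSolverParams(fg).qfl = 30            # keep roughly the 30 most recent variables fluid
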
The objective of this example is to explore providing a near-constant solve time for ever-growing graphs by only recalculating the most recent portion.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Example-Overview","page":"Fixed-Lag Solving 2D","title":"Example Overview","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"In the example, the basic Hexagonal 2D is grown to solve 200 variables. The original example remains the same, i.e., a vehicle is driving around in a hexagon and seeing the same bearing+range landmark as it crosses the starting point. At every 20th variable, a solve is invoked. Rather than use solveTree!(fg), the solve is performed in parts (construction of Bayes tree, solving the graph) to get performance statistics as the graph grows.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"numVariables = 200\nsolveEveryNVariables = 20\nlagLength = 30\n\n# Standard Hexagonal example for totalIterations - solve every iterationsPerSolve iterations.\nfunction runHexagonalExample(fg::G, totalIterations::Int, iterationsPerSolve::Int)::DataFrame where {G <: AbstractDFG}\n # Add the first pose :x0\n addVariable!(fg, :x0, Pose2)\n\n # dummy tree used later for incremental updates\n tree = wipeBuildNewTree!(fg)\n\n # Add at a fixed location PriorPose2 to pin :x0 to a starting location\n addFactor!(fg, [:x0], PriorPose2(MvNormal(zeros(3), 0.01*Matrix{Float64}(LinearAlgebra.I, 3,3))))\n\n # Add a landmark l1\n addVariable!(fg, :l1, Point2, tags=[:LANDMARK])\n\n # Drive around in a hexagon a number of times\n solveTimes = DataFrame(GraphSize = [], TimeBuildBayesTree = [], TimeSolveGraph = [])\n for i in 0:totalIterations\n psym = Symbol(\"x$i\")\n nsym = Symbol(\"x$(i+1)\")\n @info \"Adding pose $nsym...\"\n addVariable!(fg, nsym, Pose2)\n pp = Pose2Pose2(MvNormal([10.0;0;pi/3], Matrix(Diagonal( [0.1;0.1;0.1].^2 ) )))\n @info \"Adding odometry factor between $psym -> $nsym...\"\n addFactor!(fg, [psym;nsym], pp )\n\n if i % 6 == 0\n @info \"Creating factor between $psym and l1...\"\n p2br = Pose2Point2BearingRange(Normal(0,0.1),Normal(20.0,1.0))\n addFactor!(fg, [psym; :l1], p2br)\n end\n if i % iterationsPerSolve == 0 && i != 0\n @info \"Performing inference!\"\n if getSolverParams(fg).isfixedlag\n @info \"Quasi fixed-lag is enabled (a feature currently in testing)!\"\n fifoFreeze!(fg)\n end\n tInfer = @timed tree = solveTree!(fg, tree)\n graphSize = length([ls(fg)[1]..., ls(fg)[2]...])\n push!(solveTimes, (graphSize, tInfer[2], tInfer[2]))\n end\n end\n return solveTimes\nend","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Two cases are set up:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"One solving the full graph every time a solve is performed:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"# start with an empty factor graph object\nfg = initfg()\n# DO NOT enable fixed-lag operation\nsolverTimesForBatch = runHexagonalExample(fg, numVariables, solveEveryNVariables)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"The 
other enabling fixed-lag with a window of 20 variables:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"fgFixedLag = initfg()\nfgFixedLag.solverParams.isfixedlag = true\nfgFixedLag.solverParams.qfl = lagLength\n\nsolverTimesFixedLag = runHexagonalExample(fgFixedLag, numVariables, solveEveryNVariables)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"The resultant path of the robot can be seen by using RoMEPlotting and is drawn if the visualization lines are uncommented:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"#### Visualization\n\n# Plot the many iterations to see that it succeeded.\n# Batch\n# drawPosesLandms(fg)\n\n# Fixed lag\n# drawPosesLandms(fgFixedLag)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Lastly, the timing results of both scenarios are merged into a single DataFrame table, exported to CSV, and a summary graph is shown using GadFly.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"using Gadfly\nusing Colors\nusing CSV\n\n# Make a clean dataset\nrename!(solverTimesForBatch, :TimeBuildBayesTree => :Batch_BayedBuild, :TimeSolveGraph => :Batch_SolveGraph);\nrename!(solverTimesFixedLag, :TimeBuildBayesTree => :FixedLag_BayedBuild, :TimeSolveGraph => :FixedLag_SolveGraph);\ntimingMerged = DataFrames.join(solverTimesForBatch, solverTimesFixedLag, on=:GraphSize)\nCSV.write(\"timing_comparison.csv\", timingMerged)\n\nPP = []\npush!(PP, Gadfly.layer(x=timingMerged[:GraphSize], y=timingMerged[:FixedLag_SolveGraph], Geom.path, Theme(default_color=colorant\"green\"))[1]);\npush!(PP, Gadfly.layer(x=timingMerged[:GraphSize], y=timingMerged[:Batch_SolveGraph], Geom.path, Theme(default_color=colorant\"magenta\"))[1]);\n\nplt = Gadfly.plot(PP...,\n Guide.title(\"Solving Time vs. Iteration for Fixed-Lag Operation\"),\n Guide.xlabel(\"Solving Iteration\"),\n Guide.ylabel(\"Solving Time (seconds)\"),\n Guide.manual_color_key(\"Legend\", [\"fixed\", \"batch\"], [\"green\", \"magenta\"]))\nGadfly.draw(PNG(\"results_comparison.png\", 12cm, 15cm), plt)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Results","page":"Fixed-Lag Solving 2D","title":"Results","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"warning: Warning\nNote these results are out of date, much improved performance is possible and work is in progress to improve the documentation around this feature.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Preliminary results for the comparison can be seen below. However, this is just a start and we need to perform more testing. At the moment we are working on providing consistent results and further improving performance/flattening the fixed-lag time. It should be noted that the below graph is not to demonstrate the absolute solve time, but rather the relative behavior of full-graph solve vs. 
fixed-lag.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"(Image: Timing comparison of full solve vs. fixed-lag)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"NOTE Work is underway (aka \"Project Tree House\") to reduce overhead computations that result in poorer fixed-lag solving times. We expect the fixed-lag performance to improve in the coming months (Written Nov 2018). Please file issues if a deeper discussion is required.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Additional-Example","page":"Fixed-Lag Solving 2D","title":"Additional Example","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Work In Progress, but In the mean time see the following examples:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"https://github.com/JuliaRobotics/Caesar.jl/blob/master/examples/wheeled/racecar/apriltagandzed_slam.jl","category":"page"},{"location":"concepts/solving_graphs/#solving_graphs","page":"Solving Graphs","title":"Solving Graphs","text":"","category":"section"},{"location":"concepts/solving_graphs/#Non-parametric-Batch-Solve","page":"Solving Graphs","title":"Non-parametric Batch Solve","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"When you have built the graph, you can call the solver to perform inference with the following:","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"# Perform inference\ntree = solveTree!(fg) # or solveGraph!","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"The returned Bayes (Junction) tree object is described in more detail on a dedicated documentation page, while smt and hist return values most closely relate to development and debug outputs which can be ignored during general use. 
Should an error occur during the solve, the exception information is easily accessible in the smt object (as well as file logs which default to /tmp/caesar/).","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"solveTree!","category":"page"},{"location":"concepts/solving_graphs/#IncrementalInference.solveTree!","page":"Solving Graphs","title":"IncrementalInference.solveTree!","text":"solveTree!(dfgl; ...)\nsolveTree!(\n dfgl,\n oldtree;\n timeout,\n storeOld,\n verbose,\n verbosefid,\n delaycliqs,\n recordcliqs,\n limititercliqs,\n injectDelayBefore,\n skipcliqids,\n eliminationOrder,\n eliminationConstraints,\n smtasks,\n dotreedraw,\n runtaskmonitor,\n algorithm,\n solveKey,\n multithread\n)\n\n\nPerform inference over the Bayes tree according to opt::SolverParams and keyword arguments.\n\nNotes\n\nAliased with solveGraph!\nVariety of options, including fixed-lag solving – see getSolverParams(fg) for details.\nSee online Documentation for more details: https://juliarobotics.org/Caesar.jl/latest/\nLatest result always stored in solvekey=:default.\nExperimental storeOld::Bool=true will duplicate the current result as supersolve :default_k.\nBased on solvable==1 assumption.\nlimititercliqs allows user to limit the number of iterations a specific CSM does.\nkeywords verbose and verbosefid::IOStream can be used together to send output to file or default stdout.\nkeyword recordcliqs=[:x0; :x7...] identifies by frontals which cliques to record CSM steps.\nSee repeatCSMStep!, printCSMHistoryLogical, printCSMHistorySequential\n\nDevNotes\n\nTODO Change keyword arguments to new @parameter SolverOptions type.\n\nExample\n\n# pass in old `tree` to enable compute recycling -- see online Documentation for more details\ntree = solveTree!(fg [,tree])\n\nRelated\n\nsolveGraph!, solveCliqUp!, solveCliqDown!, buildTreeReset!, repeatCSMStep, printCSMHistoryLogical\n\n\n\n\n\n","category":"function"},{"location":"concepts/solving_graphs/#variable_init","page":"Solving Graphs","title":"Automatic vs Manual Init","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"Currently the main automatic initialization technique used by IncrementalInference.jl is delayed propagation of belief on the factor graph. This can be globally or locally controlled via:","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"getSolverParams(fg).graphinit = false\n\n# or locally at each addFactor\naddFactor!(fg, [:x0;:x1], LinearRelative(Normal()); graphinit=false)","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"Use initVariable! 
if you'd like to force a particular numerical initialization of some or all the variables.","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"initVariable!","category":"page"},{"location":"concepts/solving_graphs/#IncrementalInference.initVariable!","page":"Solving Graphs","title":"IncrementalInference.initVariable!","text":"initVariable!(\n variable::DFGVariable,\n ptsArr::ManifoldKernelDensity;\n ...\n)\ninitVariable!(\n variable::DFGVariable,\n ptsArr::ManifoldKernelDensity,\n solveKey::Symbol;\n dontmargin,\n N\n)\n\n\nMethod to manually initialize a variable using a set of points.\n\nNotes\n\nDisable automated graphinit on `addFactor!(fg, ...; graphinit=false)\nany un-initialized variables will automatically be initialized by solveTree!\n\nExample:\n\n# some variable is added to fg\naddVariable!(fg, :somepoint3, ContinuousEuclid{2})\n\n# data is organized as (row,col) == (dimension, samples)\npts = randn(2,100)\ninitVariable!(fg, :somepoint3, pts)\n\n# manifold management should be done automatically.\n# note upgrades are coming to consolidate with Manifolds.jl, see RoME #244\n\n## it is also possible to initVariable! by using existing factors, e.g.\ninitVariable!(fg, :x3, [:x2x3f1])\n\nDevNotes\n\nTODO better document graphinit and treeinit.\n\n\n\n\n\n","category":"function"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"All the variables can be initialized without solving with:","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"initAll!","category":"page"},{"location":"concepts/solving_graphs/#IncrementalInference.initAll!","page":"Solving Graphs","title":"IncrementalInference.initAll!","text":"initAll!(dfg; ...)\ninitAll!(dfg, solveKey; _parametricInit, solvable, N)\n\n\nPerform graphinit over all variables with solvable=1 (default).\n\nSee also: ensureSolvable!, (EXPERIMENTAL 'treeinit')\n\n\n\n\n\n","category":"function"},{"location":"concepts/solving_graphs/#Using-Incremental-Updates-(Clique-Recycling-I)","page":"Solving Graphs","title":"Using Incremental Updates (Clique Recycling I)","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"One of the major features of the MM-iSAMv2 algorithm (implemented by IncrementalInference.jl) is reducing computational load by recycling and marginalizing different (usually older) parts of the factor graph. In order to utilize the benefits of recycling, the previous Bayes (Junction) tree should also be provided as input (see fixed-lag examples for more details):","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"tree = solveTree!(fg, tree)","category":"page"},{"location":"concepts/solving_graphs/#Using-Clique-out-marginalization-(Clique-Recycling-II)","page":"Solving Graphs","title":"Using Clique out-marginalization (Clique Recycling II)","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"When building systems with limited computation resources, the out-marginalization of cliques on the Bayes tree can be used. This approach limits the number of variables that are inferred on each solution of the graph. This method is also a complement to the above Incremental Recycling – these two methods can work in tandem. 
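For instance, the two recycling strategies might be combined roughly as follows (a sketch; it uses the FIFO out-marginalization helper introduced just below and assumes tree was returned by a previous solve):

# out-marginalize older variables, then recycle the previous Bayes tree on the next solve
defaultFixedLagOnTree!(fg, 50)
tree = solveTree!(fg, tree)
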
There is a default setting for a FIFO out-marginalization strategy (with some additional tricks):","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"defaultFixedLagOnTree!(fg, 50, limitfixeddown=true)","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"This call will keep the latest 50 variables fluid for inference during Bayes tree inference. The keyword limitfixeddown=true in this case will also prevent downward message passing on the Bayes tree from propagating into the out-marginalized branches on the tree. A later page in this documentation will discuss how the inference algorithm and Bayes tree aspects are put together.","category":"page"},{"location":"concepts/solving_graphs/#sync_over_graph_solvable","page":"Solving Graphs","title":"Synchronizing Over a Factor Graph","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"When adding Variables and Factors, use solvable=0 to disable the new fragments until ready for inference, for example","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"addVariable!(fg, :x45, Pose2, solvable=0)\nnewfct = addFactor!(fg, [:x11,:x12], Pose2Pose2, solvable=0)","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"These parts of the factor graph can simply be activated for solving:","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"setSolvable!(fg, :x45, 1)\nsetSolvable!(fg, newfct.label, 1)","category":"page"},{"location":"principles/filterCorrespondence/#Build-your-own-(Bayes)-Filter","page":"Filters vs. Graphs","title":"Build your own (Bayes) Filter","text":"","category":"section"},{"location":"principles/filterCorrespondence/#Correspondence-with-Kalman-Filtering?","page":"Filters vs. Graphs","title":"Correspondence with Kalman Filtering?","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"A frequent discussion point is the correspondence between Kalman/particle/log-flow filtering strategies and factor graph formulations. This section aims to shed light on the relationship, and to show that factor graph interpretations are a powerful generalization of existing filtering techniques. The discussion follows a build-your-own-filter style and combines the Approximate Convolution and Multiplying Densities pages as the required prediction and update cycle steps, respectively. Using the steps described here, the user will be able to build fully-functional–-i.e. non-Gaussian–-(Bayes) filters. ","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"note: Note\nA simple 1D predict correct Bayesian filtering example (using underlying convolution and product operations of the mmisam algorithm) can be used as a rough template to familiarize yourself on the correspondence between filters and newer graph-based operations.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. 
Graphs","text":"This page tries to highlight some of the reasons why using a factor graph approach (w/ Bayes/junction tree inference) in a incremental/fixed-lag/federated sense–-e.g. simultaneous localization and mapping (SLAM) approach–-has merit. The described steps form part of the core operations used by the multimodal incremental smoothing and mapping (mmisam) algorithm.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Further topics on factor graph (and Bayes/junction tree) inference formulation, including how out-marginalization works is discussed separately as part of the Bayes tree description page. It is also worth reiterating the section on why do we even care about non-Gaussian signal processing.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"note: Note\nComing soon, the steps described on this page will be fully accessible via multi-language interfaces (middleware) – some of these interfaces already exist.","category":"page"},{"location":"principles/filterCorrespondence/#Causality-and-Markov-Assumption","page":"Filters vs. Graphs","title":"Causality and Markov Assumption","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"WIP: Causal connection explanation: How is the graph based method the same as Kalman filtering variants (UKF, EKF), including Bayesian filtering (PF, etc.), and the Hidden Markov Model (HMM) methodology. ","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Furthermore, see below for connection to EKF-SLAM too.","category":"page"},{"location":"principles/filterCorrespondence/#Joint-Probability-and-Chapman-Kolmogorov","page":"Filters vs. Graphs","title":"Joint Probability and Chapman-Kolmogorov","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"WIP; The high level task is to \"invert\" measurements Z give the state of the world Theta","category":"page"},{"location":"principles/filterCorrespondence/#Maximum-Likelihood-vs.-Message-Passing","page":"Filters vs. Graphs","title":"Maximum Likelihood vs. Message Passing","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"WIP; This dicussion will lead towards Bayesian Networks (Pearl) and Bayes Trees (Kaess et al., Fourie et al.).","category":"page"},{"location":"principles/filterCorrespondence/#The-Target-Tracking-Problem-(Conventional-Filtering)","page":"Filters vs. Graphs","title":"The Target Tracking Problem (Conventional Filtering)","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Consider a common example, two dimensional target tracking, where a projectile transits over a tracking station using various sensing technologies [Zarchan 2013]. Position and velocity estimates of the target","category":"page"},{"location":"principles/filterCorrespondence/#Prediction-Step-using-a-Factor-Graph","page":"Filters vs. Graphs","title":"Prediction Step using a Factor Graph","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. 
Graphs","text":"Assume a constant velocity model from which the estimate will be updated through the measurement model described in the next section. A constant velocity model is taken as (cartesian)","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"fracdxdt = 0 + eta_x\nfracdydt = 0 + eta_y","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"or polar coordinates","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"fracdrhodt = 0 + eta_rho\nfracdthetadt = 0 + eta_theta","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"In this example, noise is introduced as an affine slack variable \\eta, but could be added as any part of the process model:","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"eta_j sim p()","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"where p is any allowable noise probability density/distribution model – discussed more in the next section.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"After integration (assume zeroth order) the associated residual function can be constructed:","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"delta_i (theta_k theta_k-1 fracd theta_kdt Delta t) = theta_k - (theta_k-1 + fracd theta_kdt Delta t)","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Filter prediction steps are synonymous with a binary factor (conditional likelihood) between two variables where a prior estimate from one variable is projected (by means of a convolution) to the next variable. The convolutional principle page describes a more detailed example on how a convolution can be computed. ","category":"page"},{"location":"principles/filterCorrespondence/#Measurement-Step-using-a-Factor-Graph","page":"Filters vs. Graphs","title":"Measurement Step using a Factor Graph","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"The measurement update is a product operation of infinite functional objects (probability densities)","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"p(X_k X_k-1 Z_a Z_b) approx p(X_k X_k-1 Z_a) times p(X_k Z_b)","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"where Z_. represents conditional information for two beliefs on the same variable. The product of the two functional estimates (beliefs) are multiplied by a stochastic algorithm described in more detail on the multiplying functions page.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Direct state observations can be added to the factor graph as prior factors directly on the variables. 
An illustration of both predictions (binary likelihood process model) and direct observations (measurements) is presented:","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"

      ","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Alternatively, indirect measurements of the state variables are should be modeled with the most sensible function","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"y = h(theta eta)\ndelta_j(theta_j eta_j) = ominus h_j(theta_j eta_j) oplus y_j","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"which approximates the underlying (on-manifold) stochastics and physics of the process at hand. The measurement models can be used to project belief through a measurement function, and should be recognized as a standard representation for a Hidden Markov Model (HMM):","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"

      ","category":"page"},{"location":"principles/filterCorrespondence/#Beyond-Filtering","page":"Filters vs. Graphs","title":"Beyond Filtering","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Consider a multi-sensory system along with data transmission delays, variable sampling rates, etc.; when designing a filtering system to track one or multiple targets, it quickly becomes difficult to augment state vectors with the required state and measurement histories. In contrast, the factor graph as a language allows for heterogeneous data streams to be combined in a common inference framework, and is discussed further in the building distributed factor graphs section.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"note: Note\nFactor graphs are constructed along with the evolution of time which allows the mmisam inference algorithm to resolve variable marginal estimates both forward and backwards in time. Conventional filtering only allows for forward-backward \"smoothing\" as two separate processes. When inferring over a factor graph, all variables and factors are considered simultaneously according the topological connectivity irrespective of when and where which measurements were made or communicated – as long as the factor graph (probabilistic model) captures the stochastics of the situation with sufficient accuracy. ","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"TODO: Multi-modal (belief) vs. multi-hypothesis – see thesis work on multimodal solutions in the mean time.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"note: Note\nMmisam allows for parametric, non-parametric, or intensity noise models which can be incorporated into any differentiable residual function.","category":"page"},{"location":"principles/filterCorrespondence/#Anecdotal-Example-(EKF-SLAM-/-MSC-KF)","page":"Filters vs. Graphs","title":"Anecdotal Example (EKF-SLAM / MSC-KF)","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"WIP: Explain how this method is similar to EKF-SLAM and MSC-KF...","category":"page"},{"location":"examples/basic_definingfactors/#custom_prior_factor","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"","category":"section"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Julia's type inference allows overloading of member functions outside a module. Therefore new factors can be defined at any time. 
","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Required Brief description\nMyFactor struct Prior (<:AbstractPrior) factor definition\nOptional methods Brief description\ngetSample(cfo::CalcFactor{<:MyFactor}) Get a sample from the measurement model","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"To better illustrate, in this example we will add new factors into the Main context after construction of the factor graph has already begun.","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"tip: Tip\nIIF is a convenient const alias of the module IncrementalInference, similarly AMP for ApproxManifoldProducts.","category":"page"},{"location":"examples/basic_definingfactors/#Defining-a-New-Prior-(:AbsoluteFactor)","page":"Custom Prior Factor","title":"Defining a New Prior (<:AbsoluteFactor)","text":"","category":"section"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Now lets define our own prior, MyPrior which allows for arbitrary distributions that inherit from <: IIF.SamplableBelief:","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"struct MyPrior{T <: SamplableBelief} <: IIF.AbstractPrior\n Z::T\nend","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"New priors must inheret from IIF.AbstractPrior, and usually takes a user input <:SamplableBelief as probabilistic model. <:AbstractPrior is a unary factor that introduces absolute information about only one variable.","category":"page"},{"location":"examples/basic_definingfactors/#specialized_getSample","page":"Custom Prior Factor","title":"Specialized getSample (if .Z)","text":"","category":"section"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Caesar.jl uses a convention (non-binding) to simplify factor definitions in easier cases, but not restrict more complicated cases – a default getSample function already exists in IIF which assumes the field .Z <: SamplableBelief is used to generate the random sample values. So, the example above actually does not require the user to provide a specific getSample(cf::CalcFactor{<:MyPrior}) dispatch. ","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"For the sake of the tutorial, let's write one anyway. Remember that we are now overriding the IIF API with a new dispatch, for that we need to import the function","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"import IncrementalInference: getSample\n\n# adding our own specialized dispatch on getSample\nIIF.getSample(cfo::CalcFactor{<:MyPrior}) = rand(cfo.factor.Z)","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"It is important to note that for <:AbstractPrior the getSample must return a point on the manifold, not a tangent vector or coordinate. 
","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"To recap, the getSample function for priors returns a measurement sample as points on the manifold.","category":"page"},{"location":"examples/basic_definingfactors/#Ready-to-Use","page":"Custom Prior Factor","title":"Ready to Use","text":"","category":"section"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"This new prior can now readily be added to an ongoing factor graph:","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"# lets generate a random nonparametric belief\n\npts = [samplePoint(getManifold(Position{1}), Normal(8.0,2.0)) for _=1:75]\nsomeBelief = manikde!(Position{1}, pts)\n\n# and build your new factor as an object\nmyprior = MyPrior(someBelief)","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"and add it to the existing factor graph from earlier, lets say:","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"addFactor!(fg, [:x1], myprior)","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"note: Note\nVariable types Postion{1} or ContinuousEuclid{1} are algebraically equivalent.","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"That's it, this factor is now part of the graph. This should be a solvable graph:","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"solveGraph!(fg); # exact alias of solveTree!(fg)","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Later we will see how to ensure these new factors can be properly serialized to work with features like saveDFG and loadDFG. See What is CalcFactor for more details.","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"See the next page on how to build your own Custom Relative Factor. Serialization of factors is also discussed in more detail at Standardized Factor Serialization.","category":"page"},{"location":"concepts/arena_visualizations/#visualization_3d","page":"Visualization (3D)","title":"Visualization 3D","text":"","category":"section"},{"location":"concepts/arena_visualizations/#Introduction","page":"Visualization (3D)","title":"Introduction","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Over time, Caesar.jl/Arena.jl has used at various different 3D visualization technologies. 
","category":"page"},{"location":"concepts/arena_visualizations/#Arena.jl-Visualization","page":"Visualization (3D)","title":"Arena.jl Visualization","text":"","category":"section"},{"location":"concepts/arena_visualizations/#viz_pointcloud","page":"Visualization (3D)","title":"Plotting a PointCloud","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Visualization support for point clouds is available through Arena and Caesar. The follow example shows some of the basics:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"using Arena\nusing Caesar\nusing Downloads\nusing DelimitedFiles\nusing LasIO\nusing Test\n\n##\n\nfunction downloadTestData(datafile, url)\n if 0 === Base.filesize(datafile)\n Base.mkpath(dirname(datafile))\n @info \"Downloading $url\"\n Downloads.download(url, datafile)\n end\n return datafile\nend\n\ntestdatafolder = joinpath(tempdir(), \"caesar\", \"testdata\") # \"/tmp/caesar/testdata/\"\n\nlidar_terr1_file = joinpath(testdatafolder,\"lidar\",\"simpleICP\",\"terrestrial_lidar1.xyz\")\nif !isfile(lidar_terr1_file)\n lidar_terr1_url = \"https://github.com/JuliaRobotics/CaesarTestData.jl/raw/main/data/lidar/simpleICP/terrestrial_lidar1.xyz\"\n downloadTestData(lidar_terr1_file,lidar_terr1_url)\nend\n\n# load the data to memory\nX_fix = readdlm(lidar_terr1_file, Float32)\n# convert data to PCL types\npc_fix = Caesar._PCL.PointCloud(X_fix);\n\n\npl = Arena.plotPointCloud(pc_fix)\n","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"This should result in a plot similar to:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"

      ","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"note: Note\n24Q1: Currently work is underway to better standardize within the Julia ecosystem, with the 4th generation of Arena.jl – note that this is work in progress. Information about legacy generations is included below.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"For more formal visualization support, contact www.NavAbility.io via email or slack. ","category":"page"},{"location":"concepts/arena_visualizations/#4th-Generation-Dev-Scripts-using-Makie.jl","page":"Visualization (3D)","title":"4th Generation Dev Scripts using Makie.jl","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Working towards new Makie.jl. Makie supports both GL and WGL, including 3rd party libraries such as three.js (previously used via MeshCat.jl, see Legacy section below.).","category":"page"},{"location":"concepts/arena_visualizations/#viz_pointcloud_makie","page":"Visualization (3D)","title":"Visualizing Point Clouds","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Point clouds could be massive, on the order of a million points or more. Makie.jl has good performance for handling such large point cloud datasets. Here is a quick example script.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"using Makie, GLMakie\n\n# n x 3 matrix of 3D points in pointcloud\npts1 = randn(100,3)\npts2 = randn(100,3)\n\n# plot first and update with second\nplt = scatter(pts1[:,1],pts1[:,2],pts1[:,3], color=pts1[:,3])\nscatter!(pts2[:,1],pts2[:,2],pts2[:,3], color=-pts2[:,3])","category":"page"},{"location":"concepts/arena_visualizations/#Visualizing-with-Arena.jl","page":"Visualization (3D)","title":"Visualizing with Arena.jl","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"warning: Warning\nArena.jl is currently out of date since the package will likely support Makie via both GL and WGL interfaces. Makie.jl has been receiving much attention over the past years and starting to mature to a point where Arena.jl can be revived again. 2D plotting is done via RoMEPlotting.jl.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"The sections below discuss 3D visualization techniques available to the Caesar.jl robot navigation system. Caesar.jl uses the Arena.jl package for all the visualization requirements. This part of the documentation discusses the robotic visualization aspects supported by Arena.jl. Arena.jl supports a wide variety of general visualization as well as developer visualization tools more focused on research and development. 
The visualizations are also intended to help with subgraph plotting for finding loop closures in data or compare two datasets.","category":"page"},{"location":"concepts/arena_visualizations/#Legacy-Visualizers","page":"Visualization (3D)","title":"Legacy Visualizers","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Previous generations used various technologies, including WebGL and three.js by means of the MeshCat.jl package. Previous incarnations used a client side installation of VTK by means of the DrakeVisualizer.jl and Director libraries. Different 2D plotting libraries have also been used, with evolutions to improve usability for a wider user base. Each epoch has been aimed at reducing dependencies and increasing multi-platform support.","category":"page"},{"location":"concepts/arena_visualizations/#3rd-Generation-MeshCat.jl-(Three.js)","page":"Visualization (3D)","title":"3rd Generation MeshCat.jl (Three.js)","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"For the latest work on using MeshCat.jl, see proof or concept examples in Amphitheater.jl (1Q20). The code below inspired the Amphitheater work.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"note: Note\nSee installation page for instructions.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Factor graphs of two or three dimensions can be visualized with the 3D visualizations provided by Arena.jl and it's dependencies. The 2D example above and also be visualized in a 3D space with the commands:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"vc = startdefaultvisualization() # to load a DrakeVisualizer/Director process instance\nvisualize(fg, vc, drawlandms=false)\n# visualizeallposes!(vc, fg, drawlandms=false)","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Here is a basic example of using visualization and multi-core factor graph solving:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"addprocs(2)\nusing Caesar, RoME, TransformUtils, Distributions\n\n# load scene and ROV model (might experience UDP packet loss LCM buffer not set)\nsc1 = loadmodel(:scene01); sc1(vc)\nrovt = loadmodel(:rov); rovt(vc)\n\ninitCov = 0.001*eye(6); [initCov[i,i] = 0.00001 for i in 4:6];\nodoCov = 0.0001*eye(6); [odoCov[i,i] = 0.00001 for i in 4:6];\nrangecov, bearingcov = 3e-4, 2e-3\n\n# start and add to a factor graph\nfg = identitypose6fg(initCov=initCov)\ntf = SE3([0.0;0.7;0.0], Euler(pi/4,0.0,0.0) )\naddOdoFG!(fg, Pose3Pose3(MvNormal(veeEuler(tf), odoCov) ) )\n\naddLinearArrayConstraint(fg, (4.0, 0.0), :x0, :l1, rangecov=rangecov,bearingcov=bearingcov)\naddLinearArrayConstraint(fg, (4.0, 0.0), :x1, :l1, rangecov=rangecov,bearingcov=bearingcov)\n\nsolveBatch!(fg)\n\nusing Arena\n\nvc = startdefaultvisualization()\nvisualize(fg, vc, drawlandms=true, densitymeshes=[:l1;:x2])\nvisualizeDensityMesh!(vc, fg, :l1)\n# visualizeallposes!(vc, fg, drawlandms=false)","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization 
(3D)","title":"Visualization (3D)","text":"For more information see JuliaRobotcs/MeshCat.jl.","category":"page"},{"location":"concepts/arena_visualizations/#2nd-Generation-3D-Viewer-(VTK-/-Director)","page":"Visualization (3D)","title":"2nd Generation 3D Viewer (VTK / Director)","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"note: Note\nThis code is obsolete","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Previous versions used the much larger VTK based Director available via DrakeVisualizer.jl package. This requires the following preinstalled packages:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":" sudo apt-get install libvtk5-qt4-dev python-vtk","category":"page"},{"location":"concepts/arena_visualizations/#1st-Generation-MIT-LCM-Collections-viewer","page":"Visualization (3D)","title":"1st Generation MIT LCM Collections viewer","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"This code has been removed.","category":"page"},{"location":"concepts/zero_install/#Using-The-NavAbility-Cloud","page":"Zero Install Solution","title":"Using The NavAbility Cloud","text":"","category":"section"},{"location":"concepts/zero_install/","page":"Zero Install Solution","title":"Zero Install Solution","text":"See NavAbilitySDK for details. These features will include Multi-session/agent support.","category":"page"},{"location":"principles/approxConvDensities/#Principle:-Approximate-Convolutions","page":"Generic Convolutions","title":"Principle: Approximate Convolutions","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"This example illustrates a central concept of approximating the convolution of belief density functions. Convolutions are required to compute (estimate) the probabilistic chain rule with conditional probability density functions. One easy illustration is robotics where an odometry chain of poses has a continuous increase–-or spreading–-of the confidence/uncertainty of a next pose. 
This tutorial will demonstrate that process.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"This page describes a Julia language interface, followed by a CaesarZMQ interface; a link to the mathematical description is provided thereafter.","category":"page"},{"location":"principles/approxConvDensities/#Convolutions-of-Infinite-Objects-(Functionals)","page":"Generic Convolutions","title":"Convolutions of Infinite Objects (Functionals)","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Consider the following vehicle odometry prediction (probabilistic) operation, where odometry measurement Z is an independent stochastic process from prior belief on pose X0","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"p(X_1 | X_0, Z) \propto p(Z | X_0, X_1) p(X_0)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"and recognize this process as a convolution operation where the prior belief on X0 is spread to a less certain prediction of pose X1. The figure below shows an example quasi-deterministic convolution of the green density with the red density, which results in the black density below:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"\"Bayes/Junction","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Note that this operation is precisely the same as a prediction step in filtering applications, where the state transition model–-usually annotated as d/dt x = f(x, z)–-is here represented by the conditional belief p(Z | X_0, X_1).","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The convolution computation described above is a core operation required for solving the Chapman-Kolmogorov transit equations.","category":"page"},{"location":"principles/approxConvDensities/#Underlying-Mathematical-Operations","page":"Generic Convolutions","title":"Underlying Mathematical Operations","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"In order to compute generic convolutions, the mmisam algorithm uses non-linear gradient descent to resolve estimates of the target variable based on the values of other dependent variables. The conditional likelihood (multidimensional factor) is based on a residual function:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"z_i = \delta_i (\theta_i)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"where z_i is the innovation of any smooth twice differentiable residual function \delta. The residual function depends on specific variables collected as \theta_i. 
The IIF code supports both root finding and minimization (trust-region) operations, which are provided by the NLsolve.jl and Optim.jl packages respectively.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The choice between root finding or minimization is a performance consideration only. Minimization of the residual squared will always work but certain situations allow direct root finding to be used. If the residual function is guaranteed to cross zero–-i.e. z*=0–-the root finding approach can be used. Each measurement function has a certain number of dimensions – e.g. ranges or bearings are dimension one, and an inter Pose2 rigid transform (delta x, y, theta) is dimension 3. If the variable being resolved has larger dimension than the measurement residual, then the minimization approach must be used.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The method of solving the target variable is to fix all other variable values and resolve, sample by sample, the particle estimates of the target. The Julia programming language has good support for functional programming and is used extensively in the IIF implementation to utilize user defined functions to resolve any variable, including the null-hypothesis and multi-hypothesis generalizations.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The following section illustrates a single convolution operation by using a few high level and some low level function calls. An additional tutorial exists where a related example in one dimension is performed as a complete factor graph solution/estimation problem.","category":"page"},{"location":"principles/approxConvDensities/#Previous-Text-(to-be-merged-here)","page":"Generic Convolutions","title":"Previous Text (to be merged here)","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Proposal distributions are computed by means of an (analytical or numerical – i.e. \"algebraic\") factor which defines a residual function:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"\delta : S \times \Eta \rightarrow \mathcal{R}","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"where S \times \Eta is the domain such that \theta_i \in S, \eta \sim P(\Eta), and P(\cdot) is a probability.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"A trust-region, nonlinear gradient descent method is used to enforce the residual function \delta (\theta_S) in a leave-one-out-Gibbs strategy for all the factors and variables in each clique. Each time, a factor residual is enforced for another particle along with a sample from the stochastic noise term. 
Solutions are found either through root finding on \"full dimension\" equations (source code here):","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"\text{solve}_{\theta_i} \text{ s.t. } 0 = \delta(\theta_S, \eta)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Or minimization of \"low dimension\" equations (source code here) that might not have any roots in \theta_i:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"\text{argmin}_{\theta_i} \delta(\theta_S, \eta)^2","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Gradient descent methods are obtained from the Julia package community, namely NLsolve.jl and Optim.jl.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The factor noise term can be any samplable belief (a.k.a. IIF.SamplableBelief), either through algebraic modeling, or (critically) directly from the sensor measurement that is driven by the underlying physics process. Parametric factors (Distributions.jl) or direct physical measurement noise can be used via AliasingScalarSampler or KernelDensityEstimate.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nAlso see [1.2], Chap. 5, Approximate Convolutions for more details.","category":"page"},{"location":"principles/approxConvDensities/#Illustrated-Calculation-in-Julia","page":"Generic Convolutions","title":"Illustrated Calculation in Julia","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The IncrementalInference.jl package provides a generic interface for estimating the convolution of full functional objects given some user specified residual or cost function. The residual/cost function is then used, with the help of non-linear gradient descent, to project/resolve a set of particles for any one variable associated with any factor. In the binary variable factor case, such as the odometry tutorial, either pose X2 will be resolved from X1 using the user supplied likelihood residual function, or vice versa for X1 from X2. ","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nNote in a factor graph sense, the flow of time is captured in the structure of the graph and a requirement of the IncrementalInference system is that factors can be resolved towards any variable, given current estimates on all other variables connected to that factor. Furthermore, this forwards or backwards resolving/convolution through a factor should adhere to the Kolmogorov Criterion of reversibility to ensure that detailed balance is maintained in the overall marginal posterior solutions.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The IncrementalInference (IIF) package provides a few generic conditional likelihood functions such as LinearRelative or MixtureRelative which we will use in this illustration. 
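To ground the two solve styles above outside of the IIF internals, here is a minimal standalone sketch contrasting root finding and squared-residual minimization on a toy 1-D odometry-style residual. NLsolve.jl and Optim.jl are the same packages named above; the residual and numeric values are purely illustrative.

```julia
using NLsolve, Optim

# toy residual: θ1 should equal θ0 plus an odometry measurement with one noise sample
θ0 = 0.0
z  = 1.5
η  = 0.1                      # one sample from the measurement noise model
δ(θ1) = θ1 - (θ0 + z + η)     # this residual crosses zero, so root finding is applicable

# "full dimension" style: root finding
rootsol = nlsolve((F, x) -> (F[1] = δ(x[1])), [0.0])
θ1_root = rootsol.zero[1]

# "low dimension" style: minimize the squared residual (always applicable)
minsol = optimize(x -> δ(x[1])^2, [0.0], NelderMead())
θ1_min = Optim.minimizer(minsol)[1]

θ1_root, θ1_min   # both ≈ θ0 + z + η = 1.6
```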
","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nNote that the RoME.jl package provides many more factors that are useful to robotics applications. For a listing of current factors see this docs page, details on developing your own factors on this page. One of the clear design objectives of the IIF package was to allow easier user extension of arbitrary residual functions that allows for vast capacity to represent non-Gaussian stochastic processes.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Consider a robot traveling in one dimension, progressing along the x-axis at varying speed. Lets assume pose locations are determined by a constant delta-time rule of say one pose every second, named X0, X1, X2, and so on.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nNote the bread-crum discretization of the trajectory history by means of poses can later be used to allow estimation of previously unknown mapping parameters simultaneous to the ongoing localization problem.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Lets a few basic factor graph operations to develop the desired convolutions:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"using IncrementalInference\n\n# empty factor graph container\nfg = initfg()\n\n# add two variables of interest\naddVariable!(fg, :x0, ContinuousScalar)\naddVariable!(fg, :x1, ContinuousScalar)\n\n# gauge the solution by adding the first prior information that represents all history up to the current starting position for the robot\npr = Prior(Normal(0.0, 0.1))\naddFactor!(fg, [:x0], pr)\n\n# numerically initialize variable :x0 -- this avoids repeat computations later (specific to this tutorial)\ndoautoinit!(fg, :x0)\n\n# lastly add the odometry conditional likelihood function between the two variables of interest\nodo = LinearConditional(Rayleigh(...))\naddFactor!(fg, [:x0;:x1], odo) # note the list is order sensitive","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The code block above (not solved yet) describes a algebraic setup exactly equivalent to the convolution equation presented at the top of this page. ","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nIIF does not require the distribution functions to only be parametric, such as Normal, Rayleigh, mixture models, but also allows intensity based values or kernel density estimates. 
Parametric types are just used here for ease of illustration.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"To perform a stochastic approximate convolution with the odometry conditional, one can simply call a low level function used by the mmisam solver:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"pts = approxConvBelief(fg, :x0x1f1, :x1) |> getPoints","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The approxConvBelief function call reads as an operation on fg which won't influence any values in the parameter list (per the common Julia exclamation mark convention for mutating functions) and must use the first factor :x0x1f1 to resolve a convolution on target variable :x1. Implicitly, this result is based on the current estimate contained in :x0. The value of pts is a ::Array{Float64,2} where the rows represent the different dimensions (1-D in this case) and the columns are each of the different samples drawn from the intermediate posterior (i.e. convolution result). ","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"approxConvBelief","category":"page"},{"location":"principles/approxConvDensities/#IncrementalInference.approxConvBelief","page":"Generic Convolutions","title":"IncrementalInference.approxConvBelief","text":"approxConvBelief(dfg, from, target; ...)\napproxConvBelief(\n dfg,\n from,\n target,\n measurement;\n solveKey,\n N,\n tfg,\n setPPEmethod,\n setPPE,\n path,\n skipSolve,\n nullSurplus\n)\n\n\nCalculate the sequential series of convolutions in order as listed by fctLabels, and starting from the value already contained in the first variable. \n\nNotes\n\ntarget must be a variable.\nThe ultimate target variable must be given to allow path discovery through n-ary factors.\nFresh starting point will be used if first element in fctLabels is a unary <:AbstractPrior.\nThis function will not change any values in dfg, and might have slightly less speed performance to meet this requirement.\npass in tfg to get a recoverable result of all convolutions in the chain.\nsetPPE and setPPEmethod can be used to store PPE information in temporary tfg\n\nDevNotes\n\nTODO strong requirement that this function is super efficient on single factor/variable case!\nFIXME must consolidate with accumulateFactorMeans\nTODO solveKey not fully wired up everywhere yet\ntfg gets all the solveKeys inside the source dfg variables\nTODO add a approxConv on PPE option\nConsolidate with accumulateFactorMeans, approxConvBinary\n\nRelated\n\napproxDeconv, findShortestPathDijkstra\n\n\n\n\n\n","category":"function"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"IIF currently uses kernel density estimation to convert discrete samples into a smooth function estimate. 
The sample set can be converted into an on-manifold functional object as follows:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"# create kde object by referencing back the existing memory location pts\nhatX1 = manikde!(ContinuousScalar, pts)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The functional object X1 is now ready for other operations such as function evaluation or product computations discussed on another principles page. The ContinuousScalar manifold is just Manifolds.TranslationGroup(1).","category":"page"},{"location":"principles/approxConvDensities/#approxDeconv","page":"Generic Convolutions","title":"approxDeconv","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Analogous to a 'forward' convolution calculation, we can similarly approximate the inverse:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"approxDeconv","category":"page"},{"location":"principles/approxConvDensities/#IncrementalInference.approxDeconv","page":"Generic Convolutions","title":"IncrementalInference.approxDeconv","text":"approxDeconv(fcto; ...)\napproxDeconv(fcto, ccw; N, measurement, retries)\n\n\nInverse solve of predicted noise value and returns tuple of (newly calculated-predicted, and known measurements) values.\n\nNotes\n\nOnly works for first value in measurement::Tuple at this stage.\n\"measured\" is used as starting point for the \"calculated-predicted\" values solve.\nNot all factor evaluation cases are support yet.\nNOTE only works on .threadid()==1 at present, see #1094\nThis function is still part of the initial implementation and needs a lot of generalization improvements.\n\nDevNotes\n\nTODO Test for various cases with multiple variables.\nTODO make multithread-safe, and able, see #1094\nTODO Test for cases with nullhypo\nFIXME FactorMetadata object for all use-cases, not just empty object.\nTODO resolve #1096 (multihypo)\nTODO Test cases for multihypo.\nTODO figure out if there is a way to consolidate with evalFactor and approxConv?\nbasically how to do deconv for just one sample with unique values (wrt TAF)\nTODO N should not be hardcoded to 100\n\nRelated\n\napproxDeconv, _solveCCWNumeric!\n\n\n\n\n\napproxDeconv(dfg, fctsym; ...)\napproxDeconv(dfg, fctsym, solveKey; retries)\n\n\nGeneralized deconvolution to find the predicted measurement values of the factor fctsym in dfg. 
Inverse solve of predicted noise value and returns tuple of (newly predicted, and known \"measured\" noise) values.\n\nNotes\n\nOpposite operation contained in approxConvBelief.\nFor more notes see solveFactorMeasurements.\n\nRelated\n\napproxConvBelief, deconvSolveKey\n\n\n\n\n\n","category":"function"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"This feature is not yet as feature rich as the approxConvBelief function, and also requires further work to improve the consistency of the calculation – but none the less exists and is useful in many applications.","category":"page"},{"location":"concepts/parallel_processing/#Parallel-Processing","page":"Parallel Processing","title":"Parallel Processing","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"note: Note\nKeywords: parallel processing, multi-threading, multi-process","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"Julia allows high-performance, parallel processing from the ground up. Depending on the configuration, Caesar.jl can utilize a combination of four styles of multiprocessing: i) separate memory multi-process; ii) shared memory multi-threading; iii) asynchronous shared-memory (forced-atomic) co-routines; and iv) multi-architecture such as JuliaGPU. As of Julia 1.4, the most reliable method of loading all code into all contexts (for multi-processor speedup) is as follows.","category":"page"},{"location":"concepts/parallel_processing/#Multiprocessing","page":"Parallel Processing","title":"Multiprocessing","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"Make sure the environment variable JULIA_NUM_THREADS is set as default or per call and recommended to use 4 as starting point.","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"JULIA_NUM_THREADS=4 julia -O3","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"In addition to multithreading, Caesar.jl utilizes multiprocessing to distribute computation during the inference steps. 
Following standard Julia, more processes can be added as follows:","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"# load the required packages into procid()==1\nusing Flux, RoME, Caesar, RoMEPlotting\n\n# then start more processes\nusing Distributed\naddprocs(8) # note this yields 8 additional worker processes, each with JULIA_NUM_THREADS threads\n\n# now make sure all code is loaded everywhere (for separate memory cases)\n@everywhere using Flux, RoME, Caesar","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"It might also be convenient to warm up some of the Just-In-Time compiling:","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"# solve a few graphs etc, to get majority of solve code compiled before running a robot.\n[warmUpSolverJIT() for i in 1:3];","category":"page"},{"location":"concepts/parallel_processing/#Start-up-Time","page":"Parallel Processing","title":"Start-up Time","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"The best way to avoid compile time (when not developing) is to use the established Julia \"time to first plot\" approach based on PackageCompiler.jl, and more details are provided at Ahead of Time compiling.","category":"page"},{"location":"concepts/parallel_processing/#Multithreading","page":"Parallel Processing","title":"Multithreading","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"Julia has strong support for shared-memory multithreading. The most sensible breakdown into threaded work is either within each factor calculation or across individual samples of a factor calculation. Either of these cases requires some special considerations.","category":"page"},{"location":"concepts/parallel_processing/#Threading-Within-the-Residual","page":"Parallel Processing","title":"Threading Within the Residual","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"A factor residual function itself can be broken down further into threaded operations. For example, see many of the features available at JuliaSIMD/LoopVectorization.jl. It is recommended to keep memory allocations down to zero, since the solver code will call on the factor sampling and residual functions multiple times in random access. Also keep in mind the interaction between conventional thread pool balancing and the newer PARTR cache sensitive automated thread scheduling.","category":"page"},{"location":"concepts/parallel_processing/#Threading-Across-Parallel-Samples-[DEPRECATED-–-REFACTORING]","page":"Parallel Processing","title":"Threading Across Parallel Samples [DEPRECATED – REFACTORING]","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"IncrementalInference.jl internally has the capability to span threads across samples in parallel computations during convolution operations. Keep in mind which parts of the residual factor computation are shared memory. 
Likely the best course of action is for the factor definition to pre-allocate Threads.nthreads() many memory blocks for factor in-place operations.","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"To use this feature, IIF must be told that there are no data race concerns with a factor. The current API uses a keyword argument on addFactor!:","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"# NOTE, legacy `threadmodel=MultiThreaded` is being refactored with new `CalcFactor` pattern\naddFactor!(fg, [:x0; :x1], MyFactor(...))","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"warning: Warning\nThe current IIF factor multithreading interface is likely to be reworked/improved in the near future (penciled in for 1H2022).","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"See page Custom Factors for details on how factor computations are represented in code. Regarding threading, consider for example OtherFactor.userdata. The residual calculations from different threads might create a data race on userdata for some volatile internal computation. In that case it is recommended to instead use Threads.nthreads() and Threads.threadid() to make sure the shared-memory issues are avoided:","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"struct MyThreadSafeFactor{T <: SamplableBelief} <: IIF.AbstractManifoldMinimize\n Z::T\n inplace::Vector{MyInplaceMem}\nend\n\n# helper function\nMyThreadSafeFactor(z) = MyThreadSafeFactor(z, [MyInplaceMem(0) for i in 1:Threads.nthreads()])\n\n# in residual function just use `thr_inplace = cfo.factor.inplace[Threads.threadid()]`","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"note: Note\nBeyond the cases discussed above, other features in the IncrementalInference.jl code base (especially regarding the Bayes tree) are already multithreaded.","category":"page"},{"location":"concepts/parallel_processing/#Factor-Caching-(In-place-operations)","page":"Parallel Processing","title":"Factor Caching (In-place operations)","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"In-place memory operations for factors can yield a significant performance improvement. See the Cache and Stash section for more details.","category":"page"},{"location":"principles/interm_dynpose/#Adding-Velocity-(Preintegration)","page":"Creating DynPose Factor","title":"Adding Velocity (Preintegration)","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"This tutorial describes how a new factor can be developed, beyond the pre-existing implementation in RoME.jl. Factors can accept any number of variable dependencies and allow a wide class of functions to be used. 
Our intention is to make it as easy as possible for users to create their own factor types.","category":"page"},{"location":"principles/interm_dynpose/#Example:-Adding-Velocity-to-RoME.Point2","page":"Creating DynPose Factor","title":"Example: Adding Velocity to RoME.Point2","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A smaller example in two dimensions where we wish to estimate the velocity of some target: Consider two variables :x0 with a prior as well as a conditional–-likelihood for short–-to variable :x1. Priors are in the \"global\" reference frame (how ever you choose to define it), while likelihoods are in the \"local\" / \"relative\" frame that only exist between variables.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"(Image: dynpoint2fg)","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"warning: Warning\nText below is outdated (2021Q1) and needs to be updated for changes softtype-->variableType and CalcFactor.","category":"page"},{"location":"principles/interm_dynpose/#Brief-on-Variable-Node-softtypes","page":"Creating DynPose Factor","title":"Brief on Variable Node softtypes","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Variable nodes retain meta data (so called \"soft types\") describing the type of variable. Common VariableNode types are RoME.Point2D, RoME.Pose3D. VariableNode soft types are passed during construction of the factor graph, for example:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"v1 = addVariable!(fg, :x1, Pose2)","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Certain cases require that more information be retained for each VariableNode, and velocity calculations are a clear example where time stamp data across positions is required. ","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Note Larger data can also be stored under the bigdata framework which is discussed here (TBD).","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"If the required VariableNode does not exist, then one can be created, such as adding velocity states with DynPoint2:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"mutable struct DynPoint2 <: IncrementalInference.InferenceVariable\n ut::Int64 # microsecond time\n dims::Int\n DynPoint2(;ut::Int64=0) = new(ut, 4)\nend","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"The dims field is permanently set to 4, i.e. [x, y, dx/dt, dy/dt]. 
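As a quick sanity check of the legacy definition above (a sketch, assuming the DynPoint2 struct exactly as written here), the variable type can be constructed and inspected directly:

dp = DynPoint2(ut=1_000_000)   # time stamp of one second, expressed in microseconds
@assert dp.dims == 4           # state ordering is [x, y, dx/dt, dy/dt]
@assert dp.ut == 1_000_000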
The ut parameter stores the microsecond time stamp for that variable node.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"In order to implement your own factor type outside IncrementalInference you should import the required identifiers, as follows:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"using IncrementalInference\nimport IncrementalInference: getSample","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Note that new factor types can be defined at any time, even after you have started to construct the FactorGraph object.","category":"page"},{"location":"principles/interm_dynpose/#DynPoint2VelocityPrior","page":"Creating DynPose Factor","title":"DynPoint2VelocityPrior","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Work in progress.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"mutable struct DynPoint2VelocityPrior{T} <: IncrementalInference.AbstractPrior where {T <: Distribution}\n z::T\n DynPoint2VelocityPrior{T}() where {T <: Distribution} = new{T}()\n DynPoint2VelocityPrior(z1::T) where {T <: Distribution} = new{T}(z1)\nend\ngetSample(dp2v::DynPoint2VelocityPrior, N::Int=1) = (rand(dp2v.z,N), )","category":"page"},{"location":"principles/interm_dynpose/#DynPoint2DynPoint2-(preintegration)","page":"Creating DynPose Factor","title":"DynPoint2DynPoint2 (preintegration)","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"warning: Warning\n::IIF.FactorMetadata is being refactored and improved. Some of the content below is out of date. See IIF #1025 for details. (1Q2021)","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"The basic idea is that change in position is composed of three components (originating from double integration of Newton's second law):","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"(Image: deltapositionplus) ( eq. 1)","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"The DynPoint2DynPoint2 factor uses the above equation to define the difference in position between the two DynPoint2s. The position part stored in the DynPoint2DynPoint2 factor corresponds to (Image: deltaposplusonly). 
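For intuition, a small numeric sketch of the relation in eq. 1, using the same state ordering [x, y, dx/dt, dy/dt]; the values are arbitrary and simply mirror the residual implementation shown further below:

dt = 1.0                         # seconds between the two DynPoint2 variables
xi = [0.0, 0.0, 10.0, 10.0]      # previous state [x, y, dx/dt, dy/dt]
z  = [10.0, 10.0, 0.0, 0.0]      # measurement: [delta position; delta velocity]

# predicted next state: position advances by velocity*dt plus the measured delta,
# velocity advances by the measured velocity delta
xj_pred = [xi[1:2] .+ dt .* xi[3:4] .+ z[1:2]; xi[3:4] .+ z[3:4]]
# xj_pred == [20.0, 20.0, 10.0, 10.0]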
A new multi-variable (so called \"pairwise\") factor between any number of variables is defined with three elements:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Factor type definition that inherits either IncrementalInference.FunctorPairwise or IncrementalInference.FunctorPairwiseMinimize;","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"mutable struct DynPoint2DynPoint2{T} <: IncrementalInference.FunctorPairwise where {T <: Distribution}\n z::T\n DynPoint2DynPoint2{T}() where {T <: Distribution} = new{T}()\n DynPoint2DynPoint2(z1::T) where {T <: Distribution} = new{T}(z1)\nend","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A sampling function with exactly the signature: getSample(dp2dp2::DynPoint2DynPoint2, N::Int=1) and returning a Tuple (legacy reasons);","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"getSample(dp2dp2::DynPoint2DynPoint2, N::Int=1) = (rand(dp2dp2.z,N), )","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A residual or minimization function with exactly the signature described below.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Residual (related to FunctorPairwise) or factor minimization function (related to FunctorPairwiseMinimize) signatures should match this dp2dp2::DynPoint2DynPoint2 example:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"function (dp2dp2::DynPoint2DynPoint2)(\n res::Array{Float64},\n userdata,\n idx::Int,\n meas::Tuple,\n Xs... )::Nothing","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"where Xs can be expanded to the particular number of variable nodes this factor will be associated, and note they are order sensitive at addFactor!(fg, ...) time. The res parameter is a vector of the same dimension defined by the largest of the Xs terms. The userdata value contains the small metadata / userdata portions of information that was introduced to the factor graph at construction time – please consult error(string(fieldnames(userdata))) for details at this time. This is a relatively new feature in the code and likely to be improved. The idx parameter represents a legacy index into the measurement meas[1] and variables Xs to select the desired marginal sample value. Future versions of the code plan to remove the idx parameter entirely. The Xs array of parameter are each of type ::Array{Float64,2} and contain the estimated samples from each of the current best marginal belief estimates of the factor graph variable node. 
","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"function (dp2dp2::DynPoint2DynPoint2)(\n res::Array{Float64},\n userdata,\n idx::Int,\n meas::Tuple,\n Xi::Array{Float64,2},\n Xj::Array{Float64,2} )\n #\n z = meas[1][:,idx]\n xi, xj = Xi[:,idx], Xj[:,idx]\n dt = (userdata.variableuserdata[2].ut - userdata.variableuserdata[1].ut)*1e-6 # roughly the intended use of userdata\n res[1:2] = z[1:2] - (xj[1:2] - (xi[1:2]+dt*xi[3:4]))\n res[3:4] = z[3:4] - (xj[3:4] - xi[3:4])\n nothing\nend","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A brief usage example looks as follows, and further questions about how the preintegration strategy was implemented can be traced through the original issue JuliaRobotics/RoME.jl#60 or the literature associated with this project, or contact for more information.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"using RoME, Distributions\nfg = initfg()\nv0 = addVariable!(fg, :x0, DynPoint2(ut=0))\n\n# Prior factor as boundary condition\npp0 = DynPoint2VelocityPrior(MvNormal([zeros(2);10*ones(2)], 0.1*eye(4)))\nf0 = addFactor!(fg, [:x0;], pp0)\n\n# conditional likelihood between Dynamic Point2\nv1 = addVariable!(fg, :x1, DynPoint2(ut=1000_000)) # time in microseconds\ndp2dp2 = DynPoint2DynPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))\nf1 = addFactor!(fg, [:x0;:x1], dp2dp2)\n\ninitAll!(fg)\ntree = wipeBuildNewTree!(fg)\ninferOverTree!(fg, tree)\n\nusing KernelDensityEstimate\n@show x0 = getKDEMax(getBelief(fg, :x0))\n# julia> ... = [-0.19441, 0.0187019, 10.0082, 10.0901]\n@show x1 = getKDEMax(getBelief(fg, :x1))\n # julia> ... 
= [19.9072, 19.9765, 10.0418, 10.0797]","category":"page"},{"location":"principles/interm_dynpose/#VelPoint2VelPoint2-(back-differentiation)","page":"Creating DynPose Factor","title":"VelPoint2VelPoint2 (back-differentiation)","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"In case the preintegrated approach is not the first choice, we include VelPoint2VelPoint2 <: IncrementalInference.FunctorPairwiseMinimize as a second likelihood factor example which may seem more intuitive:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"mutable struct VelPoint2VelPoint2{T} <: IncrementalInference.FunctorPairwiseMinimize where {T <: Distribution}\n z::T\n VelPoint2VelPoint2{T}() where {T <: Distribution} = new{T}()\n VelPoint2VelPoint2(z1::T) where {T <: Distribution} = new{T}(z1)\nend\ngetSample(vp2vp2::VelPoint2VelPoint2, N::Int=1) = (rand(vp2vp2.z,N), )\nfunction (vp2vp2::VelPoint2VelPoint2)(\n res::Array{Float64},\n userdata,\n idx::Int,\n meas::Tuple,\n Xi::Array{Float64,2},\n Xj::Array{Float64,2} )\n #\n z = meas[1][:,idx]\n xi, xj = Xi[:,idx], Xj[:,idx]\n dt = (userdata.variableuserdata[2].ut - userdata.variableuserdata[1].ut)*1e-6 # roughly the intended use of userdata\n dp = (xj[1:2]-xi[1:2])\n dv = (xj[3:4]-xi[3:4])\n res[1] = 0.0\n res[1] += sum((z[1:2] - dp).^2)\n res[1] += sum((z[3:4] - dv).^2)\n res[1] += sum((dp/dt - xi[3:4]).^2) # (dp/dt - 0.5*(xj[3:4]+xi[3:4])) # midpoint integration\n res[1]\nend","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A similar usage example here shows:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"fg = initfg()\n\n# add three point locations\nv0 = addVariable!(fg, :x0, DynPoint2(ut=0))\nv1 = addVariable!(fg, :x1, DynPoint2(ut=1000_000))\nv2 = addVariable!(fg, :x2, DynPoint2(ut=2000_000))\n\n# Prior factor as boundary condition\npp0 = DynPoint2VelocityPrior(MvNormal([zeros(2);10*ones(2)], 0.1*eye(4)))\nf0 = addFactor!(fg, [:x0;], pp0)\n\n# conditional likelihood between Dynamic Point2\ndp2dp2 = VelPoint2VelPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))\nf1 = addFactor!(fg, [:x0;:x1], dp2dp2)\n\n# conditional likelihood between Dynamic Point2\ndp2dp2 = VelPoint2VelPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))\nf2 = addFactor!(fg, [:x1;:x2], dp2dp2)\n\n# Graphs.plot(fg.g)\ninitAll!(fg)\ntree = wipeBuildNewTree!(fg)\ninferOverTree!(fg, tree)\n\n# see the output\n@show x0 = getKDEMax(getBelief(getVariable(fg, :x0)))\n@show x1 = getKDEMax(getBelief(getVariable(fg, :x1)))\n@show x2 = getKDEMax(getBelief(getVariable(fg, :x2)))","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Producing output:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"x0 = getKDEMax(getBelief(getVariable(fg, :x0))) = [0.101503, -0.0273216, 9.86718, 9.91146]\nx1 = getKDEMax(getBelief(getVariable(fg, :x1))) = [10.0087, 9.95139, 10.0622, 10.0195]\nx2 = getKDEMax(getBelief(getVariable(fg, :x2))) = [19.9381, 19.9791, 10.0056, 
9.92442]","category":"page"},{"location":"principles/interm_dynpose/#IncrementalInference.jl-Defining-Factors-(Future-API)","page":"Creating DynPose Factor","title":"IncrementalInference.jl Defining Factors (Future API)","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"We would like to remove the idx indexing from the residual function calls, since that is an unnecessary burden on the user. Instead, the package will use views and SubArray types to simplify the interface. Please contact the author for more details (8 June 2018).","category":"page"},{"location":"principles/interm_dynpose/#Contributions","page":"Creating DynPose Factor","title":"Contributions","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Thanks to mc2922 for raising the catalyst issue and conversations that followed from JuliaRobotics/RoME.jl#60.","category":"page"}] +[{"location":"concepts/why_nongaussian/#why_nongaussian","page":"Gaussian vs. Non-Gaussian","title":"Why/Where does non-Gaussian data come from?","text":"","category":"section"},{"location":"concepts/why_nongaussian/","page":"Gaussian vs. Non-Gaussian","title":"Gaussian vs. Non-Gaussian","text":"Gaussian error models in measurement or data cues will only be Gaussian (normally distributed) if all physics/decisions/systematic-errors/calibration/etc. have a correct algebraic model in all circumstances. Caesar.jl and MM-iSAMv2 are heavily focused on state estimation from a plethora of heterogeneous data that may not yet have perfect algebraic models. Four major categories of non-Gaussian errors have thus far been considered:","category":"page"},{"location":"concepts/why_nongaussian/","page":"Gaussian vs. Non-Gaussian","title":"Gaussian vs. Non-Gaussian","text":"Uncertain decisions (a.k.a. data association), such as a robot trying to decide if a navigation loop-closure can be deduced from a repeat observation of a similar object or measurement from current and past data. These issues are commonly also referred to as multi-hypothesis.\nUnderdetermined or underdefined systems where there are more variables than constraining measurements to fully define the system as a single mode–-a.k.a. solution ambiguity. For example, in 2D consider two range measurements resulting in two possible locations through trilateration.\nNonlinearity. For example in 2D, consider a Pose2 odometry where the orientation is uncertain: The resulting belief of where a next pose might be (convolution with the odometry factor) results in a banana-shaped curve, even though the entire process is driven by assumed Gaussian belief.\nPhysics of the measurement process. Many measurement processes exhibit non-Gaussian behaviour. For example, acoustic/radio time-of-flight measurements, using either pulse-train or matched filtering, result in an \"energy intensity\" over time/distance of what the range to a scattering-target/source might be–i.e. highly non-Gaussian.","category":"page"},{"location":"concepts/why_nongaussian/#Next-Steps","page":"Gaussian vs. Non-Gaussian","title":"Next Steps","text":"","category":"section"},{"location":"concepts/why_nongaussian/","page":"Gaussian vs. Non-Gaussian","title":"Gaussian vs. Non-Gaussian","text":"Quick links to related pages:","category":"page"},{"location":"concepts/why_nongaussian/","page":"Gaussian vs. Non-Gaussian","title":"Gaussian vs. 
Non-Gaussian","text":"Pages = [\n \"installation_environment.md\"\n \"concepts/concepts.md\"\n \"concepts/building_graphs.md\"\n \"concepts/2d_plotting.md\"\n]\nDepth = 1","category":"page"},{"location":"dev/known_issues/#Known-Issues","page":"Known Issue List","title":"Known Issues","text":"","category":"section"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"This page is used to list known issues:","category":"page"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"Arena.jl is fairly behind on a number of updates and deprecations. Fixes for this are planned 2021Q2.\nRoMEPlotting.jl main features like plotSLAM2D are working, but some of the other features are not fully up to date with recent changes in upstream packages. This too will be updated around Summer 2021.","category":"page"},{"location":"dev/known_issues/#Features-To-Be-Restored","page":"Known Issue List","title":"Features To Be Restored","text":"","category":"section"},{"location":"dev/known_issues/#Install-3D-Visualization-Utils-(e.g.-Arena.jl)","page":"Known Issue List","title":"Install 3D Visualization Utils (e.g. Arena.jl)","text":"","category":"section"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"3D Visualizations are provided by Arena.jl as well as development package Amphitheater.jl. Please follow instructions on the Visualizations page for a variety of 3D utilities.","category":"page"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"note: Note\nArena.jl and Amphitheater.jl are currently being refactored as part of the broader DistributedFactorGraph migration, the features are are in beta stage (1Q2020).","category":"page"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"Install the latest master branch version with","category":"page"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"(v1.5) pkg> add Arena#master","category":"page"},{"location":"dev/known_issues/#Install-\"Just-the-ZMQ/ROS-Runtime-Solver\"-(Linux)","page":"Known Issue List","title":"Install \"Just the ZMQ/ROS Runtime Solver\" (Linux)","text":"","category":"section"},{"location":"dev/known_issues/","page":"Known Issue List","title":"Known Issue List","text":"Work in progress (see issue #278).","category":"page"},{"location":"concepts/compile_binary/#compile_binaries","page":"Compile Binaries","title":"Compile Binaries","text":"","category":"section"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"Broader Julia ecosystem work on compiling shared libraries and images is hosted by PackageCompiler.jl, see documentation there.","category":"page"},{"location":"concepts/compile_binary/#Compiling-RoME.so","page":"Compile Binaries","title":"Compiling RoME.so","text":"","category":"section"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"A default RoME system image script can be used compileRoME/compileRoMESysimage.jl to reduce the \"time-to-first-plot\".","category":"page"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"To use RoME with the newly created sysimage, start julia with:","category":"page"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"julia -O3 -J 
~/.julia/dev/RoME/compileRoME/RoMESysimage.so","category":"page"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"Which should dramatically cut down on the load time of the included package JIT compilation. More packages or functions can be added to the binary, depending on the application. Furthermore, full executable binaries can easily be made with PackageCompiler.jl.","category":"page"},{"location":"concepts/compile_binary/#More-Info","page":"Compile Binaries","title":"More Info","text":"","category":"section"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"note: Note\nAlso see this Julia Binaries Blog. More on discourse.. Also see new brute force sysimg work at Fezzik.jl.","category":"page"},{"location":"concepts/compile_binary/","page":"Compile Binaries","title":"Compile Binaries","text":"note: Note\nContents of a previous blog post this AOT vs JIT compiling blog post has been wrapped into PackageCompiler.jl.","category":"page"},{"location":"examples/examples/#examples_section","page":"Caesar Examples","title":"Examples","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"The following examples demonstrate the conceptual operation of Caesar, highlighting specific features of the framework and its use.","category":"page"},{"location":"examples/examples/#Continuous-Scalar","page":"Caesar Examples","title":"Continuous Scalar","text":"","category":"section"},{"location":"examples/examples/#Calculating-a-Square-Root-(Underdetermined)","page":"Caesar Examples","title":"Calculating a Square Root (Underdetermined)","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Probably the most minimal example that illustrates how factor graphs represent a mathematical framework is a reworking of the classic square root calculation.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"

      ","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"note: Note\nWIP, a combined type-definion and square root script is available as an example script. We're working to present the example without having to define any types.","category":"page"},{"location":"examples/examples/#Continuous-Scalar-with-Mixtures","page":"Caesar Examples","title":"Continuous Scalar with Mixtures","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"This abstract continuous scalar example illustrates how IncrementalInference.jl enables algebraic relations between stochastic variables, and how a final posterior belief estimate is calculated from several pieces of information.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"

      ","category":"page"},{"location":"examples/examples/#Hexagonal-2D","page":"Caesar Examples","title":"Hexagonal 2D","text":"","category":"section"},{"location":"examples/examples/#Batch-Mode","page":"Caesar Examples","title":"Batch Mode","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"A simple 2D hexagonal robot trajectory example is expanded below using techniques developed in simultaneous localization and mapping (SLAM).","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"

","category":"page"},{"location":"examples/examples/#Bayes-Tree-Fixed-Lag-Solving-Hexagonal2D-Revisited","page":"Caesar Examples","title":"Bayes Tree Fixed-Lag Solving - Hexagonal2D Revisited","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"The hexagonal fixed-lag example shows how tree based clique recycling can be achieved. A further example is given in the real-world underwater example below.","category":"page"},{"location":"examples/examples/#An-Underdetermined-Solution-(a.k.a.-SLAM-e-donut)","page":"Caesar Examples","title":"An Underdetermined Solution (a.k.a. SLAM-e-donut)","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"This tutorial describes (unforced multimodality) a range-only system where there are always more variable dimensions than range measurements made, see the Underdetermined Example here. The error distribution over ranges could be nearly anything, but is restricted to Gaussian-only in this example to illustrate an alternative point – other examples show inference results where highly non-Gaussian error distributions are used.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Multi-modal range only example (click here or image for full Vimeo): ","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/#Towards-Real-Time-Underwater-Acoustic-Navigation","page":"Caesar Examples","title":"Towards Real-Time Underwater Acoustic Navigation","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"This example uses \"dead reckon tethering\" (DRT) to perform many of the common robot odometry and high frequency pose update operations. These features are a staple and standard part of the distributed factor graph system.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Click on image (or this link to Vimeo) for a video illustration:","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"AUV","category":"page"},{"location":"examples/examples/#Uncertain-Data-Associations,-(forced-multi-hypothesis)","page":"Caesar Examples","title":"Uncertain Data Associations, (forced multi-hypothesis)","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"This example presents a novel multimodal solution to an otherwise intractable multihypothesis SLAM problem. This work spans the entire Victoria Park dataset, and resolves a solution over roughly 10000 variable dimensions with 2^1700 (yes, to the power 1700) theoretically possible modes. At the time of first solution in 2016, a full batch solution took around 3 hours to compute on a very spartan early implementation.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\n

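The uncertain association in examples like these is expressed through the multihypo keyword on addFactor! (summarized just below). A minimal hedged sketch of the call pattern follows; the variable labels and measurement distributions are chosen purely for illustration, and the bearing-range constructor mirrors the usage shown in the function reference later in this documentation:

using RoME, Distributions

fg = initfg()
addVariable!(fg, :x7, Pose2)
addVariable!(fg, :l1, Point2)
addVariable!(fg, :l2, Point2)

# one bearing-range measurement that may belong to either :l1 or :l2;
# the first multihypo weight is for the pose, the remaining fractions split
# the hypothesis between the two candidate landmarks
meas = Pose2Point2BearingRange(Normal(0.0, 0.1), Normal(10.0, 1.0))
addFactor!(fg, [:x7; :l1; :l2], meas, multihypo=[1.0; 0.5; 0.5])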
      ","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"The fractional multi-hypothesis assignments addFactor!(..., multihypo=[1.0; 0.5;0.5]). Similarly for tri-nary or higher multi-hypotheses.","category":"page"},{"location":"examples/examples/#Probabilistic-Data-Association-(Uncertain-loop-closures)","page":"Caesar Examples","title":"Probabilistic Data Association (Uncertain loop closures)","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Example where the standard multihypothesis addFactor!(.., multihypo=[1.0;0.5;0.5]) interface is used. This is from the Kitti driving dataset. Video here. The data association and multihypothesis section discusses this feature in more detail.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/#Synthetic-Aperture-Sonar-SLAM","page":"Caesar Examples","title":"Synthetic Aperture Sonar SLAM","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"The full functional (approximate sum-product) inference approach can be used to natively imbed single hydrophone acoustic waveform data into highly non-Gaussian SAS factors–that implicitly perform beamforming/micro-location–-for a simultaneous localization and mapping solution (image links to video). See the Raw Correlator Probability (Matched Filter) Section for more details.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/#Marine-Surface-Vehicle-with-ROS","page":"Caesar Examples","title":"Marine Surface Vehicle with ROS","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"New marine surface vehicle code tutorial using ROS.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"note: Note\nSee initial example here, and native ROS support section here.","category":"page"},{"location":"examples/examples/#Simulated-Ambiguous-SONAR-in-3D","page":"Caesar Examples","title":"Simulated Ambiguous SONAR in 3D","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Intersection of ambiguous elevation angle from planar SONAR sensor: ","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Bi-modal belief","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"IMAGE","category":"page"},{"location":"examples/examples/#Multi-session-Indoor-Robot","page":"Caesar Examples","title":"Multi-session Indoor Robot","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Multi-session Turtlebot example of the second floor in the Stata Center:","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"\"Turtlebot","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"See the multisession information 
page for more details, as well as academic work:","category":"page"},{"location":"examples/examples/#More-Examples","page":"Caesar Examples","title":"More Examples","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Please see examples folders for Caesar and RoME for more examples, with expanded documentation in the works.","category":"page"},{"location":"examples/examples/#Adding-Factors-Simple-Factor-Design","page":"Caesar Examples","title":"Adding Factors - Simple Factor Design","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Caesar can be extended with new variables and factors without changing the core code. An example of this design pattern is provided in this example.","category":"page"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Defining New Variables and Factor","category":"page"},{"location":"examples/examples/#Adding-Factors-DynPose-Factor","page":"Caesar Examples","title":"Adding Factors - DynPose Factor","text":"","category":"section"},{"location":"examples/examples/","page":"Caesar Examples","title":"Caesar Examples","text":"Intermediate Example: Adding Dynamic Factors and Variables","category":"page"},{"location":"concepts/using_julia/#Using-Julia","page":"Using Julia","title":"Using Julia","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"While Caesar.jl is accessible from various programming languages, this page describes how to use Julia, existing packages, multi-process and multi-threading features, and more. A wealth of general Julia resources are available in the Internet, see `www.julialang.org for more resources.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"If you are familar with Julia, feel free to skip over to the next page.","category":"page"},{"location":"concepts/using_julia/#Julia-REPL-and-Help","page":"Using Julia","title":"Julia REPL and Help","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Julia's documentation on the REPL can be found here. As a brief example, the REPL in a terminal looks as follows:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"$ julia -O3\n _\n _ _ _(_)_ | Documentation: https://docs.julialang.org\n (_) | (_) (_) |\n _ _ _| |_ __ _ | Type \"?\" for help, \"]?\" for Pkg help.\n | | | | | | |/ _` | |\n | | |_| | | | (_| | | Version 1.6.3 (2021-09-23)\n _/ |\\__'_|_|_|\\__'_| | Official https://julialang.org/ release\n|__/ |\n\njulia> ? 
# upon typing ?, the prompt changes (in place) to: help?>\n\nhelp?> string\nsearch: string String Cstring Cwstring RevString randstring bytestring SubString\n\n string(xs...)\n\n Create a string from any values using the print function.\n ...","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"The -O 3 flag selects level 3 code compilation optimization; it is a useful habit for slightly faster execution, at the cost of slightly slower first-run just-in-time compilation of any new function.","category":"page"},{"location":"concepts/using_julia/#Loading-Packages","page":"Using Julia","title":"Loading Packages","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Assuming you just loaded an empty REPL, or at the start of a script, or working inside the VSCode IDE, the first thing to do is load the necessary Julia packages. Caesar.jl is an umbrella package potentially covering over 100 Julia packages. For this reason the particular parts of the code are broken up amongst more focused, vertical-purpose library packages. Usually for robotics either Caesar or the less expansive RoME will do. Other non-Geometric sensor processing applications might build on the MM-iSAMv2, Bayes tree, and DistributedFactorGraph libraries. Any of these packages can be loaded as follows:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"# umbrella containing most functional packages including RoME\nusing Caesar\n# contains the IncrementalInference and other geometric manifold packages\nusing RoME\n# contains among others DistributedFactorGraphs.jl and ApproxManifoldProducts.jl\nusing IncrementalInference","category":"page"},{"location":"concepts/using_julia/#Optional-Package-Loading","page":"Using Julia","title":"Optional Package Loading","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Many of these packages have additional features that are not included by default. For example, the Flux.jl machine learning package will introduce several additional features when loaded, e.g.:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"julia> using Flux, RoME\n\n[ Info: IncrementalInference is adding Flux related functionality.\n[ Info: RoME is adding Flux related functionality.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"For completeness, so too with packages like Images.jl, RobotOS.jl, and others:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"using Caesar, Images","category":"page"},{"location":"concepts/using_julia/#Running-Unit-Tests-Locally","page":"Using Julia","title":"Running Unit Tests Locally","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Unit tests can further be performed for the upstream packages as follows – NOTE first time runs are slow since each new function call or package must first be precompiled. These tests can take up to an hour and may have occasional stochastic failures in any one of the many tests being run. Thus far we have accepted occasional stochastically driven numerical events–-e.g. a test event might result in 1.03 < 1–-rather than making tests so loose such that actual bugs are missed. 
Strictly speaking, we should repeat tests 10 times over with tighter tolerances, but that would require hundreds or thousands of cloud CI hours a week.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"julia> ] # activate Pkg manager\n\n# the multimodal incremental smoothing and mapping solver\n(v1.6) pkg> test IncrementalInference\n...\n# robotics related variables and factors to work with IncrementalInference -- can be used as a standalone SLAM system\n(v1.6) pkg> test RoME\n...\n# umbrella framework with interaction tools and more -- allows stand-alone and server-based solving\n(v1.6) pkg> test Caesar\n...","category":"page"},{"location":"concepts/using_julia/#Install-Repos-for-Development","page":"Using Julia","title":"Install Repos for Development","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Alternatively, the dev command:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"(v1.6) pkg> dev https://github.com/JuliaRobotics/Caesar.jl\n\n# Or fetch a local fork where you have push access\n# (v1.6) pkg> dev https://github.com/dehann/Caesar.jl","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"warn: Warn\nDevelopment packages are NOT managed by Pkg.jl, so you have to manage this Git repo manually. Development packages can usually be found under ~/.julia/dev/, e.g. ~/.julia/dev/Caesar for Caesar.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"If you'd like to modify or contribute then feel free to fork the specific repo from JuliaRobotics, complete the work on branches in the fork as is normal with a Git workflow and then submit a PR back upstream. We try to keep PRs small, specific to a task and preempt large changes by first merging smaller non-breaking changes and finally do a small switch-over PR. We also follow a backport onto release/vX.Y branch strategy with common main || master branch as the \"lobby\" for shared development into which individual single responsibility PRs are merged. 
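For completeness, the pkg> prompt commands above have functional equivalents in the Pkg API, which can be convenient in scripts or CI (a sketch; the packages must already be available in the active environment):

using Pkg
Pkg.test("RoME")                                               # same as `(v1.6) pkg> test RoME`
Pkg.develop(url="https://github.com/JuliaRobotics/Caesar.jl")  # same as `(v1.6) pkg> dev https://github.com/JuliaRobotics/Caesar.jl`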
Each PR, the main development lobby, and stable release/vX.Y branches are regularly tested through Continuous Integration at each of the repsective packages.","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"note: Note\nBinary compilation and fast \"first-time-to-plot\" can be done through PackageCompiler.jl, see here for more details.","category":"page"},{"location":"concepts/using_julia/#Julia-Command-Examples","page":"Using Julia","title":"Julia Command Examples","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Run Julia in REPL (console) mode:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"$ julia\njulia> println(\"hello world\")\n\"hello world\"","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Maybe a script, or command:","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"user@...$ echo \"println(\\\"hello again\\\")\" > myscript.jl\nuser@...$ julia myscript.jl\nhello again\nuser@...$ rm myscript.jl\n\nuser@...$ julia -e \"println(\\\"one more time.\\\")\"\none more time.\nuser@...$ julia -e \"println(\\\"...testing...\\\")\"\n...testing...","category":"page"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"note: Note\nWhen searching for Julia related help online, use the phrase 'julialang' instead of just 'julia'. For example, search for 'julialang workflow tips' or 'julialang performance tips'. Also, see FAQ - Why are first runs slow?, which is due to Just-In-Time/Pre compiling and caching.","category":"page"},{"location":"concepts/using_julia/#Next-Steps","page":"Using Julia","title":"Next Steps","text":"","category":"section"},{"location":"concepts/using_julia/","page":"Using Julia","title":"Using Julia","text":"Although Caesar is Julia-based, it provides multi-language support with a ZMQ interface. This is discussed in Caesar Multi-Language Support. Caesar.jl also supports various visualizations and plots by using Arena, RoMEPlotting, and Director. 
This is discussed in Visualization with Arena.jl and RoMEPlotting.jl.","category":"page"},{"location":"func_ref/#Additional-Function-Reference","page":"More Functions","title":"Additional Function Reference","text":"","category":"section"},{"location":"func_ref/","page":"More Functions","title":"More Functions","text":"Pages = [\n \"func_ref.md\"\n]\nDepth = 3","category":"page"},{"location":"func_ref/#RoME","page":"More Functions","title":"RoME","text":"","category":"section"},{"location":"func_ref/","page":"More Functions","title":"More Functions","text":"getRangeKDEMax2D\ninitFactorGraph!\naddOdoFG!","category":"page"},{"location":"func_ref/#RoME.getRangeKDEMax2D","page":"More Functions","title":"RoME.getRangeKDEMax2D","text":"getRangeKDEMax2D(fgl, vsym1, vsym2)\n\n\nCalculate the cartesian distance between two vertices in the graph using their symbol name, and by maximum belief point.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#RoME.initFactorGraph!","page":"More Functions","title":"RoME.initFactorGraph!","text":"initFactorGraph!(\n fg;\n P0,\n init,\n N,\n lbl,\n solvable,\n firstPoseType,\n labels\n)\n\n\nInitialize a factor graph object as Pose2, Pose3, or neither and returns variable and factor symbols as array.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#RoME.addOdoFG!","page":"More Functions","title":"RoME.addOdoFG!","text":"addOdoFG!(fg, n, DX, cov; N, solvable, labels)\n\n\nCreate a new variable node and insert odometry constraint factor between which will automatically increment latest pose symbol x for new node new node and constraint factor are returned as a tuple.\n\n\n\n\n\naddOdoFG!(fgl, odo; N, solvable, labels)\n\n\nCreate a new variable node and insert odometry constraint factor between which will automatically increment latest pose symbol x for new node new node and constraint factor are returned as a tuple.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference","page":"More Functions","title":"IncrementalInference","text":"","category":"section"},{"location":"func_ref/","page":"More Functions","title":"More Functions","text":"approxCliqMarginalUp!\nareCliqVariablesAllMarginalized\nattemptTreeSimilarClique\nchildCliqs\ncliqHistFilterTransitions\ncycleInitByVarOrder!\ndoautoinit!\ndrawCliqSubgraphUpMocking\nfifoFreeze!\nfilterHistAllToArray\nfmcmc!\ngetClique\ngetCliqAllVarIds\ngetCliqAssocMat\ngetCliqDepth\ngetCliqDownMsgsAfterDownSolve\ngetCliqFrontalVarIds\ngetCliqVarInitOrderUp\ngetCliqMat\ngetCliqSeparatorVarIds\ngetCliqSiblings\ngetCliqVarIdsPriors\ngetCliqVarSingletons\ngetParent\ngetTreeAllFrontalSyms\nhasClique\nisInitialized\nisMarginalized\nisTreeSolved\nisPartial\nlocalProduct\nmakeCsmMovie\nparentCliq\npredictVariableByFactor\nprintCliqHistorySummary\nresetCliqSolve!\nresetData!\nresetTreeCliquesForUpSolve!\nresetVariable!\nsetfreeze!\nsetValKDE!\nsetVariableInitialized!\nsolveCliqWithStateMachine!\ntransferUpdateSubGraph!\ntreeProductDwn\ntreeProductUp\nunfreezeVariablesAll!\ndontMarginalizeVariablesAll!\nupdateFGBT!\nupGibbsCliqueDensity\nresetVariableAllInitializations!","category":"page"},{"location":"func_ref/#IncrementalInference.approxCliqMarginalUp!","page":"More Functions","title":"IncrementalInference.approxCliqMarginalUp!","text":"approxCliqMarginalUp!(csmc; ...)\napproxCliqMarginalUp!(\n csmc,\n childmsgs;\n N,\n dbg,\n multiproc,\n logger,\n iters,\n drawpdf\n)\n\n\nApproximate Chapman-Kolmogorov transit integral and return separator marginals as messages to pass up the Bayes (Junction) 
tree, along with additional clique operation values for debugging.\n\nNotes\n\nonduplicate=true by default internally uses deepcopy of factor graph and Bayes tree, and does not update the given objects. Set false to update fgl and treel during compute.\n\nFuture\n\nTODO: internal function chain is too long and needs to be refactored for maintainability.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.areCliqVariablesAllMarginalized","page":"More Functions","title":"IncrementalInference.areCliqVariablesAllMarginalized","text":"areCliqVariablesAllMarginalized(subfg, cliq)\n\n\nReturn true if all variables in clique are considered marginalized (and initialized).\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.attemptTreeSimilarClique","page":"More Functions","title":"IncrementalInference.attemptTreeSimilarClique","text":"attemptTreeSimilarClique(othertree, seeksSimilar)\n\n\nSpecial internal function to try return the clique data if succesfully identified in othertree::AbstractBayesTree, based on contents of seeksSimilar::BayesTreeNodeData.\n\nNotes\n\nUsed to identify and skip similar cliques (i.e. recycle computations)\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.childCliqs","page":"More Functions","title":"IncrementalInference.childCliqs","text":"childCliqs(treel, cliq)\n\n\nReturn a vector of child cliques to cliq.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.cliqHistFilterTransitions","page":"More Functions","title":"IncrementalInference.cliqHistFilterTransitions","text":"cliqHistFilterTransitions(hist, nextfnc)\n\n\nReturn state machine transition steps from history such that the nextfnc::Function.\n\nRelated:\n\nprintCliqHistorySummary, filterHistAllToArray, sandboxCliqResolveStep\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.cycleInitByVarOrder!","page":"More Functions","title":"IncrementalInference.cycleInitByVarOrder!","text":"cycleInitByVarOrder!(subfg, varorder; solveKey, logger)\n\n\nCycle through var order and initialize variables as possible in subfg::AbstractDFG. Return true if something was updated.\n\nNotes:\n\nassumed subfg is a subgraph containing only the factors that can be used.\nincluding the required up or down messages\nintended for both up and down initialization operations.\n\nDev Notes\n\nShould monitor updates based on the number of inferred & solvable dimensions\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.doautoinit!","page":"More Functions","title":"IncrementalInference.doautoinit!","text":"doautoinit!(dfg, xi; solveKey, singles, N, logger)\n\n\nEXPERIMENTAL: initialize target variable xi based on connected factors in the factor graph fgl. Possibly called from addFactor!, or doCliqAutoInitUp! 
(?).\n\nNotes:\n\nSpecial carve out for multihypo cases, see issue 427.\n\nDevelopment Notes:\n\nTarget factor is first (singletons) or second (dim 2 pairwise) variable vertex in xi.\nTODO use DFG properly with local operations and DB update at end.\nTODO get faster version of isInitialized for database version.\nTODO: Persist this back if we want to here.\nTODO: init from just partials\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.drawCliqSubgraphUpMocking","page":"More Functions","title":"IncrementalInference.drawCliqSubgraphUpMocking","text":"drawCliqSubgraphUpMocking(\n fgl,\n treel,\n frontalSym;\n show,\n filepath,\n engine,\n viewerapp\n)\n\n\nConstruct (new) subgraph and draw the subgraph associated with clique frontalSym::Symbol.\n\nNotes\n\nSee drawGraphCliq/writeGraphPdf for details on keyword options.\n\nRelated\n\ndrawGraphCliq, spyCliqMat, drawTree, buildCliqSubgraphUp, buildSubgraphFromLabels!\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.fifoFreeze!","page":"More Functions","title":"IncrementalInference.fifoFreeze!","text":"fifoFreeze!(dfg)\n\n\nFreeze nodes that are older than the quasi fixed-lag length defined by fg.qfl, according to fg.fifo ordering.\n\nFuture:\n\nAllow different freezing strategies beyond fifo.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.filterHistAllToArray","page":"More Functions","title":"IncrementalInference.filterHistAllToArray","text":"filterHistAllToArray(tree, hists, frontals, nextfnc)\n\n\nReturn state machine transition steps from all cliq histories with transition nextfnc::Function.\n\nRelated:\n\nprintCliqHistorySummary, cliqHistFilterTransitions, sandboxCliqResolveStep\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.fmcmc!","page":"More Functions","title":"IncrementalInference.fmcmc!","text":"fmcmc!(fgl, cliq, fmsgs, lbls, solveKey, N, MCMCIter)\nfmcmc!(fgl, cliq, fmsgs, lbls, solveKey, N, MCMCIter, dbg)\nfmcmc!(\n fgl,\n cliq,\n fmsgs,\n lbls,\n solveKey,\n N,\n MCMCIter,\n dbg,\n logger\n)\nfmcmc!(\n fgl,\n cliq,\n fmsgs,\n lbls,\n solveKey,\n N,\n MCMCIter,\n dbg,\n logger,\n multithreaded\n)\n\n\nIterate successive approximations of clique marginal beliefs by means of the stipulated proposal convolutions and products of the functional objects for tree clique cliq.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getClique","page":"More Functions","title":"IncrementalInference.getClique","text":"getClique(tree, cId)\n\n\nReturn the TreeClique node object that represents a clique in the Bayes (Junction) tree, as defined by one of the frontal variables frt<:AbstractString.\n\nNotes\n\nFrontal variables only occur once in a clique per tree, therefore is a unique identifier.\n\nRelated:\n\ngetCliq, getTreeAllFrontalSyms\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqAllVarIds","page":"More Functions","title":"IncrementalInference.getCliqAllVarIds","text":"getCliqAllVarIds(cliq)\n\n\nGet all cliq variable ids::Symbol.\n\nRelated\n\ngetCliqVarIdsAll, getCliqFactorIdsAll, getCliqVarsWithFrontalNeighbors\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqAssocMat","page":"More Functions","title":"IncrementalInference.getCliqAssocMat","text":"getCliqAssocMat(cliq)\n\n\nReturn boolean matrix of factor by variable (row by column) associations within clique, corresponds to order presented by getCliqFactorIds 
and getCliqAllVarIds.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqDepth","page":"More Functions","title":"IncrementalInference.getCliqDepth","text":"getCliqDepth(tree, cliq)\n\n\nReturn depth in tree as ::Int, with root as depth=0.\n\nRelated\n\ngetCliq\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqDownMsgsAfterDownSolve","page":"More Functions","title":"IncrementalInference.getCliqDownMsgsAfterDownSolve","text":"getCliqDownMsgsAfterDownSolve(\n subdfg,\n cliq,\n solveKey;\n status,\n sender\n)\n\n\nReturn dictionary of down messages consisting of all frontal and separator beliefs of this clique.\n\nNotes:\n\nFetches numerical results from subdfg as dictated in cliq.\nreturn LikelihoodMessage\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqFrontalVarIds","page":"More Functions","title":"IncrementalInference.getCliqFrontalVarIds","text":"getCliqFrontalVarIds(cliqdata)\n\n\nGet the frontal variable IDs ::Int for a given clique in a Bayes (Junction) tree.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqVarInitOrderUp","page":"More Functions","title":"IncrementalInference.getCliqVarInitOrderUp","text":"getCliqVarInitOrderUp(subfg)\n\n\nReturn the most likely ordering for initializing factor (assuming up solve sequence).\n\nNotes:\n\nsorts id (label) for increasing number of connected factors using the clique subfg with messages already included.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqMat","page":"More Functions","title":"IncrementalInference.getCliqMat","text":"getCliqMat(cliq; showmsg)\n\n\nReturn boolean matrix of factor variable associations for a clique, optionally including (showmsg::Bool=true) the upward message singletons. Variable order corresponds to getCliqAllVarIds.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqSeparatorVarIds","page":"More Functions","title":"IncrementalInference.getCliqSeparatorVarIds","text":"getCliqSeparatorVarIds(cliqdata)\n\n\nGet cliq separator (a.k.a. conditional) variable ids::Symbol.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqSiblings","page":"More Functions","title":"IncrementalInference.getCliqSiblings","text":"getCliqSiblings(treel, cliq)\ngetCliqSiblings(treel, cliq, inclusive)\n\n\nReturn a vector of all siblings to a clique, which defaults to not inclusive the calling cliq.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqVarIdsPriors","page":"More Functions","title":"IncrementalInference.getCliqVarIdsPriors","text":"getCliqVarIdsPriors(cliq)\ngetCliqVarIdsPriors(cliq, allids)\ngetCliqVarIdsPriors(cliq, allids, partials)\n\n\nGet variable ids::Int with prior factors associated with this cliq.\n\nNotes:\n\ndoes not include any singleton messages from upward or downward message passing.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getCliqVarSingletons","page":"More Functions","title":"IncrementalInference.getCliqVarSingletons","text":"getCliqVarSingletons(cliq)\ngetCliqVarSingletons(cliq, allids)\ngetCliqVarSingletons(cliq, allids, partials)\n\n\nGet cliq variable IDs with singleton factors – i.e. 
both in clique priors and up messages.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getParent","page":"More Functions","title":"IncrementalInference.getParent","text":"getParent(treel, afrontal)\n\n\nReturn cliq's parent clique.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.getTreeAllFrontalSyms","page":"More Functions","title":"IncrementalInference.getTreeAllFrontalSyms","text":"getTreeAllFrontalSyms(_, tree)\n\n\nReturn one symbol (a frontal variable) from each clique in the ::BayesTree.\n\nNotes\n\nFrontal variables only occur once in a clique per tree, therefore is a unique identifier.\n\nRelated:\n\nwhichCliq, printCliqHistorySummary\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.hasClique","page":"More Functions","title":"IncrementalInference.hasClique","text":"hasClique(bt, frt)\n\n\nReturn boolean on whether the frontal variable frt::Symbol exists somewhere in the ::BayesTree.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#DistributedFactorGraphs.isInitialized","page":"More Functions","title":"DistributedFactorGraphs.isInitialized","text":"isInitialized(var)\nisInitialized(var, key)\n\n\nReturns state of variable data .initialized flag.\n\nNotes:\n\nused by both factor graph variable and Bayes tree clique logic.\n\n\n\n\n\nisInitialized(cliq)\n\n\nReturns state of Bayes tree clique .initialized flag.\n\nNotes:\n\nused by Bayes tree clique logic.\nsimilar method in DFG\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#DistributedFactorGraphs.isMarginalized","page":"More Functions","title":"DistributedFactorGraphs.isMarginalized","text":"isMarginalized(vert)\nisMarginalized(vert, solveKey)\n\n\nReturn ::Bool on whether this variable has been marginalized.\n\nNotes:\n\nVariableNodeData default solveKey=:default\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.isTreeSolved","page":"More Functions","title":"IncrementalInference.isTreeSolved","text":"isTreeSolved(treel; skipinitialized)\n\n\nReturn true or false depending on whether the tree has been fully initialized/solved/marginalized.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#ApproxManifoldProducts.isPartial","page":"More Functions","title":"ApproxManifoldProducts.isPartial","text":"isPartial(fcf)\n\n\nReturn ::Bool on whether factor is a partial constraint.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.localProduct","page":"More Functions","title":"IncrementalInference.localProduct","text":"localProduct(dfg, sym; solveKey, N, dbg, logger)\n\n\nUsing factor graph object dfg, project belief through connected factors (convolution with likelihood) to variable sym followed by a approximate functional product.\n\nReturn: product belief, full proposals, partial dimension proposals, labels\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.makeCsmMovie","page":"More Functions","title":"IncrementalInference.makeCsmMovie","text":"makeCsmMovie(fg, tree; ...)\nmakeCsmMovie(\n fg,\n tree,\n cliqs;\n assignhist,\n show,\n filename,\n frames\n)\n\n\nConvenience function to assign and make video of CSM state machine for cliqs.\n\nNotes\n\nProbably several teething issues still (lower priority).\nUse assignhist if solver params async was true, or errored.\n\nRelated\n\ncsmAnimate, printCliqHistorySummary\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.parentCliq","page":"More 
Functions","title":"IncrementalInference.parentCliq","text":"parentCliq(treel, cliq)\n\n\nReturn cliq's parent clique.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#RoME.predictVariableByFactor","page":"More Functions","title":"RoME.predictVariableByFactor","text":"predictVariableByFactor(dfg, targetsym, fct, prevars)\n\n\nMethod to compare current and predicted estimate on a variable, developed for testing a new factor before adding to the factor graph.\n\nNotes\n\nfct does not have to be in the factor graph – likely used to test beforehand.\nfunction is useful for detecting if multihypo should be used.\napproxConv will project the full belief estimate through some factor but must already be in factor graph.\n\nExample\n\n# fg already exists containing :x7 and :l3\npp = Pose2Point2BearingRange(Normal(0,0.1),Normal(10,1.0))\n# possible new measurement from :x7 to :l3\ncurr, pred = predictVariableByFactor(fg, :l3, pp, [:x7; :l3])\n# example of naive user defined test on fit score\nfitscore = minkld(curr, pred)\n# `multihypo` can be used as option between existing or new variables\n\nRelated\n\napproxConv\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.printCliqHistorySummary","page":"More Functions","title":"IncrementalInference.printCliqHistorySummary","text":"printCliqHistorySummary(fid, hist)\nprintCliqHistorySummary(fid, hist, cliqid)\n\n\nPrint a short summary of state machine history for a clique solve.\n\nRelated:\n\ngetTreeAllFrontalSyms, animateCliqStateMachines, printHistoryLine, printCliqHistorySequential\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetCliqSolve!","page":"More Functions","title":"IncrementalInference.resetCliqSolve!","text":"resetCliqSolve!(dfg, treel, cliq; solveKey)\n\n\nReset the state of all variables in a clique to not initialized.\n\nNotes\n\nresets numberical values to zeros.\n\nDev Notes\n\nTODO not all kde manifolds will initialize to zero.\nFIXME channels need to be consolidated\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetData!","page":"More Functions","title":"IncrementalInference.resetData!","text":"resetData!(vdata)\n\n\nPartial reset of basic data fields in ::VariableNodeData of ::FunctionNode structures.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetTreeCliquesForUpSolve!","page":"More Functions","title":"IncrementalInference.resetTreeCliquesForUpSolve!","text":"resetTreeCliquesForUpSolve!(treel)\n\n\nReset the Bayes (Junction) tree so that a new upsolve can be performed.\n\nNotes\n\nWill change previous clique status from DOWNSOLVED to INITIALIZED only.\nSets the color of tree clique to lightgreen.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetVariable!","page":"More Functions","title":"IncrementalInference.resetVariable!","text":"resetVariable!(varid; solveKey)\n\n\nReset the solve state of a variable to uninitialized/unsolved state.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.setfreeze!","page":"More Functions","title":"IncrementalInference.setfreeze!","text":"setfreeze!(dfg, sym)\n\n\nSet variable(s) sym of factor graph to be marginalized – i.e. 
not be updated by inference computation.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.setValKDE!","page":"More Functions","title":"IncrementalInference.setValKDE!","text":"setValKDE!(vd, pts, bws)\nsetValKDE!(vd, pts, bws, setinit)\nsetValKDE!(vd, pts, bws, setinit, ipc)\n\n\nSet the point centers and bandwidth parameters of a variable node, also set isInitialized=true if setinit::Bool=true (as per default).\n\nNotes\n\ninitialized is used for initial solve of factor graph where variables are not yet initialized.\ninferdim is used to identify if the initialized was only partial.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.setVariableInitialized!","page":"More Functions","title":"IncrementalInference.setVariableInitialized!","text":"setVariableInitialized!(varid, status)\n\n\nSet variable initialized status.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.solveCliqWithStateMachine!","page":"More Functions","title":"IncrementalInference.solveCliqWithStateMachine!","text":"solveCliqWithStateMachine!(\n dfg,\n tree,\n frontal;\n iters,\n downsolve,\n recordhistory,\n verbose,\n nextfnc,\n prevcsmc\n)\n\n\nStandalone state machine solution for a single clique.\n\nRelated:\n\ninitInferTreeUp!\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.transferUpdateSubGraph!","page":"More Functions","title":"IncrementalInference.transferUpdateSubGraph!","text":"transferUpdateSubGraph!(dest, src; ...)\ntransferUpdateSubGraph!(dest, src, syms; ...)\ntransferUpdateSubGraph!(\n dest,\n src,\n syms,\n logger;\n updatePPE,\n solveKey\n)\n\n\nTransfer contents of src::AbstractDFG variables syms::Vector{Symbol} to dest::AbstractDFG. Notes\n\nReads, dest := src, for all syms\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.treeProductDwn","page":"More Functions","title":"IncrementalInference.treeProductDwn","text":"treeProductDwn(fg, tree, cliq, sym; N, dbg)\n\n\nCalculate a fresh–-single step–-approximation to the variable sym in clique cliq as though during the downward message passing. The full inference algorithm may repeatedly calculate successive apprimxations to the variable based on the structure of variables, factors, and incoming messages to this clique. Which clique to be used is defined by frontal variable symbols (cliq in this case) – see getClique(...) for more details. The sym symbol indicates which symbol of this clique to be calculated. Note that the sym variable must appear in the clique where cliq is a frontal variable.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.treeProductUp","page":"More Functions","title":"IncrementalInference.treeProductUp","text":"treeProductUp(fg, tree, cliq, sym; N, dbg)\n\n\nCalculate a fresh (single step) approximation to the variable sym in clique cliq as though during the upward message passing. The full inference algorithm may repeatedly calculate successive apprimxations to the variables based on the structure of the clique, factors, and incoming messages. Which clique to be used is defined by frontal variable symbols (cliq in this case) – see getClique(...) for more details. The sym symbol indicates which symbol of this clique to be calculated. 
Note that the sym variable must appear in the clique where cliq is a frontal variable.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.unfreezeVariablesAll!","page":"More Functions","title":"IncrementalInference.unfreezeVariablesAll!","text":"unfreezeVariablesAll!(fgl)\n\n\nFree all variables from marginalization.\n\nRelated\n\ndontMarginalizeVariablesAll!\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.dontMarginalizeVariablesAll!","page":"More Functions","title":"IncrementalInference.dontMarginalizeVariablesAll!","text":"dontMarginalizeVariablesAll!(fgl)\n\n\nFree all variables from marginalization.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.updateFGBT!","page":"More Functions","title":"IncrementalInference.updateFGBT!","text":"updateFGBT!(fg, cliq, IDvals; dbg, fillcolor, logger)\n\n\nUpdate cliq cliqID in Bayes (Juction) tree bt according to contents of urt. Intended use is to update main clique after a upward belief propagation computation has been completed per clique.\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.upGibbsCliqueDensity","page":"More Functions","title":"IncrementalInference.upGibbsCliqueDensity","text":"upGibbsCliqueDensity(dfg, cliq, solveKey, inmsgs)\nupGibbsCliqueDensity(dfg, cliq, solveKey, inmsgs, N)\nupGibbsCliqueDensity(dfg, cliq, solveKey, inmsgs, N, dbg)\nupGibbsCliqueDensity(\n dfg,\n cliq,\n solveKey,\n inmsgs,\n N,\n dbg,\n iters\n)\nupGibbsCliqueDensity(\n dfg,\n cliq,\n solveKey,\n inmsgs,\n N,\n dbg,\n iters,\n logger\n)\n\n\nPerform computations required for the upward message passing during belief propation on the Bayes (Junction) tree. This function is usually called as via remote_call for multiprocess dispatch.\n\nNotes\n\nfg factor graph,\ntree Bayes tree,\ncliq which cliq to perform the computation on,\nparent the parent clique to where the upward message will be sent,\nchildmsgs is for any incoming messages from child cliques.\n\nDevNotes\n\nFIXME total rewrite with AMP #41 and RoME #244 in mind\n\n\n\n\n\n","category":"function"},{"location":"func_ref/#IncrementalInference.resetVariableAllInitializations!","page":"More Functions","title":"IncrementalInference.resetVariableAllInitializations!","text":"resetVariableAllInitializations!(fgl)\n\n\nReset initialization flag on all variables in ::AbstractDFG.\n\nNotes\n\nNumerical values remain, but inference will overwrite since init flags are now false.\n\n\n\n\n\n","category":"function"},{"location":"examples/parametric_solve/#Parametric-Solve-(Experimental)","page":"[DEV] Parametric Solve","title":"Parametric Solve (Experimental)","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"Note that parametric solve (i.e. conventional Gaussians) is currently supported as an experimental feature which might appear more buggy. Familiar parametric methods should become fully integrated and we invite comments or contributions from the community. 
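To make the workflow concrete before the function reference below, here is a minimal usage sketch of a batch parametric solve on a toy Gaussian-only graph. The two-variable graph, the choice of LinearRelative, and the exact call pattern (solving nonparametrically first, then seeding and running the parametric solver) are illustrative assumptions only; keyword options differ between releases.

    using IncrementalInference

    # toy graph with Gaussian-only factors (illustrative)
    fg = initfg()
    addVariable!(fg, :x0, ContinuousScalar)
    addVariable!(fg, :x1, ContinuousScalar)
    addFactor!(fg, [:x0], Prior(Normal(0.0, 1.0)))
    addFactor!(fg, [:x0; :x1], LinearRelative(Normal(10.0, 1.0)))

    # nonparametric solve first, then seed the parametric solveKey from it and solve
    solveTree!(fg)
    initParametricFrom!(fg)
    IncrementalInference.solveGraphParametric!(fg)
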
A great deal of effort has gone into finding the best abstractions to support multiple factor graph solving strategies.","category":"page"},{"location":"examples/parametric_solve/#Batch-Parametric","page":"[DEV] Parametric Solve","title":"Batch Parametric","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"solveGraphParametric\nIncrementalInference.solveGraphParametric!","category":"page"},{"location":"examples/parametric_solve/#IncrementalInference.solveGraphParametric","page":"[DEV] Parametric Solve","title":"IncrementalInference.solveGraphParametric","text":"solveGraphParametric(args; kwargs...)\n\n\nBatch parametric graph solve using Riemannian Levenberg Marquardt.\n\n\n\n\n\n","category":"function"},{"location":"examples/parametric_solve/#DistributedFactorGraphs.solveGraphParametric!","page":"[DEV] Parametric Solve","title":"DistributedFactorGraphs.solveGraphParametric!","text":"Standard parametric graph solution (Experimental).\n\n\n\n\n\n","category":"function"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"Initializing the parametric solve from existing values can be done with the help of:","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"initParametricFrom!","category":"page"},{"location":"examples/parametric_solve/#IncrementalInference.initParametricFrom!","page":"[DEV] Parametric Solve","title":"IncrementalInference.initParametricFrom!","text":"initParametricFrom!(fg; ...)\ninitParametricFrom!(fg, fromkey; parkey, onepoint, force)\n\n\nInitialize the parametric solver data from a different solution in fromkey.\n\nDevNotes\n\nTODO, keyword force not wired up yet.\n\n\n\n\n\n","category":"function"},{"location":"examples/parametric_solve/#parametric_factors","page":"[DEV] Parametric Solve","title":"Defining Factors to Support a Parametric Solution (Experimental)","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"Factor that supports a parametric solution, with supported distributions (such as Normal and MvNormal), can be used in a parametric batch solver solveGraphParametric. ","category":"page"},{"location":"examples/parametric_solve/#getMeasurementParametric","page":"[DEV] Parametric Solve","title":"getMeasurementParametric","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"Parameteric calculations require the mean and covariance from Gaussian measurement functions (factors) using the getMeasurementParametric getMeasurementParametric defaults to looking for a supported distribution in field .Z followed by .z. Therefore, if the factor uses this fieldname, getMeasurementParametric does not need to be extended. 
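In other words, for a factor that stores a supported distribution in a field named Z, the default lookup already supplies the parametric measurement. A hedged sketch follows; the ScalarOffset factor name is hypothetical and introduced only for illustration:

    using IncrementalInference
    import IncrementalInference as IIF

    # hypothetical factor type; the Gaussian lives in the conventional field name Z
    struct ScalarOffset{T <: IIF.SamplableBelief} <: IIF.AbstractManifoldMinimize
      Z::T
    end

    f = ScalarOffset(Normal(10.0, 2.0))

    # no extension needed: the default getMeasurementParametric finds f.Z and
    # returns its mean together with the information (inverse covariance)
    μ, iΣ = IIF.getMeasurementParametric(f)
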
You can extend by simply implementing, for example, your own IncrementalInference.getMeasurementParametric(f::OtherFactor) = m.density.","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"For this example, the Z field will automatically be detected used by default for MyFactor from above.","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"struct MyFactor{T <: SamplableBelief} <: IIF.AbstractRelativeRoots\n Z::T\nend","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"An example of where implementing getMeasurementParametric is needed can be found in the RoME factor Pose2Point2BearingRange","category":"page"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"import getMeasurementParametric\nfunction getMeasurementParametric(s::Pose2Point2BearingRange{<:Normal, <:Normal})\n\n meas = [mean(s.bearing), mean(s.range)]\n iΣ = [1/var(s.bearing) 0;\n 0 1/var(s.range)]\n\n return meas, iΣ\nend","category":"page"},{"location":"examples/parametric_solve/#The-Factor","page":"[DEV] Parametric Solve","title":"The Factor","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"The factor is evaluated in a cost function using the Mahalanobis distance and the measurement should therefore match the residual returned. ","category":"page"},{"location":"examples/parametric_solve/#Optimization","page":"[DEV] Parametric Solve","title":"Optimization","text":"","category":"section"},{"location":"examples/parametric_solve/","page":"[DEV] Parametric Solve","title":"[DEV] Parametric Solve","text":"IncrementalInference.solveGraphParametric! uses Optim.jl. The factors that are supported should have a gradient and Hessian available/exists and therefore it makes use of TwiceDifferentiable. Full control of Optim's setup is possible with keyword arguments. ","category":"page"},{"location":"caesar_framework/#The-Caesar-Framework","page":"Pkg Framework","title":"The Caesar Framework","text":"","category":"section"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"The Caesar.jl package is an \"umbrella\" framework around other dedicated algorithmic packages. While most of the packages are implemented in native Julia (JuliaPro), a few dependencies are wrapped C libraries. Note that C/C++ can be incorporated with zero overhead, such as was done with AprilTags.jl.","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"FAQ: Why use Julia?","category":"page"},{"location":"caesar_framework/#AMP-/-IIF-/-RoME","page":"Pkg Framework","title":"AMP / IIF / RoME","text":"","category":"section"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"Robot motion estimate (RoME.jl) can operate in the conventional SLAM manner, using local memory (dictionaries), or alternatively distribute over a persisted DistributedFactorGraph.jl through common serialization and graph storage/database technologies, see this article as example [1.3]. A variety of 2D plotting, 3D visualization, serialization, middleware, and analysis tools come standard as provided by the associated packages. 
RoME.jl combines reference frame transformations and robotics SLAM tool around the back-end solver provides by IncrementalInference.jl.","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"Details about the accompanying packages:","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"IncrementalInference.jl supplies the algebraic logic for factor graph inference with Bayes tree and depends on several packages itself.\nRoME.jl introduces nodes and factors that are useful to robotic navigation.\nApproxManifoldProducts.jl provides on-manifold belief product operations.","category":"page"},{"location":"caesar_framework/#Visualization-(Arena.jl/RoMEPlotting.jl)","page":"Pkg Framework","title":"Visualization (Arena.jl/RoMEPlotting.jl)","text":"","category":"section"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"Caesar visualization (plotting of results, graphs, and data) is provided by 2D and 3D packages respectively:","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"RoMEPlotting.jl are a set of scripts that provide MATLAB style plotting of factor graph beliefs, mostly supporting 2D visualization with some support for projections of 3D;\nArena.jl package, which is a collection of 3D visualization tools.","category":"page"},{"location":"caesar_framework/#Multilanguage-Interops:-NavAbility.io-SDKs-and-APIs","page":"Pkg Framework","title":"Multilanguage Interops: NavAbility.io SDKs and APIs","text":"","category":"section"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"The Caesar framework is not limited to direct Julia use. Check out www.NavAbility.io, or contact directly at (info@navabiliyt.io), for more details. Also see the community multi-language page for details.","category":"page"},{"location":"caesar_framework/","page":"Pkg Framework","title":"Pkg Framework","text":"note: Note\nFAQ: Interop with other languages (not limited to Julia only)","category":"page"},{"location":"concepts/flux_factors/#Incorporating-Neural-Network-Factors","page":"Flux (NN) Factors","title":"Incorporating Neural Network Factors","text":"","category":"section"},{"location":"concepts/flux_factors/","page":"Flux (NN) Factors","title":"Flux (NN) Factors","text":"IncrementalInference.jl and RoME.jl has native support for using Neural Networks (via Flux.jl) as non-Gaussian factors. Documentation is forthcoming, but meanwhile see the following generic Flux.jl factor structure. Note also that a standard Mixture approach already exists too.","category":"page"},{"location":"examples/legacy_deffactors/#Relative-Factors-(Legacy)","page":"Legacy Factors","title":"Relative Factors (Legacy)","text":"","category":"section"},{"location":"examples/legacy_deffactors/#One-Dimension-Roots-Example","page":"Legacy Factors","title":"One Dimension Roots Example","text":"","category":"section"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"Previously we looked at adding a prior. This section demonstrates the first of two <:AbstractRelative factor types. These are factors that introduce only relative information between variables in the factor graph.","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"This example is on <:IIF.AbstractRelativeRoots. 
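For contrast with the prior factor covered in that earlier section, a rough sketch of what such a one dimensional prior definition looks like (the MyPrior name is illustrative; the earlier section's actual definition may differ slightly):

    struct MyPrior{T <: SamplableBelief} <: IIF.AbstractPrior
      Z::T
    end
    getSample(cfo::CalcFactor{<:MyPrior}, N::Int=1) = (reshape(rand(cfo.factor.Z, N), 1, N), )
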
First, lets create the factor as before ","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"struct MyFactor{T <: SamplableBelief} <: IIF.AbstractRelativeRoots\n Z::T\nend\ngetSample(cfo::CalcFactor{<:MyFactor}, N::Int=1) = (reshape(rand(cfo.factor.Z,N) ,1,N), )\n\nfunction (cfo::CalcFactor{<:MyFactor})( measurement_z,\n x1,\n x2 )\n #\n res = measurement_z - (x2[1] - x1[1])\n return res\nend","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"The selection of <:IIF.AbstractRelativeRoots, akin to earlier <:AbstractPrior, instructs IIF to find the roots of the provided residual function. That is the one dimensional residual function, res[1] = measurement - prediction, is used during inference to approximate the convolution of conditional beliefs from the approximate beliefs of the connected variables in the factor graph.","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"Important aspects to note, <:IIF.AbstractRelativeRoots requires all elements length(res) (the factor measurement dimension) to have a feasible zero crossing solution. A two dimensional system will solve for variables where both res[1]==0 and res[2]==0.","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"note: Note\nAs of IncrementalInference v0.21, CalcResidual no longer takes a residual as input parameter and should return residual, see IIF#467.","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"note: Note\nMeasurements and variables passed in to the factor residual function do not have the same type as when constructing the factor graph. It is recommended to leave these incoming types unrestricted. If you must define the types, these either are (or will be) of element type relating to the manifold on which the measurement or variable beliefs reside. Probably a vector or manifolds type. Usage can be very case specific, and hence better to let Julia type-inference automation do the hard work for you. The ","category":"page"},{"location":"examples/legacy_deffactors/#Two-Dimension-Minimize-Example","page":"Legacy Factors","title":"Two Dimension Minimize Example","text":"","category":"section"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"The second type is <:IIF.AbstractRelativeMinimize which simply minimizes the residual vector of the user factor. 
This type is useful for partial constraint situations where the residual function is not guaranteed to have zero crossings in all dimensions and the problem is converted into a minimization problem instead:","category":"page"},{"location":"examples/legacy_deffactors/","page":"Legacy Factors","title":"Legacy Factors","text":"struct OtherFactor{T <: SamplableBelief} <: IIF.AbstractRelativeMinimize\n Z::T # assuming something 2 dimensional\n userdata::String # or whatever is necessary\nend\n\n# just illustrating some arbitrary second value in tuple of different size\ngetSample(cfo::CalcFactor{<:OtherFactor}, N::Int=1) = (rand(cfo.factor.Z,N), rand())\n\nfunction (cfo::CalcFactor{<:OtherFactor})(res::AbstractVector{<:Real},\n z,\n second_val,\n x1,\n x2 )\n #\n # @assert length(z) == 2\n # not doing anything with `second_val` but illustrating\n # not doing anything with `cfo.factor.userdata` either\n \n # the broadcast operators will automatically vectorize\n res = z .- (x2[1:2] .- x1[1:2])\n return res\nend","category":"page"},{"location":"principles/multiplyingDensities/#Principle:-Multiplying-Functions-(Python)","page":"Multiplying Functions (.py)","title":"Principle: Multiplying Functions (Python)","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"This example illustrates a central concept in Caesar.jl (and the multimodal-iSAM algorithm), whereby different probability belief functions are multiplied together. The true product between various likelihood beliefs is very complicated to compute, but good approximations exist. In addition, ZmqCaesar offers a ZMQ interface to the factor graph solution for multilanguage support. This example is a small subset that shows how to use the ZMQ infrastructure, but avoids the larger factor graph related calls.","category":"page"},{"location":"principles/multiplyingDensities/#Products-of-Infinite-Objects-(Functionals)","page":"Multiplying Functions (.py)","title":"Products of Infinite Objects (Functionals)","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Consider multiplying multiple belief density functions together, for example","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"f = f_1 times f_2 times f_3","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"which is a core operation required for solving the Chapman-Kolmogorov transit equations.","category":"page"},{"location":"principles/multiplyingDensities/#Direct-Julia-Calculation","page":"Multiplying Functions (.py)","title":"Direct Julia Calculation","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"The ApproxManifoldProducts.jl package (experimental) is meant to unify many on-manifold product operations, and can be called directly in Julia:","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"using ApproxManifoldProducts\n\nf1 = manikde!(ContinuousScalar, [randn()-3.0 for _ in 1:100])\nf2 = manikde!(ContinuousScalar, [randn()+3.0 for _ in 1:100])\n...\n\nf12 = 
manifoldProduct(ContinuousScalar, [f1;f2])","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Also see previous KernelDensityEstimate.jl.","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"To make Caesar.jl usable from other languages, a ZMQ server interface model has been developed which can also be used to test this principle functional product operation.","category":"page"},{"location":"principles/multiplyingDensities/#Not-Susceptible-to-Particle-Depletion","page":"Multiplying Functions (.py)","title":"Not Susceptible to Particle Depletion","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"The product process of say f1*f2 is not a importance sampling procedure that is commonly used in particle filtering, but instead a more advanced Bayesian inference process based on a wide variety of academic literature. The KernelDensityEstimate method is a stochastic method, what active research is looking into deterministic homotopy/continuation methods.","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"The easy example that demonstrates that particle depletion is avoided here, is where f1 and f2 are represented by well separated and evenly weighted samples – the Bayesian inference 'product' technique efficiently produces new (evenly weighted) samples for f12 somewhere in between f1 and f2, but clearly not overlapping the original population of samples used for f1 and f2. In contrast, conventional particle filtering measurement updates would have \"de-weighted\" particles of either input function and then be rejected during an eventual resampling step, thereby depleting the sample population.","category":"page"},{"location":"principles/multiplyingDensities/#Starting-the-ZMQ-server","page":"Multiplying Functions (.py)","title":"Starting the ZMQ server","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Caesar.jl provides a startup script for a default ZMQ instance. Start a server and allow precompilations to finish, as indicated by a printout message \"waiting to receive...\". 
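Concretely, the launch follows the same scripted startup shown later in the multi-language section; in short (the port, process count, and optimization flags below are simply the defaults used there):

    # start Julia with worker processes and full optimization:  julia -p4 -O3
    using Caesar, Caesar.ZmqCaesar

    # empty factor graph plus server config, served over ZMQ
    fg = initfg()
    config = Dict{String, String}()
    zmqConfig = ZmqServer(fg, config, true, "tcp://*:5555")

    # blocks this session and prints "waiting to receive..." once ready
    start(zmqConfig)
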
More details here.","category":"page"},{"location":"principles/multiplyingDensities/#Functional-Products-via-Python","page":"Multiplying Functions (.py)","title":"Functional Products via Python","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Clone the Python GraffSDK.py code here and look at the product.py file.","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"import sys\nsys.path.append('..')\n\nimport numpy as np\nfrom graff.Endpoint import Endpoint\nfrom graff.Distribution.Normal import Normal\nfrom graff.Distribution.SampleWeights import SampleWeights\nfrom graff.Distribution.BallTreeDensity import BallTreeDensity\n\nfrom graff.Core import MultiplyDistributions\n\nimport matplotlib.pyplot as plt\n\nif __name__ == '__main__':\n e = Endpoint()\n\n e.Connect('tcp://192.168.0.102:5555')\n print(e.Status())\n\n N = 1000\n u1 = 0.0\n s1 = 10.0\n x1 = u1+s1*np.random.randn(N)\n\n u2 = 50.0\n s2 = 10.0\n x2 = u2+s2*np.random.randn(N)\n b1 = BallTreeDensity('Gaussian', np.ones(N), np.ones(N), x1)\n b2 = BallTreeDensity('Gaussian', np.ones(N), np.ones(N), x2)\n\n rep = MultiplyDistributions(e, [b1,b2])\n print(rep)\n x = np.array(rep['points'] )\n # plt.stem(x, np.ones(len(x)) )\n plt.hist(x, bins = int(len(x)/10.0), color= 'm')\n plt.hist(x1, bins = int(len(x)/10.0),color='r')\n plt.hist(x2, bins = int(len(x)/10.0),color='b')\n plt.show()\n\n e.Disconnect()","category":"page"},{"location":"principles/multiplyingDensities/#A-Basic-Factor-Graph-Product-Illustration","page":"Multiplying Functions (.py)","title":"A Basic Factor Graph Product Illustration","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Using the factor graph methodology, we can repeat the example by adding variable and two prior factors. This can be done directly in Julia (or via ZMQ in the further Python example below)","category":"page"},{"location":"principles/multiplyingDensities/#Products-of-Functions-(Factor-Graphs-in-Julia)","page":"Multiplying Functions (.py)","title":"Products of Functions (Factor Graphs in Julia)","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Directly in Julia:","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"using IncrementalInference\n\nfg = initfg()\n\naddVariable!(fg, :x0, ContinuousScalar)\naddFactor!(fg, [:x0], Prior(Normal(-3.0,1.0)))\naddFactor!(fg, [:x0], Prior(Normal(+3.0,1.0)))\n\nsolveTree!(fg)\n\n# plot the results\nusing KernelDensityEstimatePlotting\n\nplotKDE(getBelief(fg, :x0))","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"Example figure:","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"

","category":"page"},{"location":"principles/multiplyingDensities/#Products-of-Functions-(Via-Python-and-ZmqCaesar)","page":"Multiplying Functions (.py)","title":"Products of Functions (Via Python and ZmqCaesar)","text":"","category":"section"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"We repeat the example using Python and the ZMQ interface:","category":"page"},{"location":"principles/multiplyingDensities/","page":"Multiplying Functions (.py)","title":"Multiplying Functions (.py)","text":"import sys\nsys.path.append('..')\n\nimport numpy as np\nfrom graff.Endpoint import Endpoint\nfrom graff.Distribution.Normal import Normal\nfrom graff.Distribution.SampleWeights import SampleWeights\nfrom graff.Distribution.BallTreeDensity import BallTreeDensity\n\nfrom graff.Core import MultiplyDistributions\n\n\nif __name__ == '__main__':\n \"\"\"\n\n \"\"\"\n # create and connect the endpoint (Variable and Factor are assumed imported from the graff package)\n e = Endpoint()\n\n e.Connect('tcp://127.0.0.1:5555')\n print(e.Status())\n\n # Add the first pose x0\n x0 = Variable('x0', 'ContinuousScalar')\n e.AddVariable(x0)\n\n # Add Prior factors at a fixed location to pin x0 to a starting location\n prior = Factor('Prior', ['x0'], Normal(np.zeros((1,1))-3.0, np.eye(1)) )\n e.AddFactor(prior)\n prior = Factor('Prior', ['x0'], Normal(np.zeros((1,1))+3.0, np.eye(1)) )\n e.AddFactor(prior)","category":"page"},{"location":"examples/custom_relative_factors/#custom_relative_factor","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Required Brief description\nMyFactor struct Prior (<:AbstractPrior) or Relative (<:AbstractManifoldMinimize) factor definition\ngetManifold The manifold of the factor\n(cfo::CalcFactor{<:MyFactor}) Factor residual function\nOptional methods Brief description\ngetSample(cfo::CalcFactor{<:MyFactor}) Get a sample from the measurement model","category":"page"},{"location":"examples/custom_relative_factors/#Define-the-relative-struct","page":"Custom Relative Factor","title":"Define the relative struct","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Previously we looked at making a Custom Prior Factor. This section describes how to build relative factors. Relative factors introduce relative-only information between variables in the factor graph, and do not add any absolute information. For example, a rigid transform between two variables is a relative relationship, regardless of their common absolute position in the world.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Let's look at either the EuclidDistance or Pose2Pose2 factors as simple examples. First, create the uniquely named factor struct:","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"struct EuclidDistance{T <: IIF.SamplableBelief} <: IIF.AbstractManifoldMinimize\n Z::T\nend","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"New relative factors should either inherit from <:AbstractManifoldMinimize, <:AbstractRelativeMinimize, or <:AbstractRelativeRoots. These are all subtypes of <:AbstractRelative. 
There are only two abstract super types, <:AbstractPrior and <:AbstractRelative.","category":"page"},{"location":"examples/custom_relative_factors/#Summary-of-Sampling-Data-Representation","page":"Custom Relative Factor","title":"Summary of Sampling Data Representation","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Usage <:AbstractPrior <:AbstractRelative\ngetSample point p on Manifold tangent X at some p (e.g. identity)","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Usage \nsampleTangent tangent at point p or the identity element for groups\nrand / sample coordinates","category":"page"},{"location":"examples/custom_relative_factors/#Specialized-Dispatch-(getManifold,-getSample)","page":"Custom Relative Factor","title":"Specialized Dispatch (getManifold, getSample)","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Relative factors involve computations that must be performed on some manifold. Custom relative factors require that the getManifold function be overridden. Here two examples are given for reference:","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"# import override/specialize the multiple dispatch\nimport DistributedFactorGraphs: getManifold\n\n# two examples of existing functions in the standard libraries\nDFG.getManifold(::EuclidDistance) = Manifolds.TranslationGroup(1)\nDFG.getManifold(::Pose2Pose2) = Manifolds.SpecialEuclidean(2)","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Extending the getSample method for our EuclidDistance factor example is not required, since the default dispatch using field .Z <: SamplableBelief will already be able to sample the measurement – see Specialized getSample.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"One important note is that getSample for <:AbstractRelative factors should return measurement values as manifold tangent vectors – for computational efficiency reasons.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"If more advanced sampling is required, extend the getSample function. 
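For reference, the Pose2Pose2 factor used in the following getSample example is built from the same struct recipe as EuclidDistance above; a sketch only, see RoME.jl for the canonical definition:

    struct Pose2Pose2{T <: IIF.SamplableBelief} <: IIF.AbstractManifoldMinimize
      Z::T
    end
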
","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"function getSample(cf::CalcFactor{<:Pose2Pose2}) \n M = getManifold(cf.factor)\n ϵ = getPointIdentity(Pose2)\n X = sampleTangent(M, cf.factor.Z, ϵ)\n return X\nend","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"The return type for getSample is unrestricted, and will be passed to the residual function \"as-is\", but must return values representing a tangent vector for <:AbstractRelative","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"note: Note\nDefault dispatches in IncrementalInference will try use cf.factor.Z to samplePoint on manifold (for <:AbstractPrior) or sampleTangent (for <:AbstractRelative), which simplifies new factor definitions. If, however, you wish to build more complicated sampling processes, then simply define your own getSample(cf::CalcFactor{<:MyFactor}) function.","category":"page"},{"location":"examples/custom_relative_factors/#factor_residual_function","page":"Custom Relative Factor","title":"Factor Residual Function","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"The selection of <:IIF.AbstractManifoldMinimize, akin to earlier <:AbstractPrior, instructs IIF to find the minimum of the provided residual function. The residual function is used during inference to approximate the convolution of conditional beliefs from the approximate beliefs of the connected variables in the factor graph. Conceptually, the residual function is usually something akin to residual = measurement - prediction, but does not have to follow the exact recipe.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"The returned value (the factor measurement) from getSample will always be passed as the first argument (e.g. X) to the factor residual function. ","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"# first residual function example\n(cf::CalcFactor{<:EuclidDistance})(X, p, q) = X - norm(p .- q)\n\n# second residual function example\nfunction (cf::CalcFactor{<:Pose2Pose2})(X, p, q)\n M = getManifold(Pose2)\n q̂ = Manifolds.compose(M, p, exp(M, identity_element(M, p), X))\n Xc = vee(M, q, log(M, q, q̂))\n return Xc\nend","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"It is recommended to leave the incoming types unrestricted. If you must define the types, make sure to allow sufficient dispatch freedom (i.e. dispatch to concrete types) and not force operations to \"non-concrete\" types. Usage can be very case specific, and hence better to let Julia type-inference automation do the hard work of inferring the concrete types.","category":"page"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"note: Note\nAt present (2021) the residual function should return the residual value as a coordinate (not as tangent vectors or manifold points). 
Ongoing work is in progress, and likely to return residual values as manifold tangent vectors instead.","category":"page"},{"location":"examples/custom_relative_factors/#Serialization","page":"Custom Relative Factor","title":"Serialization","text":"","category":"section"},{"location":"examples/custom_relative_factors/","page":"Custom Relative Factor","title":"Custom Relative Factor","text":"Serialization of factors is also discussed in more detail at Standardized Factor Serialization.","category":"page"},{"location":"concepts/multilang/#Multilanguage-Interops","page":"Multi-Language Support","title":"Multilanguage Interops","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"The Caesar framework is not limited to direct Julia use. ","category":"page"},{"location":"concepts/multilang/#navabilitysdk","page":"Multi-Language Support","title":"NavAbility SDKs and APIs","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"The maintainers of Caesar.jl together with NavAbility.io are developing a standardized SDK / API for much easier multi-language / multi-access use of the solver features. The Documentation for the NavAbilitySDK's can be found here.","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Contact info@navability.io for more information.","category":"page"},{"location":"concepts/multilang/#Static,-Shared-Object-.so-Compilation","page":"Multi-Language Support","title":"Static, Shared Object .so Compilation","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"See Compiling Binaries.","category":"page"},{"location":"concepts/multilang/#ROS-Integration","page":"Multi-Language Support","title":"ROS Integration","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"See ROS Integration.","category":"page"},{"location":"concepts/multilang/#Python-Direct","page":"Multi-Language Support","title":"Python Direct","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"For completeness, another design pattern is to wrap Julia packages for direct access from python, see SciML/diffeqpy as example.","category":"page"},{"location":"concepts/multilang/#[OUTDATED]-ZMQ-Messaging-Interface","page":"Multi-Language Support","title":"[OUTDATED] ZMQ Messaging Interface","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Caesar.jl has a ZMQ messaging interface (interested can see code here) that allows users to interact with the solver code base in a variety of ways. 
The messaging interface is not meant to replace static .so library file compilation but rather provide a more versatile and flexible development strategy.","category":"page"},{"location":"concepts/multilang/#Starting-the-Caesar-ZMQ-Navigation-Server","page":"Multi-Language Support","title":"Starting the Caesar ZMQ Navigation Server","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Start the Caesar.ZmqCaesar server in a Julia session with a few process cores and full optimization:","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"julia -p4 -O3","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Then run the following commands, and note these steps have also been scripted here:","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"#import the required modules\nusing Caesar, Caesar.ZmqCaesar\n\n# create empty factor graph and config objects\nfg = initfg()\nconfig = Dict{String, String}()\nzmqConfig = ZmqServer(fg, config, true, \"tcp://*:5555\");\n\n# Start the server over ZMQ\nstart(zmqConfig)\n\n# give the server a minute to start up ...","category":"page"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"The current tests are a good place to see some examples of the current interfacing functions. Feel free to change the ZMQ interface for to any of the ZMQ supported modes of data transport, such as Interprocess Communication (IPC) vs. TCP.","category":"page"},{"location":"concepts/multilang/#Alternative-Methods","page":"Multi-Language Support","title":"Alternative Methods","text":"","category":"section"},{"location":"concepts/multilang/","page":"Multi-Language Support","title":"Multi-Language Support","text":"Interfacing from languages like Python may also be achieved using PyCall.jl although little work has been done in the Caesar.jl framework to explore this path. 
Julia is itself interactive/dynamic and has plenty of line-by-line and Integrated Development Environment support – consider trying Julia for your application.","category":"page"},{"location":"dev/internal_fncs/#Good-to-know","page":"Internal Functions","title":"Good to know","text":"","category":"section"},{"location":"dev/internal_fncs/#Conditional-Multivariate-Normals","page":"Internal Functions","title":"Conditional Multivariate Normals","text":"","category":"section"},{"location":"dev/internal_fncs/","page":"Internal Functions","title":"Internal Functions","text":"using Distributions\nusing LinearAlgebra\n\n##\n\n# P(A|B)\n\nΣab = 0.2*randn(3,3)\nΣab += Σab'\nΣab += diagm([1.0;1.0;1.0])\n\nμ_ab = [10.0;0.0;-1.0]\nμ_1 = μ_ab[1:1]\nμ_2 = μ_ab[2:3]\n\nΣ_11 = Σab[1:1,1:1]\nΣ_12 = Σab[1:1,2:3]\nΣ_21 = Σab[2:3,1:1]\nΣ_22 = Σab[2:3,2:3]\n\n##\n\n# P(A|B) = P(A,B) / P(B)\nP_AB = MvNormal(μ_ab, Σab) # likelihood\nP_B = MvNormal([-0.5;0.75], [0.75 0.3; 0.3 2.0]) # evidence\n\n# Schur compliment\nμ_(b) = μ_1 + Σ_12*Σ_22^(-1)*(b-μ_2)\nΣ_ = Σ_11 + Σ_12*Σ_22^(-1)*Σ_21\n\nP_AB_B(a,b) = pdf(P_AB, [a;b]) / pdf(P_B, b)\nP_A_B(a,b; mv = MvNormal(μ_(b), Σ_)) = pdf(mv, a) \n\n##\n\n# probability density: p(a) = P(A=a | B=b)\n@show P_A_B([1.;],[0.;0.])\n@show P_AB_B([1.;],[0.;0.])\n\nP(A|B=B(.))","category":"page"},{"location":"dev/internal_fncs/#Various-Internal-Function-Docs","page":"Internal Functions","title":"Various Internal Function Docs","text":"","category":"section"},{"location":"dev/internal_fncs/","page":"Internal Functions","title":"Internal Functions","text":"_solveCCWNumeric!","category":"page"},{"location":"dev/internal_fncs/#IncrementalInference._solveCCWNumeric!","page":"Internal Functions","title":"IncrementalInference._solveCCWNumeric!","text":"_solveCCWNumeric!(ccwl; ...)\n_solveCCWNumeric!(ccwl, _slack; perturb)\n\n\nSolve free variable x by root finding residual function fgr.usrfnc(res, x). This is the penultimate step before calling numerical operations to move actual estimates, which is done by an internally created lambda function.\n\nNotes\n\nAssumes cpt_.p is already set to desired X decision variable dimensions and size. 
\nAssumes only ccw.particleidx will be solved for\nsmall random (off-manifold) perturbation used to prevent trivial solver cases, div by 0 etc.\nperturb is necessary for NLsolve (obsolete) cases, and smaller than 1e-10 will result in test failure\nAlso incorporates the active hypo lookup\n\nDevNotes\n\nTODO testshuffle is now obsolete, should be removed\nTODO perhaps consolidate perturbation with inflation or nullhypo\n\n\n\n\n\n","category":"function"},{"location":"dev/wiki/#Developers-Documentation","page":"Wiki Pointers","title":"Developers Documentation","text":"","category":"section"},{"location":"dev/wiki/#High-Level-Requirements","page":"Wiki Pointers","title":"High Level Requirements","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"Wiki to formalize some of the overall objectives.","category":"page"},{"location":"dev/wiki/#Standardizing-the-API,-verbNoun-Definitions:","page":"Wiki Pointers","title":"Standardizing the API, verbNoun Definitions:","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"The API derives from a set of standard definitions for verbs and Nouns, please see the developer wiki regarding these definitions.","category":"page"},{"location":"dev/wiki/#DistributedFactorGraphs.jl-Docs","page":"Wiki Pointers","title":"DistributedFactorGraphs.jl Docs","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"These are more hardy developer docs, such as the lower level data management API etc.","category":"page"},{"location":"dev/wiki/#Design-Wiki,-Data-and-Architecture","page":"Wiki Pointers","title":"Design Wiki, Data and Architecture","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"More developer zone material will be added here in the future, but for the time being check out the Caesar Wiki.","category":"page"},{"location":"dev/wiki/#Tree-and-CSM-References","page":"Wiki Pointers","title":"Tree and CSM References","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"Major upgrades to how the tree and CSM works is tracked in IIF issue 889.","category":"page"},{"location":"dev/wiki/#Coding-Templates","page":"Wiki Pointers","title":"Coding Templates","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"We've started to organize useful coding templates that are not available elsewhere (such as JuliaDocs) in a more local developers ","category":"page"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"Coding Templates Wiki I.\nCoding Templates Wiki II","category":"page"},{"location":"dev/wiki/#Shortcuts-for-vscode-IDE","page":"Wiki Pointers","title":"Shortcuts for vscode IDE","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"See wiki","category":"page"},{"location":"dev/wiki/#Parametric-Solve-Whiteboard","page":"Wiki Pointers","title":"Parametric Solve Whiteboard","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"https://github.com/JuliaRobotics/IncrementalInference.jl/wiki/Parametric-Solve-Whiteboard","category":"page"},{"location":"dev/wiki/#Early-PoC-work-on-Tree-based-Initialization","page":"Wiki Pointers","title":"Early PoC work on Tree based 
Initialization","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"https://github.com/JuliaRobotics/IncrementalInference.jl/wiki/Tree-Based-Initialization","category":"page"},{"location":"dev/wiki/#Variable-Ordering-Links","page":"Wiki Pointers","title":"Variable Ordering Links","text":"","category":"section"},{"location":"dev/wiki/","page":"Wiki Pointers","title":"Wiki Pointers","text":"Wiki for variable ordering links.","category":"page"},{"location":"examples/using_images/#images_and_fiducials","page":"Images and AprilTags","title":"Images and Fiducials","text":"","category":"section"},{"location":"examples/using_images/#AprilTags","page":"Images and AprilTags","title":"AprilTags","text":"","category":"section"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"One common use in SLAM is AprilTags.jl. Please see that repo for documentation on detecting tags in images. Note that Caesar.jl has a few built in tools for working with Images.jl too.","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"using AprilTags\nusing Images, Caesar","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"Which immediately enables a new factor specifically developed for using AprilTags in a factor graph:","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"Caesar.Pose2AprilTag4Corners","category":"page"},{"location":"examples/using_images/#Caesar.Pose2AprilTag4Corners","page":"Images and AprilTags","title":"Caesar.Pose2AprilTag4Corners","text":"struct Pose2AprilTag4Corners{T<:(SamplableBelief), F<:Function} <: AbstractManifoldMinimize\n\nSimplified constructor type to convert between 4 corner detection of AprilTags to a Pose2Pose2 factor for use in 2D\n\nNotes\n\nCoordinate frames are:\nassume robotics body frame is xyz <==> fwd-lft-up\nassume AprilTags pose is xyz <==> rht-dwn-fwd\nassume camera frame is xyz <==> rht-dwn-fwd\nassume Images.jl frame is row-col <==> i-j <==> dwn-rht\nHelper constructor uses f_width, f_height, c_width, c_height,s to build K, \nsetting K will overrule f_width,f_height, c_width, c_height,s.\nFinding preimage from deconv measurement sample idx in place of MvNormal mean:\nsee generateCostAprilTagsPreimageCalib for detauls.\n\nExample\n\n# bring in the packages\nusing AprilTags, Caesar, FileIO\n\n# the size of the tag, as in the outer length of each side on of black square \ntaglength = 0.15\n\n# load the image\nimg = load(\"photo.jpg\")\n\n# the image size\nwidth, height = size(img)\n# auto-guess `f_width=height, c_width=round(Int,width/2), c_height=round(Int, height/2)`\n\ndetector = AprilTagDetector()\ntags = detector(img)\n\n# new factor graph with Pose2 `:x0` and a Prior.\nfg = generateGraph_ZeroPose(varType=Pose2)\n\n# use a construction helper to add factors to all the tags\nfor tag in tags\n tagSym = Symbol(\"tag$(tag.id)\")\n exists(fg, tagSym) ? 
nothing : addVariable!(fg, tagSym, Pose2)\n pat = Pose2AprilTag4Corners(corners=tag.p, homography=tag.H, taglength=taglength)\n addFactor!(fg, [:x0; tagSym], pat)\nend\n\n# free AprilTags library memory\nfreeDetector!(detector)\n\nDevNotes\n\nTODO IIF will get plumbing to combine many of preimage obj terms into single calibration search\n\nRelated\n\nAprilTags.detect, PackedPose2AprilTag4Corners, generateCostAprilTagsPreimageCalib\n\n\n\n\n\n","category":"type"},{"location":"examples/using_images/#Using-Images.jl","page":"Images and AprilTags","title":"Using Images.jl","text":"","category":"section"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"The Caesar.jl ecosystem support use of the JuliaImages/Images.jl suite of packages. Please see documentation there for the wealth of features implemented.","category":"page"},{"location":"examples/using_images/#Handy-Notes","page":"Images and AprilTags","title":"Handy Notes","text":"","category":"section"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"Converting between images and PNG format:","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"bytes = Caesar.toFormat(format\"PNG\", img)","category":"page"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"note: Note\nMore details to follow.","category":"page"},{"location":"examples/using_images/#Images-enables-ScatterAlign","page":"Images and AprilTags","title":"Images enables ScatterAlign","text":"","category":"section"},{"location":"examples/using_images/","page":"Images and AprilTags","title":"Images and AprilTags","text":"See point cloud alignment page for details on ScatterAlignPose","category":"page"},{"location":"concepts/interacting_fgs/#Factor-Graph-as-a-Whole","page":"Interact w Graphs","title":"Factor Graph as a Whole","text":"","category":"section"},{"location":"concepts/interacting_fgs/#Saving-and-Loading","page":"Interact w Graphs","title":"Saving and Loading","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Assuming some factor graph object has been constructed by hand or automation, it is often very useful to be able to store that factor graph to file for later loading, solving, analysis etc. Caesar.jl provides such functionality through easy saving and loading. To save a factor graph, simply do:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"saveDFG(\"/somewhere/myfg\", fg)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"saveDFG","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.saveDFG","page":"Interact w Graphs","title":"DistributedFactorGraphs.saveDFG","text":"saveDFG(folder, dfg; saveMetadata)\n\n\nSave a DFG to a folder. Will create/overwrite folder if it exists.\n\nDevNotes:\n\nTODO remove compress kwarg.\n\nExample\n\nusing DistributedFactorGraphs, IncrementalInference\n# Create a DFG - can make one directly, e.g. GraphsDFG{NoSolverParams}() or use IIF:\ndfg = initfg()\n# ... 
Add stuff to graph using either IIF or DFG:\nv1 = addVariable!(dfg, :a, ContinuousScalar, tags = [:POSE], solvable=0)\n# Now save it:\nsaveDFG(dfg, \"/tmp/saveDFG.tar.gz\")\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Similarly in the same or a new Julia context, you can load a factor graph object","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"# using Caesar\nfg_ = loadDFG(\"/somwhere/myfg\")","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"loadDFG\nloadDFG!","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.loadDFG","page":"Interact w Graphs","title":"DistributedFactorGraphs.loadDFG","text":"loadDFG(file)\n\n\nConvenience graph loader into a default LocalDFG.\n\nSee also: loadDFG!, saveDFG\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.loadDFG!","page":"Interact w Graphs","title":"DistributedFactorGraphs.loadDFG!","text":"loadDFG!(\n dfgLoadInto,\n dst;\n overwriteDFGMetadata,\n useDeprExtract\n)\n\n\nLoad a DFG from a saved folder.\n\nExample\n\nusing DistributedFactorGraphs, IncrementalInference\n# Create a DFG - can make one directly, e.g. GraphsDFG{NoSolverParams}() or use IIF:\ndfg = initfg()\n# Load the graph\nloadDFG!(dfg, \"/tmp/savedgraph.tar.gz\")\n# Use the DFG as you do normally.\nls(dfg)\n\nSee also: loadDFG, saveDFG\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"note: Note\nJulia natively provides a direct in memory deepcopy function for making duplicate objects if you wish to keep a backup of the factor graph, e.g.fg2 = deepcopy(fg)","category":"page"},{"location":"concepts/interacting_fgs/#Adding-an-EntryData-Blob-store","page":"Interact w Graphs","title":"Adding an Entry=>Data Blob store","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"A later part of the documentation will show how to include a Entry=>Data blob store.","category":"page"},{"location":"concepts/interacting_fgs/#querying_graph","page":"Interact w Graphs","title":"Querying the Graph","text":"","category":"section"},{"location":"concepts/interacting_fgs/#List-Variables:","page":"Interact w Graphs","title":"List Variables:","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"A quick summary of the variables in the factor graph can be retrieved with:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"# List variables\nls(fg)\n# List factors attached to x0\nls(fg, :x0)\n# TODO: Provide an overview of getVal, getVert, getBW, getBelief, etc.","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"It is possible to filter the listing with Regex string:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"ls(fg, r\"x\\d\")","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w 
Graphs","text":"ls","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.ls","page":"Interact w Graphs","title":"DistributedFactorGraphs.ls","text":"ls(dfg; ...)\nls(dfg, regexFilter; tags, solvable)\n\n\nList the DFGVariables in the DFG. Optionally specify a label regular expression to retrieves a subset of the variables. Tags is a list of any tags that a node must have (at least one match).\n\nNotes:\n\nReturns Vector{Symbol}\n\n\n\n\n\nls(dfg; ...)\nls(dfg, node; solvable)\n\n\nRetrieve a list of labels of the immediate neighbors around a given variable or factor.\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"unsorted = intersect(ls(fg, r\"x\"), ls(fg, Pose2)) # by regex\n\n# sorting in most natural way (as defined by DFG)\nsorted = sortDFG(unsorted)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"sortDFG","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.sortDFG","page":"Interact w Graphs","title":"DistributedFactorGraphs.sortDFG","text":"sortDFG(vars; by, kwargs...)\n\n\nConvenience wrapper for Base.sort. Sort variable (factor) lists in a meaningful way (by timestamp, label, etc), for example [:april;:x1_3;:x1_6;] Defaults to sorting by timestamp for variables and factors and using natural_lt for Symbols. See Base.sort for more detail.\n\nNotes\n\nNot fool proof, but does better than native sort.\n\nExample\n\nsortDFG(ls(dfg)) sortDFG(ls(dfg), by=getLabel, lt=natural_lt)\n\nRelated\n\nls, lsf\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#List-Factors:","page":"Interact w Graphs","title":"List Factors:","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"unsorted = lsf(fg)\nunsorted = ls(fg, Pose2Point2BearingRange)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"or using the tags (works for variables too):","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"lsf(fg, tags=[:APRILTAGS;])","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"lsf\nlsfPriors","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.lsf","page":"Interact w Graphs","title":"DistributedFactorGraphs.lsf","text":"lsf(dfg; ...)\nlsf(dfg, regexFilter; tags, solvable)\n\n\nList the DFGFactors in the DFG. 
Optionally specify a label regular expression to retrieve a subset of the factors.\n\nNotes\n\nReturn Vector{Symbol}\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.lsfPriors","page":"Interact w Graphs","title":"DistributedFactorGraphs.lsfPriors","text":"lsfPriors(dfg)\n\n\nReturn vector of prior factor symbol labels in factor graph dfg.\n\nNotes:\n\nReturns Vector{Symbol}\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"There are a variety of functions to query the factor graph; please refer to Function Reference for details and note that many functions still need to be added to this documentation.","category":"page"},{"location":"concepts/interacting_fgs/#Extracting-a-Subgraph","page":"Interact w Graphs","title":"Extracting a Subgraph","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Sometimes it is useful to make a deepcopy of a segment of the factor graph for some purpose:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"sfg = buildSubgraph(fg, [:x1;:x2;:l7], 1)","category":"page"},{"location":"concepts/interacting_fgs/#Extracting-Belief-Results-(and-PPE)","page":"Interact w Graphs","title":"Extracting Belief Results (and PPE)","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Once you have solved the graph, you can review the full marginal with:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"X0 = getBelief(fg, :x0)\n# Evaluate the marginal density function just for fun at [0.0, 0, 0].\nX0(zeros(3,1))","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"This object is currently a Kernel Density which contains kernels at specific points on the associated manifold. These kernel locations can be retrieved with:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"X0pts = getPoints(X0)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getBelief","category":"page"},{"location":"concepts/interacting_fgs/#IncrementalInference.getBelief","page":"Interact w Graphs","title":"IncrementalInference.getBelief","text":"getBelief(vnd)\n\n\nGet a ManifoldKernelDensity estimate from variable node data.\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#Parametric-Point-Estimates-(PPE)","page":"Interact w Graphs","title":"Parametric Point Estimates (PPE)","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Since Caesar.jl is built around each variable state being estimated as a total marginal posterior belief, it is often useful to get the equivalent parametric point estimate from the belief. 
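As a quick illustration, such a point estimate could also be computed by hand from the belief samples themselves (a rough sketch only, not the library's canonical method; it assumes a solved Pose2 graph fg with a variable :x0 is in scope, and that Manifolds.jl is loaded for the on-manifold mean):

using Manifolds
X0  = getBelief(fg, :x0)             # on-manifold kernel density belief
pts = getPoints(X0)                  # kernel locations on the manifold
mu  = mean(getManifold(Pose2), pts)  # crude on-manifold mean of the samples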
Many of these computations are already done by the inference library and available via the various getPPE methods, e.g.:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getPPE(fg, :l3)\ngetPPESuggested(fg, :l5)","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"There are values for mean, max, or hybrid combinations.","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getPPE\ncalcPPE","category":"page"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.getPPE","page":"Interact w Graphs","title":"DistributedFactorGraphs.getPPE","text":"getPPE(vari)\ngetPPE(vari, solveKey)\n\n\nGet the parametric point estimate (PPE) for a variable in the factor graph.\n\nNotes\n\nDefaults on keywords solveKey and method\n\nRelated\n\ngetMeanPPE, getMaxPPE, getKDEMean, getKDEFit, getPPEs, getVariablePPEs\n\n\n\n\n\ngetPPE(v)\ngetPPE(v, ppekey)\n\n\nGet the parametric point estimate (PPE) for a variable in the factor graph for a given solve key.\n\nNotes\n\nDefaults on keywords solveKey and method\n\nRelated getPPEMean, getPPEMax, updatePPE!, mean(BeliefType)\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#IncrementalInference.calcPPE","page":"Interact w Graphs","title":"IncrementalInference.calcPPE","text":"calcPPE(var; ...)\ncalcPPE(var, varType; ppeType, solveKey, ppeKey)\n\n\nGet the ParametricPointEstimates–-based on full marginal belief estimates–-of a variable in the distributed factor graph. Calculate new Parametric Point Estimates for a given variable.\n\nDevNotes\n\nTODO update for manifold subgroups.\nTODO standardize after AMP3D\n\nRelated\n\ngetPPE, setPPE!, getVariablePPE\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#Getting-Many-Marginal-Samples","page":"Interact w Graphs","title":"Getting Many Marginal Samples","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"It is also possible to draw more samples from the above belief objects:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"pts = rand(X0, 200)","category":"page"},{"location":"concepts/interacting_fgs/#build_manikde","page":"Interact w Graphs","title":"Building On-Manifold KDEs","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"These kernel density belief objects can be constructed from points as follows:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"X0_ = manikde!(Pose2, pts)","category":"page"},{"location":"concepts/interacting_fgs/#Logging-Output-(Unique-Folder)","page":"Interact w Graphs","title":"Logging Output (Unique Folder)","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"Each new factor graph is designated a unique folder in /tmp/caesar. This is usually used for debugging or large scale test analysis. Sometimes it may be useful for the user to also use this temporary location. 
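For example, a user could drop auxiliary files into that same unique folder so they sit alongside the solver's own debug output (a small sketch using the getSolverParams and joinLogPath helpers introduced just below; assumes a factor graph fg is in scope and the file name is purely illustrative):

sp = getSolverParams(fg)
@show getLogPath(sp)                        # the unique /tmp/caesar/... folder for this graph
open(joinLogPath(sp, "myNotes.txt"), "w") do io
    println(io, "user notes stored alongside solver output")
end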
The location is stored in the SolverParams:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getSolverParams(fg).logpath","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"The functions of interest are:","category":"page"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getLogPath\njoinLogPath","category":"page"},{"location":"concepts/interacting_fgs/#IncrementalInference.getLogPath","page":"Interact w Graphs","title":"IncrementalInference.getLogPath","text":"getLogPath(opt)\n\n\nGet the folder location where debug and solver information is recorded for a particular factor graph.\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#IncrementalInference.joinLogPath","page":"Interact w Graphs","title":"IncrementalInference.joinLogPath","text":"joinLogPath(opt, str)\n\n\nAppend str onto factor graph log path as convenience function.\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"note: Note\nA useful tip for doing large scale processing might be to reduce amount of write operations to a solid-state drive that will be written to default location /tmp/caesar by simplying adding a symbolic link to a USB drive or SDCard, perhaps similar to:cd /tmp\nmkdir -p /media/MYFLASHDRIVE/caesar\nln -s /media/MYFLASHDRIVE/caesar caesar","category":"page"},{"location":"concepts/interacting_fgs/#Other-Useful-Functions","page":"Interact w Graphs","title":"Other Useful Functions","text":"","category":"section"},{"location":"concepts/interacting_fgs/","page":"Interact w Graphs","title":"Interact w Graphs","text":"getFactorDim\ngetManifold","category":"page"},{"location":"concepts/interacting_fgs/#IncrementalInference.getFactorDim","page":"Interact w Graphs","title":"IncrementalInference.getFactorDim","text":"getFactorDim(w...) -> Any\n\n\nReturn the number of dimensions this factor vertex fc influences.\n\nDevNotes\n\nTODO document how this function handles partial dimensions\nCurrently a factor manifold is just what the measurement provides (i.e. bearing only would be dimension 1)\n\n\n\n\n\n","category":"function"},{"location":"concepts/interacting_fgs/#DistributedFactorGraphs.getManifold","page":"Interact w Graphs","title":"DistributedFactorGraphs.getManifold","text":"getManifold(_)\n\n\nInterface function to return the <:ManifoldsBase.AbstractManifold object of variableType<:InferenceVariable.\n\n\n\n\n\ngetManifold(mkd)\ngetManifold(mkd, asPartial)\n\n\nReturn the manifold on which this ManifoldKernelDensity is defined.\n\nDevNotes\n\nTODO currently ignores the .partial aspect (captured in parameter L)\n\n\n\n\n\n","category":"function"},{"location":"refs/literature/#Literature","page":"References","title":"Literature","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"Newly created page to list related references and additional literature pertaining to this package.","category":"page"},{"location":"refs/literature/#Direct-References","page":"References","title":"Direct References","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.1] Fourie, D., Leonard, J., Kaess, M.: \"A Nonparametric Belief Solution to the Bayes Tree\" IEEE/RSJ Intl. Conf. 
on Intelligent Robots and Systems (IROS), (2016).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.2] Fourie, D.: \"Multi-modal and Inertial Sensor Solutions for Navigation-type Factor Graphs\", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2017.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.3] Fourie, D., Claassens, S., Pillai, S., Mata, R., Leonard, J.: \"SLAMinDB: Centralized graph databases for mobile robotics\", IEEE Intl. Conf. on Robotics and Automation (ICRA), Singapore, 2017.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.4] Cheung, M., Fourie, D., Rypkema, N., Vaz Teixeira, P., Schmidt, H., and Leonard, J.: \"Non-Gaussian SLAM utilizing Synthetic Aperture Sonar\", Intl. Conf. On Robotics and Automation (ICRA), IEEE, Montreal, 2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.5] Doherty, K., Fourie, D., Leonard, J.: \"Multimodal Semantic SLAM with Probabilistic Data Association\", Intl. Conf. On Robotics and Automation (ICRA), IEEE, Montreal, 2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.6] Fourie, D., Vaz Teixeira, P., Leonard, J.: \"Non-parametric Mixed-Manifold Products using Multiscale Kernel Densities\", IEEE Intl. Conf. on Intelligent Robots and Systems (IROS), (2019),.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.7] Teixeira, P.N.V., Fourie, D., Kaess, M. and Leonard, J.J., 2019, September. \"Dense, sonar-based reconstruction of underwater scenes\". In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 8060-8066). IEEE.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.8] Fourie, D., Leonard, J.: \"Inertial Odometry with Retroactive Sensor Calibration\", 2015-2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.9] Koolen, T. and Deits, R., 2019. Julia for robotics: Simulation and real-time control in a high-level programming language. IEEE, Intl. Conference on Robotics and Automation, ICRA (2019).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.10] Fourie, D., Espinoza, A. T., Kaess, M., and Leonard, J. J., “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, Oulu, Finland, Springer Publishing.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.11] Fourie, D., Rypkema, N., Claassens, S., Vaz Teixeira, P., Fischell, E., and Leonard, J.J., \"Towards Real-Time Non-Gaussian SLAM for Underdetermined Navigation\", in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2020, Las Vegas, USA.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[1.12] J. Terblanche, S. Claassens and D. Fourie, \"Multimodal Navigation-Affordance Matching for SLAM,\" in IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7728-7735, Oct. 
2021, doi: 10.1109/LRA.2021.3098788. Also presented at, IEEE 17th International Conference on Automation Science and Engineering, August 2021, Lyon, France.","category":"page"},{"location":"refs/literature/#Important-References","page":"References","title":"Important References","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.1] Kaess, Michael, et al. \"iSAM2: Incremental smoothing and mapping using the Bayes tree\" The International Journal of Robotics Research (2011): 0278364911430419.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.2] Kaess, Michael, et al. \"The Bayes tree: An algorithmic foundation for probabilistic robot mapping.\" Algorithmic Foundations of Robotics IX. Springer, Berlin, Heidelberg, 2010. 157-173.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.3] Kschischang, Frank R., Brendan J. Frey, and Hans-Andrea Loeliger. \"Factor graphs and the sum-product algorithm.\" IEEE Transactions on information theory 47.2 (2001): 498-519.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.4] Dellaert, Frank, and Michael Kaess. \"Factor graphs for robot perception.\" Foundations and Trends® in Robotics 6.1-2 (2017): 1-139.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.5] Sudderth, E.B., Ihler, A.T., Isard, M., Freeman, W.T. and Willsky, A.S., 2010. \"Nonparametric belief propagation.\" Communications of the ACM, 53(10), pp.95-103","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.6] Paskin, Mark A. \"Thin junction tree filters for simultaneous localization and mapping.\" in Int. Joint Conf. on Artificial Intelligence. 2003.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.7] Farrell, J., and Matthew B.: \"The global positioning system and inertial navigation.\" Vol. 61. New York: Mcgraw-hill, 1999.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.8] Zarchan, Paul, and Howard Musoff, eds. Fundamentals of Kalman filtering: a practical approach. American Institute of Aeronautics and Astronautics, Inc., 2013.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.9] Rypkema, N. R.,: \"Underwater & Out of Sight: Towards Ubiquity in UnderwaterRobotics\", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.10] Vaz Teixeira, P.: \"Dense, Sonar-based Reconstruction of Underwater Scenes\", Ph.D. Thesis, Massachusetts Institute of Technology Electrical Engineering and Computer Science together with Woods Hole Oceanographic Institution Department for Applied Ocean Science and Engineering, September 2019.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.11] Hanebeck, Uwe D. 
\"FLUX: Progressive State Estimation Based on Zakai-type Distributed Ordinary Differential Equations.\" arXiv preprint arXiv:1808.02825 (2018).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.12] Muandet, Krikamol, et al. \"Kernel mean embedding of distributions: A review and beyond.\" Foundations and Trends® in Machine Learning 10.1-2 (2017): 1-141.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.13] Hsiao, M. and Kaess, M., 2019, May. \"MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree\". In 2019 International Conference on Robotics and Automation (ICRA) (pp. 1274-1280). IEEE.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.14] Arnborg, S., Corneil, D.G. and Proskurowski, A., 1987. \"Complexity of finding embeddings in a k-tree\". SIAM Journal on Algebraic Discrete Methods, 8(2), pp.277-284.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15a] Sola, J., Deray, J. and Atchuthan, D., 2018. \"A micro Lie theory for state estimation in robotics\". arXiv preprint arXiv:1812.01537, and tech report. And cheatsheet w/ suspected typos.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15b] Delleart F., 2012. Lie Groups for Beginners.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15c] Eade E., 2017 Lie Groups for 2D and 3D Transformations.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15d] Chirikjian, G.S., 2015. Partial bi-invariance of SE(3) metrics. Journal of Computing and Information Science in Engineering, 15(1).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15e] Pennec, X. and Lorenzi, M., 2020. Beyond Riemannian geometry: The affine connection setting for transformation groups. In Riemannian Geometric Statistics in Medical Image Analysis (pp. 169-229). Academic Press.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15f] Žefran, M., Kumar, V. and Croke, C., 1996, August. Choice of Riemannian metrics for rigid body kinematics. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 97584, p. V02BT02A030). American Society of Mechanical Engineers.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.15g] Chirikjian, G.S. and Zhou, S., 1998. Metrics on motion and deformation of solid models.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.16] Kaess, M. and Dellaert, F., 2009. Covariance recovery from a square root information matrix for data association. Robotics and autonomous systems, 57(12), pp.1198-1210.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[2.17] Bishop, C.M., 2006. Pattern recognition and machine learning. New York: Springer. ISBN 978-0-387-31073-2.","category":"page"},{"location":"refs/literature/#Additional-References","page":"References","title":"Additional References","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.1] Duits, Remco, Erik J. Bekkers, and Alexey Mashtakov. 
\"Fourier Transform on the Homogeneous Space of 3D Positions and Orientations for Exact Solutions to Linear Parabolic and (Hypo-) Elliptic PDEs\". arXiv preprint arXiv:1811.00363 (2018).","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.2] Mohamed, S., Rosca, M., Figurnov, M. and Mnih, A., 2019. \"Monte carlo gradient estimation in machine learning\". arXiv preprint arXiv:1906.10652.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.3] Rackauckas, C., Ma, Y., Martensen, J., Warner, C., Zubov, K., Supekar, R., Skinner, D., Ramadhan, A., Edelman, A., \"Universal Differential Equations for Scientific Machine Learning\", Archive online, DOI: 2001.04385.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.4] Boumal, Nicolas. An introduction to optimization on smooth manifolds. Available online, May, 2020.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.5] Relationship between the Hessianand Covariance Matrix forGaussian Random Variables, John Wiley & Sons","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.6] Pennec, Xavier. Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements, HAL Archive, 2011, Inria, France.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.7] Weber, P., Medina-Oliva, G., Simon, C., et al., 2012. Overview on Bayesian networks applications for dependability risk analysis and maintenance areas. Appl. Artif. Intell. 25 (4), 671e682. https://doi.org/10.1016/j.engappai.2010.06.002. Preprint PDF.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.8] Wang, H.R., Ye, L.T., Xu, X.Y., et al., 2010. Bayesian networks precipitation model based on hidden markov analysis and its application. Sci. China Technol. Sci. 53 (2), 539e547. https://doi.org/10.1007/s11431-010-0034-3.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.9] Mangelson, J.G., Dominic, D., Eustice, R.M. and Vasudevan, R., 2018, May. Pairwise consistent measurement set maximization for robust multi-robot map merging. In 2018 IEEE international conference on robotics and automation (ICRA) (pp. 2916-2923). IEEE.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[3.10] Bourgeois, F. and Lassalle, J.C., 1971. An extension of the Munkres algorithm for the assignment problem to rectangular matrices. Communications of the ACM, 14(12), pp.802-804.","category":"page"},{"location":"refs/literature/#Signal-Processing-(Beamforming-and-Channel-Deconvolution)","page":"References","title":"Signal Processing (Beamforming and Channel Deconvolution)","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[4.1] Van Trees, H.L., 2004. Optimum array processing: Part IV of detection, estimation, and modulation theory. John Wiley & Sons.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[4.2a] Dowling, D.R., 2013. \"Acoustic Blind Deconvolution and Unconventional Nonlinear Beamforming in Shallow Ocean Environments\". 
MICHIGAN UNIV ANN ARBOR DEPT OF MECHANICAL ENGINEERING.","category":"page"},{"location":"refs/literature/","page":"References","title":"References","text":"[4.2b] Hossein Abadi, S., 2013. \"Blind deconvolution in multipath environments and extensions to remote source localization\", paper, thesis.","category":"page"},{"location":"refs/literature/#Contact-or-Tactile","page":"References","title":"Contact or Tactile","text":"","category":"section"},{"location":"refs/literature/","page":"References","title":"References","text":"[5.1] Suresh, S., Bauza, M., Yu, K.T., Mangelson, J.G., Rodriguez, A. and Kaess, M., 2021, May. Tactile SLAM: Real-time inference of shape and pose from planar pushing. In 2021 IEEE International Conference on Robotics and Automation (ICRA) (pp. 11322-11328). IEEE.","category":"page"},{"location":"introduction/#Introduction","page":"Introduction","title":"Introduction","text":"","category":"section"},{"location":"introduction/","page":"Introduction","title":"Introduction","text":"Caesar is an open-source robotic software stack aimed at localization and mapping, using non-Gaussian graphical model state-estimation techniques. The factor graph method is well suited to combining heterogeneous and ambiguous sensor data streams. The focus is predominantly on geometric/spatial/semantic estimation tasks related to simultaneous localization and mapping (SLAM). The software is also highly extensible and well suited to a variety of estimation/filtering-type tasks — especially in non-Gaussian/multimodal settings. Check out a brief description on why non-Gaussian / multi-modal data processing needs arise.","category":"page"},{"location":"introduction/#A-Few-Highlights","page":"Introduction","title":"A Few Highlights","text":"","category":"section"},{"location":"introduction/","page":"Introduction","title":"Introduction","text":"Caesar.jl addresses numerous issues that arise in prior SLAM solutions, including: ","category":"page"},{"location":"introduction/","page":"Introduction","title":"Introduction","text":"Distributed Factor Graph representation deeply-coupled with an on-Manifold probabilistic algebra language;\nLocalization using different algorithms:\nMM-iSAMv2\nParametric methods, including regular Gaussian or Max-Mixtures.\nOther multi-parametric and non-Gaussian algorithms are presently being implemented.\nSolving under-defined systems, \nInference with non-Gaussian measurements, \nStandard features for natively handling ambiguous data association and multi-hypotheses, \nNative multi-modal (hypothesis) representation in the factor-graph, see Data Association and Hypotheses:\nMulti-modal and non-parametric representation of constraints;\nGaussian distributions are but one of the many representations of measurement error;\nSimplifying bespoke factor development, \nCentralized (or peer-to-peer decentralized) factor-graph persistence, \ni.e. Federated multi-session/agent reduction.\nMulti-CPU inference.\nOut-of-library extendable for Custom New Variables and Factors;\nNatively supports legacy Gaussian parametric and max-mixtures solutions;\nLocal in-memory solving on the device as well as database-driven centralized solving (micro-service architecture);\nNatively supports Clique Recycling (i.e. 
fixed-lag out-marginalization) for continuous operation as well as off-line batch solving, see more at Using Incremental Updates (Clique Recycling I);\nNatively supports Dead Reckon Tethering;\nNatively supports Federated multi-session/agent solving;\nNative support for Entry=>Data blobs for storing large format data.\nMiddleware support, e.g. see the ROS Integration Page.","category":"page"},{"location":"concepts/available_varfacs/#variables_factors","page":"Variables/Factors","title":"Variables in Caesar.jl","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"You can check for the latest variable types by running the following in your terminal:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"using RoME, Caesar\n\nsubtypes(IIF.InferenceVariable)\n\n# variables already available\nIIF.getCurrentWorkspaceVariables()\n\n# factors already available\nIIF.getCurrentWorkspaceFactors()","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"The variables and factors in Caesar should be sufficient for a variety of robotic applications, however, users can easily extend the framework (without changing the core code). This can even be done out-of-library at runtime after a construction of a factor graph has started! See Custom Variables and Custom Factors for more details.","category":"page"},{"location":"concepts/available_varfacs/#Basic-Variables","page":"Variables/Factors","title":"Basic Variables","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Default variables in IncrementalInference","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Position{N}","category":"page"},{"location":"concepts/available_varfacs/#IncrementalInference.Position","page":"Variables/Factors","title":"IncrementalInference.Position","text":"struct Position{N} <: InferenceVariable\n\nContinuous Euclidean variable of dimension N representing a Position in cartesian space.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#2D-Variables","page":"Variables/Factors","title":"2D Variables","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"The current variables types are:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Point2\nPose2\nDynPoint2\nDynPose2","category":"page"},{"location":"concepts/available_varfacs/#RoME.Point2","page":"Variables/Factors","title":"RoME.Point2","text":"struct Point2 <: InferenceVariable\n\nXY Euclidean manifold variable node softtype.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2","page":"Variables/Factors","title":"RoME.Pose2","text":"struct Pose2 <: InferenceVariable\n\nPose2 is a SE(2) mechanization of two Euclidean translations and one Circular rotation, used for general 2D SLAM.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPoint2","page":"Variables/Factors","title":"RoME.DynPoint2","text":"struct DynPoint2 <: InferenceVariable\n\nDynamic point in 2D space with velocity components: x, y, dx/dt, 
dy/dt\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPose2","page":"Variables/Factors","title":"RoME.DynPose2","text":"struct DynPose2 <: InferenceVariable\n\nDynamic pose variable with velocity components: x, y, theta, dx/dt, dy/dt\n\nNote\n\nThe SE2E2_Manifold definition used currently is a hack to simplify the transition to Manifolds.jl, see #244 \nReplaced SE2E2_Manifold hack with ProductManifold(SpecialEuclidean(2), TranslationGroup(2)), confirm if it is correct.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#3D-Variables","page":"Variables/Factors","title":"3D Variables","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Point3\nPose3","category":"page"},{"location":"concepts/available_varfacs/#RoME.Point3","page":"Variables/Factors","title":"RoME.Point3","text":"struct Point3 <: InferenceVariable\n\nXYZ Euclidean manifold variable node softtype.\n\nExample\n\np3 = Point3()\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose3","page":"Variables/Factors","title":"RoME.Pose3","text":"struct Pose3 <: InferenceVariable\n\nPose3 is currently a Euler angle mechanization of three Euclidean translations and three Circular rotation.\n\nFuture:\n\nWork in progress on AMP3D for proper non-Euler angle on-manifold operations.\nTODO the AMP upgrade is aimed at resolving 3D to Quat/SE3/SP3 – current Euler angles will be replaced\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"note: Note\nPlease open an issue with JuliaRobotics/RoME.jl for specific requests, problems, or suggestions. Contributions are also welcome. There might be more variable types in Caesar/RoME/IIF not yet documented here.","category":"page"},{"location":"concepts/available_varfacs/#Factors-in-Caesar.jl","page":"Variables/Factors","title":"Factors in Caesar.jl","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"You can check for the latest factor types by running the following in your terminal:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"using RoME, Caesar\nprintln(\"- Singletons (priors): \")\nprintln.(sort(string.(subtypes(IIF.AbstractPrior))));\nprintln(\"- Pairwise (variable constraints): \")\nprintln.(sort(string.(subtypes(IIF.AbstractRelativeRoots))));\nprintln(\"- Pairwise (variable minimization constraints): \")\nprintln.(sort(string.(subtypes(IIF.AbstractRelativeMinimize))));","category":"page"},{"location":"concepts/available_varfacs/#Priors-(Absolute-Data)","page":"Variables/Factors","title":"Priors (Absolute Data)","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Defaults in IncrementalInference.jl:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Prior\nPartialPrior","category":"page"},{"location":"concepts/available_varfacs/#IncrementalInference.Prior","page":"Variables/Factors","title":"IncrementalInference.Prior","text":"struct Prior{T<:(SamplableBelief)} <: AbstractPrior\n\nDefault prior on all dimensions of a variable node in the factor graph. 
Prior is not recommended when non-Euclidean dimensions are used in variables.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#IncrementalInference.PartialPrior","page":"Variables/Factors","title":"IncrementalInference.PartialPrior","text":"struct PartialPrior{T<:(SamplableBelief), P<:Tuple} <: AbstractPrior\n\nPartial prior belief (absolute data) on any variable, given <:SamplableBelief and which dimensions of the intended variable.\n\nNotes\n\nIf using AMP.ManifoldKernelDensity, don't double partial. Only define the partial in this PartialPrior container. \nFuture TBD, consider using AMP.getManifoldPartial for more general abstraction.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Some of the most common priors (unary factors) in Caesar.jl/RoME.jl include:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"PriorPolar\nPriorPoint2\nPriorPose2\nPriorPoint3\nPriorPose3","category":"page"},{"location":"concepts/available_varfacs/#RoME.PriorPolar","page":"Variables/Factors","title":"RoME.PriorPolar","text":"struct PriorPolar{T1<:(SamplableBelief), T2<:(SamplableBelief)} <: AbstractPrior\n\nPrior belief on any Polar related variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPoint2","page":"Variables/Factors","title":"RoME.PriorPoint2","text":"struct PriorPoint2{T<:(SamplableBelief)} <: AbstractPrior\n\nDirection observation information of a Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPose2","page":"Variables/Factors","title":"RoME.PriorPose2","text":"struct PriorPose2{T<:(SamplableBelief)} <: AbstractPrior\n\nIntroduce direct observations on all dimensions of a Pose2 variable:\n\nExample:\n\nPriorPose2( MvNormal([10; 10; pi/6.0], Matrix(Diagonal([0.1;0.1;0.05].^2))) )\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPoint3","page":"Variables/Factors","title":"RoME.PriorPoint3","text":"struct PriorPoint3{T} <: AbstractPrior\n\nDirection observation information of a Point3 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPose3","page":"Variables/Factors","title":"RoME.PriorPose3","text":"struct PriorPose3{T<:(SamplableBelief)} <: AbstractPrior\n\nDirect observation information of Pose3 variable type.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#Relative-Likelihoods-(Relative-Data)","page":"Variables/Factors","title":"Relative Likelihoods (Relative Data)","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Defaults in IncrementalInference.jl:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"LinearRelative","category":"page"},{"location":"concepts/available_varfacs/#IncrementalInference.LinearRelative","page":"Variables/Factors","title":"IncrementalInference.LinearRelative","text":"struct LinearRelative{N, T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nDefault linear offset between two scalar variables.\n\nX_2 = X_1 + η_Z\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"Existing n-ary factors in Caesar.jl/RoME.jl/IIF.jl 
include:","category":"page"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"PolarPolar\nPoint2Point2\nPose2Point2\nPose2Point2Bearing\nPose2Point2BearingRange\nPose2Point2Range\nPose2Pose2\nDynPoint2VelocityPrior\nDynPoint2DynPoint2\nVelPoint2VelPoint2\nPoint2Point2Velocity\nDynPose2VelocityPrior\nVelPose2VelPose2\nDynPose2Pose2\nPose3Pose3\nPriorPose3ZRP\nPose3Pose3XYYaw","category":"page"},{"location":"concepts/available_varfacs/#RoME.PolarPolar","page":"Variables/Factors","title":"RoME.PolarPolar","text":"struct PolarPolar{T1<:(SamplableBelief), T2<:(SamplableBelief)} <: AbstractRelativeMinimize\n\nLinear offset factor of IIF.SamplableBelief between two Polar variables.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Point2Point2","page":"Variables/Factors","title":"RoME.Point2Point2","text":"struct Point2Point2{D<:(SamplableBelief)} <: AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Point2","page":"Variables/Factors","title":"RoME.Pose2Point2","text":"struct Pose2Point2{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nBearing and Range constraint from a Pose2 to Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Point2Bearing","page":"Variables/Factors","title":"RoME.Pose2Point2Bearing","text":"struct Pose2Point2Bearing{B<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nSingle dimension bearing constraint from Pose2 to Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Point2BearingRange","page":"Variables/Factors","title":"RoME.Pose2Point2BearingRange","text":"mutable struct Pose2Point2BearingRange{B<:(SamplableBelief), R<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nBearing and Range constraint from a Pose2 to Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Point2Range","page":"Variables/Factors","title":"RoME.Pose2Point2Range","text":"struct Pose2Point2Range{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nRange only measurement from Pose2 to Point2 variable.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose2Pose2","page":"Variables/Factors","title":"RoME.Pose2Pose2","text":"struct Pose2Pose2{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nRigid transform between two Pose2's, assuming (x,y,theta).\n\nCalcuated as:\n\nbeginaligned\nhatq=exp_pX_m\nX = log_q hatq\nX^i = mathrmvee(q X)\nendaligned\n\nwith: mathcal M= mathrmSE(2) Special Euclidean group\np and q in mathcal M the two Pose2 points\nthe measurement vector X_m in T_p mathcal M\nand the error vector X in T_q mathcal M\nX^i coordinates of X\n\nDevNotes\n\nMaybe with Manifolds.jl, {T <: IIF.SamplableBelief, S, R, P}\n\nRelated\n\nPose3Pose3, Point2Point2, MutablePose2Pose2Gaussian, DynPose2, IMUDeltaFactor\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPoint2VelocityPrior","page":"Variables/Factors","title":"RoME.DynPoint2VelocityPrior","text":"mutable struct DynPoint2VelocityPrior{T<:(SamplableBelief)} <: AbstractPrior\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPoint2DynPoint2","page":"Variables/Factors","title":"RoME.DynPoint2DynPoint2","text":"mutable struct DynPoint2DynPoint2{T<:(SamplableBelief)} <: 
AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.VelPoint2VelPoint2","page":"Variables/Factors","title":"RoME.VelPoint2VelPoint2","text":"mutable struct VelPoint2VelPoint2{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Point2Point2Velocity","page":"Variables/Factors","title":"RoME.Point2Point2Velocity","text":"mutable struct Point2Point2Velocity{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPose2VelocityPrior","page":"Variables/Factors","title":"RoME.DynPose2VelocityPrior","text":"mutable struct DynPose2VelocityPrior{T1, T2} <: AbstractPrior\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.VelPose2VelPose2","page":"Variables/Factors","title":"RoME.VelPose2VelPose2","text":"struct VelPose2VelPose2{T1<:(SamplableBelief), T2<:(SamplableBelief)} <: AbstractManifoldMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.DynPose2Pose2","page":"Variables/Factors","title":"RoME.DynPose2Pose2","text":"mutable struct DynPose2Pose2{T<:(SamplableBelief)} <: AbstractRelativeMinimize\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose3Pose3","page":"Variables/Factors","title":"RoME.Pose3Pose3","text":"struct Pose3Pose3{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nRigid transform factor between two Pose3 compliant variables.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.PriorPose3ZRP","page":"Variables/Factors","title":"RoME.PriorPose3ZRP","text":"struct PriorPose3ZRP{T1<:(SamplableBelief), T2<:(SamplableBelief)} <: AbstractPrior\n\nPartial prior belief on Z, Roll, and Pitch of a Pose3.\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/#RoME.Pose3Pose3XYYaw","page":"Variables/Factors","title":"RoME.Pose3Pose3XYYaw","text":"struct Pose3Pose3XYYaw{T<:(SamplableBelief)} <: AbstractManifoldMinimize\n\nPartial factor between XY and Yaw of two Pose3 variables.\n\nwR2 = wR1*1R2 = wR1*(1Rψ*Rθ*Rϕ)\nwRz = wR1*1Rz\nzRz = wRz \\ wR(Δψ)\n\nM_R = SO(3)\nδ(α,β,γ) = vee(M_R, R_0, log(M_R, R_0, zRz))\n\nM = SE(3)\np0 = identity_element(M)\nδ(x,y,z,α,β,γ) = vee(M, p0, log(M, p0, zRz))\n\n\n\n\n\n","category":"type"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":" ","category":"page"},{"location":"concepts/available_varfacs/#Extending-Caesar-with-New-Variables-and-Factors","page":"Variables/Factors","title":"Extending Caesar with New Variables and Factors","text":"","category":"section"},{"location":"concepts/available_varfacs/","page":"Variables/Factors","title":"Variables/Factors","text":"A question that frequently arises is how to design custom variables and factors to solve a specific type of graph. One strength of Caesar is the ability to incorporate new variables and factors at will. Please refer to Adding Factors for more information on creating your own factors.","category":"page"},{"location":"concepts/mmisam_alg/#Multimodal-incremental-Smoothing-and-Mapping-Algorithm","page":"Non-Gaussian Algorithm","title":"Multimodal incremental Smoothing and Mapping Algorithm","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"note: Note\nMajor refactoring of documentation under way 2020Q1. 
Much of the previous text has been repositioned and is being improved. See references for details and check back here for updates in the coming weeks.","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Caesar.jl uses an approximate sum-product inference algorithm (mmiSAM); a full description of how it works is still being written. Until then, see the related literature for more details.","category":"page"},{"location":"concepts/mmisam_alg/#Joint-Probability","page":"Non-Gaussian Algorithm","title":"Joint Probability","text":"","category":"section"},{"location":"concepts/mmisam_alg/#General-Factor-Graph-–-i.e.-non-Gaussian-and-multi-modal","page":"Non-Gaussian Algorithm","title":"General Factor Graph – i.e. non-Gaussian and multi-modal","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"(Image: mmfgbt)","category":"page"},{"location":"concepts/mmisam_alg/#Inference-on-Bayes/Junction/Elimination-Tree","page":"Non-Gaussian Algorithm","title":"Inference on Bayes/Junction/Elimination Tree","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"See tree solve video here.","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"(Image: Bayes/Junction tree)","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"The algorithm combats the so-called curse of dimensionality on the basis of eight principles outlined in the thesis work \"Multimodal and Inertial Sensor Solutions to Navigation-type Factor Graphs\".","category":"page"},{"location":"concepts/mmisam_alg/#Chapman-Kolmogorov-(Belief-Propagation-/-Sum-product)","page":"Non-Gaussian Algorithm","title":"Chapman-Kolmogorov (Belief Propagation / Sum-product)","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"The main computational effort is to focus compute cycles on dominant modes exhibited by the data, by dropping low likelihood modes (although not indefinitely) and not sacrificing the accuracy of individual major features. ","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"D. Fourie, A. T. Espinoza, M. Kaess, and J. J. Leonard, “Characterizing marginalization and incremental operations on the Bayes tree,” in International Workshop on Algorithmic Foundations of Robotics (WAFR), 2020, submitted, under review.","category":"page"},{"location":"concepts/mmisam_alg/#Focussing-Computation-on-Tree","page":"Non-Gaussian Algorithm","title":"Focussing Computation on Tree","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Link to new dedicated Bayes tree pages. 
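As a small taste of how the recycling features described in the following sections surface in the user API, the same solve call is simply repeated as the graph grows and unaffected parts of the tree are reused (a minimal sketch of the workflow only; the comments state the intent, not guaranteed behavior for every graph):

tree = solveTree!(fg)       # first solve builds the Bayes (junction) tree
# ... extend the graph with further addVariable!/addFactor! calls ...
tree = solveTree!(fg)       # later solves can recycle cliques unaffected by the new data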
The following sections describe different elements of clique recycling.","category":"page"},{"location":"concepts/mmisam_alg/#Incremental-Updates","page":"Non-Gaussian Algorithm","title":"Incremental Updates","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Recycling computations similar to iSAM2, with the option to complete a future downward pass.","category":"page"},{"location":"concepts/mmisam_alg/#Fixed-Lag-operation-(out-marginalization)","page":"Non-Gaussian Algorithm","title":"Fixed-Lag operation (out-marginalization)","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Active user (likely) computational limits on message passing, as well as mixed priority solving.","category":"page"},{"location":"concepts/mmisam_alg/#Federated-Tree-Solution-(Multi-session/agent)","page":"Non-Gaussian Algorithm","title":"Federated Tree Solution (Multi-session/agent)","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Tentatively see the multisession page.","category":"page"},{"location":"concepts/mmisam_alg/#Clique-State-Machine","page":"Non-Gaussian Algorithm","title":"Clique State Machine","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"The CSM is used to govern the inference process within a clique. A FunctionalStateMachine.jl implementation is used to allow for initialization / incremental-recycling / fixed-lag solving, and will soon support federated branch solving as well as unidirectional message passing for fixed-lead operations. See the following video for an auto-generated – using csmAnimate – concurrent clique solving example.","category":"page"},{"location":"concepts/mmisam_alg/#Sequential-Nested-Gibbs-Method","page":"Non-Gaussian Algorithm","title":"Sequential Nested Gibbs Method","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Current default inference method. See [Fourie et al., IROS 2016]","category":"page"},{"location":"concepts/mmisam_alg/#Convolution-Approximation-(Quasi-Deterministic)","page":"Non-Gaussian Algorithm","title":"Convolution Approximation (Quasi-Deterministic)","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Convolution operations are used to implement the numerical computation of the probabilistic chain rule:","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"P(A, B) = P(A | B) P(B)","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Proposal distributions are computed by means of an (analytical or numerical – i.e. 
\"algebraic\") factor which defines a residual function:","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"delta S times Eta rightarrow mathcalR","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"where S times Eta is the domain such that theta_i in S eta sim P(Eta), and P(cdot) is a probability.","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Please follow, a more detailed description is on the convolutional computations page.","category":"page"},{"location":"concepts/mmisam_alg/#Stochastic-Product-Approx-of-Infinite-Functionals","page":"Non-Gaussian Algorithm","title":"Stochastic Product Approx of Infinite Functionals","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"See mixed-manifold products presented in the literature section.","category":"page"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"writing in progress","category":"page"},{"location":"concepts/mmisam_alg/#Mixture-Parametric-Method","page":"Non-Gaussian Algorithm","title":"Mixture Parametric Method","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Work In Progress – deferred for progress on full functional methods, but likely to have Gaussian legacy algorithm with mixture model expansion added in the near future.","category":"page"},{"location":"concepts/mmisam_alg/#Chapman-Kolmogorov","page":"Non-Gaussian Algorithm","title":"Chapman-Kolmogorov","text":"","category":"section"},{"location":"concepts/mmisam_alg/","page":"Non-Gaussian Algorithm","title":"Non-Gaussian Algorithm","text":"Work in progress","category":"page"},{"location":"concepts/using_manifolds/#On-Manifold-Operations","page":"Using Manifolds.jl","title":"On-Manifold Operations","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Caesar.jl and libraries have adopted JuliaManifolds/Manifolds.jl as foundation for developing the algebraic operations used. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The Community has been developing high quality documentation for Manifolds.jl, and we encourage the interested reader to learn and use everything available there.","category":"page"},{"location":"concepts/using_manifolds/#Separate-Manifold-Beliefs-Page","page":"Using Manifolds.jl","title":"Separate Manifold Beliefs Page","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"See building a Manifold Kernel Density or for more information.","category":"page"},{"location":"concepts/using_manifolds/#Why-Manifolds.jl","page":"Using Manifolds.jl","title":"Why Manifolds.jl","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"There is much to be said about how and why Manifolds.jl is the right decision for building a next-gen factor graph solver. 
We believe the future will show that mathematicians are way ahead of the curve, and that adopting a manifold approach will be essentially the only way to develop the required mathematical operations in Caesar.jl for the foreseeable future.","category":"page"},{"location":"concepts/using_manifolds/#Are-Manifolds-Difficult?-No.","page":"Using Manifolds.jl","title":"Are Manifolds Difficult? No.","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Do you need a math degree to be able to use Manifolds.jl? No, you don't, since Caesar.jl and related packages have already packaged many of the common functions and factors you need to get going. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"This page is meant to open the door for readers to learn more about how things work under the hood, and empower the Community to rapidly develop upon existing work. This page is also intended to show that the Caesar.jl related packages are being developed with a strong focus on consolidation, single-definition functionality, and serious cross-discipline considerations.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"If you are looking for rapid help or more expertise on a particular issue, consider reaching out by opening Issues or connecting to the ongoing chats in the Slack Channel.","category":"page"},{"location":"concepts/using_manifolds/#What-Are-Manifolds","page":"Using Manifolds.jl","title":"What Are Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"If you are a newcomer to the term Manifold and want to learn more, fear not, even though your first search results might be somewhat disorienting. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The rest of this page is meant to introduce the basics, and point you to handy resources. Caesar.jl and NavAbility support an open Community and are upstreaming improvements to Manifolds.jl, including code updates and documentation improvements.","category":"page"},{"location":"concepts/using_manifolds/#'One-Page'-Summary-of-Manifolds","page":"Using Manifolds.jl","title":"'One Page' Summary of Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Imagine you have a sheet of paper and you draw with a pencil a short line segment on the page. Now draw a second line segment from the end of the first. That should be pretty easy on a flat surface, right?","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"When the piece of paper is lying flat on the table, you have a line in the Euclidean(2) manifold, and you can easily assign [x,y] coordinates to describe these lines or vectors. Note that coordinates here is a precise technical term.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"If you roll the paper into a cylinder... well now you have line segments on a cylindrical manifold. The question is, how to conduct mathematical operations concisely and consistently independent of the shape of your manifold? 
And, how to 'unroll' the paper for simple computations on a locally flat surface.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"How far can the math go before there just isn't a good recipe for writing down generic operations? It turns out a few smart people have been working to solve this, and the keyword here is Manifold.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"If you are drinking some coffee right now, then you are moving the cup in Euclidean(3) space, that is, you assume the motion is in flat coordinates [x;y;z]. A more technical way to say that is that the Euclidean manifold has zero curvature. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"What if you are concerned with the orientation of the cup too (as in not spilling the hot contents everywhere)? Then you might actually want to work on the SpecialEuclidean(3) manifold – that is, 3 degrees of translational freedom and 3 degrees of rotational freedom. You might have heard of Lie Groups and Lie Algebras; well, that is exactly it: Lie Groups are a special set of Group Manifolds with associated operations that are already supported by JuliaManifolds/Manifolds.jl.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Things are a little easier for a robot traveling around on a flat 2D surface. If your robot is moving around with coordinates x, y, θ, well then you are working with the coordinates of the SpecialEuclidean(2) manifold. There is more to say on how the coordinates x, y, θ get converted into the se(2) Lie algebra, and that gets converted into a Lie Group element – i.e. (x, y, RotMat(θ)). More on that later.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Perhaps you are interested in relativistic effects where time as the fourth dimension is of interest; well then, the Minkowski space provides Group and Manifold constructs for that – actually Minkowski falls under the supported Lorentz Manifolds.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The point here is that the math for drawing line segments in each of these manifolds above is almost exactly the same, thanks to the abstractions that have already been developed. 
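To make this concrete, here is a minimal sketch of the idea, assuming only that the Manifolds.jl package is installed (the specific points and tangent values below are illustrative, not taken from this page). The same generic calls work on two very different manifolds:

using Manifolds

M_flat   = TranslationGroup(2)   # flat 2D translations
M_curved = Sphere(2)             # curved unit sphere embedded in R^3

# flat case: walk along a tangent vector and measure the result
p = [0.0, 0.0]
X = [1.0, 2.0]
q = exp(M_flat, p, X)            # point reached by following X from p
d = distance(M_flat, p, q)       # ordinary Euclidean length here

# curved case: exactly the same calls, now wrapping onto the sphere
p2 = [1.0, 0.0, 0.0]
X2 = [0.0, 0.5, 0.0]             # a tangent vector at p2
q2 = exp(M_curved, p2, X2)       # geodesic walk on the sphere
Y2 = log(M_curved, p2, q2)       # recover the tangent vector from q2

Only the manifold object changes; the exp, log, and distance calls stay the same.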
And, many more powerful constructs exist which will become more apparent as you continue to work with Manifolds.","category":"page"},{"location":"concepts/using_manifolds/#7-Things-to-know-First","page":"Using Manifolds.jl","title":"7 Things to know First","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"As a robotics, navigation, or control person who wants to get started, you need to know what the following terms mean:","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Q1) What are manifold points, tangent vectors, and user coordinates,\nQ2) What does the logarithm map of a manifold do,\nQ3) What does the exponential map of a manifold do,\nQ4) What do the vee and hat operations do,\nQ5) What is the difference between Riemannian and Group manifolds,\nQ6) Is a retraction the same as the exponential map,\nQ7) Is a projection the same as a logarithm map,","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"We know it sounds like a lot, but the point of this paragraph is that if you are able to answer these seven questions for yourself, then you will be empowered to venture into the math of manifolds much more easily. And, everything will begin to make sense. A lot of sense, to the point that you might agree with our assessment that JuliaManifolds/Manifolds.jl is the right direction for the future.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Although you will be able to find many answers for these seven questions in many places, our answers are listed at the bottom of this page.","category":"page"},{"location":"concepts/using_manifolds/#Manifold-Tutorials","page":"Using Manifolds.jl","title":"Manifold Tutorials","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The rest of this page is devoted to showing you how to use the math and write your own code to do new things beyond what Caesar.jl can already do. If you are willing to share any contributions, please do so by opening pull requests against the related repos.","category":"page"},{"location":"concepts/using_manifolds/#Using-Manifolds-in-Factors","page":"Using Manifolds.jl","title":"Using Manifolds in Factors","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The best way to show this is to dive straight into a factor that actually uses a Manifolds mechanization; RoME.Pose2Pose2 is a fairly straightforward example. 
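As a quick, hedged sketch of how such a factor typically appears in practice (standard RoME/IncrementalInference calls; the noise values below are purely illustrative and not taken from this page):

using RoME, Distributions, LinearAlgebra

fg = initfg()
addVariable!(fg, :x0, Pose2)
addVariable!(fg, :x1, Pose2)

# pin the first pose with a prior, then relate the two poses with a Pose2Pose2 factor
addFactor!(fg, [:x0], PriorPose2(MvNormal(zeros(3), diagm([0.1, 0.1, 0.01]))))
addFactor!(fg, [:x0, :x1], Pose2Pose2(MvNormal([1.0, 0.0, 0.0], diagm([0.1, 0.1, 0.01]))))

solveTree!(fg)   # numerical estimates for :x0 and :x1 can now be queried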
This factor gets used for rigid transforms on a 2D plane, with coordinates x, y, θ as alluded to above.","category":"page"},{"location":"concepts/using_manifolds/#A-Tutorial-on-Rotations","page":"Using Manifolds.jl","title":"A Tutorial on Rotations","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"note: Note\nWork in progress, Upstream Tutorial","category":"page"},{"location":"concepts/using_manifolds/#A-Tutorial-on-2D-Rigid-Transforms","page":"Using Manifolds.jl","title":"A Tutorial on 2D Rigid Transforms","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"note: Note\nWork in progress, Upstream Tutorial","category":"page"},{"location":"concepts/using_manifolds/#Existing-Manifolds","page":"Using Manifolds.jl","title":"Existing Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The most popular Manifolds used in Caesar.jl related packages are:","category":"page"},{"location":"concepts/using_manifolds/#Group-Manifolds","page":"Using Manifolds.jl","title":"Group Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"TranslationGroup(N) (future work will relax to Euclidean(N)).\nSpecialOrthogonal(N).\nSpecialEuclidean(N).\n_CircleEuclid LEGACY, TODO.\nAMP.SE2_E2 LEGACY, TODO.","category":"page"},{"location":"concepts/using_manifolds/#Riemannian-Manifolds-(Work-in-progress)","page":"Using Manifolds.jl","title":"Riemannian Manifolds (Work in progress)","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Sphere(N) WORK IN PROGRESS.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"note: Note\nCaesar.jl encourages the JuliaManifolds approach to defining new manifolds, which can then readily be used for Caesar.jl related operations.","category":"page"},{"location":"concepts/using_manifolds/#Creating-a-new-Manifold","page":"Using Manifolds.jl","title":"Creating a new Manifold","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"JuliaManifolds/Manifolds.jl is designed to make it as easy as possible to define your own manifold and then get all the benefits of the Manifolds.jl ecosystem. Follow the documentation there to make your own manifold, which can then readily be used with all the features of both JuliaManifolds as well as the Caesar.jl related packages.","category":"page"},{"location":"concepts/using_manifolds/#seven_mani_answers","page":"Using Manifolds.jl","title":"Answers to 7 Questions","text":"","category":"section"},{"location":"concepts/using_manifolds/#Q1)-What-are-Point,-Tangents,-Coordinates","page":"Using Manifolds.jl","title":"Q1) What are Point, Tangents, Coordinates","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"A manifold M is a collection of points that together create the given space. Points are like round sprinkles on the donut. The representation of points will vary from manifold to manifold. Sometimes it is even possible to have different representations for the same point on a manifold. 
These are usually denoted as p.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"A tangent vector (we prefer tangents for clarity) is a vector X that emanates from a point on the manifold, tangential to the manifold curvature. A vector lives in the tangent space of the manifold, a locally flat region around a point, X ∈ T_p M. On the donut, imagine a rod-shaped sprinkle stuck along the tangent of the surface at a particular point p. The tangent space is the collection of all possible tangents at p. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Coordinates are a user-defined property that uses the Euclidean nature of the tangent space at point p to operate as a regular linear space. Coordinates are just a list of the independent coordinate dimensions of the tangent space values collected together. Read this part carefully, as it can easily be confused with a conventional tangent vector in a regular Euclidean space. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"For example, a tangent vector to the Euclidean(2) manifold, at the origin point (0, 0), is what you likely are familiar with from school as a \"vector\" (not the coordinates, although that happens to be the same thing in the trivial case). For Euclidean space, a vector from point p with coordinates [x, y] looks like the line segment between points p and q on the underlying manifold. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"This trivial overlapping of \"vectors\" in the Euclidean Manifold, in a tangent space around p, and in coordinates for that tangent space is no longer trivial when the manifold has curvature.","category":"page"},{"location":"concepts/using_manifolds/#Q2)-What-is-the-Logarithm-map","page":"Using Manifolds.jl","title":"Q2) What is the Logarithm map","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The logarithm X = logmap(M,p,q) computes, based at point p, the tangent vector X on the tangent plane T_p M from p. In other words, imagine a string following the curve of a manifold from p to q; pick up that string from q while holding p firm, until the string is flat against the tangent space emanating from p. The logarithm is the opposite of the exponential map. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Multiple logmap interpretations exist, for example, in the case of SpecialEuclidean(N) there are multiple definitions for ⊕ and ⊖, see [2.15]. When using a library, it is worth testing how logmap and expmap are computed (away from the identity element for Groups).","category":"page"},{"location":"concepts/using_manifolds/#Q3)-What-is-the-Exponential-map","page":"Using Manifolds.jl","title":"Q3) What is the Exponential map","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The exponential map does the opposite of the logarithm. Imagine a tangent vector X emanating from point p. 
The length and direction of X can be wrapped onto the curvature of the manifold to form a line on the manifold surface.","category":"page"},{"location":"concepts/using_manifolds/#Q4)-What-does-vee/hat-do","page":"Using Manifolds.jl","title":"Q4) What does vee/hat do","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"vee is an operation that converts a tangent vector representation into a coordinate representation. For example, Lie algebra elements are tangent vector elements, so vee([0 -w; w 0]) = w. And vice versa for hat(w) = [0 -w; w 0], which goes from coordinates to tangent vectors.","category":"page"},{"location":"concepts/using_manifolds/#Q5)-What-Riemannian-vs.-Group-Manifolds","page":"Using Manifolds.jl","title":"Q5) What Riemannian vs. Group Manifolds","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Groups are mathematical structures which often fit well inside the manifold way of working. For example, in robotics, Lie Groups are popular under SpecialEuclidean(N) <: AbstractGroupManifold. Groups also have a well-defined action. Most prominently for our usage, groups are sets of points for which there exists an identity point. Riemannian manifolds are more general than Lie groups; specifically, Riemannian manifolds do not have an identity point. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"An easy example is that the Euclidean(N) manifold does not have an identity element, since what we know as (0, 0) is actually a coordinate base point for the local tangent space, which just happens to look the same as the underlying Euclidean(N) manifold. The TranslationGroup(N) exists as an additional structure over the Euclidean(N) space which has a defined identity element as well as defined operations on points.","category":"page"},{"location":"concepts/using_manifolds/#Q6)-Retraction-vs.-Exp-map","page":"Using Manifolds.jl","title":"Q6) Retraction vs. Exp map","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Retractions are numerically efficient approximations to convert a tangent vector into a point on the manifold. The exponential map is the theoretically precise retraction, but may well be computationally expensive beyond the need for most applications.","category":"page"},{"location":"concepts/using_manifolds/#Q7)-Projection-vs.-Log-map","page":"Using Manifolds.jl","title":"Q7) Projection vs. Log map","text":"","category":"section"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"The term projection can be somewhat ambiguous between references. In Manifolds.jl, projections either project a point in the embedding to a point on the manifold, or a vector from the embedding onto a tangent space at a certain point. ","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"Confusion can easily happen in cases where there is no ambient space around a particular manifold. 
Then the term projection may be moot.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"In Manifolds.jl, an inverse retraction is an approximate logmap of a point up from the manifold onto a tangent space – i.e. not a projection. It is important not to confuse a point on the manifold with a point in the ambient space when thinking about the term projection.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"It is best to make sure you know which one is being used in any particular situation.","category":"page"},{"location":"concepts/using_manifolds/","page":"Using Manifolds.jl","title":"Using Manifolds.jl","text":"note: Note\nFor a slightly deeper dive into the relation between embedding, ambient space, and projections, see the background conversation here.","category":"page"},{"location":"examples/adding_variables_factors/#Variable/Factor-Considerations","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"","category":"section"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"A couple of important points:","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"You do not need to modify or insert your new code into Caesar/RoME/IncrementalInference source code libraries – they can be created and run anywhere on-the-fly!\nAs long as the factors exist in the working space when the solver is run, the factors are automatically used – this is possible due to Julia's multiple dispatch design\nCaesar.jl is designed to allow you to add new variables and factors to your own independent repository and incorporate them at will at compile-time or even run-time\nResidual function definitions for new factor types use a callable struct (a.k.a. functor) architecture to simultaneously allow: \nMultiple dispatch (i.e. 'polymorphic' behavior)\nMeta-data and in-place memory storage for advanced and performant code\nAn outside callback implementation style\nIn most robotics scenarios, there is no need for new variables or factors:\nVariables have various mechanisms that allow you to attach data to them, e.g. raw sensory data or identified April tags, so you do not need to create a new variable type just to store data\nNew variables are required only if you are representing a new state - TODO: Example of needed state\nNew factors are needed if:\nYou need to represent a constraint for a variable (known as a singleton) and that constraint type doesn't exist\nYou need to represent a constraint between two variables and that constraint type doesn't exist","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"All factors inherit from one of the following types, depending on their function:","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"AbstractPrior is for priors (unary factors) that provide an absolute constraint for a single variable. 
A simple example of this is an absolute GPS prior, or equivalently a (0, 0, 0) starting location in a Pose2 scenario.\nRequires: A getSample function\nAbstractRelativeMinimize uses Optim.jl and is for relative factors that introduce an algebraic relationship between two or more variables. A simple example of this is an odometry factor between two pose variables, or a range factor indicating the range between a pose and another variable.\nRequires: A getSample function and a residual function definition\nThe minimize suffix specifies that the residual function of this factor will be enforced by numerical minimization (find me the minimum of this function)\n[NEW] AbstractManifoldMinimize uses Manopt.jl.","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"How do you decide which to use?","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"If you are creating factors for world-frame information that will be tied to a single variable, inherit from <:AbstractPrior\nGPS coordinates should be priors\nIf you are creating factors for local-frame relationships between variables, inherit from IIF.AbstractRelativeMinimize\nOdometry and bearing deltas should be introduced as pairwise factors and should be local frame","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"TBD: Users should start with IIF.AbstractRelativeMinimize, discuss why and when they should promote their factors to IIF.AbstractRelativeRoots.","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"note: Note\nAbstractRelativeMinimize does not imply that the overall inference algorithm only minimizes an objective function. The MM-iSAM algorithm is built around fixed-point analysis. 
Minimization is used here to locally enforce the residual function.","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"What you need to build in the new factor:","category":"page"},{"location":"examples/adding_variables_factors/","page":"Variable/Factor Considerations","title":"Variable/Factor Considerations","text":"A struct for the factor itself\nA sampler function to return measurements from the random distributions\nIf you are building a <:AbstractRelative you need to define a residual function to introduce the relative algebraic relationship between the variables\nMinimization function should be lower-bounded and smooth\nA packed type of the factor which must be named Packed[Factor name], and allows the factor to be packed/transmitted/unpacked\nSerialization and deserialization methods\nThese are convert functions that pack and unpack the factor (which may be highly complex) into serialization-compatible formats\nAs the factors are mostly comprised of distributions (of type SampleableBelief), JSON3.jl is used for serialization.","category":"page"},{"location":"concepts/2d_plotting/#plotting_2d","page":"Plotting (2D)","title":"Plotting 2D","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Once the graph has been built, 2D plot visualizations are provided by RoMEPlotting.jl and KernelDensityEstimatePlotting.jl. These visualization tools are readily modifiable to highlight various aspects of mobile platform navigation.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"note: Note\nPlotting packages can be installed separately.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"The major 2D plotting functions in RoMEPlotting.jl and KernelDensityEstimatePlotting.jl are:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotSLAM2D,\nplotSLAM2DPoses,\nplotSLAM2DLandmarks,\nplotPose,\nplotBelief\nLEGACY plotKDE\nplotLocalProduct,\nPDF, PNG, SVG,\nhstack, vstack.","category":"page"},{"location":"concepts/2d_plotting/#Example-Plot-SLAM-2D","page":"Plotting (2D)","title":"Example Plot SLAM 2D","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"The simplest example for visualizing a 2D robot trajectory (such as after first running the Hexagonal 2D SLAM example) is shown below.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Assuming some fg<:AbstractDFG has been loaded/constructed:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"# load the plotting functionality\nusing RoME, RoMEPlotting\n\n# generate some factor graph with numerical values\nfg = generateGraph_Hexagonal()\nsolveTree!(fg)\n\n# or fg = loadDFG(\"somepath\")\n\n# slam2D plot\npl = plotSLAM2D(fg, drawhist=true, drawPoints=false)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotSLAM2D","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotSLAM2D","page":"Plotting (2D)","title":"RoMEPlotting.plotSLAM2D","text":"plotSLAM2D(\n fgl;\n solveKey,\n from,\n to,\n minnei,\n meanmax,\n posesPPE,\n landmsPPE,\n recalcPPEs,\n lbls,\n scale,\n x_off,\n y_off,\n drawTriads,\n dyadScale,\n levels,\n drawhist,\n MM,\n xmin,\n xmax,\n ymin,\n ymax,\n showmm,\n window,\n point_size,\n line_width,\n regexLandmark,\n regexPoses,\n variableList,\n manualColor,\n drawPoints,\n pointsColor,\n drawContour,\n drawEllipse,\n ellipseColor,\n title,\n aspect_ratio\n)\n\n\n2D plot of both poses and landmarks contained in factor graph. Assuming poses and landmarks are labeled :x1, :x2, ... and :l0, :l1, ..., respectively. The range of numbers to include can be controlled with from and to along with other keyword functionality for manipulating the plot.\n\nNotes\n\nAssumes :l1, :l2, ... for landmarks – \nCan increase default Gadfly plot size (for JSSVG in browser): Gadfly.set_default_plot_size(35cm,20cm).\nEnable or disable features such as the covariance ellipse with keyword drawEllipse=true.\n\nDevNotes\n\nTODO update to use e.g. tags=[:LANDMARK],\nTODO fix drawHist,\nTODO deprecate, showmm, spscale.\n\nExamples:\n\nfg = generateGraph_Hexagonal()\nplotSLAM2D(fg)\nplotSLAM2D(fg, drawPoints=false)\nplotSLAM2D(fg, contour=false, drawEllipse=true)\nplotSLAM2D(fg, contour=false, title=\"SLAM result 1\")\n\n# or load a factor graph\nfg_ = loadDFG(\"somewhere.tar.gz\")\nplotSLAM2D(fg_)\n\nRelated\n\nplotSLAM2DPoses, plotSLAM2DLandmarks, plotPose, plotBelief \n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#Plot-Covariance-Ellipse-and-Points","page":"Plotting (2D)","title":"Plot Covariance Ellipse and Points","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"While the Caesar.jl framework is focussed on non-Gaussian inference, it is frequently desirable to relate the results to a more familiar covariance ellipse, and native support for this exists:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotSLAM2D(fg, drawContour=false, drawEllipse=true, drawPoints=true)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/#Plot-Poses-or-Landmarks","page":"Plotting (2D)","title":"Plot Poses or Landmarks","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Lower down utility functions are used to plot poses and landmarks separately before joining the Gadfly layers.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotSLAM2DPoses\nplotSLAM2DLandmarks","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotSLAM2DPoses","page":"Plotting (2D)","title":"RoMEPlotting.plotSLAM2DPoses","text":"plotSLAM2DPoses(\n fg;\n solveKey,\n regexPoses,\n from,\n to,\n variableList,\n meanmax,\n ppe,\n recalcPPEs,\n lbls,\n scale,\n x_off,\n y_off,\n drawhist,\n spscale,\n dyadScale,\n drawTriads,\n drawContour,\n levels,\n contour,\n line_width,\n drawPoints,\n pointsColor,\n drawEllipse,\n ellipseColor,\n manualColor\n)\n\n\n2D plot of all poses, assuming poses are labeled from `::Symbol type :x0, :x1, ..., :xn. Use to and from to limit the range of numbers n to be drawn. The underlying histogram can be enabled or disabled, and the size of maximum-point belief estimate cursors can be controlled with spscale.\n\nFuture:\n\nRelax to user defined pose labeling scheme, for example :p1, :p2, ...\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotSLAM2DLandmarks","page":"Plotting (2D)","title":"RoMEPlotting.plotSLAM2DLandmarks","text":"plotSLAM2DLandmarks(\n fg;\n solveKey,\n regexLandmark,\n from,\n to,\n minnei,\n variableList,\n meanmax,\n ppe,\n recalcPPEs,\n lbls,\n showmm,\n scale,\n x_off,\n y_off,\n drawhist,\n drawContour,\n levels,\n contour,\n manualColor,\n c,\n MM,\n point_size,\n drawPoints,\n pointsColor,\n drawEllipse,\n ellipseColor,\n resampleGaussianFit\n)\n\n\n2D plot of landmarks, assuming :l1, :l2, ... :ln. Use from and to to control the range of landmarks n to include.\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#Plot-Belief-Density-Contour","page":"Plotting (2D)","title":"Plot Belief Density Contour","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"KernelDensityEstimatePlotting (as used in RoMEPlotting) provides an interface to visualize belief densities as counter plots. 
Something basic might be to just show all plane pairs of this variable marginal belief:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"# Draw the KDE for x0\nplotBelief(fg, :x0)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Plotting the marginal density over say variables (x,y) in a Pose2 would be:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotBelief(fg, :x1, dims=[1;2])","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"The following example better shows some of features (via Gadfly.jl):","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"# Draw the (x,y) marginal estimated belief contour for :x0, :x2, and Lx4\npl = plotBelief(fg, [:x0; :x2; :x4], c=[\"red\";\"green\";\"blue\"], levels=2, dims=[1;2])\n\n# add a few fun layers\npl3 = plotSLAM2DPoses(fg, regexPoses=r\"x\\d\", from=3, to=3, drawContour=false, drawEllipse=true)\npl5 = plotSLAM2DPoses(fg, regexPoses=r\"x\\d\", from=5, to=5, drawContour=false, drawEllipse=true, drawPoints=false)\npl_ = plotSLAM2DPoses(fg, drawContour=false, drawPoints=false, dyadScale=0.001, to=5)\nunion!(pl.layers, pl3.layers)\nunion!(pl.layers, pl5.layers)\nunion!(pl.layers, pl_.layers)\n\n# change the plotting coordinates\npl.coord = Coord.Cartesian(xmin=-10,xmax=20, ymin=-1, ymax=25)\n\n# save the plot to SVG and giving dedicated (although optional) sizing\npl |> SVG(\"/tmp/test.svg\", 25cm, 15cm)\n\n# also display the plot live\npl","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"See function documentation for more details on API features","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotBelief","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotBelief","page":"Plotting (2D)","title":"RoMEPlotting.plotBelief","text":"plotBelief(\n fgl,\n sym;\n solveKey,\n dims,\n title,\n levels,\n fill,\n layers,\n c,\n overlay\n)\n\n\nA peneric KDE plotting function that allows marginals of higher dimensional beliefs and various keyword options.\n\nExample for Position2:\n\n\np = manikde!(Position2, [randn(2) for _ in 1:100])\nq = manikde!(Position2, [randn(2).+[5;0] for _ in 1:100])\n\nplotBelief(p)\nplotBelief(p, dims=[1;2], levels=3)\nplotBelief(p, dims=[1])\n\nplotBelief([p;q])\nplotBelief([p;q], dims=[1;2], levels=3)\nplotBelief([p;q], dims=[1])\n\nExample for Pose2:\n\n# TODO\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#Save-Plot-to-Image","page":"Plotting (2D)","title":"Save Plot to Image","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"VSCode/Juno can set plot to be opened in a browser tab instead. For scripting use-cases you can also export the image:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"using Gadfly\n# can change the default plot size\n# Gadfly.set_default_plot_size(35cm, 30cm)\n\npl |> PDF(\"/tmp/test.pdf\", 20cm, 10cm) # or PNG, SVG","category":"page"},{"location":"concepts/2d_plotting/#Save-Plot-Object-To-File","page":"Plotting (2D)","title":"Save Plot Object To File","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"It is also possible to store the whole plot container to file using JLD2.jl:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"JLD2.@save \"/tmp/myplot.jld2\" pl\n\n# and loading elsewhere\nJLD2.@load \"/tmp/myplot.jld2\" pl","category":"page"},{"location":"concepts/2d_plotting/#Interactive-Plots,-Zoom,-Pan-(Gadfly.jl)","page":"Plotting (2D)","title":"Interactive Plots, Zoom, Pan (Gadfly.jl)","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"See the following two discussions on Interactive 2D plots:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Interactivity\nInteractive-SVGs","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"note: Note\nRed and Green dyad lines represent the visualization-only assumption of X-forward and Y-left direction of Pose2. 
The inference and manifold libraries surrounding Caesar.jl are agnostic to any particular choice of reference frame alignment, such as north east down (NED) or forward left up (common in mobile robotics).","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"note: Note\nAlso see Gadfly.jl notes about hstack and vstack to combine plots side by side or vertically.","category":"page"},{"location":"concepts/2d_plotting/#Plot-Pose-Individually","page":"Plotting (2D)","title":"Plot Pose Individually","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"It is also possible to plot the belief density of a Pose2 on-manifold:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotPose(fg, :x6)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotPose","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotPose","page":"Plotting (2D)","title":"RoMEPlotting.plotPose","text":"plotPose(, pp; ...)\nplotPose(\n ,\n pp,\n title;\n levels,\n c,\n legend,\n axis,\n scale,\n overlay,\n hdl\n)\n\n\nPlot pose belief as contour information on visually sensible manifolds.\n\nExample:\n\nfg = generateGraph_ZeroPose()\ninitAll!(fg);\nplotPose(fg, :x0)\n\nRelated\n\nplotSLAM2D, plotSLAM2DPoses, plotBelief, plotKDECircular\n\n\n\n\n\nplotPose(\n fgl,\n syms;\n solveKey,\n levels,\n c,\n axis,\n scale,\n show,\n filepath,\n app,\n hdl\n)\n\n\nExample: pl = plotPose(fg, [:x1; :x2; :x3])\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#Debug-With-Local-Graph-Product-Plot","page":"Plotting (2D)","title":"Debug With Local Graph Product Plot","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"One useful function is to check that data in the factor graph makes sense. While the full inference algorithm uses a Bayes (Junction) tree to assemble marginal belief estimates in an efficient manner, it is often useful for a straight forward graph based sanity check. The plotLocalProduct projects through approxConvBelief each of the factors connected to the target variable and plots the result. This example looks at the loop-closure point around :x0, which is also pinned down by the only prior in the canonical Hexagonal factor graph.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"@show ls(fg, :x0);\n# ls(fg, :x0) = [:x0f1, :x0x1f1, :x0l1f1]\n\npl = plotLocalProduct(fg, :x0, dims=[1;2], levels=1)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"While perhaps a little cluttered to read at first, this figure shows that a new calculation local to only the factor graph prod in green matches well with the existing value curr in red in the fg from the earlier solveTree! call. These values are close to the prior prediction :x0f1 in blue (fairly trivial case), while the odometry :x0x1f1 and landmark sighting projection :x0l1f1 are also well in agreement.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plotLocalProduct","category":"page"},{"location":"concepts/2d_plotting/#RoMEPlotting.plotLocalProduct","page":"Plotting (2D)","title":"RoMEPlotting.plotLocalProduct","text":"plotLocalProduct(\n fgl,\n lbl;\n solveKey,\n N,\n dims,\n levels,\n show,\n dirpath,\n mimetype,\n sidelength,\n title,\n xmin,\n xmax,\n ymin,\n ymax\n)\n\n\nPlot the proposal belief from neighboring factors to lbl in the factor graph (ignoring Bayes tree representation), and show with new product approximation for reference.\n\nDevNotes\n\nTODO, standardize around ::MIME=\"image/svg\", see JuliaRobotics/DistributedFactorGraphs.jl#640\n\n\n\n\n\nplotLocalProduct(fgl, lbl; N, dims)\n\n\nPlot the proposal belief from neighboring factors to lbl in the factor graph (ignoring Bayes tree representation), and show with new product approximation for reference. String version is obsolete and will be deprecated.\n\n\n\n\n\n","category":"function"},{"location":"concepts/2d_plotting/#More-Detail-About-Density-Plotting","page":"Plotting (2D)","title":"More Detail About Density Plotting","text":"","category":"section"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Multiple beliefs can be plotted at the same time, while setting levels=4 rather than the default value:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plX1 = plotBelief(fg, [:x0; :x1], dims=[1;2], levels=4)\n\n# plX1 |> PNG(\"/tmp/testX1.png\")","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"One dimensional (such as Θ) or a stack of all plane projections is also available:","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plTh = plotBelief(fg, [:x0; :x1], dims=[3], levels=4)\n\n# plTh |> PNG(\"/tmp/testTh.png\")","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
      ","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"plAll = plotBelief(fg, [:x0; :x1], levels=3)\n# plAll |> PNG(\"/tmp/testX1.png\",20cm,15cm)","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"
","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"note: Note\nThe functions hstack and vstack are provided through the Gadfly package and allow the user to build a near arbitrary composition of plots.","category":"page"},{"location":"concepts/2d_plotting/","page":"Plotting (2D)","title":"Plotting (2D)","text":"Please see the KernelDensityEstimatePlotting package source for more features.","category":"page"},{"location":"principles/bayestreePrinciples/#Principle:-Bayes-tree-prototyping","page":"Bayes (Junction) tree","title":"Principle: Bayes tree prototyping","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"This page describes how to visualize, study, test, and compare Bayes (Junction) tree concepts with special regard for variable ordering.","category":"page"},{"location":"principles/bayestreePrinciples/#Why-a-Bayes-(Junction)-tree","page":"Bayes (Junction) tree","title":"Why a Bayes (Junction) tree","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The tree is an algebraically equivalent (but acyclic) structure to the factor graph: i.) Inference is easier on acyclic graphs; ii.) We can exploit Smart Message Passing benefits (known from the full conditional independence structure encoded in the tree), since the tree represents the \"complete form\" when marginalizing each variable one at a time (also known as elimination game, marginalization, also related to smart factors). In loose terms, the Bayes (Junction) tree has implicit access to all Schur complements (if parametric and linearized) of each variable to all others. Please see this page for more information regarding advanced topics on the Bayes tree.","category":"page"},{"location":"principles/bayestreePrinciples/#What-is-a-Bayes-(Junction)-tree","page":"Bayes (Junction) tree","title":"What is a Bayes (Junction) tree","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The Bayes tree data structure is a rooted and directed Junction tree (maximal elimination clique tree). 
It allows for exact inference to be carried out by leveraging and exposing the variables' conditional independence and, very interestingly, can be directly associated with the sparsity pattern exhibited by a system's factorized upper triangular square root information matrix (see picture below).","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"(Image: graph and matrix analogs)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Following this matrix-graph parallel, the picture also shows what the associated matrix interpretation is for a factor graph (~first order expansion in the form of a measurement Jacobian) and its corresponding Markov random field (sparsity pattern corresponding to the information matrix).","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The procedure for obtaining the Bayes (Junction) tree is outlined in the figure shown below (factor graph to chordal Bayes net via bipartite elimination game, and chordal Bayes net to Bayes tree via maximum cardinality search algorithm).","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"(Image: add the fg2net2tree outline)","category":"page"},{"location":"principles/bayestreePrinciples/#Constructing-a-Tree","page":"Bayes (Junction) tree","title":"Constructing a Tree","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"note: Note\nA visual illustration of factor graph to Bayes net to Bayes tree can be found in this PDF ","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Trees and factor graphs are separated in the implementation, allowing the user to construct multiple different trees from one factor graph except for a few temporary values in the factor graph.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"using IncrementalInference # RoME or Caesar will work too\n\n## construct a distributed factor graph object\nfg = generateGraph_Kaess()\n# add variables and factors\n# ...\n\n## build the tree\ntree = buildTreeReset!(fg)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The temporary values are reset from the distributed factor graph object fg<:AbstractDFG and a new tree is constructed. This buildTreeReset! 
call can be repeated as many times as the user desires, and results should be consistent for the same factor graph structure (regardless of numerical values contained within).","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"buildTreeReset!","category":"page"},{"location":"principles/bayestreePrinciples/#IncrementalInference.buildTreeReset!","page":"Bayes (Junction) tree","title":"IncrementalInference.buildTreeReset!","text":"buildTreeReset!(dfg; ...)\nbuildTreeReset!(\n dfg,\n eliminationOrder;\n ordering,\n drawpdf,\n show,\n filepath,\n viewerapp,\n imgs,\n ensureSolvable,\n eliminationConstraints\n)\n\n\nBuild a completely new Bayes (Junction) tree, after first wiping clean all temporary state in fg from a possibly pre-existing tree.\n\nDevNotes\n\nreplaces resetBuildTreeFromOrder!\n\nRelated:\n\nbuildTreeFromOrdering!, \n\n\n\n\n\n","category":"function"},{"location":"principles/bayestreePrinciples/#Variable-Ordering","page":"Bayes (Junction) tree","title":"Variable Ordering","text":"","category":"section"},{"location":"principles/bayestreePrinciples/#Getting-the-AMD-Variable-Ordering","page":"Bayes (Junction) tree","title":"Getting the AMD Variable Ordering","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The variable ordering is described as a ::Vector{Symbol}. Note that the automated methods can be varied between AMD, CCOLAMD, and others.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"# get the automated variable elimination order\nvo = getEliminationOrder(fg)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"It is also possible to manually define the Variable Ordering","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"vo = [:x1; :l3; :x2; ...]","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"And then reset the factor graph and build a new tree","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"buildTreeReset!(fg, vo)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"note: Note\nA list of variables or factors can be obtained through the ls and related functions, see Querying the Factor Graph.","category":"page"},{"location":"principles/bayestreePrinciples/#Interfacing-with-the-MM-iSAMv2-Solver","page":"Bayes (Junction) tree","title":"Interfacing with the MM-iSAMv2 Solver","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The following parameters (set before calling solveTree!) 
will show the solution progress on the tree visualization:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"getSolverParams(fg).drawtree = true\ngetSolverParams(fg).showtree = true\n\n# async process will now draw and show the tree in Linux\ntree = solveTree!(fg)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"note: Note\nSee the Solving Graphs section for more details on the solver.","category":"page"},{"location":"principles/bayestreePrinciples/#Get-the-Elimination-Order-Used","page":"Bayes (Junction) tree","title":"Get the Elimination Order Used","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The solver internally uses buildTreeReset! which sometimes requires the user to extract the variable elimination order after the fact. This can be done with:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"getEliminationOrder","category":"page"},{"location":"principles/bayestreePrinciples/#IncrementalInference.getEliminationOrder","page":"Bayes (Junction) tree","title":"IncrementalInference.getEliminationOrder","text":"getEliminationOrder(dfg; ordering, solvable, constraints)\n\n\nDetermine the variable ordering used to construct both the Bayes Net and Bayes/Junction/Elimination tree.\n\nNotes\n\nHeuristic method – equivalent to QR or Cholesky.\nAre using Blas QR function to extract variable ordering.\nNOT USING SUITE SPARSE – which would require a commercial license.\nFor now A::Array{<:Number,2} as a dense matrix.\nColumns of A are system variables, rows are factors (without differentiating between partial or full factor).\ndefault is to use solvable=1 and ignore factors and variables that might be used for dead reckoning or similar.\n\nFuture\n\nTODO: A should be sparse data structure (when we exceed 10'000 var dims)\nTODO: Incidence matrix is rectangular and adjacency is the square.\n\n\n\n\n\ngetEliminationOrder(treel)\n\n\nReturn the variable elimination order stored in a tree object.\n\n\n\n\n\n","category":"function"},{"location":"principles/bayestreePrinciples/#Visualizing","page":"Bayes (Junction) tree","title":"Visualizing","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"IncrementalInference.jl includes functions for visualizing the Bayes tree, and uses outside packages such as GraphViz (standard) and Latex tools (experimental, optional) to do so. 
","category":"page"},{"location":"principles/bayestreePrinciples/#GraphViz","page":"Bayes (Junction) tree","title":"GraphViz","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"drawTree(tree, show=true) # , filepath=\"/tmp/caesar/mytree.pdf\"","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"drawTree","category":"page"},{"location":"principles/bayestreePrinciples/#IncrementalInference.drawTree","page":"Bayes (Junction) tree","title":"IncrementalInference.drawTree","text":"drawTree(\n treel;\n show,\n suffix,\n filepath,\n xlabels,\n dpi,\n viewerapp,\n imgs\n)\n\n\nDraw the Bayes (Junction) tree by means of graphviz .dot files. Ensure Linux packages are installed sudo apt-get install graphviz xdot.\n\nNotes\n\nxlabels is optional cliqid=>xlabel.\n\n\n\n\n\n","category":"function"},{"location":"principles/bayestreePrinciples/#Latex-Tikz-(Optional)","page":"Bayes (Junction) tree","title":"Latex Tikz (Optional)","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"EXPERIMENTAL, requiring special import.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"First make sure the following packages are installed on your system:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"$ sudo apt-get install texlive-pictures dot2tex\n$ pip install dot2tex","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Then in Julia you should be able to do:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"import IncrementalInference: generateTexTree\n\ngenerateTexTree(tree)","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"An example Bayes (Junction) tree representation obtained through generateTexTree(tree) for the sample factor graph shown above can be seen in the following image.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"
      ","category":"page"},{"location":"principles/bayestreePrinciples/#Visualizing-Clique-Adjacency-Matrix","page":"Bayes (Junction) tree","title":"Visualizing Clique Adjacency Matrix","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"It is also possible to see the upward message passing variable/factor association matrix for each clique, requiring the Gadfly.jl package:","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"using Gadfly\n\nspyCliqMat(tree, :x1) # provided by IncrementalInference\n\n#or embedded in graphviz\ndrawTree(tree, imgs=true, show=true)","category":"page"},{"location":"principles/bayestreePrinciples/#Clique-State-Machine","page":"Bayes (Junction) tree","title":"Clique State Machine","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The mmisam solver is based on a state machine design to handle the inter and intra clique operations during a variety of situations. Use of the clique state machine (CSM) makes debugging, development, verification, and modification of the algorithm real easy. Contact us for any support regarding modifications to the default algorithm. For pre-docs on working with CSM, please see IIF #443.","category":"page"},{"location":"principles/bayestreePrinciples/#STATUS-of-a-Clique","page":"Bayes (Junction) tree","title":"STATUS of a Clique","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"CSM currently uses the following statusses for each of the cliques during the inference process.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"[:initialized;:upsolved;:marginalized;:downsolved;:uprecycled]","category":"page"},{"location":"principles/bayestreePrinciples/#Bayes-Tree-Legend-(from-IIF)","page":"Bayes (Junction) tree","title":"Bayes Tree Legend (from IIF)","text":"","category":"section"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"The color legend for the refactored CSM from issue.","category":"page"},{"location":"principles/bayestreePrinciples/","page":"Bayes (Junction) tree","title":"Bayes (Junction) tree","text":"Blank / white – uninitialized or unprocessed,\nOrange – recycled clique upsolve solution from previous tree passed into solveTree! 
– TODO,\nBlue – fully marginalized clique that will not be updated during upsolve (maybe downsolved),\nLight blue – completed downsolve,\nGreen – trying to up initialize,\nDarkgreen – initUp some could up init,\nLightgreen – initUp no aditional variables could up init,\nOlive – trying to down initialize,\nSeagreen – initUp some could down init,\nKhaki – initUp no aditional variables could down init,\nBrown – initialized but not solved yet (likely child cliques that depend on downward autoinit msgs),\nLight red – completed upsolve,\nTomato – partial dimension upsolve but finished,\nRed – CPU working on clique's Chapman-Kolmogorov inference (up),\nMaroon – CPU working on clique's Chapman-Kolmogorov inference (down),\nRed – If finished cliques in red are in ERROR_STATUS","category":"page"},{"location":"concepts/entry_data/#section_data_entry_blob_store","page":"Entry=>Data Blob","title":"Additional (Large) Data","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"There are a variety of situations that require more data to be stored natively in the factor graph object. This page will showcase some of Entry=>Data features available.","category":"page"},{"location":"concepts/entry_data/#Adding-A-FolderStore","page":"Entry=>Data Blob","title":"Adding A FolderStore","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Caesar.jl (with DFG) supports storage and retrieval of larger data blobs by means of various database/datastore technologies. To get going, you can use a conventional FolderStore: ","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"# temporary location example\nstoreDir = joinpath(\"/tmp\",\"cjldata\")\ndatastore = FolderStore{Vector{UInt8}}(:default_folder_store, storeDir) \naddBlobStore!(fg, datastore)","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"note: Note\nThis example places the data folder in the .logpath location which defaults to /tmp/caesar/UNIQUEDATETIME. This is not a long term storage location since /tmp is periodically cleared by the operating system. 
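A hedged sketch of the same FolderStore calls as above, but aimed at a persistent folder instead of /tmp (the path below is illustrative only, not a project convention):

storeDir = joinpath(homedir(), "caesar_blobstore")
mkpath(storeDir)
datastore = FolderStore{Vector{UInt8}}(:default_folder_store, storeDir)
addBlobStore!(fg, datastore)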
Note that the data folder can be used in combination with loading and saving factor graph objects.","category":"page"},{"location":"concepts/entry_data/#Adding-Data-Blobs","page":"Entry=>Data Blob","title":"Adding Data Blobs","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Just showcasing a JSON Dict approach","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"using JSON2\nsomeDict = Dict(:name => \"Jane\", :data => randn(100))\naddData!(fg, :default_folder_store, :x1, :datalabel, Vector{UInt8}(JSON2.write( someDict )), mimeType=\"application/json/octet-stream\" )\n# see retrieval example below...","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"This approach allows the maximum flexibility, for example it is also possible to do:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"# from https://juliaimages.org/stable/install/\nusing TestImages, Images, ImageView\nimg = testimage(\"mandrill\")\nimshow(img)\n\n# TODO, convert to Vector{UInt8}\nusing ImageMagick, FileIO\n# convert image to PNG bytestream\nio = IOBuffer()\npngSm = Stream(format\"PNG\", io)\nsave(pngSm, img) # think FileIO is required for this\npngBytes = take!(io)\naddData!(fg, :default_folder_store, :x1, :testImage, pngBytes, mimeType=\"image/png\", description=\"mandrill test image\" )","category":"page"},{"location":"concepts/entry_data/#section_retrieve_data_blob","page":"Entry=>Data Blob","title":"Retrieving a Data Blob","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Data is stored as an Entry => Blob relationship, and the entries associated with a variable can be found via","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"julia> listDataEntries(fg, :x6)\n1-element Array{Symbol,1}:\n :JOYSTICK_CMD_VALS\n :testImage","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"And retrieved via:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"rawData = getData(fg, :x6, :JOYSTICK_CMD_VALS);\nimgEntry, imgBytes = getData(fg, :x1, :testImage)","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Looking at rawData in a bit more detail:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"julia> rawData[1]\nBlobStoreEntry(:JOYSTICK_CMD_VALS, UUID(\"d21fc841-6214-4196-a396-b1d5ef95be49\"), :default_folder_store, \"deeb3ed0cba6ffd149298de21c361af26a207e565e27a3cd3fa6c807b9aaa44d\", \"DefaultUser|DefaultRobot|Session_851d81|x6\", \"\", \"application/json/octet-stream\", TimeZones.ZonedDateTime(2020, 8, 15, 14, 26, 36, 397, tz\"UTC-04:00\"))\n\njulia> rawData[2]\n3362-element Array{UInt8,1}:\n 0x5b\n 0x5b\n 0x32\n#...","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"For :testImage the data was packed in a familiar image/png and can be converted backto bitmap (array) format:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"rgb = 
ImageMagick.readblob(imgBytes); # automatically detected as PNG format\n\nusing ImageView\nimshow(rgb)","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"In the other case where data was packed as \"application/json/octet-stream\":","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"myData = JSON2.read(IOBuffer(rawData[2]))\n\n# as example\njulia> myData[1]\n3-element Array{Any,1}:\n 2017\n 1532558043061497600\n (buttons = Any[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], axis = Any[0, 0.25026196241378784, 0, 0, 0, 0])","category":"page"},{"location":"concepts/entry_data/#Quick-Camera-Calibration-Storage-Example","page":"Entry=>Data Blob","title":"Quick Camera Calibration Storage Example","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Consider storing camera calibration data inside the factor graph tar.gz object for later use:","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"fx = 341.4563903808594\nfy = 341.4563903808594\ncx = 329.19091796875\ncy = 196.3658447265625\n\nK = [-fx 0 cx;\n 0 fy cy]\n\n# Cheap way to include data as a Blob. Also see the more hacky `Smalldata` alternative for situations that make sense.\ncamCalib = Dict(:size=>size(K), :vecK=>vec(K))\naddData!(dfg,:default_folder_store,:x0,:camCalib,\n Vector{UInt8}(JSON2.write(camCalib)), mimeType=\"application/json/octet-stream\", \n description=\"reshape(camCalib[:vecK], camCalib[:size]...)\") ","category":"page"},{"location":"concepts/entry_data/#Working-with-Binary-Data-(BSON)","page":"Entry=>Data Blob","title":"Working with Binary Data (BSON)","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Sometime it's useful to store binary data. Let's combine the example of storing a Flux.jl Neural Network object using the existing BSON approach. Also see BSON wrangling snippets here.","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"note: Note\nWe will store binary data as Base64 encoded string to avoid other framing problems. 
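As a small sketch of the Base64 round trip referred to in the note above, using the Julia standard library Base64 (mdlBytes stands in for any binary payload, as in the BSON example that follows):

using Base64

enc = base64encode(mdlBytes)   # printable String, safe to embed alongside JSON/text blobs
dec = base64decode(enc)        # Vector{UInt8}, identical to the original mdlBytes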
See Julia Docs on Base64","category":"page"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"# the object you wish to store as binary\nmodel = Chain(Dense(5,2), Dense(2,3))\n\nio = IOBuffer()\n\n# using BSON\nBSON.@save io model\n\n# get base64 binary\nmdlBytes = take!(io)\n\naddData!(dfg,:default_folder_store,:x0,:nnModel,\n mdlBytes, mimeType=\"application/bson/octet-stream\", \n description=\"BSON.@load PipeBuffer(readBytes) model\") ","category":"page"},{"location":"concepts/entry_data/#Experimental-Features","page":"Entry=>Data Blob","title":"Experimental Features","text":"","category":"section"},{"location":"concepts/entry_data/","page":"Entry=>Data Blob","title":"Entry=>Data Blob","text":"Loading images is a relatively common task, hence a convenience function has been developed, when using ImageMagick try Caesar.fetchDataImage.","category":"page"},{"location":"installation_environment/#Install-Caesar.jl","page":"Installation","title":"Install Caesar.jl","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"Caesar.jl is one of the packages within the JuliaRobotics community, and adheres to the code-of-conduct.","category":"page"},{"location":"installation_environment/#New-to-Julia","page":"Installation","title":"New to Julia","text":"","category":"section"},{"location":"installation_environment/#Installing-the-Julia-Binary","page":"Installation","title":"Installing the Julia Binary","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"Although Julia (or JuliaPro) can be installed on a Linux/Mac/Windows via a package manager, we prefer a highly reproducible and self contained (local environment) install.","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"The easiest method is–-via the terminal–-as described on the JuliaLang.org downloads page.","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"note: Note\nFeel free to modify this setup as you see fit.","category":"page"},{"location":"installation_environment/#VSCode-IDE-Environment","page":"Installation","title":"VSCode IDE Environment","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"VSCode IDE allows for interactive development of Julia code using the Julia Extension. After installing and running VSCode, install the Julia Language Support Extension:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"
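If you prefer the terminal, the same extension can be installed with the VS Code CLI; a sketch, assuming the code launcher is on your PATH:

$ code --install-extension julialang.language-julia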
","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"In VSCode, open the command palette by pressing Ctrl + Shift + p. There is a wealth of tips and tricks on how to use VSCode. See this JuliaCon presentation for a general introduction to 'piece-by-piece' code execution and much more. Working in one of the Julia IDEs like VS Code or Juno should feel something like this (Gif borrowed from DiffEqFlux.jl):","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"
      ","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"There are a variety of useful packages in VSCode, such as GitLens, LiveShare, and Todo Browser as just a few highlights. These VSCode Extensions are independent of the already vast JuliaLang Package Ecosystem (see JuliaObserver.com).","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"note: Note\nFor ROS.org users, see at least one usage example at the ROS Direct page.","category":"page"},{"location":"installation_environment/#Installing-Julia-Packages","page":"Installation","title":"Installing Julia Packages","text":"","category":"section"},{"location":"installation_environment/#Vanilla-Install","page":"Installation","title":"Vanilla Install","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"The philosophy around Julia packages are discussed at length in the Julia core documentation, where each Julia package relates to a git repository likely found on Github.com. Also see JuliaHub.com for dashboard-style representation of the broader Julia package ecosystem. To install a Julia package, simply start a julia REPL (equally the Julia REPL in VSCode) and then type:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"julia> ] # activate Pkg manager\n(v___) pkg> add Caesar","category":"page"},{"location":"installation_environment/#Version-Control,-Branches","page":"Installation","title":"Version Control, Branches","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"These are registered packages maintained by JuliaRegistries/General. Unregistered latest packages can also be installed with using only the Pkg.develop function:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"# Caesar is registered on JuliaRegistries/General\njulia> ]\n(v___) pkg> add Caesar\n(v___) pkg> add Caesar#janes-awesome-fix-branch\n(v___) pkg> add Caesar@v0.16\n\n# or alternatively your own local fork (just using old link as example)\n(v___) pkg> add https://github.com/dehann/Caesar.jl","category":"page"},{"location":"installation_environment/#Virtual-Environments","page":"Installation","title":"Virtual Environments","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"note: Note\nJulia has native support for virtual environments and exact package manifests. See Pkg.jl Docs for more info. More details and features regarding package management, development, version control, virtual environments are available there.","category":"page"},{"location":"installation_environment/#Next-Steps","page":"Installation","title":"Next Steps","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"The sections hereafter describe Building, [Interacting], and Solving factor graphs. We also recommend reviewing the various examples available in the Examples section. 
","category":"page"},{"location":"installation_environment/#Possible-System-Dependencies","page":"Installation","title":"Possible System Dependencies","text":"","category":"section"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"The following (Linux) system packages have been required on some systems in the past, but likely does not have to be installed system wide on newer versions of Julia:","category":"page"},{"location":"installation_environment/","page":"Installation","title":"Installation","text":"# Likely dependencies\nsudo apt-get install hdf5-tools imagemagick\n\n# optional packages\nsudo apt-get install graphviz xdot","category":"page"},{"location":"examples/custom_variables/#custom_variables","page":"Custom Variables","title":"Custom Variables","text":"","category":"section"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"In most scenarios, the existing variables and factors should be sufficient for most robotics applications. Caesar however, is extensible and allows you to easily incorporate your own variable and factor types for specialized applications. Let's look at creating custom variables first.","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"A handy macro helps define new variables:","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"@defVariable(\n MyVar,\n TranslationGroup(2),\n MVector{2}(0.0,0.0)\n)","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"First, we define the name MyVar, then the manifold on which the variable probability estimates exist (a simple Cartesian translation in two dimensions). The third parameter is a default point for your new variable.","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"This new variable is now ready to be added to a factor graph:","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"addVariable!(fg, :myvar1, MyVar)","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"Another good example to look at is RoME's Pose2 with 3 degrees of freedom: X Y translation and a rotation matrix using R(theta). Caesar.jl uses JuliaManifolds/Manifolds.jl for structuring numerical operations, we can use either the Manifolds.ProductRepr (or RecursiveArrayTools.ArrayPartition), to define manifold point types:","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"# already exists in RoME/src/factors/Pose2D.jl\n@defVariable(\n Pose2,\n SpecialEuclidean(2),\n ArrayPartition(MVector{2}(0.0,0.0), MMatrix{2,2}(1.0,0.0,0.0,1.0))\n)","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"Here we used Manifolds.SpecialEuclidean(2) as the variable manifold, and the default data representation is similar to Manifolds.identity_element(SpecialEuclidean(2)), or Float32[1.0 0; 0 1], etc. In the example above, we used StaticArrays.MVector, StaticArrays.MMatrix for better performance, owing to better heap vs. 
stack memory management.","category":"page"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"@defVariable","category":"page"},{"location":"examples/custom_variables/#DistributedFactorGraphs.@defVariable","page":"Custom Variables","title":"DistributedFactorGraphs.@defVariable","text":"@defVariable StructName manifolds<:ManifoldsBase.AbstractManifold\n\nA macro to create a new variable with name StructName and manifolds. Note that the manifolds is an object and must be a subtype of ManifoldsBase.AbstractManifold. See documentation in Manifolds.jl on making your own. \n\nExample:\n\nDFG.@defVariable Pose2 SpecialEuclidean(2) ArrayPartition([0;0.0],[1 0; 0 1.0])\n\n\n\n\n\n","category":"macro"},{"location":"examples/custom_variables/","page":"Custom Variables","title":"Custom Variables","text":"note: Note\nUsers can implement their own manifolds using the ManifoldsBase.jl API; and the tutorial. See JuliaManifolds/Manifolds.jl for general information.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"
","category":"page"},{"location":"#Open-Community","page":"Welcome","title":"Open Community","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"Click here to go to the Caesar.jl Github repo:","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"(Image: source)","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"Caesar.jl is a community project to facilitate software technology development for localization and mapping from multiple sensor data, multiple sessions, and human / semi-autonomous / autonomous agents. This software is being developed with broadly Industry 4.0, Robotics, and Work of the Future in mind. Caesar.jl is an \"umbrella package\" that combines many other libraries from across the Julia package ecosystem. ","category":"page"},{"location":"#Commercial-Products-and-Services","page":"Welcome","title":"Commercial Products and Services","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"WhereWhen.ai's NavAbility products and services build upon, continually develop, and help administer the Caesar.jl suite of open-source libraries. Please reach out for any additional information (info@navability.io), or use the community links provided below.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"Various mapping and localization solutions are possible for both commercial and R&D use. We recommend taking a look at:","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"The human-to-machine friendly NavAbility App interaction; and\nThe machine-to-machine friendly NavAbilitySDKs (Python, Julia, JS, etc.). Also see the SDK.py Docs.","category":"page"},{"location":"#NavAbility-Zero-Install-Tutorials","page":"Welcome","title":"NavAbility Zero Install Tutorials","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"Follow this page to see the NavAbility Tutorials, which are zero-install and built around specific application examples.","category":"page"},{"location":"#Origins-and-Ongoing-Research","page":"Welcome","title":"Origins and Ongoing Research","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"Caesar.jl was developed as a spin-out project from MIT's Computer Science and Artificial Intelligence Laboratory. See related works on the literature page. Many future directions are in the works – including fundamental research, implementation quality/performance, and system integration.","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"Consider citing our work: CITATION.bib.","category":"page"},{"location":"#Community,-Issues,-Comments,-or-Help","page":"Welcome","title":"Community, Issues, Comments, or Help","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"Post Issues or Discussions for community help. Maintainers can easily transfer Issues to the best suited package location if necessary. The history of changes and ongoing work can also be seen via the Milestone pages (click through the badges here). You can also get in touch via Slack at (Image: ).","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"note: Note\nPlease help improve this documentation – if something confuses you, chances are you're not alone. It's easy to do as you read along: just click on the \"Edit on GitHub\" link above, and then edit the files directly in your browser. 
Your changes will be vetted by developers before becoming permanent, so don't worry about whether you might say something wrong.","category":"page"},{"location":"#JuliaRobotics-Code-of-Conduct","page":"Welcome","title":"JuliaRobotics Code of Conduct","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"The Caesar.jl project is part of the JuliaRobotics organization and adheres to the JuliaRobotics code-of-conduct.","category":"page"},{"location":"#Next-Steps","page":"Welcome","title":"Next Steps","text":"","category":"section"},{"location":"","page":"Welcome","title":"Welcome","text":"For installation steps, examples/tutorials, and concepts, please refer to the following pages:","category":"page"},{"location":"","page":"Welcome","title":"Welcome","text":"Pages = [\n \"concepts/why_nongaussian.md\"\n \"installation_environment.md\"\n \"concepts/concepts.md\"\n \"concepts/building_graphs.md\"\n \"concepts/2d_plotting.md\"\n \"examples/examples.md\"\n]\nDepth = 1","category":"page"},{"location":"examples/deadreckontether/#Dead-Reckon-Tether","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Towards real-time location prediction and model-based target tracking. See a brief description in this presentation.","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"\n
Towards Real-Time Non-Gaussian SLAM from Dehann on Vimeo.
      ","category":"page"},{"location":"examples/deadreckontether/#DRT-Functions","page":"Dead Reckon Tether","title":"DRT Functions","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Overview of related functions while this documentation is being expanded:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"addVariable!(fg, :drt_0, ..., solvable=0)\ndrec1 = MutablePose2Pose2Gaussian(...)\naddFactor!(dfg, [:x0; :drt_0], drec1, solvable=0, graphinit=false)\naccumulateDiscreteLocalFrame!\naccumulateFactorMeans\nduplicateToStandardFactorVariable","category":"page"},{"location":"examples/deadreckontether/#DRT-Construct","page":"Dead Reckon Tether","title":"DRT Construct","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"The idea is that the dead reckong tracking method is to update a single value based on high-rate sensor data. Perhaps 'particles' values can be propagated as a non-Gaussian prediction, depending on allowable compute resources, and for that see approxConvBelief. Some specialized plumbing has been built to facilitate rapid single value propagation using the factor graph. ","category":"page"},{"location":"examples/deadreckontether/#Suppress-w/-solvable","page":"Dead Reckon Tether","title":"Suppress w/ solvable","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"The construct uses regular addVariable! and addFactor! calls but with a few tweaks. The first is that some variables and factors should not be incorporated with the regular solveTree! call and can be achieved on a per node basis, e.g.:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"fg = initfg()\n\n# a regular variable and prior for solving in graph\naddVariable!(fg, :x0, Pose2) # default solvable=1\naddFactor!(fg, [:x0;], PriorPose2(MvNormal([0;0;0.0],diagm([0.1;0.1;0.01]))))\n\n# now add a variable that will not be included in solves\naddVariable!(fg, :drt0, Pose2, solvable=0)","category":"page"},{"location":"examples/deadreckontether/#A-Mutable-Factor","page":"Dead Reckon Tether","title":"A Mutable Factor","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"The next part is to add a factor that can be rapidly updated from sensor data, hence liberal use of the term 'Mutable':","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"drt0 = MutablePose2Pose2Gaussian(MvNormal([0;0;0.0],diagm([0.1;0.1;0.01])))\naddFactor!(dfg, [:x0; :drt0], drt0, solvable=0, graphinit=false)","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Notice that this factor is also set with solvable=0 to exclude it from the regular solving process. 
Also note the graphinit=false to prevent any immediate automated attempts to initialize the values to connected variables using this factor.","category":"page"},{"location":"examples/deadreckontether/#Sensor-rate-updates","page":"Dead Reckon Tether","title":"Sensor rate updates","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"The idea of a dead reckon tether is that the value in the factor can rapidly be updated without affecting any other regular part of the factor graph or simultaneous solving progress. Imagine new sensor data from wheel odometry or an IMU is available which is then used to 'mutate' the values in a DRT factor:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"# continuous Gaussian process noise Q\nQc = 0.001*diagm(ones(3))\n\n# accumulate a Pose2 delta odometry measurement segment onto existing value in drt0\naccumulateDiscreteLocalFrame!(drt0,[0.1;0;0.05],Qc)","category":"page"},{"location":"examples/deadreckontether/#Dead-Reckoned-Prediction","page":"Dead Reckon Tether","title":"Dead Reckoned Prediction","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Using the latest available inference result fg[:x0], the drt0 factor can be used to predict the single parameteric location of variable :drt0:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"# can happen concurrently with most other operations on fg, including `solveTree!`\npredictDRT0 = accumulateFactorMeans(fg, [:x0drt0f1;])","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Note also a convenience function uses similar plumbing for integrating odometry as well as any other DRT operations. Imagine a robot is driving from pose position 0 to 1, then the final pose trigger value in factor drt0 is the same value required to instantiate a new factor graph Pose2Pose2, and hence:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"# add new regular rigid transform (odometry) factor between pose variables \nduplicateToStandardFactorVariable(Pose2Pose2, drt0, fg, :x0, :x1)","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"warning: Warning\n(2021Q1) Some of these function names are likely to be better standardized in the future. Regular semver deprecation warnings will be used to simplify any potential updates that may occur. 
Please file issues at Caesar.jl if any problems arise.","category":"page"},{"location":"examples/deadreckontether/#Function-Reference","page":"Dead Reckon Tether","title":"Function Reference","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"duplicateToStandardFactorVariable\naccumulateDiscreteLocalFrame!\naccumulateFactorMeans\nMutablePose2Pose2Gaussian","category":"page"},{"location":"examples/deadreckontether/#RoME.duplicateToStandardFactorVariable","page":"Dead Reckon Tether","title":"RoME.duplicateToStandardFactorVariable","text":"duplicateToStandardFactorVariable(\n ,\n mpp,\n dfg,\n prevsym,\n newsym;\n solvable,\n graphinit,\n cov\n)\n\n\nHelper function to duplicate values from a special factor variable into standard factor and variable. Returns the name of the new factor.\n\nNotes:\n\nDeveloped for accumulating odometry in a MutablePosePose and then cloning out a standard PosePose and new variable.\nDoes not change the original MutablePosePose source factor or variable in any way.\nAssumes timestampe from mpp object.\n\nRelated\n\naddVariable!, addFactor!\n\n\n\n\n\n","category":"function"},{"location":"examples/deadreckontether/#RoME.accumulateDiscreteLocalFrame!","page":"Dead Reckon Tether","title":"RoME.accumulateDiscreteLocalFrame!","text":"accumulateDiscreteLocalFrame!(mpp, DX, Qc; ...)\naccumulateDiscreteLocalFrame!(mpp, DX, Qc, dt; Fk, Gk, Phik)\n\n\nAdvance an odometry factor as though integrating an ODE – i.e. X_2 = X_1 ΔX. Accepts continuous domain process noise density Qc which is internally integrated to discrete process noise Qd. DX is assumed to already be incrementally integrated before this function. See related accumulateContinuousLocalFrame! for fully continuous system propagation.\n\nNotes\n\nThis update stays in the same reference frame but updates the local vector as though accumulating measurement values over time.\nKalman filter would have used for noise propagation: Pk1 = F*Pk*F + Qdk\nFrom Chirikjian, Vol.II, 2012, p.35: Jacobian SE(2), Jr = [cθ sθ 0; -sθ cθ 0; 0 0 1] – i.e. 
dSE2/dX' = SE2([0;0;-θ])\nDX = dX/dt*Dt\nassumed process noise for {}^b Qc = {}^b [x;y;yaw] = [fwd; sideways; rotation.rate]\n\nDev Notes\n\nTODO many operations here can be done in-place.\n\nRelated\n\naccumulateContinuousLocalFrame!, accumulateDiscreteReferenceFrame!, accumulateFactorMeans\n\n\n\n\n\n","category":"function"},{"location":"examples/deadreckontether/#IncrementalInference.accumulateFactorMeans","page":"Dead Reckon Tether","title":"IncrementalInference.accumulateFactorMeans","text":"accumulateFactorMeans(dfg, fctsyms; solveKey)\n\n\nAccumulate chains of binary factors, potentially starting from a prior, as a parametric mean value only.\n\nNotes\n\nNot used during tree inference.\nExpected uses are for user analysis of factors and estimates.\nreal-time dead reckoning chain prediction.\nReturns mean value as coordinates\n\nDevNotes\n\nTODO consolidate with similar approxConvBelief\nTODO compare consolidate with solveParametricConditionals\nTODO compare consolidate with solveFactorParametric\n\nRelated:\n\napproxConvBelief, solveFactorParametric, RoME.MutablePose2Pose2Gaussian\n\n\n\n\n\n","category":"function"},{"location":"examples/deadreckontether/#RoME.MutablePose2Pose2Gaussian","page":"Dead Reckon Tether","title":"RoME.MutablePose2Pose2Gaussian","text":"mutable struct MutablePose2Pose2Gaussian <: AbstractManifoldMinimize\n\nSpecialized Pose2Pose2 factor type (Gaussian), which allows for rapid accumulation of odometry information as a branch on the factor graph.\n\n\n\n\n\n","category":"type"},{"location":"examples/deadreckontether/#Additional-Notes","page":"Dead Reckon Tether","title":"Additional Notes","text":"","category":"section"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"This will be consolidated with text above:","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"Regardless of the SLAM solution going on in the background, you can then just call val = accumulateFactorMeans(fg, [:x0deadreckon_x0f1])","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"for a new dead reckon tether solution;","category":"page"},{"location":"examples/deadreckontether/","page":"Dead Reckon Tether","title":"Dead Reckon Tether","text":"you can add as many tethers as you want. \nSo if you are solving every 10 poses, you just add a new tether x0, x10, x20, x30...\nas the solves complete on previous segments, then you can just get the latest accumulateFactorMeans","category":"page"},{"location":"examples/basic_continuousscalar/#Tutorials","page":"Canonical 1D Example","title":"Tutorials","text":"","category":"section"},{"location":"examples/basic_continuousscalar/#IncrementalInference.jl-ContinuousScalar","page":"Canonical 1D Example","title":"IncrementalInference.jl ContinuousScalar","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The application of this tutorial is presented in abstract form, from which the user is free to imagine any system of relationships: for example, a robot driving in a one dimensional world, or a time traveler making uncertain jumps forwards and backwards in time. The tutorial implicitly shows how multi-modal uncertainty can be introduced from non-Gaussian measurements, and then transmitted through the system. 
The tutorial also illustrates consensus through an additional piece of information, which reduces all stochastic variable marginal beliefs to unimodal only beliefs. This tutorial illustrates how algebraic relations (i.e. residual functions) between multiple stochastic variables are calculated, as well as the final posterior belief estimate, from several pieces of information. Lastly, the tutorial demonstrates how automatic initialization of variables works.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"This tutorial requires RoME.jl and RoMEPlotting packages be installed. In addition, the optional GraphViz package will allow easy visualization of the FactorGraph object structure.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"To start, the two major mathematical packages are brought into scope.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"using IncrementalInference","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"note: Note\nGuidelines for developing your own functions are discussed here in Adding Variables and Factors, and we note that mechanizations and manifolds required for robotic simultaneous localization and mapping (SLAM) has been tightly integrated with the expansion package RoME.jl.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The next step is to describe the inference problem with a graphical model with any of the existing concrete types that inherit from <: AbstractDFG. The first step is to create an empty factor graph object and start populating it with variable nodes. The variable nodes are identified by Symbols, namely :x0, :x1, :x2, :x3.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"# Start with an empty factor graph\nfg = initfg()\n\n# add the first node\naddVariable!(fg, :x0, ContinuousScalar)\n\n# this is unary (prior) factor and does not immediately trigger autoinit of :x0.\naddFactor!(fg, [:x0], Prior(Normal(0,1)))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Factor graphs are bipartite graphs with factors that act as mathematical structure between interacting variables. After adding node :x0, a singleton factor of type Prior (which was defined by the user earlier) is 'connected to' variable node :x0. This unary factor is taken as a Distributions.Normal distribution with zero mean and a standard devitation of 1. Graphviz can be used to visualize the factor graph structure, although the package is not installed by default – $ sudo apt-get install graphviz. 
Furthermore, the drawGraph member definition is given at the end of this tutorial, which allows the user to store the graph image in graphviz supported image types.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"drawGraph(fg, show=true)","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The two node factor graph is shown in the image below.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Global inference requires that the entire factor graph be initialized before the numerical belief computation algorithms can be performed. Notice how the new :x1 variable is not yet initialized:","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"@show isInitialized(fg, :x1) # false","category":"page"},{"location":"examples/basic_continuousscalar/#Visualizing-the-Variable-Probability-Belief","page":"Canonical 1D Example","title":"Visualizing the Variable Probability Belief","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The RoMEPlotting.jl package allows visualization (plotting) of the belief state over any of the variable nodes. Remember that first-time executions are slow due to the required code compilation, and that future versions of these packages will use more precompilation to reduce the first-execution running cost.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"using RoMEPlotting\n\nplotKDE(fg, :x0)","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Global inference requires that the entire factor graph be initialized before the numerical belief computation algorithms can be performed. Notice how the new :x1 variable is not yet initialized:","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"@show isInitialized(fg, :x1) # false","category":"page"},{"location":"examples/basic_continuousscalar/#Visualizing-the-Variable-Probability-Belief","page":"Canonical 1D Example","title":"Visualizing the Variable Probability Belief","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The RoMEPlotting.jl package allows visualization (plotting) of the belief state over any of the variable nodes. Remember the first time executions are slow given required code compilation, and that future versions of these package will use more precompilation to reduce first execution running cost.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"using RoMEPlotting\n\nplotKDE(fg, :x0)","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The red trace (predicted belief of :x1) is nothing more than the approximated convolution of the current marginal belief of :x0 with the conditional belief described by P(Z | X1 - X0).","category":"page"},{"location":"examples/basic_continuousscalar/#Defining-A-Mixture-Relative-on-ContinuousScalar","page":"Canonical 1D Example","title":"Defining A Mixture Relative on ContinuousScalar","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Another ContinuousScalar variable :x2 is 'connected' to :x1 through a more complicated MixtureRelative likelihood function.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addVariable!(fg, :x2, ContinuousScalar)\nmmo = Mixture(LinearRelative, \n (hypo1=Rayleigh(3), hypo2=Uniform(30,55)), \n [0.4; 0.6])\naddFactor!(fg, [:x1, :x2], mmo)","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The red trace (predicted belief of :x1) is noting more than the approximated convolution of the current marginal belief of :x0 with the conditional belief described by P(Z | X1 - X0).","category":"page"},{"location":"examples/basic_continuousscalar/#Defining-A-Mixture-Relative-on-ContinuousScalar","page":"Canonical 1D Example","title":"Defining A Mixture Relative on ContinuousScalar","text":"","category":"section"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Another ContinuousScalar variable :x2 is 'connected' to :x1 through a more complicated MixtureRelative likelihood function.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addVariable!(fg, :x2, ContinuousScalar)\nmmo = Mixture(LinearRelative, \n (hypo1=Rayleigh(3), hypo2=Uniform(30,55)), \n [0.4; 0.6])\naddFactor!(fg, [:x1, :x2], mmo)","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Adding one more variable :x3 through another LinearRelative(Normal(-50,1))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addVariable!(fg, :x3, ContinuousScalar)\naddFactor!(fg, [:x2, :x3], LinearRelative(Normal(-50, 1)))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"expands the factor graph to four variables and four factors.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Adding one more variable :x3 through another LinearRelative(Normal(-50,1))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addVariable!(fg, :x3, ContinuousScalar)\naddFactor!(fg, [:x2, :x3], LinearRelative(Normal(-50, 1)))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"expands the factor graph to to four variables and four factors.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"This part of the tutorial shows how a unimodal likelihood (conditional belief) can transmit the bimodal belief currently contained in :x2.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"initAll!(fg)\nplotKDE(fg, [:x0, :x1, :x2, :x3])","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Notice the blue trace (:x3) is a shifted and slightly spread out version of the initialized belief on :x2, through the convolution with the conditional belief P(Z | X2, X3).","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Global inference over the entire factor graph has still not occurred, and will at this stage produce roughly similar results to the predicted beliefs shown above. Only by introducing more information into the factor graph can inference extract more precise marginal belief estimates for each of the variables. A final piece of information added to this graph is a factor directly relating :x3 with :x0.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"addFactor!(fg, [:x3, :x0], LinearRelative(Normal(40, 1)))","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Pay close attention to what this last factor means in terms of the probability density traces shown in the previous figure. The blue trace for :x3 has two major modes, one that overlaps with :x0, :x1 near 0 and a second mode further to the left at -40. The last factor introduces a shift LinearRelative(Normal(40,1)) which essentially aligns the left most mode of :x3 back onto :x0.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
      ","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"This last factor forces a mode selection through consensus. By doing global inference, the new information obtained in :x3 will be equally propagated to :x2 where only one of the two modes will remain.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"Global inference is achieved with local computation using two function calls, as follows.","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"tree = solveTree!(fg)\n\n# and visualization\nplotKDE(fg, [:x0, :x1, :x2, :x3])","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"The resulting posterior marginal beliefs over all the system variables are:","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"
","category":"page"},{"location":"examples/basic_continuousscalar/","page":"Canonical 1D Example","title":"Canonical 1D Example","text":"It is important to note that although this tutorial ends with all marginal beliefs having a near-Gaussian shape and being unimodal, the package supports multi-modal belief estimates during both the prediction and global inference processes. In fact, many of the same underlying inference functions are involved with the automatic initialization process and the global multi-modal iSAM inference procedure. This concludes the ContinuousScalar tutorial particular to the IncrementalInference package.","category":"page"},{"location":"concepts/concepts/#Graph-Concepts","page":"Initial Concepts","title":"Graph Concepts","text":"","category":"section"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"Factor graphs are bipartite, consisting of variables and factors, which are connected by edges to form a graph structure. The terminology of nodes is reserved for actually storing the data on some graph oriented technology.","category":"page"},{"location":"concepts/concepts/#What-are-Variables-and-Factors","page":"Initial Concepts","title":"What are Variables and Factors","text":"","category":"section"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"Variables, denoted as the larger nodes in the figure below, represent state variables of interest such as vehicle or landmark positions, sensor calibration parameters, and more. Variables are likely hidden values which are not directly observed, but we want to estimate them from observed data and at least some minimal algebra structure from probabilistic measurement models.","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"Factors, the smaller nodes in the figure, represent the algebraic interaction between particular variables, which is captured through edges. Factors must adhere to the limits of probabilistic models – for example conditional likelihoods capture the likelihood correlations between variables; while priors (unary to one variable) represent absolute information to be introduced. A heterogeneous factor graph illustration is shown below; also see a broader discussion linked on the literature page.","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"(Image: factorgraphexample)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"We assume factors are constructed from statistically independent measurements (i.e. 
no direct correlations between measurements other than the known algebraic model that might connect them), then we can use Probabilistic Chain rule to write inference operation down (unnormalized):","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"P(Theta Z) propto P(Z Theta) P(Theta)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"This unnormalized \"Bayes rule\" is a consequence of two ideas, namely the probabilistic chain rule where Theta represents all variables and Z represents all measurements or data","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"P(Theta Z) = P(Z Theta) P(Theta)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"or similarly,","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"P(Theta Z) = P(Theta Z) P(Z)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"The inference objective is to invert this system, so as to find the states given the product between all the likelihood models (based on the data):","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"P(Theta Z) propto prod_i P(Z_i Theta_i) prod_j P(Theta_j)","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"We use the uncorrelated measurement process assumption that measurements Z are independent given the constructed algebraic model.","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"note: Note\nStrictly speaking, factors are actually \"observed variables\" that are stochastically \"fixed\" and not free for estimation in the conventional SLAM perspective. Waving hands over the fact that factors encode both the algebraic model and the observed measurement values provides a perspective on learning structure of a problem, including more mundane operations such as sensor calibration or learning of channel transfer models.","category":"page"},{"location":"concepts/concepts/","page":"Initial Concepts","title":"Initial Concepts","text":"note: Note\nWikipedia too provides a short overview of factor graphs.","category":"page"},{"location":"examples/using_pcl/#pointclouds_and_pcl","page":"Pointclouds and PCL","title":"Pointclouds and PCL Types","text":"","category":"section"},{"location":"examples/using_pcl/#Introduction-Caesar._PCL","page":"Pointclouds and PCL","title":"Introduction Caesar._PCL","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"A wide ranging and well used point cloud library exists called PCL which is implemented in C++. To get access to many of those features and bridge the Caesar.jl suite of packages, the base PCL.PointCloud types have been implemented in Julia and reside under Caesar._PCL. 
The main types of interest:","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Caesar._PCL.PointCloud\nCaesar._PCL.PCLPointCloud2\nCaesar._PCL.PointXYZ\nCaesar._PCL.Header\nCaesar._PCL.PointField\nCaesar._PCL.FieldMapper","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"The PointCloud types use Colors.jl:","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"using Colors, Caesar\nusing StaticArrays\n\n# one point\nx,y,z,intens = 1f0,0,0,1\npt = Caesar._PCL.PointXYZ(;data=SA[x,y,z,intens])\n\n# etc.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"_PCL.PointCloud","category":"page"},{"location":"examples/using_pcl/#Caesar._PCL.PointCloud","page":"Pointclouds and PCL","title":"Caesar._PCL.PointCloud","text":"struct PointCloud{T<:Caesar._PCL.PointT, P, R}\n\nConvert a PCLPointCloud2 binary data blob into a Caesar._PCL.PointCloud{T} object using a field_map::Caesar._PCL.MsgFieldMap.\n\nUse PointCloud(::Caesar._PCL.PCLPointCloud2) directly or create you own MsgFieldMap:\n\nfield_map = Caesar._PCL.createMapping(msg.fields, field_map)\n\nNotes\n\nTested on Radar data with height z=constant for all points – i.e. 2D sweeping scan where .height=1.\n\nDevNotes\n\nTODO .PCLPointCloud2 convert not tested on regular 3D data from structured light or lidar yet, but current implementation should be close (or already working).\n\nReferences\n\nhttps://pointclouds.org/documentation/classpcl11pointcloud.html\n(seems older) https://docs.ros.org/en/hydro/api/pcl/html/conversions8hsource.html#l00123 \n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Conversion-with-ROS.PointCloud2","page":"Pointclouds and PCL","title":"Conversion with ROS.PointCloud2","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Strong integration between PCL and ROS predominantly through the message types","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"@rosimport std_msgs.msg: Header, @rosimport sensor_msgs.msg: PointField, @rosimport sensor_msgs.msg: PointCloud2.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"These have been integrated through conversions to equivalent Julian types already listed above. ROS conversions requires RobotOS.jl be loaded, see page on using ROS Direct.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"_PCL.PointXYZ\n_PCL.Header\n_PCL.PointField\n_PCL.FieldMapper\n_PCL.PCLPointCloud2","category":"page"},{"location":"examples/using_pcl/#Caesar._PCL.PointXYZ","page":"Pointclouds and PCL","title":"Caesar._PCL.PointXYZ","text":"struct PointXYZ{C<:Colorant, T<:Number} <: Caesar._PCL.PointT\n\nImmutable PointXYZ with color information. E.g. 
PointXYZ{RGB}, PointXYZ{Gray}, etc.\n\nAliases\n\nPointXYZRGB\nPointXYZRGBA\n\nSee \n\nhttps://pointclouds.org/documentation/structpcl11pointxyz.html\nhttps://pointclouds.org/documentation/point__types8hppsource.html\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar._PCL.Header","page":"Pointclouds and PCL","title":"Caesar._PCL.Header","text":"struct Header\n\nImmutable Header.\n\nSee https://pointclouds.org/documentation/structpcl11pclheader.html\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar._PCL.PointField","page":"Pointclouds and PCL","title":"Caesar._PCL.PointField","text":"struct PointField\n\nHow a point is stored in memory.\n\nhttps://pointclouds.org/documentation/structpcl11pclpoint_field.html\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar._PCL.FieldMapper","page":"Pointclouds and PCL","title":"Caesar._PCL.FieldMapper","text":"struct FieldMapper{T<:Caesar._PCL.PointT}\n\nWhich field values to store and how to map them to values during serialization.\n\nhttps://docs.ros.org/en/hydro/api/pcl/html/conversions8hsource.html#l00091\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar._PCL.PCLPointCloud2","page":"Pointclouds and PCL","title":"Caesar._PCL.PCLPointCloud2","text":"struct PCLPointCloud2\n\nImmutable point cloud type. Immutable for performance, computations are more frequent and intensive than anticipated frequency of constructing new clouds.\n\nReferences:\n\nhttps://pointclouds.org/documentation/structpcl11pclpoint_cloud2.html\nhttps://pointclouds.org/documentation/classpcl11pointcloud.html\nhttps://pointclouds.org/documentation/common2include2pcl2point__cloud8h_source.html\n\nSee also: Caesar._PCL.toROSPointCloud2\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Aligning-Point-Clouds","page":"Pointclouds and PCL","title":"Aligning Point Clouds","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Caesar.jl is currently growing support for two related point cloud alignment methods, namely:","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Continuous density function alignment ScatterAlignPose2, ScatterAlignPose3,\nTraditional Iterated Closest Point (with normals) alignICP_Simple.","category":"page"},{"location":"examples/using_pcl/#sec_scatter_align","page":"Pointclouds and PCL","title":"ScatterAlign for Pose2 and Pose3","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"These factors use minimum mean distance embeddings to cost the alignment between pointclouds and supports various other interesting function alignment cases. 
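For orientation, a minimal usage sketch follows (an added illustration, not part of the original page): the image matrices and variable labels are assumptions, and the constructor call mirrors the ScatterAlignPose2 docstring example shown below.

using Caesar, Images

# build a small factor graph with two poses (hypothetical labels)
fg = initfg()
addVariable!(fg, :x0, Pose2)
addVariable!(fg, :x1, Pose2)

# stand-in heatmap/occupancy images; real use would pass e.g. radar or occupancy grids
img1 = rand(Float64, 100, 100)
img2 = rand(Float64, 100, 100)

# alignment factor at an assumed 2 meters/pixel, as in the docstring example below
sap = ScatterAlignPose2(img1, img2, 2)

# add it as a relative factor between the two poses
addFactor!(fg, [:x0; :x1], sap)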
These functions require Images.jl, see page Using Images.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Caesar.ScatterAlign\nCaesar.ScatterAlignPose2\nCaesar.ScatterAlignPose3","category":"page"},{"location":"examples/using_pcl/#Caesar.ScatterAlign","page":"Pointclouds and PCL","title":"Caesar.ScatterAlign","text":"ScatterAlign{P,H1,H2} where {H1 <: Union{<:ManifoldKernelDensity, <:HeatmapGridDensity}, \n H2 <: Union{<:ManifoldKernelDensity, <:HeatmapGridDensity}}\n\nAlignment factor between point cloud populations, using either\n\na continuous density function cost: ApproxManifoldProducts.mmd, or\na conventional iterative closest point (ICP) algorithm (when .sample_count < 0).\n\nThis factor can support very large density clouds, with sample_count subsampling for individual alignments.\n\nKeyword Options:\n\nsample_count::Int = 100, number of subsamples to use during each alignment in getSample. \nValues greater than 0 use MMD alignment, while values less than 0 use ICP alignment.\nbw::Real, the bandwidth to use for mmd distance\nrescale::Real\nN::Int\ncvt::Function, convert function for image when using HeatmapGridDensity.\nuseStashing::Bool = false, to switch serialization strategy to using Stashing.\ndataEntry_cloud1::AbstractString = \"\", blob identifier used with stashing.\ndataEntry_cloud2::AbstractString = \"\", blob identifier used with stashing.\ndataStoreHint::AbstractString = \"\"\n\nExample\n\narp2 = ScatterAlignPose2(img1, img2, 2) # e.g. 2 meters/pixel \n\nNotes\n\nSupports two belief \"clouds\" as either\nManifoldKernelDensitys, or\nHeatmapGridDensitys.\nStanard cvt argument is lambda function to convert incoming images to user convention of image axes,\nGeography map default cvt flips image rows so that Pose2 +xy-axes corresponds to img[-x,+y]\ni.e. rows down is \"North\" and columns across from top left corner is \"East\".\nUse rescale to resize the incoming images for lower resolution (faster) correlations\nBoth images passed to the construct must have the same type some matrix of type T.\nExperimental support for Stashing based serialization.\n\nDevNotes:\n\nTODO Upgrade to use other information during alignment process, e.g. 
point normals for Pose3.\n\nSee also: ScatterAlignPose2, ScatterAlignPose3, overlayScanMatcher, Caesar._PCL.alignICP_Simple.\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar.ScatterAlignPose2","page":"Pointclouds and PCL","title":"Caesar.ScatterAlignPose2","text":"ScatterAlignPose2(im1::Matrix, im2::Matrix, domain; options...)\nScatterAlignPose2(; mkd1::ManifoldKernelDensity, mkd2::ManifoldKernelDensity, moreoptions...)\n\nSpecialization of ScatterAlign for Pose2.\n\nSee also: ScatterAlignPose3\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/#Caesar.ScatterAlignPose3","page":"Pointclouds and PCL","title":"Caesar.ScatterAlignPose3","text":"ScatterAlignPose3(; cloud1=mkd1::ManifoldKernelDensity, \n cloud2=mkd2::ManifoldKernelDensity, \n moreoptions...)\n\nSpecialization of ScatterAlign for Pose3.\n\nSee also: ScatterAlignPose2\n\n\n\n\n\n","category":"type"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"note: Note\nFuture work may include ScatterAlignPose2z, please open issues at Caesar.jl if this is of interest.","category":"page"},{"location":"examples/using_pcl/#Iterative-Closest-Point","page":"Pointclouds and PCL","title":"Iterative Closest Point","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Ongoing work is integrating ICP into a factor similar to ScatterAlign.","category":"page"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"Caesar._PCL.alignICP_Simple","category":"page"},{"location":"examples/using_pcl/#Caesar._PCL.alignICP_Simple","page":"Pointclouds and PCL","title":"Caesar._PCL.alignICP_Simple","text":"alignICP_Simple(\n X_fix,\n X_mov;\n correspondences,\n neighbors,\n min_planarity,\n max_overlap_distance,\n min_change,\n max_iterations,\n verbose,\n H\n)\n\n\nAlign two point clouds using ICP (with normals).\n\nExample:\n\nusing Downloads, DelimitedFiles\nusing Colors, Caesar\n\n# get some test data (~50mb download)\nlidar1_url = \"https://github.com/JuliaRobotics/CaesarTestData.jl/raw/main/data/lidar/simpleICP/terrestrial_lidar1.xyz\"\nlidar2_url = \"https://github.com/JuliaRobotics/CaesarTestData.jl/raw/main/data/lidar/simpleICP/terrestrial_lidar2.xyz\"\nio1 = PipeBuffer()\nio2 = PipeBuffer()\nDownloads.download(lidar1_url, io1)\nDownloads.download(lidar2_url, io2)\n\nX_fix = readdlm(io1)\nX_mov = readdlm(io2)\n\nH, HX_mov, stat = Caesar._PCL.alignICP_Simple(X_fix, X_mov; verbose=true)\n\nNotes\n\nMostly consolidated with Caesar._PCL types.\nInternally uses Caesar._PCL._ICP_PointCloud which was created to help facilite consolidation of code:\nModified from www.github.com/pglira/simpleICP (July 2022).\nSee here for a brief example on Visualizing Point Clouds.\n\nDevNotes\n\nTODO switch rigid transfrom to Caesar._PCL.apply along with performance considerations, instead of current transform!.\n\nSee also: PointCloud\n\n\n\n\n\n","category":"function"},{"location":"examples/using_pcl/#Visualizing-Point-Clouds","page":"Pointclouds and PCL","title":"Visualizing Point Clouds","text":"","category":"section"},{"location":"examples/using_pcl/","page":"Pointclouds and PCL","title":"Pointclouds and PCL","text":"See work in progress on alng with example code on the page 3D Visualization.","category":"page"},{"location":"principles/initializingOnBayesTree/#Advanced-Topics-on-Bayes-Tree","page":"Advanced Bayes Tree Topics","title":"Advanced Topics on Bayes 
Tree","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/#Definitions","page":"Advanced Bayes Tree Topics","title":"Definitions","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Squashing or collapsing the Bayes tree back into a 'flat' Bayes net, by chain rule: ","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"p(xy) = p(xy)p(y) = p(yx)p(x) \np(xyz) = p(xyz)p(yz) = p(xyz)p(z) = p(xyz)p(yz)p(z) \np(xyz) = p(xyz)p(y)p(z) textiff y is independent of z also p(yz)=p(y)","category":"page"},{"location":"principles/initializingOnBayesTree/#Are-cliques-in-the-Bayes-(Junction)-tree-densly-connected?","page":"Advanced Bayes Tree Topics","title":"Are cliques in the Bayes (Junction) tree densly connected?","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Yes and no. From the chordal Bayes net's perspective (obtained through the elimination game in order to build the clique tree), the nodes of the Bayes tree are indeed fully connected subgraphs (they are called cliques after all!). From the perspective of the subgraph of the original factor graph induced by the clique's variables, cliques need not be fully connected, since we are assuming the factor graph as sparse, and that no new information can be created out of nothing–-hence each clique must be sparse. That said, the potential exists for the inference within a clique to become densly connected (experience full \"fill-in\"). See the paper on square-root-SAM, where the connection between dense covariance matrix of a Kalman filter (EKF-SLAM) is actually related to the inverse square root (rectangular) matrix which structure equivalent to the clique subgraph adjacency matrix. ","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Also remember that the intermediate Bayes net (which has densly connected cliques) hides the underlying tree structure – think of the Bayes net as looking at the tree from on top or below, thereby encoding the dense connectivity in the structure of the tree itself. All information below any clique of the tree is encoded in the upward marginal belief messages at that point (i.e. the densly connected aspects pertained lower down in the tree).","category":"page"},{"location":"principles/initializingOnBayesTree/#LU/QR-vs.-Belief-Propagation","page":"Advanced Bayes Tree Topics","title":"LU/QR vs. Belief Propagation","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"LU/QR is a special case (Parametric/Linear) of more general belief propagation. The story though is more intricate, where QR/LU assume that product-factors can be formed through the chain rule – using congruency – it is not that straight forward with general beliefs. 
In the general case we are almost forced to use belief propagation, which in turn implies special care is needed to describe the relationship between sparse factor graph fragments in cliques on the tree, and the more densely connected structure of the Bayes Net.","category":"page"},{"location":"principles/initializingOnBayesTree/#Bayes-Tree-vs-Bayes-Net","page":"Advanced Bayes Tree Topics","title":"Bayes Tree vs Bayes Net","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"The Bayes tree is a purely symbolic structure – i.e. special grouping of factors that all come from the factor graph joint product (product of independently sampled likelihood/conditional models):","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Theta Z propto prod_i Z_i=z_i Theta_i ","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"A sparse factor graph problem can be squashed into smaller dense problem of product-factor conditionals (from variable elimination). Therefore each product-factor (aka \"smart factor\" in other uses of the language) represent both the factors as well as the sequencing of cliques in that branch. This process repeats recursively from the root down to the leaves. The leaves of the tree have no further reduced product factors condensing child cliques below, and therefore sparse factor fragments can be computed to start the upward belief propagation process. More importantly, as belief propagation progresses up the tree, upward belief messages (on clique separators) capture the same structure as the densely connected Bayes net but each clique in the Bayes tree still only contains sparse fragments from the original factor graph. The structure of the tree (combined parent-child relationships) encodes the same information as the product-factor conditionals!","category":"page"},{"location":"principles/initializingOnBayesTree/#Initialization-on-the-Tree","page":"Advanced Bayes Tree Topics","title":"Initialization on the Tree","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"It more challenging but possible to initialize all variables in a factor graph through belief propagation on the Bayes tree.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"As a thought experiment: Wouldn't it be awesome if we could compile the upsolve as a symbolic process only, and only assign numerical values once during a single downsolve procedure. The origin of this idea comes from the realization that a complete upsolve on the Bayes (Junction) tree is very nearly the same thing finding good numerical initialization values for the factor graph. If the up-init-solve can be performed as a purely symbolic process, it would greatly simplify numerical computations by deferring them to the down solve alone.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Trying to do initialization for real, we might want to replace up-init-symbolic operations with numerical equivalents. 
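As a rough sketch of the symbolic side of this idea (an added illustration using only the canonical generator and the tree-building call referenced elsewhere in these docs; graphinit=false is assumed to skip numerical initialization):

using IncrementalInference

# canonical example graph from the literature, without numerical initialization
fg = generateGraph_Kaess(graphinit=false)

# construct the symbolic Bayes (Junction) tree only; drawpdf=true writes bt.pdf for inspection
tree = wipeBuildNewTree!(fg, drawpdf=true)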
Either way, it would be worth knowing what the equivalent numerical operations of a full up-init-solve of an uninitialized factor graph would look like.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"In general, if a clique can not be initialized based on information from lower down in that branch of the tree; more information is need from the parent. In the Gaussian (more accurately the congruent factor) case, all information lower down in the branch–-i.e. the relationships between variables in parent–-can be summarized by a new conditional product-factor that is computed with the probabilistic chain rule. To restate, the process of squashing the Bayes tree branch back down into a Bayes net, is effectively the the chain rule process used in variable elimination.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"note: Note\nQuestion, are cascading up and down solves are required if you do not use eliminated factor conditionals in parent cliques.","category":"page"},{"location":"principles/initializingOnBayesTree/#Gaussian-only-special-case","page":"Advanced Bayes Tree Topics","title":"Gaussian-only special case","text":"","category":"section"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"Elimination of variables and factors using chain rule reduction is a special case of belief propagation, and thus far only the reduction of congruent beliefs (such as Gaussian) is known.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"These computations can be parallelized depending on the conditional independence structure of the Bayes tree – separate branches are effectively separate chain rule instances. This is precisely the same process exploited by multi-frontal QR matrix factorization.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"On the down solve the conditionals–-from eliminated chains of previously eliminated variables and factors–-can be used for inference directly in the parent. ","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"See node x1 to x3 in IncrementalInference issue 464. It does not branch or provide additional prior information. so it is collapsed into one factor between x1 and x3, solved in the root and the individual variable can be solved by inference.","category":"page"},{"location":"principles/initializingOnBayesTree/","page":"Advanced Bayes Tree Topics","title":"Advanced Bayes Tree Topics","text":"note: Note\nQuestion, what does the Jacobian in Gaussian only case mean with regard to a symbolic upsolve?","category":"page"},{"location":"faq/#Frequently-Asked-Questions","page":"FAQ","title":"Frequently Asked Questions","text":"","category":"section"},{"location":"faq/#Factor-Graphs:-why-not-just-filter?","page":"FAQ","title":"Factor Graphs: why not just filter?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Why can't I just filter, or what is the connection with FGs? See the \"Principles\" section in the documentation. 
","category":"page"},{"location":"faq/#Why-worry-about-non-Gaussian-Probabilities","page":"FAQ","title":"Why worry about non-Gaussian Probabilities","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"The non-Gaussian/multimodal section in the docs is dedicated to precisely this question.","category":"page"},{"location":"faq/#Why-Julia","page":"FAQ","title":"Why Julia","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"The JuliaLang and (JuliaPro) is an open-source Just-In-Time (JIT) & optionally precompiled, strongly-typed, and high-performance programming language. The algorithmic code is implemented in Julia for many reasons, such as agile development, high level syntax, performance, type safety, multiple dispatch replacement for object oriented which exhibits several emergent properties, parallel computing, dynamic development, cross compilable (with gcc and clang) and foundational cross-platform (LLVM) technologies. See JuliaCon2018 highlights video. Julia can be thought of as either {C+, Mex (done right), or as a modern Fortran replacement}.","category":"page"},{"location":"faq/#Current-Julia-version?","page":"FAQ","title":"Current Julia version?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Caesar.jl and packages are currently targeting Julia version as per the local install page.","category":"page"},{"location":"faq/#Just-In-Time-Compiling-(i.e.-why-are-first-runs-slow?)","page":"FAQ","title":"Just-In-Time Compiling (i.e. why are first runs slow?)","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Julia uses just-in-time compilation (unless already pre-compiled) which takes additional time the first time a new function is called. Additional calls to a cached function are fast from the second call onwards since the static binary code is now cached and ready for use.","category":"page"},{"location":"faq/#How-does-garbage-collection-work?","page":"FAQ","title":"How does garbage collection work?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"A short description of Julia's garbage collection is described in Discourse here.","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"note: Note\nGarbage collection can be influenced in a few ways to allow more certainty about operational outcome, see the Julia Docs Garbage Collection Internal functions like enable, preserve, safepoint, etc.","category":"page"},{"location":"faq/#Using-Julia-in-real-time-systems?","page":"FAQ","title":"Using Julia in real-time systems?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"See the JuliaCon presentation by rdeits here.","category":"page"},{"location":"faq/#Can-Caesar.jl-be-used-in-other-languages-beyond-Julia?-Yes.","page":"FAQ","title":"Can Caesar.jl be used in other languages beyond Julia? Yes.","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"The Caesar.jl project is expressly focused on making this algorithmic code available to C/Fortran/C++/C#/Python/Java/JS. Julia itself offers many additional interops. ZMQ and HTTP/WebSockets are the standardized interfaces of choice, please see details at the multi-language section). 
Consider opening issues or getting in touch for more information.","category":"page"},{"location":"faq/#Can-Julia-Compile-Binaries-/-Shared-Libraries","page":"FAQ","title":"Can Julia Compile Binaries / Shared Libraries","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Yes, see the Compile Binaries Page.","category":"page"},{"location":"faq/#Can-Julia-be-Embedded-into-C/C","page":"FAQ","title":"Can Julia be Embedded into C/C++","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Yes, see the Julia embedding documentation page.","category":"page"},{"location":"faq/#ROS-Integration","page":"FAQ","title":"ROS Integration","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"ROS and ZMQ interfaces are closely related. Please see the ROS Integration Page for details on using ROS with Caesar.jl.","category":"page"},{"location":"faq/#Why-ZMQ-Middleware-Layer-(multilang)?","page":"FAQ","title":"Why ZMQ Middleware Layer (multilang)?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Zero Message Queue (ZMQ) is a widely used data transport layer used to build various other multiprocess middleware with wide support among other programming languages. Caesar.jl has on been used with a direct ZMQ type link, which is similar to a ROS workflow. Contributions are welcome for binding ZMQ endpoints for a non-ROS messaging interface.","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Note ZMQ work has been happening on and off based on behind the main priority on resolving abstractions with the DistributedFactorGraphs.jl framework. See ongoing work for the ZMQ interface.","category":"page"},{"location":"faq/#What-is-supersolve?","page":"FAQ","title":"What is supersolve?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"When multiple numerical values/solutions exists for the (or nearly) same factor graph – then solutions, including a reference solution (ground truth) can just be stacked in that variable. See and comment on a few cases here.","category":"page"},{"location":"faq/#Variable-Scope-in-For-loop-Error","page":"FAQ","title":"Variable Scope in For loop Error","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Julia wants you to be specific about global variables, and variables packed in a development script at top level are created as globals. Globals can be accessed using the global varname at the start of the context. When writing for loops (using Julia versions 0.7 through 1.3) stricter rules on global scoping applied. The purest way to ensure scope of variables are properly managed in the REPL or Juno script Main context is using the let syntax (not required post Julia 1.4).","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"fg = ...\ntree = solveTree!(fg)\n...\n# and then a loop here:\nlet tree=tree, fg=fg\nfor i 2:100\n # global tree, fg # forcing globals is the alternative\n # add variables and stuff\n ...\n # want to solve again\n tree = solveTree!(fg, tree)\n ...\n # more stuff\nend\nend # let block","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"See Stack overflow on let or the Julia docs page on scoping. Also note it is good practice to use local scope (i.e. 
inside a function) variables for performance reasons.","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"note: Note\nThis behaviour is going to change in Julia 1.5 back to what Julia 0.6 was in interactive cases, and therefore likely less of a problem in future versions. See Julia 1.5 Change Notes, ([#28789], [#33864]).","category":"page"},{"location":"faq/#How-to-Enable-@debug-Logging.jl","page":"FAQ","title":"How to Enable @debug Logging.jl","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"https://stackoverflow.com/questions/53548681/how-to-enable-debugging-messages-in-juno-julia-editor","category":"page"},{"location":"faq/#Julia-Images.jl-Axis-Convention","page":"FAQ","title":"Julia Images.jl Axis Convention","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Julia Images.jl follows the common `::Array column-major–-i.e. vertical-major–-index convention","category":"page"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"That is img[vertical, horizontal]\nSee https://evizero.github.io/Augmentor.jl/images/#Vertical-Major-vs-Horizontal-Major-1 for more details.\nAlso, https://juliaimages.org/latest/pkgs/axes/#Names-and-locations","category":"page"},{"location":"faq/#How-does-JSON-Schema-work?","page":"FAQ","title":"How does JSON-Schema work?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"Caesar.jl intends to follow json-schema.org, see step-by-step guide here.","category":"page"},{"location":"faq/#How-to-get-Julia-memory-allocation-points?","page":"FAQ","title":"How to get Julia memory allocation points?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"See discourse discussion.","category":"page"},{"location":"faq/#Increase-Linux-Open-File-Limit?","page":"FAQ","title":"Increase Linux Open File Limit?","text":"","category":"section"},{"location":"faq/","page":"FAQ","title":"FAQ","text":"If you see the error \"Open Files Limit\", please follow these intructions on your local system. 
This is likely to happen when debug code and a large number of files are stored in the general solution specific logpath.","category":"page"},{"location":"examples/canonical_graphs/#Canonical-Graphs","page":"Canonical Generators","title":"Canonical Graphs","text":"","category":"section"},{"location":"examples/canonical_graphs/","page":"Canonical Generators","title":"Canonical Generators","text":"try tab-completion in the REPL:","category":"page"},{"location":"examples/canonical_graphs/","page":"Canonical Generators","title":"Canonical Generators","text":"IncrementalInference.generateGraph_Kaess\nIncrementalInference.generateGraph_TestSymbolic\nIncrementalInference.generateGraph_CaesarRing1D\nIncrementalInference.generateGraph_LineStep\nIncrementalInference.generateGraph_EuclidDistance\nRoME.generateGraph_Circle\nRoME.generateGraph_ZeroPose\nRoME.generateGraph_Hexagonal\nRoME.generateGraph_Beehive!\nRoME.generateGraph_Helix2D!\nRoME.generateGraph_Helix2DSlew!\nRoME.generateGraph_Helix2DSpiral!","category":"page"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_Kaess","page":"Canonical Generators","title":"IncrementalInference.generateGraph_Kaess","text":"generateGraph_Kaess(; graphinit)\n\n\nCanonical example from literature, Kaess, et al.: ISAM2, IJRR, 2011.\n\nNotes\n\nPaper variable ordering: p = [:l1;:l2;:x1;:x2;:x3]\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_TestSymbolic","page":"Canonical Generators","title":"IncrementalInference.generateGraph_TestSymbolic","text":"generateGraph_TestSymbolic(; graphinit)\n\n\nCanonical example introduced by Borglab.\n\nNotes\n\nKnown variable ordering: p = [:x1; :l3; :l1; :x5; :x2; :l2; :x4; :x3]\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_CaesarRing1D","page":"Canonical Generators","title":"IncrementalInference.generateGraph_CaesarRing1D","text":"generateGraph_CaesarRing1D(; graphinit)\n\n\nCanonical example introduced originally as Caesar Hex Example.\n\nNotes\n\nPaper variable ordering: p = [:x0;:x2;:x4;:x6;:x1;:l1;:x5;:x3;]\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_LineStep","page":"Canonical Generators","title":"IncrementalInference.generateGraph_LineStep","text":"generateGraph_LineStep(\n lineLength;\n poseEvery,\n landmarkEvery,\n posePriorsAt,\n landmarkPriorsAt,\n sightDistance,\n vardims,\n noisy,\n graphinit,\n σ_pose_prior,\n σ_lm_prior,\n σ_pose_pose,\n σ_pose_lm,\n solverParams\n)\n\n\nContinuous, linear scalar and multivariate test graph generation. Follows a line with the pose id equal to the ground truth.\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#IncrementalInference.generateGraph_EuclidDistance","page":"Canonical Generators","title":"IncrementalInference.generateGraph_EuclidDistance","text":"generateGraph_EuclidDistance(; ...)\ngenerateGraph_EuclidDistance(\n points;\n dist,\n σ_prior,\n σ_dist,\n N,\n graphinit\n)\n\n\nGenerate a EuclidDistance test graph where 1 landmark position is unknown. 
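As a general usage note (a sketch assuming package defaults), each of these generators returns an ordinary factor graph that can be drawn and solved with the same calls used throughout these docs, e.g. with the hexagonal generator documented further below:

using RoME

fg = generateGraph_Hexagonal()   # canonical hexagon-with-landmark graph
drawGraph(fg, show=true)         # inspect the graph structure
tree = solveTree!(fg)            # run the standard mm-iSAM solve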
\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Circle","page":"Canonical Generators","title":"RoME.generateGraph_Circle","text":"generateGraph_Circle(; ...)\ngenerateGraph_Circle(\n poses;\n fg,\n offsetPoses,\n autoinit,\n graphinit,\n landmark,\n loopClosure,\n stopEarly,\n biasTurn,\n kappaOdo,\n cyclePoses\n)\n\n\nGenerate a canonical factor graph: driving in a circular pattern with one landmark.\n\nNotes\n\nPoses, :x0, :x1,... Pose2,\nOdometry, :x0x1f1, etc., Pose2Pose2 (Gaussian)\nOPTIONAL: 1 Landmark, :l1, Point2,\n2 Sightings, :x0l1f1, :x6l1f1, RangeBearing (Gaussian)\n\nExample\n\nusing RoME\n\nfg = generateGraph_Hexagonal()\ndrawGraph(fg, show=true)\n\nDevNotes\n\nTODO refactor to use new calcHelix_T.\n\nRelated\n\ngenerateGraph_Circle, generateGraph_Kaess, generateGraph_TwoPoseOdo\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_ZeroPose","page":"Canonical Generators","title":"RoME.generateGraph_ZeroPose","text":"generateGraph_ZeroPose(\n;\n varType,\n graphinit,\n solverParams,\n dfg,\n doRef,\n useMsgLikelihoods,\n label,\n priorType,\n μ0,\n Σ0,\n priorArgs,\n solvable,\n variableTags,\n factorTags,\n postpose_cb\n)\n\n\nGenerate a canonical factor graph with a Pose2 :x0 and MvNormal with covariance P0.\n\nNotes\n\nUse e.g. varType=Point2 to change from the default variable type Pose2.\nUse priorArgs::Tuple to override the default input arguments to priorType.\nUse callback postpose_cb(g::AbstractDFG,lastpose::Symbol) to call user operations after each pose step.\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Hexagonal","page":"Canonical Generators","title":"RoME.generateGraph_Hexagonal","text":"generateGraph_Hexagonal(\n;\n fg,\n landmark,\n loopClosure,\n N,\n autoinit,\n graphinit\n)\n\n\nGenerate a canonical factor graph: driving in a hexagonal circular pattern with one landmark.\n\nNotes\n\n7 Poses, :x0-:x6, Pose2,\n1 Landmark, :l1, Point2,\n6 Odometry, :x0x1f1, etc., Pose2Pose2 (Gaussian)\n2 Sightings, :x0l1f1, :x6l1f1, RangeBearing (Gaussian)\n\nExample\n\nusing RoME\n\nfg = generateGraph_Hexagonal()\ndrawGraph(fg, show=true)\n\nRelated\n\ngenerateGraph_Circle, generateGraph_Kaess, generateGraph_TwoPoseOdo, generateGraph_Boxes2D!\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Beehive!","page":"Canonical Generators","title":"RoME.generateGraph_Beehive!","text":"generateGraph_Beehive!(; ...)\ngenerateGraph_Beehive!(\n poseCountTarget;\n graphinit,\n dfg,\n useMsgLikelihoods,\n solvable,\n refKey,\n addLandmarks,\n landmarkSolvable,\n poseRegex,\n pose0,\n yaw0,\n μ0,\n postpose_cb,\n locality,\n atol\n)\n\n\nPretend a bee is walking in a hive where each step (pose) follows one edge of an imaginary honeycomb lattice, and at after each step a new direction left or right is stochastically chosen and the process repeats.\n\nNotes\n\nThe keyword locality=1 is a positive ::Real ∈ [0,∞) value, where higher numbers imply direction decisions are more sticky for multiple steps.\nUse keyword callback function postpose_cb = (fg, lastpose) -> ... 
to hook in your own features right after each new pose step.\n\nDevNotes\n\nTODO rewrite as a recursive generator function instead.\n\nSee also: generateGraph_Honeycomb!, generateGraph_Hexagonal, generateGraph_ZeroPose\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Helix2D!","page":"Canonical Generators","title":"RoME.generateGraph_Helix2D!","text":"generateGraph_Helix2D!(; ...)\ngenerateGraph_Helix2D!(\n numposes;\n posesperturn,\n graphinit,\n useMsgLikelihoods,\n solverParams,\n dfg,\n radius,\n spine_t,\n xr_t,\n yr_t,\n poseRegex,\n μ0,\n refKey,\n Qd,\n postpose_cb\n)\n\n\nGeneralized canonical graph generator function for helix patterns.\n\nNotes\n\nassumes poses are labeled according to r\"x\\d+\"\nGradient (i.e. angle) calculations are on the order of 1e-8.\nUse callback spine_t(t)::Complex to modify how the helix pattern is moved in x, y along the progression of t,\nSee related wrapper functions for convenient generators of helix patterns in 2D,\nReal valued xr_t(t) and yr_t(t) can be modified (and will override) complex valued spine_t instead.\nuse postpose_cb = (fg_, lastestpose) -> ... for additional user features after each new pose\ncan be used to grow a graph with repeated calls, but keyword parameters are assumed identical between calls.\n\nSee also: generateGraph_Helix2DSlew!, generateGraph_Helix2DSpiral!, generateGraph_Beehive!\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Helix2DSlew!","page":"Canonical Generators","title":"RoME.generateGraph_Helix2DSlew!","text":"generateGraph_Helix2DSlew!(; ...)\ngenerateGraph_Helix2DSlew!(\n numposes;\n slew_x,\n slew_y,\n spine_t,\n kwargs...\n)\n\n\nGenerate canonical slewed helix graph (like a flattened slinky).\n\nNotes\n\nUse slew_x and slew_y to pull the \"slinky\" out in different directions at constant rate.\nSee generalized helix generator for more details. \nDefaults are choosen to slew along x and have multple trajectory intersects between consecutive loops of the helix.\n\nRelated\n\ngenerateGraph_Helix2D!, generateGraph_Helix2DSpiral!\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/#RoME.generateGraph_Helix2DSpiral!","page":"Canonical Generators","title":"RoME.generateGraph_Helix2DSpiral!","text":"generateGraph_Helix2DSpiral!(; ...)\ngenerateGraph_Helix2DSpiral!(\n numposes;\n rate_r,\n rate_a,\n spine_t,\n kwargs...\n)\n\n\nGenerate canonical helix graph that expands along a spiral pattern, analogous flower petals.\n\nNotes\n\nThis function wraps the complex spine_t(t) function to generate the spiral pattern.\nrate_a and rate_r can be varied for different spiral behavior.\nSee generalized helix generator for more details. 
\nDefaults are choosen to slewto have multple trajectory intersects between consecutive loops of the helix and do a decent job of moving around coverage area with a relative balance of encircled area sizes.\n\nRelated \n\ngenerateGraph_Helix2D!, generateGraph_Helix2DSlew!\n\n\n\n\n\n","category":"function"},{"location":"examples/canonical_graphs/","page":"Canonical Generators","title":"Canonical Generators","text":"","category":"page"},{"location":"examples/basic_hexagonal2d/#Hexagonal-2D-SLAM-Example-(Local-Compute)","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM Example (Local Compute)","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"A simple 2D robot trajectory example is expanded below using techniques developed in simultaneous localization and mapping (SLAM). This example is available as a single script here.","category":"page"},{"location":"examples/basic_hexagonal2d/#Creating-the-Factor-Graph-with-Pose2","page":"Hexagonal 2D SLAM","title":"Creating the Factor Graph with Pose2","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"The first step is to load the required modules, and in our case we will add a few Julia processes to help with the compute later on. ","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# add more julia processes\nnprocs() < 4 ? addprocs(4-nprocs()) : nothing\n\n# tell Julia that you want to use these modules/namespaces\nusing RoME, Distributions, LinearAlgebra","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"After loading the RoME and Distributions modules, we construct a local factor graph object in memory:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# start with an empty factor graph object\nfg = initfg()\n\n# Add the first pose :x0\naddVariable!(fg, :x0, Pose2)\n\n# Add at a fixed location PriorPose2 to pin :x0 to a starting location\naddFactor!(fg, [:x0], PriorPose2(MvNormal(zeros(3), 0.01*Matrix(LinearAlgebra.I,3,3))) )","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"A factor graph object fg (of type <:AbstractDFG) has been constructed; the first pose :x0 has been added; and a prior factor setting the origin at [0,0,0] over variable node dimensions [x,y,θ] in the world frame. The type Pose2 is used to indicate what variable is stored in the node. Caesar.jl allows a little more freedom in how factor and variable nodes can be connected, while still allowing for type-assertion to occur.","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"NOTE Julia uses just-in-time compilation (unless pre-compiled) which is slow the first time a function is called but fast from the second call onwards, since the static function is now cached and ready for use.","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"The next 6 nodes are added with odometry in an counter-clockwise hexagonal manner. 
Note how variables are denoted with symbols, :x2 == Symbol(\"x2\"):","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# Drive around in a hexagon\nfor i in 0:5\n psym = Symbol(\"x$i\")\n nsym = Symbol(\"x$(i+1)\")\n addVariable!(fg, nsym, Pose2)\n pp = Pose2Pose2(MvNormal([10.0;0;pi/3], Matrix(Diagonal([0.1;0.1;0.1].^2))))\n addFactor!(fg, [psym;nsym], pp )\nend","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"At this point it would be good to see what the factor graph actually looks like:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"drawGraph(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"You should see the program evince open with this visual:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: exfg2d)","category":"page"},{"location":"examples/basic_hexagonal2d/#Performing-Inference","page":"Hexagonal 2D SLAM","title":"Performing Inference","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"Let's run the multimodal-incremental smoothing and mapping (mm-iSAM) solver against this fg object:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# perform inference, and remember first runs are slower owing to Julia's just-in-time compiling\ntree = solveTree!(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"This will take a couple of seconds (including first time compiling for all Julia processes). If you wanted to see the Bayes tree operations during solving, set the following parameters before calling the solver:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"getSolverParams(fg).drawtree = true\ngetSolverParams(fg).showtree = true","category":"page"},{"location":"examples/basic_hexagonal2d/#Some-Visualization-Plot","page":"Hexagonal 2D SLAM","title":"Some Visualization Plot","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"2D plots of the factor graph contents is provided by the RoMEPlotting package. 
See further discussion on visualizations and packages here.","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"## Inter-operating visualization packages for Caesar/RoME/IncrementalInference exist\nusing RoMEPlotting\n\n# For Juno/Jupyter style use\npl = drawPoses(fg)\n\n# For scripting use-cases you can export the image\npl |> Gadfly.PDF(\"/tmp/test.pdf\") # or PNG(...)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: test)","category":"page"},{"location":"examples/basic_hexagonal2d/#Adding-Landmarks-as-Point2","page":"Hexagonal 2D SLAM","title":"Adding Landmarks as Point2","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"Suppose some sensor detected a feature of interest with an associated range and bearing measurement. The new variable and measurement can be included into the factor graph as follows:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# Add landmarks with Bearing range measurements\naddVariable!(fg, :l1, Point2, tags=[:LANDMARK;])\np2br = Pose2Point2BearingRange(Normal(0,0.1),Normal(20.0,1.0))\naddFactor!(fg, [:x0; :l1], p2br)\n\n# Initialize :l1 numerical values but do not rerun solver\ninitAll!(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"NOTE The default behavior for initialization of variable nodes implies the last variable node added will not have any numerical values yet, please see ContinuousScalar Tutorial for deeper discussion on automatic initialization (autoinit). A slightly expanded plotting function will draw both poses and landmarks (and currently assumes labels starting with :x and :l respectively)–-notice the new landmark bottom right:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"drawPosesLandms(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: test)","category":"page"},{"location":"examples/basic_hexagonal2d/#One-type-of-Loop-Closure","page":"Hexagonal 2D SLAM","title":"One type of Loop-Closure","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"Loop-closures are a major part of SLAM based state estimation. 
One illustration is to take a second sighting of the same :l1 landmark from the last pose :x6; followed by repeating the inference and re-plotting the result–-notice the tighter confidences over all variables:","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"# Add landmarks with Bearing range measurements\np2br2 = Pose2Point2BearingRange(Normal(0,0.1),Normal(20.0,1.0))\naddFactor!(fg, [:x6; :l1], p2br2)\n\n# solve\ntree = solveTree!(fg, tree)\n\n# redraw\npl = drawPosesLandms(fg)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: test)","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"This concludes the Hexagonal 2D SLAM example.","category":"page"},{"location":"examples/basic_hexagonal2d/#Interest:-The-Bayes-(Junction)-tree","page":"Hexagonal 2D SLAM","title":"Interest: The Bayes (Junction) tree","text":"","category":"section"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"The Bayes (Junction) tree is used as an acyclic (has no loops) computational object, an exact algebraic refactorizating of factor graph, to perform the associated sum-product inference. The visual structure of the tree can be extracted by modifying the command tree = wipeBuildNewTree!(fg, drawpdf=true) to produce representations such as this in bt.pdf.","category":"page"},{"location":"examples/basic_hexagonal2d/","page":"Hexagonal 2D SLAM","title":"Hexagonal 2D SLAM","text":"(Image: exbt2d)","category":"page"},{"location":"concepts/multisession/#Multisession-Operation","page":"Multi-session/agent Solving","title":"Multisession Operation","text":"","category":"section"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Having all the data consolidated in a factor graph allows us to do something we find really exciting: reason against data for different robots, different robot sessions, even different users. Of course, this is all optional, and must be explicitly configured, but if enabled, current inference solutions can make use of historical data to continually improve their solutions.","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Consider a single robot working in a common environment that has driven around the same area a number of times and has identified a landmark that is (probably) the same. We can automatically close the loop and use the information from the prior data to improve our current solution. This is called a multisession solve.","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"To perform a multisession solve, you need to specify that a session is part of a common environment, e.g 'lab'. A user then requests a multisession solve (manually for the moment), and this creates relationships between common landmarks. The collective information is used to produce a consensus on the shared landmarks. 
A chain of session solves is then created, and the information is propagated into the individual sessions, improving their results.","category":"page"},{"location":"concepts/multisession/#Steps-in-Multisession-Solve","page":"Multi-session/agent Solving","title":"Steps in Multisession Solve","text":"","category":"section"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"The following steps are performed by the user:","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Indicate which sessions are part of a common environment - this is done via GraffSDK when the session is created\nRequest a multisession solve","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Upon request, the solver performs the following actions:","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Updates the common existing multisession landmarks with any new information (propagation from session to common information)\nBuilds common landmarks for any new sessions or updated data\nSolves the common, multisession graph\nPropagates the common consensus result to the individual sessions\nFreezes all the session landmarks so that the session solving does not update the consensus result\nRequests session solves for all the updated sessions","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Note the current approach is well positioned to transition to the \"Federated Bayes (Junction) Tree\" multisession solving method, and will be updated accordingly in due coarse. The Federated method will allow faster multi-session solving times by avoiding the current iterated approach.","category":"page"},{"location":"concepts/multisession/#Example","page":"Multi-session/agent Solving","title":"Example","text":"","category":"section"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Consider three sessions which exist in the same, shared environment. In this environment, during each session the robot identified the same l0 landmark, as shown in the below figure. (Image: Independent Sessions)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"If we examine this in terms of the estimates of the actual landmarks, we have three independent densities (blue, green, and orange) giving measures of l0 located at (20, 0):","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"(Image: Independent densities)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"Now we trigger a multisession solve. 
For each landmark that is seen in multiple sessions, we produce a common landmark (which we call a prime landmark) and link it to the session landmarks via factors - all denoted in black outline.","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"(Image: Linked landmarks)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"A multisession solve is then performed, producing a common estimate for each common (prime) landmark. In terms of densities, this is a single answer for the disparate information, as shown in red in the below figure (for a slightly different dataset):","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"(Image: Prime density)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"This information is then propagated back to the individual session landmarks, giving one common density for each landmark. As above, our green, blue, and orange individual densities are now all updated to match the consensus shown in black:","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"(Image: Prime density)","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"The session landmarks are then frozen, and individual session solves are triggered to propagate the information back into the sessions. Until the federated upgrade is completed, the above process is iterated a few times to allow information to cross propagate through all sessions. The federated tree solution requires only a single iteration up and down the federated Bayes (Junction) tree. ","category":"page"},{"location":"concepts/multisession/#Next-Steps","page":"Multi-session/agent Solving","title":"Next Steps","text":"","category":"section"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"This provides an initial implementation for stitching data from multiple sessions, robots, and users. In the short term, we may trigger this automatically for any shared environments. Multisession solving along with other automated techniques for additional measurement discovery in data allows the system to 'dream' – i.e. distilling succinct information from the large volumes of heterogeneous sensor data.","category":"page"},{"location":"concepts/multisession/","page":"Multi-session/agent Solving","title":"Multi-session/agent Solving","text":"In the medium term we will extend this functionality to operate in the Bayes tree, which we call 'federated solving', so that we perform the operation using cached results of subtrees. ","category":"page"},{"location":"concepts/dataassociation/#data_multihypo","page":"Multi-Modal/Hypothesis","title":"Data Association and Hypotheses","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Ambiguous data and processing often produce complicated data association situations. In SLAM, loop-closures are a major source of concern when developing autonomous subsystems or behaviors. 
To illustrate this point, consider the two scenarios depicted below:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"
      ","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"In conventional parametric Gaussian-only systems an incorrect loop-closure can occur, resulting in highly unstable numerical solutions. The mm-iSAM algorithm was conceived to directly address these (and other related) issues by changing the fundamental manner in which the statistical inference is performed.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"The data association problem applies well beyond just loop-closures including (but not limited to) navigation-affordance matching and discrepancy detection, and indicates the versatility of the IncrementalInference.jl standardized multihypo interface. Note that much more is possible, however, the so-called single-fraction multihypo approach already yields significant benefits and simplicity.","category":"page"},{"location":"concepts/dataassociation/#section_multihypo","page":"Multi-Modal/Hypothesis","title":"Multihypothesis","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Consider for example a regular three variable factor [:pose;:landmark;:calib] that due to some decision has a triple association uncertainty about the middle variable. This fractional certainty can easily be modelled via:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"addFactor!(fg, [:p10, :l1_a,:l1_b,:l1_c, :c], PoseLandmCalib, multihypo=[1; 0.6;0.3;0.1; 1])","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Therefore, the user can \"partition\" certainty about one variable using any arbitrary n-ary factor. The 100% certain variables are indicated as 1, while the remaining uncertainties regarding the uncertain data association decision are grouped as positive fractions that sum to 1. In this example, the values 0.6,0.3,0.1 represent the confidence about the association between :p10 and either of :l1_a,:l1_b,:l1_c.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"A more classical binary multihypothesis example is illustated in the multimodal (non-Gaussian) factor graph below:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"
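As a minimal sketch of such a binary association (the variable labels and measurement values here are hypothetical, not taken from the figure), the same multihypo= mechanism expresses uncertainty over which of two landmarks was sighted:\n\n# hedged sketch: a bearing-range sighting that may belong to either :l1a or :l1b\np2br = Pose2Point2BearingRange(Normal(0,0.03), Normal(20.0,0.5))\n# the pose :x4 is certain (1); the association mass is split evenly over the two candidate landmarks\naddFactor!(fg, [:x4; :l1a; :l1b], p2br, multihypo=[1; 0.5; 0.5])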
      ","category":"page"},{"location":"concepts/dataassociation/#Mixture-Models","page":"Multi-Modal/Hypothesis","title":"Mixture Models","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Mixture is a different kind of multi-modal modeling where different hypotheses of the measurement itself are unknown. It is possible to also model uncertain data associations as a Mixture(Prior,...) but this is a feature of factor graph modeling something different than data association uncertainty in n-ary factors: e.g. it is possible to use Mixture together with multihypo= and be sure to take the time to understand the different and how these concepts interact. The Caesar.jl solution is more general than simply allocating different mixtures to different association decisions. All these elements together can create quite the multi-modal soup. A practical example from SLAM is a loop-closure where a robot observes an object similar to one previously seen. The measurement observation is one thing (can maybe be a Mixture) and the association of this \"measurement\" with this or that variable is a multihypothesis selection.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"See the familiar RobotFourDoor.jl as example as a highly simplified case using priors where these elements effectively all the same thing. Again, Mixture is something different than multihypo= and the two can be used together.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"A mixture can be created from any existing prior or relative likelihood factor, for example:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"mlr = Mixture(LinearRelative, \n (correlator=AliasingScalarSampler(...), naive=Normal(0.5,5), lucky=Uniform(0,10)),\n [0.5;0.4;0.1])\n\naddFactor!(fg, [:x0;:x1], mlr)","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"See a example with Defining A Mixture Relative on ContinuousScalar for more details.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Mixture","category":"page"},{"location":"concepts/dataassociation/#IncrementalInference.Mixture","page":"Multi-Modal/Hypothesis","title":"IncrementalInference.Mixture","text":"struct Mixture{N, F<:AbstractFactor, S, T<:Tuple} <: AbstractFactor\n\nA Mixture object for use with either a <: AbstractPrior or <: AbstractRelative.\n\nNotes\n\nThe internal data representation is a ::NamedTuple, which allows total type-stability for all component types.\nVarious construction helpers can accept a variety of inputs, including <: AbstractArray and Tuple.\nN is the number of components used to make the mixture, so two bumps from two Normal components means N=2.\n\nDevNotes\n\nFIXME swap API order so Mixture of distibutions works like a distribtion, see Caesar.jl #808\nShould not have field mechanics.\nTODO on sampling see #1099 and #1094 and #1069 \n\nExample\n\n# prior factor\nmsp = Mixture(Prior, \n [Normal(0,0.1), Uniform(-pi/1,pi/2)],\n [0.5;0.5])\n\naddFactor!(fg, [:head], msp, tags=[:MAGNETOMETER;])\n\n# Or relative\nmlr = Mixture(LinearRelative, \n (correlator=AliasingScalarSampler(...), 
naive=Normal(0.5,5), lucky=Uniform(0,10)),\n [0.5;0.4;0.1])\n\naddFactor!(fg, [:x0;:x1], mlr)\n\n\n\n\n\n","category":"type"},{"location":"concepts/dataassociation/#Raw-Correlator-Probability-(Matched-Filter)","page":"Multi-Modal/Hypothesis","title":"Raw Correlator Probability (Matched Filter)","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Realistic measurement processes are based on physical process observations such as wave function interferometry or matched filtering correlation. This style of measurement is common in RADAR and SONAR systems, and can be directly incorporated in Caesar.jl since the measurement likelihood models need not be parametric. Here the raw correlator output from a sensor measurement can be directly modelled and included as part of the factor's algebraic likelihood probability function:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"# Building a samplable likelihood, using softmax to convert intensity-energy into a pseudo-probability\nrangeLikeli = AliasingScalarSampler(rangeIndex, Flux.softmax(correlatorIntensity))\n\n# or alternatively with existing samples similar to what a particle filter would have done\nrangeLikeli = manikde!(Euclid{1}, probPoints)\n\n# add the relative algebra, and remember you can construct your own highly non-linear factor\nrangeFct = Pose2Point2Range(rangeLikeli)\n\naddFactor!(fg, [:x8, :beacon_8], rangeFct)","category":"page"},{"location":"concepts/dataassociation/#Various-SamplableBelief-Distribution-Types","page":"Multi-Modal/Hypothesis","title":"Various SamplableBelief Distribution Types","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Also recognize that other features like multihypo= and Mixture can readily be combined with an object like the rangeFct shown above. These tricks are all possible due to the multiple dispatch magic of JuliaLang; more explicitly, the following code will all return true:","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"IIF.AliasingScalarSampler <: IIF.SamplableBelief\nIIF.Mixture <: IIF.SamplableBelief\nKDE.BallTreeDensity <: IIF.SamplableBelief\nDistribution.Rayleigh <: IIF.SamplableBelief\nDistribution.Uniform <: IIF.SamplableBelief\nDistribution.MvNormal <: IIF.SamplableBelief","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"One of the more exotic examples is to natively represent Synthetic Aperture Sonar (SAS) as a deeply non-Gaussian factor in the factor graph. See Synthetic Aperture Sonar SLAM. Also see the full AUV stack using a single reference beacon and Towards Real-Time Underwater Acoustic Navigation.","category":"page"},{"location":"concepts/dataassociation/#Null-Hypothesis","page":"Multi-Modal/Hypothesis","title":"Null Hypothesis","text":"","category":"section"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"Sometimes there is basic uncertainty about whether a measurement is at all valid. Note that the above examples (multihypo and Mixture) still accept that a certain association definitely exists. 
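For instance, the rangeFct from above can carry an uncertain (but assumed present) association over two candidate beacons (a hedged sketch; the second beacon label is hypothetical):\n\naddFactor!(fg, [:x8; :beacon_8; :beacon_9], rangeFct, multihypo=[1; 0.5; 0.5])\n\n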
A null hypothesis models the situation in which a factor might be completely bogus, in which case it should be ignored. The underlying mechanics of this approach are not entirely straightforward since removing one or more factors essentially changes the structure of the graph. That said, IncrementalInference.jl employs a reasonable stand-in solution that does not require changing the graph structure and can simply be included for any factor.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"addFactor!(fg, [:x7;:l13], Pose2Point2Range(...), nullhypo=0.1)","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"This keyword indicates to the solver that there is a 10% chance that this factor is not valid.","category":"page"},{"location":"concepts/dataassociation/","page":"Multi-Modal/Hypothesis","title":"Multi-Modal/Hypothesis","text":"note: Note\nAn entirely separate page is reserved for incorporating Flux neural network models into Caesar.jl as highly plastic and trainable (i.e. learnable) factors.","category":"page"},{"location":"concepts/stash_and_cache/#section_stash_and_cache","page":"Caching and Stashing","title":"EXPL Stash and Cache","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"warning: Warning\nStashing and Caching are new EXPERIMENTAL features (22Q2) and are not yet fully integrated throughout the overall system. See Notes below for specific considerations.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Caching aims to improve in-place, memory, and communication bandwidth requirements for factor calculations and serialization.","category":"page"},{"location":"concepts/stash_and_cache/#Preamble-Cache","page":"Caching and Stashing","title":"Preamble Cache","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"The Caesar.jl framework has a standardized feature to preload or cache important data for factor calculations the first time a factor is created/loaded into a graph (i.e. during addFactor!). The preambleCache function runs just once before any computations are performed. A default dispatch for preambleCache returns nothing as a cache object that is later used in several places throughout the code.","category":"page"},{"location":"concepts/stash_and_cache/#Overriding-preambleCache","page":"Caching and Stashing","title":"Overriding preambleCache","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"A user may choose to override the dispatch for a particular factor's preambleCache and thereby return a more intricate/optimized cache object for later use. Any object can be returned, but we strongly recommend you return a type-stable object for best performance in production. Returning non-concrete types is allowed and likely faster for development, just remember to check type-stability before calling it a day.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Whatever object is returned by the preambleCache(dfg, vars, fnc) function is referenced and duplicated within the solver code. 
By design, use of the cache is expected to occur predominantly during factor sampling, factor residual calculations, and deserialization (i.e. unpacking) of previously persisted graph objects.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"The preambleCache function has access to the parent factor graph object as well as an ordered list of the DFGVariables attached to said factor. The user-created factor type object is passed as the third argument. The combination of these three objects allows the user much freedom with regard to where and how large data might be stored in the system.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"preambleCache","category":"page"},{"location":"concepts/stash_and_cache/#IncrementalInference.preambleCache","page":"Caching and Stashing","title":"IncrementalInference.preambleCache","text":"preambleCache(dfg, vars, usrfnc)\n\n\nOverload for specific factor preamble usage.\n\nNotes:\n\nSee https://github.com/JuliaRobotics/IncrementalInference.jl/issues/1462\n\nDevNotes\n\nIntegrate into CalcFactor\nAdd threading\n\nExample:\n\nimport IncrementalInference: preambleCache\n\npreambleCache(dfg::AbstractDFG, vars::AbstractVector{<:DFGVariable}, usrfnc::MyFactor) = MyFactorCache(randn(10))\n\n# continue regular use, e.g.\nmfc = MyFactor(...)\naddFactor!(fg, [:a;:b], mfc)\n# ... \n\n\n\n\n\n","category":"function"},{"location":"concepts/stash_and_cache/#In-Place-vs.-In-Line-Cache","page":"Caching and Stashing","title":"In-Place vs. In-Line Cache","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Depending on your particular bent, two different cache models might be more appealing. The design of preambleCache does not preclude either design option, and actually promotes the use of either depending on the particular situation at hand. The purpose of preambleCache is to provide an opportunity for caching when working with factors in the factor graph rather than dictate one design over the other.","category":"page"},{"location":"concepts/stash_and_cache/#CalcFactor.cache::T","page":"Caching and Stashing","title":"CalcFactor.cache::T","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"One likely use of the preambleCache function is for in-place memory allocation for solver hot-loop operations. Consider for example a getSample or factor residual calculation that is memory intensive. The best way to improve performance is to remove any memory allocations during the hot-loop. For this reason the CalcFactor object has a cache::T field which will have exactly the type ::T that is returned by the user's preambleCache dispatch override. To use it in the factor getSample or residual functions, simply access the calcfactor.cache field.","category":"page"},{"location":"concepts/stash_and_cache/#Pulling-Data-from-Stores","page":"Caching and Stashing","title":"Pulling Data from Stores","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"The Caesar.jl framework supports various data store designs. Some of these data stores are likely best suited for in-line caching design. 
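Putting the pieces above together, a minimal sketch of an in-place cache might look as follows (the MyFactor type, its cache struct, and the buffer size are hypothetical; the same cache object could equally hold values pulled from a data store):\n\nimport IncrementalInference: preambleCache\n\n# hypothetical cache holding a preallocated workspace that is reused in the solver hot-loop\nstruct MyFactorCache\n  buffer::Vector{Float64}\nend\n\n# build the cache once, when the factor is added or loaded into the graph\npreambleCache(dfg::AbstractDFG, vars::AbstractVector{<:DFGVariable}, fnc::MyFactor) = MyFactorCache(zeros(10))\n\n# later, inside getSample or the residual function, reuse it via the CalcFactor object, e.g. cfo.cache.buffer\n\n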
Values can be retrieved from a data store during the preambleCache step, irrespective of where the data is stored. ","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"If the user chooses to store weird and wonderful caching links to alternative hardware via the described caching, go forth and be productive! Consider sharing enhancements back to the public repositories.","category":"page"},{"location":"concepts/stash_and_cache/#section_stash_unstash","page":"Caching and Stashing","title":"Stash Serialization","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"note: Note\nStashing uses Additional (Large) Data storage and retrieval following starved graph design considerations.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Some applications use graph factors with large memory requirements during computation. Often, it is not efficient/performant to store large data blobs directly within the graph when persisted. Caesar.jl therefore supports a concept called stashing (similar to starved graphs), where particular operationally important data is stored separately from the graph and can then be retrieved during the preambleCache step – a.k.a. unstashing.","category":"page"},{"location":"concepts/stash_and_cache/#Deserialize-only-Stash-Design-(i.e.-unstashing)","page":"Caching and Stashing","title":"Deserialize-only Stash Design (i.e. unstashing)","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Presently, we recommend following a deserialize-only design. This is where factor graphs are reconstituted from some persisted storage into computable form in memory, a.k.a. loadDFG. During the load steps, factors are added to the destination graph using the addFactor! calls, which in turn call preambleCache for each factor. ","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Therefore, factors which are persisted using the 'stash' methodology are only fully reconstructed after the preambleCache step, and the user is responsible for defining the preambleCache override for a particular factor. The desired stashed data should also already be available in said data store before the factor graph is loaded.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Caesar.jl does have factors that can use the stash design, but these are currently only available as experimental features. Specifically, see the ScatterAlignPose2 factor code.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Modifying the overall Caesar.jl code for both read and write stashing might be considered in future work but is not in the current roadmap.","category":"page"},{"location":"concepts/stash_and_cache/#stashcache_notes","page":"Caching and Stashing","title":"Notes","text":"","category":"section"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Please see or open issues for specific questions not yet covered here. 
You can also reach out via Slack, or contact NavAbility.io for help.","category":"page"},{"location":"concepts/stash_and_cache/","page":"Caching and Stashing","title":"Caching and Stashing","text":"Use caution in designing preambleCache for situations where multihypo= functionality is used. If factor memory is tied to specific variables, then the association ambiguities of multihypo situations at compute time must be considered. E.g. if you are storing images for two landmarks in two landmark variable hypotheses, then just remember that during the sampling or residual calculations the user cache must track which hypothesis is being used before using said data – we recommend using NamedTuple in your cache structure.\nIf using the Deserialize-Stash design, note that the appropriate data blob stores should already be attached to the destination factor graph object, else the preambleCache function will not be able to successfully access any of the getData functions you are likely to use to 'unstash' data.\nUsers can readily implement their own threading inside factor sampling and residual computations. Caching is not yet thread-safe for some internal solver side-by-side computations. Users can self-manage shared vs. separate memory for the Multithreaded Factor option, but we'd recommend reaching out to or getting involved with the threading redesign, see IIF 1094.","category":"page"},{"location":"concepts/building_graphs/#building_graphs","page":"Building Graphs","title":"Building Graphs","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Irrespective of your application - real-time robotics, batch processing of survey data, or really complex multi-hypothesis modeling - you're going to need to add factors and variables to a graph. This section discusses how to do that in Caesar.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"The following sections discuss the steps required to construct a graph and solve it:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Initializing the Factor Graph\nAdding Variables and Factors to the Graph\nSolving the Graph\nInforming the Solver About Ready Data","category":"page"},{"location":"concepts/building_graphs/#Familiar-Canonical-Factor-Graphs","page":"Building Graphs","title":"Familiar Canonical Factor Graphs","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"We start with a shortcut for quickly getting a small predefined canonical graph containing a few variables and factors. These generator functions produce a canonical factor graph object that is useful for orientation, testing, learning, or validation. 
You can generate any of these factor graphs at any time; for example, when you want to quickly test some idea midway through building a more sophisticated fg, you might just do:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"fg_ = generateGraph_Hexagonal()","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"and then work with fg_ to try out something risky.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"note: Note\nSee the Canonical Graphs page for a more complete list of existing graph generators.","category":"page"},{"location":"concepts/building_graphs/#Building-a-new-Graph","page":"Building Graphs","title":"Building a new Graph","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"The first step is to model the data (using the most appropriate factors) among variables of interest. To start modeling, first create a distributed factor graph object:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# start with an empty factor graph object\nfg = initfg()","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"initfg","category":"page"},{"location":"concepts/building_graphs/#IncrementalInference.initfg","page":"Building Graphs","title":"IncrementalInference.initfg","text":"initfg(; ...)\ninitfg(dfg; sessionname, robotname, username, cloudgraph)\n\n\nInitialize an empty in-memory DistributedFactorGraph ::DistributedFactorGraph object.\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/#Variables","page":"Building Graphs","title":"Variables","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Variables (a.k.a. poses or states in navigation lingo) are created with the addVariable! function call.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# Add the first pose :x0\naddVariable!(fg, :x0, Pose2)\n# Add a few more poses\nfor i in 1:10\n addVariable!(fg, Symbol(\"x\",i), Pose2)\nend","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Variables contain a label and a data type (e.g. in 2D, RoME.Point2 or RoME.Pose2). Note that variables are solved - i.e. they are the product, what you wish to calculate when the solver runs - so you don't provide any measurements when creating them.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addVariable!\ndeleteVariable!","category":"page"},{"location":"concepts/building_graphs/#DistributedFactorGraphs.addVariable!","page":"Building Graphs","title":"DistributedFactorGraphs.addVariable!","text":"addVariable!(dfg, variable)\n\n\nAdd a DFGVariable to a DFG.\n\n\n\n\n\naddVariable!(\n dfg,\n label,\n varTypeU;\n N,\n solvable,\n timestamp,\n nanosecondtime,\n dontmargin,\n tags,\n smalldata,\n checkduplicates,\n initsolvekeys\n)\n\n\nAdd a variable node label::Symbol to dfg::AbstractDFG, as varType<:InferenceVariable.\n\nNotes\n\nkeyword nanosecondtime is experimental and intended as the whole subsection portion – i.e. 
accurateTime = (timestamp MOD second) + Nanosecond\n\nExample\n\nfg = initfg()\naddVariable!(fg, :x0, Pose2)\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/#DistributedFactorGraphs.deleteVariable!","page":"Building Graphs","title":"DistributedFactorGraphs.deleteVariable!","text":"deleteVariable!(dfg, label)\n\n\nDelete a DFGVariable from the DFG using its label.\n\n\n\n\n\ndeleteVariable!(dfg, variable)\n\n\nDelete a referenced DFGVariable from the DFG.\n\nNotes\n\nReturns Tuple{AbstractDFGVariable, Vector{<:AbstractDFGFactor}}\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"The MM-iSAMv2 algorithm uses one of two approaches to automatically initialize variables, or they can be initialized manually.","category":"page"},{"location":"concepts/building_graphs/#Factors","page":"Building Graphs","title":"Factors","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Factors are algebraic relationships between variables based on data cues such as sensor measurements. Examples of factors are absolute (pre-resolved) GPS readings (unary factors/priors) and odometry changes between pose variables. All factors encode a stochastic measurement (measurement + error), such as below, where a generic Prior belief is added to x0 (using the addFactor! call) as a normal distribution centered around [0,0,0].","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addFactor!\ndeleteFactor!","category":"page"},{"location":"concepts/building_graphs/#DistributedFactorGraphs.addFactor!","page":"Building Graphs","title":"DistributedFactorGraphs.addFactor!","text":"Add a DFGFactor to a DFG.\n\naddFactor!(dfg, factor)\n\n\n\n\n\n\naddFactor!(dfg, variables, factor)\n\n\n\n\n\n\naddFactor!(dfg, variableLabels, factor)\n\n\n\n\n\n\naddFactor!(\n dfg,\n Xi,\n usrfnc;\n multihypo,\n nullhypo,\n solvable,\n tags,\n timestamp,\n graphinit,\n suppressChecks,\n inflation,\n namestring,\n _blockRecursion\n)\n\n\nAdd factor with user defined type <:AbstractFactor to the factor graph object. Define whether the automatic initialization of variables should be performed. 
Use the order sensitive multihypo keyword argument to define if any variables are related to data association uncertainty.\n\nExperimental\n\ninflation, to better disperse kernels before convolution solve, see IIF #1051.\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/#DistributedFactorGraphs.deleteFactor!","page":"Building Graphs","title":"DistributedFactorGraphs.deleteFactor!","text":"deleteFactor!(dfg, label; suppressGetFactor)\n\n\nDelete a DFGFactor from the DFG using its label.\n\n\n\n\n\ndeleteFactor!(dfg, factor; suppressGetFactor)\n\n\nDelete the referenced DFGFactor from the DFG.\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/#Priors","page":"Building Graphs","title":"Priors","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# Add a prior at a fixed location to pin :x0 to a starting location (0,0,pi/6.0)\naddFactor!(fg, [:x0], PriorPose2( MvNormal([0; 0; pi/6.0], Matrix(Diagonal([0.1;0.1;0.05].^2)) )))","category":"page"},{"location":"concepts/building_graphs/#Factors-Between-Variables","page":"Building Graphs","title":"Factors Between Variables","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# Add odometry indicating a zigzag movement\nfor i in 1:10\n pp = Pose2Pose2(MvNormal([10.0;0; (i % 2 == 0 ? -pi/3 : pi/3)], Matrix(Diagonal([0.1;0.1;0.1].^2))))\n addFactor!(fg, [Symbol(\"x$(i-1)\"); Symbol(\"x$(i)\")], pp )\nend","category":"page"},{"location":"concepts/building_graphs/#[OPTIONAL]-Understanding-Internal-Factor-Naming-Convention","page":"Building Graphs","title":"[OPTIONAL] Understanding Internal Factor Naming Convention","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"The factor name used by Caesar is automatically generated from the variable labels; for example ","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addFactor!(fg, [:x0; :x1],...)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"will create a factor with name :x0x1f1","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Were you to add another factor between :x0 and :x1:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addFactor!(fg, [:x0; :x1],...)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"will create a second factor with the name :x0x1f2.","category":"page"},{"location":"concepts/building_graphs/#Adding-Tags","page":"Building Graphs","title":"Adding Tags","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"It is possible to add tags to variables and factors that make later graph management tasks easier, e.g.:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"addVariable!(fg, :l7_3, Pose2, tags=[:APRILTAG; :LANDMARK])","category":"page"},{"location":"concepts/building_graphs/#Drawing-the-Factor-Graph","page":"Building Graphs","title":"Drawing the Factor Graph","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building 
Graphs","title":"Building Graphs","text":"Once you have a graph, you can visualize the graph as follows (beware though if the fg object is large):","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"# requires `sudo apt-get install graphviz\ndrawGraph(fg, show=true)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"By setting show=true, the application evince will be called to show the fg.pdf file that was created using GraphViz. A GraphPlot.jl visualization engine is also available.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"using GraphPlot\nplotDFG(fg)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"drawGraph","category":"page"},{"location":"concepts/building_graphs/#IncrementalInference.drawGraph","page":"Building Graphs","title":"IncrementalInference.drawGraph","text":"drawGraph(fgl; viewerapp, filepath, engine, show)\n\n\nDraw and show the factor graph <:AbstractDFG via system graphviz and xdot app.\n\nNotes\n\nRequires system install on Linux of sudo apt-get install xdot\nShould not be calling outside programs.\nNeed long term solution\nDFG's toDotFile a better solution – view with xdot application.\nalso try engine={\"sfdp\",\"fdp\",\"dot\",\"twopi\",\"circo\",\"neato\"}\n\nNotes:\n\nCalls external system application xdot to read the .dot file format\ntoDot(fg,file=...); @async run(`xdot file.dot`)\n\nRelated\n\ndrawGraphCliq, drawTree, printCliqSummary, spyCliqMat\n\n\n\n\n\n","category":"function"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"For more details, see the DFG docs on Drawing Graphs.","category":"page"},{"location":"concepts/building_graphs/#When-to-Instantiate-Poses-(i.e.-new-Variables-in-Factor-Graph)","page":"Building Graphs","title":"When to Instantiate Poses (i.e. new Variables in Factor Graph)","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Consider a robot traversing some area while exploring, localizing, and wanting to find strong loop-closure features for consistent mapping. The creation of new poses and landmark variables is a trade-off in computational complexity and marginalization errors made during factor graph construction. Common triggers for new poses are:","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Time-based trigger (eg. new pose a second or 5 minutes if stationary)\nDistance traveled (eg. new pose every 0.5 meters)\nRotation angle (eg. new pose every 15 degrees)","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"Computation will progress faster if poses and landmarks are very sparse. To extract the benefit of dense reconstructions, one approach is to use the factor graph as sparse index in history about the general progression of the trajectory and use additional processing from dense sensor data for high-fidelity map reconstructions. 
Either interpolations, or better, direct reconstructions from inertial data can be used for dense reconstruction.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"For completeness, one could also re-project the most meaningful sensor measurements taken between pose epochs as though measured from the pose epoch. This approach essentially marginalizes the local dead reckoning drift errors into the local interpose re-projections, but helps keep the pose count low.","category":"page"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"In addition, see Fixed-lag Solving for manually limiting the number of fluid variables during inference to a user-desired count.","category":"page"},{"location":"concepts/building_graphs/#Which-Variables-and-Factors-to-use","page":"Building Graphs","title":"Which Variables and Factors to use","text":"","category":"section"},{"location":"concepts/building_graphs/","page":"Building Graphs","title":"Building Graphs","text":"See the next page on available variables and factors","category":"page"},{"location":"examples/custom_factor_features/#Custom-Factor-Features","page":"Important Factor Features","title":"Custom Factor Features","text":"","category":"section"},{"location":"examples/custom_factor_features/#Contributing-back-to-the-Community","page":"Important Factor Features","title":"Contributing back to the Community","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"Consider contributing back: if you have developed variables and factors that may be useful to the community, please write up an issue in Caesar.jl or submit a PR to the relevant repo.","category":"page"},{"location":"examples/custom_factor_features/#whatiscalcfactor","page":"Important Factor Features","title":"What is CalcFactor","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"CalcFactor is part of the IIF interface to all factors. It contains metadata and other important bits of information that are useful in a wide swath of applications. As work requires more interesting features from the code base, it is likely that the cfo::CalcFactor object will contain such data. If not, please open an issue with Caesar.jl so that the necessary options may be added.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"The cfo object contains the field .factor::T which is the type of the user factor being used, e.g. myprior from the above example. That is cfo.factor::MyPrior. This is why getSample is using rand(cfo.factor.Z).","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"CalcFactor was introduced in IncrementalInference v0.20 to consolidate and standardize a variety of features that had previously been disparate and unwieldy.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"The MM-iSAMv2 algorithm relies on the Kolmogorov-Criteria as well as uncorrelated factor sampling. 
This means that when generating fresh samples for a factor, those samples should not depend on values of variables in the graph or independent volatile variables. That said, if you have a non-violating reason for using additional data in the factor sampling or residual calculation process, you can do so via the cf::CalcFactor interface.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"At present cf contains several main fields:","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"cf.factor::MyFactor the factor object as defined in the struct definition,\ncf.fullvariables, which can be used for large data blob retrieval such as used in Terrain Relative Navigation (TRN).\nAlso see Stashing and Caching\ncf.cache, which is user controlled via the preambleCache function, see Cache Section.\ncf.manifold, for the manifold the factor operates on.\ncf._sampleIdx is the index of which computational sample is currently being calculated.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"IncrementalInference.CalcFactor","category":"page"},{"location":"examples/custom_factor_features/#IncrementalInference.CalcFactor","page":"Important Factor Features","title":"IncrementalInference.CalcFactor","text":"Residual function for MutablePose2Pose2Gaussian.\n\nRelated\n\nPose2Pose2, Pose3Pose3, InertialPose3, DynPose2Pose2, Point2Point2, VelPoint2VelPoint2\n\n\n\n\n\n","category":"type"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"tip: Tip\nMany factors already exist in IncrementalInference, RoME, and Caesar. Please see their src directories for more details.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"warning: Warning\nThe old .specialSampler framework has been replaced with the standardized ::CalcFactor interface. See http://www.github.com/JuliaRobotics/IIF.jl/issues/467 for details.","category":"page"},{"location":"examples/custom_factor_features/#Partial-Factors","page":"Important Factor Features","title":"Partial Factors","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"In some cases a factor only affects a partial set of dimensions of a variable. 
For example a magnetometer being added onto a Pose2 variable would look something like this:","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"struct MyMagnetoPrior{T<:SamplableBelief} <: AbstractPrior\n Z::T\n partial::Tuple{Int}\nend\n\n# define a helper constructor\nMyMagnetoPrior(z) = MyMagnetoPrior(z, (3,))\n\ngetSample(cfo::CalcFactor{<:MyMagnetoPrior}) = samplePoint(cfo.factor.Z)","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"Similarly for <:IIF.AbstractRelativeMinimize, and note that the Roots version currently does not support the .partial option.","category":"page"},{"location":"examples/custom_factor_features/#Factors-supporting-a-Parametric-Solution","page":"Important Factor Features","title":"Factors supporting a Parametric Solution","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"See the parametric solve section","category":"page"},{"location":"examples/custom_factor_features/#factor_serialization","page":"Important Factor Features","title":"Standardized Factor Serialization","text":"","category":"section"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"To take advantage of features like DFG.saveDFG and DFG.loadDFG a user specified type should be able to serialize via JSON standards. The decision was taken to require bespoke factor types to always be converted into a JSON friendly struct which must be prefixed as type name with PackedMyPrior{T}. Similarly, the user must also overload Base.convert as follows:","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"# necessary for overloading Base.convert\nimport Base: convert\n\nstruct PackedMyPrior <: AbstractPackedFactor\n Z::String\nend\n\n# IIF provides convert methods for `SamplableBelief` types\nconvert(::Type{PackedMyPrior}, pr::MyPrior{<:SamplableBelief}) = PackedMyPrior(convert(PackedSamplableBelief, pr.Z))\nconvert(::Type{MyPrior}, pr::PackedMyPrior) = MyPrior(IIF.convert(SamplableBelief, pr.Z))","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"Now you should be able to saveDFG and loadDFG your own factor graph types to Caesar.jl / FileDFG standard .tar.gz format.","category":"page"},{"location":"examples/custom_factor_features/","page":"Important Factor Features","title":"Important Factor Features","text":"fg = initfg()\naddVariable!(fg, :x0, ContinuousScalar)\naddFactor!(fg, [:x0], MyPrior(Normal()))\n\n# generate /tmp/myfg.tar.gz\nsaveDFG(\"/tmp/myfg\", fg)\n\n# test loading the .tar.gz (extension optional)\nfg2 = loadDFG(\"/tmp/myfg\")\n\n# list the contents\nls(fg2), lsf(fg2)\n# should see :x0 and :x0f1 listed","category":"page"},{"location":"examples/using_ros/#ros_direct","page":"ROS Middleware","title":"ROS Direct","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Since 2020, Caesar.jl has native support for ROS via the RobotOS.jl package. 
","category":"page"},{"location":"examples/using_ros/#Load-the-ROS-Environment-Variables","page":"ROS Middleware","title":"Load the ROS Environment Variables","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"The first thing to ensure is that the ROS environment variables are loaded before launching Julia, see \"1.5 Environment setup at ros.org\", something similar to:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"source /opt/ros/noetic/setup.bash","category":"page"},{"location":"examples/using_ros/#Setup-a-Catkin-Workspace","page":"ROS Middleware","title":"Setup a Catkin Workspace","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Assuming you have bespoke msg types, we suggest using a catkin workspace of choice, for example:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"mkdir -p ~/caesar_ws/src\ncd ~/caesar_ws/src\ngit clone https://github.com/pvazteixeira/caesar_ros","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Now build and configure your workspace","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"cd ~/caesar_ws\ncatkin_make\nsource devel/setup.sh","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"This last command is important, as you must have the workspace configuration in your environment when you run the julia process, so that you can import the service specifications.","category":"page"},{"location":"examples/using_ros/#RobotOS.jl-with-Correct-Python","page":"ROS Middleware","title":"RobotOS.jl with Correct Python","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"RobotOS.jl currently uses PyCall.jl to interface through the rospy system. 
After launching Julia, make sure that PyCall is using the correct Python binary on your local system.","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"# Assuming multiprocess will be used.\nusing Distributed\n# addprocs(4)\n\n# Prepare python version\nusing Pkg\nDistributed.@everywhere using Pkg\n\nDistributed.@everywhere begin\n ENV[\"PYTHON\"] = \"/usr/bin/python3\"\n Pkg.build(\"PyCall\")\nend\n\nusing PyCall\nDistributed.@everywhere using PyCall","category":"page"},{"location":"examples/using_ros/#Load-RobotOS.jl-along-with-Caesar.jl","page":"ROS Middleware","title":"Load RobotOS.jl along with Caesar.jl","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Caesar.jl has native but optional package tools relating to RobotOS.jl (leveraging Requires.jl):","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"using RobotOS\n\n@rosimport std_msgs.msg: Header\n@rosimport sensor_msgs.msg: PointCloud2\n\nrostypegen()\n\nusing Caesar\nDistributed.@everywhere using Colors, Caesar","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Colors.jl is added as a conditional requirement to get Caesar._PCL.PointCloud support (see PCL page here).","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nImports and type generation are necessary for RobotOS and Caesar to work properly.","category":"page"},{"location":"examples/using_ros/#Prepare-Any-Outer-Objects","page":"ROS Middleware","title":"Prepare Any Outer Objects","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Usually a factor graph, detectors, or some other common objects are required. For the example let's just use a basic SLAMWrapper containing a regular fg=initfg():","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"robotslam = SLAMWrapperLocal()","category":"page"},{"location":"examples/using_ros/#Example-Caesar.jl-ROS-Handler","page":"ROS Middleware","title":"Example Caesar.jl ROS Handler","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Some function will also be required to consume the ROS traffic on any particular topic; for the example, the only extra data passed along to the handler is the SLAM wrapper object:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"function myHandler(msgdata, slam_::SLAMWrapperLocal)\n # show some header information\n @show \"myHandler\", msgdata[2].header.seq\n\n # do stuff\n # addVariable!(slam.dfg, ...)\n # addFactor!(slam.dfg, ...)\n #, etc.\n\n nothing\nend","category":"page"},{"location":"examples/using_ros/#Read-or-Write-Bagfile-Messages","page":"ROS Middleware","title":"Read or Write Bagfile Messages","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Assuming that you are working from a bagfile, the following code makes it easy to consume the bagfile directly. Alternatively, see RobotOS.jl for wiring up publishers and subscribers for live data. 
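As a hedged sketch of that live alternative (the topic name, message type, and minimal callback are placeholders; see the RobotOS.jl documentation for the publisher/subscriber API):\n\ninit_node(\"caesar_slam_listener\")\n# minimal placeholder callback; live subscribers receive the message object directly\nliveHandler(msg, slam_::SLAMWrapperLocal) = @show msg.header.seq\nsub = Subscriber{sensor_msgs.msg.PointCloud2}(\"/zed/points\", liveHandler, (robotslam,), queue_size=10)\nspin()\n\n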
Caesar.jl methods for consuming a bagfile are:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"# find the bagfile\nbagfile = joinpath(ENV[\"HOME\"],\"data/somedata.bag\")\n\n# open the file\nbagSubscriber = RosbagSubscriber(bagfile)\n\n# subscriber callbacks\nbagSubscriber(\"/zed/left/image_rect_color\", myHandler, robotslam)","category":"page"},{"location":"examples/using_ros/#Run-the-ROS-Loop","page":"ROS Middleware","title":"Run the ROS Loop","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Once everything is set up as you need, it's easy to loop over all the traffic in the bagfile (one message at a time):","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"maxloops = 1000\nrosloops = 0\nwhile loop!(bagSubscriber)\n # plumbing to limit the number of messages\n rosloops += 1\n if maxloops < rosloops\n @warn \"reached --msgloops limit of $rosloops\"\n break\n end\n # delay progress for whatever reason\n blockProgress(robotslam) # required to prevent duplicate solves occurring at the same time\nend","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nSee page on Synchronizing over the Graph","category":"page"},{"location":"examples/using_ros/#Write-Msgs-to-a-Bag","page":"ROS Middleware","title":"Write Msgs to a Bag","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"Support is also provided for writing messages to bag files with Caesar.RosbagWriter:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"# Link with ROSbag infrastructure via rospy\nusing Pkg\nENV[\"PYTHON\"] = \"/usr/bin/python3\"\nPkg.build(\"PyCall\")\nusing PyCall\nusing RobotOS\n@rosimport std_msgs.msg: String\nrostypegen()\nusing Caesar\n\nbagwr = Caesar.RosbagWriter(\"/tmp/test.bag\")\ns = std_msgs.msg.StringMsg(\"test\")\nbagwr.write_message(\"/ch1\", s)\nbagwr.close()","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"This has been tested and used with much more complicated types such as the Caesar._PCL.PCLPointCloud2.","category":"page"},{"location":"examples/using_ros/#Additional-Notes","page":"ROS Middleware","title":"Additional Notes","text":"","category":"section"},{"location":"examples/using_ros/#ROS-Conversions,-e.g.-PCL","page":"ROS Middleware","title":"ROS Conversions, e.g. 
PCL","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"By loading RobotOS.jl, the Caesar module will also load additional functionality to convert some of the basic data types between ROS and PCL familiar types, for example PCLPointCloud2:","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"wPC = Caesar._PCL.PointCloud()\nwPC2 = Caesar._PCL.PCLPointCloud2(wPC)\nrmsg = Caesar._PCL.toROSPointCloud2(wPC2);","category":"page"},{"location":"examples/using_ros/#More-Tools-for-Real-Time","page":"ROS Middleware","title":"More Tools for Real-Time","text":"","category":"section"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"See tools such as ","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"ST = manageSolveTree!(robotslam.dfg, robotslam.solveSettings, dbg=false)","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"manageSolveTree!","category":"page"},{"location":"examples/using_ros/#RoME.manageSolveTree!","page":"ROS Middleware","title":"RoME.manageSolveTree!","text":"manageSolveTree!(dfg, mss; dbg, timinglog, limitfixeddown)\n\n\nAsynchronous solver manager that can run concurrently while other Tasks are modifying a common distributed factor graph object.\n\nNotes\n\nWhen adding Variables and Factors, use solvable=0 to disable the new fragments until ready for inference.\ne.g. addVariable!(fg, :x45, Pose2, solvable=0)\nThese parts of the factor graph can simply be activated for solving setSolvable!(fg, :x45, 1)\n\n\n\n\n\n","category":"function"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"for solving a factor graph while the middleware processes are modifying the graph, while documentation is being completed see the code here: https://github.com/JuliaRobotics/RoME.jl/blob/a662d45e22ae4db2b6ee20410b00b75361294545/src/Slam.jl#L175-L288","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"To stop or trigger a new solve in the SLAM manager you can just use either of these","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"stopManageSolveTree!\ntriggerSolve!","category":"page"},{"location":"examples/using_ros/#RoME.stopManageSolveTree!","page":"ROS Middleware","title":"RoME.stopManageSolveTree!","text":"stopManageSolveTree!(slam)\n\n\nStops a manageSolveTree! session. Usually up to the user to do so as a SLAM process comes to completion.\n\nRelated\n\nmanageSolveTree!\n\n\n\n\n\n","category":"function"},{"location":"examples/using_ros/#RoME.triggerSolve!","page":"ROS Middleware","title":"RoME.triggerSolve!","text":"triggerSolve!(slam)\n\n\nTrigger a factor graph solveTree!(slam.dfg,...) after clearing the solvable buffer slam.?? (assuming the manageSolveTree! 
task is already running).\n\nNotes\n\nUsed in combination with manageSolveTree!\n\n\n\n\n\n","category":"function"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nNative code for consuming rosbags also includes methods:RosbagSubscriber, loop!, getROSPyMsgTimestamp, nanosecond2datetime","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nAdditional notes about tricks that came up during development is kept in this wiki.","category":"page"},{"location":"examples/using_ros/","page":"ROS Middleware","title":"ROS Middleware","text":"note: Note\nSee ongoing RobotOS.jl discussion on building a direct C++ interface and skipping PyCall.jl entirely: https://github.com/jdlangs/RobotOS.jl/issues/59","category":"page"},{"location":"examples/basic_slamedonut/#Range-only-SLAM,-Singular-–-i.e.-\"Under-Constrained\"","page":"Underconstrained Range-only","title":"Range only SLAM, Singular – i.e. \"Under-Constrained\"","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Keywords: underdetermined, under-constrained, range-only, singular","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"This tutorial describes a range-only system where there are always more variable dimensions than range measurements made. The error distribution over ranges could be nearly anything, but are restricted to Gaussian-only in this example to illustrate an alternative point – other examples show inference results where highly non-Gaussian error distributions are used.","category":"page"},{"location":"examples/basic_slamedonut/#Presentation-Style-Discussion","page":"Underconstrained Range-only","title":"Presentation Style Discussion","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"A presentation discussion of this example is available here:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"\n

Towards Real-Time Non-Gaussian SLAM from Dehann on Vimeo.
      ","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"A script to recreate this example is provided in RoME/examples here. This singular range-only illustration:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"\n

Multi-modal iSAM range and distance only example from Dehann on Vimeo.
      ","category":"page"},{"location":"examples/basic_slamedonut/#Quick-Install","page":"Underconstrained Range-only","title":"Quick Install","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"If you already have Julia 1.0 or above, alternatively see complete installation instructions here:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"julia> ]\n(v1.0) pkg> add RoME, Distributed\n(v1.0) pkg> add RoMEPlotting","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The Julia REPL/console is sufficient for this example (copy-paste from this page). Note that more involved work in Julia is simplified by using the Juno IDE.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Note A recent test (May 2019, IIF v0.6.0) showed a possible bug was introduced with one of the solver upgrades. THe figures shown on this example page are still, however, valid. Previous versions of the solver, such as IncrementalInference v0.4.x and v0.5.x, should still work as expected. Follow progress on issue 335 here as bug is being resolved. Previous versions of the solver can be installed with the package manager, for example: (v1.0) pkg> add IncrementalInference@v0.5.7. Please comment for further details.","category":"page"},{"location":"examples/basic_slamedonut/#Loading-The-Data","page":"Underconstrained Range-only","title":"Loading The Data","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Starting a Juno IDE or Julia REPL session, the ground truth positions for vehicle positions GTp and landmark positions GTl can be loaded into memory directly with these values:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"GTp = Dict{Symbol, Vector{Float64}}()\nGTp[:l100] = [0.0;0]\nGTp[:l101] = [50.0;0]\nGTp[:l102] = [100.0;0]\nGTp[:l103] = [100.0;50.0]\nGTp[:l104] = [100.0;100.0]\nGTp[:l105] = [50.0;100.0]\nGTp[:l106] = [0.0;100.0]\nGTp[:l107] = [0.0;50.0]\nGTp[:l108] = [0.0;-50.0]\nGTp[:l109] = [0.0;-100.0]\nGTp[:l110] = [50.0;-100.0]\nGTp[:l111] = [100.0;-100.0]\nGTp[:l112] = [100.0;-50.0]\n\nGTl = Dict{Symbol, Vector{Float64}}()\nGTl[:l1] = [10.0;30]\nGTl[:l2] = [30.0;-30]\nGTl[:l3] = [80.0;40]\nGTl[:l4] = [120.0;-50]","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE 1. that by using location indicators :l1, :l2, ... or :l100, :l101, ... is of practical benefit when visualizing with existing RoMEPlotting functions.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE 2. 
Landmarks must be in range before range measurements can be made to them.","category":"page"},{"location":"examples/basic_slamedonut/#Creating-the-Factor-Graph-with-Point2","page":"Underconstrained Range-only","title":"Creating the Factor Graph with Point2","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The first step is to load the required modules, and in our case we will add a few Julia processes to help with the compute later on. ","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# add more julia processes\nusing Distributed\nnprocs() < 4 ? addprocs(4-nprocs()) : nothing\n\n# tell Julia that you want to use these modules/namespaces\nusing RoME","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE Julia uses just-in-time compiling (unless pre-compiled), therefore each time a new function call on a Julia process will be slow, but all following calls to the same functions will be as fast as the statically compiled code.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"This example exclusively uses Point2 variable node types, which have dimension 2 and represent [x, y] position estimates in the world frame.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Next construct the factor graph containing the first pose :l100 (without any knowledge of where it is) and three measured beacons/landmarks :l1,:l2,:l3 – with prior location knowledge for :l1 and :l2:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# create the factor graph object\nfg = initfg()\n\n# first pose with no initial estimate\naddVariable!(fg, :l100, Point2)\n\n# add three landmarks\naddVariable!(fg, :l1, Point2)\naddVariable!(fg, :l2, Point2)\naddVariable!(fg, :l3, Point2)\n\n# and put priors on :l101 and :l102\naddFactor!(fg, [:l1;], PriorPoint2(MvNormal(GTl[:l1], diagm(ones(2)))) )\naddFactor!(fg, [:l2;], PriorPoint2(MvNormal(GTl[:l2], diagm(ones(2)))) )","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The PriorPoint2 is assumed to be a multivariate normal distribution of covariance diagm(ones(2)). 
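Any other <:SamplableBelief could stand in for the MvNormal above. As a purely illustrative sketch (not used in the rest of this example), a bimodal prior could be expressed with a Distributions.jl mixture:

```julia
using Distributions  # for MixtureModel; MvNormal/Normal are already available via RoME

# bimodal alternative to the Gaussian prior above (illustrative only, not executed here)
bimodalBelief = MixtureModel([MvNormal([10.0;30.0], diagm(ones(2))),
                              MvNormal([-10.0;30.0], diagm(ones(2)))])
# addFactor!(fg, [:l1;], PriorPoint2(bimodalBelief))
```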
Note the API PriorPoint2(::T) where T <: SamplableBelief = PriorPoint2{T} to accept distribution objects, discussed further in subsection Various SamplableBelief Distribution Types.","category":"page"},{"location":"examples/basic_slamedonut/#Adding-Range-Measurements-Between-Variables","page":"Underconstrained Range-only","title":"Adding Range Measurements Between Variables","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Next we connect the three range measurements from the vehicle location :l0 to the three beacons, respectively – and consider that the range measurements are completely relative between the vehicle and beacon position estimates:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# first range measurement\nrhoZ1 = norm(GTl[:l1]-GTp[:l100])\nppr = Point2Point2Range( Normal(rhoZ1, 2) )\naddFactor!(fg, [:l100;:l1], ppr)\n\n# second range measurement\nrhoZ2 = norm(GTl[:l2]-GTp[:l100])\nppr = Point2Point2Range( Normal(rhoZ2, 3.0) )\naddFactor!(fg, [:l100; :l2], ppr)\n\n# second range measurement\nrhoZ3 = norm(GTl[:l3]-GTp[:l100])\nppr = Point2Point2Range( Normal(rhoZ3, 3.0) )\naddFactor!(fg, [:l100; :l3], ppr)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The ranging measurement standard deviation of 2.0 or 3.0 is taken, assuming a Gaussian measurement assumption. Again, any distribution could have been used. The factor graph should look as follows:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"drawGraph(fg) # show the factor graph","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: rangesonlyfirstfg)","category":"page"},{"location":"examples/basic_slamedonut/#Inference-and-Visualizations","page":"Underconstrained Range-only","title":"Inference and Visualizations","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"At this point we can call the solver start interpreting the first results:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"tree = solveTree!(fg)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The factor graph figure above showed the structure between variables and factors. In order to see the numerical values contained in the factor graph, a set of tools are provided by the RoMEPlotting and KernelDensityEstimatePlotting packages. 
For more details, please see the dedicated visualization discussion here.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"First look at the two landmark positions :l1, :l2 at (10.0,30),(30.0,-30) respectively.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"using RoMEPlotting\n\nplotKDE(fg, [:l1;:l2], dims=[1;2])","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl1_2)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Similarly, the belief estimate for the first vehicle position :l100 is bi-modal, due to the intersection of two range measurements:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"plotKDE(fg, :l100, dims=[1;2], levels=6)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl100)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"An alternative plotting interface can also be used, that shows a histogram of desired elements instead:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"drawLandms(fg, from=1, to=101, contour=false, drawhist=true)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testlall)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Notice the ring of particles which represents the belief on the third beacon/landmark :l3, which was not constrained by a prior factor. Instead, the belief over the position of :l3 is being estimated simultaneous to estimating the vehicle position :l100.","category":"page"},{"location":"examples/basic_slamedonut/#Implicit-Growth-and-Decay-of-Modes-(i.e.-Hypotheses)","page":"Underconstrained Range-only","title":"Implicit Growth and Decay of Modes (i.e. Hypotheses)","text":"","category":"section"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Next consider the vehicle moving a distance of 50 units–-and by design the direction of travel is not known–-to the next true position. The video above gives away the vehicle position with the cyan line, showing travel in the shape of a lower case 'e'. 
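A single such range-only "odometry" hop between consecutive positions would look like the sketch below; it is shown only to make the pattern explicit, since the helper function that follows automates exactly these steps and is what the rest of the example uses.

```julia
# one illustrative hop from :l100 to :l101 (skip this if using vehicle_drives_to! below)
addVariable!(fg, :l101, Point2)
rho = norm(GTp[:l100] - GTp[:l101])    # true distance between consecutive positions
addFactor!(fg, [:l100; :l101], Point2Point2Range(Normal(rho, 3.0)))
```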
The following function handles (pseudo odometry) factors as range-only between positions and range-only measurement factors to beacons as the vehice travels.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"function vehicle_drives_to!(fgl::G, pos_sym::Symbol, GTp::Dict, GTl::Dict; measurelimit::R=150.0) where {G <: AbstractDFG, R <: Real}\n currvar = union(ls(fgl)...)\n prev_sym = Symbol(\"l$(maximum(Int[parse(Int,string(currvar[i])[2:end]) for i in 2:length(currvar)]))\")\n if !(pos_sym in currvar)\n println(\"Adding variable vertex $pos_sym, not yet in fgl<:AbstractDFG.\")\n addVariable!(fgl, pos_sym, Point2)\n @show rho = norm(GTp[prev_sym] - GTp[pos_sym])\n ppr = Point2Point2Range( Normal(rho, 3.0) )\n addFactor!(fgl, [prev_sym;pos_sym], ppr)\n else\n @warn \"Variable node $pos_sym already in the factor graph.\"\n end\n beacons = keys(GTl)\n for ll in beacons\n rho = norm(GTl[ll] - GTp[pos_sym])\n # Check for feasible measurements: vehicle within 150 units from the beacons/landmarks\n if rho < measurelimit\n ppr = Point2Point2Range( Normal(rho, 3.0) )\n if !(ll in currvar)\n println(\"Adding variable vertex $ll, not yet in fgl<:AbstractDFG.\")\n addVariable!(fgl, ll, Point2)\n end\n addFactor!(fgl, [pos_sym;ll], ppr)\n end\n end\n nothing\nend","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"After pasting (or running) this function in Julia, a new member definition vehicle_drives_to! can be used line any other function. Julia will handle the just-in-time compiling for the type specific function required and cach the static code for repeat executions.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE The exclamation mark at the end of the function name has no syntactic significance in Julia, since the full UTF8 character set is available for functions or variables. 
Instead, the exclamation serves as a Julia community convention to tell the caller that this function will modify the contents of at least some of the variables being passed into it – in this case the factor graph fg will be modified.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Now the actual driving event can be added to the factor graph:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"#drive to location :l101, then :l102\nvehicle_drives_to!(fg, :l101, GTp, GTl)\nvehicle_drives_to!(fg, :l102, GTp, GTl)\n\n# see the graph\ndrawGraph(fg, engine=\"neato\")","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"NOTE The distance traveled could be any combination of accrued direction and speeds, however, a straight line Gaussian error model is used to keep the visual presentation of this example as simple as possible.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The marginal posterior estimates are found by repeating inference over the factor graph, followed drawing all vehicle locations as a contour map:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# solve and show message passing on Bayes (Junction) tree\ngetSolverParams(fg).drawtree=true\ngetSolverParams(fg).showtree=true\ntree = solveTree!(fg)\n\n# draw all vehicle locations\npl = plotKDE(fg, [Symbol(\"l$(100+i)\") for i in 0:2], dims=[1;2])\n# Gadfly.draw(PDF(\"/tmp/testL100_102.pdf\", 20cm, 10cm),pl) # for storing image to disk\n\npl = plotKDE(fg, [:l3;:l4], dims=[1;2], levels=4)\n# Gadfly.draw(PNG(\"/tmp/testL3_4.png\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Notice how the vehicle positions have two hypotheses, one left to right and one diagonal right to bottom left – both are valid solutions!","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl100_102)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"The two \"free\" beacons/landmarks :l3,:l4 still have several modes each, implying insufficient data to constrain either to a strong unimodal belief.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl3_4)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"\nvehicle_drives_to!(fg, :l103, GTp, GTl)\nvehicle_drives_to!(fg, :l104, GTp, GTl)\n\ntree = solveTree!(fg)\n\npl = plotKDE(fg, [Symbol(\"l$(100+i)\") for i in 0:4], dims=[1;2])\n# Gadfly.draw(PDF(\"/tmp/testL100_104.pdf\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Moving up to position :l104 still shows strong multiodality in the vehicle position 
estimates:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl100_105)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"vehicle_drives_to!(fg, :l105, GTp, GTl)\nvehicle_drives_to!(fg, :l106, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\nvehicle_drives_to!(fg, :l107, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\nvehicle_drives_to!(fg, :l108, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\npl = plotKDE(fg, [Symbol(\"l$(100+i)\") for i in 2:8], dims=[1;2], levels=6)\n# Gadfly.draw(PDF(\"/tmp/testL103_108.pdf\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Next we see a strong return to a single dominant mode in all vehicle position estimates, owing to the increased measurements to beacons/landmarks as well as more unimodal estimates in :l3, :l4 beacon/landmark positions.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"vehicle_drives_to!(fg, :l109, GTp, GTl)\nvehicle_drives_to!(fg, :l110, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\nvehicle_drives_to!(fg, :l111, GTp, GTl)\nvehicle_drives_to!(fg, :l112, GTp, GTl)\n\ntree = solveTree!(fg)\n\n\npl = plotKDE(fg, [Symbol(\"l$(100+i)\") for i in 7:12], dims=[1;2])\n# Gadfly.draw(PDF(\"/tmp/testL106_112.pdf\", 20cm, 10cm),pl)\n\npl = plotKDE(fg, [:l1;:l2;:l3;:l4], dims=[1;2], levels=4)\n# Gadfly.draw(PDF(\"/tmp/testL1234.pdf\", 20cm, 10cm),pl)\n\npl = drawLandms(fg, from=100)\n# Gadfly.draw(PDF(\"/tmp/testLocsAll.pdf\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Several location belief estimates exhibit multimodality as the trajectory progresses (not shown), but collapses and finally collapses to a stable set of dominant position estimates.","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl106_112)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"Landmark estimates are also stable at one estimate:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testl1234)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"In addition, the SLAM 2D landmark visualization can be re-used to plot more information at once:","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"# pl = drawLandms(fg, from=100, to=200)\n# Gadfly.draw(PDF(\"/tmp/testLocsAll.pdf\", 20cm, 10cm),pl)\n\npl = drawLandms(fg)\n# Gadfly.draw(PDF(\"/tmp/testAll.pdf\", 20cm, 10cm),pl)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"(Image: testall)","category":"page"},{"location":"examples/basic_slamedonut/","page":"Underconstrained Range-only","title":"Underconstrained Range-only","text":"This example used the default of N=200 
particles per marginal belief. By increasing the number to N=300 throughout the test many more modes and interesting features can be explored, and we refer the reader to an alternative and longer discussion on the same example, in Chapter 6 here.","category":"page"},{"location":"install_viz/#Install-Visualization-Tools","page":"Installing Viz","title":"Install Visualization Tools","text":"","category":"section"},{"location":"install_viz/#2D/3D-Plotting,-Arena.jl","page":"Installing Viz","title":"2D/3D Plotting, Arena.jl","text":"","category":"section"},{"location":"install_viz/","page":"Installing Viz","title":"Installing Viz","text":"pkg> add Arena","category":"page"},{"location":"install_viz/#2D-Plotting,-RoMEPlotting.jl","page":"Installing Viz","title":"2D Plotting, RoMEPlotting.jl","text":"","category":"section"},{"location":"install_viz/","page":"Installing Viz","title":"Installing Viz","text":"note: Note\n24Q1: Plotting is being consolidated into Arena.jl and RoMEPlotting.jl will become obsolete.","category":"page"},{"location":"install_viz/","page":"Installing Viz","title":"Installing Viz","text":"RoMEPlotting.jl (2D) and Arena.jl (3D) are optional visualization packages:","category":"page"},{"location":"install_viz/","page":"Installing Viz","title":"Installing Viz","text":"pkg> add RoMEPlotting","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#fixedlag_solving","page":"Fixed-Lag Solving 2D","title":"Hexagonal 2D with Fixed-Lag Solving","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"note: Note\nThis feature has recently been updated and the documentation below needs to be updated. The new interface is greatly simplified from the example below. The results presented below are also out of date, new performance figures are expected to be faster (2Q2020). ","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"This example provides an overview of how to enable it and the benefits of using fixed-lag solving. The objective is to provide a near-constant solve time for ever-growing graphs by only recalculating the most recent portion. Think of this as a placeholder, as we develop the solution this tutorial will be updated to demonstrate how that is achieved.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Example-Code","page":"Fixed-Lag Solving 2D","title":"Example Code","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"The complete code for this example can be found in the fixed-lag branch of RoME: Hexagonal Fixed-Lag Example.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Introduction","page":"Fixed-Lag Solving 2D","title":"Introduction","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Fixed-lag solving is enabled when creating the factor-graph. Users provide a window–-the quasi fixed-lag constant (QFL)–-which defines how many of the most-recent variables should be calculated. Any other variables are 'frozen.' 
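In code, enabling the quasi fixed-lag window amounts to setting two solver parameters before solving; the same settings appear again in the full example further down, which is where the window length of 30 comes from.

```julia
fg = initfg()
getSolverParams(fg).isfixedlag = true   # 'freeze' variables older than the window
getSolverParams(fg).qfl = 30            # number of most-recent variables kept fluid
```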
The objective of this example is to explore providing a near-constant solve time for ever-growing graphs by only recalculating the most recent portion.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Example-Overview","page":"Fixed-Lag Solving 2D","title":"Example Overview","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"In the example, the basic Hexagonal 2D is grown to solve 200 variables. The original example remains the same, i.e., a vehicle is driving around in a hexagon and seeing the same bearing+range landmark as it crosses the starting point. At every 20th variable, a solve is invoked. Rather than use solveTree!(fg), the solve is performed in parts (construction of Bayes tree, solving the graph) to get performance statistics as the graph grows.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"numVariables = 200\nsolveEveryNVariables = 20\nlagLength = 30\n\n# Standard Hexagonal example for totalIterations - solve every iterationsPerSolve iterations.\nfunction runHexagonalExample(fg::G, totalIterations::Int, iterationsPerSolve::Int)::DataFrame where {G <: AbstractDFG}\n # Add the first pose :x0\n addVariable!(fg, :x0, Pose2)\n\n # dummy tree used later for incremental updates\n tree = wipeBuildNewTree!(fg)\n\n # Add at a fixed location PriorPose2 to pin :x0 to a starting location\n addFactor!(fg, [:x0], PriorPose2(MvNormal(zeros(3), 0.01*Matrix{Float64}(LinearAlgebra.I, 3,3))))\n\n # Add a landmark l1\n addVariable!(fg, :l1, Point2, tags=[:LANDMARK])\n\n # Drive around in a hexagon a number of times\n solveTimes = DataFrame(GraphSize = [], TimeBuildBayesTree = [], TimeSolveGraph = [])\n for i in 0:totalIterations\n psym = Symbol(\"x$i\")\n nsym = Symbol(\"x$(i+1)\")\n @info \"Adding pose $nsym...\"\n addVariable!(fg, nsym, Pose2)\n pp = Pose2Pose2(MvNormal([10.0;0;pi/3], Matrix(Diagonal( [0.1;0.1;0.1].^2 ) )))\n @info \"Adding odometry factor between $psym -> $nsym...\"\n addFactor!(fg, [psym;nsym], pp )\n\n if i % 6 == 0\n @info \"Creating factor between $psym and l1...\"\n p2br = Pose2Point2BearingRange(Normal(0,0.1),Normal(20.0,1.0))\n addFactor!(fg, [psym; :l1], p2br)\n end\n if i % iterationsPerSolve == 0 && i != 0\n @info \"Performing inference!\"\n if getSolverParams(fg).isfixedlag\n @info \"Quasi fixed-lag is enabled (a feature currently in testing)!\"\n fifoFreeze!(fg)\n end\n tInfer = @timed tree = solveTree!(fg, tree)\n graphSize = length([ls(fg)[1]..., ls(fg)[2]...])\n push!(solveTimes, (graphSize, tInfer[2], tInfer[2]))\n end\n end\n return solveTimes\nend","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Two cases are set up:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"One solving the full graph every time a solve is performed:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"# start with an empty factor graph object\nfg = initfg()\n# DO NOT enable fixed-lag operation\nsolverTimesForBatch = runHexagonalExample(fg, numVariables, solveEveryNVariables)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"The 
other enabling fixed-lag with a window of 20 variables:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"fgFixedLag = initfg()\nfgFixedLag.solverParams.isfixedlag = true\nfgFixedLag.solverParams.qfl = lagLength\n\nsolverTimesFixedLag = runHexagonalExample(fgFixedLag, numVariables, solveEveryNVariables)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"The resultant path of the robot can be seen by using RoMEPlotting and is drawn if the visualization lines are uncommented:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"#### Visualization\n\n# Plot the many iterations to see that it succeeded.\n# Batch\n# drawPosesLandms(fg)\n\n# Fixed lag\n# drawPosesLandms(fgFixedLag)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Lastly, the timing results of both scenarios are merged into a single DataFrame table, exported to CSV, and a summary graph is shown using GadFly.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"using Gadfly\nusing Colors\nusing CSV\n\n# Make a clean dataset\nrename!(solverTimesForBatch, :TimeBuildBayesTree => :Batch_BayedBuild, :TimeSolveGraph => :Batch_SolveGraph);\nrename!(solverTimesFixedLag, :TimeBuildBayesTree => :FixedLag_BayedBuild, :TimeSolveGraph => :FixedLag_SolveGraph);\ntimingMerged = DataFrames.join(solverTimesForBatch, solverTimesFixedLag, on=:GraphSize)\nCSV.write(\"timing_comparison.csv\", timingMerged)\n\nPP = []\npush!(PP, Gadfly.layer(x=timingMerged[:GraphSize], y=timingMerged[:FixedLag_SolveGraph], Geom.path, Theme(default_color=colorant\"green\"))[1]);\npush!(PP, Gadfly.layer(x=timingMerged[:GraphSize], y=timingMerged[:Batch_SolveGraph], Geom.path, Theme(default_color=colorant\"magenta\"))[1]);\n\nplt = Gadfly.plot(PP...,\n Guide.title(\"Solving Time vs. Iteration for Fixed-Lag Operation\"),\n Guide.xlabel(\"Solving Iteration\"),\n Guide.ylabel(\"Solving Time (seconds)\"),\n Guide.manual_color_key(\"Legend\", [\"fixed\", \"batch\"], [\"green\", \"magenta\"]))\nGadfly.draw(PNG(\"results_comparison.png\", 12cm, 15cm), plt)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Results","page":"Fixed-Lag Solving 2D","title":"Results","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"warning: Warning\nNote these results are out of date, much improved performance is possible and work is in progress to improve the documentation around this feature.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Preliminary results for the comparison can be seen below. However, this is just a start and we need to perform more testing. At the moment we are working on providing consistent results and further improving performance/flattening the fixed-lag time. It should be noted that the below graph is not to demonstrate the absolute solve time, but rather the relative behavior of full-graph solve vs. 
fixed-lag.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"(Image: Timing comparison of full solve vs. fixed-lag)","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"NOTE Work is underway (aka \"Project Tree House\") to reduce overhead computations that result in poorer fixed-lag solving times. We expect the fixed-lag performance to improve in the coming months (Written Nov 2018). Please file issues if a deeper discussion is required.","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/#Additional-Example","page":"Fixed-Lag Solving 2D","title":"Additional Example","text":"","category":"section"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"Work In Progress, but In the mean time see the following examples:","category":"page"},{"location":"examples/interm_fixedlag_hexagonal/","page":"Fixed-Lag Solving 2D","title":"Fixed-Lag Solving 2D","text":"https://github.com/JuliaRobotics/Caesar.jl/blob/master/examples/wheeled/racecar/apriltagandzed_slam.jl","category":"page"},{"location":"concepts/solving_graphs/#solving_graphs","page":"Solving Graphs","title":"Solving Graphs","text":"","category":"section"},{"location":"concepts/solving_graphs/#Non-parametric-Batch-Solve","page":"Solving Graphs","title":"Non-parametric Batch Solve","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"When you have built the graph, you can call the solver to perform inference with the following:","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"# Perform inference\ntree = solveTree!(fg) # or solveGraph!","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"The returned Bayes (Junction) tree object is described in more detail on a dedicated documentation page, while smt and hist return values most closely relate to development and debug outputs which can be ignored during general use. 
Should an error occur during, the exception information is easily accessible in the smt object (as well as file logs which default to /tmp/caesar/).","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"solveTree!","category":"page"},{"location":"concepts/solving_graphs/#IncrementalInference.solveTree!","page":"Solving Graphs","title":"IncrementalInference.solveTree!","text":"solveTree!(dfgl; ...)\nsolveTree!(\n dfgl,\n oldtree;\n timeout,\n storeOld,\n verbose,\n verbosefid,\n delaycliqs,\n recordcliqs,\n limititercliqs,\n injectDelayBefore,\n skipcliqids,\n eliminationOrder,\n eliminationConstraints,\n smtasks,\n dotreedraw,\n runtaskmonitor,\n algorithm,\n solveKey,\n multithread\n)\n\n\nPerform inference over the Bayes tree according to opt::SolverParams and keyword arguments.\n\nNotes\n\nAliased with solveGraph!\nVariety of options, including fixed-lag solving – see getSolverParams(fg) for details.\nSee online Documentation for more details: https://juliarobotics.org/Caesar.jl/latest/\nLatest result always stored in solvekey=:default.\nExperimental storeOld::Bool=true will duplicate the current result as supersolve :default_k.\nBased on solvable==1 assumption.\nlimititercliqs allows user to limit the number of iterations a specific CSM does.\nkeywords verbose and verbosefid::IOStream can be used together to to send output to file or default stdout.\nkeyword recordcliqs=[:x0; :x7...] identifies by frontals which cliques to record CSM steps.\nSee repeatCSMStep!, printCSMHistoryLogical, printCSMHistorySequential\n\nDevNotes\n\nTODO Change keyword arguments to new @parameter SolverOptions type.\n\nExample\n\n# pass in old `tree` to enable compute recycling -- see online Documentation for more details\ntree = solveTree!(fg [,tree])\n\nRelated\n\nsolveGraph!, solveCliqUp!, solveCliqDown!, buildTreeReset!, repeatCSMStep, printCSMHistoryLogical\n\n\n\n\n\n","category":"function"},{"location":"concepts/solving_graphs/#variable_init","page":"Solving Graphs","title":"Automatic vs Manual Init","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"Currently the main automatic initialization technique used by IncrementalInference.jl by delayed propagation of belief on the factor graph. This can be globally or locally controlled via:","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"getSolverParams(fg).graphinit = false\n\n# or locally at each addFactor\naddFactor!(fg, [:x0;:x1], LinearRelative(Normal()); graphinit=false)","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"Use initVariable! 
if you'd like to force a particular numerical initialization of some or all the variables.","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"initVariable!","category":"page"},{"location":"concepts/solving_graphs/#IncrementalInference.initVariable!","page":"Solving Graphs","title":"IncrementalInference.initVariable!","text":"initVariable!(\n variable::DFGVariable,\n ptsArr::ManifoldKernelDensity;\n ...\n)\ninitVariable!(\n variable::DFGVariable,\n ptsArr::ManifoldKernelDensity,\n solveKey::Symbol;\n dontmargin,\n N\n)\n\n\nMethod to manually initialize a variable using a set of points.\n\nNotes\n\nDisable automated graphinit on `addFactor!(fg, ...; graphinit=false)\nany un-initialized variables will automatically be initialized by solveTree!\n\nExample:\n\n# some variable is added to fg\naddVariable!(fg, :somepoint3, ContinuousEuclid{2})\n\n# data is organized as (row,col) == (dimension, samples)\npts = randn(2,100)\ninitVariable!(fg, :somepoint3, pts)\n\n# manifold management should be done automatically.\n# note upgrades are coming to consolidate with Manifolds.jl, see RoME #244\n\n## it is also possible to initVariable! by using existing factors, e.g.\ninitVariable!(fg, :x3, [:x2x3f1])\n\nDevNotes\n\nTODO better document graphinit and treeinit.\n\n\n\n\n\n","category":"function"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"All the variables can be initialized without solving with:","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"initAll!","category":"page"},{"location":"concepts/solving_graphs/#IncrementalInference.initAll!","page":"Solving Graphs","title":"IncrementalInference.initAll!","text":"initAll!(dfg; ...)\ninitAll!(dfg, solveKey; _parametricInit, solvable, N)\n\n\nPerform graphinit over all variables with solvable=1 (default).\n\nSee also: ensureSolvable!, (EXPERIMENTAL 'treeinit')\n\n\n\n\n\n","category":"function"},{"location":"concepts/solving_graphs/#Using-Incremental-Updates-(Clique-Recycling-I)","page":"Solving Graphs","title":"Using Incremental Updates (Clique Recycling I)","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"One of the major features of the MM-iSAMv2 algorithm (implemented by IncrementalInference.jl) is reducing computational load by recycling and marginalizing different (usually older) parts of the factor graph. In order to utilize the benefits of recycing, the previous Bayes (Junction) tree should also be provided as input (see fixed-lag examples for more details):","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"tree = solveTree!(fg, tree)","category":"page"},{"location":"concepts/solving_graphs/#Using-Clique-out-marginalization-(Clique-Recycling-II)","page":"Solving Graphs","title":"Using Clique out-marginalization (Clique Recycling II)","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"When building sysmtes with limited computation resources, the out-marginalization of cliques on the Bayes tree can be used. This approach limits the amount of variables that are inferred on each solution of the graph. This method is also a compliment to the above Incremental Recycling – these two methods can work in tandem. 
There is a default setting for a FIFO out-marginalization strategy (with some additional tricks):","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"defaultFixedLagOnTree!(fg, 50, limitfixeddown=true)","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"This call will keep the latest 50 variables fluid for inference during Bayes tree inference. The keyword limitfixeddown=true in this case will also prevent downward message passing on the Bayes tree from propagating into the out-marginalized branches on the tree. A later page in this documentation will discuss how the inference algorithm and Bayes tree aspects are put together.","category":"page"},{"location":"concepts/solving_graphs/#sync_over_graph_solvable","page":"Solving Graphs","title":"Synchronizing Over a Factor Graph","text":"","category":"section"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"When adding Variables and Factors, use solvable=0 to disable the new fragments until ready for inference, for example","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"addVariable!(fg, :x45, Pose2, solvable=0)\nnewfct = addFactor!(fg, [:x11,:x12], Pose2Pose2, solvable=0)","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"These parts of the factor graph can simply be activated for solving:","category":"page"},{"location":"concepts/solving_graphs/","page":"Solving Graphs","title":"Solving Graphs","text":"setSolvable!(fg, :x45, 1)\nsetSolvable!(fg, newfct.label, 1)","category":"page"},{"location":"principles/filterCorrespondence/#Build-your-own-(Bayes)-Filter","page":"Filters vs. Graphs","title":"Build your own (Bayes) Filter","text":"","category":"section"},{"location":"principles/filterCorrespondence/#Correspondence-with-Kalman-Filtering?","page":"Filters vs. Graphs","title":"Correspondence with Kalman Filtering?","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"A frequent discussion point is the correspondence between Kalman/particle/log-flow filtering strategies and factor graph formulations. This section aims to shed light on the relationship, and to show that factor graph interpretations are a powerful generalization of existing filtering techniques. The discussion follows a build-your-own-filter style and combines the Approximate Convolution and Multiplying Densities pages as the required prediction and update cycle steps, respectively. Using the steps described here, the user will be able to build fully-functional–-i.e. non-Gaussian–-(Bayes) filters. ","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"note: Note\nA simple 1D predict correct Bayesian filtering example (using underlying convolution and product operations of the mmisam algorithm) can be used as a rough template to familiarize yourself on the correspondence between filters and newer graph-based operations.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. 
Graphs","text":"This page tries to highlight some of the reasons why using a factor graph approach (w/ Bayes/junction tree inference) in a incremental/fixed-lag/federated sense–-e.g. simultaneous localization and mapping (SLAM) approach–-has merit. The described steps form part of the core operations used by the multimodal incremental smoothing and mapping (mmisam) algorithm.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Further topics on factor graph (and Bayes/junction tree) inference formulation, including how out-marginalization works is discussed separately as part of the Bayes tree description page. It is also worth reiterating the section on why do we even care about non-Gaussian signal processing.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"note: Note\nComing soon, the steps described on this page will be fully accessible via multi-language interfaces (middleware) – some of these interfaces already exist.","category":"page"},{"location":"principles/filterCorrespondence/#Causality-and-Markov-Assumption","page":"Filters vs. Graphs","title":"Causality and Markov Assumption","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"WIP: Causal connection explanation: How is the graph based method the same as Kalman filtering variants (UKF, EKF), including Bayesian filtering (PF, etc.), and the Hidden Markov Model (HMM) methodology. ","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Furthermore, see below for connection to EKF-SLAM too.","category":"page"},{"location":"principles/filterCorrespondence/#Joint-Probability-and-Chapman-Kolmogorov","page":"Filters vs. Graphs","title":"Joint Probability and Chapman-Kolmogorov","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"WIP; The high level task is to \"invert\" measurements Z give the state of the world Theta","category":"page"},{"location":"principles/filterCorrespondence/#Maximum-Likelihood-vs.-Message-Passing","page":"Filters vs. Graphs","title":"Maximum Likelihood vs. Message Passing","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"WIP; This dicussion will lead towards Bayesian Networks (Pearl) and Bayes Trees (Kaess et al., Fourie et al.).","category":"page"},{"location":"principles/filterCorrespondence/#The-Target-Tracking-Problem-(Conventional-Filtering)","page":"Filters vs. Graphs","title":"The Target Tracking Problem (Conventional Filtering)","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Consider a common example, two dimensional target tracking, where a projectile transits over a tracking station using various sensing technologies [Zarchan 2013]. Position and velocity estimates of the target","category":"page"},{"location":"principles/filterCorrespondence/#Prediction-Step-using-a-Factor-Graph","page":"Filters vs. Graphs","title":"Prediction Step using a Factor Graph","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. 
Graphs","text":"Assume a constant velocity model from which the estimate will be updated through the measurement model described in the next section. A constant velocity model is taken as (cartesian)","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"fracdxdt = 0 + eta_x\nfracdydt = 0 + eta_y","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"or polar coordinates","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"fracdrhodt = 0 + eta_rho\nfracdthetadt = 0 + eta_theta","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"In this example, noise is introduced as an affine slack variable \\eta, but could be added as any part of the process model:","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"eta_j sim p()","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"where p is any allowable noise probability density/distribution model – discussed more in the next section.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"After integration (assume zeroth order) the associated residual function can be constructed:","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"delta_i (theta_k theta_k-1 fracd theta_kdt Delta t) = theta_k - (theta_k-1 + fracd theta_kdt Delta t)","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Filter prediction steps are synonymous with a binary factor (conditional likelihood) between two variables where a prior estimate from one variable is projected (by means of a convolution) to the next variable. The convolutional principle page describes a more detailed example on how a convolution can be computed. ","category":"page"},{"location":"principles/filterCorrespondence/#Measurement-Step-using-a-Factor-Graph","page":"Filters vs. Graphs","title":"Measurement Step using a Factor Graph","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"The measurement update is a product operation of infinite functional objects (probability densities)","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"p(X_k X_k-1 Z_a Z_b) approx p(X_k X_k-1 Z_a) times p(X_k Z_b)","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"where Z_. represents conditional information for two beliefs on the same variable. The product of the two functional estimates (beliefs) are multiplied by a stochastic algorithm described in more detail on the multiplying functions page.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Direct state observations can be added to the factor graph as prior factors directly on the variables. 
An illustration of both predictions (binary likelihood process model) and direct observations (measurements) is presented:","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"

(Image: prediction and direct observation factors)
      ","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Alternatively, indirect measurements of the state variables are should be modeled with the most sensible function","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"y = h(theta eta)\ndelta_j(theta_j eta_j) = ominus h_j(theta_j eta_j) oplus y_j","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"which approximates the underlying (on-manifold) stochastics and physics of the process at hand. The measurement models can be used to project belief through a measurement function, and should be recognized as a standard representation for a Hidden Markov Model (HMM):","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"

      \n\n

      ","category":"page"},{"location":"principles/filterCorrespondence/#Beyond-Filtering","page":"Filters vs. Graphs","title":"Beyond Filtering","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"Consider a multi-sensory system along with data transmission delays, variable sampling rates, etc.; when designing a filtering system to track one or multiple targets, it quickly becomes difficult to augment state vectors with the required state and measurement histories. In contrast, the factor graph as a language allows for heterogeneous data streams to be combined in a common inference framework, and is discussed further in the building distributed factor graphs section.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"note: Note\nFactor graphs are constructed along with the evolution of time which allows the mmisam inference algorithm to resolve variable marginal estimates both forward and backwards in time. Conventional filtering only allows for forward-backward \"smoothing\" as two separate processes. When inferring over a factor graph, all variables and factors are considered simultaneously according the topological connectivity irrespective of when and where which measurements were made or communicated – as long as the factor graph (probabilistic model) captures the stochastics of the situation with sufficient accuracy. ","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"TODO: Multi-modal (belief) vs. multi-hypothesis – see thesis work on multimodal solutions in the mean time.","category":"page"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"note: Note\nMmisam allows for parametric, non-parametric, or intensity noise models which can be incorporated into any differentiable residual function.","category":"page"},{"location":"principles/filterCorrespondence/#Anecdotal-Example-(EKF-SLAM-/-MSC-KF)","page":"Filters vs. Graphs","title":"Anecdotal Example (EKF-SLAM / MSC-KF)","text":"","category":"section"},{"location":"principles/filterCorrespondence/","page":"Filters vs. Graphs","title":"Filters vs. Graphs","text":"WIP: Explain how this method is similar to EKF-SLAM and MSC-KF...","category":"page"},{"location":"examples/basic_definingfactors/#custom_prior_factor","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"","category":"section"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Julia's type inference allows overloading of member functions outside a module. Therefore new factors can be defined at any time. 
","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Required Brief description\nMyFactor struct Prior (<:AbstractPrior) factor definition\nOptional methods Brief description\ngetSample(cfo::CalcFactor{<:MyFactor}) Get a sample from the measurement model","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"To better illustrate, in this example we will add new factors into the Main context after construction of the factor graph has already begun.","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"tip: Tip\nIIF is a convenient const alias of the module IncrementalInference, similarly AMP for ApproxManifoldProducts.","category":"page"},{"location":"examples/basic_definingfactors/#Defining-a-New-Prior-(:AbsoluteFactor)","page":"Custom Prior Factor","title":"Defining a New Prior (<:AbsoluteFactor)","text":"","category":"section"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Now lets define our own prior, MyPrior which allows for arbitrary distributions that inherit from <: IIF.SamplableBelief:","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"struct MyPrior{T <: SamplableBelief} <: IIF.AbstractPrior\n Z::T\nend","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"New priors must inheret from IIF.AbstractPrior, and usually takes a user input <:SamplableBelief as probabilistic model. <:AbstractPrior is a unary factor that introduces absolute information about only one variable.","category":"page"},{"location":"examples/basic_definingfactors/#specialized_getSample","page":"Custom Prior Factor","title":"Specialized getSample (if .Z)","text":"","category":"section"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Caesar.jl uses a convention (non-binding) to simplify factor definitions in easier cases, but not restrict more complicated cases – a default getSample function already exists in IIF which assumes the field .Z <: SamplableBelief is used to generate the random sample values. So, the example above actually does not require the user to provide a specific getSample(cf::CalcFactor{<:MyPrior}) dispatch. ","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"For the sake of the tutorial, let's write one anyway. Remember that we are now overriding the IIF API with a new dispatch, for that we need to import the function","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"import IncrementalInference: getSample\n\n# adding our own specialized dispatch on getSample\nIIF.getSample(cfo::CalcFactor{<:MyPrior}) = rand(cfo.factor.Z)","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"It is important to note that for <:AbstractPrior the getSample must return a point on the manifold, not a tangent vector or coordinate. 
","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"To recap, the getSample function for priors returns a measurement sample as points on the manifold.","category":"page"},{"location":"examples/basic_definingfactors/#Ready-to-Use","page":"Custom Prior Factor","title":"Ready to Use","text":"","category":"section"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"This new prior can now readily be added to an ongoing factor graph:","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"# lets generate a random nonparametric belief\n\npts = [samplePoint(getManifold(Position{1}), Normal(8.0,2.0)) for _=1:75]\nsomeBelief = manikde!(Position{1}, pts)\n\n# and build your new factor as an object\nmyprior = MyPrior(someBelief)","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"and add it to the existing factor graph from earlier, lets say:","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"addFactor!(fg, [:x1], myprior)","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"note: Note\nVariable types Postion{1} or ContinuousEuclid{1} are algebraically equivalent.","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"That's it, this factor is now part of the graph. This should be a solvable graph:","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"solveGraph!(fg); # exact alias of solveTree!(fg)","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"Later we will see how to ensure these new factors can be properly serialized to work with features like saveDFG and loadDFG. See What is CalcFactor for more details.","category":"page"},{"location":"examples/basic_definingfactors/","page":"Custom Prior Factor","title":"Custom Prior Factor","text":"See the next page on how to build your own Custom Relative Factor. Serialization of factors is also discussed in more detail at Standardized Factor Serialization.","category":"page"},{"location":"concepts/arena_visualizations/#visualization_3d","page":"Visualization (3D)","title":"Visualization 3D","text":"","category":"section"},{"location":"concepts/arena_visualizations/#Introduction","page":"Visualization (3D)","title":"Introduction","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Over time, Caesar.jl/Arena.jl has used at various different 3D visualization technologies. 
","category":"page"},{"location":"concepts/arena_visualizations/#Arena.jl-Visualization","page":"Visualization (3D)","title":"Arena.jl Visualization","text":"","category":"section"},{"location":"concepts/arena_visualizations/#viz_pointcloud","page":"Visualization (3D)","title":"Plotting a PointCloud","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Visualization support for point clouds is available through Arena and Caesar. The follow example shows some of the basics:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"using Arena\nusing Caesar\nusing Downloads\nusing DelimitedFiles\nusing LasIO\nusing Test\n\n##\n\nfunction downloadTestData(datafile, url)\n if 0 === Base.filesize(datafile)\n Base.mkpath(dirname(datafile))\n @info \"Downloading $url\"\n Downloads.download(url, datafile)\n end\n return datafile\nend\n\ntestdatafolder = joinpath(tempdir(), \"caesar\", \"testdata\") # \"/tmp/caesar/testdata/\"\n\nlidar_terr1_file = joinpath(testdatafolder,\"lidar\",\"simpleICP\",\"terrestrial_lidar1.xyz\")\nif !isfile(lidar_terr1_file)\n lidar_terr1_url = \"https://github.com/JuliaRobotics/CaesarTestData.jl/raw/main/data/lidar/simpleICP/terrestrial_lidar1.xyz\"\n downloadTestData(lidar_terr1_file,lidar_terr1_url)\nend\n\n# load the data to memory\nX_fix = readdlm(lidar_terr1_file, Float32)\n# convert data to PCL types\npc_fix = Caesar._PCL.PointCloud(X_fix);\n\n\npl = Arena.plotPointCloud(pc_fix)\n","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"This should result in a plot similar to:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"
      ","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"note: Note\n24Q1: Currently work is underway to better standardize within the Julia ecosystem, with the 4th generation of Arena.jl – note that this is work in progress. Information about legacy generations is included below.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"For more formal visualization support, contact www.NavAbility.io via email or slack. ","category":"page"},{"location":"concepts/arena_visualizations/#4th-Generation-Dev-Scripts-using-Makie.jl","page":"Visualization (3D)","title":"4th Generation Dev Scripts using Makie.jl","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Working towards new Makie.jl. Makie supports both GL and WGL, including 3rd party libraries such as three.js (previously used via MeshCat.jl, see Legacy section below.).","category":"page"},{"location":"concepts/arena_visualizations/#viz_pointcloud_makie","page":"Visualization (3D)","title":"Visualizing Point Clouds","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Point clouds could be massive, on the order of a million points or more. Makie.jl has good performance for handling such large point cloud datasets. Here is a quick example script.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"using Makie, GLMakie\n\n# n x 3 matrix of 3D points in pointcloud\npts1 = randn(100,3)\npts2 = randn(100,3)\n\n# plot first and update with second\nplt = scatter(pts1[:,1],pts1[:,2],pts1[:,3], color=pts1[:,3])\nscatter!(pts2[:,1],pts2[:,2],pts2[:,3], color=-pts2[:,3])","category":"page"},{"location":"concepts/arena_visualizations/#Visualizing-with-Arena.jl","page":"Visualization (3D)","title":"Visualizing with Arena.jl","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"warning: Warning\nArena.jl is currently out of date since the package will likely support Makie via both GL and WGL interfaces. Makie.jl has been receiving much attention over the past years and starting to mature to a point where Arena.jl can be revived again. 2D plotting is done via RoMEPlotting.jl.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"The sections below discuss 3D visualization techniques available to the Caesar.jl robot navigation system. Caesar.jl uses the Arena.jl package for all the visualization requirements. This part of the documentation discusses the robotic visualization aspects supported by Arena.jl. Arena.jl supports a wide variety of general visualization as well as developer visualization tools more focused on research and development. 
The visualizations are also intended to help with subgraph plotting for finding loop closures in data or compare two datasets.","category":"page"},{"location":"concepts/arena_visualizations/#Legacy-Visualizers","page":"Visualization (3D)","title":"Legacy Visualizers","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Previous generations used various technologies, including WebGL and three.js by means of the MeshCat.jl package. Previous incarnations used a client side installation of VTK by means of the DrakeVisualizer.jl and Director libraries. Different 2D plotting libraries have also been used, with evolutions to improve usability for a wider user base. Each epoch has been aimed at reducing dependencies and increasing multi-platform support.","category":"page"},{"location":"concepts/arena_visualizations/#3rd-Generation-MeshCat.jl-(Three.js)","page":"Visualization (3D)","title":"3rd Generation MeshCat.jl (Three.js)","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"For the latest work on using MeshCat.jl, see proof or concept examples in Amphitheater.jl (1Q20). The code below inspired the Amphitheater work.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"note: Note\nSee installation page for instructions.","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Factor graphs of two or three dimensions can be visualized with the 3D visualizations provided by Arena.jl and it's dependencies. The 2D example above and also be visualized in a 3D space with the commands:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"vc = startdefaultvisualization() # to load a DrakeVisualizer/Director process instance\nvisualize(fg, vc, drawlandms=false)\n# visualizeallposes!(vc, fg, drawlandms=false)","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Here is a basic example of using visualization and multi-core factor graph solving:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"addprocs(2)\nusing Caesar, RoME, TransformUtils, Distributions\n\n# load scene and ROV model (might experience UDP packet loss LCM buffer not set)\nsc1 = loadmodel(:scene01); sc1(vc)\nrovt = loadmodel(:rov); rovt(vc)\n\ninitCov = 0.001*eye(6); [initCov[i,i] = 0.00001 for i in 4:6];\nodoCov = 0.0001*eye(6); [odoCov[i,i] = 0.00001 for i in 4:6];\nrangecov, bearingcov = 3e-4, 2e-3\n\n# start and add to a factor graph\nfg = identitypose6fg(initCov=initCov)\ntf = SE3([0.0;0.7;0.0], Euler(pi/4,0.0,0.0) )\naddOdoFG!(fg, Pose3Pose3(MvNormal(veeEuler(tf), odoCov) ) )\n\naddLinearArrayConstraint(fg, (4.0, 0.0), :x0, :l1, rangecov=rangecov,bearingcov=bearingcov)\naddLinearArrayConstraint(fg, (4.0, 0.0), :x1, :l1, rangecov=rangecov,bearingcov=bearingcov)\n\nsolveBatch!(fg)\n\nusing Arena\n\nvc = startdefaultvisualization()\nvisualize(fg, vc, drawlandms=true, densitymeshes=[:l1;:x2])\nvisualizeDensityMesh!(vc, fg, :l1)\n# visualizeallposes!(vc, fg, drawlandms=false)","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization 
(3D)","title":"Visualization (3D)","text":"For more information see JuliaRobotcs/MeshCat.jl.","category":"page"},{"location":"concepts/arena_visualizations/#2nd-Generation-3D-Viewer-(VTK-/-Director)","page":"Visualization (3D)","title":"2nd Generation 3D Viewer (VTK / Director)","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"note: Note\nThis code is obsolete","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"Previous versions used the much larger VTK based Director available via DrakeVisualizer.jl package. This requires the following preinstalled packages:","category":"page"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":" sudo apt-get install libvtk5-qt4-dev python-vtk","category":"page"},{"location":"concepts/arena_visualizations/#1st-Generation-MIT-LCM-Collections-viewer","page":"Visualization (3D)","title":"1st Generation MIT LCM Collections viewer","text":"","category":"section"},{"location":"concepts/arena_visualizations/","page":"Visualization (3D)","title":"Visualization (3D)","text":"This code has been removed.","category":"page"},{"location":"concepts/zero_install/#Using-The-NavAbility-Cloud","page":"Zero Install Solution","title":"Using The NavAbility Cloud","text":"","category":"section"},{"location":"concepts/zero_install/","page":"Zero Install Solution","title":"Zero Install Solution","text":"See NavAbilitySDK for details. These features will include Multi-session/agent support.","category":"page"},{"location":"principles/approxConvDensities/#Principle:-Approximate-Convolutions","page":"Generic Convolutions","title":"Principle: Approximate Convolutions","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"This example illustrates a central concept of approximating the convolution of belief density functions. Convolutions are required to compute (estimate) the probabilistic chain rule with conditional probability density functions. One easy illustration is robotics where an odometry chain of poses has a continuous increase–-or spreading–-of the confidence/uncertainty of a next pose. 
This tutorial will demonstrate that process.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"This page describes a Julia language interface, followed by a CaesarZMQ interface; a link to the mathematical description is provided thereafter.","category":"page"},{"location":"principles/approxConvDensities/#Convolutions-of-Infinite-Objects-(Functionals)","page":"Generic Convolutions","title":"Convolutions of Infinite Objects (Functionals)","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Consider the following vehicle odometry prediction (probabilistic) operation, where the odometry measurement Z is an independent stochastic process from the prior belief on pose X0","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"p(X_1 | X_0, Z) \\propto p(Z | X_0, X_1) p(X_0)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"and recognize this process as a convolution operation where the prior belief on X0 is spread to a less certain prediction of pose X1. The figure below shows an example quasi-deterministic convolution of the green density with the red density, which results in the black density below:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"(Image: Bayes/Junction)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Note that this operation is precisely the same as a prediction step in filtering applications, where the state transition model (usually annotated as d/dt x = f(x, z)) is here represented by the conditional belief p(Z | X_0, X_1).","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The convolution computation described above is a core operation required for solving the Chapman-Kolmogorov transit equations.","category":"page"},{"location":"principles/approxConvDensities/#Underlying-Mathematical-Operations","page":"Generic Convolutions","title":"Underlying Mathematical Operations","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"In order to compute generic convolutions, the mmisam algorithm uses non-linear gradient descent to resolve estimates of the target variable based on the values of other dependent variables. The conditional likelihood (multidimensional factor) is based on a residual function:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"z_i = \\delta_i (\\theta_i)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"where z_i is the innovation of any smooth, twice-differentiable residual function delta. The residual function depends on specific variables collected as theta_i. 
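As a minimal code sketch of such a residual (MyOdo is a hypothetical one-dimensional odometry factor written against the CalcFactor convention used elsewhere in these docs, not a released RoME/IIF factor):\n\nusing IncrementalInference, Manifolds\n\nstruct MyOdo{T <: SamplableBelief} <: IIF.AbstractManifoldMinimize\n  Z::T\nend\n\n# this sketch factor lives on the 1D translation group (the same manifold as ContinuousScalar)\nIIF.getManifold(::MyOdo) = TranslationGroup(1)\n\n# residual: innovation between the measured and the predicted relative displacement\n(cfo::CalcFactor{<:MyOdo})(z, x0, x1) = z .- (x1 .- x0)\n\n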
The IIF code supports both root finding and trust-region minimization operations, which are provided by the NLsolve.jl and Optim.jl packages respectively.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The choice between root finding or minimization is a performance consideration only. Minimization of the residual squared will always work, but certain situations allow direct root finding to be used. If the residual function is guaranteed to cross zero (i.e. z*=0) the root finding approach can be used. Each measurement function has a certain number of dimensions – e.g. ranges or bearings are dimension one, and an inter Pose2 rigid transform (delta x, y, theta) is dimension 3. If the variable being resolved has larger dimension than the measurement residual, then the minimization approach must be used.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The method of solving the target variable is to fix all other variable values and resolve, sample by sample, the particle estimates of the target. The Julia programming language has good support for functional programming and is used extensively in the IIF implementation to utilize user defined functions to resolve any variable, including the null-hypothesis and multi-hypothesis generalizations.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The following section illustrates a single convolution operation by using a few high level and some low level function calls. An additional tutorial exists where a related example in one dimension is performed as a complete factor graph solution/estimation problem.","category":"page"},{"location":"principles/approxConvDensities/#Previous-Text-(to-be-merged-here)","page":"Generic Convolutions","title":"Previous Text (to be merged here)","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Proposal distributions are computed by means of an (analytical or numerical – i.e. \"algebraic\") factor which defines a residual function:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"\\delta : S \\times \\Eta \\rightarrow \\mathcal{R}","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"where S \\times \\Eta is the domain such that \\theta_i \\in S, \\eta \\sim P(\\Eta), and P(\\cdot) is a probability.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"A trust-region, nonlinear gradient descent method is used to enforce the residual function delta (theta_S) in a leave-one-out-Gibbs strategy for all the factors and variables in each clique. Each time, a factor residual is enforced for another particle along with a sample from the stochastic noise term. 
Solutions are found either through root finding on \"full dimension\" equations (source code here):","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"\\text{solve}_{\\theta_i} \\; \\text{s.t.} \\; 0 = \\delta(\\theta_S, \\eta)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"or through minimization of \"low dimension\" equations (source code here) that might not have any roots in theta_i:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"\\text{argmin}_{\\theta_i} \\; \\delta(\\theta_S, \\eta)^2","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Gradient descent methods are obtained from the Julia Package community, namely NLsolve.jl and Optim.jl.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The factor noise term can be any samplable belief (a.k.a. IIF.SamplableBelief), either through algebraic modeling, or (critically) directly from the sensor measurement that is driven by the underlying physics process. Parametric factors (Distributions.jl) or direct physical measurement noise can be used via AliasingScalarSampler or KernelDensityEstimate.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nAlso see [1.2], Chap. 5, Approximate Convolutions for more details.","category":"page"},{"location":"principles/approxConvDensities/#Illustrated-Calculation-in-Julia","page":"Generic Convolutions","title":"Illustrated Calculation in Julia","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The IncrementalInference.jl package provides a generic interface for estimating the convolution of full functional objects given some user specified residual or cost function. The residual/cost function is then used, with the help of non-linear gradient descent, to project/resolve a set of particles for any one variable associated with any factor. In the binary variable factor case, such as the odometry tutorial, either pose X2 will be resolved from X1 using the user supplied likelihood residual function, or vice versa for X1 from X2. ","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nNote that in a factor graph sense, the flow of time is captured in the structure of the graph, and a requirement of the IncrementalInference system is that factors can be resolved towards any variable, given current estimates on all other variables connected to that factor. Furthermore, this forwards or backwards resolving/convolution through a factor should adhere to the Kolmogorov Criterion of reversibility to ensure that detailed balance is maintained in the overall marginal posterior solutions.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The IncrementalInference (IIF) package provides a few generic conditional likelihood functions such as LinearRelative or MixtureRelative, which we will use in this illustration. 
","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nNote that the RoME.jl package provides many more factors that are useful to robotics applications. For a listing of current factors see this docs page, details on developing your own factors on this page. One of the clear design objectives of the IIF package was to allow easier user extension of arbitrary residual functions that allows for vast capacity to represent non-Gaussian stochastic processes.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Consider a robot traveling in one dimension, progressing along the x-axis at varying speed. Lets assume pose locations are determined by a constant delta-time rule of say one pose every second, named X0, X1, X2, and so on.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nNote the bread-crum discretization of the trajectory history by means of poses can later be used to allow estimation of previously unknown mapping parameters simultaneous to the ongoing localization problem.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Lets a few basic factor graph operations to develop the desired convolutions:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"using IncrementalInference\n\n# empty factor graph container\nfg = initfg()\n\n# add two variables of interest\naddVariable!(fg, :x0, ContinuousScalar)\naddVariable!(fg, :x1, ContinuousScalar)\n\n# gauge the solution by adding the first prior information that represents all history up to the current starting position for the robot\npr = Prior(Normal(0.0, 0.1))\naddFactor!(fg, [:x0], pr)\n\n# numerically initialize variable :x0 -- this avoids repeat computations later (specific to this tutorial)\ndoautoinit!(fg, :x0)\n\n# lastly add the odometry conditional likelihood function between the two variables of interest\nodo = LinearConditional(Rayleigh(...))\naddFactor!(fg, [:x0;:x1], odo) # note the list is order sensitive","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The code block above (not solved yet) describes a algebraic setup exactly equivalent to the convolution equation presented at the top of this page. ","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"note: Note\nIIF does not require the distribution functions to only be parametric, such as Normal, Rayleigh, mixture models, but also allows intensity based values or kernel density estimates. 
Parametric types are just used here for ease of illustration.","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"To perform an stochastic approximate convolution with the odometry conditional, one can simply call a low level function used the mmisam solver:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"pts = approxConvBelief(fg, :x0x1f1, :x1) |> getPoints","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The approxConvBelief function call reads as a operation on fg which won't influence any values of parameter list (common Julia exclamation mark convention) and must use the first factor :x0x1f1 to resolve a convolution on target variable :x1. Implicitly, this result is based on the current estimate contained in :x0. The value of pts is a ::Array{Float64,2} where the rows represent the different dimensions (1-D in this case) and the columns are each of the different samples drawn from the intermediate posterior (i.e. convolution result). ","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"approxConvBelief","category":"page"},{"location":"principles/approxConvDensities/#IncrementalInference.approxConvBelief","page":"Generic Convolutions","title":"IncrementalInference.approxConvBelief","text":"approxConvBelief(dfg, from, target; ...)\napproxConvBelief(\n dfg,\n from,\n target,\n measurement;\n solveKey,\n N,\n tfg,\n setPPEmethod,\n setPPE,\n path,\n skipSolve,\n nullSurplus\n)\n\n\nCalculate the sequential series of convolutions in order as listed by fctLabels, and starting from the value already contained in the first variable. \n\nNotes\n\ntarget must be a variable.\nThe ultimate target variable must be given to allow path discovery through n-ary factors.\nFresh starting point will be used if first element in fctLabels is a unary <:AbstractPrior.\nThis function will not change any values in dfg, and might have slightly less speed performance to meet this requirement.\npass in tfg to get a recoverable result of all convolutions in the chain.\nsetPPE and setPPEmethod can be used to store PPE information in temporary tfg\n\nDevNotes\n\nTODO strong requirement that this function is super efficient on single factor/variable case!\nFIXME must consolidate with accumulateFactorMeans\nTODO solveKey not fully wired up everywhere yet\ntfg gets all the solveKeys inside the source dfg variables\nTODO add a approxConv on PPE option\nConsolidate with accumulateFactorMeans, approxConvBinary\n\nRelated\n\napproxDeconv, findShortestPathDijkstra\n\n\n\n\n\n","category":"function"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"IIF currently uses kernel density estimation to convert discrete samples into a smooth function estimate. 
The sample set can be converted into an on-manifold functional object as follows:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"# create kde object by referencing back the existing memory location pts\nhatX1 = manikde!(ContinuousScalar, pts)","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"The functional object X1 is now ready for other operations such as function evaluation or product computations discussed on another principles page. The ContinuousScalar manifold is just Manifolds.TranslationGroup(1).","category":"page"},{"location":"principles/approxConvDensities/#approxDeconv","page":"Generic Convolutions","title":"approxDeconv","text":"","category":"section"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"Analogous to a 'forward' convolution calculation, we can similarly approximate the inverse:","category":"page"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"approxDeconv","category":"page"},{"location":"principles/approxConvDensities/#IncrementalInference.approxDeconv","page":"Generic Convolutions","title":"IncrementalInference.approxDeconv","text":"approxDeconv(fcto; ...)\napproxDeconv(fcto, ccw; N, measurement, retries)\n\n\nInverse solve of predicted noise value and returns tuple of (newly calculated-predicted, and known measurements) values.\n\nNotes\n\nOnly works for first value in measurement::Tuple at this stage.\n\"measured\" is used as starting point for the \"calculated-predicted\" values solve.\nNot all factor evaluation cases are support yet.\nNOTE only works on .threadid()==1 at present, see #1094\nThis function is still part of the initial implementation and needs a lot of generalization improvements.\n\nDevNotes\n\nTODO Test for various cases with multiple variables.\nTODO make multithread-safe, and able, see #1094\nTODO Test for cases with nullhypo\nFIXME FactorMetadata object for all use-cases, not just empty object.\nTODO resolve #1096 (multihypo)\nTODO Test cases for multihypo.\nTODO figure out if there is a way to consolidate with evalFactor and approxConv?\nbasically how to do deconv for just one sample with unique values (wrt TAF)\nTODO N should not be hardcoded to 100\n\nRelated\n\napproxDeconv, _solveCCWNumeric!\n\n\n\n\n\napproxDeconv(dfg, fctsym; ...)\napproxDeconv(dfg, fctsym, solveKey; retries)\n\n\nGeneralized deconvolution to find the predicted measurement values of the factor fctsym in dfg. 
Inverse solve of predicted noise value and returns tuple of (newly predicted, and known \"measured\" noise) values.\n\nNotes\n\nOpposite operation contained in approxConvBelief.\nFor more notes see solveFactorMeasurements.\n\nRelated\n\napproxConvBelief, deconvSolveKey\n\n\n\n\n\n","category":"function"},{"location":"principles/approxConvDensities/","page":"Generic Convolutions","title":"Generic Convolutions","text":"This feature is not yet as feature rich as the approxConvBelief function, and also requires further work to improve the consistency of the calculation – but none the less exists and is useful in many applications.","category":"page"},{"location":"concepts/parallel_processing/#Parallel-Processing","page":"Parallel Processing","title":"Parallel Processing","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"note: Note\nKeywords: parallel processing, multi-threading, multi-process","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"Julia allows high-performance, parallel processing from the ground up. Depending on the configuration, Caesar.jl can utilize a combination of four styles of multiprocessing: i) separate memory multi-process; ii) shared memory multi-threading; iii) asynchronous shared-memory (forced-atomic) co-routines; and iv) multi-architecture such as JuliaGPU. As of Julia 1.4, the most reliable method of loading all code into all contexts (for multi-processor speedup) is as follows.","category":"page"},{"location":"concepts/parallel_processing/#Multiprocessing","page":"Parallel Processing","title":"Multiprocessing","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"Make sure the environment variable JULIA_NUM_THREADS is set as default or per call and recommended to use 4 as starting point.","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"JULIA_NUM_THREADS=4 julia -O3","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"In addition to multithreading, Caesar.jl utilizes multiprocessing to distribute computation during the inference steps. 
Following standard Julia, more processes can be added as follows:","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"# load the required packages into procid()==1\nusing Flux, RoME, Caesar, RoMEPlotting\n\n# then start more processes\nusing Distributed\naddprocs(8) # with JULIA_NUM_THREADS=4 this yields 4*8=32 possible processing threads across the new worker processes\n\n# now make sure all code is loaded everywhere (for separate memory cases)\n@everywhere using Flux, RoME, Caesar","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"It might also be convenient to warm up some of the Just-In-Time compiling:","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"# solve a few graphs etc., to get the majority of solve code compiled before running a robot.\n[warmUpSolverJIT() for i in 1:3];","category":"page"},{"location":"concepts/parallel_processing/#Start-up-Time","page":"Parallel Processing","title":"Start-up Time","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"The best way to avoid compile time (when not developing) is to use the established Julia \"first time to plot\" approach based on PackageCompiler.jl, and more details are provided at Ahead of Time compiling.","category":"page"},{"location":"concepts/parallel_processing/#Multithreading","page":"Parallel Processing","title":"Multithreading","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"Julia has strong support for shared-memory multithreading. The most sensible breakdown into threaded work is either within each factor calculation or across individual samples of a factor calculation. Either of these cases requires some special considerations.","category":"page"},{"location":"concepts/parallel_processing/#Threading-Within-the-Residual","page":"Parallel Processing","title":"Threading Within the Residual","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"A factor residual function itself can be broken down further into threaded operations. For example, see many of the features available at JuliaSIMD/LoopVectorization.jl. It is recommended to keep memory allocations down to zero, since the solver code will call on the factor sampling and residual functions multiple times in random access. Also keep in mind the interaction between conventional thread pool balancing and the newer PARTR cache-sensitive automated thread scheduling.","category":"page"},{"location":"concepts/parallel_processing/#Threading-Across-Parallel-Samples-[DEPRECATED-–-REFACTORING]","page":"Parallel Processing","title":"Threading Across Parallel Samples [DEPRECATED – REFACTORING]","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"IncrementalInference.jl internally has the capability to span threads across samples in parallel computations during convolution operations. Keep in mind which parts of the residual factor computation use shared memory. 
Likely the best course of action is for the factor definition to pre-allocate Threads.nthreads() many memory blocks for factor in-place operations.","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"To use this feature, IIF must be told that there are no data race concerns with a factor. The current API uses a keyword argument on addFactor!:","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"# NOTE, legacy `threadmodel=MultiThreaded` is being refactored with new `CalcFactor` pattern\naddFactor!(fg, [:x0; :x1], MyFactor(...))","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"warning: Warning\nThe current IIF factor multithreading interface is likely to be reworked/improved in the near future (penciled in for 1H2022).","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"See page Custom Factors for details on how factor computations are represented in code. Regarding threading, consider for example OtherFactor.userdata. The residual calculations from different threads might create a data race on userdata for some volatile internal computation. In that case it is recommended the to instead use Threads.nthreads() and Threads.threadid() to make sure the shared-memory issues are avoided:","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"struct MyThreadSafeFactor{T <: SamplableBelief} <: IIF.AbstractManifoldMinimize\n Z::T\n inplace::Vector{MyInplaceMem}\nend\n\n# helper function\nMyThreadSafeFactor(z) = MyThreadSafeFactor(z, [MyInplaceMem(0) for i in 1:Threads.nthreads()])\n\n# in residual function just use `thr_inplace = cfo.factor.inplace[Threads.threadid()]`","category":"page"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"note: Note\nBeyond the cases discussed above, other features in the IncrementalInference.jl code base (especially regarding the Bayes tree) are already multithreaded.","category":"page"},{"location":"concepts/parallel_processing/#Factor-Caching-(In-place-operations)","page":"Parallel Processing","title":"Factor Caching (In-place operations)","text":"","category":"section"},{"location":"concepts/parallel_processing/","page":"Parallel Processing","title":"Parallel Processing","text":"In-place memory operations for factors can have a significant performance improvement. See the Cache and Stash section for more details.","category":"page"},{"location":"principles/interm_dynpose/#Adding-Velocity-(Preintegration)","page":"Creating DynPose Factor","title":"Adding Velocity (Preintegration)","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"This tutorial describes how a new factor can be developed, beyond the pre-existing implementation in RoME.jl. Factors can accept any number of variable dependencies and allow for a wide class of allowable function calls can be used. 
Our intention is to make it as easy as possible for users to create their own factor types.","category":"page"},{"location":"principles/interm_dynpose/#Example:-Adding-Velocity-to-RoME.Point2","page":"Creating DynPose Factor","title":"Example: Adding Velocity to RoME.Point2","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A smaller example in two dimensions where we wish to estimate the velocity of some target: Consider two variables :x0 with a prior as well as a conditional–-likelihood for short–-to variable :x1. Priors are in the \"global\" reference frame (how ever you choose to define it), while likelihoods are in the \"local\" / \"relative\" frame that only exist between variables.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"(Image: dynpoint2fg)","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"warning: Warning\nText below is outdated (2021Q1) and needs to be updated for changes softtype-->variableType and CalcFactor.","category":"page"},{"location":"principles/interm_dynpose/#Brief-on-Variable-Node-softtypes","page":"Creating DynPose Factor","title":"Brief on Variable Node softtypes","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Variable nodes retain meta data (so called \"soft types\") describing the type of variable. Common VariableNode types are RoME.Point2D, RoME.Pose3D. VariableNode soft types are passed during construction of the factor graph, for example:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"v1 = addVariable!(fg, :x1, Pose2)","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Certain cases require that more information be retained for each VariableNode, and velocity calculations are a clear example where time stamp data across positions is required. ","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Note Larger data can also be stored under the bigdata framework which is discussed here (TBD).","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"If the required VariableNode does not exist, then one can be created, such as adding velocity states with DynPoint2:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"mutable struct DynPoint2 <: IncrementalInference.InferenceVariable\n ut::Int64 # microsecond time\n dims::Int\n DynPoint2(;ut::Int64=0) = new(ut, 4)\nend","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"The dims field is permanently set to 4, i.e. [x, y, dx/dt, dy/dt]. 
The utparameter is for storing the microsecond time stamp for that variable node.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"In order to implement your own factor type outside IncrementalInference you should import the required identifiers, as follows:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"using IncrementalInference\nimport IncrementalInference: getSample","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Note that new factor types can be defined at any time, even after you have started to construct the FactorGraph object.","category":"page"},{"location":"principles/interm_dynpose/#DynPoint2VelocityPrior","page":"Creating DynPose Factor","title":"DynPoint2VelocityPrior","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Work in progress.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"mutable struct DynPoint2VelocityPrior{T} <: IncrementalInference.AbstractPrior where {T <: Distribution}\n z::T\n DynPoint2VelocityPrior{T}() where {T <: Distribution} = new{T}()\n DynPoint2VelocityPrior(z1::T) where {T <: Distribution} = new{T}(z1)\nend\ngetSample(dp2v::DynPoint2VelocityPrior, N::Int=1) = (rand(dp2v.z,N), )","category":"page"},{"location":"principles/interm_dynpose/#DynPoint2DynPoint2-(preintegration)","page":"Creating DynPose Factor","title":"DynPoint2DynPoint2 (preintegration)","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"warning: Warning\n::IIF.FactorMetadata is being refactored and improved. Some of the content below is out of date. See IIF #1025 for details. (1Q2021)","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"The basic idea is that change in position is composed of three components (originating from double integration of Newton's second law):","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"(Image: deltapositionplus) ( eq. 1)","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"DynPoint2DynPoint2 factor is using the above equation to define the difference in position between the two DynPoint2s. The position part stored in DynPoint2DynPoint2 factor corresponds to (Image: deltaposplusonly). 
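Since the equation images do not render here, a reconstruction that is consistent with the residual code further below (hedged; the notation is illustrative) is that the stored measurement z = [\\Delta p; \\Delta v] relates two consecutive states as\n\nx_{k+1} = x_k + \\frac{dx_k}{dt} \\Delta t + \\Delta p, \\qquad \\frac{dx_{k+1}}{dt} = \\frac{dx_k}{dt} + \\Delta v\n\nso the factor only stores the relative increments \\Delta p and \\Delta v between the two DynPoint2 variables. 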
A new multi-variable (so called \"pairwise\") factor between any number of variables is defined with three elements:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Factor type definition that inherits either IncrementalInference.FunctorPairwise or IncrementalInference.FunctorPairwiseMinimize;","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"mutable struct DynPoint2DynPoint2{T} <: IncrementalInference.FunctorPairwise where {T <: Distribution}\n z::T\n DynPoint2DynPoint2{T}() where {T <: Distribution} = new{T}()\n DynPoint2DynPoint2(z1::T) where {T <: Distribution} = new{T}(z1)\nend","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A sampling function with exactly the signature: getSample(dp2dp2::DynPoint2DynPoint2, N::Int=1) and returning a Tuple (legacy reasons);","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"getSample(dp2dp2::DynPoint2DynPoint2, N::Int=1) = (rand(dp2dp2.z,N), )","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A residual or minimization function with exactly the signature described below.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Residual (related to FunctorPairwise) or factor minimization function (related to FunctorPairwiseMinimize) signatures should match this dp2dp2::DynPoint2DynPoint2 example:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"function (dp2dp2::DynPoint2DynPoint2)(\n res::Array{Float64},\n userdata,\n idx::Int,\n meas::Tuple,\n Xs... )::Nothing","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"where Xs can be expanded to the particular number of variable nodes this factor will be associated, and note they are order sensitive at addFactor!(fg, ...) time. The res parameter is a vector of the same dimension defined by the largest of the Xs terms. The userdata value contains the small metadata / userdata portions of information that was introduced to the factor graph at construction time – please consult error(string(fieldnames(userdata))) for details at this time. This is a relatively new feature in the code and likely to be improved. The idx parameter represents a legacy index into the measurement meas[1] and variables Xs to select the desired marginal sample value. Future versions of the code plan to remove the idx parameter entirely. The Xs array of parameter are each of type ::Array{Float64,2} and contain the estimated samples from each of the current best marginal belief estimates of the factor graph variable node. 
","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"function (dp2dp2::DynPoint2DynPoint2)(\n res::Array{Float64},\n userdata,\n idx::Int,\n meas::Tuple,\n Xi::Array{Float64,2},\n Xj::Array{Float64,2} )\n #\n z = meas[1][:,idx]\n xi, xj = Xi[:,idx], Xj[:,idx]\n dt = (userdata.variableuserdata[2].ut - userdata.variableuserdata[1].ut)*1e-6 # roughly the intended use of userdata\n res[1:2] = z[1:2] - (xj[1:2] - (xi[1:2]+dt*xi[3:4]))\n res[3:4] = z[3:4] - (xj[3:4] - xi[3:4])\n nothing\nend","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A brief usage example looks as follows, and further questions about how the preintegration strategy was implemented can be traced through the original issue JuliaRobotics/RoME.jl#60 or the literature associated with this project, or contact for more information.","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"using RoME, Distributions\nfg = initfg()\nv0 = addVariable!(fg, :x0, DynPoint2(ut=0))\n\n# Prior factor as boundary condition\npp0 = DynPoint2VelocityPrior(MvNormal([zeros(2);10*ones(2)], 0.1*eye(4)))\nf0 = addFactor!(fg, [:x0;], pp0)\n\n# conditional likelihood between Dynamic Point2\nv1 = addVariable!(fg, :x1, DynPoint2(ut=1000_000)) # time in microseconds\ndp2dp2 = DynPoint2DynPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))\nf1 = addFactor!(fg, [:x0;:x1], dp2dp2)\n\ninitAll!(fg)\ntree = wipeBuildNewTree!(fg)\ninferOverTree!(fg, tree)\n\nusing KernelDensityEstimate\n@show x0 = getKDEMax(getBelief(fg, :x0))\n# julia> ... = [-0.19441, 0.0187019, 10.0082, 10.0901]\n@show x1 = getKDEMax(getBelief(fg, :x1))\n # julia> ... 
= [19.9072, 19.9765, 10.0418, 10.0797]","category":"page"},{"location":"principles/interm_dynpose/#VelPoint2VelPoint2-(back-differentiation)","page":"Creating DynPose Factor","title":"VelPoint2VelPoint2 (back-differentiation)","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"In case the preintegrated approach is not the first choice, we include VelPoint2VelPoint2 <: IncrementalInference.FunctorPairwiseMinimize as a second likelihood factor example, which may seem more intuitive:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"mutable struct VelPoint2VelPoint2{T} <: IncrementalInference.FunctorPairwiseMinimize where {T <: Distribution}\n z::T\n VelPoint2VelPoint2{T}() where {T <: Distribution} = new{T}()\n VelPoint2VelPoint2(z1::T) where {T <: Distribution} = new{T}(z1)\nend\ngetSample(vp2vp2::VelPoint2VelPoint2, N::Int=1) = (rand(vp2vp2.z,N), )\nfunction (vp2vp2::VelPoint2VelPoint2)(\n res::Array{Float64},\n userdata,\n idx::Int,\n meas::Tuple,\n Xi::Array{Float64,2},\n Xj::Array{Float64,2} )\n #\n z = meas[1][:,idx]\n xi, xj = Xi[:,idx], Xj[:,idx]\n dt = (userdata.variableuserdata[2].ut - userdata.variableuserdata[1].ut)*1e-6 # roughly the intended use of userdata\n dp = (xj[1:2]-xi[1:2])\n dv = (xj[3:4]-xi[3:4])\n res[1] = 0.0\n res[1] += sum((z[1:2] - dp).^2)\n res[1] += sum((z[3:4] - dv).^2)\n res[1] += sum((dp/dt - xi[3:4]).^2) # (dp/dt - 0.5*(xj[3:4]+xi[3:4])) # midpoint integration\n res[1]\nend","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"A similar usage example follows:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"fg = initfg()\n\n# add three point locations\nv0 = addVariable!(fg, :x0, DynPoint2(ut=0))\nv1 = addVariable!(fg, :x1, DynPoint2(ut=1000_000))\nv2 = addVariable!(fg, :x2, DynPoint2(ut=2000_000))\n\n# Prior factor as boundary condition\npp0 = DynPoint2VelocityPrior(MvNormal([zeros(2);10*ones(2)], 0.1*eye(4)))\nf0 = addFactor!(fg, [:x0;], pp0)\n\n# conditional likelihood between Dynamic Point2\ndp2dp2 = VelPoint2VelPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))\nf1 = addFactor!(fg, [:x0;:x1], dp2dp2)\n\n# conditional likelihood between Dynamic Point2\ndp2dp2 = VelPoint2VelPoint2(MvNormal([10*ones(2);zeros(2)], 0.1*eye(4)))\nf2 = addFactor!(fg, [:x1;:x2], dp2dp2)\n\n# Graphs.plot(fg.g)\ninitAll!(fg)\ntree = wipeBuildNewTree!(fg)\ninferOverTree!(fg, tree)\n\n# see the output\n@show x0 = getKDEMax(getBelief(getVariable(fg, :x0)))\n@show x1 = getKDEMax(getBelief(getVariable(fg, :x1)))\n@show x2 = getKDEMax(getBelief(getVariable(fg, :x2)))","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Producing output:","category":"page"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"x0 = getKDEMax(getBelief(getVariable(fg, :x0))) = [0.101503, -0.0273216, 9.86718, 9.91146]\nx1 = getKDEMax(getBelief(getVariable(fg, :x1))) = [10.0087, 9.95139, 10.0622, 10.0195]\nx2 = getKDEMax(getBelief(getVariable(fg, :x2))) = [19.9381, 19.9791, 10.0056, 
9.92442]","category":"page"},{"location":"principles/interm_dynpose/#IncrementalInference.jl-Defining-Factors-(Future-API)","page":"Creating DynPose Factor","title":"IncrementalInference.jl Defining Factors (Future API)","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"We would like to remove the idx indexing from the residual function calls, since that is an unnecessary burden on the user. Instead, the package will use views and SubArray types to simplify the interface. Please contact author for more details (8 June 2018).","category":"page"},{"location":"principles/interm_dynpose/#Contributions","page":"Creating DynPose Factor","title":"Contributions","text":"","category":"section"},{"location":"principles/interm_dynpose/","page":"Creating DynPose Factor","title":"Creating DynPose Factor","text":"Thanks to mc2922 for raising the catalyst issue and conversations that followed from JuliaRobotics/RoME.jl#60.","category":"page"}] }