
Slam sim with rules #50

Open · wants to merge 1 commit into main

Conversation

@dr-soong (Contributor) commented May 6, 2022

This is the slam_sim branch that uses rules. It was the active branch until very recently, when rules were removed to simplify the code and assist debugging. At its current level of detail, the sim/app didn't really benefit from rules: the work is computationally intensive and not event based (events can be generated, but the code that generates them can just as easily handle the logic for dealing with them).

While this doesn't seem like a great use case for rules, having two running codebases that do the same thing still provides a good platform for making comparisons.

@dr-soong requested a review from daxhaw on May 6, 2022 at 07:09
# Tests

add_executable(test_blob_cache
tests/test_blob_cache.cpp
@daxhaw (Contributor) commented May 9, 2022

Possibly déjà vu: does this CMake file compile? I.e., I don't see a tests directory in the diff, and yet a tests directory is referenced here and below.

// the graph optimization check in a serial group.
on_insert(edges)
{
    printf("1 enter on_insert(edges)\n");
Contributor commented:

nit: if we ever publish this, it should use gaia_log with appropriate trace levels that can be turned on/off just by changing settings in gaia_log.conf for the app logger. I.e., the printf could be gaia_log::app().trace(...).
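A minimal sketch of that suggestion, assuming the rest of the rule body stays the same (gaia_log::app() and trace() are as referenced above; the message text is illustrative):

on_insert(edges)
{
    // Same entry trace as the printf above, but routed through the app logger
    // so the level can be toggled in gaia_log.conf instead of editing code.
    gaia_log::app().trace("enter on_insert(edges)");
}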

{
    printf("1 enter on_insert(edges)\n");
    // New edge created. See if it's time to run new graph optimization.
    if (optimization_required())
Contributor commented:

I realize that this function simply returns true for now, but this is where I hope that rules could be more expressive. I.e., the state needed to "determine if enough data has been collected" would live in the database, so the condition is detected the moment it changes. Instead of checking whether optimization is required, some state would change in the db such that we know optimization is required.
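A hypothetical sketch of what that could look like - the optimization_due column is a made-up field on the existing graphs table, maintained wherever new data is recorded - so the rule fires the moment the state changes instead of polling optimization_required():

on_update(graphs.optimization_due)
{
    // React the moment the database says enough data has been collected.
    if (graphs.optimization_due)
    {
        optimize_graph(graphs);
    }
}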

if (optimization_required())
{
    optimize_graph(edges.dest->vertices.graph->graphs);
    // Whenever new error-correction data is available, regenerate
Contributor commented:

When does error-correction data become available? Is it deterministic/clock-driven or based on multiple factors? I can't tell whether an insert into the edges table means that new error-correction data is available, or whether other variables are involved. If the latter, there is perhaps a case for another rule.

// Whenever new error-correction data is available, regenerate
// maps. This can be triggered in a separate rule or simply
// done here, so do it here.
build_area_map(/destination, /area_map, /observed_area);
Contributor commented:

Did you verify that the generated code calls build_area_map only once? I suppose that if you guarantee that these tables only ever have one row each, it will execute exactly once despite the three nested loops that get generated.
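For reference, a rough illustration (not the actual translator output) of why three anchored table references expand to three nested loops - assuming the usual generated direct-access classes with <table>_t::list() containers - and why single-row tables make the call execute exactly once:

for (auto d : destination_t::list())
{
    for (auto m : area_map_t::list())
    {
        for (auto o : observed_area_t::list())
        {
            // One iteration per combination of rows; with exactly one row
            // in each table, this body runs exactly once.
            build_area_map(d, m, o);
        }
    }
}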

// Build map, including paths, to destination.
build_area_map(/destination, /area_map, /observed_area);
// Start movement toward destination.
g_running = true;
Contributor commented:

Would it make sense to put g_running into a table somewhere and then have rules react to changes in this running state?
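A hypothetical sketch of that idea (the bot_state table and its running column are made-up names, not part of the current schema): a single-row table replaces the g_running global, and a rule reacts whenever the flag changes:

// Hypothetical rule: fires whenever the running flag changes.
on_update(bot_state.running)
{
    if (bot_state.running)
    {
        gaia_log::app().info("Bot started moving");
    }
    else
    {
        gaia_log::app().info("Bot stopped");
    }
}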

printf("3 enter on_insert(vertices)\n");
// If working map will go beyond border of area map, rebuild
// area map.
if (need_to_extend_map(vertices.position->positions, /observed_area))
Contributor commented:

My general feedback would be to attempt to hoist all "decision" code into rules. For example, could the logic behind need_to_extend_map be put into a rule that runs when positions_t is inserted/updated?
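A hypothetical sketch of hoisting that decision into a rule (the x/y fields on positions and the bounds fields on observed_area are assumed names, and observed_area is assumed to hold a single row):

on_insert(positions)
{
    // The bounds check that need_to_extend_map() does imperatively, expressed
    // as a rule that fires whenever a new position row appears.
    if (positions.x < /observed_area.x_min || positions.x > /observed_area.x_max
        || positions.y < /observed_area.y_min || positions.y > /observed_area.y_max)
    {
        // Rebuild the area map, as the current rule does today.
        build_area_map(/destination, /area_map, /observed_area);
    }
}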

// closer than it's supposed to be. Initiate a stop, which will
// in turn create a new observation.
gaia_log::app().info("Collision detected. Stopping bot");
full_stop();
Contributor commented:

This function is only updating g_running. Seems like it might work to have a state table with a running column.
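Under that assumption (the bot_state table sketched above, with a running column), full_stop() would shrink to a single field update via the direct-access writer, with the reaction living in the on_update rule rather than in imperative code - a rough sketch, assuming a single row and an active transaction:

void full_stop()
{
    // Flip the single running flag; the on_update(bot_state.running) rule
    // handles whatever needs to happen next.
    auto state = *bot_state_t::list().begin();
    auto writer = state.writer();
    writer.running = false;
    writer.update_row();
}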


ruleset runtime_ruleset
{
    // Vertices are created at a lower infrastructure level. The
Contributor commented:

Good comments - they are very helpful. It feels like there should be a way to have rules react to incoming sensor data and then fire further rules to determine whether a vertex should be generated. The fact that deciding whether to make an observation may take longer than the interval until the next packet of sensor data seems like a concern independent of whether you are using rules or not. At some point you need to deal with this, either by throwing away older sensor data or by arranging the timing so that this can't happen. This seems like a fairly common scenario that I want to wrap my head around.

We read data, make some complex decisions about that data, and then push out the results. Repeat. This is the classic "control loop" and generally how the indy stuff worked. I really wanted to turn this on its head and be more event based: what about the sensor data caused me to do something different? Maybe the indy scenario is not related to this one (and I never had time to do this ... ), but I had a hunch that an event-based model would be way more efficient. For the vast majority of the time on the track you are not changing course, because the cars around you aren't changing positions. You have to sense them every cycle, but you only have to react in a small minority of situations. Anyway, I am trying to turn the problem on its head a bit and couch it in terms of the decisions I have to make given a stream of sensor data.
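A hypothetical sketch of that chaining (sensor_readings, worth_observing(), and create_vertex() are made-up names): one rule absorbs the raw sensor stream and only creates a vertex when the decision threshold is crossed, so the existing on_insert(vertices) rule - and everything downstream - fires only on the minority of cycles that actually require a reaction:

on_insert(sensor_readings)
{
    // Decision stage: most readings require no action and stop here.
    if (!worth_observing(sensor_readings))
    {
        return;
    }
    // Creating the vertex is the "event"; the existing on_insert(vertices)
    // rule reacts to it without knowing anything about raw sensor packets.
    create_vertex(sensor_readings);
}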
