
# robotics_algorithms


This repository contains pure-Python implementations of essential robotics algorithms.

The main benefits are:

  1. Provide a single source of truth for various algorithms, with clear explanations.
  2. Implement with a clear separation between dynamics, environment, and algorithm, emphasizing that many algorithms, e.g. planning under uncertainty and optimal control, share the same underlying problem formulation, e.g. an MDP.
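To illustrate that separation, here is a minimal sketch of how dynamics, environment, and algorithm can be kept apart. This is illustrative only; the class and function names are hypothetical and do not match this repository's actual API.

```python
class DoubleIntegrator:
    """Dynamics model: state transition only, no task knowledge."""
    def step(self, state, action, dt=0.1):
        pos, vel = state
        vel = vel + action * dt          # semi-implicit Euler integration
        return (pos + vel * dt, vel)

class Navigation1D:
    """Environment: wraps the dynamics and defines goal and reward (an MDP)."""
    def __init__(self, dynamics, goal=1.0):
        self.dynamics, self.goal = dynamics, goal
    def transition(self, state, action):
        next_state = self.dynamics.step(state, action)
        reward = -abs(next_state[0] - self.goal)  # closer to goal is better
        return next_state, reward

def greedy_policy(env, state, actions=(-1.0, 0.0, 1.0)):
    """Algorithm: consumes only the MDP interface, never the dynamics directly."""
    return max(actions, key=lambda a: env.transition(state, a)[1])

env = Navigation1D(DoubleIntegrator())
action = greedy_policy(env, state=(0.0, 0.0))  # accelerates toward the goal
```

Because the policy only sees the `transition` interface, the same algorithm code can be reused with any dynamics model plugged into the environment.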

## Scope

It should include popular and representative algorithms from robot dynamics, state estimation, planning, control and learning.

## Requirement

Python 3.10

## How to use

  • Run `pip install -e .`
  • Run the scripts inside the `examples` folder.

For example, to run A* to find the shortest path between start and goal in a grid world:

```shell
python examples/planning/test_a_star.py
```
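For readers who want to see what such a grid-world A* search does, here is a self-contained sketch on a 4-connected grid. It is independent of this repository's implementation; all names here are illustrative.

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid; grid[r][c] == 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(
                    frontier,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))  # must detour around the obstacle row
```

The Manhattan heuristic is admissible for unit-cost 4-connected grids, so the first time the goal is popped from the priority queue the returned path is optimal.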

## News

AMCL was added in v0.9.0.

## Status

This repository is undergoing significant development. Here is the status checklist.

  • Robot Dynamics
    • Differential drive
    • Cartpole
    • Double Integrator
    • Arm
    • Car
    • Quadrotor
    • Quadruped
  • State Estimation
    • Localization
      • Discrete bayes filter
      • Kalman filter
      • Extended Kalman filter
      • Particle filter (MCL)
      • AMCL
    • SLAM
      • EKF SLAM
      • Fast SLAM
      • Graph SLAM
  • Planning
    • Discrete Planning
      • Dijkstra
      • A-star
    • Motion Planning
      • RRT / RRT-Connect
      • RRT*
      • RRT*-Connect
      • PRM
      • Informed RRT*
      • BIT*
    • MDP
      • Value iteration
      • Policy iteration
      • Policy tree search
      • MCTS
    • POMDP
      • Belief tree search
      • SARSOP
      • DESPOT
  • Control
    • Classical control (PID)
    • LQR
    • MPPI
    • CEM-MPC
  • Imitation learning
  • Reinforcement learning
    • Tabular
      • On-policy MC
      • Off-policy MC
      • On-policy TD (SARSA)
      • Off-policy TD (Q-learning)
    • Function approximation
  • Environments
    • Frozen lake (MDP)
    • Cliff walking (MDP)
    • Windy gridworld (MDP)
    • 1D navigation with double integrator
      • Deterministic and fully-observable
      • Stochastic and partially-observable
    • 2D navigation with omni-directional robot
      • Deterministic and fully-observable
    • 2D navigation with differential drive
      • Deterministic and fully-observable
      • Stochastic and partially-observable
    • 2D localization
    • 2D SLAM
    • Multi-arm bandits (POMDP)
    • Tiger (POMDP)
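Several of the algorithms above (value iteration, policy iteration, tree search) operate on the same MDP formulation. As a flavor of that shared structure, here is a minimal value-iteration sketch on a hypothetical two-state, two-action MDP; it is not this repository's code.

```python
# P[s][a] = list of (probability, next_state, reward) transitions.
# Toy MDP: action 1 tends to reach/keep state 1, which pays reward 1.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}
for _ in range(200):  # repeat the Bellman optimality backup until convergence
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}

# Greedy policy with respect to the converged value function.
policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
```

Staying in state 1 under action 1 yields a discounted return of 1 / (1 - 0.9) = 10, so value iteration converges to V(1) = 10 and the greedy policy picks action 1 in both states.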

## Known issues

  • The EKF gives high localization error in some instances.