Data Structures & How to Start
Revision history:
2013-03-01 - Added use case: Experiment with different sensor layouts
2013-03-01 - Added use case: Using other scenarios
2013-03-01 - Using OpenDaVINCI on PandaBoard
2013-03-01 - recommended first steps
2013-02-28 - Page created.
This page gives you a short overview of the available data structures and some hints on how to start using hesperia-light and OpenDaVINCI to develop your algorithms.
The provided development and simulation environment consists of two parts: hesperia-light and OpenDaVINCI. The former is used to simulate vehicle movements and sensor data such as camera, ultrasonic, and infrared sensors. The latter is provided to easily develop and experiment with algorithmic concepts in combination with the simulation environment.
The simulation environment is based on a scenario model encoded in an .SCNX file (cf. folder Scenarios).
You will find all components mentioned below in the folder hesperia-light/binaries/bin.
- supercomponent: This component creates a UDP-multicast session which is used to exchange data (called Container) and provides the configuration data to all running components.
- monitor: This component is a graphical environment to visualize the scenario (.SCNX) file and exchanged Containers.
- camgen: This component is used to provide a virtual camera source.
- irus: This component provides distance data from virtualized ultrasonic and infrared sensors.
- vehicle: This is the core component to simulate the vehicle movements, which in turn influence the virtualized sensor data mentioned above.
You will find the source for all components mentioned below in the folder OpenDaVINCI/. It is suggested to use this middleware, because:
- This middleware can connect to the UDP-multicast session created by hesperia-light and also understands the Container protocol.
- You can easily prototype and experiment with your algorithmic ideas in C++.
- This middleware is portable and runs on the PandaBoard as well. Thus, you can test your algorithms with the simulation environment and run them later on the PandaBoard transparently without changing the code.
The development environment consists of the following components:
- supercomponent: This component creates a UDP-multicast session which is used to exchange data (called Container) and provides the configuration data to all running components.
- cockpit: This component is a graphical environment to visualize exchanged Containers.
- spy: This component is used to print exchanged Containers on the command line.
- recorder: This component records all exchanged Containers into a file for later replay.
- player: This component can replay recorded Containers.
From the msv folder:
- msv/lanedetector: This component is a template for your lane detector code where you can start to implement and experiment with image processing and image feature detection. This component produces the example data structure "SteeringData" (cf. folder msv-data).
- msv/driver: This component is a template for your driver code where you can start to implement and experiment with control algorithms for calculating the required speed and steering wheel angles for controlling the simulated/real vehicle.
- msv/proxy: This component is provided to translate data from the MotorController-hardware-interface to the UDP-multicast-session. This component is only required when you are running your components on the PandaBoard.
Use Case 1: Start simulation environment and inspect the scenario:
From the hesperia-light/binaries/bin folder, run the following components:
- supercomponent --cid=111
- monitor --cid=111
You can double-click on EnvironmentViewer and freely navigate through the scenario.
Use Case 2: Start simulation environment and move vehicle manually:
From the hesperia-light/binaries/bin folder, run the following components:
- supercomponent --cid=111
- monitor --cid=111
- vehicle --cid=111 --freq=10
You can double-click on EnvironmentViewer and change the camera to EgoCar. The camera is now in the vehicle-following mode.
From the OpenDaVINCI/ folder, run the following component:
- cockpit --cid=111
Start the component Controller. Click on the button "NOT sending" to activate the vehicle control. Then click in the lower part of the widget, where you see the textual representation of the data structure "VehicleControl". Now you can use the cursor keys to control the vehicle manually: with cursor up/down, you increase and decrease the desired speed of the car; with cursor left/right, you steer to the left and to the right. In cockpit, you can directly see the values that are sent from cockpit (OpenDaVINCI) to vehicle (hesperia-light).
The steering values are sent in radians; maximum steering wheel angle to the left is ~26° and to the right ~25°.
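For example, since the values are radians, full lock to the left corresponds to roughly 26 · π/180 ≈ 0.45 rad and full lock to the right to roughly 25 · π/180 ≈ 0.44 rad, so the steering values you send should stay within approximately ±0.45 rad.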
Use Case 3: Start simulation environment with virtual camera data and move vehicle manually while running lanedetector:
From the hesperia-light/binaries/bin folder, run the following components:
- supercomponent --cid=111
- monitor --cid=111
- vehicle --cid=111 --freq=10
- camgen --cid=111
You can double-click on EnvironmentViewer and change the camera to EgoCar. The camera is now in the vehicle-following mode.
From the OpenDaVINCI/ folder, run the following component:
- cockpit --cid=111
As in Use Case 2, start the component Controller in cockpit, click "NOT sending" to activate the vehicle control, and control the vehicle manually with the cursor keys (the steering values are in radians, as described above). In addition, run from the OpenDaVINCI/ folder:
- lanedetector --cid=111 --freq=10
You should see a new window showing live-updated images from the virtual camera.
Use Case 4: Start simulation environment with virtual camera data and ultrasonic/infrared data and move vehicle autonomously with lanedetector and driver:
From the hesperia-light/binaries/bin folder, run the following components:
- supercomponent --cid=111
- monitor --cid=111
- vehicle --cid=111 --freq=10
- camgen --cid=111
- irus --cid=111 --freq=10
You can double-click on EnvironmentViewer and change the camera to EgoCar. The camera is now in the vehicle-following mode.
From the OpenDaVINCI/ folder, run the following components:
- lanedetector --cid=111 --freq=10
You should see a new window showing live-updated images from the virtual camera.
- driver --cid=111 --freq=10
You should see the vehicle turning slowly to the right. Now you can start developing and experimenting with your control algorithm design in the source code for Driver. In the source file, you will find code examples showing how to read the distance values from the ultrasonic/infrared sensors, how to read the data sent by lanedetector, and how to read the signals from the PandaBoard's user button.
Use Case 5: Start simulation environment with virtual camera data and ultrasonic/infrared data and move vehicle autonomously with lanedetector and driver and simulate the PandaBoard's user button:
From the hesperia-light/binaries/bin folder, run the following components:
- supercomponent --cid=111
- monitor --cid=111
- vehicle --cid=111 --freq=10
- camgen --cid=111
- irus --cid=111 --freq=10
You can double-click on EnvironmentViewer and change the camera to EgoCar. The camera is now in the vehicle-following mode.
From the OpenDaVINCI/ folder, run the following components:
- lanedetector --cid=111 --freq=10
You should see a new window showing live-updated images from the virtual camera.
- driver --cid=111 --freq=10
As in Use Case 4, you should see the vehicle turning slowly to the right.
- cockpit --cid=111
Start the component Controller. Click on the button "User Button" to generate the same signal that the PandaBoard's user button generates and that is mapped by proxy. In driver, you should see changing values for the button state and for the duration the button was pressed.
To run your code on the PandaBoard, you could pull everything from GitHub onto the PandaBoard and compile it directly there; however, that would take a lot of time because the board is not very powerful.
The recommended way is to use a cross-compiler to create the binaries from the sources on your (more powerful) host computer. Then you simply copy the resulting binaries to the PandaBoard's SD card, log in to the board, and run your code. For copying, it is suggested to use rsync+ssh for your convenience. In the CMakeLists.txt, you'll find specific targets 'push2meili-1' and 'push2meili-2', which you can adapt to your needs.
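These push2meili targets presumably wrap such a copy step; a hypothetical manual equivalent would be something like 'rsync -avz -e ssh <your-binaries-folder>/ <user>@<pandaboard>:<target-folder>/', where user, host, and paths are placeholders that you have to adapt to your setup.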
To run your code on the PandaBoard, you don't need any components from hesperia-light (it is only the simulation environment).
From the OpenDaVINCI/ folder, use the following components:
- supercomponent --cid=111
- lanedetector --cid=111 --freq=10 # Here, you have to experiment with the frequency settings.
- driver --cid=111 --freq=10 # Here, you have to experiment with the frequency settings.
- proxy --cid=111 --freq=20 # Here, you have to experiment with the frequency settings.
The first component creates the UDP multicast session and provides the configuration data to all other components. The second component could be your lanedetector component, which provides the desired steering wheel angles. These desired angles are used by driver, for example, to calculate the speed and steering wheel angle that are sent to vehicle in the simulation or, via proxy, to the real vehicle.
Attention! OpenDaVINCI uses UDP multicast as a transport mechanism and thus, a valid multicast route must always be defined.
To use OpenDaVINCI for inter-application communication on a single host (without other communication partners or without an Ethernet/Wifi-connection), you must have a multicast-enabled network interface. If the computer is already connected to a network (wired or wifi), OpenDaVINCI will simply work because the default route will allow OpenDaVINCI to find the correct network interface for multicast traffic.
If, however, the computer is not connected to any network, you must explicitly enable multicast traffic by adding multicast entries to your system's routing table. You can set up the loopback interface for multicast with the following commands:
sudo ifconfig lo multicast
sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
This must be done whenever the computer is rebooted and not connected to an external network.
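To verify that the route is in place, you can list the kernel routing table (for example with 'route -n') and check for a 224.0.0.0 entry on the lo interface.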
You'll find further scenarios here. To use them in the simulation, you need to stop all running components, edit the file 'configuration' in the same directory where you also have supercomponent, and start all components again.
In the configuration file, you'll change the entry 'global.scenario' to another .scnx file.
To experiment with different sensor layouts, you need to stop all running components, edit the file 'configuration' in the same directory where you also have supercomponent, and start supercomponent, monitor and irus to see an effect.
In the configuration file, you'll change the entries 'irus.sensorN.???'. Here is an example for sensor1:
irus.sensor1.id = 1 # This ID is used in SensorBoardData structure.
irus.sensor1.name = Infrared_Rear # Name of the sensor
irus.sensor1.rotZ = -180 # Rotation of the sensor around the Z-axis in degrees; positive = counterclockwise, negative = clockwise; 0 = pointing forward (12 o'clock), -90 = pointing to the right (3 o'clock), ...
irus.sensor1.translation = (-1.0;0.0;0.0) # Translation (X;Y;Z) w.r.t. vehicle's center
irus.sensor1.angleFOV = 5 # In degrees.
irus.sensor1.distanceFOV = 3 # In meters.
irus.sensor1.clampDistance = 2.9 # Any distances greater than this distance will be ignored and -1 will be returned.
irus.sensor1.showFOV = 1 # Show this sensor's FOV in the visualization (1 = on, 0 = off).
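To illustrate how these settings relate to the reported values, here is a conceptual sketch in plain C++ (not the actual irus implementation) of what a single simulated reading with the sensor configuration above would look like:

// Conceptual sketch only -- not the actual irus code.
double simulatedReading(double actualDistance) {
    const double distanceFOV   = 3.0;   // maximum range of the simulated sensor (m)
    const double clampDistance = 2.9;   // distances greater than this are ignored
    if (actualDistance > distanceFOV || actualDistance > clampDistance) {
        return -1.0;                    // nothing detected within range
    }
    return actualDistance;              // obstacle within range: report its distance
}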
To replay previously recorded data, you can use the tool player, which is part of the OpenDaVINCI distribution. You'll find some examples for recorded video data here. These zip-files contain two files: a .rec and a .rec.mem file; the former one contains all recorded containers that are exchanged between components, and the latter one contains the video dump.
To replay the data, just do the following:
According to the configuration file, player expects the files to be replayed to be named recorder.rec and recorder.rec.mem. Thus, make sure that you rename the extracted files accordingly.
From the OpenDaVINCI/ folder, use the following components:
- supercomponent --cid=111
- cockpit --cid=111 # Start the plugin SharedImageViewer to watch the recorded data.
- player --cid=111
Alternatively, you can also use the plugin player to replay recorded files instead of using the command line tool.
To record data (containers and video data, for example), simply run the tool recorder in parallel to your components:
- supercomponent --cid=111
- recorder --cid=111
Alternatively, you can also access the recorder component from within your code as demonstrated in the component app/2013/proxy/src/Proxy.cpp
The following data structures are exchanged between the components (cf. folder msv-data):
- VehicleData - This data is produced by the vehicle in reality and, accordingly, in the simulation.
- SensorBoardData - This data is produced by the distance sensors in reality and, accordingly, in the simulation.
- UserButtonData - This data is produced by proxy when the user presses the user button.
- IplImage - This data is used in the lanedetector template. You can use algorithms from OpenCV for further image processing.
- VehicleControl - This data structure is used to control the vehicle in the simulation and in reality (proxy then passes it on to the MotorController).
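To summarize which information these structures carry, here is an illustrative sketch in plain C++. The real classes are generated from the definitions in the msv-data folder; the struct and field names below are assumptions for orientation only, not the actual API (VehicleData and IplImage are omitted; see msv-data and OpenCV, respectively):

#include <map>

struct SteeringDataSketch {          // produced by lanedetector
    double desiredSteeringAngle;     // steering suggestion derived from the detected lane
};

struct SensorBoardDataSketch {       // produced by irus (simulation) or proxy (real sensors)
    std::map<int, double> distances; // sensor id (cf. irus.sensorN.id) -> distance in meters, -1 = nothing detected
};

struct UserButtonDataSketch {        // produced by proxy when the user button is pressed
    bool pressed;                    // current button state
    double duration;                 // how long the button was pressed
};

struct VehicleControlSketch {        // consumed by vehicle (simulation) or by proxy/MotorController (real car)
    double speed;                    // desired speed
    double steeringWheelAngle;       // desired steering wheel angle in radians (roughly +/-0.45)
};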
Recommended first steps:
- Make yourself familiar with the manual vehicle control so that you get an idea of the values you need to send through the data structure 'VehicleControl' to accelerate and steer the vehicle.
- Identify a correlation between the values you need to send to the vehicle to follow a straight road/a curve and the 'input features' you can derive from the camera.
- Develop a concept for processing the image data to extract the required 'input features'.
- In the source code of Driver.cpp, you'll find template code showing how to access the most recently received data from the various input sources. Make sure you understand the data structures: what the individual data fields actually mean, how you can access the data, and how the data changes over time (e.g. when you are passing obstacles, pressing the user button, and so on).
- You should start to develop a structure for Driver; a recommended way to do so is to develop an appropriate state machine that handles the input data over time, as outlined in one of the lectures (see the sketch at the end of this page).
- Make sure that you consider user input from UserButtonData (e.g. status of the button over time), distance data, and the desired steering angle from lanedetector.
- Once you have everything in place, you should think about deriving a steering/accelerating control algorithm (fundamentals, control theory).
- Run your lanedetector and driver in the simulation environment and experiment with their respective parameters until you are happy with your setup.
- Integrate your code with the PandaBoard and run it in reality. Identify weird and unexpected behavior, discuss possible reasons with your teammates, change some aspects in your code, run it again in the simulation, and start over with the hardware integration.
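As a starting point for the state-machine structure mentioned above, here is a minimal sketch in plain C++. State names, inputs, and thresholds are illustrative assumptions only; reading the actual Containers and sending VehicleControl is done via the template code in msv/driver:

// Minimal sketch of a possible Driver state machine -- not the template's actual code.
enum DriverState { WAITING_FOR_BUTTON, LANE_FOLLOWING, OBSTACLE_HANDLING };

struct DriverInputs {
    bool buttonPressed;          // derived from UserButtonData
    double frontDistance;        // derived from SensorBoardData, -1 = nothing detected
    double desiredSteeringAngle; // derived from lanedetector's SteeringData, in radians
};

struct DriverOutputs {
    double speed;                // desired speed to put into VehicleControl
    double steeringWheelAngle;   // desired steering wheel angle in radians
};

// One step of the state machine, called once per cycle of driver.
DriverOutputs step(DriverState &state, const DriverInputs &in) {
    DriverOutputs out;
    out.speed = 0.0;
    out.steeringWheelAngle = 0.0;

    switch (state) {
        case WAITING_FOR_BUTTON:
            if (in.buttonPressed) {
                state = LANE_FOLLOWING;                       // user button starts the run
            }
            break;
        case LANE_FOLLOWING:
            out.speed = 1.0;                                  // example cruising speed
            out.steeringWheelAngle = in.desiredSteeringAngle; // follow the lane detector
            if (in.frontDistance > 0.0 && in.frontDistance < 1.0) {
                state = OBSTACLE_HANDLING;                    // something close ahead of the car
            }
            break;
        case OBSTACLE_HANDLING:
            // Placeholder: stop until the road ahead is free again; replace this
            // with your own avoidance/overtaking behavior.
            if (in.frontDistance < 0.0 || in.frontDistance >= 1.0) {
                state = LANE_FOLLOWING;
            }
            break;
    }
    return out;
}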