Jekyll · 2023-11-14T23:17:01+00:00 · https://fjp.at/feed.xml · fjp.github.io · Things I learn along the way while working on self driving car technology. · Franz Pucher

Probabilistic Robotics Overview · 2021-08-18T13:00:42+00:00 · https://fjp.at/posts/probabilistic-robotics

<p>The following sections summarize the Grid-based FastSLAM algorithm, which is one instance of FastSLAM. This algorithm estimates the trajectory of a mobile robot while simultaneously creating a grid map of the environment.
Grid-based FastSLAM is a combination of a particle filter such as Adaptive <a href="/posts/localization/mcl/">Monte Carlo Localization</a> (amcl) and a mapping algorithm such as occupancy grid mapping.</p>
<h2 id="basics">Basics</h2>
<h3 id="introduction">Introduction</h3>
<h3 id="recursive-state-estimation">Recursive State Estimation</h3>
<h4 id="robot-environment-interaction">Robot Environment Interaction</h4>
<h5 id="state">State</h5>
<p>Environments are characterized by state. State that changes over time is called dynamic state, e.g., moving people or other vehicles.
Static state is non-changing state, such as the location of walls in (most) buildings. The state also includes variables regarding the robot itself, such as its pose, velocity, whether or not its sensors are functioning correctly and so on. State is denoted by $x$ and the state at time $t$ by $x_t$. Typical state variables include:</p>
<ul>
<li>Robot <strong>pose</strong></li>
<li>In robot manipulation, the pose includes variables for the <strong>configuration of the robot’s actuators</strong>, e.g., joint angles. The degrees of freedom are related to the <strong>kinematic state</strong> of a robot.</li>
<li>Robot <strong>velocity</strong> and <strong>velocities of its joints</strong> are commonly referred to as <strong>dynamic state</strong>.</li>
<li><strong>Locations and features of surrounding objects</strong> in the environment are also state variables, e.g., trees, walls. In some problems, objects will assume the form of <strong>landmarks</strong>, which are distinct, stationary features of the environment that can be recognized reliably.</li>
<li><strong>Location and velocity of moving objects</strong> and people are also potential state variables.</li>
<li>Broken sensors or level of battery can be state variables.</li>
</ul>
<h5 id="environment-interaction">Environment Interaction</h5>
<h5 id="probabilistic-generative-laws">Probabilistic Generative Laws</h5>
<h5 id="belief-distributions">Belief Distributions</h5>
<h4 id="bayes-filter">Bayes Filter</h4>
<ul>
<li>Bayes Filter Algorithm</li>
<li>Markov Assumption</li>
</ul>
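<p>For a discrete state space, the Bayes filter admits a compact sketch. The following is a hypothetical Python example (a door that is either open or closed, with made-up motion and measurement probabilities), not code from the book:</p>

```python
# Hypothetical discrete Bayes filter: a door that is either "open" or
# "closed". The motion and measurement probabilities are made up.

def bayes_filter(bel, u, z, motion_model, sensor_model):
    """One prediction/correction cycle over a discrete state space."""
    states = list(bel)
    # Prediction step: incorporate the control u.
    bel_bar = {
        x: sum(motion_model(x, u, x_prev) * bel[x_prev] for x_prev in states)
        for x in states
    }
    # Correction step: incorporate the measurement z and normalize.
    unnormalized = {x: sensor_model(z, x) * bel_bar[x] for x in states}
    eta = sum(unnormalized.values())
    return {x: p / eta for x, p in unnormalized.items()}

def motion_model(x, u, x_prev):
    # Pushing opens a closed door with probability 0.8; an open door stays open.
    if u == "push":
        if x_prev == "closed":
            return 0.8 if x == "open" else 0.2
        return 1.0 if x == "open" else 0.0
    return 1.0 if x == x_prev else 0.0  # "do_nothing" keeps the state

def sensor_model(z, x):
    # The sensor reports the true door state with probability 0.6.
    return 0.6 if z == x else 0.4

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "push", "open", motion_model, sensor_model)
print(bel["open"])  # roughly 0.93: pushing and sensing "open" raises the belief
```

Note how the belief stays normalized after every cycle; only the relative likelihoods of the states shift with each control and measurement.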
<h3 id="gaussian-filters">Gaussian Filters</h3>
<h3 id="nonparametric-filters">Nonparametric Filters</h3>
<h3 id="robot-motion">Robot Motion</h3>
<h3 id="robot-perception">Robot Perception</h3>
<h2 id="localization">Localization</h2>
<h3 id="mobile-robot-localization-markov-and-gaussian">Mobile Robot Localization: Markov and Gaussian</h3>
<h3 id="mobile-robot-localization-grid-and-monte-carlo">Mobile Robot Localization: Grid And Monte Carlo</h3>
<h2 id="mapping">Mapping</h2>
<h3 id="occupancy-grid-mapping">Occupancy Grid Mapping</h3>
<h3 id="simultaneous-localization-and-mapping">Simultaneous Localization and Mapping</h3>
<h3 id="the-graphslam-algorithm">The GraphSLAM Algorithm</h3>
<h3 id="the-sparse-extended-information-filter">The Sparse Extended Information Filter</h3>
<h3 id="the-fastslam-algorithm">The FastSLAM Algorithm</h3>
<h2 id="planning-and-control">Planning and Control</h2>
<h3 id="markov-decision-processes">Markov Decision Processes</h3>
<h3 id="partially-observable-markov-decision-processes">Partially Observable Markov Decision Processes</h3>
<h3 id="approximate-pomdp-techniques">Approximate POMDP Techniques</h3>
<h3 id="exploration">Exploration</h3>
<pre id="read-1" style="display:none;">
\begin{algorithm}
\caption{Grid-based FastSLAM}
\begin{algorithmic}
\PROCEDURE{FastSLAM}{$X_{t-1}, u_t, z_t$}
\STATE $\bar{X}_t = X_t = \emptyset$
\FOR{$k = 1$ \TO $M$}
\STATE $x_t^{[k]} = $ \CALL{MotionUpdate}{$u_t, x_{t-1}^{[k]}$}
\STATE $w_t^{[k]} = $ \CALL{SensorUpdate}{$z_t, x_{t}^{[k]}$}
\STATE $m_t^{[k]} = $ \CALL{UpdateOccupancyGrid}{$z_t, x_{t}^{[k]}, m_{t-1}^{[k]}$}
\STATE $\bar{X}_t = \bar{X}_t + \left < x_{t}^{[k]}, w_{t}^{[k]} \right >$
\ENDFOR
\FOR{$k = 1$ \TO $M$}
\STATE draw $i$ with probability $w_t^{[i]}$
\STATE add $\left < x_t^{[i]}, m_t^{[i]} \right >$ \TO $X_t$
\ENDFOR
\RETURN $X_t$
\ENDPROCEDURE
\end{algorithmic}
\end{algorithm}
</pre>
<div class="post-pseudocode" id="goal-1"></div>
<script type="text/javascript">
var code = document.getElementById("read-1").textContent;
var parentEl = document.getElementById("goal-1");
var options = {
lineNumber: true
};
pseudocode.render(code, parentEl, options);
</script>
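<p>The pseudocode above can be sketched in Python as follows. This is an illustrative skeleton, not a working SLAM implementation; the three model functions (motion update, sensor update, occupancy grid update) are placeholders that a real implementation would have to supply:</p>

```python
import random

# Illustrative skeleton of the Grid-based FastSLAM loop: sample each
# particle's pose, weight it, update its map, then resample.

def fastslam(particles, u, z, motion_update, sensor_update, update_occupancy_grid):
    """particles is a list of (pose, grid) tuples; returns the resampled set."""
    poses, weights, grids = [], [], []
    for pose, grid in particles:
        new_pose = motion_update(u, pose)    # sample from the motion model
        weight = sensor_update(z, new_pose)  # importance weight
        new_grid = update_occupancy_grid(z, new_pose, grid)
        poses.append(new_pose)
        weights.append(weight)
        grids.append(new_grid)
    # Resampling: draw M indices with probability proportional to the weights.
    indices = random.choices(range(len(particles)), weights=weights, k=len(particles))
    return [(poses[i], grids[i]) for i in indices]

# Trivial stand-in models just to exercise the control flow.
new_particles = fastslam(
    [(0.0, "map")] * 3, u=1.0, z=None,
    motion_update=lambda u, x: x + u,
    sensor_update=lambda z, x: 1.0,
    update_occupancy_grid=lambda z, x, m: m,
)
```

The key property of FastSLAM is visible in the data layout: every particle carries its own map, so the resampling step duplicates or discards pose and map together.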
<h2 id="links">Links</h2>
<h2 id="reference">Reference</h2>

Franz Pucher · Overview of algorithms in probabilistic robotics.

Vector Robot · 2020-09-18T17:31:41+00:00 · https://fjp.at/posts/vector

<p>After Digital Dream Labs acquired the Vector robot from Anki, the following steps had to be done to get Vector back to a fully functioning state.</p>
<p class="notice">Note that I backed the Kickstarter Campaign: <a href="https://www.kickstarter.com/projects/digitaldreamlabs/vector-unleashed?">Vector Unleashed</a>.
If you didn’t then you will probably have to buy a license to get all the features working and the latest updates.</p>
<p>For me, the following steps were required:</p>
<ol>
<li><a href="https://support.digitaldreamlabs.com/article/112-how-to-reset-erase-and-restore-vector#h_542474264551539948167887">Erase User Data</a></li>
<li>Reset Vector by bringing it into <a href="https://support.digitaldreamlabs.com/article/112-how-to-reset-erase-and-restore-vector#h_279886715391539948161727">Recovery Mode</a></li>
</ol>
<p>I was then able to restore Vector using the iOS app, following its instructions.</p>
<p class="notice">Also make sure to use the correct username (email) and password. Don’t use the forum password or the Digital Dream Labs store account credentials.
Instead, use the password set in your iOS app or the <code class="language-plaintext highlighter-rouge">vector-web-setup</code> program. In case you forgot the password, it can be reset via email from inside the app.</p>
<p>Another possible way to connect to Vector from a computer running Chrome is to use <a href="https://github.com/digital-dream-labs/vector-web-setup"><code class="language-plaintext highlighter-rouge">vector-web-setup</code></a>.</p>
<p>To enable Bluetooth in Chrome or Chromium, open the following URL:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>chrome://flags/#enable-experimental-web-platform-features
</code></pre></div></div>
<p>Then enable <code class="language-plaintext highlighter-rouge">Experimental Web Platform features</code> and restart the browser.
To use <code class="language-plaintext highlighter-rouge">vector-web-setup</code>, install Node.js on Ubuntu 20.04:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">sudo snap install node --classic --channel=14
sudo snap refresh node --classic --channel=14
</span></code></pre></div></div>
<p>If you run into EACCES permission errors when installing npm packages globally, see <a href="https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally#manually-change-npms-default-directory">the npm docs on manually changing npm’s default directory</a>.</p>
<p>After getting Vector working again together with the App you can continue to work with the Anki Vector SDK.</p>
<h2 id="version">Version</h2>
<p>You can check what version of software Vector is on by opening the Vector Robot app,
then tapping the Settings icon on the top right, and clicking “Updates”.
For members and subscribers, he should be on software version v1.7.0.3412. Nonmembers will be on v1.6.</p>
<p>Another way to check the version is to follow these steps:</p>
<ol>
<li>Place Vector on his charger and plug the charger in</li>
<li>If Vector is booting up, wait for the “V” to finish</li>
<li>Double click Vector’s Back Button</li>
<li>Raise and lower Vector’s lift</li>
<li>Read the displayed information on Vector’s face</li>
<li>Select Exit by turning Vector’s treads to move through selections (backwards to go up, forwards to go down)</li>
<li>Confirm the selection by raising and lowering Vector’s lift and wait for Vector to reboot</li>
</ol>
<h2 id="setup-anki-vector-sdk">Setup Anki Vector SDK</h2>
<ul>
<li>You have completed the Installation steps, found here: <a href="https://developer.anki.com/vector/docs/initial.html#initial">Initial Setup</a></li>
<li>You have downloaded and extracted the example programs, found in the <a href="https://github.com/anki/vector-python-sdk/tree/master/examples"><code class="language-plaintext highlighter-rouge">vector-python-sdk</code> GitHub repository</a></li>
<li>The Vector companion app is not currently connected to Vector.</li>
<li>Vector is connected to the same network as your computer.</li>
<li>You can see Vector’s eyes on his screen.</li>
</ul>
<p>To test if everything is working run the <code class="language-plaintext highlighter-rouge">vector-python-sdk/examples/tutorials/01_hello_world.py</code>:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">$</span><span class="w"> </span>python3 01_hello_world.py
<span class="go">or
</span><span class="gp">$</span><span class="w"> </span>./01_hello_world.py
</code></pre></div></div>
<p>If everything is working then Vector will say <code class="language-plaintext highlighter-rouge">Hello World</code> :blush:</p>
<h2 id="remote-control-vector">Remote Control Vector</h2>
<p>To remotely control Vector and see what he is seeing, run the <code class="language-plaintext highlighter-rouge">remote_control.py</code> script in the <code class="language-plaintext highlighter-rouge">vector-python-sdk/examples/apps/remote_control</code> directory.</p>
<figure>
<a href="/assets/vector/remote_control_vector.png"><img src="/assets/vector/remote_control_vector.png" /></a>
<figcaption>Remote Control for Vector.</figcaption>
</figure>
<h2 id="working-interactively-with-vector">Working Interactively with Vector</h2>
<p>To work interactively with Vector, install the <code class="language-plaintext highlighter-rouge">ipython</code> package in your Python virtual environment:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">pip3 install ipython
</span><span class="gp">#</span><span class="w"> </span>Run it with:
<span class="go">ipython
</span></code></pre></div></div>
<p>All modules can be found in the <a href="https://developer.anki.com/vector/docs/api.html">documentation</a>.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import anki_vector
</code></pre></div></div>
<p>References:</p>
<ul>
<li><a href="https://randym32.github.io/Vector-TRM.pdf">Vector Book</a></li>
<li><a href="https://support.digitaldreamlabs.com/category/16-troubleshooting">Technical Troubleshooting</a>
<ul>
<li><a href="https://support.digitaldreamlabs.com/article/112-how-to-reset-erase-and-restore-vector#h_279886715391539948161727">How to use recovery mode</a></li>
<li><a href="https://support.digitaldreamlabs.com/article/54-vector-software-update">How do I update Vector’s Software</a></li>
</ul>
</li>
<li><a href="https://forums.anki.com/">Anki Forums</a></li>
<li><a href="https://www.facebook.com/digitaldreamlabs/">Digital Dream Labs Facebook Group</a></li>
</ul>

Franz Pucher · Regaining access to the Vector robot and working with the Vector Python SDK.

Vim Setup and Basics · 2020-06-24T11:50:41+00:00 · https://fjp.at/editors/vim%20setup/vim

<p>Describes how to set up Vim using a popular vimrc and explains the basics.</p>
<h2 id="search-and-replace">Search and Replace</h2>
<ul>
<li><code class="language-plaintext highlighter-rouge">/pattern</code> Search down for pattern, <code class="language-plaintext highlighter-rouge">?pattern</code> Search up</li>
<li><code class="language-plaintext highlighter-rouge">n</code> / <code class="language-plaintext highlighter-rouge">N</code> Repeat the search in the same / opposite direction</li>
<li><code class="language-plaintext highlighter-rouge">*</code> / <code class="language-plaintext highlighter-rouge">#</code> Find the next / previous occurrence of the word under the cursor</li>
<li><code class="language-plaintext highlighter-rouge">g*</code> / <code class="language-plaintext highlighter-rouge">g#</code> Find the next / previous partial match of the word under the cursor</li>
</ul>
<p>Delete from cursor to the end of a search:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">d/pattern/e&lt;Enter&gt;</span><span class="w">
</span></code></pre></div></div>
<p>Yank (copy) from cursor to beginning of previous “pattern”:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">y?pattern&lt;Enter&gt;</span><span class="w">
</span></code></pre></div></div>
<p>Search and replace:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">:%s/search/replace/
</span></code></pre></div></div>
<p>The previous command searches for and replaces only the first occurrence of the pattern in each line of the current buffer (<code class="language-plaintext highlighter-rouge">%</code>).
To replace all occurrences in each line (not just the first), use:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">:%s/search/replace/g
</span></code></pre></div></div>
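<p>The effect of the <code class="language-plaintext highlighter-rouge">g</code> flag can be illustrated outside Vim. In this small Python sketch, substituting without <code class="language-plaintext highlighter-rouge">g</code> corresponds to replacing only the first match per line, while <code class="language-plaintext highlighter-rouge">g</code> replaces every match:</p>

```python
import re

text = "aaa\naba"

# Like :%s/a/b/  -- only the first occurrence in each line is replaced.
first_only = "\n".join(re.sub("a", "b", line, count=1) for line in text.splitlines())

# Like :%s/a/b/g -- every occurrence in every line is replaced.
every = re.sub("a", "b", text)

print(first_only)  # baa
                   # bba
print(every)       # bbb
                   # bbb
```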
<p>Other useful search flags:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">:[range]s[ubstitute]/{pattern}/{string}/[flags] [count]
</span></code></pre></div></div>
<ul>
<li><code class="language-plaintext highlighter-rouge">c</code> Confirm or skip each match</li>
<li><code class="language-plaintext highlighter-rouge">i</code> Ignore case</li>
<li><code class="language-plaintext highlighter-rouge">I</code> Case-sensitive</li>
<li><code class="language-plaintext highlighter-rouge">n</code> Show number of matches (non-destructive)</li>
<li><code class="language-plaintext highlighter-rouge">p</code> Print matching lines</li>
</ul>
<p>Use scopes for search to act on either the current line or the entire file:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">:%s/a/b/
</span></code></pre></div></div>
<p>A leading percent sign (which denotes the current buffer) applies the substitution to all lines in the current file.</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">:s/a/b/
</span></code></pre></div></div>
<p>Omit the percent to substitute only in the current line.</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">:.,`a s/a/b/
</span></code></pre></div></div>
<p>Complicated ranges are possible. This one substitutes from the cursor (<code class="language-plaintext highlighter-rouge">.</code>) to mark <code class="language-plaintext highlighter-rouge">a</code>. Ranges work on lines, not characters.
Other ways to specify a line:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">.</code> Current line</li>
<li><code class="language-plaintext highlighter-rouge">+5</code> Five lines down</li>
<li><code class="language-plaintext highlighter-rouge">-3</code> Three lines up</li>
<li><code class="language-plaintext highlighter-rouge">1</code> Line one of buffer</li>
<li><code class="language-plaintext highlighter-rouge">$</code> Last line of buffer</li>
<li><code class="language-plaintext highlighter-rouge">%</code> All lines in buffer</li>
<li><code class="language-plaintext highlighter-rouge">'t</code> Position mark t</li>
<li><code class="language-plaintext highlighter-rouge">/pattern/</code> Next line where pattern matches (also try <code class="language-plaintext highlighter-rouge">?pattern?</code>)</li>
</ul>
<p>To see the range, use visual mode <code class="language-plaintext highlighter-rouge">v</code>. Use <code class="language-plaintext highlighter-rouge">o</code> to reposition the cursor to the other end of the region.
Use the range specifier <code class="language-plaintext highlighter-rouge">:'<,'> s/a/b/</code> to search in the visually selected region.</p>
<p>You can use global command to execute a command on all lines that match a pattern.</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">:[range]g[lobal]/{pattern}/[cmd]
</span></code></pre></div></div>
<ul>
<li><code class="language-plaintext highlighter-rouge">#</code> Show matches with line numbers</li>
<li><code class="language-plaintext highlighter-rouge">d</code> Delete matching lines</li>
<li><code class="language-plaintext highlighter-rouge">y</code> Yank matching lines</li>
<li><code class="language-plaintext highlighter-rouge">normal {command}</code> Execute an extended sequence</li>
</ul>
<p>Show lines and line numbers where the pattern occurs:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">:g/pattern/#</span><span class="w">
</span></code></pre></div></div>
<p>Delete blank lines:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">:g/^$</span>/d
</code></pre></div></div>
<p>Yank the line after each matching line:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">:g/pattern/+y
</span></code></pre></div></div>

Franz Pucher · Describes how to set up Vim using a popular vimrc and explains the basics.

Docker Basics · 2020-06-19T17:31:41+00:00 · https://fjp.at/posts/tooling/docker-basics

<h2 id="docker-basics">Docker Basics</h2>
<p>Basic Dockerfile:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>FROM alpine:latest
</code></pre></div></div>
<h2 id="basic-workflow">Basic Workflow</h2>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=my-project-tf2.3-gpu
export IMAGE_TAG=latest
export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME:$IMAGE_TAG
# Do other stuff such as building source distributions
python venv/src/package setup.py sdist --dist-dir packages
# Build the image using the Dockerfile
docker build -f Dockerfile --tag $IMAGE_URI ./
</code></pre></div></div>
<p>Then source the script to create source distributions from local dependencies in a local folder named <code class="language-plaintext highlighter-rouge">packages</code> and build the image.</p>
<p>Behind the scenes: sourcing this script will export environment variables such as <code class="language-plaintext highlighter-rouge">$IMAGE_URI</code>.
With this environment variable set the image is built with the command <code class="language-plaintext highlighter-rouge">docker build -f Dockerfile -t $IMAGE_URI ./</code>.
This will send the build context to the docker engine (everything in this repository except what’s inside <code class="language-plaintext highlighter-rouge">.dockerignore</code>) and
start building the image using the instructions inside the <code class="language-plaintext highlighter-rouge">Dockerfile</code>.</p>
<p>To run the container’s entry point and thereby start executing a main script execute the command with arguments to the dataset directory, whole-model or retraining and which pre-trained model to use:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">docker container run --name mycontainer --rm -it --privileged -p 6006:6006 $</span>IMAGE_URI ARGS
</code></pre></div></div>
<p>Behind the scenes: this command runs a new container named <code class="language-plaintext highlighter-rouge">mycontainer</code> in interactive terminal mode (<code class="language-plaintext highlighter-rouge">-it</code>) with extended privileges, and removes it automatically once it exits (<code class="language-plaintext highlighter-rouge">--rm</code>).
To view web interfaces (e.g., Tensorboard during a training run), port 6006 of the container is exposed to the host on the same port.</p>
<h2 id="debug-the-container">Debug the container</h2>
<p>To debug the scripts inside the container (especially the main script with different command line arguments) run one of the following:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">docker run --rm --name debugmycontainer -it --entrypoint "bash" $</span>IMAGE_URI
<span class="go">
</span><span class="gp">docker run --rm --name debugmycontainer -it \
--entrypoint "bash" \
-e GOOGLE_APPLICATION_CREDENTIALS=/root/service-account.json \
-v $</span>GOOGLE_APPLICATION_CREDENTIALS:/root/service-account.json:ro <span class="se">\</span>
<span class="nv">$IMAGE_URI</span>
</code></pre></div></div>
<p>This will overwrite the default <code class="language-plaintext highlighter-rouge">ENTRYPOINT</code> (which would execute the main script otherwise) to execute a bash shell in a container named <code class="language-plaintext highlighter-rouge">debugmycontainer</code>.</p>
<p>When running the container, you will find yourself inside the <code class="language-plaintext highlighter-rouge">/app</code> directory, which is the path to the workspace of this docker container.
From there you can work as you would in a Linux terminal: use <code class="language-plaintext highlighter-rouge">cd</code> or execute scripts.</p>
<h2 id="test-dockerignore">Test .dockerignore</h2>
<p>To test which files will be used for the build context, use the following build command
(<a href="https://stackoverflow.com/questions/38946683/how-to-test-dockerignore-file">source</a>):</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">docker image build -t build-context -f - . <<EOF
FROM busybox
COPY . /build-context
WORKDIR /build-context
CMD find .
EOF
</span></code></pre></div></div>
<p>This will create an image with the current folder’s build context.</p>
<p>Once created, run the container and inspect the contents of the <code class="language-plaintext highlighter-rouge">/build-context</code> directory which includes everything not excluded by the <code class="language-plaintext highlighter-rouge">.dockerignore</code> file.
The following will run the default find command, specified in the image:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">docker container run --rm build-context
</span></code></pre></div></div>
<p>Another option is to inspect it from a shell:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">docker container run --rm -it build-context /bin/sh
</span></code></pre></div></div>
<p>You can then cleanup with:</p>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="go">docker image rm build-context
</span></code></pre></div></div>
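<p>Conceptually, the <code class="language-plaintext highlighter-rouge">.dockerignore</code> file filters which files become part of the build context. The following hypothetical Python sketch mimics that filtering with glob patterns; real <code class="language-plaintext highlighter-rouge">.dockerignore</code> matching follows Go's <code class="language-plaintext highlighter-rouge">filepath.Match</code> per path element and also supports <code class="language-plaintext highlighter-rouge">**</code> and <code class="language-plaintext highlighter-rouge">!</code> exception rules, which this sketch ignores:</p>

```python
from fnmatch import fnmatch

# Hypothetical approximation of how .dockerignore trims the build context.

def build_context(paths, ignore_patterns):
    """Return the paths that would be sent to the Docker engine."""
    def ignored(path):
        # A pattern excludes the path itself and everything beneath it.
        return any(
            fnmatch(path, pattern) or fnmatch(path, pattern + "/*")
            for pattern in ignore_patterns
        )
    return [p for p in paths if not ignored(p)]

paths = ["Dockerfile", "main.py", "packages/pkg.tar.gz", ".git/config", "venv/lib/foo.py"]
context = build_context(paths, [".git", "venv"])
print(context)  # ['Dockerfile', 'main.py', 'packages/pkg.tar.gz']
```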
<h2 id="essential-docker-commands">Essential Docker commands:</h2>
<div class="language-console highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gp">#</span><span class="w"> </span>run commands
<span class="go">docker container run
docker run
</span></code></pre></div></div>
<h2 id="build-in-the-cloud">Build in the Cloud</h2>
<p>The following command can be used to build the image in the cloud, see the <a href="https://cloud.google.com/sdk/gcloud/reference/builds/submit">docs</a> for more options.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gcloud builds submit --tag $IMAGE_URI
</code></pre></div></div>
<p>Make sure to keep a reasonable <code class="language-plaintext highlighter-rouge">.gitignore</code> because this command will create a <code class="language-plaintext highlighter-rouge">.gcloudignore</code> with the content of the <code class="language-plaintext highlighter-rouge">.gitignore</code> if no <code class="language-plaintext highlighter-rouge">.gcloudignore</code> exists.</p>

Franz Pucher · Docker container basics.

ROS Control, An Overview · 2020-03-13T17:31:41+00:00 · https://fjp.at/posts/ros/ros-controls

<p>ROS Control combines a set of packages that include controller interfaces, controller managers, transmissions and hardware interfaces. The ROS control framework is used to implement and manage robot controllers for real robots and in simulation within Gazebo.</p>
<h2 id="big-picture-and-goals">Big Picture and Goals</h2>
<p>Let’s start with a typical scenario: you have a robot, or even more general, a piece of hardware, and you would
like to run an application that leverages ROS specific libraries like <a href="https://moveit.ros.org/">MoveIt!</a> or the <a href="http://wiki.ros.org/navigation">navigation stack</a>.
All these libraries are well tested and you don’t want to rewrite them. On the other hand you probably have a
computer that is running a simulator like <a href="http://gazebosim.org/">Gazebo</a> to simulate your robot. To make ends meet we can make use of the <a href="http://wiki.ros.org/ros_control">ROS control</a> project.</p>
<figure>
<a href="/assets/ros/ros-control/ros-control-overview.png"><img src="/assets/ros/ros-control/ros-control-overview.png" /></a>
<figcaption>Overview of ROS control project (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>ROS control helps you write controllers, including real-time constraints, transmissions and joint limits.
It provides tools for modeling the robot in software, represented by the robot hardware interface (<code class="language-plaintext highlighter-rouge">hardware_interface::RobotHW</code>), and comes
with ready to use out of the box controllers, thanks to a common interface (<a href="http://docs.ros.org/melodic/api/controller_interface/html/c++/classcontroller__interface_1_1ControllerBase.html"><code class="language-plaintext highlighter-rouge">controller_interface::ControllerBase</code></a>) for different applications which then talk to third-party tools.
The important aspect to highlight is that the hardware access to the robot is decoupled, so you don’t expose how you talk to your robot. This means that controllers and your robot hardware structure are decoupled, so you can develop them independently and in isolation, which enables you to share, mix, and match controllers.</p>
<p>When you want to write your robot backend, there’s a possibility to leverage an out of the box simulation backend (<code class="language-plaintext highlighter-rouge">gazebo_ros_control</code> plugin).
After you’ve tested your robot in simulation, and want to work on the real robot hardware, you will want to write your custom hardware backend for your robot. For this, there are also tools that help making this task easier for you.</p>
<p>There is already a set of existing controllers (see the <a href="http://wiki.ros.org/ros_controllers"><code class="language-plaintext highlighter-rouge">ros_controllers</code></a> meta package) that are robot agnostic, which should hopefully enable you to leverage some of those. However, if your application requires it, then there are tools (<a href="http://wiki.ros.org/controller_interface"><code class="language-plaintext highlighter-rouge">controller_interface</code></a>) that help you also implement custom controllers. ROS control also provides the <a href="http://wiki.ros.org/controller_manager">controller manager</a> for controlling the life cycle of your controllers.</p>
<p>When it comes to third party libraries, there is out of the box compatibility with general purpose robot tools like visualization, <code class="language-plaintext highlighter-rouge">tf</code> and <code class="language-plaintext highlighter-rouge">rqt</code> plugins. The project provides also compatibility with the navigation stack for differential drive robots and biped humanoid robots and it also enables motion planning with <code class="language-plaintext highlighter-rouge">MoveIt</code>.</p>
<p>The whole idea is that you can focus on your actual application or research paper and not have to write the boilerplate code to get everything working with your robot hardware.</p>
<p>Another important aspect is that ROS control is real-time ready in the sense that if your application has (hard) real-time constraints then you can use ROS control with it. However, if your application doesn’t have real-time constraints then it is not imposed on you. Also the choice of using a real-time operating system is entirely up to you.</p>
<p>In a nutshell, the goals of the project are to</p>
<ul>
<li>lower the entry barrier for exposing hardware to ROS.</li>
<li>promote the reuse of control code in the same spirit that ROS has done for non-control code, at least non-real-time control code.</li>
<li>provide a set of ready-to-use tools.</li>
<li>have a real-time ready implementation.</li>
</ul>
<h3 id="history">History</h3>
<p>By the year 2009 there was the <code class="language-plaintext highlighter-rouge">pr2_controller_manager</code> that was developed by Willow Garage which was specific for the pr2 robot. This changed in the late 2012, where hiDOF in collaboration with Willow Garage started the ROS control project which by early 2013 was continued by PAL robotics and community efforts. ROS control is basically a robot agnostic version of the <code class="language-plaintext highlighter-rouge">pr2_controller_manager</code>.</p>
<h3 id="related-work">Related Work</h3>
<p>There are quite a few component-based frameworks, such as Orocos RTT (and a couple of projects that build upon it), OpenRTM, and YARP. For computational graph models there are Ecto and Microblx, and many non-robotics implementations, especially within the audio and graphics communities.</p>
<p>Without ROS control there would be no end-to-end solution for ROS users. ROS control tries to bring standard ROS interfaces (topics, services, actions) closer to the hardware and therefore to push the integration effort to the driver level, the rightmost part of the overview image.</p>
<figure>
<a href="/assets/ros/ros-control/ros-control-black-box.png"><img src="/assets/ros/ros-control/ros-control-black-box.png" /></a>
<figcaption>Overview of ROS control project (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<h2 id="ros-control-and-its-repositories">Ros Control and its Repositories</h2>
<p>The ROS control project consists of a GitHub organization called <a href="https://github.com/ros-controls">ros-controls</a>.
There you find message and action definitions, tools that help you write real-time code (<a href="https://github.com/ros-controls/realtime_tools">realtime_tools</a>) in the context of control, hardware interfaces (the <a href="http://wiki.ros.org/hardware_interface"><code class="language-plaintext highlighter-rouge">hardware_interface</code></a> and <a href="http://wiki.ros.org/transmission_interface"><code class="language-plaintext highlighter-rouge">transmission_interface</code></a> packages) to describe your robot, and the <a href="http://wiki.ros.org/controller_manager"><code class="language-plaintext highlighter-rouge">controller_manager</code></a>. The core framework is found in the <a href="https://github.com/ros-controls/ros_control"><code class="language-plaintext highlighter-rouge">ros_control</code></a> package, which is compatible with <a href="https://github.com/ros-controls/ros_controllers"><code class="language-plaintext highlighter-rouge">ros_controllers</code></a>, which provides a set of robot agnostic controllers.</p>
<p>The <a href="http://wiki.ros.org/ros_control"><code class="language-plaintext highlighter-rouge">ros_control</code></a> meta package is composed of the following individual packages:</p>
<ul>
<li><a href="http://wiki.ros.org/control_toolbox"><code class="language-plaintext highlighter-rouge">control_toolbox</code></a>: This package contains common modules (PID and Sine) that can be used by all controllers.</li>
<li><a href="http://wiki.ros.org/controller_interface"><code class="language-plaintext highlighter-rouge">controller_interface</code></a>: This package contains the interface base class for existing controllers and to develop new ones that will work with the ROS control framework.</li>
<li><a href="http://wiki.ros.org/controller_manager"><code class="language-plaintext highlighter-rouge">controller_manager</code></a>: This package provides the infrastructure to load, unload, start, and stop controllers.</li>
<li><a href="http://wiki.ros.org/controller_manager_msgs"><code class="language-plaintext highlighter-rouge">controller_manager_msgs</code></a>: This package provides the message and service definitions for the controller manager.</li>
<li><a href="http://wiki.ros.org/hardware_interface"><code class="language-plaintext highlighter-rouge">hardware_interface</code></a>: This package contains the base classes for the hardware interfaces.</li>
<li><a href="http://wiki.ros.org/transmission_interface"><code class="language-plaintext highlighter-rouge">transmission_interface</code></a>: This package contains the interface classes to model transmissions (<a href="http://docs.ros.org/melodic/api/transmission_interface/html/c++/classtransmission__interface_1_1SimpleTransmission.html">simple reducers</a>, <a href="http://docs.ros.org/melodic/api/transmission_interface/html/c++/classtransmission__interface_1_1DifferentialTransmission.html">differentials</a> or <a href="http://docs.ros.org/melodic/api/transmission_interface/html/c++/classtransmission__interface_1_1FourBarLinkageTransmission.html">four-bar linkages</a>)</li>
</ul>
<h3 id="setting-up-a-robot">Setting up a Robot</h3>
<p>ROS control is split into two parts. First there is the robot hardware abstraction,
which is used to communicate with the hardware. It provides resources like joints and it also handles resource conflicts.
On the other hand, we have controllers. They talk to the hardware through the hardware interface and they require resources that are provided by the hardware abstraction.</p>
<figure>
<a href="/assets/ros/ros-control/controllers-vs-hardware-abstraction.png"><img src="/assets/ros/ros-control/controllers-vs-hardware-abstraction.png" /></a>
<figcaption>Controllers vs Hardware abstraction (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<h4 id="hardware-abstraction-and-controllers">Hardware Abstraction and Controllers</h4>
<p>Let’s get a more detailed view of the hardware abstraction and the communication between and with ROS controllers.</p>
<figure>
<a href="/assets/ros/ros-control/controllers-and-hardware-interfaces.png"><img src="/assets/ros/ros-control/controllers-and-hardware-interfaces.png" /></a>
<figcaption>Controllers and hardware interfaces (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>The hardware interface is a software representation of the robot and its abstract hardware.
In ROS control speak, we have the notion of resources, which are actuators, joints and sensors.<br />
Some resources are read-only, such as joint states, IMU, force-torque sensors, and so on, while others can be read and written, such as position, velocity, and effort joints.
A set of hardware interfaces, each of which groups similar resources, represents a robot.</p>
<p>The robot hardware, represented in software, and the controllers are connected via ROS control’s interfaces,
not to be confused with typical ROS interfaces like topics, actions or services.
This means, we are just passing pointers around which is real-time safe. The interfaces of the robot hardware are
used by controllers to connect and communicate with the hardware.
At the leftmost part of the image we see that controllers have their interfaces, which are typically ROS interfaces
(topics, services or actions) that are custom so your controller can expose whatever it wants.</p>
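<p>The pointer-passing idea can be sketched in plain C++ without any ROS dependency. Note that all type and member names below are illustrative, not the actual <code class="language-plaintext highlighter-rouge">hardware_interface</code> API:</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#include <cassert>
#include <string>

// Toy sketch (no ROS dependency): the hardware abstraction owns the raw
// joint data, and a "handle" is just pointers into that data plus a name.
struct JointStateHandle {
  std::string name;
  const double* pos;  // read-only view into the hardware's buffer
};

struct JointHandle {
  JointStateHandle state;
  double* cmd;        // controllers write commands through this pointer
};

struct FakeHardware {
  double pos = 0.0;
  double cmd = 0.0;
  void read()  { pos += cmd; }  // pretend we polled an encoder
  void write() {}               // pretend we sent cmd over a bus
};

int main() {
  FakeHardware hw;
  JointHandle joint{{"arm_1_joint", &hw.pos}, &hw.cmd};

  // "Controller update": no copying, no allocation -- just pointer access,
  // which is why this pattern is real-time safe.
  hw.read();
  *joint.cmd = 1.0 - *joint.state.pos;  // trivial proportional-like law
  hw.write();

  assert(*joint.state.pos == 0.0);
  assert(hw.cmd == 1.0);
  return 0;
}
</code></pre></div></div>
<p>The update step touches only preallocated memory through pointers, which is what makes this communication pattern real-time safe.</p>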
<h4 id="exclusive-resource-ownwership">Exclusive Resource Ownership</h4>
<p>Multiple controllers can claim the same resource.
In the image below you can see that the two arm controllers both use the first arm joint.</p>
<figure>
<a href="/assets/ros/ros-control/exclusive-resource-ownership.png"><img src="/assets/ros/ros-control/exclusive-resource-ownership.png" /></a>
<figcaption>Exclusive resource ownership (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>By default there’s a policy of exclusive resource ownership.
In this case, you can either have one or the other but not both controllers running at the same time,
which is enforced by ROS control. If you want other policies, you can implement them as well.</p>
<h4 id="memory-layout">Memory Layout</h4>
<p>Now we will learn how the hardware interface actually communicates with the real or simulated hardware using doubles and floats which represent the actual data that you want to pass around.</p>
<p>The following image depicts the memory layout of your raw data and you can see the memory regions where you
actually read and write from and to hardware.</p>
<figure class="half">
<a href="/assets/ros/ros-control/memory-layout-read.png"><img src="/assets/ros/ros-control/memory-layout-read.png" /></a>
<a href="/assets/ros/ros-control/memory-layout-rw.png"><img src="/assets/ros/ros-control/memory-layout-rw.png" /></a>
<figcaption>Memory layouts for a joint state interface for reading states of a resource and another joint command interface for reading and writing from and to a resource (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>A resource is nothing more than pointers to the raw data, together with some semantics: whether the data is a position or velocity, the name of the joint, and that it belongs, for example, to a joint state interface.</p>
<p>As you see in the memory layout image above, joint state interfaces are used for reading joint state.
If you have a resource that can receive commands and at the same time provide feedback you can use the joint command interface. This interface is the base interface for other joint interfaces for position, velocity and effort (see the <a href="http://docs.ros.org/melodic/api/hardware_interface/html/c++/classhardware__interface_1_1JointCommandInterface.html">C++ <code class="language-plaintext highlighter-rouge">hardware_interface</code> API</a>).
These interfaces provide semantic meaning which allow you to read the joint state and also send semantically meaningful commands.</p>
<p>The following is a simple code example.</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#include <hardware_interface/robot_hw.h>
</span>
<span class="k">class</span> <span class="nc">MyRobot</span> <span class="o">:</span>
<span class="k">public</span> <span class="n">hardware_interface</span><span class="o">::</span><span class="n">RobotHW</span>
<span class="p">{</span>
<span class="nl">public:</span>
<span class="n">MyRobot</span><span class="p">();</span> <span class="c1">// Setup robot</span>
<span class="n">MyRobot</span><span class="p">(</span><span class="n">ros</span><span class="o">::</span><span class="n">NodeHandle</span> <span class="o">&</span><span class="n">nh</span><span class="p">,</span> <span class="n">urdf</span><span class="o">::</span><span class="n">Model</span> <span class="o">*</span><span class="n">urdf_model</span> <span class="o">=</span> <span class="nb">NULL</span><span class="p">);</span>
<span class="c1">// Talk to HW</span>
<span class="kt">void</span> <span class="n">read</span><span class="p">();</span>
<span class="kt">void</span> <span class="n">write</span><span class="p">();</span>
<span class="c1">// Requirement only if needed ...</span>
<span class="k">virtual</span> <span class="kt">bool</span> <span class="n">checkForConflict</span><span class="p">(...)</span> <span class="k">const</span><span class="p">;</span>
<span class="p">};</span>
</code></pre></div></div>
<p>To implement your custom robot hardware interface you inherit from a class called <code class="language-plaintext highlighter-rouge">hardware_interface::RobotHW</code> that is part of ROS control. In the constructor you set up your robot, either implemented in code (see for example: <a href="https://github.com/eborghi10/my_ROS_mobile_robot/blob/master/my_robot_base/include/my_robot_hw_interface.h">eborghi10/my_ROS_mobile_robot</a>) or using a robot description in the form of an urdf model. The latter requires that you pass the urdf model to the constructor or load it from the parameter server. An example using the urdf description can be found in the <a href="https://github.com/PickNikRobotics/ros_control_boilerplate">ros_control_boilerplate</a> repository.</p>
<p>After the robot setup we need to implement the <code class="language-plaintext highlighter-rouge">read()</code> and <code class="language-plaintext highlighter-rouge">write()</code> methods, which is robot specific and up to you to implement. The <code class="language-plaintext highlighter-rouge">read()</code> method is used to populate the raw data with your robot’s state. <code class="language-plaintext highlighter-rouge">write()</code> is used to send out
raw commands via CAN bus, Ethernet, EtherCAT, serial port or whatever protocol your robot hardware uses.</p>
<p>Within the ROS community, there are some projects that can help you out with this.
Examples are <a href="https://rosindustrial.org/">ROS-Industrial</a>, <a href="http://wiki.ros.org/rosserial">rosserial</a>, <a href="http://wiki.ros.org/sr_ronex">SR Ronex</a> and other custom solutions out there that you can leverage.</p>
<p>In case exclusive resource ownership is not good enough, you can always reimplement the virtual member function <a href="http://docs.ros.org/melodic/api/hardware_interface/html/c++/classhardware__interface_1_1RobotHW.html#ab491cf8d359534fe104f59300c188878"><code class="language-plaintext highlighter-rouge">hardware_interface::RobotHW::checkForConflict()</code></a> with your custom implementation.</p>
<p>In summary, the package called <a href="http://wiki.ros.org/hardware_interface">hardware_interface</a> (see also its <a href="http://docs.ros.org/melodic/api/hardware_interface/html/c++/annotated.html">C++ API</a>) provides all the building blocks for constructing a robot hardware abstraction. It is the software representation of your robot
and the idea is to abstract hardware away which is done using resources (actuators, joints and sensors), interfaces, that are sets of similar resources, and make up a robot, which is a set of interfaces. The hardware interface also handles resource conflicts and by default we have exclusive resource ownership.</p>
<p>Out of the box ROS control provides resources and interfaces to read states such as joint state (position, velocity and effort), IMU and force-torque sensors. If, in addition to reading, you also want to send commands, there are interfaces for position control joints, but also velocity and effort control joints. For a list of all available interfaces check out the <a href="http://wiki.ros.org/ros_control#Hardware_Interfaces">Hardware Interfaces</a> section on the <code class="language-plaintext highlighter-rouge">ros_control</code> wiki page.</p>
<p>Similar interfaces exist for actuators, which will be explained later.</p>
<h4 id="simulation-backend">Simulation Backend</h4>
<figure>
<a href="/assets/ros/ros-control/simulation-overview.png"><img src="/assets/ros/ros-control/simulation-overview.png" /></a>
<figcaption>Robot simulation using ROS control and Gazebo (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>To simulate your robot there is a package called <a href="http://wiki.ros.org/gazebo_ros_control"><code class="language-plaintext highlighter-rouge">gazebo_ros_control</code></a> which is located outside the ROS control organization repositories in <a href="https://github.com/ros-simulation/gazebo_ros_pkgs">ros-simulation/gazebo_ros_pkgs</a>. It is a <a href="http://gazebosim.org/">Gazebo</a> plugin for ROS control.
There exists a default plugin, which populates all the joint interfaces from the URDF by parsing the transmission specifications and, optionally, joint limits, which will be described later in the post. If you use the default plugin and want your
robot to show up in the Gazebo simulator, the following is all you have to add to your urdf description, apart from the robot’s transmissions.</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><gazebo></span>
<span class="nt"><plugin</span> <span class="na">name=</span><span class="s">"gazebo_ros_control"</span> <span class="na">filename=</span><span class="s">"libgazebo_ros_control.so"</span><span class="nt">></span>
<span class="nt"><robotNamespace></span>/my_robot<span class="nt"></robotNamespace></span>
<span class="nt"></plugin></span>
<span class="nt"></gazebo></span>
</code></pre></div></div>
<p>This makes it simple to test ROS control because we don’t have to write any robot hardware interface code.
Instead, we just need to setup some configuration files.</p>
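<p>The transmissions mentioned above are specified in the URDF as well. A minimal example might look like the following sketch (the transmission, joint and actuator names are illustrative and must match your robot):</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><transmission name="arm_1_trans">
  <type>transmission_interface/SimpleTransmission</type>
  <joint name="arm_1_joint">
    <hardwareInterface>hardware_interface/PositionJointInterface</hardwareInterface>
  </joint>
  <actuator name="arm_1_motor">
    <mechanicalReduction>1</mechanicalReduction>
  </actuator>
</transmission>
</code></pre></div></div>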
<p>There is also the option of writing a custom plugin, in which case most of the details are up to you.</p>
<h3 id="controllers">Controllers</h3>
<p>In the following images we concentrate on the controllers</p>
<figure class="half">
<a href="/assets/ros/ros-control/controllers01.png"><img src="/assets/ros/ros-control/controllers01.png" /></a>
<a href="/assets/ros/ros-control/controllers02.png"><img src="/assets/ros/ros-control/controllers02.png" /></a>
<figcaption>Controllers (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>and see more of the arm controller’s internals.</p>
<figure class="half">
<a href="/assets/ros/ros-control/arm-controller.png"><img src="/assets/ros/ros-control/arm-controller.png" /></a>
<figcaption>arm controller (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>As mentioned, a controller requires resources in the form of <code class="language-plaintext highlighter-rouge">ros_control</code> hardware interfaces (located on the right side in the image). On the left hand side we have the controller’s ROS API.</p>
<p>Controllers use a plugin interface that implements the controller lifecycle. The lifecycle is a simple state machine with transitions between two states which are “stopped” and “running”. A controller has two computation units where one is real-time safe and the other is non real-time.</p>
<p>The state machine uses non real-time operations to load and unload a controller.</p>
<figure class="half">
<a href="/assets/ros/ros-control/controller-load-unload.png"><img src="/assets/ros/ros-control/controller-load-unload.png" /></a>
<figcaption>Plugin interface transitions: load and unload (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>Loading a controller is done by the <code class="language-plaintext highlighter-rouge">controller_manager</code>, which initializes the controller plugin defined in a config yaml file and checks requisites, for example the existence of hardware resources
(note that this is not the same as hardware resource conflict handling). In the arm controller example,
we need two joints named <code class="language-plaintext highlighter-rouge">arm_1_joint</code> and <code class="language-plaintext highlighter-rouge">arm_2_joint</code>, and they must implement the <code class="language-plaintext highlighter-rouge">PositionJointInterface</code>.
If your robot doesn’t provide that interface with those resources, the controller cannot work with that robot. You can also check for things like the robot URDF description configuration, or a controller-specific configuration such as ROS parameters scoped within the controller namespace. Finally, in the load method you set up the ROS interfaces which define how your clients will talk to the controller.</p>
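<p>Such a config yaml for the arm controller example could look like the following sketch (the controller type is an assumption for illustration; only the joint names come from the example above):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>arm_controller:
  type: position_controllers/JointTrajectoryController
  joints:
    - arm_1_joint
    - arm_2_joint
</code></pre></div></div>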
<p>The steps to unload a controller plugin are implemented in the destructor of the controller.</p>
<p>ROS control has real-time safe operations for transitioning between the stopped and running states.</p>
<figure class="half">
<a href="/assets/ros/ros-control/controller-start-stop.png"><img src="/assets/ros/ros-control/controller-start-stop.png" /></a>
<figcaption>Plugin interface transitions: start and stop (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>When you <code class="language-plaintext highlighter-rouge">start()</code> the controller the following is what gets executed before the first controller <code class="language-plaintext highlighter-rouge">update()</code>:</p>
<ul>
<li>Resource conflict handling (different from resource existence: by this time we know that the resources exist, but we check whether a resource is already in use or available for your controller, in the case of exclusive ownership).</li>
<li>The next typical policy is to apply what’s called semantic zero. This means, after the controller starts you would for example hold the position, set the velocity to zero, activate gravity compensation or something else that makes sense in your use case.</li>
</ul>
<p>The <code class="language-plaintext highlighter-rouge">stop()</code> operation transitions from the “running” to the “stopped” state. The typical policy for this operation is to cancel all goals.
Cancelling all goals avoids a rapid movement the next time you start your controller, because it doesn’t try to resume what it was doing the last time around.</p>
<p>The last real-time safe operation is <code class="language-plaintext highlighter-rouge">update()</code>, which is a real-time safe computation, if implemented accordingly. This means you have to take care that the implementation of <code class="language-plaintext highlighter-rouge">update()</code> is actually real-time safe, or, if you don’t have real-time constraints, that the implementation is deterministic.
The <code class="language-plaintext highlighter-rouge">update()</code> method is executed periodically in the “running” state.</p>
<p>Finally, the last part of computations that exist in controllers are non real-time operations which are executed asynchronously and take place in the callbacks of your controller’s ROS interface.</p>
<p>In summary, controllers are dynamically loadable <strong>plugins</strong> that have an interface which defines a very <strong>simple state machine</strong>. This interface clearly separates the operations that are <strong>non real-time safe</strong> from those that are required to be <strong>real-time safe</strong>. Finally, the computation can take place in the <strong>controller update</strong>, which in this case is both periodic and real-time safe, and we have computation in the <strong>ROS API callbacks</strong>, which is asynchronous and non real-time safe.</p>
<h4 id="controller-toolbox">Controller Toolbox</h4>
<p>To write your own controllers there is a helpful package, <a href="http://wiki.ros.org/control_toolbox"><code class="language-plaintext highlighter-rouge">control_toolbox</code></a>, which contains tools useful for writing
<strong>controllers</strong> or <strong>robot abstractions</strong>.</p>
<p>The following is a list of tools you can use:</p>
<ul>
<li><strong>Pid</strong> PID loop</li>
<li><strong>SineSweep</strong> for joint frequency analysis</li>
<li><strong>Dither</strong> white noise generator</li>
<li><strong>LimitedProxy</strong> for convergence without overshoot</li>
<li>…</li>
</ul>
<h4 id="ros-controllers">ROS Controllers</h4>
<p>Another helpful repository is <a href="http://wiki.ros.org/ros_controllers"><code class="language-plaintext highlighter-rouge">ros_controllers</code></a> which provides ready to use out of the box controllers.
If you plan to develop a controller, you should first check in this repository if it already exists.</p>
<p>This repository has three controllers for reporting the joint state from sensors:</p>
<ul>
<li><a href="http://wiki.ros.org/joint_state_controller"><code class="language-plaintext highlighter-rouge">joint_state_controller</code></a> reads joint states and publishes them on the <code class="language-plaintext highlighter-rouge">/joint_states</code> topic of type <a href="http://docs.ros.org/melodic/api/sensor_msgs/html/msg/JointState.html"><code class="language-plaintext highlighter-rouge">sensor_msgs/JointState</code></a>.
Note: although it contains the word controller in its name, it is not actually controlling anything. However, it should always be used to publish the current joint states for other nodes such as <code class="language-plaintext highlighter-rouge">tf</code>. It is also worth mentioning that it should not be confused with <a href="http://wiki.ros.org/joint_state_publisher"><code class="language-plaintext highlighter-rouge">joint_state_publisher</code></a> (see <a href="https://answers.ros.org/question/303358/what-is-the-difference-between-joint_state_publisher-and-joint_state_controller/">this answer</a> for a distinction).</li>
<li><a href="http://wiki.ros.org/imu_sensor_controller"><code class="language-plaintext highlighter-rouge">imu_sensor_controller</code></a> publishes <a href="http://docs.ros.org/melodic/api/sensor_msgs/html/msg/Imu.html"><code class="language-plaintext highlighter-rouge">sensor_msgs/Imu</code></a> topics.</li>
<li><a href="http://wiki.ros.org/force_torque_sensor_controller"><code class="language-plaintext highlighter-rouge">force_torque_sensor_controller</code></a> publishes <a href="http://docs.ros.org/melodic/api/geometry_msgs/html/msg/Wrench.html"><code class="language-plaintext highlighter-rouge">geometry_msgs/Wrench</code></a> topics.</li>
</ul>
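<p>As a concrete illustration, the <code class="language-plaintext highlighter-rouge">joint_state_controller</code> is typically configured with just its type and a publish rate (the 50 Hz value here is an arbitrary example):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>joint_state_controller:
  type: joint_state_controller/JointStateController
  publish_rate: 50
</code></pre></div></div>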
<p>It also provides multiple general purpose controllers. There are three packages that implement simple single joint controllers that operate in different control spaces (position, velocity and effort).
These main ROS controllers are grouped according to the commands that get passed to your hardware/simulator:</p>
<ul>
<li><a href="http://wiki.ros.org/effort_controllers"><code class="language-plaintext highlighter-rouge">effort_controllers</code></a>: used when you want to send commands to an effort interface. This means that the joints you want to control accept an effort command.
<ul>
<li><code class="language-plaintext highlighter-rouge">joint_effort_controller</code> with this subclass the <code class="language-plaintext highlighter-rouge">effort_controllers</code> plugin accepts effort set values as input.</li>
<li><code class="language-plaintext highlighter-rouge">joint_position_controller</code> with this subclass the <code class="language-plaintext highlighter-rouge">effort_controllers</code> plugin accepts position set values as input.</li>
<li><code class="language-plaintext highlighter-rouge">joint_velocity_controller</code> with this subclass the <code class="language-plaintext highlighter-rouge">effort_controllers</code> plugin accepts velocity set values as input.</li>
</ul>
</li>
<li><a href="http://wiki.ros.org/position_controllers"><code class="language-plaintext highlighter-rouge">position_controllers</code></a>: used when you want to send commands to a position interface. This means that the joints you want to control accept a position command.
<ul>
<li><code class="language-plaintext highlighter-rouge">joint_position_controller</code> with this subclass the <code class="language-plaintext highlighter-rouge">position_controllers</code> plugin accepts only position set values as input.</li>
<li><code class="language-plaintext highlighter-rouge">joint_group_position_controller</code></li>
</ul>
</li>
<li><a href="http://wiki.ros.org/velocity_controllers"><code class="language-plaintext highlighter-rouge">velocity_controllers</code></a>: used when you want to send commands to a velocity interface. This means that the joints you want to control accept a velocity command.
<ul>
<li><code class="language-plaintext highlighter-rouge">joint_velocity_controller</code> with this subclass the <code class="language-plaintext highlighter-rouge">velocity_controllers</code> plugin accepts only velocity set values as input.</li>
<li><code class="language-plaintext highlighter-rouge">joint_group_velocity_controller</code></li>
</ul>
</li>
</ul>
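<p>For example, an effort-based position controller for a single joint could be configured like this (the joint name and PID gains are made up for illustration and need tuning for a real robot):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>joint1_position_controller:
  type: effort_controllers/JointPositionController
  joint: joint1
  pid: {p: 100.0, i: 0.01, d: 10.0}
</code></pre></div></div>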
<p>To set an entire trajectory, the following controllers are defined:</p>
<ul>
<li><a href="http://wiki.ros.org/joint_trajectory_controller?distro=melodic"><code class="language-plaintext highlighter-rouge">joint_trajectory_controller</code></a> is mostly used for manipulation and implements the action interface that is expected by <a href="https://moveit.ros.org/">MoveIt!</a> in the default ROS bindings. The controller accepts commands as either actions of type <a href="http://docs.ros.org/api/control_msgs/html/action/FollowJointTrajectory.html"><code class="language-plaintext highlighter-rouge">control_msgs/FollowJointTrajectory</code></a> or via the topic <a href="http://docs.ros.org/melodic/api/trajectory_msgs/html/msg/JointTrajectory.html"><code class="language-plaintext highlighter-rouge">trajectory_msgs/JointTrajectory</code></a>.
<ul>
<li><code class="language-plaintext highlighter-rouge">position_controller</code></li>
<li><code class="language-plaintext highlighter-rouge">velocity_controller</code></li>
<li><code class="language-plaintext highlighter-rouge">effort_controller</code></li>
<li><code class="language-plaintext highlighter-rouge">position_velocity_controller</code></li>
<li><code class="language-plaintext highlighter-rouge">position_velocity_acceleration_controller</code></li>
</ul>
</li>
</ul>
<p>For navigation there is the <a href="http://wiki.ros.org/diff_drive_controller"><code class="language-plaintext highlighter-rouge">diff_drive_controller</code></a> which accepts commands in the form of <a href="http://docs.ros.org/melodic/api/geometry_msgs/html/msg/Twist.html"><code class="language-plaintext highlighter-rouge">geometry_msgs/Twist</code></a> topics and publishes odometry to <a href="http://wiki.ros.org/tf"><code class="language-plaintext highlighter-rouge">tf</code></a> and <a href="http://docs.ros.org/melodic/api/nav_msgs/html/msg/Odometry.html"><code class="language-plaintext highlighter-rouge">nav_msgs/Odometry</code></a>. It exposes a ROS interface that is compatible with the <a href="http://wiki.ros.org/navigation">ROS navigation stack</a>.</p>
<p>For manipulation we can use the <a href="http://wiki.ros.org/gripper_action_controller"><code class="language-plaintext highlighter-rouge">gripper_action_controller</code></a> which implements a single-DOF gripper controller.
It accepts commands as actions of type <a href="http://docs.ros.org/melodic/api/control_msgs/html/msg/GripperCommand.html"><code class="language-plaintext highlighter-rouge">control_msgs/GripperCommandAction</code></a> and is compatible with <a href="https://moveit.ros.org/">MoveIt!</a>.</p>
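<p>A minimal <code class="language-plaintext highlighter-rouge">diff_drive_controller</code> configuration might look like the following sketch (the wheel joint names and geometry values are illustrative and must match your robot):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mobile_base_controller:
  type: diff_drive_controller/DiffDriveController
  left_wheel: 'left_wheel_joint'
  right_wheel: 'right_wheel_joint'
  publish_rate: 50.0
  wheel_separation: 0.40
  wheel_radius: 0.08
</code></pre></div></div>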
<h3 id="controller-lifecycle-management">Controller Lifecycle Management</h3>
<p>To bring the robot hardware abstraction and the controllers together, ROS control uses the <strong>controller manager</strong>,
which lives in a package with the same name, <a href="http://wiki.ros.org/controller_manager"><code class="language-plaintext highlighter-rouge">controller_manager</code></a>, and takes care of the control loop.</p>
<h4 id="controller-manager">Controller Manager</h4>
<figure class="half">
<a href="/assets/ros/ros-control/controller-manager.png"><img src="/assets/ros/ros-control/controller-manager.png" /></a>
<figcaption>Controller lifecycle management (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>The controller manager knows about the robot and the controllers. Its two main purposes are: <strong>robot resource management</strong> and <strong>controller lifecycle management</strong>.</p>
<figure class="half">
<a href="/assets/ros/ros-control/robot-resources.png"><img src="/assets/ros/ros-control/robot-resources.png" /></a>
<figcaption>Robot resource conflict handling (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>The controller manager knows about resources and can therefore enforce the resource conflict policy, for example, exclusive ownership.</p>
<p>It also takes care of controller lifecycle management. Because the <code class="language-plaintext highlighter-rouge">controller_manager</code> knows about the controllers
it handles the transitions of the controller state machine and it also updates the running controllers.
It is important to mention that, by design, controller updates are serialized in a single thread and they are periodic.</p>
<figure class="half">
<a href="/assets/ros/ros-control/controller-lifecycle.png"><img src="/assets/ros/ros-control/controller-lifecycle.png" /></a>
<figcaption>Controller lifecycle management (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>The controller manager has an API which is based on ROS services. This API is used for</p>
<ul>
<li>Controller lifecycle management: to transition between the states of the controller state machine
<ul>
<li><code class="language-plaintext highlighter-rouge">load_controller</code></li>
<li><code class="language-plaintext highlighter-rouge">unload_controller</code></li>
<li><code class="language-plaintext highlighter-rouge">switch_controller</code>: Note that there are no separate start and stop services. Instead, there is a single switch service.
With it, it is possible to stop one set of controllers and start another set in the same control cycle. This is relevant if, for example, you have a robot arm holding a pose in mid-air and you want to change control strategies. If a few control cycles pass with no controller running, the arm might fall down. With the switch service you can guarantee that there is no control cycle without any controller running, unless that is what you want.</li>
</ul>
</li>
<li>Query services:
<ul>
<li><code class="language-plaintext highlighter-rouge">list_controllers</code>: list the currently running controllers or which controllers are available for loading.</li>
<li><code class="language-plaintext highlighter-rouge">list_controller_types</code>: list the controller types that are available for loading.</li>
</ul>
</li>
<li>Other services useful for debugging
<ul>
<li><code class="language-plaintext highlighter-rouge">reload_controller_libraries</code></li>
</ul>
</li>
</ul>
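<p>In practice you rarely call these services by hand. The <code class="language-plaintext highlighter-rouge">controller_manager</code> package also provides a <code class="language-plaintext highlighter-rouge">spawner</code> script that loads and starts a list of controllers from a launch file (the controller names here are illustrative):</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><node name="controller_spawner" pkg="controller_manager" type="spawner"
      args="joint_state_controller arm_controller" />
</code></pre></div></div>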
<h4 id="the-control-loop">The Control Loop</h4>
<p>The following image shows the control loop in its most basic and simplified form:</p>
<figure class="half">
<a href="/assets/ros/ros-control/control-loop-simple.png"><img src="/assets/ros/ros-control/control-loop-simple.png" /></a>
<figcaption>Basic control loop (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>Typically, you first read the state from the hardware through the hardware interface, then use the controller manager to update the controllers with the current hardware state, and finally you send the commands from the update step back out to the hardware.</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#include <ros/ros.h>
#include <my_robot/my_robot.h>
#include <controller_manager/controller_manager.h>
</span>
<span class="kt">int</span> <span class="nf">main</span><span class="p">(</span><span class="kt">int</span> <span class="n">argc</span><span class="p">,</span> <span class="kt">char</span> <span class="o">**</span><span class="n">argv</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Setup</span>
<span class="n">ros</span><span class="o">::</span><span class="n">init</span><span class="p">(</span><span class="n">argc</span><span class="p">,</span> <span class="n">argv</span><span class="p">,</span> <span class="s">"my_robot"</span><span class="p">);</span>
<span class="n">MyRobot</span><span class="o">::</span><span class="n">MyRobot</span> <span class="n">robot</span><span class="p">;</span>
<span class="n">controller_manager</span><span class="o">::</span><span class="n">ControllerManager</span> <span class="n">cm</span><span class="p">(</span><span class="o">&</span><span class="n">robot</span><span class="p">);</span>
<span class="n">ros</span><span class="o">::</span><span class="n">AsyncSpinner</span> <span class="n">spinner</span><span class="p">(</span><span class="mi">1</span><span class="p">);</span>
<span class="n">spinner</span><span class="p">.</span><span class="n">start</span><span class="p">();</span>
<span class="c1">// Control loop</span>
<span class="n">ros</span><span class="o">::</span><span class="n">Time</span> <span class="n">prev_time</span> <span class="o">=</span> <span class="n">ros</span><span class="o">::</span><span class="n">Time</span><span class="o">::</span><span class="n">now</span><span class="p">();</span>
<span class="n">ros</span><span class="o">::</span><span class="n">Rate</span> <span class="n">rate</span><span class="p">(</span><span class="mf">10.0</span><span class="p">);</span> <span class="c1">// 10 Hz rate</span>
<span class="k">while</span> <span class="p">(</span><span class="n">ros</span><span class="o">::</span><span class="n">ok</span><span class="p">())</span>
<span class="p">{</span>
<span class="k">const</span> <span class="n">ros</span><span class="o">::</span><span class="n">Time</span> <span class="n">time</span> <span class="o">=</span> <span class="n">ros</span><span class="o">::</span><span class="n">Time</span><span class="o">::</span><span class="n">now</span><span class="p">();</span>
<span class="k">const</span> <span class="n">ros</span><span class="o">::</span><span class="n">Duration</span> <span class="n">period</span> <span class="o">=</span> <span class="n">time</span> <span class="o">-</span> <span class="n">prev_time</span><span class="p">;</span>
<span class="n">robot</span><span class="p">.</span><span class="n">read</span><span class="p">();</span>
<span class="n">cm</span><span class="p">.</span><span class="n">update</span><span class="p">(</span><span class="n">time</span><span class="p">,</span> <span class="n">period</span><span class="p">);</span>
<span class="n">robot</span><span class="p">.</span><span class="n">write</span><span class="p">();</span>
<span class="n">rate</span><span class="p">.</span><span class="n">sleep</span><span class="p">();</span>
<span class="p">}</span>
<span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Let’s step through the code snippet:</p>
<p>We first initialize our ROS node:</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Setup</span>
<span class="n">ros</span><span class="o">::</span><span class="n">init</span><span class="p">(</span><span class="n">argc</span><span class="p">,</span> <span class="n">argv</span><span class="p">,</span> <span class="s">"my_robot"</span><span class="p">);</span>
</code></pre></div></div>
<p>Create an instance of your robot so that this instance knows about all the resources that are available (remember the <code class="language-plaintext highlighter-rouge">hardware_interface</code>). Next we create an instance of the controller manager and pass it the robot, so that it can handle its resources:</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">MyRobot</span><span class="o">::</span><span class="n">MyRobot</span> <span class="n">robot</span><span class="p">;</span>
<span class="n">controller_manager</span><span class="o">::</span><span class="n">ControllerManager</span> <span class="nf">cm</span><span class="p">(</span><span class="o">&</span><span class="n">robot</span><span class="p">);</span>
</code></pre></div></div>
<p>Next, we setup a separate thread that will be used to service ROS callbacks:</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">ros</span><span class="o">::</span><span class="n">AsyncSpinner</span> <span class="nf">spinner</span><span class="p">(</span><span class="mi">1</span><span class="p">);</span>
<span class="n">spinner</span><span class="p">.</span><span class="n">start</span><span class="p">();</span>
</code></pre></div></div>
<p>The following snippets show the actual control loop.
First we set up the period of the control loop, which in this case is a slow (non real-time) 10 Hz.</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Control loop</span>
<span class="n">ros</span><span class="o">::</span><span class="n">Time</span> <span class="n">prev_time</span> <span class="o">=</span> <span class="n">ros</span><span class="o">::</span><span class="n">Time</span><span class="o">::</span><span class="n">now</span><span class="p">();</span>
<span class="n">ros</span><span class="o">::</span><span class="n">Rate</span> <span class="nf">rate</span><span class="p">(</span><span class="mf">10.0</span><span class="p">);</span> <span class="c1">// 10 Hz rate</span>
</code></pre></div></div>
<p>Inside the <code class="language-plaintext highlighter-rouge">while</code> loop we do some basic bookkeeping to get the system time in order to compute the control period:</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">const</span> <span class="n">ros</span><span class="o">::</span><span class="n">Time</span> <span class="n">time</span> <span class="o">=</span> <span class="n">ros</span><span class="o">::</span><span class="n">Time</span><span class="o">::</span><span class="n">now</span><span class="p">();</span>
<span class="k">const</span> <span class="n">ros</span><span class="o">::</span><span class="n">Duration</span> <span class="n">period</span> <span class="o">=</span> <span class="n">time</span> <span class="o">-</span> <span class="n">prev_time</span><span class="p">;</span>
</code></pre></div></div>
<p>And then the actual control loop (read, update, write) begins executing:</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">robot</span><span class="p">.</span><span class="n">read</span><span class="p">();</span>
<span class="n">cm</span><span class="p">.</span><span class="n">update</span><span class="p">(</span><span class="n">time</span><span class="p">,</span> <span class="n">period</span><span class="p">);</span>
<span class="n">robot</span><span class="p">.</span><span class="n">write</span><span class="p">();</span>
</code></pre></div></div>
<p>All these steps keep getting repeated with the specified rate:</p>
<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">rate</span><span class="p">.</span><span class="n">sleep</span><span class="p">();</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">read</code>, <code class="language-plaintext highlighter-rouge">update</code> and <code class="language-plaintext highlighter-rouge">write</code> operations take place in a thread, which is called the control thread:</p>
<figure class="third">
<a href="/assets/ros/ros-control/control-thread.png"><img src="/assets/ros/ros-control/control-thread.png" /></a>
<figcaption>The control loop thread (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>This thread potentially operates in real-time. If your system has real-time constraints, then it is this part of the code (read, update, write) that you should make sure runs with real-time scheduling.</p>
<p>There is another non real-time thread, called the spinner thread, that is responsible for servicing ROS callbacks.</p>
<figure class="half">
<a href="/assets/ros/ros-control/control-spinner-thread.png"><img src="/assets/ros/ros-control/control-spinner-thread.png" /></a>
<figcaption>The control and spinner threads (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>As we see in the image above, the spinner thread services several kinds of ROS callbacks: first, those from the controller manager API;
second, the custom ROS API of your controllers, which is up to you;
and finally any other callbacks registered, depending on your setup.</p>
<p>It is important to mention that the non real-time computations run in a separate thread.
Keeping them separate means we respect the requirements of real-time applications, in particular deterministic execution.</p>
<p>In the code example above we don’t have real-time constraints because it’s a slow robot operating at 10 Hz.
However, it also makes sense in this example to have the threads separate. For example, imagine a robot with 10,000 joints that has a controller which needs to parse its URDF while loading. In this example, there is no way to guarantee that parsing that URDF will take less than 0.1 seconds.</p>
<p>If we serialized everything in the same thread, such long-lasting operations would make it impossible to guarantee the rate of the control loop. Therefore, these operations are separated into a spinner thread (even if we’re not using a real-time operating system).</p>
<p>Because we have two threads, they need to talk to each other, as we can see in the image above.
We have to handle concurrency with care, especially with respect to the control thread.
Because it operates in real-time we have to make sure that we never block it to avoid suffering from <a href="https://en.wikipedia.org/wiki/Priority_inversion">priority inversions</a> or other bad things that might happen.</p>
<p>To deal with concurrency and other things there’s a package that’s called <a href="http://wiki.ros.org/realtime_tools"><code class="language-plaintext highlighter-rouge">realtime_tools</code></a> which has tools that are usable from a hard real-time thread. It includes:</p>
<ul>
<li><strong>RealtimePublisher</strong>: publish to a ROS topic</li>
<li><strong>RealtimeBuffer</strong>: share resource with non real-time thread.</li>
<li><strong>RealtimeClock</strong>: query estimates of the system clock.</li>
<li>…</li>
</ul>
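<p>To illustrate why such tools are needed, the following is a minimal sketch of the double-buffer idea behind <code class="language-plaintext highlighter-rouge">RealtimeBuffer</code> (a hypothetical simplified class, not the actual <code class="language-plaintext highlighter-rouge">realtime_tools</code> implementation): the non real-time thread writes under a lock, while the real-time thread only ever <em>tries</em> to lock, so it can never block.</p>

```cpp
#include <cassert>
#include <mutex>

// Sketch of a lock-avoiding buffer for sharing data with a real-time thread.
template <typename T>
class RtBuffer
{
public:
  explicit RtBuffer(const T& initial) : rt_copy_(initial), back_(initial) {}

  // Called from the non real-time (spinner) thread; may block briefly.
  void writeFromNonRT(const T& value)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    back_ = value;
    new_data_ = true;
  }

  // Called from the real-time control thread; never blocks.
  const T& readFromRT()
  {
    if (mutex_.try_lock())  // only swap if the writer is not holding the lock
    {
      if (new_data_)
      {
        rt_copy_ = back_;
        new_data_ = false;
      }
      mutex_.unlock();
    }
    return rt_copy_;  // otherwise fall back to the last good value
  }

private:
  T rt_copy_;              // owned by the real-time thread
  T back_;                 // written by the non real-time thread
  bool new_data_ = false;
  std::mutex mutex_;
};
```

The real <code class="language-plaintext highlighter-rouge">realtime_tools::RealtimeBuffer</code> offers the same <code class="language-plaintext highlighter-rouge">writeFromNonRT()</code>/<code class="language-plaintext highlighter-rouge">readFromRT()</code> pair; if the writer holds the lock, the reader simply keeps using the previous value instead of waiting.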
<h4 id="the-control-loop-extended">The Control Loop Extended</h4>
<p>Let’s now look at the complete control loop used in ROS control’s framework:</p>
<figure class="half">
<a href="/assets/ros/ros-control/control-loop-extended.png"><img src="/assets/ros/ros-control/control-loop-extended.png" /></a>
<figcaption>The extended control loop (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>With this extended control loop we can make use of mechanical transmissions, take care of joint limits and provide emergency stops.</p>
<h5 id="mechanical-transmissions">Mechanical Transmissions</h5>
<p>If your hardware doesn’t take care of mechanical transmissions by itself, then we have to do this in software.
For this, there is a package called <a href="http://wiki.ros.org/transmission_interface"><code class="language-plaintext highlighter-rouge">transmission_interface</code></a> which contains data structures for representing mechanical transmissions, and methods for propagating position, velocity and effort variables between <strong>actuator and joint spaces</strong>.</p>
<p>In the same spirit as the <code class="language-plaintext highlighter-rouge">hardware_interface</code> package, this package wraps existing raw data (e.g., current actuator position, reference joint command, etc.) under a consistent interface. By not imposing a specific layout on the raw data, it becomes easier to connect arbitrary hardware drivers to software control.</p>
<p>The <code class="language-plaintext highlighter-rouge">transmission_interface</code> is not used by controllers themselves (it does not implement a HardwareInterface) but instead operates before or after the controllers update, in the <code class="language-plaintext highlighter-rouge">read()</code> and <code class="language-plaintext highlighter-rouge">write()</code> methods (or equivalents) of the robot abstraction.</p>
<p>The following transmissions already exist and are ready to use:</p>
<ul>
<li>Simple reducer</li>
<li>Four-bar linkage</li>
<li>Differential</li>
</ul>
<figure class="third">
<a href="/assets/ros/ros-control/simple_transmission.png"><img src="/assets/ros/ros-control/simple_transmission.png" /></a>
<a href="/assets/ros/ros-control/four_bar_linkage_transmission.png"><img src="/assets/ros/ros-control/four_bar_linkage_transmission.png" /></a>
<a href="/assets/ros/ros-control/differential_transmission.png"><img src="/assets/ros/ros-control/differential_transmission.png" /></a>
<figcaption>Transmission types (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>There are plugins that help you load transmissions from the URDF and thereby simplify the process of populating the <code class="language-plaintext highlighter-rouge">hardware_interface::RobotHW</code> interfaces.
Without these plugins it would be very complex to populate your robot hardware interface.</p>
<p>The following URDF xml snippet shows a configuration for a simple reducer. This is what you have to put in your URDF.</p>
<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt"><transmission</span> <span class="na">name=</span><span class="s">"arm_1_trans"</span><span class="nt">></span>
<span class="nt"><type></span>transmission_interface/SimpleTransmission<span class="nt"></type></span>
<span class="nt"><actuator</span> <span class="na">name=</span><span class="s">"arm_1_motor"</span><span class="nt">></span>
<span class="nt"><mechanicalReduction></span>42<span class="nt"></mechanicalReduction></span>
<span class="nt"></actuator></span>
<span class="nt"><joint</span> <span class="na">name=</span><span class="s">"arm_1_joint"</span><span class="nt">></span>
<span class="nt"><hardwareInterface></span>hardware_interface/PositionJointInterface<span class="nt"></hardwareInterface></span>
<span class="nt"></joint></span>
<span class="nt"></transmission></span>
</code></pre></div></div>
<p>Here you specify the type of the transmission, the names of the actuator and joint that you are relating to one another, and properties such as the mechanical reduction and the type of hardware interface that you expect to expose.</p>
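<p>The math behind such a simple reducer can be sketched as follows (a hypothetical helper mirroring what <code class="language-plaintext highlighter-rouge">transmission_interface</code>'s <code class="language-plaintext highlighter-rouge">SimpleTransmission</code> computes, not its actual API): positions and velocities shrink by the reduction when propagated from actuator to joint space, while effort grows by the same factor.</p>

```cpp
#include <cassert>

// Sketch of a simple reducer with a fixed mechanical reduction ratio.
struct SimpleReducer
{
  double reduction;  // e.g., 42 as in the URDF snippet above

  // Actuator space -> joint space (state propagation in read()).
  double actuatorToJointPosition(double actuator_pos) const
  {
    return actuator_pos / reduction;
  }
  double actuatorToJointVelocity(double actuator_vel) const
  {
    return actuator_vel / reduction;
  }
  double actuatorToJointEffort(double actuator_effort) const
  {
    return actuator_effort * reduction;
  }

  // Joint space -> actuator space (command propagation in write()).
  double jointToActuatorPosition(double joint_pos) const
  {
    return joint_pos * reduction;
  }
};
```

This is why the transmission maps run before and after the controller update: controllers reason purely in joint space, while the hardware only understands actuator space.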
<h5 id="safety-issues">Safety Issues</h5>
<p>You might be interested in enforcing joint limits before you send the commands to the robot hardware.
For this, there is a package called <code class="language-plaintext highlighter-rouge">joint_limits_interface</code>. It is the last line of defense before commanding hardware: don’t trust controllers.
It is implemented using simple and fast low-level strategies.</p>
<p>Inside the package you find</p>
<ul>
<li>data structures for <strong>representing</strong> joint limits.</li>
<li>Convenience methods for <strong>populating</strong> these data structures from:
<ul>
<li>URDF</li>
<li>yaml (similar to MoveIt! + jerk)</li>
</ul>
</li>
<li>Methods for <strong>enforcing</strong> joint limits
<ul>
<li>soft-limits (PR2 like) and clamping strategies</li>
<li>work for different hardware interfaces</li>
</ul>
</li>
</ul>
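<p>A clamping strategy of the kind listed above can be sketched like this (a minimal hypothetical example in the spirit of <code class="language-plaintext highlighter-rouge">joint_limits_interface</code>, not its actual API): the position command is limited so that neither the position bounds nor the velocity bound can be violated within one control period.</p>

```cpp
#include <algorithm>
#include <cassert>

// Illustrative joint limit specification (values are made up).
struct JointLimits
{
  double min_position;
  double max_position;
  double max_velocity;
};

// Clamp an untrusted position command against velocity and position limits.
double enforcePositionLimits(double cmd, double current_pos,
                             double period_sec, const JointLimits& lim)
{
  // Largest step the velocity limit allows in this control cycle.
  const double max_step = lim.max_velocity * period_sec;
  const double limited =
      std::clamp(cmd, current_pos - max_step, current_pos + max_step);
  // Never command a position outside the joint's range.
  return std::clamp(limited, lim.min_position, lim.max_position);
}
```

Calling this in <code class="language-plaintext highlighter-rouge">write()</code>, right before the command reaches the hardware, yields the behavior shown in the curves below: a runaway command is turned into a velocity-limited ramp that stops at the position bound.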
<p>The following example curves, generated from real data, explains the joint limits implementation:</p>
<figure>
<a href="/assets/ros/ros-control/joint_limits.png"><img src="/assets/ros/ros-control/joint_limits.png" /></a>
<figcaption>Joint limits (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>The blue curve shows the command of a controller that we don’t trust. The red line shows the result after enforcing
the joint limits. At the bottom we see that the velocity limit is enforced, and at the top the position limit is enforced as well.
Note that the stop is smooth rather than a hard clamp at the top, because a soft limit specification was used.</p>
<p>The last issue that is also concerned with safety is the emergency stop (E-stop).
The E-stop is typically robot specific, which means you have to take care of its implementation.
There’s also a distinction between an <strong>emergency stop</strong>,
which is usually much harder (you might, for example, remove energy sources), and a <strong>protective stop</strong>, which is meant to reduce risks and control hazards.</p>
<p>Relevant standards:</p>
<ul>
<li>ISO 10218-1:2011 Robots and robotic devices - Safety requirements for industrial robots</li>
<li>ISO 13482:2014 Robots and robotic devices - Safety requirements for personal care robots</li>
</ul>
<p>The issue that remains, whatever you do in case of an emergency stop,
is that you have to recover from it. The question is: what is the sane thing to do when you release the stop?
There are potentially many controllers running that do different things, so there is no one-size-fits-all solution.</p>
<p>One interesting solution is to let the controllers themselves decide what to do in case of an emergency stop.
The controller manager’s update can take an optional flag that resets the controllers.
This means that in a single control cycle you stop all running controllers, then start them again, and finally update them. If controllers adhere to the typical policy explained before (stopping cancels all goals, and starting enters a semantic-zero mode), then the update after the reset yields a well-defined behavior.</p>
<figure>
<a href="/assets/ros/ros-control/e-stop-handling.png"><img src="/assets/ros/ros-control/e-stop-handling.png" /></a>
<figcaption>E-stop handling. (Source: <a href="http://wiki.ros.org/ros_control">ROS.org ros_control</a>).</figcaption>
</figure>
<p>This concludes the part of the control loop.</p>
<h2 id="reference">Reference</h2>
<ul>
<li><a href="https://vimeo.com/107507546">ROSCon 2014 talk</a> and the presented <a href="http://roscon.ros.org/2014/wp-content/uploads/2014/07/ros_control_an_overview.pdf">slides</a></li>
</ul>Franz PucherROS Control combines a set of packages that include controller interfaces, controller managers, transmissions and hardware_interfaces.Occupancy Grid Mapping2019-06-05T17:00:42+00:002019-06-05T17:00:42+00:00https://fjp.at/posts/mapping/occupancy-grid-mapping<p>This post describes how to map an environment with the Occupancy Grid Map algorithm.
The algorithm can map any arbitrary environment by dividing it into a finite number of grid cells.</p>
<h2 id="mapping">Mapping</h2>
<p>In mapping problems the robot pose $x_{1:t}$ is known and the map $m_{t}$ at time $t$, either static or dynamic, is unknown.
Therefore the mapping problem is to find the posterior belief of the map $p(m_t|x_{1:t}, z_{1:t})$ given the robot poses and measurements $z_{1:t}$.
Compared to <a href="/posts/localization/">localization</a>, where the robot pose is estimated in a known map, the goal of mapping is to estimate the map itself.</p>
<h2 id="motivation">Motivation</h2>
<p>Mapping is required in dynamic as well as static environments. In dynamic environments obstacles change and the robot has to adapt to these changes by updating its map.
Because sensor measurements are always noisy, static environments need to be mapped too; mapping corrects for this noise to obtain a high-accuracy map.</p>
<h2 id="challenges">Challenges</h2>
<p>The challenges in mapping stem from the number of state variables. In localization, only the robot’s pose is estimated, with its $x$ and $y$ position in a known environment where a map was previously generated.
A map, on the other hand, lies in a continuous space.
This can lead to infinitely many variables used to describe the map. Additional uncertainty is present through sensor data perception.
Other challenges are the space and its geometries that should be mapped, for example repetitive environments such as walkways with no doors or similar-looking ones.<br />
To map an environment, the robot pose is assumed to be known and the occupancy grid mapping algorithm can be used to solve the problem.
However, the hypothesis space is huge. This is the space of all possible maps that can be formed during mapping. The space is highly dimensional because maps are defined over a continuous space.
To overcome these challenges, the occupancy grid mapping algorithm presents a discrete approximation of the map. But even with this approximation the space of all possible maps remains high dimensional.
Therefore, the challenge is to estimate the full posterior over such high dimensional map spaces.</p>
<p>Information about walls and objects is required to map an environment. This data can be gathered by laser range finders or cameras. With this sensory information a mobile robot can
detect obstacles in the environment. Such data can be combined into a final map.</p>
<p>The following lists the difficulties while mapping an environment:</p>
<ol>
<li>Environment size: Large environments are harder to map because of the amount of data to be processed. In such a case the robot has to collect all the instantaneous poses and obstacles, form
a resulting map and localize the robot with respect to this map.</li>
<li>Perceptual range: If the map is larger than the perceptual range of the robot the mapping problem becomes more challenging.</li>
<li>Noisy sensors: All sensors, such as perception sensors, odometry sensors and actuators, are noisy. During mapping the noise from these sensors has to be filtered to avoid accumulating errors.</li>
<li>Perceptual ambiguity: Repetitive environments or places that look alike result in ambiguity. The robot must then correlate these places, which it travelled to at different points in time.</li>
<li>Cycles: The robot may travel in a cyclic manner, for example going back and forth in a corridor. In such a case the robot’s odometry incrementally accumulates an error, which results in a large error at the end of the cycle.</li>
</ol>
<h2 id="mapping-with-known-poses">Mapping with Known Poses</h2>
<p>The problem of generating a map under the assumption that the robot poses are known and noise free is referred to as mapping with known poses. The problem can be represented as a graph with a node $m$ for the map,
nodes $z_t$ representing the measurements of sensing the environment, and nodes for the poses $x_t$.
The occupancy grid mapping algorithm can estimate the posterior given noisy measurements and known poses. Usually, though, the poses are unknown, which is the case in <a href="/posts/slam/slam">SLAM</a>. Mapping, however, happens after SLAM.
During SLAM the robot builds a map of the environment and localizes itself relative to it. After SLAM, the occupancy grid mapping algorithm uses the exact robot poses filtered from SLAM.
With the known poses from SLAM and noisy measurements, the mapping algorithm generates a map fit for path planning and navigation.</p>
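<p>The heart of occupancy grid mapping is a per-cell log-odds update. The following is a minimal sketch of that update; the inverse sensor model values <code class="language-plaintext highlighter-rouge">l_occ</code> and <code class="language-plaintext highlighter-rouge">l_free</code> are illustrative choices, not values from the post.</p>

```cpp
#include <cmath>

// Illustrative inverse sensor model parameters (log odds).
const double l_occ  =  0.85;  // increment for a cell observed occupied
const double l_free = -0.40;  // decrement for a cell observed free
const double l_0    =  0.0;   // prior log odds, i.e. p(occupied) = 0.5

// Update one grid cell's log odds given whether the measurement hit it.
double updateCell(double l, bool hit)
{
  return l + (hit ? l_occ : l_free) - l_0;
}

// Recover the occupancy probability from the log odds representation.
double probability(double l)
{
  return 1.0 - 1.0 / (1.0 + std::exp(l));
}
```

Working in log odds keeps the update a cheap addition per cell and avoids numerical problems with probabilities near 0 or 1; the probability is only recovered when the map is rendered or exported.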
<h2 id="ros-gmapping">ROS gmapping</h2>
<p>The <a href="http://wiki.ros.org/gmapping">gmapping</a> ROS package uses the Grid-based FastSLAM algorithm. This package contains the single <code class="language-plaintext highlighter-rouge">slam_gmapping</code> node, which subscribes to the <code class="language-plaintext highlighter-rouge">tf</code> and <code class="language-plaintext highlighter-rouge">scan</code> topics.
Using these inputs, it generates a 2D occupancy grid map on the <code class="language-plaintext highlighter-rouge">map</code> topic and publishes the pose estimate's entropy on the <code class="language-plaintext highlighter-rouge">entropy</code> topic. Additional map data is provided through the <code class="language-plaintext highlighter-rouge">map_metadata</code> topic. Another way to access the map is to use the service provided by the node.
To demonstrate gmapping, turtlebot will be deployed in the willow garage environment inside gazebo. Moving the turtlebot around using the <code class="language-plaintext highlighter-rouge">teleop</code> package and running the <code class="language-plaintext highlighter-rouge">slam_gmapping</code> node will generate a map.</p>
<h2 id="links">Links</h2>
<p>The gmapping algorithm can be found <a href="https://openslam-org.github.io/gmapping.html">here</a>.</p>
<h2 id="reference">Reference</h2>
<p>This post is a summary of the lesson on Occupancy Grid Mapping from the <a href="https://eu.udacity.com/course/robotics-software-engineer--nd209">Robotics Nanodegree of Udacity</a>.</p>Franz PucherOccupancy Grid Map algorithm to map an environment.Monte Carlo Localization2019-04-17T17:00:42+00:002019-04-17T17:00:42+00:00https://fjp.at/posts/localization/monte-carlo-localization<p>This post is a summary of the <a href="https://eu.udacity.com/course/robotics-software-engineer--nd209">Udacity Robotics Nanodegree Lab</a> on localization using <a href="https://en.wikipedia.org/wiki/Monte_Carlo_localization">Monte Carlo Localization</a> (MCL).
The Udacity repo can be found <a href="https://github.com/udacity/RoboND-MCL-Lab">here</a></p>
<p>To follow this tutorial, clone the repo to a folder of your choice.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">git clone https://github.com/udacity/RoboND-MCL-Lab </code></pre></figure>
<h2 id="monte-carlo-localization-algorithm">Monte Carlo Localization Algorithm</h2>
<pre id="read-1" style="display:none;">
\begin{algorithm}
\caption{SLAM}
\begin{algorithmic}
\PROCEDURE{SLAM}{$X_{t-1}, u_t, z_t$}
\STATE $\bar{X}_t = X_t = \emptyset$
\FOR{$k = 1$ \TO $M$}
\STATE $x_t^{[k]} = $ \CALL{MotionUpdate}{$u_t, x_{t-1}^{[k]}$}
\STATE $w_t^{[k]} = $ \CALL{SensorUpdate}{$z_t, x_{t}^{[k]}$}
\STATE $m_t^{[k]} = $ \CALL{UpdateOccupancyGrid}{$z_t, x_{t}^{[k]}, m_{t-1}^{[k]}$}
\STATE $\bar{X}_t = \bar{X}_t + \left < x_{t}^{[k]}, w_{t}^{[k]} \right >$
\ENDFOR
\FOR{$k = 1$ \TO $M$}
\STATE draw $i$ with probability $w_t^{[i]}$
\STATE add $\left < x_t^{[i]}, m_t^{[i]} \right >$ \TO $X_t$
\ENDFOR
\RETURN $X_t$
\ENDPROCEDURE
\end{algorithmic}
\end{algorithm}
</pre>
<div class="post-pseudocode" id="goal-1"></div>
<script type="text/javascript">
var code = document.getElementById("read-1").textContent;
var parentEl = document.getElementById("goal-1");
var options = {
lineNumber: true
};
pseudocode.render(code, parentEl, options);
</script>
<h2 id="c-implementation">C++ Implementation</h2>
<p>The following headers are used in the lab; they are mainly from the C++ standard library.
One exception is the third-party plotting library found <a href="https://github.com/lava/matplotlib-cpp">here</a>, which uses Python’s matplotlib as its backend.</p>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="cp">#include "src/matplotlibcpp.h" //Graph Library
#include <iostream>
#include <string>
#include <math.h>
#include <stdexcept> // throw errors
#include <random> //C++ 11 Random Numbers
#include <vector>
</span>
<span class="k">namespace</span> <span class="n">plt</span> <span class="o">=</span> <span class="n">matplotlibcpp</span><span class="p">;</span>
<span class="k">using</span> <span class="k">namespace</span> <span class="n">std</span><span class="p">;</span></code></pre></figure>
<p>Next, some global variables are defined for the fixed landmarks and the world size.
The random generator is initialized and forward declarations of two functions are made, namely
<code class="language-plaintext highlighter-rouge">mod</code> and <code class="language-plaintext highlighter-rouge">gen_real_random</code>.</p>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="c1">// Landmarks</span>
<span class="kt">double</span> <span class="n">landmarks</span><span class="p">[</span><span class="mi">8</span><span class="p">][</span><span class="mi">2</span><span class="p">]</span> <span class="o">=</span> <span class="p">{</span> <span class="p">{</span> <span class="mf">20.0</span><span class="p">,</span> <span class="mf">20.0</span> <span class="p">},</span> <span class="p">{</span> <span class="mf">20.0</span><span class="p">,</span> <span class="mf">80.0</span> <span class="p">},</span> <span class="p">{</span> <span class="mf">20.0</span><span class="p">,</span> <span class="mf">50.0</span> <span class="p">},</span>
<span class="p">{</span> <span class="mf">50.0</span><span class="p">,</span> <span class="mf">20.0</span> <span class="p">},</span> <span class="p">{</span> <span class="mf">50.0</span><span class="p">,</span> <span class="mf">80.0</span> <span class="p">},</span> <span class="p">{</span> <span class="mf">80.0</span><span class="p">,</span> <span class="mf">80.0</span> <span class="p">},</span>
<span class="p">{</span> <span class="mf">80.0</span><span class="p">,</span> <span class="mf">20.0</span> <span class="p">},</span> <span class="p">{</span> <span class="mf">80.0</span><span class="p">,</span> <span class="mf">50.0</span> <span class="p">}</span> <span class="p">};</span>
<span class="c1">// Map size in meters</span>
<span class="kt">double</span> <span class="n">world_size</span> <span class="o">=</span> <span class="mf">100.0</span><span class="p">;</span>
<span class="c1">// Random Generators</span>
<span class="n">random_device</span> <span class="n">rd</span><span class="p">;</span>
<span class="n">mt19937</span> <span class="nf">gen</span><span class="p">(</span><span class="n">rd</span><span class="p">());</span>
<span class="c1">// Global Functions</span>
<span class="kt">double</span> <span class="nf">mod</span><span class="p">(</span><span class="kt">double</span> <span class="n">first_term</span><span class="p">,</span> <span class="kt">double</span> <span class="n">second_term</span><span class="p">);</span>
<span class="kt">double</span> <span class="nf">gen_real_random</span><span class="p">();</span></code></pre></figure>
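<p>The two forward-declared helpers could be implemented as follows. This is a sketch consistent with how they are used in the lab: <code class="language-plaintext highlighter-rouge">mod</code> wraps a value cyclically (handling negatives, for the cyclic world), and <code class="language-plaintext highlighter-rouge">gen_real_random</code> draws uniformly from [0, 1).</p>

```cpp
#include <cmath>
#include <random>

std::random_device rd;
std::mt19937 gen(rd());

// Cyclic modulo that also handles negative values, used to wrap the
// robot's coordinates around the cyclic world of size world_size.
double mod(double first_term, double second_term)
{
  return first_term - second_term * std::floor(first_term / second_term);
}

// Draw a uniform random number in [0, 1).
double gen_real_random()
{
  std::uniform_real_distribution<double> real_dist(0.0, 1.0);
  return real_dist(gen);
}
```

With these helpers, <code class="language-plaintext highlighter-rouge">gen_real_random() * world_size</code> yields a random coordinate inside the world, as used in the robot constructor below.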
<h3 id="robot-base-class">Robot Base Class</h3>
<p>The lab uses a robot class that initializes a robot with a random x and y location and orientation in its constructor.</p>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="n">Robot</span><span class="p">()</span>
<span class="p">{</span>
<span class="c1">// Constructor</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">gen_real_random</span><span class="p">()</span> <span class="o">*</span> <span class="n">world_size</span><span class="p">;</span> <span class="c1">// robot's x coordinate</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">gen_real_random</span><span class="p">()</span> <span class="o">*</span> <span class="n">world_size</span><span class="p">;</span> <span class="c1">// robot's y coordinate</span>
<span class="n">orient</span> <span class="o">=</span> <span class="n">gen_real_random</span><span class="p">()</span> <span class="o">*</span> <span class="mf">2.0</span> <span class="o">*</span> <span class="n">M_PI</span><span class="p">;</span> <span class="c1">// robot's orientation</span>
<span class="n">forward_noise</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">;</span> <span class="c1">//noise of the forward movement</span>
<span class="n">turn_noise</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">;</span> <span class="c1">//noise of the turn</span>
<span class="n">sense_noise</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">;</span> <span class="c1">//noise of the sensing</span>
<span class="p">}</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">void</span> <span class="nf">set</span><span class="p">(</span><span class="kt">double</span> <span class="n">new_x</span><span class="p">,</span> <span class="kt">double</span> <span class="n">new_y</span><span class="p">,</span> <span class="kt">double</span> <span class="n">new_orient</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Set robot new position and orientation</span>
<span class="k">if</span> <span class="p">(</span><span class="n">new_x</span> <span class="o"><</span> <span class="mi">0</span> <span class="o">||</span> <span class="n">new_x</span> <span class="o">>=</span> <span class="n">world_size</span><span class="p">)</span>
<span class="k">throw</span> <span class="n">std</span><span class="o">::</span><span class="n">invalid_argument</span><span class="p">(</span><span class="s">"X coordinate out of bound"</span><span class="p">);</span>
<span class="k">if</span> <span class="p">(</span><span class="n">new_y</span> <span class="o"><</span> <span class="mi">0</span> <span class="o">||</span> <span class="n">new_y</span> <span class="o">>=</span> <span class="n">world_size</span><span class="p">)</span>
<span class="k">throw</span> <span class="n">std</span><span class="o">::</span><span class="n">invalid_argument</span><span class="p">(</span><span class="s">"Y coordinate out of bound"</span><span class="p">);</span>
<span class="k">if</span> <span class="p">(</span><span class="n">new_orient</span> <span class="o"><</span> <span class="mi">0</span> <span class="o">||</span> <span class="n">new_orient</span> <span class="o">>=</span> <span class="mi">2</span> <span class="o">*</span> <span class="n">M_PI</span><span class="p">)</span>
<span class="k">throw</span> <span class="n">std</span><span class="o">::</span><span class="n">invalid_argument</span><span class="p">(</span><span class="s">"Orientation must be in [0..2pi]"</span><span class="p">);</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">new_x</span><span class="p">;</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">new_y</span><span class="p">;</span>
<span class="n">orient</span> <span class="o">=</span> <span class="n">new_orient</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">void</span> <span class="nf">set_noise</span><span class="p">(</span><span class="kt">double</span> <span class="n">new_forward_noise</span><span class="p">,</span> <span class="kt">double</span> <span class="n">new_turn_noise</span><span class="p">,</span> <span class="kt">double</span> <span class="n">new_sense_noise</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Simulate noise, often useful in particle filters</span>
<span class="n">forward_noise</span> <span class="o">=</span> <span class="n">new_forward_noise</span><span class="p">;</span>
<span class="n">turn_noise</span> <span class="o">=</span> <span class="n">new_turn_noise</span><span class="p">;</span>
<span class="n">sense_noise</span> <span class="o">=</span> <span class="n">new_sense_noise</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="n">vector</span><span class="o"><</span><span class="kt">double</span><span class="o">></span> <span class="n">sense</span><span class="p">()</span>
<span class="p">{</span>
<span class="c1">// Measure the distances from the robot toward the landmarks</span>
<span class="n">vector</span><span class="o"><</span><span class="kt">double</span><span class="o">></span> <span class="n">z</span><span class="p">(</span><span class="k">sizeof</span><span class="p">(</span><span class="n">landmarks</span><span class="p">)</span> <span class="o">/</span> <span class="k">sizeof</span><span class="p">(</span><span class="n">landmarks</span><span class="p">[</span><span class="mi">0</span><span class="p">]));</span>
<span class="kt">double</span> <span class="n">dist</span><span class="p">;</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="k">sizeof</span><span class="p">(</span><span class="n">landmarks</span><span class="p">)</span> <span class="o">/</span> <span class="k">sizeof</span><span class="p">(</span><span class="n">landmarks</span><span class="p">[</span><span class="mi">0</span><span class="p">]);</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">dist</span> <span class="o">=</span> <span class="n">sqrt</span><span class="p">(</span><span class="n">pow</span><span class="p">((</span><span class="n">x</span> <span class="o">-</span> <span class="n">landmarks</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">0</span><span class="p">]),</span> <span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">pow</span><span class="p">((</span><span class="n">y</span> <span class="o">-</span> <span class="n">landmarks</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">1</span><span class="p">]),</span> <span class="mi">2</span><span class="p">));</span>
<span class="n">dist</span> <span class="o">+=</span> <span class="n">gen_gauss_random</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">sense_noise</span><span class="p">);</span>
<span class="n">z</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">dist</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">return</span> <span class="n">z</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="n">Robot</span> <span class="nf">move</span><span class="p">(</span><span class="kt">double</span> <span class="n">turn</span><span class="p">,</span> <span class="kt">double</span> <span class="n">forward</span><span class="p">)</span>
<span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="n">forward</span> <span class="o"><</span> <span class="mi">0</span><span class="p">)</span>
<span class="k">throw</span> <span class="n">std</span><span class="o">::</span><span class="n">invalid_argument</span><span class="p">(</span><span class="s">"Robot cannot move backward"</span><span class="p">);</span>
<span class="c1">// turn, and add randomness to the turning command</span>
<span class="n">orient</span> <span class="o">=</span> <span class="n">orient</span> <span class="o">+</span> <span class="n">turn</span> <span class="o">+</span> <span class="n">gen_gauss_random</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">turn_noise</span><span class="p">);</span>
<span class="n">orient</span> <span class="o">=</span> <span class="n">mod</span><span class="p">(</span><span class="n">orient</span><span class="p">,</span> <span class="mi">2</span> <span class="o">*</span> <span class="n">M_PI</span><span class="p">);</span>
<span class="c1">// move, and add randomness to the motion command</span>
<span class="kt">double</span> <span class="n">dist</span> <span class="o">=</span> <span class="n">forward</span> <span class="o">+</span> <span class="n">gen_gauss_random</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">forward_noise</span><span class="p">);</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">x</span> <span class="o">+</span> <span class="p">(</span><span class="n">cos</span><span class="p">(</span><span class="n">orient</span><span class="p">)</span> <span class="o">*</span> <span class="n">dist</span><span class="p">);</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">y</span> <span class="o">+</span> <span class="p">(</span><span class="n">sin</span><span class="p">(</span><span class="n">orient</span><span class="p">)</span> <span class="o">*</span> <span class="n">dist</span><span class="p">);</span>
<span class="c1">// cyclic truncate</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">mod</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">world_size</span><span class="p">);</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">mod</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">world_size</span><span class="p">);</span>
<span class="c1">// set particle</span>
<span class="n">Robot</span> <span class="n">res</span><span class="p">;</span>
<span class="n">res</span><span class="p">.</span><span class="n">set</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">orient</span><span class="p">);</span>
<span class="n">res</span><span class="p">.</span><span class="n">set_noise</span><span class="p">(</span><span class="n">forward_noise</span><span class="p">,</span> <span class="n">turn_noise</span><span class="p">,</span> <span class="n">sense_noise</span><span class="p">);</span>
<span class="k">return</span> <span class="n">res</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="n">string</span> <span class="nf">show_pose</span><span class="p">()</span>
<span class="p">{</span>
<span class="c1">// Returns the robot current position and orientation in a string format</span>
<span class="k">return</span> <span class="s">"[x="</span> <span class="o">+</span> <span class="n">to_string</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="o">+</span> <span class="s">" y="</span> <span class="o">+</span> <span class="n">to_string</span><span class="p">(</span><span class="n">y</span><span class="p">)</span> <span class="o">+</span> <span class="s">" orient="</span> <span class="o">+</span> <span class="n">to_string</span><span class="p">(</span><span class="n">orient</span><span class="p">)</span> <span class="o">+</span> <span class="s">"]"</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="n">string</span> <span class="nf">read_sensors</span><span class="p">()</span>
<span class="p">{</span>
<span class="c1">// Returns all the distances from the robot toward the landmarks</span>
<span class="n">vector</span><span class="o"><</span><span class="kt">double</span><span class="o">></span> <span class="n">z</span> <span class="o">=</span> <span class="n">sense</span><span class="p">();</span>
<span class="n">string</span> <span class="n">readings</span> <span class="o">=</span> <span class="s">"["</span><span class="p">;</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">z</span><span class="p">.</span><span class="n">size</span><span class="p">();</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">readings</span> <span class="o">+=</span> <span class="n">to_string</span><span class="p">(</span><span class="n">z</span><span class="p">[</span><span class="n">i</span><span class="p">])</span> <span class="o">+</span> <span class="s">" "</span><span class="p">;</span>
<span class="p">}</span>
<span class="n">readings</span><span class="p">[</span><span class="n">readings</span><span class="p">.</span><span class="n">size</span><span class="p">()</span> <span class="o">-</span> <span class="mi">1</span><span class="p">]</span> <span class="o">=</span> <span class="sc">']'</span><span class="p">;</span>
<span class="k">return</span> <span class="n">readings</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">double</span> <span class="nf">measurement_prob</span><span class="p">(</span><span class="n">vector</span><span class="o"><</span><span class="kt">double</span><span class="o">></span> <span class="n">measurement</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Calculates how likely a measurement should be</span>
<span class="kt">double</span> <span class="n">prob</span> <span class="o">=</span> <span class="mf">1.0</span><span class="p">;</span>
<span class="kt">double</span> <span class="n">dist</span><span class="p">;</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="k">sizeof</span><span class="p">(</span><span class="n">landmarks</span><span class="p">)</span> <span class="o">/</span> <span class="k">sizeof</span><span class="p">(</span><span class="n">landmarks</span><span class="p">[</span><span class="mi">0</span><span class="p">]);</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">dist</span> <span class="o">=</span> <span class="n">sqrt</span><span class="p">(</span><span class="n">pow</span><span class="p">((</span><span class="n">x</span> <span class="o">-</span> <span class="n">landmarks</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">0</span><span class="p">]),</span> <span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">pow</span><span class="p">((</span><span class="n">y</span> <span class="o">-</span> <span class="n">landmarks</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">1</span><span class="p">]),</span> <span class="mi">2</span><span class="p">));</span>
<span class="n">prob</span> <span class="o">*=</span> <span class="n">gaussian</span><span class="p">(</span><span class="n">dist</span><span class="p">,</span> <span class="n">sense_noise</span><span class="p">,</span> <span class="n">measurement</span><span class="p">[</span><span class="n">i</span><span class="p">]);</span>
<span class="p">}</span>
<span class="k">return</span> <span class="n">prob</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<p>The class has the following public member variables:</p>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">double</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">orient</span><span class="p">;</span> <span class="c1">//robot poses</span>
<span class="kt">double</span> <span class="n">forward_noise</span><span class="p">,</span> <span class="n">turn_noise</span><span class="p">,</span> <span class="n">sense_noise</span><span class="p">;</span> <span class="c1">//robot noises</span></code></pre></figure>
<p>It uses the following private methods:</p>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">double</span> <span class="nf">gen_gauss_random</span><span class="p">(</span><span class="kt">double</span> <span class="n">mean</span><span class="p">,</span> <span class="kt">double</span> <span class="n">variance</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Gaussian random</span>
<span class="n">normal_distribution</span><span class="o"><</span><span class="kt">double</span><span class="o">></span> <span class="n">gauss_dist</span><span class="p">(</span><span class="n">mean</span><span class="p">,</span> <span class="n">variance</span><span class="p">);</span>
<span class="k">return</span> <span class="n">gauss_dist</span><span class="p">(</span><span class="n">gen</span><span class="p">);</span>
<span class="p">}</span></code></pre></figure>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">double</span> <span class="nf">gaussian</span><span class="p">(</span><span class="kt">double</span> <span class="n">mu</span><span class="p">,</span> <span class="kt">double</span> <span class="n">sigma</span><span class="p">,</span> <span class="kt">double</span> <span class="n">x</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Probability of x for 1-dim Gaussian with mean mu and var. sigma</span>
<span class="k">return</span> <span class="n">exp</span><span class="p">(</span><span class="o">-</span><span class="p">(</span><span class="n">pow</span><span class="p">((</span><span class="n">mu</span> <span class="o">-</span> <span class="n">x</span><span class="p">),</span> <span class="mi">2</span><span class="p">))</span> <span class="o">/</span> <span class="p">(</span><span class="n">pow</span><span class="p">(</span><span class="n">sigma</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span> <span class="o">/</span> <span class="mf">2.0</span><span class="p">)</span> <span class="o">/</span> <span class="n">sqrt</span><span class="p">(</span><span class="mf">2.0</span> <span class="o">*</span> <span class="n">M_PI</span> <span class="o">*</span> <span class="p">(</span><span class="n">pow</span><span class="p">(</span><span class="n">sigma</span><span class="p">,</span> <span class="mi">2</span><span class="p">)));</span>
<span class="p">}</span></code></pre></figure>
<h3 id="global-functions">Global functions</h3>
<p>Other useful global functions:</p>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="c1">// Functions</span>
<span class="kt">double</span> <span class="nf">gen_real_random</span><span class="p">()</span>
<span class="p">{</span>
<span class="c1">// Generate real random between 0 and 1</span>
<span class="n">uniform_real_distribution</span><span class="o"><</span><span class="kt">double</span><span class="o">></span> <span class="n">real_dist</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">);</span> <span class="c1">//Real</span>
<span class="k">return</span> <span class="n">real_dist</span><span class="p">(</span><span class="n">gen</span><span class="p">);</span>
<span class="p">}</span>
<span class="kt">double</span> <span class="nf">mod</span><span class="p">(</span><span class="kt">double</span> <span class="n">first_term</span><span class="p">,</span> <span class="kt">double</span> <span class="n">second_term</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Compute the modulus</span>
<span class="k">return</span> <span class="n">first_term</span> <span class="o">-</span> <span class="p">(</span><span class="n">second_term</span><span class="p">)</span><span class="o">*</span><span class="n">floor</span><span class="p">(</span><span class="n">first_term</span> <span class="o">/</span> <span class="p">(</span><span class="n">second_term</span><span class="p">));</span>
<span class="p">}</span>
<span class="kt">double</span> <span class="nf">evaluation</span><span class="p">(</span><span class="n">Robot</span> <span class="n">r</span><span class="p">,</span> <span class="n">Robot</span> <span class="n">p</span><span class="p">[],</span> <span class="kt">int</span> <span class="n">n</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">//Calculate the mean error of the system</span>
<span class="kt">double</span> <span class="n">sum</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">;</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="c1">//the second part is because of world's cyclicity</span>
<span class="kt">double</span> <span class="n">dx</span> <span class="o">=</span> <span class="n">mod</span><span class="p">((</span><span class="n">p</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">x</span> <span class="o">-</span> <span class="n">r</span><span class="p">.</span><span class="n">x</span> <span class="o">+</span> <span class="p">(</span><span class="n">world_size</span> <span class="o">/</span> <span class="mf">2.0</span><span class="p">)),</span> <span class="n">world_size</span><span class="p">)</span> <span class="o">-</span> <span class="p">(</span><span class="n">world_size</span> <span class="o">/</span> <span class="mf">2.0</span><span class="p">);</span>
<span class="kt">double</span> <span class="n">dy</span> <span class="o">=</span> <span class="n">mod</span><span class="p">((</span><span class="n">p</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">y</span> <span class="o">-</span> <span class="n">r</span><span class="p">.</span><span class="n">y</span> <span class="o">+</span> <span class="p">(</span><span class="n">world_size</span> <span class="o">/</span> <span class="mf">2.0</span><span class="p">)),</span> <span class="n">world_size</span><span class="p">)</span> <span class="o">-</span> <span class="p">(</span><span class="n">world_size</span> <span class="o">/</span> <span class="mf">2.0</span><span class="p">);</span>
<span class="kt">double</span> <span class="n">err</span> <span class="o">=</span> <span class="n">sqrt</span><span class="p">(</span><span class="n">pow</span><span class="p">(</span><span class="n">dx</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span> <span class="o">+</span> <span class="n">pow</span><span class="p">(</span><span class="n">dy</span><span class="p">,</span> <span class="mi">2</span><span class="p">));</span>
<span class="n">sum</span> <span class="o">+=</span> <span class="n">err</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">return</span> <span class="n">sum</span> <span class="o">/</span> <span class="n">n</span><span class="p">;</span>
<span class="p">}</span>
<span class="kt">double</span> <span class="nf">max</span><span class="p">(</span><span class="kt">double</span> <span class="n">arr</span><span class="p">[],</span> <span class="kt">int</span> <span class="n">n</span><span class="p">)</span>
<span class="p">{</span>
<span class="c1">// Identify the max element in an array</span>
<span class="kt">double</span> <span class="n">max</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="n">arr</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">></span> <span class="n">max</span><span class="p">)</span>
<span class="n">max</span> <span class="o">=</span> <span class="n">arr</span><span class="p">[</span><span class="n">i</span><span class="p">];</span>
<span class="p">}</span>
<span class="k">return</span> <span class="n">max</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<h3 id="visualization">Visualization</h3>
<p>For visualization, matplotlib is used as the plotting backend, accessed from C++ through a wrapper exposing the <code>plt::</code> calls below.</p>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">void</span> <span class="nf">visualization</span><span class="p">(</span><span class="kt">int</span> <span class="n">n</span><span class="p">,</span> <span class="n">Robot</span> <span class="n">robot</span><span class="p">,</span> <span class="kt">int</span> <span class="n">step</span><span class="p">,</span> <span class="n">Robot</span> <span class="n">p</span><span class="p">[],</span> <span class="n">Robot</span> <span class="n">pr</span><span class="p">[])</span>
<span class="p">{</span>
<span class="c1">//Draw the robot, landmarks, particles and resampled particles on a graph</span>
<span class="c1">//Graph Format</span>
<span class="n">plt</span><span class="o">::</span><span class="n">title</span><span class="p">(</span><span class="s">"MCL, step "</span> <span class="o">+</span> <span class="n">to_string</span><span class="p">(</span><span class="n">step</span><span class="p">));</span>
<span class="n">plt</span><span class="o">::</span><span class="n">xlim</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">100</span><span class="p">);</span>
<span class="n">plt</span><span class="o">::</span><span class="n">ylim</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">100</span><span class="p">);</span>
<span class="c1">//Draw particles in green</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">plt</span><span class="o">::</span><span class="n">plot</span><span class="p">({</span> <span class="n">p</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">x</span> <span class="p">},</span> <span class="p">{</span> <span class="n">p</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">y</span> <span class="p">},</span> <span class="s">"go"</span><span class="p">);</span>
<span class="p">}</span>
<span class="c1">//Draw resampled particles in yellow</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">plt</span><span class="o">::</span><span class="n">plot</span><span class="p">({</span> <span class="n">pr</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">x</span> <span class="p">},</span> <span class="p">{</span> <span class="n">pr</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">y</span> <span class="p">},</span> <span class="s">"yo"</span><span class="p">);</span>
<span class="p">}</span>
<span class="c1">//Draw landmarks in red</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="k">sizeof</span><span class="p">(</span><span class="n">landmarks</span><span class="p">)</span> <span class="o">/</span> <span class="k">sizeof</span><span class="p">(</span><span class="n">landmarks</span><span class="p">[</span><span class="mi">0</span><span class="p">]);</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">plt</span><span class="o">::</span><span class="n">plot</span><span class="p">({</span> <span class="n">landmarks</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span> <span class="p">},</span> <span class="p">{</span> <span class="n">landmarks</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="mi">1</span><span class="p">]</span> <span class="p">},</span> <span class="s">"ro"</span><span class="p">);</span>
<span class="p">}</span>
<span class="c1">//Draw robot position in blue</span>
<span class="n">plt</span><span class="o">::</span><span class="n">plot</span><span class="p">({</span> <span class="n">robot</span><span class="p">.</span><span class="n">x</span> <span class="p">},</span> <span class="p">{</span> <span class="n">robot</span><span class="p">.</span><span class="n">y</span> <span class="p">},</span> <span class="s">"bo"</span><span class="p">);</span>
<span class="c1">//Save the image and close the plot</span>
<span class="n">plt</span><span class="o">::</span><span class="n">save</span><span class="p">(</span><span class="s">"./Images/Step"</span> <span class="o">+</span> <span class="n">to_string</span><span class="p">(</span><span class="n">step</span><span class="p">)</span> <span class="o">+</span> <span class="s">".png"</span><span class="p">);</span>
<span class="n">plt</span><span class="o">::</span><span class="n">clf</span><span class="p">();</span>
<span class="p">}</span></code></pre></figure>
<h3 id="main">Main</h3>
<figure class="highlight"><pre><code class="language-cpp" data-lang="cpp"><span class="kt">int</span> <span class="nf">main</span><span class="p">()</span>
<span class="p">{</span>
<span class="c1">//Practice Interfacing with Robot Class</span>
<span class="n">Robot</span> <span class="n">myrobot</span><span class="p">;</span>
<span class="n">myrobot</span><span class="p">.</span><span class="n">set_noise</span><span class="p">(</span><span class="mf">5.0</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="mf">5.0</span><span class="p">);</span>
<span class="n">myrobot</span><span class="p">.</span><span class="n">set</span><span class="p">(</span><span class="mf">30.0</span><span class="p">,</span> <span class="mf">50.0</span><span class="p">,</span> <span class="n">M_PI</span> <span class="o">/</span> <span class="mf">2.0</span><span class="p">);</span>
<span class="n">myrobot</span><span class="p">.</span><span class="n">move</span><span class="p">(</span><span class="o">-</span><span class="n">M_PI</span> <span class="o">/</span> <span class="mf">2.0</span><span class="p">,</span> <span class="mf">15.0</span><span class="p">);</span>
<span class="c1">//cout << myrobot.read_sensors() << endl;</span>
<span class="n">myrobot</span><span class="p">.</span><span class="n">move</span><span class="p">(</span><span class="o">-</span><span class="n">M_PI</span> <span class="o">/</span> <span class="mf">2.0</span><span class="p">,</span> <span class="mf">10.0</span><span class="p">);</span>
<span class="c1">//cout << myrobot.read_sensors() << endl;</span>
<span class="c1">// Create a set of particles</span>
<span class="kt">int</span> <span class="n">n</span> <span class="o">=</span> <span class="mi">1000</span><span class="p">;</span>
<span class="n">Robot</span> <span class="n">p</span><span class="p">[</span><span class="n">n</span><span class="p">];</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">p</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">set_noise</span><span class="p">(</span><span class="mf">0.05</span><span class="p">,</span> <span class="mf">0.05</span><span class="p">,</span> <span class="mf">5.0</span><span class="p">);</span>
<span class="c1">//cout << p[i].show_pose() << endl;</span>
<span class="p">}</span>
<span class="c1">//Re-initialize myrobot object and Initialize a measurment vector</span>
<span class="n">myrobot</span> <span class="o">=</span> <span class="n">Robot</span><span class="p">();</span>
<span class="n">vector</span><span class="o"><</span><span class="kt">double</span><span class="o">></span> <span class="n">z</span><span class="p">;</span>
<span class="c1">//Iterating 50 times over the set of particles</span>
<span class="kt">int</span> <span class="n">steps</span> <span class="o">=</span> <span class="mi">50</span><span class="p">;</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">t</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">t</span> <span class="o"><</span> <span class="n">steps</span><span class="p">;</span> <span class="n">t</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="c1">//Move the robot and sense the environment afterwards</span>
<span class="n">myrobot</span> <span class="o">=</span> <span class="n">myrobot</span><span class="p">.</span><span class="n">move</span><span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="mf">5.0</span><span class="p">);</span>
<span class="n">z</span> <span class="o">=</span> <span class="n">myrobot</span><span class="p">.</span><span class="n">sense</span><span class="p">();</span>
<span class="c1">// Simulate a robot motion for each of these particles</span>
<span class="n">Robot</span> <span class="n">p2</span><span class="p">[</span><span class="n">n</span><span class="p">];</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">p2</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">p</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">move</span><span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="mf">5.0</span><span class="p">);</span>
<span class="n">p</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">p2</span><span class="p">[</span><span class="n">i</span><span class="p">];</span>
<span class="p">}</span>
<span class="c1">//Generate particle weights depending on robot's measurement</span>
<span class="kt">double</span> <span class="n">w</span><span class="p">[</span><span class="n">n</span><span class="p">];</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">w</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">p</span><span class="p">[</span><span class="n">i</span><span class="p">].</span><span class="n">measurement_prob</span><span class="p">(</span><span class="n">z</span><span class="p">);</span>
<span class="c1">//cout << w[i] << endl;</span>
<span class="p">}</span>
<span class="c1">//Resample the particles with a sample probability proportional to the importance weight</span>
<span class="n">Robot</span> <span class="n">p3</span><span class="p">[</span><span class="n">n</span><span class="p">];</span>
<span class="kt">int</span> <span class="n">index</span> <span class="o">=</span> <span class="n">gen_real_random</span><span class="p">()</span> <span class="o">*</span> <span class="n">n</span><span class="p">;</span>
<span class="c1">//cout << index << endl;</span>
<span class="kt">double</span> <span class="n">beta</span> <span class="o">=</span> <span class="mf">0.0</span><span class="p">;</span>
<span class="kt">double</span> <span class="n">mw</span> <span class="o">=</span> <span class="n">max</span><span class="p">(</span><span class="n">w</span><span class="p">,</span> <span class="n">n</span><span class="p">);</span>
<span class="c1">//cout << mw;</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">i</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">beta</span> <span class="o">+=</span> <span class="n">gen_real_random</span><span class="p">()</span> <span class="o">*</span> <span class="mf">2.0</span> <span class="o">*</span> <span class="n">mw</span><span class="p">;</span>
<span class="k">while</span> <span class="p">(</span><span class="n">beta</span> <span class="o">></span> <span class="n">w</span><span class="p">[</span><span class="n">index</span><span class="p">])</span> <span class="p">{</span>
<span class="n">beta</span> <span class="o">-=</span> <span class="n">w</span><span class="p">[</span><span class="n">index</span><span class="p">];</span>
<span class="n">index</span> <span class="o">=</span> <span class="n">mod</span><span class="p">((</span><span class="n">index</span> <span class="o">+</span> <span class="mi">1</span><span class="p">),</span> <span class="n">n</span><span class="p">);</span>
<span class="p">}</span>
<span class="n">p3</span><span class="p">[</span><span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="n">p</span><span class="p">[</span><span class="n">index</span><span class="p">];</span>
<span class="p">}</span>
<span class="k">for</span> <span class="p">(</span><span class="kt">int</span> <span class="n">k</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="n">k</span> <span class="o"><</span> <span class="n">n</span><span class="p">;</span> <span class="n">k</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="n">p</span><span class="p">[</span><span class="n">k</span><span class="p">]</span> <span class="o">=</span> <span class="n">p3</span><span class="p">[</span><span class="n">k</span><span class="p">];</span>
<span class="c1">//cout << p[k].show_pose() << endl;</span>
<span class="p">}</span>
<span class="c1">//Evaluate the Error</span>
<span class="n">cout</span> <span class="o"><<</span> <span class="s">"Step = "</span> <span class="o"><<</span> <span class="n">t</span> <span class="o"><<</span> <span class="s">", Evaluation = "</span> <span class="o"><<</span> <span class="n">evaluation</span><span class="p">(</span><span class="n">myrobot</span><span class="p">,</span> <span class="n">p</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o"><<</span> <span class="n">endl</span><span class="p">;</span>
<span class="c1">//#### DON'T MODIFY ANYTHING ABOVE HERE! ENTER CODE BELOW ####</span>
<span class="c1">//Graph the position of the robot and the particles at each step</span>
<span class="n">visualization</span><span class="p">(</span><span class="n">n</span><span class="p">,</span> <span class="n">myrobot</span><span class="p">,</span> <span class="n">t</span><span class="p">,</span> <span class="n">p2</span><span class="p">,</span> <span class="n">p3</span><span class="p">);</span>
<span class="p">}</span> <span class="c1">//End of Steps loop</span>
<span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
<span class="p">}</span></code></pre></figure>
<h2 id="compile-and-run">Compile and Run</h2>
<p>Compile with</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">g++ solution.cpp <span class="nt">-o</span> app <span class="nt">-std</span><span class="o">=</span>c++11 <span class="nt">-I</span>/usr/include/python2.7 <span class="nt">-lpython2</span>.7</code></pre></figure>
<p>And finally run the program with</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">./app</code></pre></figure>
<p>This will output:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">Step <span class="o">=</span> 0, Evaluation <span class="o">=</span> 4.36165
Step <span class="o">=</span> 1, Evaluation <span class="o">=</span> 4.13259
Step <span class="o">=</span> 2, Evaluation <span class="o">=</span> 3.42951
Step <span class="o">=</span> 3, Evaluation <span class="o">=</span> 3.2404
Step <span class="o">=</span> 4, Evaluation <span class="o">=</span> 2.7659
Step <span class="o">=</span> 5, Evaluation <span class="o">=</span> 2.48962
Step <span class="o">=</span> 6, Evaluation <span class="o">=</span> 2.31978
Step <span class="o">=</span> 7, Evaluation <span class="o">=</span> 2.24096
Step <span class="o">=</span> 8, Evaluation <span class="o">=</span> 2.2645
Step <span class="o">=</span> 9, Evaluation <span class="o">=</span> 2.16855
Step <span class="o">=</span> 10, Evaluation <span class="o">=</span> 2.0289
Step <span class="o">=</span> 11, Evaluation <span class="o">=</span> 1.90762
Step <span class="o">=</span> 12, Evaluation <span class="o">=</span> 1.90886
Step <span class="o">=</span> 13, Evaluation <span class="o">=</span> 1.86255
Step <span class="o">=</span> 14, Evaluation <span class="o">=</span> 1.80935
Step <span class="o">=</span> 15, Evaluation <span class="o">=</span> 1.75033
Step <span class="o">=</span> 16, Evaluation <span class="o">=</span> 1.73623
Step <span class="o">=</span> 17, Evaluation <span class="o">=</span> 1.66427
Step <span class="o">=</span> 18, Evaluation <span class="o">=</span> 1.65443
Step <span class="o">=</span> 19, Evaluation <span class="o">=</span> 1.68175
Step <span class="o">=</span> 20, Evaluation <span class="o">=</span> 1.62883
Step <span class="o">=</span> 21, Evaluation <span class="o">=</span> 1.61669
Step <span class="o">=</span> 22, Evaluation <span class="o">=</span> 1.60328
Step <span class="o">=</span> 23, Evaluation <span class="o">=</span> 1.55554
Step <span class="o">=</span> 24, Evaluation <span class="o">=</span> 1.54531
Step <span class="o">=</span> 25, Evaluation <span class="o">=</span> 1.48853
Step <span class="o">=</span> 26, Evaluation <span class="o">=</span> 1.52531
Step <span class="o">=</span> 27, Evaluation <span class="o">=</span> 1.54713
Step <span class="o">=</span> 28, Evaluation <span class="o">=</span> 1.57839
Step <span class="o">=</span> 29, Evaluation <span class="o">=</span> 1.59364
Step <span class="o">=</span> 30, Evaluation <span class="o">=</span> 1.65056
Step <span class="o">=</span> 31, Evaluation <span class="o">=</span> 1.6718
Step <span class="o">=</span> 32, Evaluation <span class="o">=</span> 1.67659
Step <span class="o">=</span> 33, Evaluation <span class="o">=</span> 1.61774
Step <span class="o">=</span> 34, Evaluation <span class="o">=</span> 1.57891
Step <span class="o">=</span> 35, Evaluation <span class="o">=</span> 1.50999
Step <span class="o">=</span> 36, Evaluation <span class="o">=</span> 1.40922
Step <span class="o">=</span> 37, Evaluation <span class="o">=</span> 1.40538
Step <span class="o">=</span> 38, Evaluation <span class="o">=</span> 1.41737
Step <span class="o">=</span> 39, Evaluation <span class="o">=</span> 1.39369
Step <span class="o">=</span> 40, Evaluation <span class="o">=</span> 1.38676
Step <span class="o">=</span> 41, Evaluation <span class="o">=</span> 1.43119
Step <span class="o">=</span> 42, Evaluation <span class="o">=</span> 1.39935
Step <span class="o">=</span> 43, Evaluation <span class="o">=</span> 1.37321
Step <span class="o">=</span> 44, Evaluation <span class="o">=</span> 1.4212
Step <span class="o">=</span> 45, Evaluation <span class="o">=</span> 1.55304
Step <span class="o">=</span> 46, Evaluation <span class="o">=</span> 1.75291
Step <span class="o">=</span> 47, Evaluation <span class="o">=</span> 1.93479
Step <span class="o">=</span> 48, Evaluation <span class="o">=</span> 1.94307
Step <span class="o">=</span> 49, Evaluation <span class="o">=</span> 1.25727</code></pre></figure>
<h2 id="results">Results</h2>
<figure class="third ">
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step0.png" title="Iteration 0">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step0.png" alt="Iteration 0" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step1.png" title="Iteration 1">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step1.png" alt="Iteration 1" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step2.png" title="Iteration 2">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step2.png" alt="Iteration 2" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step3.png" title="Iteration 3">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step3.png" alt="Iteration 3" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step4.png" title="Iteration 4">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step4.png" alt="Iteration 4" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step5.png" title="Iteration 5">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step5.png" alt="Iteration 5" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step6.png" title="Iteration 6">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step6.png" alt="Iteration 6" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step7.png" title="Iteration 7">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step7.png" alt="Iteration 7" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step8.png" title="Iteration 8">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step8.png" alt="Iteration 8" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step9.png" title="Iteration 9">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step9.png" alt="Iteration 9" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step10.png" title="Iteration 10">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step10.png" alt="Iteration 10" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step11.png" title="Iteration 11">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step11.png" alt="Iteration 11" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step12.png" title="Iteration 12">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step12.png" alt="Iteration 12" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step13.png" title="Iteration 13">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step13.png" alt="Iteration 13" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step14.png" title="Iteration 14">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step14.png" alt="Iteration 14" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step15.png" title="Iteration 15">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step15.png" alt="Iteration 15" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step16.png" title="Iteration 16">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step16.png" alt="Iteration 16" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step17.png" title="Iteration 17">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step17.png" alt="Iteration 17" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step18.png" title="Iteration 18">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step18.png" alt="Iteration 18" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step19.png" title="Iteration 19">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step19.png" alt="Iteration 19" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step20.png" title="Iteration 20">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step20.png" alt="Iteration 20" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step21.png" title="Iteration 21">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step21.png" alt="Iteration 21" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step22.png" title="Iteration 22">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step22.png" alt="Iteration 22" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step23.png" title="Iteration 23">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step23.png" alt="Iteration 23" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step24.png" title="Iteration 24">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step24.png" alt="Iteration 24" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step25.png" title="Iteration 25">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step25.png" alt="Iteration 25" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step26.png" title="Iteration 26">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step26.png" alt="Iteration 26" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step27.png" title="Iteration 27">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step27.png" alt="Iteration 27" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step28.png" title="Iteration 28">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step28.png" alt="Iteration 28" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step29.png" title="Iteration 29">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step29.png" alt="Iteration 29" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step30.png" title="Iteration 30">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step30.png" alt="Iteration 30" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step31.png" title="Iteration 31">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step31.png" alt="Iteration 31" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step32.png" title="Iteration 32">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step32.png" alt="Iteration 32" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step33.png" title="Iteration 33">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step33.png" alt="Iteration 33" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step34.png" title="Iteration 34">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step34.png" alt="Iteration 34" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step35.png" title="Iteration 35">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step35.png" alt="Iteration 35" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step36.png" title="Iteration 36">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step36.png" alt="Iteration 36" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step37.png" title="Iteration 37">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step37.png" alt="Iteration 37" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step38.png" title="Iteration 38">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step38.png" alt="Iteration 38" />
</a>
<a href="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step39.png" title="Iteration 39">
<img src="/assets/posts/2019-04-17-monte-carlo-localization/Images/Step39.png" alt="Iteration 39" />
</a>
<figcaption>Iterations of the Monte Carlo Localization Algorithm.
</figcaption>
</figure>
<h2 id="links">Links</h2>
<p>Further details about MCL are found in the <a href="http://robots.stanford.edu/papers/thrun.robust-mcl.pdf">paper</a> of Sebastian Thrun et al.</p>
<h2 id="reference">Reference</h2>
<p>This post is a summary of the MCLLab from the <a href="https://eu.udacity.com/course/robotics-software-engineer--nd209">Robotics Nanodegree of Udacity</a></p>Franz PucherMonte Carlo Localization Algorithm and C++ ImplementationSLAM using Grid-based FastSLAM2019-04-17T17:00:42+00:002019-04-17T17:00:42+00:00https://fjp.at/posts/slam/gmapping<p>The following sections summarize the Grid-based FastSLAM algorithm, which is one instance of FastSLAM. This algorithm estimates the trajectory of a mobile robot while simultaneously creating a grid map of the environment.
Grid-based FastSLAM is a combination of a particle filter such as Adaptive <a href="/posts/localization/mcl/">Monte Carlo Localization</a> (amcl) and a mapping algorithm such as occupancy grid mapping.</p>
<h2 id="slam-fundamentals">SLAM Fundamentals</h2>
<p>SLAM stands for Simultaneous Localization and Mapping sometimes refered to as Concurrent Localization and Mappping (CLAM).
The SLAM algorithm combines localization and mapping, where a robot has access only to its own movement and sensory data.
The robot must build a map while simultaneously localizing itself relative to the map.</p>
<p>The map and the robot pose will be uncertain, and the errors in the robot’s pose estimate and map will be correlated. The accuracy of the map depends on the accuracy of the localization and vice versa.
This is a chicken-and-egg problem: the map is needed for localization, and the robot’s pose is needed for mapping. This makes SLAM a real challenge but essential for mobile robotics.<br />
Mobile robots must be able to move in environments they have never seen before. One example is a vacuum cleaner, where the map can also change due to moving furniture. Likewise, self-driving vehicles require SLAM to update their maps while localizing themselves in them.</p>
<p>There exist generally five categories of SLAM algorithms:</p>
<ol>
<li>Extended Kalman Filter SLAM (EKF)</li>
<li>Sparse Extended Information Filter (SEIF)</li>
<li>Extended Information Form (EIF)</li>
<li>FastSLAM</li>
<li>GraphSLAM</li>
</ol>
<p>This post describes the FastSLAM approach, which uses a particle filter and a low-dimensional Extended Kalman filter. This algorithm will be adapted to grid maps, which results in Grid-based FastSLAM.
GraphSLAM, on the other hand, uses constraints to represent relationships between robot poses and the environment. With this, the algorithm tries to resolve all the constraints
to create the most likely map given the data. An implementation of GraphSLAM is called Real-Time Appearance-Based Mapping (RTAB-Map).</p>
<h3 id="localization">Localization</h3>
<p>In localization problems a map is known beforehand and the robot pose is estimated using its sensor measurements $z_{1:t}$, control inputs $u_{1:t}$ and its previous poses $x_{1:t-1}$.
With this data, the new belief $p(x_{1:t}|x_{1:t-1}, z_{1:t}, u_{1:t})$ can be computed as a probability distribution.</p>
<p>The localization estimation can be done with an Extended Kalman filter or Monte Carlo localization.
With the Monte Carlo particle filter approach (MCL), each particle consists of the robot pose $(x, y, \theta)$ and its importance weight $w$. With motion and sensor updates, followed by resampling, it is possible to estimate the robot’s pose.</p>
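<p>To make the particle representation concrete, the following is a minimal sketch of an MCL particle and its importance weight, assuming a landmark-based range sensor with Gaussian noise. The landmark model and the noise parameter <code class="language-plaintext highlighter-rouge">sigma</code> are illustrative assumptions, not part of the lab code below.</p>

```cpp
#include <cmath>
#include <utility>
#include <vector>

const double kPi = 3.14159265358979323846;

// Minimal MCL particle: a pose (x, y, theta) plus an importance weight.
struct Particle {
    double x = 0.0, y = 0.0, theta = 0.0;
    double weight = 1.0;
};

// Gaussian likelihood of a measured range given the expected range.
double gaussian_likelihood(double expected, double measured, double sigma) {
    double d = measured - expected;
    return std::exp(-d * d / (2.0 * sigma * sigma)) / std::sqrt(2.0 * kPi * sigma * sigma);
}

// Importance weight of a particle: product of the likelihoods of all
// range measurements to known landmarks, evaluated at the particle's pose.
double importance_weight(const Particle& p,
                         const std::vector<std::pair<double, double>>& landmarks,
                         const std::vector<double>& ranges, double sigma) {
    double w = 1.0;
    for (std::size_t i = 0; i < landmarks.size(); ++i) {
        double dx = landmarks[i].first - p.x;
        double dy = landmarks[i].second - p.y;
        double expected = std::sqrt(dx * dx + dy * dy);
        w *= gaussian_likelihood(expected, ranges[i], sigma);
    }
    return w;
}
```

<p>Particles whose predicted ranges match the actual measurements receive larger weights and therefore survive resampling more often.</p>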
<h3 id="mapping">Mapping</h3>
<p>In mapping problems the robot pose $x_{1:t}$ is known and the map $m_{t}$ at time $t$, either static or dynamic is unknown.
Therefore the mapping problem is to find the posterior belief of the map $p(m_t|x_{1:t}, z_{1:t})$ given the robot poses and its measurements $z_{1:t}$.</p>
<p>The main challenge in mapping is the number of state variables. In localization, only the robot’s pose is estimated with its $x$ and $y$ position. A map, on the other hand, lies in a continuous space.
This can lead to infinitely many variables used to describe the map. Additional uncertainty is present through sensor data perception.
Other challenges are the space and its geometries that should be mapped, for example repetitive environments such as walkways with no doors, or similar-looking ones.</p>
<p>The mapping algorithm that is described in this post is occupancy grid mapping. The algorithm can map any arbitrary environment by dividing it into a finite number of grid cells.</p>
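<p>A common way to implement the occupancy grid is to store the log odds of occupancy per cell and update them with an inverse sensor model. The sketch below illustrates this idea; the update constants <code class="language-plaintext highlighter-rouge">l_occ</code> and <code class="language-plaintext highlighter-rouge">l_free</code> are assumed example values, not taken from a specific implementation.</p>

```cpp
#include <cmath>
#include <vector>

// Minimal log-odds occupancy grid: each cell stores the log odds of being
// occupied; a value of 0 means unknown (probability 0.5).
class OccupancyGrid {
public:
    OccupancyGrid(int width, int height)
        : width_(width), log_odds_(width * height, 0.0) {}

    // Inverse sensor model update: add l_occ for a cell observed as occupied,
    // l_free for a cell observed as free. Constants are assumed example values.
    void update(int x, int y, bool hit) {
        const double l_occ = 0.85, l_free = -0.4;
        log_odds_[y * width_ + x] += hit ? l_occ : l_free;
    }

    // Recover the occupancy probability from the log odds l:
    // p = 1 - 1 / (1 + exp(l)).
    double probability(int x, int y) const {
        double l = log_odds_[y * width_ + x];
        return 1.0 - 1.0 / (1.0 + std::exp(l));
    }

private:
    int width_;
    std::vector<double> log_odds_;
};
```

<p>Storing log odds turns the Bayesian update into a simple addition per cell, and unobserved cells keep probability 0.5 until evidence arrives.</p>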
<h3 id="slam-characteristics">SLAM Characteristics</h3>
<p>SLAM exists in two forms, which are Online SLAM and Full SLAM.
In both forms, the algorithm estimates a map of its environment. However, Online SLAM estimates only single poses of the robot at specific time instances.
Given the measurements $z_{1:t}$ and the control inputs $u_{1:t}$, the problem is to find the posterior belief of the robot pose $x_{t}$ at time $t$ and the map $m$.</p>
\[\begin{align}
p(x_{t}, m|z_{1:t}, u_{1:t})
\end{align}\]
<p>Full SLAM, on the other hand, estimates a full trajectory $x_{1:t}$ of the robot instead of just a single pose $x_t$ at a particular time step.</p>
\[\begin{align}
p(x_{1:t}, m|z_{1:t}, u_{1:t})
\end{align}\]
<p>Both problems are related to each other. The Online SLAM problem is the result of integrating the Full SLAM problem over the individual robot poses, one at a time.</p>
\[\begin{align}
\underbrace{p(x_t, m|z_{1:t}, u_{1:t})}_{\text{Online SLAM}} = \int \int \dots \int \underbrace{p(x_{1:t}, m|z_{1:t}, u_{1:t})}_{\text{Full SLAM}} dx_1 dx_2 \dots dx_{t-1}
\end{align}\]
<p>Another characteristic of SLAM is that it is both a continuous and a discrete problem. Robot poses and object or landmark locations are continuous aspects of the SLAM problem.
While sensing the environment continuously, a discrete relation between previously detected objects and newly detected ones needs to be made. This relation is known as correspondence and helps the robot detect whether it has been in the same location before.
With SLAM, a mobile robot is establishing a discrete relation between newly and previously detected objects.</p>
<p>Correspondences should be included in the estimation problem, meaning that the posterior includes the correspondences in both the online and the full SLAM problem.</p>
\[\begin{align}
&\text{Online SLAM: } p(x_t, m|z_{1:t}, u_{1:t}) \Rightarrow p(x_t, m, c_t|z_{1:t}, u_{1:t}) \\
&\text{Full SLAM: } p(x_{1:t}, m|z_{1:t}, u_{1:t}) \Rightarrow p(x_{1:t}, m, c_{1:t}|z_{1:t}, u_{1:t})
\end{align}\]
<p>The advantage of adding the correspondences to both problems is that the robot better understands where it is located by establishing a relation between objects.
The relation between the online SLAM and full SLAM problem is defined as</p>
\[\begin{align}
\underbrace{p(x_t, m, c_t|z_{1:t}, u_{1:t})}_{\text{Online SLAM}} = \int \int \dots \int \sum_{c_1} \sum_{c_2} \dots \sum_{c_{t-1}} \underbrace{p(x_{1:t}, m, c_{1:t}|z_{1:t}, u_{1:t})}_{\text{Full SLAM}} dx_1 dx_2 \dots dx_{t-1}
\end{align}\]
<p>where it is now required to sum over the correspondence values and integrate over the robot poses from the Full SLAM problem.</p>
<h3 id="challenges">Challenges</h3>
<p>The continouous portion consists of the robot poses and object locations and is highly dimensional.
Also the discrete correspondences between detected objects are highly dimensional.
These aspects require an approximation even when known correspondences are assumed.<br />
There exist two instances of FastSLAM that require known correspondences, which are FastSLAM 1.0 and FastSLAM 2.0. With these approaches, each particle holds a guess of the robot trajectory, and by doing so the SLAM problem is reduced to mapping with known poses.</p>
<p>To do SLAM without known correspondences, meaning without known landmark positions, the algorithm in the following section can be used.</p>
<h2 id="grid-based-fastslam-algorithm">Grid-based FastSLAM Algorithm</h2>
<p>FastSLAM solves the Full SLAM problem with known correspondences using a custom particle filter approach known as the Rao-Blackwellized particle filter. This approach estimates a posterior over the trajectory using a particle filter.
With this trajectory the robot poses are known, and the mapping problem is then solved with a low-dimensional Extended Kalman Filter. This filter models independent features of the map with local Gaussians.<br />
Using a grid map the environment can be modeled and FastSLAM is extended without predefining any landmark positions. This makes it possible to solve the SLAM problem in an arbitrary environment.</p>
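<p>Formally, the Rao-Blackwellization rests on factoring the Full SLAM posterior into a trajectory posterior, handled by the particle filter, and a map posterior conditioned on the trajectory:</p>
\[\begin{align}
p(x_{1:t}, m|z_{1:t}, u_{1:t}) = p(x_{1:t}|z_{1:t}, u_{1:t}) \, p(m|x_{1:t}, z_{1:t})
\end{align}\]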
<p>With the Grid-based FastSLAM algorithm, each particle holds a guess of the robot trajectory using an MCL particle filter. Additionally, each particle maintains its own map by utilizing the occupancy grid mapping algorithm.<br />
The steps of the algorithm consist of sampling motion $p(x_t|x_{t-1}^{[k]}, u_t)$, map estimation $p(m_t|z_t, x_t^{[k]}, m_{t-1}^{[k]})$ and importance weighting $p(z_t|x_t^{[k]}, m^{[k]})$.</p>
<pre id="read-1" style="display:none;">
\begin{algorithm}
\caption{Grid-based FastSLAM}
\begin{algorithmic}
\PROCEDURE{FastSLAM}{$X_{t-1}, u_t, z_t$}
\STATE $\bar{X}_t = X_t = \emptyset$
\FOR{$k = 1$ \TO $M$}
\STATE $x_t^{[k]} = $ \CALL{MotionUpdate}{$u_t, x_{t-1}^{[k]}$}
\STATE $w_t^{[k]} = $ \CALL{SensorUpdate}{$z_t, x_{t}^{[k]}$}
\STATE $m_t^{[k]} = $ \CALL{UpdateOccupancyGrid}{$z_t, x_{t}^{[k]}, m_{t-1}^{[k]}$}
\STATE $\bar{X}_t = \bar{X}_t + \left < x_{t}^{[k]}, w_{t}^{[k]} \right >$
\ENDFOR
\FOR{$k = 1$ \TO $M$}
\STATE draw $i$ with probability $w_t^{[i]}$
\STATE add $\left < x_t^{[i]}, m_t^{[i]} \right >$ \TO $X_t$
\ENDFOR
\RETURN $X_t$
\ENDPROCEDURE
\end{algorithmic}
\end{algorithm}
</pre>
<div class="post-pseudocode" id="goal-1"></div>
<script type="text/javascript">
var code = document.getElementById("read-1").textContent;
var parentEl = document.getElementById("goal-1");
var options = {
lineNumber: true
};
pseudocode.render(code, parentEl, options);
</script>
<p>The algorithm takes the previous belief $X_{t-1}$, consisting of particle poses and maps, the actuation commands $u_t$ and the sensor measurements $z_t$ as input. Initially $M \in \mathbb{N}$ particles are generated randomly, which defines the initial belief $\bar{X}_t$.
The first for loop represents the motion, sensor and map update steps. Here, the pose of each particle is estimated and the likelihoods of the measurements and the map are updated.
To update the measurement model likelihood, the importance weight technique is used in the <code class="language-plaintext highlighter-rouge">measurement_model_map</code> function. In the <code class="language-plaintext highlighter-rouge">update_occupancy_grid</code> function, each particle updates its map using the occupancy grid mapping algorithm.
The newly estimated k-th particle pose, map and likelihood of the measurement are all added to the hypothetical belief $\bar{X}_t$.</p>
<p>In the second for loop the resampling of the particles takes place. The resampling is implemented using a resampling wheel technique.
Here, particles whose measurements are close to the robot's real-world measurement values are redrawn more frequently in upcoming iterations. The drawn particle poses and maps are added to the system belief $X_t$, which is returned from the algorithm to start a new iteration with the next motion and sensor updates.</p>
<h2 id="ros-gmapping">ROS gmapping</h2>
<p>The <a href="http://wiki.ros.org/gmapping">gmapping</a> ROS package uses the Grid-based FastSLAM algorithm. This package contains the single <code class="language-plaintext highlighter-rouge">slam_gmapping</code> node, which subscribes to the <code class="language-plaintext highlighter-rouge">tf</code> and <code class="language-plaintext highlighter-rouge">scan</code> topics.
Using these inputs, it generates a 2D occupancy grid map and outputs robot poses on the <code class="language-plaintext highlighter-rouge">map</code> and <code class="language-plaintext highlighter-rouge">entropy</code> topics. Additional map data is provided through the <code class="language-plaintext highlighter-rouge">map_metadata</code> topic. Another way to access the map is to use the service provided by the node.
To demonstrate gmapping, the turtlebot will be deployed in the willow garage environment inside gazebo. Moving the turtlebot around using the <code class="language-plaintext highlighter-rouge">teleop</code> package and running the <code class="language-plaintext highlighter-rouge">slam_gmapping</code> node will generate a map.</p>
<h2 id="links">Links</h2>
<p>Refer to Wikipedia for a <a href="https://en.wikipedia.org/wiki/List_of_SLAM_Methods">list of SLAM methods</a>. There you can also find resources for the FastSLAM instances:</p>
<ul>
<li><a href="">FastSLAM 1.0</a></li>
<li><a href="">FastSLAM 2.0</a></li>
<li><a href="">Grid-based FastSLAM</a></li>
</ul>
<p>Further details about MCL are found in the <a href="http://robots.stanford.edu/papers/thrun.robust-mcl.pdf">paper</a> of Sebastian Thrun et al.
The gmapping algorithm can be found <a href="https://openslam-org.github.io/gmapping.html">here</a>.</p>
<h2 id="reference">Reference</h2>
<p>This post is a summary of the lesson on FastSLAM from the <a href="https://eu.udacity.com/course/robotics-software-engineer--nd209">Robotics Nanodegree of Udacity</a>.</p>
<h1 id="ros-kalman-filter-for-sensor-fusion">ROS Kalman Filter for Sensor Fusion (2019-03-06)</h1>
<p>The <a href="/posts/state-estimation/extended-kalman-filter">previous post</a> described the extended <a href="https://de.wikipedia.org/wiki/Kalman-Filter">Kalman filter</a>.
This post explains how to create a ROS package that implements an extended Kalman filter, which can be used for sensor fusion.
The sensor data that will be fused together comes from a robot's <a href="/projects/autonomous-rc-car/minimu9v5/">inertial measurement unit</a> (imu), rotary encoders (wheel odometry) and vision sensors (camera). The ekf package that is developed in this post will be used to compare the sensor data and
apply sensor fusion to estimate the pose of the robot as it moves around.
To achieve this, the following packages are used.</p>
<ul>
<li><strong>turtlebot_gazebo</strong> launches a mobile robot in the gazebo environment.</li>
<li><strong>robot_pose_ekf</strong> estimates the position and orientation of the robot.</li>
<li><strong>odom_to_trajectory</strong> appends the odometry values generated over time to a trajectory path.</li>
<li><strong>turtlebot_teleop</strong> allows to steer the robot using keyboard commands.</li>
<li><strong>rviz</strong> lets us visualize the estimated position and orientation of the robot.</li>
</ul>
<p>This post refers to the ROS <a href="http://wiki.ros.org/roscpp/Overview/Publishers%20and%20Subscribers">documentation</a> and makes use of the <a href="http://docs.ros.org/api/roscpp/html/classros_1_1Publisher.html">publisher</a> and <a href="http://docs.ros.org/api/roscpp/html/classros_1_1Subscriber.html">subscriber</a> <a href="http://wiki.ros.org/Nodes">nodes</a> which publish and subscribe <a href="http://wiki.ros.org/Messages">messages</a> to <a href="http://wiki.ros.org/Topics">topics</a> of ROS. It explains how to change the mentioned packages to integrate these nodes into a single launch file.<br />
With this launch file it is possible to launch the entire environment.</p>
<h2 id="mobile-robot-sensors">Mobile Robot Sensors</h2>
<p>The following section starts off with some basics about the sensors of the mobile robot and shows that each has its disadvantages.
To account for these, the readings of the individual sensors are combined using sensor fusion with the EKF. This is done by taking the noisy measurements,
comparing them, filtering the noise, removing the uncertainties as much as possible and providing a good estimate of the robot’s pose.</p>
<h3 id="inertial-measurement-unit">Inertial Measurement Unit</h3>
<p>An inertial measurement unit (imu) usually consists of a gyroscope (3DOF) and an accelerometer (3DOF) and can have a magnetometer (3DOF) as well.
Each of these elements of the imu has three degrees of freedom (3DOF), which results in 9DOF in total.
Using the accelerometer, the imu can estimate the linear velocity and the position by integration:</p>
\[\begin{align}
v(t_1) &= v(t_0) + \int_{t_0}^{t_1} a(t) dt \\
x(t_1) &= x(t_0) + \int_{t_0}^{t_1} v(t) dt \\
\end{align}\]
<p>Taking the double integral results in a noisy position. This error is accumulated even more over time and makes it necessary to check the drift or error parameters over time.</p>
<p>The same is done to find the orientation, by integrating the angular velocity provided by the imu.</p>
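<p>The accumulated drift can be illustrated with a short Python sketch of the double integration above. The sample rate and the constant accelerometer bias are made-up example values; they show how even a tiny bias grows into a large position error over time.</p>

```python
def integrate_imu(accels, dt):
    """Integrate acceleration samples twice to obtain velocity and position."""
    v, x = 0.0, 0.0
    for a in accels:
        v += a * dt          # v(t1) = v(t0) + integral of a(t)
        x += v * dt          # x(t1) = x(t0) + integral of v(t)
    return v, x

dt = 0.01                    # made-up 100 Hz imu rate
true_accel = [0.0] * 1000    # the robot actually stands still for 10 s
bias = 0.05                  # made-up constant accelerometer bias in m/s^2
measured = [a + bias for a in true_accel]

v, x = integrate_imu(measured, dt)
print(v, x)  # drift of about 0.5 m/s and 2.5 m although the robot never moved
```

The velocity error grows linearly with the bias, and the position error quadratically, which is why the drift parameters have to be checked over time.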
<h3 id="rotary-encoders">Rotary Encoders</h3>
<p>This type of sensor is attached to the robot's actuated wheels to measure their velocity and position. To estimate the robot's position, the integral of the velocity is calculated.
However, a wheel might slip on the ground or get stuck between obstacles, for example when the robot is driving against a wall. In these cases the position won't be calculated correctly.
It is furthermore important to check an encoder's resolution. An encoder with low resolution is highly sensitive to slippage.</p>
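<p>As a sketch, the pose of a differential drive robot can be propagated from the wheel distances reported by the encoders. The track width and wheel distances below are illustrative values, not TurtleBot parameters.</p>

```python
import math

def odom_update(x, y, theta, d_left, d_right, track_width):
    """Propagate a 2D pose (x, y, theta) from the distance
    travelled by each wheel of a differential drive robot."""
    d = (d_left + d_right) / 2.0              # distance of the robot center
    d_theta = (d_right - d_left) / track_width  # change of heading
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# driving straight: both encoders report the same distance
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = odom_update(*pose, d_left=0.1, d_right=0.1, track_width=0.3)
print(pose)  # approximately (1.0, 0.0, 0.0)
```

If a wheel slips, the reported distance no longer matches the true motion, so the integrated pose drifts away from the real one, which is exactly the failure mode described above.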
<h3 id="vision-sensor">Vision Sensor</h3>
<p>A vision sensor can be an rgb 2D camera or even an rgbd camera that can sense the depth of its environment.
The robot is then able to sense the depth towards an obstacle, which can be translated to a position.
The drawbacks of cameras show in low light environments, which would require other sensors such as radar or lidar. It is also important to look at the field of view (FOV) of the vision system and
take the smallest range of operation into account.</p>
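<p>How a depth reading translates to a position can be sketched with the pinhole camera model. The intrinsics below are made-up example values for a 640x480 camera, not calibration data of the TurtleBot's sensor.</p>

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a known depth into a 3D point
    in the camera frame (pinhole model, no lens distortion)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# made-up intrinsics: focal lengths and principal point in pixels
fx = fy = 525.0
cx, cy = 319.5, 239.5

# an obstacle seen exactly at the image center, 2 m away
print(pixel_to_point(319.5, 239.5, 2.0, fx, fy, cx, cy))  # (0.0, 0.0, 2.0)
```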
<h2 id="the-ros-packages">The ROS Packages</h2>
<p>In the next sections the single packages are explained. To follow the instructions, a ROS <a href="http://wiki.ros.org/catkin/workspaces">catkin workspace</a> needs to be initialized, which allows to hold the packages.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">mkdir</span> <span class="nt">-p</span> /home/workspace/catkin_ws/src
<span class="nb">cd</span> /home/workspace/catkin_ws/src
catkin_init_workspace
<span class="nb">cd</span> ..
catkin_make</code></pre></figure>
<p>The <a href="http://wiki.ros.org/catkin/commands/catkin_make"><code class="language-plaintext highlighter-rouge">catkin_make</code></a> command is a convenience tool for working with catkin workspaces. Running it the first time in your workspace, it will create a <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> symlink in your <code class="language-plaintext highlighter-rouge">src</code> folder.
This link points to a template <code class="language-plaintext highlighter-rouge">CMakeLists.txt</code> ROS provides. Calling <code class="language-plaintext highlighter-rouge">catkin_make</code> will also build any packages located in <code class="language-plaintext highlighter-rouge">~/catkin_ws/src</code>.</p>
<h3 id="turtlebot-gazebo-package">TurtleBot Gazebo Package</h3>
<p><a href="https://www.turtlebot.com/">TurtleBot</a> is not only a ROS package but also a real robot which is used extensively in ROS for localization, mapping and path planning. At the time of writing there exist three different versions of the TurtleBot:</p>
<ol>
<li>The TurtleBot1 with an iRobot Create Base (2010)</li>
<li>TurtleBot2 with the Kobuki Base (2012)</li>
<li>TurtleBot3 family (2017)</li>
</ol>
<p>The TurtleBot platforms follow the <a href="http://www.ros.org/reps/rep-0119.html">REP 119 specification</a>, which is part of the <a href="http://www.ros.org/reps/rep-0000.html">ROS Enhancement Proposals (REP)</a>.
Here, TurtleBot2 is deployed in a gazebo environment to estimate its pose. This is done by using its onboard sensors: rotary encoders, an imu and an rgbd camera.</p>
<iframe width="1280" height="720" src="https://www.youtube.com/embed/MOEjL8JDvd0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>To obtain the list of subscriber and publisher topics, we get the turtlebot package from ROS with the following terminal commands. This will clone the package into the catkin workspace.</p>
<p><strong>Clone the package</strong></p>
<p>New packages from external repositories such as github are conventionally placed into the src folder of your catkin workspace.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws/src
git clone https://github.com/turtlebot/turtlebot_simulator</code></pre></figure>
<p><strong>Install the dependencies</strong></p>
<p>Dependencies for a ROS package can be installed using <a href="http://docs.ros.org/independent/api/rosdep/html/"><code class="language-plaintext highlighter-rouge">rosdep</code></a> which is the dependency tool of ROS.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws
<span class="nb">source </span>devel/setup.bash
rosdep <span class="nt">-i</span> <span class="nb">install </span>turtlebot_gazebo</code></pre></figure>
<p>Output</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c">#All required rosdeps installed successfully</span></code></pre></figure>
<p class="notice--info">The <a href="https://en.wikipedia.org/wiki/Source_(command)">source command</a> is required to overlay this workspace on top of your <a href="http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment#Managing_Your_Environment">environment</a>. To understand more about this see the general <a href="http://wiki.ros.org/catkin">catkin documentation</a>.</p>
<p><strong>Build package and source the workspace</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">catkin_make
<span class="nb">source </span>devel/setup.bash</code></pre></figure>
<p><strong>Launch the nodes</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">roslaunch turtlebot_gazebo turtlebot_world.launch</code></pre></figure>
<figure class=" ">
<a href="/assets/posts/2019-03-06-ros-kalman-filter/turtlebot_gazebo.png" title="Gazebo simulation of the turtlebot package.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/turtlebot_gazebo.png" alt="Gazebo simulation of the turtlebot package." />
</a>
<figcaption>Gazebo simulation of the turtlebot package.
</figcaption>
</figure>
<p><strong>List the topics</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">rostopic list</code></pre></figure>
<p>The topics that will be listed are</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">/camera/depth/camera_info
/camera/depth/image_raw
/camera/depth/points
/camera/parameter_descriptions
/camera/parameter_updates
/camera/rgb/camera_info
/camera/rgb/image_raw
/camera/rgb/image_raw/compressed
/camera/rgb/image_raw/compressed/parameter_descriptions
/camera/rgb/image_raw/compressed/parameter_updates
/camera/rgb/image_raw/compressedDepth
/camera/rgb/image_raw/compressedDepth/parameter_descriptions
/camera/rgb/image_raw/compressedDepth/parameter_updates
/camera/rgb/image_raw/theora
/camera/rgb/image_raw/theora/parameter_descriptions
/camera/rgb/image_raw/theora/parameter_updates
/clock
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
/joint_states
/mobile_base/commands/motor_power
/mobile_base/commands/reset_odometry
/mobile_base/commands/velocity
/mobile_base/events/bumper
/mobile_base/events/cliff
/mobile_base/sensors/imu_data
/odom
/rosout
/rosout_agg
/tf</code></pre></figure>
<p>It is also possible to use rqt graph to visualize the topics of the package.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">rosrun rqt_graph rqt_graph</code></pre></figure>
<figure class=" ">
<a href="/assets/posts/2019-03-06-ros-kalman-filter/turtlebot_rqt_graph.png" title="The rqt_graph of the turtlebot package.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/turtlebot_rqt_graph.png" alt="The rqt_graph of the turtlebot package." />
</a>
<figcaption>The rqt_graph of the turtlebot package.
</figcaption>
</figure>
<p>The turtlebot_gazebo node has the following subscribers and publishers that are important for us:</p>
<table>
<tbody>
<tr>
<td>Subscribers</td>
<td>/cmd_vel_mux/input/teleop</td>
<td> </td>
<td> </td>
</tr>
<tr>
<td>Publishers</td>
<td>/odom</td>
<td>/camera/depth/image_raw</td>
<td>/mobile_base/sensors/imu_data</td>
</tr>
</tbody>
</table>
<h3 id="robot-pose-ekf-package">Robot Pose EKF Package</h3>
<p>In this section we use the robot_pose_ekf package, which implements an extended Kalman filter, to estimate the robot’s pose.
The documentation of the <a href="http://wiki.ros.org/robot_pose_ekf">robot_pose_ekf package</a> shows that the node subscribes to the rotary encoder data through the /odom topic.
It is also subscribing to the imu data through the /imu_data topic. Lastly, the node is subscribing to three dimensional odometry data through
the /vo topic. These sensor inputs get fused by the ekf, which results in a filtered and more accurate output pose than a single sensor could provide.
This combined output is published by the ekf node, which acts as a publisher. The topic is called /robot_pose_ekf/odom_combined and holds an estimated 3D pose of the robot.</p>
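<p>The core idea of the fusion can be sketched with a one-dimensional Kalman update. This is a heavily simplified stand-in for what robot_pose_ekf does on a full 3D pose with covariance matrices; all numbers below are illustrative.</p>

```python
def kalman_update(mean, var, z, z_var):
    """Fuse a prediction (mean, var) with a measurement (z, z_var).
    The more certain source (smaller variance) pulls harder on the estimate."""
    k = var / (var + z_var)           # Kalman gain
    new_mean = mean + k * (z - mean)
    new_var = (1.0 - k) * var         # fused variance is always smaller
    return new_mean, new_var

# predicted x-position from wheel odometry, with its variance
mean, var = 1.0, 0.5
# imu-derived position measurement, assumed more certain here
mean, var = kalman_update(mean, var, z=1.2, z_var=0.1)
print(mean, var)  # estimate moves toward the more certain sensor
```

Note that the fused variance is smaller than either input variance, which is why combining noisy sensors yields a more accurate pose than any single one.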
<p>To install the package, its git repository is cloned into the src directory of the catkin workspace.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws/src/
git clone https://github.com/udacity/robot_pose_ekf</code></pre></figure>
<p>To interface the robot_pose_ekf package with the turtlebot_gazebo package and estimate the robot's pose, it is necessary to rename the following topics to match them.
The turtlebot_gazebo package provides an rgbd topic, which cannot directly be used by the robot_pose_ekf package. Also, the 3D odometry /vo topic of the ekf package cannot be provided by the turtlebot_gazebo package.
Therefore we use and match only the remaining two topics:</p>
<table>
<thead>
<tr>
<th>turtlebot_gazebo</th>
<th>robot_pose_ekf</th>
</tr>
</thead>
<tbody>
<tr>
<td>/odom</td>
<td>/odom</td>
</tr>
<tr>
<td>/mobile_base/sensors/imu_data</td>
<td>/imu_data</td>
</tr>
</tbody>
</table>
<p>These modifications can be achieved by editing the ekf launch file <code class="language-plaintext highlighter-rouge">robot_pose_ekf.launch</code> in the <code class="language-plaintext highlighter-rouge">src</code> folder of the ekf package in order to turn off the 3D odometry.</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><launch></span>
<span class="nt"><node</span> <span class="na">pkg=</span><span class="s">"robot_pose_ekf"</span> <span class="na">type=</span><span class="s">"robot_pose_ekf"</span> <span class="na">name=</span><span class="s">"robot_pose_ekf"</span><span class="nt">></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"output_frame"</span> <span class="na">value=</span><span class="s">"odom_combined"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"base_footprint_frame"</span> <span class="na">value=</span><span class="s">"base_footprint"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"freq"</span> <span class="na">value=</span><span class="s">"30.0"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"sensor_timeout"</span> <span class="na">value=</span><span class="s">"1.0"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"odom_used"</span> <span class="na">value=</span><span class="s">"true"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"imu_used"</span> <span class="na">value=</span><span class="s">"true"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"vo_used"</span> <span class="na">value=</span><span class="s">"false"</span><span class="nt">/></span>
<span class="nt"><remap</span> <span class="na">from=</span><span class="s">"imu_data"</span> <span class="na">to=</span><span class="s">"/mobile_base/sensors/imu_data"</span> <span class="nt">/></span>
<span class="nt"></node></span>
<span class="nt"></launch></span></code></pre></figure>
<p><strong>Build the package</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws
catkin_make
<span class="nb">source </span>devel/setup.bash</code></pre></figure>
<p><strong>Launch the Node</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">roslaunch robot_pose_ekf robot_pose_ekf.launch</code></pre></figure>
<p><strong>Visualize the topics:</strong></p>
<p>To confirm that the topics of the robot_pose_ekf package and the turtlebot_gazebo package are communicating with each other we check the rqt_graph.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">rosrun rqt_graph rqt_graph</code></pre></figure>
<figure class=" ">
<a href="/assets/posts/2019-03-06-ros-kalman-filter/turtlebot_ekf_rqt_graph.png" title="The rqt_graph of the turtlebot and ekf package.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/turtlebot_ekf_rqt_graph.png" alt="The rqt_graph of the turtlebot and ekf package." />
</a>
<figcaption>The rqt_graph of the turtlebot and ekf package.
</figcaption>
</figure>
<h3 id="odometry-to-trajectory-package">Odometry to Trajectory Package</h3>
<p>Currently the turtlebot is publishing the unfiltered pose through the <code class="language-plaintext highlighter-rouge">/odom</code> topic and the ekf is publishing the filtered pose via the <code class="language-plaintext highlighter-rouge">/robot_pose_ekf/odom_combined</code> topic as the following commands show.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">root@pc:/home/workspace/catkin_ws# rostopic info /odom
Type: nav_msgs/Odometry
Publishers:
<span class="k">*</span> /gazebo <span class="o">(</span>http://pc:36747/<span class="o">)</span>
Subscribers:
<span class="k">*</span> /robot_pose_ekf <span class="o">(</span>http://pc:44537/<span class="o">)</span>
root@pc:/home/workspace/catkin_ws# rostopic info /robot_pose_ekf/odom_combined
Type: geometry_msgs/PoseWithCovarianceStamped
Publishers:
<span class="k">*</span> /robot_pose_ekf <span class="o">(</span>http://d7ea9f1e67d3:44537/<span class="o">)</span>
Subscribers: None</code></pre></figure>
<p>To compare filtered and unfiltered trajectories it is required to append the timestamped poses, which are generated over time, to a trajectory using another node.
To do this, two trajectories are created using the <code class="language-plaintext highlighter-rouge">odom_to_trajectory</code> ROS package. It subscribes to odometry values and publishes trajectories.<br />
The package contains two nodes. The first node subscribes to the unfiltered robot poses and appends the last 1000 poses to a trajectory that is then published.
The second node subscribes to the filtered robot pose from the ekf package and also publishes a trajectory using the last 1000 poses.</p>
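<p>Keeping only the last 1000 poses, as both nodes do, can be sketched with a bounded buffer. The callback and the pose tuples below are illustrative, not the actual package code, which builds and publishes nav_msgs/Path messages.</p>

```python
from collections import deque

path = deque(maxlen=1000)   # drops the oldest pose automatically

def odom_callback(stamped_pose):
    """Append each incoming pose; a Path message would then be
    built from this buffer and published."""
    path.append(stamped_pose)

# simulate 1500 incoming odometry messages as (stamp, x, y) tuples
for t in range(1500):
    odom_callback((t, t * 0.01, 0.0))

print(len(path), path[0][0])  # 1000 500 -> only the newest 1000 poses remain
```

Bounding the buffer keeps the published Path message at a constant size, so rviz does not have to render an ever-growing trajectory.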
<p><strong>Install the package</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws/src
git clone https://github.com/udacity/odom_to_trajectory</code></pre></figure>
<p><strong>Build the package</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws
catkin_make
<span class="nb">source </span>devel/setup.bash</code></pre></figure>
<p><strong>Launch the Node</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">roslaunch odom_to_trajectory create_trajectory.launch</code></pre></figure>
<p><strong>Visualize the topics:</strong></p>
<p>To confirm that the topics of the odom_to_trajectory package are communicating with the robot_pose_ekf and turtlebot_gazebo packages, we check the rqt_graph.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">rosrun rqt_graph rqt_graph</code></pre></figure>
<figure class=" ">
<a href="/assets/posts/2019-03-06-ros-kalman-filter/turtlebot_ekf_trajectory_rqt_graph.png" title="The rqt_graph of the turtlebot ekf and trajectory package.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/turtlebot_ekf_trajectory_rqt_graph.png" alt="The rqt_graph of the turtlebot ekf and trajectory package." />
</a>
<figcaption>The rqt_graph of the turtlebot ekf and trajectory package.
</figcaption>
</figure>
<p>When we show only the active nodes and topics, we see now that the /robot_pose_ekf/odom_combined topic is publishing to the subscribing /path_plotter_ekf node.</p>
<h3 id="turtlebot-teleop-package">TurtleBot Teleop Package</h3>
<p>Currently the robot is standing still which results in trajectories with zero values and the robot is only performing sensor updates without motion updates.
To get more interesting trajectory data from the previously mentioned node, the robot needs to move and collect further sensory information.
To move the robot we use the <a href="http://wiki.ros.org/turtlebot_teleop">turtlebot_teleop</a> package, which lets you control a robot using the keyboard or a joystick by publishing driving commands.</p>
<p>This package does not subscribe to any topic. It only publishes control commands to the <code class="language-plaintext highlighter-rouge">/cmd_vel_mux/input/teleop</code> topic.
This is the same topic to which the turtlebot_gazebo package is subscribing.</p>
<p>To let the turtlebot_gazebo package subscribe to the turtlebot_teleop package via the <code class="language-plaintext highlighter-rouge">/cmd_vel_mux/input/teleop</code> topic, the new node needs to be launched.
This makes it possible to drive the robot around in the gazebo environment and create trajectories which can be compared.</p>
<p><strong>Install the package</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws/src
git clone https://github.com/turtlebot/turtlebot</code></pre></figure>
<p><strong>Install the dependencies</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws
<span class="nb">source </span>devel/setup.bash
rosdep <span class="nt">-i</span> <span class="nb">install </span>turtlebot_teleop</code></pre></figure>
<p><strong>Build the package</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws
catkin_make
<span class="nb">source </span>devel/setup.bash</code></pre></figure>
<p><strong>Launch the Node</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">roslaunch turtlebot_teleop keyboard_teleop.launch</code></pre></figure>
<p>Now it is possible to move the robot around using the keyboard commands.
However, make sure that the terminal running the teleop node is active and receiving the keyboard commands.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">Control Your Turtlebot!
<span class="nt">---------------------------</span>
Moving around:
u i o
j k l
m , <span class="nb">.</span>
q/z : increase/decrease max speeds by 10%
w/x : increase/decrease only linear speed by 10%
e/c : increase/decrease only angular speed by 10%
space key, k : force stop
anything <span class="k">else</span> : stop smoothly</code></pre></figure>
<h3 id="rviz-package">Rviz Package</h3>
<p><a href="http://wiki.ros.org/rviz">Rviz</a> is used to visualize the robot data and helpful for debugging. An introduction can be found in the <a href="http://wiki.ros.org/rviz/UserGuide">user guide</a>.
To display the data of the nodes that are running we run rviz.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">rosrun rviz rviz</code></pre></figure>
<p>After rviz runs, it needs to be configured.</p>
<ul>
<li>Change the <a href="">Fixed Frame</a> to <code class="language-plaintext highlighter-rouge">base_footprint</code></li>
<li>Change the <a href="">Reference Frame</a> to <code class="language-plaintext highlighter-rouge">odom</code></li>
<li>Add a robot model</li>
<li>Add a camera and select <code class="language-plaintext highlighter-rouge">/camera/rgb/image_raw</code> topic</li>
<li>Add a <code class="language-plaintext highlighter-rouge">/ekfpath</code> topic and change the display name to EKFPath</li>
<li>Add a <code class="language-plaintext highlighter-rouge">/odom</code> topic and change the display name to OdomPath and its color to red <code class="language-plaintext highlighter-rouge">(255,0,0)</code></li>
</ul>
<figure class="third ">
<a href="/assets/posts/2019-03-06-ros-kalman-filter/rviz01_add_robot_model.png" title="Add robot model">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/rviz01_add_robot_model.png" alt="Add robot model in rviz" />
</a>
<a href="/assets/posts/2019-03-06-ros-kalman-filter/rviz02_add_camera.png" title="Add the camera of the robot">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/rviz02_add_camera.png" alt="Add camera of robot in rviz" />
</a>
<a href="/assets/posts/2019-03-06-ros-kalman-filter/rviz03_add_odom_topic.png" title="Add the odom topic">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/rviz03_add_odom_topic.png" alt="Add odom topic to rviz" />
</a>
<a href="/assets/posts/2019-03-06-ros-kalman-filter/rviz04_add_ekf_topic.png" title="Add the ekf topic">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/rviz04_add_ekf_topic.png" alt="Add ekf topic to rviz" />
</a>
<figcaption>Add these to the rviz configuration.
</figcaption>
</figure>
<p>Completing these steps will result in the following rviz configuration when the robot is controlled by the turtlebot_teleop node.</p>
<figure class=" ">
<a href="/assets/posts/2019-03-06-ros-kalman-filter/rviz_trajectories.png" title="Trajectories of the robot shown in rviz when it is driven around.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/rviz_trajectories.png" alt="Trajectories of the robot shown in rviz when it is driven around." />
</a>
<figcaption>Trajectories of the robot shown in rviz when it is driven around.
</figcaption>
</figure>
<p>Save this rviz configuration to avoid repeating the steps above.
You can save it for example in the <code class="language-plaintext highlighter-rouge">src</code> folder of your catkin workspace as <code class="language-plaintext highlighter-rouge">TurtleBotEKF.rviz</code>.</p>
<p>To test if the saved configuration works, kill the rviz terminal and relaunch rviz with the following parameter.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">rosrun rviz rviz <span class="nt">-d</span> /home/workspace/catkin_ws/src/TurtleBotEKF.rviz</code></pre></figure>
<p>It is also possible to use a launch file in order to run this rviz configuration using <code class="language-plaintext highlighter-rouge">roslaunch</code>.</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><launch></span>
<span class="c"><!--RVIZ--></span>
<span class="nt"><node</span> <span class="na">pkg=</span><span class="s">"rviz"</span> <span class="na">type=</span><span class="s">"rviz"</span> <span class="na">name=</span><span class="s">"rviz"</span> <span class="na">args=</span><span class="s">"-d /home/workspace/catkin_ws/src/TurtleBotEKF.rviz"</span><span class="nt">/></span>
<span class="nt"></launch></span></code></pre></figure>
<h2 id="main-launch">Main Launch</h2>
<p>The created launch file can be integrated together with the launch files of the other nodes in a main launch file <code class="language-plaintext highlighter-rouge">main.launch</code>.
This simplifies the process of launching all the created nodes in the mentioned packages. Instead of launching them individually in separate terminals, it is sufficient to launch the main launch file.</p>
<p>To achieve this, we create a main package that contains the <code class="language-plaintext highlighter-rouge">main.launch</code>.
A new ROS package can be created with the <a href="http://wiki.ros.org/ROS/Tutorials/catkin/CreatingPackage"><code class="language-plaintext highlighter-rouge">catkin_create_pkg</code></a> command.</p>
<p><strong>Create a main package</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws/src
catkin_create_pkg main</code></pre></figure>
<p><strong>Build the package</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws
catkin_make</code></pre></figure>
<p><strong>Create and edit the main.launch file</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws/src/main
<span class="nb">mkdir </span>launch
<span class="nb">cd </span>launch
vim main.launch</code></pre></figure>
<p>Use the content for the <code class="language-plaintext highlighter-rouge">main.launch</code> file from <a href="https://github.com/udacity/RoboND-EKFLab/blob/master/main/launch/main.launch">Udacity’s GitHub repository</a> or copy it below.</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><launch></span>
<span class="c"><!--Robot Pose EKF Package --></span>
<span class="c"><!-- The path_ekf_plotter node --></span>
<span class="nt"><node</span> <span class="na">name=</span><span class="s">"path_ekf_plotter"</span> <span class="na">type=</span><span class="s">"path_ekf_plotter.py"</span> <span class="na">pkg=</span><span class="s">"odom_to_trajectory"</span><span class="nt">></span>
<span class="nt"></node></span>
<span class="c"><!-- The path_odom_plotter node --></span>
<span class="nt"><node</span> <span class="na">name=</span><span class="s">"path_odom_plotter"</span> <span class="na">type=</span><span class="s">"path_odom_plotter.py"</span> <span class="na">pkg=</span><span class="s">"odom_to_trajectory"</span><span class="nt">></span>
<span class="c"><!--RobotPose EKF package--></span>
<span class="nt"></node></span>
<span class="nt"><node</span> <span class="na">pkg=</span><span class="s">"robot_pose_ekf"</span> <span class="na">type=</span><span class="s">"robot_pose_ekf"</span> <span class="na">name=</span><span class="s">"robot_pose_ekf"</span><span class="nt">></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"output_frame"</span> <span class="na">value=</span><span class="s">"odom_combined"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"base_footprint_frame"</span> <span class="na">value=</span><span class="s">"base_footprint"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"freq"</span> <span class="na">value=</span><span class="s">"30.0"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"sensor_timeout"</span> <span class="na">value=</span><span class="s">"1.0"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"odom_used"</span> <span class="na">value=</span><span class="s">"true"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"imu_used"</span> <span class="na">value=</span><span class="s">"true"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"vo_used"</span> <span class="na">value=</span><span class="s">"false"</span><span class="nt">/></span>
<span class="nt"><remap</span> <span class="na">from=</span><span class="s">"imu_data"</span> <span class="na">to=</span><span class="s">"/mobile_base/sensors/imu_data"</span> <span class="nt">/></span>
<span class="nt"></node></span>
<span class="c"><!-- TurtleBot Gazebo--></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"world_file"</span> <span class="na">default=</span><span class="s">"$(env TURTLEBOT_GAZEBO_WORLD_FILE)"</span><span class="nt">/></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"base"</span> <span class="na">value=</span><span class="s">"$(optenv TURTLEBOT_BASE kobuki)"</span><span class="nt">/></span> <span class="c"><!-- create, roomba --></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"battery"</span> <span class="na">value=</span><span class="s">"$(optenv TURTLEBOT_BATTERY /proc/acpi/battery/BAT0)"</span><span class="nt">/></span> <span class="c"><!-- /proc/acpi/battery/BAT0 --></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"gui"</span> <span class="na">default=</span><span class="s">"true"</span><span class="nt">/></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"stacks"</span> <span class="na">value=</span><span class="s">"$(optenv TURTLEBOT_STACKS hexagons)"</span><span class="nt">/></span> <span class="c"><!-- circles, hexagons --></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"3d_sensor"</span> <span class="na">value=</span><span class="s">"$(optenv TURTLEBOT_3D_SENSOR kinect)"</span><span class="nt">/></span> <span class="c"><!-- kinect, asus_xtion_pro --></span>
<span class="nt"><include</span> <span class="na">file=</span><span class="s">"$(find gazebo_ros)/launch/empty_world.launch"</span><span class="nt">></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"use_sim_time"</span> <span class="na">value=</span><span class="s">"true"</span><span class="nt">/></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"debug"</span> <span class="na">value=</span><span class="s">"false"</span><span class="nt">/></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"gui"</span> <span class="na">value=</span><span class="s">"$(arg gui)"</span> <span class="nt">/></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"world_name"</span> <span class="na">value=</span><span class="s">"$(arg world_file)"</span><span class="nt">/></span>
<span class="nt"></include></span>
<span class="nt"><include</span> <span class="na">file=</span><span class="s">"$(find turtlebot_gazebo)/launch/includes/$(arg base).launch.xml"</span><span class="nt">></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"base"</span> <span class="na">value=</span><span class="s">"$(arg base)"</span><span class="nt">/></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"stacks"</span> <span class="na">value=</span><span class="s">"$(arg stacks)"</span><span class="nt">/></span>
<span class="nt"><arg</span> <span class="na">name=</span><span class="s">"3d_sensor"</span> <span class="na">value=</span><span class="s">"$(arg 3d_sensor)"</span><span class="nt">/></span>
<span class="nt"></include></span>
<span class="nt"><node</span> <span class="na">pkg=</span><span class="s">"robot_state_publisher"</span> <span class="na">type=</span><span class="s">"robot_state_publisher"</span> <span class="na">name=</span><span class="s">"robot_state_publisher"</span><span class="nt">></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"publish_frequency"</span> <span class="na">type=</span><span class="s">"double"</span> <span class="na">value=</span><span class="s">"30.0"</span> <span class="nt">/></span>
<span class="nt"></node></span>
<span class="c"><!-- Fake laser --></span>
<span class="nt"><node</span> <span class="na">pkg=</span><span class="s">"nodelet"</span> <span class="na">type=</span><span class="s">"nodelet"</span> <span class="na">name=</span><span class="s">"laserscan_nodelet_manager"</span> <span class="na">args=</span><span class="s">"manager"</span><span class="nt">/></span>
<span class="nt"><node</span> <span class="na">pkg=</span><span class="s">"nodelet"</span> <span class="na">type=</span><span class="s">"nodelet"</span> <span class="na">name=</span><span class="s">"depthimage_to_laserscan"</span>
<span class="na">args=</span><span class="s">"load depthimage_to_laserscan/DepthImageToLaserScanNodelet laserscan_nodelet_manager"</span><span class="nt">></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"scan_height"</span> <span class="na">value=</span><span class="s">"10"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"output_frame_id"</span> <span class="na">value=</span><span class="s">"/camera_depth_frame"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"range_min"</span> <span class="na">value=</span><span class="s">"0.45"</span><span class="nt">/></span>
<span class="nt"><remap</span> <span class="na">from=</span><span class="s">"image"</span> <span class="na">to=</span><span class="s">"/camera/depth/image_raw"</span><span class="nt">/></span>
<span class="nt"><remap</span> <span class="na">from=</span><span class="s">"scan"</span> <span class="na">to=</span><span class="s">"/scan"</span><span class="nt">/></span>
<span class="nt"></node></span>
<span class="c"><!--RVIZ--></span>
<span class="nt"><node</span> <span class="na">pkg=</span><span class="s">"rviz"</span> <span class="na">type=</span><span class="s">"rviz"</span> <span class="na">name=</span><span class="s">"rviz"</span> <span class="na">args=</span><span class="s">"-d /home/workspace/catkin_ws/src/EKFLab.rviz"</span><span class="nt">/></span>
<span class="c"><!--Use this instead if you are cloning the whole repo in your src--></span>
<span class="c"><!--<node pkg="rviz" type="rviz" name="rviz" args="-d /home/workspace/catkin_ws/src/RoboND-EKFLab/EKFLab.rviz"/>--></span>
<span class="c"><!--Turtlebot Teleop--></span>
<span class="nt"><node</span> <span class="na">pkg=</span><span class="s">"turtlebot_teleop"</span> <span class="na">type=</span><span class="s">"turtlebot_teleop_key"</span> <span class="na">name=</span><span class="s">"turtlebot_teleop_keyboard"</span> <span class="na">output=</span><span class="s">"screen"</span><span class="nt">></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"scale_linear"</span> <span class="na">value=</span><span class="s">"0.5"</span> <span class="na">type=</span><span class="s">"double"</span><span class="nt">/></span>
<span class="nt"><param</span> <span class="na">name=</span><span class="s">"scale_angular"</span> <span class="na">value=</span><span class="s">"1.5"</span> <span class="na">type=</span><span class="s">"double"</span><span class="nt">/></span>
<span class="nt"><remap</span> <span class="na">from=</span><span class="s">"turtlebot_teleop_keyboard/cmd_vel"</span> <span class="na">to=</span><span class="s">"cmd_vel_mux/input/teleop"</span><span class="nt">/></span>
<span class="nt"></node></span>
<span class="nt"></launch></span></code></pre></figure>
<p><strong>Launch the main.launch file</strong></p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nb">cd</span> /home/workspace/catkin_ws/
<span class="nb">source </span>devel/setup.bash
roslaunch main main.launch</code></pre></figure>
<h2 id="rqt-multiplot">Rqt Multiplot</h2>
<p>So far it is possible to move the robot using the turtlebot_teleop node and to publish the unfiltered odometry trajectory as well as the filtered trajectory from the ekf package.
To visualize how close the estimated pose of the ekf is to the unfiltered odometry trajectory we use <a href="https://github.com/ethz-asl/rqt_multiplot_plugin"><code class="language-plaintext highlighter-rouge">rqt_multiplot</code></a>. This is an additional package that needs to be installed.
Note that ROS comes with the plotting package <a href="http://wiki.ros.org/rqt_plot"><code class="language-plaintext highlighter-rouge">rqt_plot</code></a>, which can graph values that are echoed by different topics. However, these signals are only plotted over time.
To get more plotting capability, such as plotting the y position over the x position, <code class="language-plaintext highlighter-rouge">rqt_multiplot</code> is required.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">apt-get <span class="nb">install </span>ros-kinetic-rqt <span class="nt">-y</span>
apt-get <span class="nb">install </span>ros-kinetic-rqt-multiplot <span class="nt">-y</span>
apt-get <span class="nb">install </span>libqwt-dev <span class="nt">-y</span>
<span class="nb">rm</span> <span class="nt">-rf</span> ~/.config/ros.org/rqt_gui.ini</code></pre></figure>
<p>Then run the <code class="language-plaintext highlighter-rouge">rqt_multiplot</code> package node.</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash">rosrun rqt_multiplot rqt_multiplot</code></pre></figure>
<p>Follow the steps below to add two plots. One will show the unfiltered y position over the x position. The other plot will show the filtered y(x) position from the ekf package.</p>
<figure class="third ">
<a href="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_empty.png" title="Configure the multiplot to add plots.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_empty.png" alt="Configure the multiplot to add plots." />
</a>
<a href="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_config.png" title="Add these two curves to the multiplot.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_config.png" alt="Add these two curves to the multiplot." />
</a>
<a href="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_unfiltered.png" title="Settings for the plot of the unfiltered y over x position.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_unfiltered.png" alt="Settings for the plot of the unfiltered y over x position." />
</a>
<a href="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_filtered.png" title="Settings for the plot of the filtered y over x position.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_filtered.png" alt="Settings for the plot of the filtered y over x position." />
</a>
<figcaption>Configuration steps of the filtered and unfiltered position y(x) plots in the rqt_multiplot package node.
</figcaption>
</figure>
<p>By default the graph plotting is paused. Simply press play in <code class="language-plaintext highlighter-rouge">rqt_multiplot</code> and move the robot around. Now it is possible to visualize how close the odometry trajectory is to the filtered trajectory produced by the ekf.</p>
<figure class=" ">
<a href="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_traj.png" title="Plots of the unfiltered and filtered y over x positions from the odometry and ekf.">
<img src="/assets/posts/2019-03-06-ros-kalman-filter/multiplot_traj.png" alt="Plots of the unfiltered and filtered y over x positions from the odometry and ekf." />
</a>
<figcaption>Plots of the unfiltered and filtered y over x positions from the odometry and ekf.
</figcaption>
</figure>
<p>Another comparison that is left open is to compare the filtered and unfiltered data with a precise reference sensor, such as <a href="https://en.wikipedia.org/wiki/Differential_GPS">differential GPS</a>.</p>
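<p>Short of a reference sensor, the two recorded trajectories can at least be compared numerically. A minimal sketch, assuming the (x, y) samples of both paths were recorded at the same rate (e.g., from the odometry and EKF topics); the function name is made up for illustration:</p>

```python
import math

def trajectory_rmse(traj_a, traj_b):
    """Root-mean-square distance between two equally sampled 2D trajectories,
    given as lists of (x, y) tuples."""
    assert len(traj_a) == len(traj_b), "trajectories must have the same number of samples"
    sq_dists = [(ax - bx) ** 2 + (ay - by) ** 2
                for (ax, ay), (bx, by) in zip(traj_a, traj_b)]
    return math.sqrt(sum(sq_dists) / len(sq_dists))
```

<p>A single number like this complements the rqt_multiplot curves: two paths can look close in a plot while still drifting apart by a consistent offset.</p>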
<h2 id="summary">Summary</h2>
<p>This concludes the Udacity Kalman Filter Lab. We’ve estimated the robot’s pose with an extended Kalman filter ROS package.
That was done by remapping some publisher and subscriber topics and establishing communication between nodes of different packages.
To simplify the launch, all of the launch files were grouped into a single <code class="language-plaintext highlighter-rouge">main.launch</code> file that can be launched from the terminal.
Access the <a href="https://github.com/udacity/RoboND-EKFLab">EKF Lab</a> in the Udacity GitHub repository.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.turtlebot.com/">TurtleBot</a></li>
<li><a href="http://wiki.ros.org/Robots/TurtleBot">TurtleBot on ROS</a></li>
<li><a href="https://github.com/turtlebot">TurtleBot on GitHub</a></li>
</ul>
<h2 id="reference">Reference</h2>
<p>This post is a summary of the EKFLab from the Robotics Nanodegree of Udacity found <a href="https://eu.udacity.com/course/robotics-software-engineer--nd209">here</a></p>Franz PucherThe Kalman filter is used for state estimation and sensor fusion. This post shows how sensor fusion is done using the Kalman filter and ROS.Extended Kalman Filter2019-03-06T15:31:41+00:002019-03-06T15:31:41+00:00https://fjp.at/posts/state-estimation/extended-kalman-filter<p>As explained in the <a href="/posts/state-estimation/extended-kalman-filter">previous post</a>, a <a href="https://de.wikipedia.org/wiki/Kalman-Filter">Kalman filter</a> is used as a state predictor for linear systems.
To deal with nonlinear systems given by nonlinear state prediction $x’=f(x)$ and/or nonlinear measurement function $z = h(x)$ the extended Kalman filter can be used.
The essence of this extended approach is the use of the Taylor series to linearize the nonlinear system around an operating point.</p>
<p>The extended Kalman filter requires the calculation of the Jacobian of a nonlinear function in every single iteration, because the mean, which is the point about which the nonlinear function is linearized, changes with every update.</p>
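<p>Numerically, this re-linearization can be seen with a finite-difference Jacobian that is evaluated at whatever mean it is given; the filter simply calls it again with the updated mean on each iteration. A sketch in Python, using a hypothetical range-bearing measurement to a fixed landmark (all names here are illustrative, not part of any ROS package):</p>

```python
import math

def h(state, lx=5.0, ly=3.0):
    """Hypothetical nonlinear measurement: range and bearing from the robot
    pose [x, y, theta] to a landmark at (lx, ly)."""
    x, y, theta = state
    dx, dy = lx - x, ly - y
    return [math.hypot(dx, dy), math.atan2(dy, dx) - theta]

def numerical_jacobian(func, mu, eps=1e-6):
    """Finite-difference Jacobian of func, evaluated at the current mean mu.
    Because mu changes every filter iteration, this must be recomputed each time."""
    z0 = func(mu)
    J = []
    for i in range(len(z0)):          # one row per measurement dimension
        row = []
        for j in range(len(mu)):      # one column per state dimension
            mu_p = list(mu)
            mu_p[j] += eps
            row.append((func(mu_p)[i] - z0[i]) / eps)
        J.append(row)
    return J
```

<p>In practice the Jacobian is usually derived analytically for speed, but the finite-difference version is a useful cross-check of the hand-derived matrix.</p>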
<h2 id="differences-to-the-linear-kalman-filter">Differences to the linear Kalman Filter</h2>
<p>The Kalman filter consists of a prediction and a measurement update step. In the case of a nonlinear model the state is still updated with the
nonlinear state equation $x’= f(x)$. However, the covariance cannot be propagated through the nonlinear function, as this would result in a non-Gaussian distribution.
To update the state covariance, the nonlinear function $\mathbf{f}(\mathbf{x})$ needs to be linearized in a small section around the operating point $\mathbf{x}$, which yields the Jacobian matrix $\mathbf{F}$.
The linearization is achieved by taking the first two terms of the Taylor series of the function centered around the mean.</p>
\[\mathbf{f}(\mathbf{x}) \approx \mathbf{f}(\mathbf{\mu}) + \underbrace{\frac{\partial \mathbf{f}(\mathbf{\mu})}{\partial \mathbf{x}}}_{\mathbf{F}} (\mathbf{x} - \mathbf{\mu})\]
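<p>As a concrete sketch of the prediction step: the state is propagated through the nonlinear $\mathbf{f}$ itself, while the covariance is propagated through the Jacobian $\mathbf{F}$ evaluated at the current mean. The unicycle motion model below is a hypothetical example, not the model used by <code class="language-plaintext highlighter-rouge">robot_pose_ekf</code>:</p>

```python
import math

def f(state, v, w, dt):
    """Hypothetical unicycle motion model for state [x, y, theta] driven by
    linear velocity v and angular velocity w over time step dt."""
    x, y, theta = state
    return [x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt]

def jacobian_F(state, v, w, dt):
    """Jacobian F = df/dx evaluated at the current mean; only the heading
    enters nonlinearly, so only the last column is state-dependent."""
    _, _, theta = state
    return [[1.0, 0.0, -v * math.sin(theta) * dt],
            [0.0, 1.0,  v * math.cos(theta) * dt],
            [0.0, 0.0,  1.0]]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def predict(mu, P, Q, v, w, dt):
    """One EKF prediction step: mean through the nonlinear model,
    covariance through the linearization P' = F P F^T + Q."""
    F = jacobian_F(mu, v, w, dt)
    mu_pred = f(mu, v, w, dt)
    P_pred = mat_mul(mat_mul(F, P), transpose(F))
    P_pred = [[P_pred[i][j] + Q[i][j] for j in range(3)] for i in range(3)]
    return mu_pred, P_pred
```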
<p>Because the measurement function is assumed to be nonlinear the residual results in the following equation</p>
\[y = z - h(x')\]
<p>To update the measurement covariance matrix $\mathbf{S}$, the measurement function $\mathbf{h}$ also needs to be linearized around the mean.
This is again done with the Taylor series, using the Jacobian in the multi-dimensional case.</p>
<p>\(\mathbf{h}(\mathbf{x}) \approx \mathbf{h}(\mathbf{\mu}) + J_{\mathbf{h}}(\mathbf{\mu}) (\mathbf{x} - \mathbf{\mu})\)
The Jacobian $J_{\mathbf{h}}$ has to be re-evaluated at the current mean whenever a linearization is needed, in either the measurement update or the state prediction.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python">Kalman and Bayesian Filters in Python</a></li>
</ul>
<h2 id="references">References</h2>
<p><a href="https://www.amazon.de/Applied-Optimal-Estimation-Mit-Press/dp/0262570483">Applied Optimal Estimation</a>, Arthur Gelb (ed.), MIT Press.</p>
<p><a href="https://www.amazon.de/Probabilistic-Robotics-INTELLIGENT-ROBOTICS-AUTONOMOUS/dp/0262201623">Probabilistic Robotics</a>, Sebastian Thrun, Wolfram Burgard, Dieter Fox, MIT Press.</p>Franz PucherThe Kalman filter is used for state estimation but limited to linear models. To deal with nonlinear models the extended Kalman filter can be used instead.