Machine Vision Project, Spring 2011

In the second semester of the Master of Science in Smart Systems at Jacobs University Bremen, students are required to complete two projects. One of my choices was the Machine Vision Project with Professor Andreas Nuechter. I had already decided last semester, after the professor gave us a basic introduction, that I would take it. I have always loved mobile robotics, so this was the perfect choice. So what is the project?

To come up with a cheap mobile platform for capturing 3D point clouds.

The mobile platform would be a car (in our case, the professor’s VW Touran). The main tool for generating the point cloud is a RIEGL 3D scanner. The tools used to get an accurate pose (position and orientation) estimate include an LMS100 SICK laser scanner for scan matching, an Inertial Measurement Unit (IMU), a GPS receiver, and velocity readings from the car. The whole project would be implemented using ROS (Robot Operating System).

The first task was to make every sensor work separately. This part of the project took the others surprisingly long, considering a lot of the necessary code was already available. My task was to make the laser scanner work. I found a SICK wrapper for the laser scanner in ROS, but the first problem I faced concerned the basic connection set-up. I knew that I would have to open the SICK scanner to connect the power cables, but I did not want to venture there without supervision, so I first got the laser scanner working on another existing robot. A simple launch file, figuring out the static IP of the laser scanner, and some tweaking later, voilà: it worked. This was the first scan I got:

These were good results, but my task wasn’t complete yet. We would be using two laser scanners, so I had to figure out how to use them both simultaneously. This is where using ROS, instead of a simple server-client model, paid off. I could use the package I had created for both scanners: I would just have to create “groups” called sick1 and sick2 and pass each one separate parameters, such as the IP of its laser scanner. Now all we have to do is plug the laser scanners in and test.
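A launch file along these lines would do the trick. The package, node, and parameter names and the IP addresses here are illustrative placeholders, not our exact configuration; the point is that ROS namespaces let the same driver node run twice with different parameters:

```xml
<launch>
  <!-- Two instances of the same SICK driver node, separated by namespace.
       pkg/type/param names and IPs are placeholders for illustration. -->
  <group ns="sick1">
    <node pkg="sicklms100" type="sick_node" name="scanner">
      <param name="hostname" value="192.168.0.10" />
    </node>
  </group>
  <group ns="sick2">
    <node pkg="sicklms100" type="sick_node" name="scanner">
      <param name="hostname" value="192.168.0.11" />
    </node>
  </group>
</launch>
```

Each instance then publishes its scans under its own namespace (e.g. /sick1/scan and /sick2/scan), so nothing downstream has to disambiguate them by hand.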

This was a relatively simple task. I used all my electrical engineering skills to quickly finish the wiring and the set-up of all the power supplies, while the others mounted everything. We are using two 12-volt batteries connected in series, and each of the sensors requires 24 volts to operate, so all the components have to be connected in parallel across the series pair. I did this as shown here:
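The arithmetic behind the wiring is simple: voltages add across a series string, and the currents of parallel loads add on the shared bus. A quick sanity-check sketch, where the per-sensor current draws are illustrative guesses rather than measured values:

```python
# Two 12 V batteries in series feed a 24 V bus; each sensor hangs off
# that bus in parallel. Current figures below are made-up placeholders.

def series_voltage(cell_voltages):
    """Voltages add across a series string."""
    return sum(cell_voltages)

def bus_current(load_currents_a):
    """Currents of parallel loads add on the shared bus."""
    return sum(load_currents_a)

bus_v = series_voltage([12.0, 12.0])    # 24 V bus from two 12 V batteries
total_a = bus_current([1.5, 0.9, 0.9])  # e.g. 3D scanner + two LMS100s
```

The total bus current matters for choosing wire gauge and fusing, and for estimating how long the batteries will last on a data-collection run.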

By the end of the day, we had a basic set-up of the platform.

Now that I had access to the two laser scanners, I had to configure them and make them work. The first step was to change the static IPs of the two scanners, which can be done using the software that comes with them (under Windows). Once that was done, I modified the launch file accordingly, crossed my fingers, and started the laser scanners. I got the following results:

The following week, I was back in the lab. Everything was set up, but the dry run (creating one launch file to make everything work together) took quite a while. By the end of the day, however, we had everything set up and managed to record a couple of data sets.

Now that we had some data, the next task was to use it to find accurate pose estimates. I took on the laser scanners again, this time using Polar Scan Matching to get pose estimates from the forward-looking scanner. The initial results were not very promising, with very few data points available in the data.
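Polar Scan Matching works directly on the range/bearing measurements of the laser scanner instead of first converting them to Cartesian points. As a rough illustration of one of its ingredients, the orientation search, here is a simplified sketch; it assumes a full 360° scan with uniform bearing spacing and does a brute-force search, whereas the real algorithm also estimates translation and handles interpolation and occlusions:

```python
import numpy as np

def match_orientation(ref_ranges, cur_ranges):
    """Brute-force orientation search between two polar scans: try
    candidate rotations (bearing-bin shifts) of the current scan and
    keep the one that minimises the range disagreement with the
    reference scan. Simplified stand-in for PSM's orientation step."""
    ref = np.asarray(ref_ranges)
    cur = np.asarray(cur_ranges)
    n = len(ref)
    best_shift, best_err = 0, np.inf
    for shift in range(-n // 4, n // 4):    # search window: +/- 90 degrees
        rolled = np.roll(cur, shift)        # rotate the current scan
        err = np.mean((ref - rolled) ** 2)  # range disagreement
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift * (2 * np.pi / n)     # shift in bins -> radians
```

With real LMS100 data (270° field of view, noise, occlusions, moving objects) the matching step needs gating and interpolation rather than a plain roll, but the underlying idea of sliding one polar scan against the other is the same.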

Let’s see how we proceed. 🙂
