Robotic mapping

Robotic mapping is a discipline related to computer vision[1] and cartography. The goal for an autonomous robot is to be able to construct (or use) a map (for outdoor use) or floor plan (for indoor use) and to localize itself, its recharging bases, and beacons within it. Robotic mapping thus covers the ability of an autonomous robot to localize itself in a map or floor plan and, in some cases, to construct that map or floor plan itself.

Evolutionarily shaped blind action may suffice to keep some animals alive. Some insects, for example, do not interpret the environment as a map and survive only through triggered responses. A slightly more elaborate navigation strategy dramatically enhances a robot's capabilities: cognitive maps enable planning and the combined use of current perceptions, memorized events, and expected consequences.

Operation

The robot has two sources of information: idiothetic and allothetic. When in motion, a robot can use dead reckoning methods such as tracking the number of revolutions of its wheels; this corresponds to the idiothetic source and can give an estimate of the robot's position, but it is subject to cumulative error, which can grow quickly.
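
A minimal sketch of wheel-odometry dead reckoning for a differential-drive robot is shown below; the wheel radius, wheel base, and encoder resolution are illustrative values, not those of any particular platform.

```python
import math

def dead_reckon(x, y, theta, left_ticks, right_ticks,
                ticks_per_rev=360, wheel_radius=0.05, wheel_base=0.30):
    """Update an (x, y, theta) pose estimate from wheel encoder ticks
    using a simple differential-drive model (illustrative parameters only)."""
    # Distance travelled by each wheel since the last update.
    d_left = 2 * math.pi * wheel_radius * left_ticks / ticks_per_rev
    d_right = 2 * math.pi * wheel_radius * right_ticks / ticks_per_rev

    d_center = (d_left + d_right) / 2.0          # forward motion
    d_theta = (d_right - d_left) / wheel_base    # change in heading

    # Integrate the motion increment into the pose estimate.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

Because each update integrates the previous estimate, any small error in the measured tick counts or in the assumed wheel geometry is carried forward, which is why the cumulative error of dead reckoning grows over time.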

The allothetic source corresponds to the robot's sensors, such as a camera, a microphone, laser, lidar, or sonar.[citation needed] The problem here is "perceptual aliasing": two different places can be perceived as the same. For example, in a building, it is nearly impossible to determine a location solely from visual information, because all the corridors may look the same.[2] Three-dimensional models of a robot's environment can be generated using range imaging sensors[3] or 3D scanners.[4][5]

Map representation

The internal representation of the map can be "metric" or "topological":[6]

  • The metric framework is the most common for humans. It considers a two-dimensional space in which objects are placed with precise coordinates. This representation is very useful, but it is sensitive to noise, and calculating distances precisely is difficult.
  • The topological framework considers only places and the relations between them. Often, the distances between places are stored as well. The map is then a graph in which the nodes correspond to places and the arcs to the paths between them (see the sketch after this list).
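
As a sketch of the topological framework, such a map can be stored as a plain adjacency structure; the place names and distances below are purely illustrative.

```python
# Minimal topological map: nodes are named places, edges are traversable
# paths annotated with an approximate distance in metres.
topological_map = {
    "charging_base": {"corridor_A": 4.0},
    "corridor_A": {"charging_base": 4.0, "kitchen": 6.5, "corridor_B": 3.0},
    "corridor_B": {"corridor_A": 3.0, "office": 2.5},
    "kitchen": {"corridor_A": 6.5},
    "office": {"corridor_B": 2.5},
}

def neighbours(place):
    """Places directly reachable from `place`, with their stored distances."""
    return topological_map.get(place, {})
```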

Many techniques use probabilistic representations of the map in order to handle uncertainty.

There are three main methods of map representation: free-space maps, object maps, and composite maps. These employ the notion of a grid but permit the resolution of the grid to vary, so that it becomes finer where more accuracy is needed and coarser where the map is uniform.
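
A minimal sketch of a variable-resolution grid is given below, using a quadtree-style cell that subdivides only where obstacles are observed; the cell sizes and the insertion rule are assumptions made for illustration, not a specific published scheme.

```python
class QuadCell:
    """A square map cell that subdivides only where finer detail is needed,
    so uniform regions stay coarse and cluttered regions become finer."""

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.occupied = False     # True once an obstacle falls in a leaf cell
        self.children = []        # empty for a leaf cell

    def insert_obstacle(self, ox, oy, min_size=0.25):
        """Mark the cell containing (ox, oy) as occupied, refining down to
        min_size so the grid is fine only around observed obstacles."""
        # Ignore points outside this cell.
        if not (self.x <= ox < self.x + self.size and
                self.y <= oy < self.y + self.size):
            return
        if self.size <= min_size:
            self.occupied = True          # finest resolution reached
            return
        if not self.children:             # subdivide into four quadrants
            half = self.size / 2.0
            self.children = [QuadCell(self.x + dx, self.y + dy, half)
                             for dx in (0.0, half) for dy in (0.0, half)]
        for child in self.children:
            child.insert_obstacle(ox, oy, min_size)

# A 16 m x 16 m map that only refines around a single observed obstacle.
root = QuadCell(0.0, 0.0, 16.0)
root.insert_obstacle(3.2, 7.7)
```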

Map learning

Map learning cannot be separated from the localization process, and a difficulty arises when errors in localization are incorporated into the map. This problem is commonly referred to as simultaneous localization and mapping (SLAM).
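
One common way to organize a SLAM problem is as a pose graph, in which nodes are pose estimates and edges are relative-motion constraints. The sketch below is only a data-structure illustration; a real system would pass such a graph to a back-end optimizer (for example a nonlinear least-squares solver) to estimate all poses jointly.

```python
import math

class PoseGraph:
    """Poses estimated from odometry, plus constraints between poses that a
    SLAM back end would optimize jointly to keep map and trajectory consistent."""

    def __init__(self):
        self.poses = {0: (0.0, 0.0, 0.0)}   # node id -> (x, y, theta) estimate
        self.edges = []                      # (from_id, to_id, dx, dy, dtheta)

    def add_odometry(self, dx, dy, dtheta):
        """Chain a new pose onto the last one using an odometry increment
        expressed in the previous pose's frame."""
        last_id = max(self.poses)
        x, y, theta = self.poses[last_id]
        new_pose = (x + dx * math.cos(theta) - dy * math.sin(theta),
                    y + dx * math.sin(theta) + dy * math.cos(theta),
                    theta + dtheta)
        new_id = last_id + 1
        self.poses[new_id] = new_pose
        self.edges.append((last_id, new_id, dx, dy, dtheta))
        return new_id

    def add_loop_closure(self, from_id, to_id, dx, dy, dtheta):
        """Constraint between two non-consecutive poses; this is what allows
        an optimizer to correct the drift accumulated by odometry alone."""
        self.edges.append((from_id, to_id, dx, dy, dtheta))
```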

An important additional problem is determining whether the robot is in a part of the environment that is already stored or one it has never visited. One way to solve this problem is to use electric beacons, near-field communication (NFC), Wi-Fi, visible light communication (VLC), Li-Fi, or Bluetooth.[7]
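
A toy illustration of beacon-based place recognition follows: if the set of beacon identifiers currently in range overlaps strongly with a stored place, the robot assumes it has returned there. The overlap threshold and data layout are assumptions made for the example.

```python
def recognise_place(visible_beacons, known_places, min_overlap=0.6):
    """Return the name of a previously stored place whose beacon set best
    matches the beacons currently in range, or None for a new area."""
    visible = set(visible_beacons)
    best_place, best_score = None, 0.0
    for place, beacons in known_places.items():
        # Jaccard overlap between the stored and currently visible beacon IDs.
        overlap = len(visible & beacons) / max(len(visible | beacons), 1)
        if overlap > best_score:
            best_place, best_score = place, overlap
    return best_place if best_score >= min_overlap else None

known_places = {"kitchen": {"b1", "b2"}, "office": {"b3", "b4", "b5"}}
print(recognise_place(["b3", "b4"], known_places))   # 'office'
```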

Path planning

Path planning is an important issue, as it allows a robot to get from point A to point B. Path planning algorithms are measured by their computational complexity. The feasibility of real-time motion planning depends on the accuracy of the map (or floor plan), on robot localization, and on the number of obstacles. Topologically, the problem of path planning is related to the shortest path problem of finding a route between two nodes in a graph.
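
Since the topological version of the problem reduces to shortest paths in a weighted graph, a standard algorithm such as Dijkstra's applies directly; the sketch below uses a hypothetical three-node graph with edge lengths in metres.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted graph of places, returning the
    lowest-cost route from start to goal, or None if the goal is unreachable."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return None

# Example: a small graph of places with edge lengths in metres.
graph = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 1.5},
    "C": {"A": 5.0, "B": 1.5},
}
print(shortest_path(graph, "A", "C"))   # (3.5, ['A', 'B', 'C'])
```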

Robot navigation

Outdoor robots can use GPS in a similar way to automotive navigation systems.

For indoor robots, alternative systems can use a floor plan and beacons instead of maps, combined with wireless localization hardware.[8] Electric beacons can help build inexpensive robot navigation systems.
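
As an illustration of beacon-based indoor localization, the sketch below estimates a 2D position from three beacons at known positions and measured ranges by solving the linearized circle equations; the beacon layout and ranges are invented for the example, and real systems must also cope with noisy range measurements.

```python
def trilaterate(beacons, ranges):
    """Estimate a 2D position from three beacons at known positions and the
    measured distances to them, via the linearized circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # Linear system A * [x, y]^T = b obtained by subtracting circle equations.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("Beacons are collinear; position is ambiguous.")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Beacons at three corners of a 4 m x 3 m room; ranges measured to the robot.
print(trilaterate([(0, 0), (4, 0), (0, 3)], [2.5, 2.5, 2.0]))   # (2.0, 1.875)
```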

References

  1. ^ Fernández-Madrigal, Juan-Antonio (2012). Simultaneous Localization and Mapping for Mobile Robots: Introduction and Methods. IGI Global. ISBN 978-1-4666-2105-3.
  2. ^ Filliat, David; Meyer, Jean-Arcady (2003). "Map-based navigation in mobile robots: I. A review of localization strategies". Cognitive Systems Research. 4 (4): 243–282.
  3. ^ Jensen, Björn; et al. (2005). Laser range imaging using mobile robots: From pose estimation to 3D-models. ETH-Zürich.
  4. ^ Surmann, Hartmut; Nüchter, Andreas; Hertzberg, Joachim (2003). "An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments". Robotics and Autonomous Systems. 45 (3–4): 181–198.
  5. ^ Malik, Aamir Saeed (2011). Depth Map and 3D Imaging Applications: Algorithms and Technologies. IGI Global. ISBN 978-1-61350-327-0.
  6. ^ Thrun, Sebastian (1998). "Learning metric-topological maps for indoor mobile robot navigation". Artificial Intelligence. 99 (1): 21–71.
  7. ^ "Your partner in creating smart indoor spaces". IndoorAtlas.
  8. ^ "An Autonomous Passive RFID-Assisted Mobile Robot System for Indoor Positioning" (PDF). Retrieved 19 October 2015.