
Visual familiarity for navigation

In recent years the idea that visual navigation can be implemented by a simple familiarity algorithm has become useful as a baseline hypothesis. The premise is that navigation can be driven by selecting the direction that maximises visual familiarity relative to an unstructured set of egocentric views. Here, Doug and Brad (Gaffin and Brayfield, if you like) show how algorithms of this type can also be used in indoor environments, which has implications for autonomous robot technologies.
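The core of a familiarity-based scheme of this kind can be sketched in a few lines. The sketch below is our own minimal illustration, not the paper's implementation: it assumes a stored set of training views and a panoramic current view, uses a simple mean absolute pixel difference as the (un)familiarity measure, and the function names are hypothetical.

```python
import numpy as np

def familiarity(view, memory_views):
    """Familiarity = negative of the smallest mean pixel-wise
    difference between the current view and any stored view."""
    diffs = [np.mean(np.abs(view - m)) for m in memory_views]
    return -min(diffs)

def best_heading(panorama, memory_views, n_headings=36):
    """Rotate the current panoramic view through candidate headings
    and return the heading (degrees) that looks most familiar."""
    width = panorama.shape[1]
    best_deg, best_score = 0.0, -np.inf
    for i in range(n_headings):
        # Simulate turning by circularly shifting the panorama.
        shift = int(i * width / n_headings)
        rotated = np.roll(panorama, shift, axis=1)
        score = familiarity(rotated, memory_views)
        if score > best_score:
            best_deg, best_score = 360.0 * i / n_headings, score
    return best_deg
```

The agent never needs to know *where* it is: it simply turns to whichever heading makes the world look most like something it has seen before, which is what makes the approach so parsimonious.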

Gaffin DD, Brayfield BP (2016) Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm. PLoS ONE 11(4): e0153706. doi:10.1371/journal.pone.0153706

Categories: Papers from 2016