
Global and Local Scene Encoding

It is well understood that a key navigational mechanism for insects involves learning visual information from panoramic scenes. This leaves us with a basic question: how do insects encode visual scenes for navigation? Computational studies have shown that visual navigation can be achieved with: (i) raw images; (ii) sets of local visual features (such as oriented contrast edges), extracted from an image and tagged with retinal position; (iii) sparse encodings, where an entire scene is reduced to a few simple parameters that each describe a global property of the whole scene (such as its centre of mass).
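To make option (iii) concrete, a global scene parameter such as the centre of mass can be computed from a binarised panoramic image in a few lines. This is an illustrative sketch, not code from any of the cited studies; the function name and the toy scene are assumptions.

```python
import numpy as np

def centre_of_mass(scene):
    """Global encoding: reduce a binary panoramic image (rows x columns)
    to a single parameter, the horizontal centre of mass of the shape."""
    scene = np.asarray(scene, dtype=float)
    cols = np.arange(scene.shape[1])
    total = scene.sum()
    if total == 0:
        return None  # empty scene: no shape to encode
    # weight each column index by how much "shape" it contains
    return float((scene.sum(axis=0) * cols).sum() / total)

# A 3x6 toy scene with shape mass concentrated in columns 3-5
scene = [[0, 0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1, 1],
         [0, 0, 0, 0, 0, 0]]
print(centre_of_mass(scene))  # columns 3, 4, 5 equally weighted -> 4.0
```

Note how the entire image collapses to one number; this is what makes such encodings "sparse" compared with storing raw pixels or tagged local features.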

However, the fine details of real ants' scene encoding remain a mystery. In Lent et al. (2013), we present evidence that ants use both local and global scene features when navigating by vision. The major finding is that as ants aim at a particular point in a scene, they learn the proportions of the shape falling in their left and right visual fields. This simple ratio can then be used to guide paths to previously unseen shapes. We also see evidence of ants using local features; the interesting question will be how these mechanisms relate to each other within the dynamic learning process.
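The left/right ratio mechanism described above can be sketched as a simple control loop: store the fraction of shape mass in the left visual field while facing the aiming point, then steer to re-establish that ratio on later views. The function names, the proportional steering rule, and the sign convention are all illustrative assumptions, not the paper's model.

```python
def left_right_ratio(view):
    """Fraction of shape mass in the left half of a 1-D panoramic view.
    `view` is a list of per-column shape intensities, left to right."""
    mid = len(view) // 2
    total = sum(view)
    if total == 0:
        return 0.5  # empty view: treat as balanced
    return sum(view[:mid]) / total

def steering_signal(current_view, stored_ratio, gain=1.0):
    """Imbalance between the current ratio and the learned one.
    A positive value means too much of the shape sits in the left field."""
    return gain * (left_right_ratio(current_view) - stored_ratio)

# Ratio learned while facing the goal: shape balanced across both fields
stored = left_right_ratio([0, 1, 1, 0])          # -> 0.5
# A later view of a previously unseen shape, skewed into the left field
print(steering_signal([2, 1, 1, 0], stored))     # -> 0.25, shape-left excess
```

Because the stored quantity is a ratio rather than a stored image, the same guidance rule transfers to novel shapes, which is exactly the generalisation the behavioural experiments probe.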
David D. Lent, Paul Graham and Thomas S. Collett (2013) Visual scene perception in navigating wood ants. Current Biology, 22 April 2013, Vol. 23, Issue 8, pp. 684-690.
Categories: Papers from 2013
