
Do ants need to recognise visual features at all retinal positions?

The goal of snapshot matching is to compare one’s currently experienced visual scene with a remembered view and then move so as to minimise the difference. Computational models have demonstrated that, in theory, this could be achieved in many ways; two of the most distinct are: (1) identifying corresponding features in the current and remembered scenes and then calculating the movement that would remove the retinotopic discrepancy; or (2) monitoring a running image-difference score between the current and remembered views and using gradient descent to minimise that difference. Unfortunately, these two strategies are hard to distinguish behaviourally. However, a recent discovery about the saccade-like course corrections made by wood ants could shed some light on the machinery of “snapshot” guidance. Here is what PNAS had to say about the article in their news section:

“Previous studies have shown that some insects use remembered visual landmarks to navigate foraging routes. David Lent et al. (pp. 16348–16353) trained wood ants to navigate across a featureless area to a sucrose reward, which was situated beneath a computer monitor displaying a light-dark, vertical edge as a solitary landmark. The authors tracked the ants and found that the insects appeared to stay on course by periodically making rapid turns to face the reward’s location. After determining that the ants’ initial rotation speed was correlated to the magnitude of the turn, the authors surmised that the ants knew prior to turning where on their retina to position the light-dark edge. To test this possibility, the authors simulated heading changes en route—from the ants’ perspective—by shifting the computer-controlled landmark horizontally by various predetermined distances. In response, the authors report, the ants turned by the amount needed to offset the perceived course deviations. These findings, according to the authors, provide evidence that ants compare their surroundings to stored images of known landmarks, and remove any differences by adjusting their angle of approach.” – PNAS “In this issue”.
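The key finding in the quoted summary — that the ants turned by exactly the amount needed to offset an imposed landmark shift — fits the feature-based strategy: if the edge’s remembered retinal position is known, the required saccade is simply the angular discrepancy between where the edge is and where it should be. A minimal sketch of that computation (the function name and sign convention are my own, not from the paper):

```python
def corrective_turn(current_edge_deg, stored_edge_deg):
    """Turn amplitude (degrees) that would restore a landmark edge to its
    remembered retinal position; result is wrapped into [-180, 180)."""
    turn = stored_edge_deg - current_edge_deg
    return (turn + 180.0) % 360.0 - 180.0
```

For example, if the edge is remembered at 10° on the retina and the experimenter shifts the landmark so it now sits at 30°, the required turn is −20° — precisely the correction the ants were observed to make.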

David Lent, Paul Graham and Tom Collett (2010) Image-matching during ant navigation occurs through saccade-like body turns controlled by learned visual features. Proc Natl Acad Sci USA. 107, 16348-16353.
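For contrast, the second strategy described above — gradient descent on a running image-difference score — can be sketched as follows. This is a toy illustration, not the authors’ model: the “retinal image” here is just a vector of bearings to a few hypothetical landmarks, standing in for a real panorama, and all names and parameters are my own.

```python
import numpy as np

# Hypothetical landmark positions; a stand-in for real panoramic structure.
LANDMARKS = np.array([[10.0, 0.0], [0.0, 10.0], [-8.0, -5.0]])

def toy_view(pos):
    """Toy 'retinal image': the vector of bearings (radians) to the fixed
    landmarks, as seen from position `pos`."""
    d = LANDMARKS - np.asarray(pos, dtype=float)
    return np.arctan2(d[:, 1], d[:, 0])

def image_difference(current, snapshot):
    """Simple image-difference score: root-mean-square pixel difference."""
    return np.sqrt(np.mean((current - snapshot) ** 2))

def descend_difference(view_fn, snapshot, start, step=0.5, n_steps=100):
    """Strategy (2): greedy descent on the image-difference score.
    At each step, sample small test moves (including staying put) and keep
    whichever position yields the view most similar to the snapshot."""
    pos = np.asarray(start, dtype=float)
    moves = [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step), (0.0, 0.0)]
    for _ in range(n_steps):
        candidates = [pos + np.array(m) for m in moves]
        scores = [image_difference(view_fn(c), snapshot) for c in candidates]
        pos = candidates[int(np.argmin(scores))]
    return pos

# Memorise the view at the goal, then home in from a displaced start.
snapshot = toy_view((0.0, 0.0))
end = descend_difference(toy_view, snapshot, start=(4.0, 3.0))
```

Note that this agent never identifies which landmark is which — it only tracks a single scalar difference score — which is exactly why its overt behaviour can resemble that of a feature-matching agent, and why the saccade evidence above is informative.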

Categories: Papers from 2010