Do ants learn what’s unfamiliar?

Experiments with a variety of ant species have shown that ants displaced away from a familiar feeder will follow their Path Integration (PI) defined direction for only a portion of the home vector before beginning a systematic search. This behaviour is more pronounced in species living in cluttered environments. In this paper, Schwarz et al. look at the behaviour of naive and experienced ants in a similar situation. As expected, the experienced ants follow only a portion of their PI-defined route before searching. However, naive ants follow PI for significantly longer, demonstrating that cutting the home vector short before searching is not a species-level innate bias but depends on visual experience. The authors suggest that experienced ants are better able to identify a location as unfamiliar and therefore switch to search more rapidly. An alternative is that individuals with visual experience may be trying to apply their visual memories, unsuccessfully, in the unfamiliar location. This faulty visual guidance will not influence PI-defined paths early in a long home vector, when PI is weighted strongly, but may lead to disorganised (search-like) paths later in the path, once the PI vector is weaker.
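
The alternative reads naturally as a cue-weighting account. Here is a minimal sketch of that idea (our illustration, not the authors' model): steering is a weighted circular mean of the PI heading and a visual heading that is pure noise in the unfamiliar place, with the PI weight decaying as the home vector runs down. The decay rule and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def steer(pi_dir, pi_weight):
    """Weighted circular mean of a PI heading and a 'visual' heading.

    In an unfamiliar place the visual heading is effectively random,
    so its influence only shows once the PI weight has decayed
    (an illustrative assumption, not the paper's model).
    """
    visual_dir = rng.uniform(-np.pi, np.pi)   # useless visual guidance
    x = pi_weight * np.cos(pi_dir) + (1 - pi_weight) * np.cos(visual_dir)
    y = pi_weight * np.sin(pi_dir) + (1 - pi_weight) * np.sin(visual_dir)
    return np.arctan2(y, x)

pi_dir = 0.0                          # direction of the home vector
pos = np.zeros(2)
for step in range(100):
    w = max(0.0, 1.0 - step / 60)     # PI weight runs down with the vector
    th = steer(pi_dir, w)
    pos += np.array([np.cos(th), np.sin(th)])
# Early steps head straight 'home' (PI dominates); later steps
# wander, producing search-like paths without any explicit switch.
print(pos)
```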

Schwarz, S., Wystrach, A., & Cheng, K. (2017). Ants’ navigation in an unfamiliar environment is influenced by their experience of a familiar route. Scientific Reports, 7(1), 14161.

Categories: Papers from 2017

The view from above (or below)

Analysing the information available in views has become a staple of insect navigation research. Here, the authors take this into the third dimension, looking at the use of views in different planes.

Abstract: Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations because image differences increase smoothly with distance from a reference location and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its ‘catchment area’) has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the ‘catchment volumes’ within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.
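
The core measure is straightforward: the pixel-wise difference between a reference snapshot and the view at a test position, mapped over space. Below is a toy sketch of that computation, with synthetic panoramas standing in for the paper's rendered 3D-model views; the scene generator and all its parameters are illustrative assumptions.

```python
import numpy as np

def panorama(pos, n_px=180):
    """Toy panoramic 'view' from position pos: brightness bumps at the
    bearings of a few fixed landmarks, wider when the landmark is near.
    A stand-in for views rendered from the paper's 3D models."""
    landmarks = np.array([[10.0, 0.0], [0.0, 12.0], [-8.0, -5.0]])
    az = np.linspace(-np.pi, np.pi, n_px, endpoint=False)
    view = np.zeros(n_px)
    for lm in landmarks:
        d = lm - pos
        bearing = np.arctan2(d[1], d[0])
        dphi = (az - bearing + np.pi) % (2 * np.pi) - np.pi  # wrap bearings
        width = 0.2 + 1.0 / np.linalg.norm(d)                # nearer = larger
        view += np.exp(-dphi**2 / (2 * width**2))
    return view

ref = panorama(np.zeros(2))          # reference snapshot at the origin
xs = np.linspace(-5.0, 5.0, 21)
# Root-mean-square image difference over a slice of positions:
idf = np.array([[np.sqrt(np.mean((panorama(np.array([x, y])) - ref) ** 2))
                 for x in xs] for y in xs])
# The connected low-difference region around the origin is the 2D
# analogue of a catchment volume; repeating over z maps the volume.
print(idf[10].round(3))              # differences along the line y = 0
```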

Murray T, Zeil J (2017) Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes. PLoS ONE 12(10): e0187226.

Categories: Papers from 2017

Visually guided behaviour with small sets of neurons

A general problem in neuroscience is understanding how sensory systems organise information to be at the service of behaviour. Computational approaches can be useful for such studies as they allow one to simulate the sensory experience of a behaving animal whilst considering how sensory information should be encoded. In flies, small subpopulations of identifiable neurons are known to be necessary for particular visual tasks, and the response properties of these populations have now been described in detail. Surprisingly, these populations are small, with only 14 or 28 neurons each, which suggests something of a sensory bottleneck. In this paper, Dewar et al. consider how the population code from these neurons relates to the information required to control specific behaviours, concluding that flies are unlikely to possess a general-purpose pattern-learning ability. However, implicit information about the shape and size of objects, which is necessary for many ecologically important visually guided behaviours, does pass through the sensory bottleneck. These findings show that nervous systems can be particularly economical when specific populations of cells are paired with specific visual behaviours.
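
To make the bottleneck concrete, here is a toy illustration (ours, not the authors' model of the fly neurons): a 14-cell population in which each cell simply sums luminance over one broad vertical stripe of the visual field. Such a code cannot separate patterns that put the same mass in each stripe, yet it still carries an object's horizontal position and extent.

```python
import numpy as np

N_CELLS = 14                       # population size, as in the fly data
WIDTH = 140                        # toy retina width in pixels

def population_response(image):
    """Each cell sums luminance over one broad vertical stripe of the
    visual field -- a deliberately coarse code (illustrative only)."""
    stripes = np.array_split(image, N_CELLS, axis=1)
    return np.array([s.sum() for s in stripes])

retina = np.zeros((40, WIDTH))

# Two different patterns at the same location...
cross = retina.copy(); cross[18:22, 30:40] = 1; cross[10:30, 34:36] = 1
block = retina.copy(); block[12:28, 31:39] = 1
block *= cross.sum() / block.sum()   # equalise total luminance

# ...are indistinguishable through the bottleneck:
print(population_response(cross).round(1))
print(population_response(block).round(1))

# But an object's horizontal position survives it:
shifted = np.roll(cross, 60, axis=1)
print(population_response(shifted).round(1))
```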

Dewar ADM, Wystrach A, Philippides A, Graham P (2017) Neural coding in the visual system of Drosophila melanogaster: How do small neural populations support visually guided behaviours? PLoS Comput Biol 13(10): e1005735. https://doi.org/10.1371/journal.pcbi.1005735

Categories: Papers from 2017

Did you see where it went?

We don’t feature many techniques papers on this blog, but here is a brilliant one which may open up new experimental opportunities. The computer vision technique allows insect positions to be extracted from cluttered backgrounds, so insects can be tracked without specially prepared observation areas or specialist cameras. It’s fantastic.

Risse, B., Mangan, M., Del Pero, L., & Webb, B. (2017). Visual Tracking of Small Animals in Cluttered Natural Environments Using a Freely Moving Camera. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2840-2849).

Categories: Papers from 2017

How does path integration work?

Path Integration has teased neurobiologists for a long while; the mathematically well-defined process has drawn in those who thought they could propose how neural circuits might implement the underlying calculations. Elegant models have been proposed, but there has always been a lack of information about the neural substrate of PI. This paper changes that: Stone et al. show that, to augment celestial compass information, bees have neurons that respond to the bee’s velocity, as provided by optic flow. With this knowledge of how bees process direction and speed, Stone et al. were able to propose a PI network that incorporates known features of the insect brain.
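
Stripped of its neural implementation, the computation is just integrating a velocity vector: at each step, take heading from the celestial compass and speed from optic flow, and add the displacement to a running home vector. A minimal, brain-agnostic sketch follows; the paper's contribution is precisely how central-complex circuitry could implement this, so the code shows only the abstract calculation.

```python
import numpy as np

class PathIntegrator:
    """Textbook path integration: accumulate displacement from a
    compass heading and a speed estimate. Deliberately abstract --
    Stone et al.'s point is how real neurons could implement this."""

    def __init__(self):
        self.home_vector = np.zeros(2)

    def step(self, heading, speed, dt=1.0):
        # heading: radians, from the celestial compass
        # speed: from optic flow
        self.home_vector += speed * dt * np.array(
            [np.cos(heading), np.sin(heading)])

    def homeward_direction(self):
        # Steer opposite the accumulated outbound vector
        return np.arctan2(-self.home_vector[1], -self.home_vector[0])

pi = PathIntegrator()
for heading, speed in [(0.0, 1.0), (np.pi / 2, 2.0), (np.pi / 4, 1.5)]:
    pi.step(heading, speed)
print(pi.home_vector, pi.homeward_direction())
```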

Personally, I like the style of this paper: modelling is used to bridge known and unknown physiological elements and to propose a function.

Stone, T., Webb, B., Adden, A., Weddig, N. B., Honkanen, A., Templin, R., Wcislo, W., Scimeca, L., Warrant, E., & Heinze, S. (2017). An Anatomically Constrained Model for Path Integration in the Bee Brain. Current Biology.

Categories: Papers from 2017

Navigation in the brain

The last few years have been exciting times in insect navigation as we begin to unravel the neural underpinnings of navigation behaviours. Here Stanley Heinze gives us a review of what we know about one particular brain region (the Central Complex) across a variety of insects. He nicely relates the requirements of different insect behaviours to differences in Central Complex organisation, highlighting how an evolutionary approach to neuroscience can be as valuable as the study of model systems.

Heinze, S. (2017). Unraveling the neural basis of insect navigation. Current Opinion in Insect Science.

Categories: Papers from 2017

Mushroom Body models for navigation

One of the most exciting prospects for the next few years of insect navigation research is the increasing availability of technology for reconstructing natural environments. This will improve the interpretation of experimental results and allow for more sophisticated and meaningful modelling. This paper from Müller et al. is an example of what we can expect. Following work from Barbara Webb’s group, they use a Mushroom Body (MB) implementation of the familiarity model of visual navigation (Baddeley et al., 2012) to investigate visual navigation by honeybees in a reconstruction of terrain used for many previous honeybee experiments. They find that the basic MB model can drive visual navigation, although the specific environment type is important for the model’s success in these experiments.
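
The familiarity model itself is simple to state: store views along a training route, then at recall scan across headings and move in the direction whose current view looks most familiar. Below is a minimal sketch of that scanning idea, with a nearest-stored-view score standing in for the MB network's learned familiarity signal; the class name and the 72-pixel panorama are illustrative assumptions.

```python
import numpy as np

class FamiliarityNavigator:
    """Minimal sketch of familiarity-based navigation (Baddeley et
    al., 2012 flavour): store training views, score a test view by its
    distance to the nearest stored one, and steer in the best-scoring
    scan direction. The MB network learns an equivalent familiarity
    signal with a sparse associative memory instead of raw storage."""

    def __init__(self):
        self.memory = []

    def train(self, view):
        self.memory.append(np.asarray(view, dtype=float))

    def unfamiliarity(self, view):
        return min(np.sum((m - view) ** 2) for m in self.memory)

    def best_heading(self, view, n_scan=36):
        """Rotate the panoramic view and pick the most familiar heading."""
        n_px = len(view)
        step = n_px // n_scan
        scores = [self.unfamiliarity(np.roll(view, s))
                  for s in range(0, n_px, step)]
        return 2 * np.pi * int(np.argmin(scores)) * step / n_px

nav = FamiliarityNavigator()
rng = np.random.default_rng(1)
route_view = rng.random(72)          # toy 72-pixel panorama
nav.train(route_view)
# A rotated copy of the training view is recognised: the scan recovers
# the rotation (here 54 px, i.e. -18 px) that realigns it with memory.
print(nav.best_heading(np.roll(route_view, 18)))
```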

Müller, J., Nawrot, M., Menzel, R., & Landgraf, T. (2017). A neural network model for familiarity and context learning during honeybee foraging flights. Biological Cybernetics, 1-14.

Categories: Papers from 2017