Foraging is a fundamental aspect of animal behavior that has implications for predator-prey interactions, competition, and studies of animal cognitive abilities. Animal foraging may be as dramatic as a lion stalking a gazelle, or as mundane as a barnacle filtering plankton from sea-water. Foraging theory seeks to bring order to this diversity by recognizing and analyzing the common problems faced by foraging animals. Foraging theory, or optimal foraging theory as it was originally known, has its origins in seminal papers published in the 1960s and 1970s by Schoener, Charnov, Parker, MacArthur and Pianka, Pulliam, and Emlen. Taken together, these papers produced a remarkably cohesive body of theory based on two common foraging problems: patch exploitation and prey selection.
Both of these basic models assume that a forager encounters items (prey or patches) one at a time, according to some well-behaved process (often a Poisson process). For example, if a forager encounters prey items at rate A, and each item takes h time units to handle and provides e calories of energetic benefit when consumed, then Holling's disk equation gives the rate of energy intake, R:

R = e / (1/A + h) = Ae / (1 + Ah),

where A gives the encounter rate, so that 1/A is the expected time between encounters with prey items. The form of the equation on the left-hand side shows that Holling's disk equation is simply the expected energetic gain per encounter divided by the expected time (search time plus handling time) per encounter. One can generalize this basic structure to include multiple prey types, or to situations where the forager encounters patches instead of prey items. Both models find the foraging behavior that maximizes the rate of energy intake as given by Holling's disk equation, but they focus on different aspects of foraging behavior. The following sections outline the prey and patch models.
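The two forms of the disk equation, and its multi-type generalization, can be checked numerically. The sketch below is a minimal illustration; the function names and all numbers are my own, not from the text.

```python
# Holling's disk equation for a single prey type with encounter rate lam,
# energy value e, and handling time h.
def intake_rate(e, h, lam):
    """Rate of energy intake: expected gain per encounter divided by the
    expected time (search time 1/lam plus handling time h) per encounter."""
    return e / (1.0 / lam + h)

# Generalization to several prey types, each attacked with probability p[i].
def intake_rate_multi(e, h, lam, p):
    gains = sum(pi * li * ei for pi, li, ei in zip(p, lam, e))
    time = 1.0 + sum(pi * li * hi for pi, li, hi in zip(p, lam, h))
    return gains / time

# Single-type check: both algebraic forms agree (hypothetical numbers).
r1 = intake_rate(10.0, 2.0, 0.5)                      # e/(1/A + h)
r2 = intake_rate_multi([10.0], [2.0], [0.5], [1.0])   # Ae/(1 + Ah)
```

With e = 10, h = 2, and A = 0.5, both forms give 10/(2 + 2) = 2.5 calories per time unit.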
As described above, this model assumes that we can associate an energy value e, a handling time h, and an encounter rate A with each prey type. The model solves for the attack probabilities (one for each prey type) that maximize the rate of energy intake. The model makes three predictions: (1) A forager should always take or always ignore a given prey type. Foraging models call this the zero-one rule, because it follows from the mathematical observation that the rate-maximizing attack probabilities can only be zero or one. (2) Prey types should be ranked by their profitabilities, which we define to be quotients of the form e/h. That is, the prey type with the highest energy-to-handling-time quotient is the 'best' type (rank 1), the next highest is rank 2, and so on. (3) One can determine the set of prey that maximizes intake rate by working through the possible 'diets' in rank order: first a diet of rank 1 prey only, then a diet of rank 1 and rank 2 prey, and so on. Obviously, one can use Holling's disk equation to calculate the rate of energy intake for each of these diets. We can show mathematically that if, for example, a diet consisting of types 1-3 yields an intake rate smaller than the profitability of the fourth-ranked prey type, e4/h4, then adding this fourth-ranked type will increase the intake rate. Therefore, to find the rate-maximizing diet, we simply add prey types to the diet in rank order until this is no longer true.
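The rank-order inclusion algorithm described above can be sketched in a few lines. This is a hypothetical illustration; the function name and all numbers are my own, not from the text.

```python
def best_diet(prey):
    """prey: list of (e, h, lam) tuples (energy, handling time, encounter
    rate). Returns the rate-maximizing diet and its intake rate."""
    # Rank prey types by profitability e/h, best first.
    ranked = sorted(prey, key=lambda x: x[0] / x[1], reverse=True)

    def rate(diet):
        # Holling's disk equation for the current diet.
        gains = sum(lam * e for e, h, lam in diet)
        time = 1.0 + sum(lam * h for e, h, lam in diet)
        return gains / time

    diet = []
    for e, h, lam in ranked:
        # Include the next type iff its profitability exceeds the intake
        # rate achievable from the higher-ranked types alone.
        if e / h > rate(diet):
            diet.append((e, h, lam))
        else:
            break
    return diet, rate(diet)

# Hypothetical example: a profitable, rare type plus a mediocre, common one.
diet, r = best_diet([(10.0, 2.0, 0.1), (4.0, 2.0, 1.0)])
# Both types enter the diet here because the best type is rare.
```

Note that the encounter rate of the lower-ranked type never enters the inclusion test, which is exactly the 'abundance of the second-best type does not matter' result discussed below.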
This result focuses our attention on the properties of the 'best' or first-ranked prey item. Obviously, a forager should always attack this best item upon encounter, and the properties of this type determine whether the forager should attack the second-best item. Specifically, if the best type is abundant (has a high encounter rate), then a 'best-type-only' diet may make sense. If, however, the best type is rare, then it typically makes sense to add the second-best type to the diet. Notice that the abundance of the second-best type is not important in this reasoning! Some investigators find this result counterintuitive, because they can imagine situations in which superabundant but mediocre prey items attract a forager's attention.
The patch model assumes that a forager encounters patches rather than prey items. While we characterize a prey item by its available energy and handling time, we characterize a patch by its gain function. A gain function, g(t), gives the relationship between the time spent exploiting a patch (t) and the energy gains extracted from the patch (g). We assume that a forager can extract energy from a patch at a fairly high rate initially, but this rate declines as the forager spends more time in the patch, simply because the forager's exploitation depletes the patch. So, we typically draw gain functions as increasing curves that bend down. The patch model solves for the patch residence time (t) that maximizes intake rate. For a situation with only one patch type, one can easily show that the rate-maximizing patch residence time, say t*, satisfies

g'(t*) = g(t*) / (τ + t*),

where g'(t*) represents the derivative of the gain function with respect to patch residence time, and τ is the travel time between patches. Modelers call this condition the marginal-value theorem, because marginal rate is a synonym for derivative. The condition tells us that at the rate-maximizing residence time, the instantaneous (or marginal) rate of intake equals the overall rate of intake. Algebraically, this marginal-value condition can be difficult to solve, but it has a very simple graphical solution (Figure 1). The model predicts that in poor habitats (low overall rate of intake) foragers should spend more time in patches, extracting more from each patch, while in rich habitats foragers should 'skim the cream' from each patch - spend less time and extract less.
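The marginal-value condition can also be solved numerically. The sketch below assumes a hypothetical saturating gain function g(t) = G(1 - exp(-kt)) and uses simple bisection; the function names and all numbers are my own, not from the text.

```python
import math

def optimal_patch_time(g, dg, tau, lo=1e-6, hi=100.0, iters=100):
    """Bisect on f(t) = g'(t) - g(t)/(tau + t): for a decelerating gain
    function, f is positive for small t (marginal rate above the overall
    rate) and negative for large t, crossing zero at t*."""
    f = lambda t: dg(t) - g(t) / (tau + t)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical gain function and its derivative.
G, k = 10.0, 0.5
g = lambda t: G * (1 - math.exp(-k * t))
dg = lambda t: G * k * math.exp(-k * t)

t_short = optimal_patch_time(g, dg, tau=1.0)  # short travel time
t_long = optimal_patch_time(g, dg, tau=5.0)   # long travel time
# As the model predicts, the longer travel time (a poorer habitat)
# gives the longer patch residence time: t_long > t_short.
```

Raising the travel time lowers the habitat's overall intake rate, so the tangent condition is met later on the gain curve, which is the 'skim the cream' prediction in reverse.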
Notice that although these models seem to address quite different aspects of foraging behavior, they are logically similar. The marginal rate of intake in the patch model
Figure 1 The figure shows the graphical solution of the classical optimal patch exploitation model (the marginal-value theorem). The right-hand side of all three panels shows a curve that gives the relationship between time spent exploiting a patch (patch time, t) and the resources extracted from the patch (gains). This curve is the gain function g(t). The left-hand side of each panel shows travel time (τ) increasing to the left. (a) shows a given travel time (τ) and an arbitrarily chosen patch time (t). If we consider a slanting line from τ on the travel-time axis to the point [t, g(t)] on the right side, we see that the slope of this line, g(t)/(τ + t), is the rate of gain associated with patch time t. Clearly, the patch time shown in (a) is not the best, because we can increase this slope (the rate of intake) by choosing a larger patch time. (b) shows that the optimal patch time corresponds to the case where the slanting 'rate line' just touches (is tangent to) the gain function. (c) uses this solution to show the main prediction of the patch model: when travel times are long (say τ2), the tangent point corresponds to a long patch time (t2), but if travel times are short (e.g., τ1), then we predict a short patch time (t1).
plays a role that is very similar to the profitability of lower-ranked types in the diet model's inclusion algorithm. Both models maximize the long-term rate of energy intake, and the central properties of both models stem from opportunity costs. For example, it can never be a mistake to attack the highest-ranked item in the prey model, but it can be a
mistake to attack the lower ranked items because in doing so, a forager might miss an opportunity to attack a higher ranked item. The role of opportunity costs is even clearer in the patch model. The model predicts that foragers should 'skim the cream' from patches in rich habitats, because when a forager spends too much time in patches in a rich habitat, it loses opportunity to exploit fresh patches elsewhere.
Experimentalists and field workers have tested these models many times. Data broadly support many of their qualitative predictions. For example, virtually all patch-use studies support the prediction that foragers should spend more time exploiting patches in poor habitats. In addition, the claim that the abundance of high-quality prey types shapes animal selectivity is also well supported. Yet the zero-one rule almost never holds. Each investigator in this area has a different interpretation of the empirical results. Some are encouraged by the pattern of qualitative agreement, while others emphasize the quantitative failures. In modern foraging theory, these models play a role like that of the Lotka-Volterra models in population ecology. They may not apply to any given situation, but modelers and students of foraging need to understand them so that they can use them as starting points for new models, and recognize their predictions within more complex situations. Literally hundreds of studies have used these models as building blocks. The work of Pirolli and Card on 'information foraging' provides an especially novel example. These workers have used ideas from patch exploitation theory to analyze how computer users interact with websites and databases.
Most investigators recognize the patch and prey models as the historical foundation of foraging theory, but as the theory has developed over the last 20 years, these same investigators have come to recognize two more ideas as basic building blocks of foraging theory: the ideal free distribution and dynamic optimization.
Classical foraging theory, as represented by the patch and prey models, focused on the actions of a solitary forager. The ideal free distribution provides a framework for extending foraging theory to cases in which animals forage in groups. The ideal free distribution considers how the animals in a group should distribute themselves between two feeding sites. The foragers in this model are 'ideal' in the sense that they can immediately recognize the quality of sites, and they share the resources at a site equally; that is, if n individuals occupy a site, each obtains 1/nth of the available food. The foragers are 'free' because they can move between sites without cost. If site one produces food at rate r1, then the n1 individuals present there can expect to obtain food at rate r1/n1; similarly, individuals at site two can expect to obtain r2/n2. The ideal free distribution holds that a stable distribution of individuals among sites can only occur when the intake rates in the two sites are approximately equal, because if intake rates differ in the two sites, then individuals at the lower-rate site can do better by moving to the higher-rate site. Data support the ideal free model's predictions surprisingly well, even though its assumptions (e.g., equal sharing) are seldom satisfied. Although one can criticize the ideal free model at many levels, its focus on the economics of leaving and joining is fundamental to social foraging models.
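For two sites, the equal-intake condition has a closed-form solution: setting r1/n1 = r2/(N - n1) gives n1 = N r1/(r1 + r2). A minimal sketch, with hypothetical numbers:

```python
def ideal_free_split(N, r1, r2):
    """Stable split of N ideal free foragers between two sites producing
    food at rates r1 and r2, under equal sharing within a site."""
    n1 = N * r1 / (r1 + r2)   # solves r1/n1 == r2/(N - n1)
    n2 = N - n1
    return n1, n2

# Hypothetical example: site one is twice as productive as site two,
# so it should hold two-thirds of the foragers.
n1, n2 = ideal_free_split(30, r1=6.0, r2=3.0)
# Per-capita intake rates are then equal: 6/20 == 3/10.
```

Real groups contain whole animals, of course, so observed distributions can only approximate this continuous solution, which is one reason the model's predicted equality of intake rates holds only approximately.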
Dynamic optimization is not a model like the patch or prey model; instead, it is a technique for modeling foraging and life-history problems. The patch model is a single-variable maximization problem like those studied by calculus students (e.g., find the height of a tin can that maximizes its volume). Dynamic optimization allows us to solve for an optimal trajectory or function, rather than a single variable. Consider the problem of foraging in the presence of predators. Typically, rich feeding sites are also the riskiest, while poor feeding sites are safer. As a rule, larger animals experience a lower risk of predation than small ones. So a small animal choosing the poorer but safer feeding site is also choosing a smaller future body size, which could increase its risk of predation later on. In situations like this, we cannot consider the best action at time 1 without also understanding the implications for time 2, and so on. Two pairs of workers (Marc Mangel and Colin Clark, and Alasdair Houston and John McNamara) have pioneered the application of dynamic models to foraging and behavioral ecology. Investigators have applied dynamic optimization to the study of predator avoidance and food storage (e.g., caching) with notable success, and dynamic models clearly represent a step forward in sophistication. The downside is that dynamic optimization problems can be difficult to solve, and we must often resort to numerical techniques.
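The backward-induction logic behind such models can be sketched as follows. All states, gains, survival probabilities, and the horizon below are hypothetical illustrations of the technique, not values from the text: each period a forager chooses a safe, poor site or a risky, rich site, and terminal fitness increases with state (e.g., body size).

```python
def solve(T=10, max_state=20):
    """Backward induction over a state-dependent foraging problem.
    Returns the time-0 value function and the policy table, where
    policy[t][s] is the best choice in state s at time t."""
    # Each option: (energy gain per period, probability of surviving predation).
    options = {"safe": (1, 0.999), "risky": (3, 0.9)}
    # Terminal fitness: proportional to final state.
    V = {s: float(s) for s in range(max_state + 1)}
    policy = []
    for t in range(T - 1, -1, -1):
        newV, choice = {}, {}
        for s in range(max_state + 1):
            # Pick the option maximizing survival-weighted future value.
            name, (gain, surv) = max(
                options.items(),
                key=lambda kv: kv[1][1] * V[min(s + kv[1][0], max_state)],
            )
            choice[s] = name
            newV[s] = surv * V[min(s + gain, max_state)]
        V = newV
        policy = [choice] + policy
    return V, policy

V0, policy = solve()
# With these illustrative numbers, in the final period small foragers
# accept the risky site (growth is worth the risk), while foragers near
# the state cap play it safe.
```

The key feature, as in the predation example above, is that the best action depends on both the animal's state and the time remaining, so the solution is a policy (a trajectory of state-dependent rules) rather than a single number.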