Abstract:
Background: Organisms use a variety of mechanisms to protect themselves against perturbations. For example, repair mechanisms fix damage, feedback loops keep homeostatic systems at their setpoints, and biochemical filters distinguish signal from noise. Such buffering mechanisms are often discussed in terms of robustness, which may be measured by reduced sensitivity of performance to perturbations.

Methodology/Principal Findings: I use a mathematical model to analyze the evolutionary dynamics of robustness in order to understand aspects of organismal design by natural selection. I focus on two characters: one character performs an adaptive task; the other character buffers the performance of the first character against perturbations. Increased perturbations favor enhanced buffering and robustness, which in turn decreases sensitivity and reduces the intensity of natural selection on the adaptive character. Reduced selective pressure on the adaptive character often leads to a less costly, lower performance trait.

Conclusions/Significance: The paradox of robustness arises from evolutionary dynamics: enhanced robustness causes an evolutionary reduction in the adaptive performance of the target character, leading to a degree of maladaptation compared to what could be achieved by natural selection in the absence of robustness mechanisms. Over evolutionary time, buffering traits may become layered on top of each other, while the underlying adaptive traits become replaced by cheaper, lower performance components. The paradox of robustness has widespread implications for understanding organismal design.
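The core dynamic can be sketched numerically. The toy model below is my own illustration, not the paper's model: it assumes quadratic stabilizing selection toward an optimum theta, a linear cost on the adaptive character p, and a buffering parameter that scales the sensitivity s = 1 - b of fitness to deviations in p. Stronger buffering flattens the fitness surface around p, so selection tolerates a cheaper, lower value of the trait.

```python
# Toy sketch of the "paradox of robustness" (illustrative assumptions,
# not the paper's exact model): expected fitness is
#   W(p) = -(sensitivity * (p - theta))**2 - cost * p,
# where buffering reduces sensitivity of fitness to the trait p.

def optimal_trait(sensitivity, cost=0.2, theta=1.0):
    """Grid-search the trait value p in [0, 2] that maximizes W(p)."""
    grid = [i / 1000 for i in range(2001)]
    return max(grid, key=lambda p: -(sensitivity * (p - theta)) ** 2 - cost * p)

weak_buffering = optimal_trait(sensitivity=1.0)    # little robustness
strong_buffering = optimal_trait(sensitivity=0.4)  # strong robustness

# Stronger buffering weakens selection on p, so the evolved trait
# settles further below the unbuffered optimum theta = 1.0.
print(weak_buffering, strong_buffering)  # → 0.9 and 0.375
```

The analytic optimum is p* = theta - cost / (2 * sensitivity**2), so the evolved performance trait falls as buffering rises, which is the paradox in miniature.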

Abstract:
I examine mortality patterns by first plotting the data of mortality rate versus age on a log-log scale. The slope of the age-specific mortality rate at each age is the age-specific acceleration of mortality. About one-half of total deaths have causes with similar shapes for the age-specific acceleration of mortality: a steady rise in acceleration from midlife until a well-defined peak at 80 years, followed by a nearly linear decline in acceleration. This first group of causes includes heart disease, cerebrovascular disease, and accidental deaths. A second group, accounting for about one-third of all deaths, follows a different pattern of age-specific acceleration. These diseases show an approximately linear rise in acceleration to a peak at 35–45 years of age, followed by a steep and steady decline in acceleration for the remainder of life. This second group includes cancer, chronic respiratory diseases, and liver disease. I develop a multistage model of disease progression to explain the observed patterns of mortality acceleration.

A multistage model of disease progression can explain both the early-life increase and late-life decrease in mortality acceleration. An early-life rise in acceleration may be caused by increasing rates of transition between stages as individuals grow older. The late-life decline in acceleration may be caused by progression through earlier stages, leaving only a few stages remaining for older individuals.

Humans die at an increasing rate until late in life, when mortality rates level off. The causes of the late-life mortality plateau have been debated extensively over the past few years [1-6]. Here, I examine mortality patterns separately for each of the leading causes of death. The different causes of death show distinct mortality patterns, providing some clues about the varying acceleration of mortality at different ages [2,7]. For most causes of death, the acceleration in mortality rises until middle or late life, and then declines rapidly.
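The multistage logic can be checked with a minimal calculation. In the simplest version (my illustration, with hypothetical parameters rather than the paper's fitted values), an individual passes through n stages at a constant per-stage rate u and dies on leaving the final stage; the waiting time is then Erlang distributed. Early in life the mortality rate scales as t**(n-1), giving a log-log slope (acceleration) near n - 1, and late in life the hazard flattens toward u, so the acceleration declines:

```python
import math

def erlang_hazard(t, n, u):
    """Mortality rate at age t for an n-stage process with equal
    per-stage transition rate u; death occurs on leaving stage n."""
    pdf = u * (u * t) ** (n - 1) * math.exp(-u * t) / math.factorial(n - 1)
    surv = sum((u * t) ** k * math.exp(-u * t) / math.factorial(k)
               for k in range(n))
    return pdf / surv

def acceleration(t, n, u, eps=1e-4):
    """Log-log slope of mortality rate: d ln m / d ln t."""
    lo, hi = t * (1 - eps), t * (1 + eps)
    return ((math.log(erlang_hazard(hi, n, u)) -
             math.log(erlang_hazard(lo, n, u))) /
            (math.log(hi) - math.log(lo)))

n, u = 5, 0.05  # hypothetical: 5 stages, rate 0.05 per year
acc_early = acceleration(1.0, n, u)    # near n - 1 = 4 early in life
acc_late = acceleration(200.0, n, u)   # declines at late ages
print(acc_early, acc_late)
```

This reproduces the qualitative shape described above: acceleration starts high and falls late in life as individuals who have progressed through the early stages come to dominate the survivors.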

Abstract:
Microbes require several complex organic molecules for growth. A species may obtain a required factor by taking up molecules released by other species or by synthesizing the molecule. The patterns of uptake and synthesis determine the flow of resources among the multiple species that make up a microbial community. This article analyzes a simple mathematical model of the tradeoff between uptake and synthesis. Key factors include the influx rate from external sources relative to the outflux rate, the rate of internal decay within cells, and the cost of synthesis. Aspects of demography also matter, such as cellular birth and death rates, the expected time course of a local resource flow, and the associated lifespan of the local population. Spatial patterns of genetic variability and differentiation between populations may also strongly influence the evolution of metabolic regulatory controls of individual species and thus the structuring of microbial communities. The widespread use of optimality approaches in recent work on microbial metabolism has ignored demography and genetic structure.
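The tradeoff can be sketched with a deliberately minimal model (my own toy construction, not the article's equations): the internal concentration of the required factor settles at a steady state set by uptake of the external supply plus internal synthesis, divided by decay; growth saturates once the internal level is sufficient; and synthesis carries a linear cost.

```python
# Minimal uptake-vs-synthesis sketch. Assumptions (all hypothetical):
# steady-state internal level r = (uptake * external + synth) / decay,
# growth benefit saturates at r = 1, synthesis costs cost * synth.

def best_synthesis(external, uptake=1.0, decay=1.0, cost=0.3):
    """Grid-search the synthesis rate in [0, 2] maximizing net growth."""
    def net_growth(synth):
        r = (uptake * external + synth) / decay  # steady-state internal level
        return min(r, 1.0) - cost * synth        # saturating benefit minus cost
    grid = [i / 100 for i in range(201)]
    return max(grid, key=net_growth)

scarce = best_synthesis(external=0.0)  # scarce external supply: synthesize
rich = best_synthesis(external=2.0)    # rich external supply: rely on uptake
print(scarce, rich)
```

Even this stripped-down version shows the switch the abstract describes: synthesis pays only when the external influx is too low to meet demand, and the switch point shifts with the cost of synthesis and the decay rate.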

Abstract:
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background for understanding why these neutral generative models are so successful. I show how classic patterns, such as the Poisson and the Gaussian, arise; each of these classic patterns was originally discovered through a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present an informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern.

Abstract:
d'Alembert's principle describes the balance between two opposing forces. Directly applied forces change a system with respect to a fixed frame of reference. Inertial forces alter the frame of reference so that a system appears to be unchanged. In addition, the forces of constraint limit the possible changes that the system may follow. I show that the direct forces move a system along a path that maximizes the gain in entropy. That maximum entropy production principle can only be understood in the context of d'Alembert's special separation between direct, inertial, and constraining forces. The maximum entropy production principle unifies aspects of mechanics, thermodynamics, natural selection, statistical inference, and probability theory. Although maximum entropy production is a general principle, a purely geometric interpretation provides a more fundamental and universal perspective than does entropy. In particular, the conservation of total probability imposes strong geometric symmetry and constraint on d'Alembert's separation of direct and inertial forces. Maximum entropy production is a useful but sometimes unnatural way of expressing those fundamental geometric principles. I also show that maximum entropy production and maximum gain in Fisher information are equivalent ways of describing the underlying geometric principles.
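For reference, d'Alembert's principle in its standard classical-mechanics form (textbook notation, not notation specific to this article) states that the directly applied forces minus the inertial forces do no virtual work along displacements compatible with the constraints:

```latex
\sum_i \left( \mathbf{F}_i - m_i \ddot{\mathbf{r}}_i \right) \cdot \delta \mathbf{r}_i = 0
```

Here $\mathbf{F}_i$ are the directly applied forces, $-m_i \ddot{\mathbf{r}}_i$ the inertial forces, and the virtual displacements $\delta \mathbf{r}_i$ satisfy the constraints, so the forces of constraint drop out of the balance. This is the three-way separation of direct, inertial, and constraining forces that the abstract builds on.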

Abstract:
The theory of natural selection has two forms. Deductive theory describes how populations change over time. One starts with an initial population and some rules for change. From those assumptions, one calculates the future state of the population. Deductive theory predicts how populations adapt to environmental challenge. Inductive theory describes the causes of change in populations. One starts with a given amount of change. One then assigns different parts of the total change to particular causes. Inductive theory analyzes alternative causal models for how populations have adapted to environmental challenge. This chapter emphasizes the inductive analysis of cause.
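One standard way to "assign different parts of the total change to particular causes" is a covariance decomposition in the style of the Price equation (my illustration here; the chapter may use a different notation or decomposition). Total change in the mean trait splits into a selection term, Cov(w, z)/w̄, and a transmission term, E(w Δz)/w̄:

```python
# Price-equation decomposition of trait change into selection and
# transmission components (hypothetical example numbers throughout).

def price_decomposition(w, z, z_prime):
    """w: fitnesses; z: parent trait values; z_prime: offspring values."""
    n = len(w)
    wbar = sum(w) / n
    zbar = sum(z) / n
    cov_wz = sum(wi * zi for wi, zi in zip(w, z)) / n - wbar * zbar
    selection = cov_wz / wbar
    transmission = sum(wi * (zpi - zi)
                       for wi, zpi, zi in zip(w, z_prime, z)) / n / wbar
    return selection, transmission

w = [1.0, 2.0, 3.0]      # fitnesses
z = [0.1, 0.2, 0.3]      # parental trait values
zp = [0.1, 0.25, 0.3]    # offspring trait values (hypothetical)

sel, trans = price_decomposition(w, z, zp)
# The identity: new fitness-weighted mean minus old mean = selection + transmission
new_mean = sum(wi * zpi for wi, zpi in zip(w, zp)) / sum(w)
print(sel, trans, new_mean - sum(z) / len(z))
```

The decomposition is an identity, so it holds for any data; the inductive step is choosing which causal interpretation to attach to each component.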

Abstract:
George Williams defined an evolutionary unit as hereditary information for which the selection bias between competing units dominates the informational decay caused by imperfect transmission. In this article, I extend Williams' approach to show that the ratio of selection bias to transmission bias provides a unifying framework for diverse biological problems. Specific examples include Haldane and Lande's mutation-selection balance, Eigen's error threshold and quasispecies, Van Valen's clade selection, Price's multilevel formulation of group selection, Szathmary and Demeter's evolutionary origin of primitive cells, Levin and Bull's short-sighted evolution of HIV virulence, Frank's timescale analysis of microbial metabolism, and Maynard Smith and Szathmary's major transitions in evolution. The insights from these diverse applications lead to a deeper understanding of kin selection, group selection, multilevel evolutionary analysis, and the philosophical problems of evolutionary units and individuality.
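One of the listed examples can be made concrete with a short calculation. In a standard haploid model of Haldane-style mutation-selection balance (textbook recursion, not this article's notation), a deleterious allele with selection coefficient s arises by recurrent mutation at rate mu; at equilibrium its frequency is approximately mu / s when mu << s, which is exactly a balance between selection bias and transmission bias:

```python
# Mutation-selection balance in a haploid model (hedged sketch with
# hypothetical parameter values). Each generation: selection against
# the deleterious allele, then recurrent mutation toward it.

def iterate_to_equilibrium(mu=1e-5, s=0.01, generations=20_000):
    q = 0.0  # frequency of the deleterious allele
    for _ in range(generations):
        q = q * (1 - s) / (1 - s * q)  # selection against the allele
        q = q + mu * (1 - q)           # recurrent mutation toward it
    return q

q_hat = iterate_to_equilibrium()
print(q_hat, 1e-5 / 0.01)  # equilibrium frequency near mu / s = 1e-3
```

When the ratio of selection bias to transmission bias is large, the equilibrium frequency mu / s is small and the favored allele dominates; as the ratio shrinks, mutational decay overwhelms selection, which is the boundary Williams used to define an evolutionary unit.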