Ole Peters


Economic growth is measured as the rate of relative change in gross domestic product (GDP) per capita. Yet, when incomes follow random multiplicative growth, the ensemble-average (GDP per capita) growth rate is higher than the time-average growth rate achieved by each individual in the long run. This mathematical fact is the starting point of ergodicity economics. Using the atypically high ensemble-average growth rate as the principal growth measure creates an incomplete picture. Policymaking would be better informed by reporting both ensemble-average and time-average growth rates. We rigorously analyse these growth rates and describe their evolution in the United States and France over the last fifty years. The difference between the two growth rates gives rise to a natural measure of income inequality, equal to the mean logarithmic deviation. Despite being estimated as the average of individual income growth rates, the time-average growth rate is independent of income mobility.
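The core quantitative claim can be checked in a minimal simulation, assuming simple lognormal multiplicative income growth with illustrative parameters (not the fitted US/French values):

```python
import math
import random

random.seed(42)

# Sketch: N individuals with equal starting income undergo random
# multiplicative growth x -> x * exp(mu + sigma*z), z ~ N(0, 1).
# mu and sigma are illustrative, not estimated from national data.
N, T = 20_000, 100
mu, sigma = 0.02, 0.1

incomes = [1.0] * N
for _ in range(T):
    incomes = [x * math.exp(mu + sigma * random.gauss(0.0, 1.0))
               for x in incomes]

# Ensemble-average growth rate: growth rate of mean income (GDP per capita).
g_ensemble = math.log(sum(incomes) / N) / T
# Time-average growth rate: average of the individual growth rates.
g_time = sum(math.log(x) for x in incomes) / (N * T)

# Mean logarithmic deviation of the final income distribution.
mld = math.log(sum(incomes) / N) - sum(math.log(x) for x in incomes) / N

print(g_ensemble, g_time, mld)
```

With equal starting incomes, the accumulated difference between the two growth rates equals the mean logarithmic deviation by an algebraic identity, and the per-period gap is approximately sigma^2/2.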

Many studies of wealth inequality make the ergodic hypothesis that rescaled wealth converges rapidly to a stationary distribution. Changes in distribution are expressed through changes in model parameters, reflecting shocks in economic conditions, with rapid equilibration thereafter. Here we test the ergodic hypothesis in an established model of wealth in a growing and reallocating economy. We fit model parameters to historical data from the United States. In recent decades, we find negative reallocation, from poorer to richer, for which no stationary distribution exists. When we find positive reallocation, convergence to the stationary distribution is slow. Our analysis does not support using the ergodic hypothesis in this model for these data. It suggests that inequality evolves because the distribution is inherently unstable on relevant timescales, regardless of shocks. Studies of other models and data, in which the ergodic hypothesis is made, would benefit from similar tests.
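The contrast between negative and positive reallocation can be sketched with a reallocating-geometric-Brownian-motion-style simulation; parameters here are illustrative, not the values fitted to the US data:

```python
import math
import random

random.seed(7)

def simulate_rgbm(tau, n=5_000, t_max=10.0, dt=0.01, mu=0.02, sigma=0.2):
    """Each wealth x_i grows multiplicatively, and a rate-tau flow
    reallocates the gap to mean wealth; tau < 0 moves wealth from
    poorer to richer. Returns the variance of rescaled wealth x/mean,
    a simple inequality measure that tolerates negative wealths."""
    x = [1.0] * n
    for _ in range(int(t_max / dt)):
        m = sum(x) / n
        x = [xi * (1 + mu * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
             - tau * (xi - m) * dt
             for xi in x]
    m = sum(x) / n
    return sum((xi / m - 1) ** 2 for xi in x) / n

var_neg = simulate_rgbm(tau=-0.1)  # negative reallocation: no stationary distribution
var_pos = simulate_rgbm(tau=0.3)   # positive reallocation: stationary distribution exists
print(var_neg, var_pos)
```

Under negative reallocation the dispersion of rescaled wealth keeps growing, whereas with sufficiently positive reallocation it settles near a finite stationary value.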

Peters [2011a] defined an optimal leverage which maximizes the time-average growth rate of an investment held at constant leverage. It was hypothesized that this optimal leverage is attracted to 1, such that, e.g., leveraging an investment in the market portfolio cannot yield long-term outperformance. This places a strong constraint on the stochastic properties of prices of traded assets, which we call "leverage efficiency." Market conditions that deviate from leverage efficiency are unstable and may create leverage-driven bubbles. Here we expand on the hypothesis and its implications. These include a theory of noise that explains how systemic stability rules out smooth price changes at any pricing frequency; a resolution of the so-called equity premium puzzle; a protocol for central bank interest rate setting to avoid leverage-driven price instabilities; and a method for detecting fraudulent investment schemes by exploiting differences between the stochastic properties of their prices and those of legitimately traded assets. To submit the hypothesis to a rigorous test we choose price data from different assets: the S&P500 index, Bitcoin, Berkshire Hathaway Inc., and Bernard L. Madoff Investment Securities LLC. Analysis of these data supports the hypothesis.
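The notion of an optimal constant leverage can be illustrated with synthetic GBM-like returns; the drift and volatility below are chosen (not estimated from market data) so that the theoretical optimum (mu - r_f) / sigma^2 equals 1:

```python
import math
import random

random.seed(3)

# Synthetic market: per-period excess returns of a GBM-like asset.
# One shared noise sequence is used for all leverages, so the realized
# growth-rate curve is a smooth function of leverage.
dt, t_max = 0.01, 2_000.0
r_f, sigma = 0.01, 0.5
mu_excess = sigma ** 2  # makes the theoretical optimal leverage equal 1

excess = [mu_excess * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
          for _ in range(int(t_max / dt))]

def growth_rate(leverage):
    """Time-average growth rate of wealth held at constant leverage."""
    log_w = 0.0
    for e in excess:
        log_w += math.log(1 + r_f * dt + leverage * e)
    return log_w / t_max

rates = {l: growth_rate(l) for l in (0.5, 1.0, 1.5)}
print(rates)
```

In this leverage-efficient synthetic market, both under-leveraging and over-leveraging reduce the realized time-average growth rate.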

Behavioural economics provides labels for patterns in human economic behaviour. Probability weighting is one such label. It expresses a mismatch between probabilities used in a formal model of a decision (i.e. model parameters) and probabilities inferred from real people's decisions (the same parameters estimated empirically). The inferred probabilities are called "decision weights." It is considered a robust experimental finding that decision weights are higher than probabilities for rare events, and (necessarily, through normalisation) lower than probabilities for common events. Typically this is presented as a cognitive bias, i.e. an error of judgement by the person. Here we point out that the same observation can be described differently: broadly speaking, probability weighting means that a decision maker has greater uncertainty about the world than the observer. We offer a plausible mechanism whereby such differences in uncertainty arise naturally: when a decision maker must estimate probabilities as frequencies in a time series while the observer knows them a priori. This suggests an alternative presentation of probability weighting as a principled response by a decision maker to uncertainties unaccounted for in an observer's model.
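A minimal illustration of the qualitative effect (not the paper's specific estimation model): an observer knows the true probability p, while the decision maker sees only n outcomes and, being more uncertain, uses a smoothed frequency estimate, here Laplace's rule of succession (a uniform Bayesian prior):

```python
import random

random.seed(0)

def mean_decision_weight(p, n=10, trials=20_000):
    """Average inferred 'decision weight' of a decision maker who
    estimates p from n Bernoulli outcomes with the smoothed frequency
    estimate (k + 1) / (n + 2)."""
    total = 0.0
    for _ in range(trials):
        k = sum(random.random() < p for _ in range(n))
        total += (k + 1) / (n + 2)
    return total / trials

w_rare = mean_decision_weight(0.05)    # rare event: weight exceeds 0.05
w_common = mean_decision_weight(0.95)  # common event: weight falls below 0.95
print(w_rare, w_common)
```

The exact expectation is (n*p + 1) / (n + 2), which overweights rare events and underweights common ones purely as a consequence of the decision maker's greater uncertainty, with no cognitive bias assumed.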

Submitted on 2019-10-04

An important question in economics is how people choose between different payments in the future. The classical normative model predicts that a decision maker discounts a later payment relative to an earlier one by an exponential function of the time between them. Descriptive models use non-exponential functions to fit observed behavioral phenomena, such as preference reversal. Here we propose a model of discounting, consistent with standard axioms of choice, in which decision makers maximize the growth rate of their wealth. Four specifications of the model produce four forms of discounting - no discounting, exponential, hyperbolic, and a hybrid of exponential and hyperbolic - two of which predict preference reversal. Our model requires no assumption of behavioral bias or payment risk.
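The preference-reversal phenomenon mentioned above can be demonstrated with a standard hyperbolic discount function, which exponential discounting cannot reproduce; the amounts, delays, and rates below are illustrative, not the paper's specifications:

```python
import math

def hyperbolic(amount, delay, k=1.0):
    return amount / (1 + k * delay)

def exponential(amount, delay, r=0.5):
    return amount * math.exp(-r * delay)

def prefers_larger_later(discount, front_delay):
    """Choose between 10 after front_delay and 15 one period later."""
    return discount(15, front_delay + 1) > discount(10, front_delay)

# Hyperbolic: the smaller-sooner payment wins when it is imminent,
# but the preference reverses once both payments are pushed far away.
near_h = prefers_larger_later(hyperbolic, 0)   # False: take 10 now
far_h = prefers_larger_later(hyperbolic, 10)   # True: wait for 15

# Exponential: the choice is independent of the common front delay,
# so no reversal can occur.
near_e = prefers_larger_later(exponential, 0)  # False
far_e = prefers_larger_later(exponential, 10)  # False
print(near_h, far_h, near_e, far_e)
```

Under exponential discounting the comparison reduces to 10 versus 15·e^(-r), which does not depend on the front delay; under hyperbolic discounting it does, which is what produces the reversal.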

Daniel Bernoulli’s study of 1738 [1] is considered the beginning of expected utility theory. Here I point out that, in spite of this, Bernoulli’s treatment is formally inconsistent with today’s standard form of expected utility theory. Bernoulli’s criterion for participating in a lottery, as written in [1], is not the expected change in utility.

Cooperation is a persistent behavioral pattern of entities pooling and sharing resources. Its ubiquity in nature poses a conundrum: whenever two entities cooperate, one must willingly relinquish something of value to the other. Why is this apparent altruism favored in evolution? Classical treatments assume a priori a net fitness gain in a cooperative transaction which, through reciprocity or relatedness, finds its way back from recipient to donor. Our analysis makes no such assumption. It rests on the insight that evolutionary processes are typically multiplicative and noisy. Fluctuations have a net negative effect on the long-time growth rate of resources but no effect on the growth rate of their expectation value. This is a consequence of non-ergodicity. Pooling and sharing reduces the amplitude of fluctuations and, therefore, increases the long-time growth rate for cooperators. Put simply, cooperators' resources outgrow those of similar non-cooperators. This constitutes a fundamental and widely applicable mechanism for the evolution of cooperation. Furthermore, its minimal assumptions make it a candidate explanation in simple settings, where other explanations, such as emergent function and specialization, are implausible. An example of this is the transition from single cells to early multicellular life.
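The fluctuation-reduction mechanism can be sketched in a few lines, assuming simple lognormal multiplicative growth with zero drift (so any advantage comes purely from reduced fluctuations; sigma is illustrative):

```python
import math
import random

random.seed(11)

# Noisy multiplicative growth, x -> x * exp(sigma * z), z ~ N(0, 1).
# Two cooperators grow separately each period, then pool their resources
# and split them equally; a loner keeps its own resources throughout.
T, sigma = 50_000, 0.2

log_loner = 0.0
log_coop = 0.0  # log resources of one cooperator (both stay equal)
for _ in range(T):
    f1 = math.exp(sigma * random.gauss(0, 1))
    f2 = math.exp(sigma * random.gauss(0, 1))
    f3 = math.exp(sigma * random.gauss(0, 1))
    log_loner += math.log(f1)
    log_coop += math.log((f2 + f3) / 2)  # grow, pool, split

g_loner = log_loner / T
g_coop = log_coop / T
print(g_loner, g_coop)
```

The loner's time-average growth rate is zero here, while pooling and sharing yields a strictly positive rate (approximately sigma^2/4 for small sigma for a pair): cooperators' resources outgrow those of similar non-cooperators, with no a priori fitness gain assumed.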

© 2018-2020 Researchers.One