Category Archives: Population

Income inequality in the Roman Empire

Agrippina the Younger

Over the last 30 years, wealth in the United States has been steadily concentrating in the upper economic echelons. Whereas the top 1 percent used to control a little over 30 percent of the wealth, they now control 40 percent. It’s a trend that was for decades brushed under the rug but is now on the tops of minds and at the tips of tongues.

Since too much inequality can foment revolt and instability, the CIA regularly updates statistics on income distribution for countries around the world, including the U.S. Between 1997 and 2007, inequality in the U.S. grew by almost 10 percent, making it more unequal than Russia, infamous for its powerful oligarchs. The U.S. is not faring well historically, either. Even the Roman Empire, a society built on conquest and slave labor, had a more equitable income distribution.

To determine the size of the Roman economy and the distribution of income, historians Walter Scheidel and Steven Friesen pored over papyri ledgers, previous scholarly estimates, imperial edicts, and Biblical passages. Their target was the state of the economy when the empire was at its population zenith, around 150 C.E. Scheidel and Friesen estimate that the top 1 percent of Roman society controlled 16 percent of the wealth, less than half of what America’s top 1 percent control.

To arrive at that number, they broke down Roman society into its established and implicit classes. Deriving income for the majority of plebeians required estimating the amount of wheat they might have consumed. From there, they could backtrack to daily wages based on wheat costs (most plebs did not have much, if any, discretionary income). Next they estimated the incomes of the “respectable” and “middling” sectors by multiplying the wages of the bottom class by a coefficient derived from a review of the literature. The few “respectable” and “middling” Romans enjoyed comfortable, but not lavish, lifestyles.

Above the plebs were perched the elite Roman orders. These well-defined classes played important roles in politics and commerce. The ruling patricians sat at the top, though their numbers were likely too few to figure into the analysis. Below them were the senators. Their numbers are well known—there were 600 in 150 C.E.—but estimating their wealth was difficult. Like most politicians today, they were wealthy—to become a senator, a man had to be worth at least 1 million sesterces (a Roman coin, abbreviated HS). In reality, most possessed even greater fortunes. Scheidel and Friesen estimate the average senator was worth over HS5 million and drew annual incomes of more than HS300,000.

After the senators came the equestrians. Originally the Roman army’s cavalry, they evolved into a commercial class after senators were banned from business deals in 218 B.C. An equestrian’s holdings were worth on average about HS600,000, and he earned an average of HS40,000 per year. The decuriones, or city councilmen, occupied the step below the equestrians. They earned about HS9,000 per year and held assets of around HS150,000. Other miscellaneous wealthy people drew incomes and held fortunes of about the same amount as the decuriones.

In total, Scheidel and Friesen figure the elite orders and other wealthy individuals made up about 1.5 percent of the 70 million inhabitants the empire claimed at its peak. Together, they controlled around 20 percent of the wealth.

These numbers paint a picture of two Romes, one of respectable, if not fabulous, wealth and the other of meager wages, enough to survive day-to-day but not enough to prosper. The wealthy were also largely concentrated in the cities. It’s not unlike the U.S. today. Indeed, based on a widely used measure of income inequality, the Gini coefficient, imperial Rome was slightly more equal than the U.S.

The CIA, World Bank, and other institutions track the Gini coefficients of modern nations. It’s a unitless number, which can make it somewhat tricky to understand. I find visualizing it helps. Take a look at the following graph.

Gini coefficient of inequality

To calculate the Gini coefficient, you divide the orange area (A) by the sum of the orange and blue areas (A + B). The more unequal the income distribution, the larger the orange area. The Gini coefficient scales from 0 to 1, where 0 means each portion of the population gathers an equal amount of income and 1 means a single person collects everything. Scheidel and Friesen calculated a Gini coefficient of 0.42–0.44 for Rome. By comparison, the Gini coefficient in the U.S. in 2007 was 0.45.
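For the numerically inclined, the ratio is easy to play with. Here’s a quick Python sketch (the function and the toy income lists are mine, purely for illustration) that computes the Gini coefficient from a list of incomes using the standard sorted-income formula, which works out to the same thing as dividing area A by area A + B:

```python
def gini(incomes):
    # Gini via the sorted-income formula:
    #   G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    # equivalent to area A divided by area A + B under the Lorenz curve
    xs = sorted(incomes)
    n, total = len(xs), sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # perfect equality: 0.0
print(gini([0, 0, 0, 100]))    # one person takes it all: 0.75
```

With only four people, “one person takes everything” yields (n − 1)/n = 0.75 rather than 1; the coefficient approaches 1 only as the population grows.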

Scheidel and Friesen aren’t passing judgment on the ancient Romans, nor on modern-day Americans. Theirs is an academic study, one used to further scholarship on one of the great ancient civilizations. But buried at the end, they make a point that’s difficult to parse, yet provocative. They point out that the majority of extant Roman ruins resulted from the economic activities of the top 10 percent. “Yet the disproportionate visibility of this ‘fortunate decile’ must not let us forget the vast but—to us—inconspicuous majority that failed even to begin to share in the moderate amount of economic growth associated with large-scale formation in the ancient Mediterranean and its hinterlands.”

In other words, what we see as the glory of Rome is really just the rubble of the rich, built on the backs of poor farmers and laborers, traces of whom have all but vanished. It’s as though Rome’s 99 percent never existed. Which makes me wonder, what will future civilizations think of us?

Source:

Scheidel, W., & Friesen, S. (2010). The Size of the Economy and the Distribution of Income in the Roman Empire. Journal of Roman Studies, 99. DOI: 10.3815/007543509789745223

Photo by Biker Jun.

Related posts:

Ghosts of ecology

Population density fostered literacy, the Industrial Revolution

Ghosts of geography

Population density fostered literacy, the Industrial Revolution

Class portrait, unknown English school (undated)

Without the Industrial Revolution, there would be no modern agriculture, no modern medicine, no climate change, no population boom. A rapid-fire series of inventions reshaped one economy after another, eventually affecting the lives of every person on the planet. But exactly how it all began is still the subject of intense debate among scholars. Three economists, Raouf Boucekkine, Dominique Peeters, and David de la Croix, think population density had something to do with it.

Their argument is relatively simple: The Industrial Revolution was fostered by a surge in literacy rates. Improvements in reading and writing were nurtured by the spread of schools. And the founding of schools was aided by rising population density.

Unlike violent revolutions where monarchs lost their heads, the Industrial Revolution had no specific powder-keg. Though if you had to trace it to one event, James Hargreaves’ invention of the spinning jenny would be as good as any. Hargreaves, a weaver from Lancashire, England, devised a machine that allowed spinners to produce more and better yarn. Spinners loathed the contraption, fearing that they would be replaced by machines. But the cat was out of the bag, and subsequent inventions like the steam engine and better blast furnaces used in iron production would only hasten the pace of change.

This wave of ideas that drove the Industrial Revolution didn’t fall out of the ether. Literacy in England had been steadily rising since the 16th century, but between the 1720s and 1740s it skyrocketed. In just two decades, literacy rose from 58 percent to 70 percent among men and from 26 percent to 32 percent among women. The three economists combed through historical documents searching for an explanation and discovered a startling rise in school establishments starting in 1700 and extending through 1740. In just 40 years, 988 schools were founded in Britain, nearly as many as had been established in previous centuries.

School establishments in Great Britain before 1860

The reason behind the remarkable flurry of school establishments, the economists suspected, was a rise in population density in Great Britain. To test this theory, they developed a mathematical model that simulated how demographic, technological, and productivity changes influenced school establishments. The model’s most significant variable was population density, which the authors claim can explain at least one-third of the rise in literacy between 1530 and 1850. No other variable came close to explaining as much.

Logistically, it makes sense. Aside from cost, one of the big hurdles preventing children from attending school was proximity. The authors recount statistics and anecdotes from the report of the Schools Inquiry Commission of 1868, which said boys would travel up to an hour or more each way to get to school. One 11-year-old girl walked ten miles a day for her schooling.

Many people knew of the value of an education even in those days, but there were obvious limits to how far a person could travel to obtain one. Yet as population density on the island rose, headmasters could confidently establish more schools, knowing they could attract enough students to fill their classrooms. What those students learned not only prepared them for a rapidly changing economy, it also cultivated a society which valued knowledge and ideas. That did more than just help spark the Industrial Revolution—it gave Great Britain a decades-long head start.

Sources:

Boucekkine, R., de la Croix, D., & Peeters, D. (2007). Early Literacy Achievements, Population Density, and the Transition to Modern Growth. Journal of the European Economic Association, 5(1), 183-226. DOI: 10.1162/JEEA.2007.5.1.183

Stephens, W. (1990). Literacy in England, Scotland, and Wales, 1500-1900. History of Education Quarterly, 30(4). DOI: 10.2307/368946

Related posts:

Hidden cost of sprawl: Getting to school

Hunter-gatherer populations show humans are hardwired for density

Do people follow trains, or do trains follow people? London’s Underground solves a riddle

Photo scanned by pellethepoet.

7 billion

Earth

Sometime today—or maybe it’s already happened—the 7 billionth person on this planet will be born. It’s a milestone, that’s for certain, though I’m unsure whether it’s auspicious or portentous. What I do know is it’s a bit contrived. The 7 billionth person will face the same challenges as the baby born just before or just after. They are all entering a world that is trying to answer its most pressing question—how many of us can it support?

The answer depends, of course, on what sort of future those people will have. Will they live like Americans—sated and safe—or like Somalis—as uncertain about their next meal as they are about their country’s fate? That, of course, depends on resources. In truth, we won’t know the answers to any of these questions until we get there, if we’re even lucky enough to realize when we’ve arrived.

For years now, I’ve felt as though the world has been filling up around me. Part of that has been the result of changing scenery, an impression reinforced by years of moving up the density ladder from small towns to bigger cities. But that feeling is also supported by cold, hard facts. My worlds are filling up. It’s most evident in my hometown, a small city where change comes slowly if at all. Yet even there, the roads and houses and shops I knew can’t contain the now pulsing masses, grown half again as large as when I first knew them. Like a teenager, the city is coping with its new size awkwardly. Ambivalent about the future, it keeps trying to be the city I knew. But even I—with my propensity for nostalgia—know better. Every time I return, as I sit trapped a dozen deep at a stoplight, a lesson is writ large in the taillights of the car in front of me. Growth, like progress, cannot be stopped.

So as we cross this synthetic threshold, close your eyes for a second to take a snapshot of the world as it is. It will never be the same. Then open them to a future that’s two people fuller.

Related posts:

If the world’s population lived in one city

Ghosts of geography

Density solidified early human domination

Sunset over the Kenyan savanna

It’s no surprise that Homo sapiens dominates the Earth. After all, we’re resourceful, social, and smart. No, the surprise is how we did so in just 50,000 years. Such a pace is unprecedented, especially for a long-lived, slow-reproducing species such as ours. Intelligence and opposable thumbs certainly helped, but we aren’t the only ones who can use a tool or solve a puzzle. Rather, a peculiarity of our social nature may be what has set us apart, allowing us to live in nearly every biome on Earth.

The exact mechanics of how sociality fostered our dominance are fuzzy. Myriad archaeologists and anthropologists work hard to resolve those uncertainties, but history is vast and their resources are comparatively small. There is another option, though, one that relies on mathematical machinations and close study of the characteristics of modern day hunter-gatherer groups. Using those methods, a group of anthropologists and biologists think they may have solved part of the migratory riddle. Our predisposition to living densely, they suppose, may have contributed to our stunning success beyond the savannas of Africa.

A sublinear relationship between population size and home range size—meaning that larger groups live at higher densities—imparts special advantages for species that can deal with the twin burdens of density: overshoot and social conflict. Overshoot describes a population that overwhelms its habitat, devouring all available food and otherwise making a mess of the place. Social conflict is just what it sounds like—tight proximity provokes fights between individuals. Together, those snags can bring a once-booming population to its knees.

But social animals are uniquely adapted to cope with those problems. For one, social behavior soothes tensions when they do rise. And when it comes to the necessities of life, density conveys a distinct advantage for social species—resources, chiefly food, become easier to find. Larger, denser populations squeeze more out of a plot of land than an individual could on his or her own.

Density itself wasn’t directly responsible for the first forays out of Africa. Those groups were too small and dispersed to receive a substantial boost from density. They faced the worst the natural world had to offer, and many probably couldn’t hack it.

Where population density conferred its advantages was when subsequent waves of colonizers followed. Density allowed those people to thrive. They joined the initial groups, growing more populous and drawing more resources from the land. This made groups more stable both physically and socially—full bellies lead to happier and healthier people. As each group’s numbers grew larger, their social bonds grew stronger and their chances of regional extinction plummeted. In other words, once people worked together to establish themselves, they were likely there to stay.

It’s a heartwarming story the scientific paper tells in the unsentimental language of mathematics. It implies that the essential success of our species can be boiled down to one variable, β, and one value of that variable, ¾. The variable β is an exponent that describes how populations scale numerically and geographically. Its value of ¾ is significant. When β equals one or greater, each additional person requires the same amount of land or more—the group misses out on density’s advantages. But when β is less than one—as it is in our case—then a population becomes denser as it grows larger.

The degree of our sociality has allowed us to bend the curve of population density in our favor. If early humans had been an entirely selfish species—each individual requiring as much or more land than the previous—β would be equal to one or greater. We wouldn’t have lived at higher densities as our populations grew, and early forays beyond the savanna might have petered out. Instead of conquering the globe, we’d have been a footnote of evolution.¹
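To see how much work β does, here’s a minimal sketch (the function names are mine, for illustration) that treats home range as population raised to the β power, in arbitrary units:

```python
def home_range(population, beta=0.75):
    # home range area scales as population ** beta (arbitrary units)
    return population ** beta

def density(population, beta=0.75):
    # people per unit of home range
    return population / home_range(population, beta)

# with beta = 3/4, a tenfold larger group lives noticeably denser
print(density(10), density(100))
# with beta = 1, density never improves, no matter the group size
print(density(10, beta=1.0), density(100, beta=1.0))
```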

And here is where we can consider how this affects our modern lives. Population density may have aided our sojourn out of Africa, but it’s clear there are limits. Hunter-gatherer populations appear to be limited to around 1,000 people, depending on the carrying capacity of the ecosystem. Technology has raised carrying capacities beyond that number—as evinced by the last few millennia of human history—but we don’t know its limits. A scaling exponent equal to ¾ may have helped our rise to dominance, but it also could hasten our downfall. Technology may be able to smooth the path to beyond 7 billion, but what if it can’t? What if ¾ is an unbreakable rule? What happens if we reach a point where density can no longer save us from ourselves?

¹ I might point out here that β=¾ could tell us something about the viability of libertarianism, but that’s a subject for another post.

Source:

Hamilton, M., Burger, O., DeLong, J., Walker, R., Moses, M., & Brown, J. (2009). Population stability, cooperation, and the invasibility of the human species. Proceedings of the National Academy of Sciences, 106(30), 12255-12260. DOI: 10.1073/pnas.0905708106

Photo by lukasz dzierzanowski.

Related posts:

Floral metabolic densities

Hunter-gatherer populations show humans are hardwired for density

The curious relationship between place names and population density

What do population density, lightning, and the phone company have in common?

Lightning strike in Tokyo

File this one under “applications of population density”. Researchers working for Nippon Telegraph and Telephone—better known as NTT—discovered they could use an area’s population density to predict telecommunications equipment failures due to lightning strikes.

Telecommunications is an expensive business. Like other infrastructure, it requires a lot of manpower and capital to expand and maintain. But unlike many other systems, telecommunications—especially cellular network technology—has been advancing at a breakneck pace, requiring equipment to be upgraded or replaced every few years to stay current. Furthermore, the equipment is both delicate and expensive. Something like a lightning strike can easily cost tens to hundreds of thousands of dollars to repair.

The NTT researchers were interested in predicting where lightning strikes would exact the most damage in coming years, especially since some climate models predict more severe weather, which can lead to more lightning. The study focused on three prefectures in Japan—Tokyo, Saitama, and Gunma—which represent a gradient of population density ranging from one of the most built-up urban environments to relatively sparse farmland. The prefectures also fall along a gradient of lightning intensity, with Gunma at the high end receiving 10 strikes per square kilometer and Tokyo at the low end receiving 3 strikes per square kilometer.

Using past data on lightning strikes, telecom equipment failures due to lightning strikes, and the 2005 Japanese census, they developed a model to describe how telecom equipment failures due to lightning correlate with population density. At first blush, I expected urban areas to receive the brunt of the impact—after all, they have loads more equipment than rural areas—but the results were just the opposite. Expensive circuitry and antennas were safer in urban Tokyo than they were in rural Gunma, even when the discrepancy in lightning strikes between the two regions was taken into account.

The authors offer two explanations for why telecom equipment is safer in urban areas. First, many of the copper lines that feed base stations and boxes run underground in cities, which lowers the induced voltage during a strike. Second, the equipment itself tends to be exposed to the elements in the country, either on the ground or perched atop telephone poles. In the city, most of it is encased in apartment buildings.

But there is another possible explanation they missed—the design of telecom networks and their relationship to population density. The evidence lies in their calculated coefficient describing how population density predicts equipment failures due to lightning strikes. The coefficient is ¾, and if you’ve been reading this blog for a while, you’ll no doubt recognize that number. As an exponent, ¾ is a powerful descriptor, explaining a variety of phenomena ranging from how plant size influences population density to how human population density affects the density of place names.

In this case, ¾ seems to say less about the pattern of lightning strikes than it does about telecom network design and the differences between rural and urban infrastructure. Denser populations require more equipment, but not at a fixed rate. Cellular networks provide a good example. In rural areas, cell sizes are limited by area, not the number of users. It’s the opposite in the city—the more users, the smaller cells become. Therefore, phone companies can rely on fewer cells and less equipment per person in the city than in the country.
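If equipment, and thus damage to it, scales with population density to the ¾ power, then damage per person necessarily falls as density rises. Here’s a hypothetical sketch (the constant k and the functional form are my assumptions for illustration, not the NTT researchers’ model):

```python
def failures_per_km2(pop_density, k=1.0, beta=0.75):
    # hypothetical: lightning-related failures per square kilometer
    # scale as population density raised to the beta power
    return k * pop_density ** beta

rural, urban = 10, 1000  # people per square kilometer, illustrative values
print(failures_per_km2(rural) / rural)  # failures per person, rural
print(failures_per_km2(urban) / urban)  # failures per person, urban (smaller)
```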

The relationship between infrastructure demands and population density could go a long way to explaining why there is a lower rate of equipment failure in denser areas—there’s simply less equipment per person in the city than in the country. But the fact that telecom infrastructure—and damage to it—appears to scale at the same power that describes a range of phenomena related to density and metabolism, well, that’s just too good to be a coincidence.

Sources:

Zhang, X., Sugiyama, A., & Kitabayashi, H. (2011). Estimating telecommunication equipment failures due to lightning surges by using population density. 2011 IEEE International Conference on Quality and Reliability (ICQR), 182-185. DOI: 10.1109/ICQR.2011.6031705

Photo by potarou.

Related posts:

Floral metabolic densities

Hunter-gatherer populations show humans are hardwired for density

The curious relationship between place names and population density

Hunter-gatherer populations show humans are hardwired for density

People represented in a cave painting

This post originally appeared on Scientific American’s Guest Blog.

High density living seems like a particularly modern phenomenon. After all, the first subway didn’t run until 1863 and the first skyscraper wasn’t built until 1885. While cities have existed for thousands of years—some with population densities that rival today’s major metropolises—most of humanity has lived at relatively low densities until recently, close to the land and the resources it provided. Before farming, nearly everyone was directly involved in the day-to-day hunting and gathering of food, which required living at even lower densities. It would seem as though our current proclivity for high density living runs counter to our biological underpinnings, that density has been thrust upon us by the demands of modern life.

This post was chosen as an Editor’s Selection for ResearchBlogging.org.

It’s easy to arrive at that conclusion, in part because density is a hot topic these days. More than 50 percent of the world’s population now lives in cities—a fact repeated so often it’s almost a litany. But reciting that phrase doesn’t reveal the subtle effects implied by the drastic demographic shift. People migrating from the countryside face untold challenges wrought by density. Cities are complex places, fraught with crime, diseases, and pollution. Yet cities are also places of great dynamism, creativity, and productivity. Clearly, the benefits outweigh the drawbacks or else cities would have dissolved back into the landscape.

The benefits of living close to other people are evident even to hunter-gatherers. Though their societies have changed over the millennia, studying the characteristics of present-day hunter-gatherers can let us peer into the past. That’s what three anthropologists—Marcus Hamilton, Bruce Milne, and Robert Walker—and one ecologist—Jim Brown—did. In the process, they seem to have discovered a fundamental law that drives human agglomeration. Though their survey of 339 present-day hunter-gatherer societies doesn’t explicitly mention cities, it does show that as populations grow, people tend to live closer together—much closer together. For every doubling of population, the home ranges of hunter-gatherer groups increased by only 70 percent.

The way home ranges scale with population follows a mathematical relationship known as a power law. Graphs of power laws bend like a graceful limbo dancer—sharply at the base and more gradually thereafter—toward one axis or another, depending on the nature of the relationship. They only straighten when plotted against logarithmic axes—the kind that step from 1 to 10 to 100 and so on. One variable, known as the scaling exponent, is responsible for these attributes.

Hunter-gatherer population size and home range (updated)

Fig. 1 Hunter-gatherer home ranges scale to the three-fourths power. Above are representations of three populations and the size of their home range according to this relationship.

To see how scaling exponents apply in the case of hunter-gatherer territories, let’s look at the range of possible values and what each would mean in terms of density. If the exponent were equal to one, then home ranges would scale linearly with population size—10 people would occupy 10 square miles and 100 people would occupy 100 square miles. If the exponent were 1.2, then a group of 100 would occupy 250 square miles. And if the exponent were 0.75, a group of 100 people would occupy only 32 square miles. This last one is what Hamilton and his co-authors found.
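Those three scenarios are easy to verify yourself; rounding to whole square miles reproduces the numbers above:

```python
# home range for a group of 100 under each candidate exponent
print(round(100 ** 1.0))   # linear scaling: 100 square miles
print(round(100 ** 1.2))   # sparser: about 251 square miles
print(round(100 ** 0.75))  # denser: about 32 square miles
```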

Their result is the average of 339 societies, and there’s a bit of heterogeneity within that statistic. Not every group has a perfectly “average” way of hunting and gathering. Some hunt more, some gather more. Some find food on land, others in the water. Where and how hunter-gatherers get their food has a large impact on how densely they live, causing the density exponent to deviate slightly or greatly from three-quarters. For instance, groups which derive more than 40 percent of their food from hunting require larger territories because prey is not always evenly distributed or easily found. Their home ranges scale to the nine-tenths power, indicating sparser living. Gatherers require less space—their home ranges scale at the 0.64 power—largely due to plants’ sedentary lifestyles.

Hunter-gatherer societies that draw food from the water live more compactly, too. The home ranges of aquatic foragers were consistently smaller across the range of population sizes—their exponent was 0.78 versus terrestrial foragers’ 0.79. Hamilton and his colleagues suspect this is because food from rivers, lakes, and ocean shores is more abundant and predictable than in comparable terrestrial ecosystems.

But no matter what types of food are consumed, the overall trend remains the same. Every additional person requires less land than the previous one. That’s an important statement. Not only does it say we’re hardwired for density, it also says a group becomes 15 percent more efficient at extracting resources from the land every time their population doubles. Each successive doubling in turn frees up 15 percent more resources to be directed towards something other than hunting and gathering. In other words, complex societies didn’t just evolve as a way to cope with high-density—they evolved in part because of high density.
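That 15 percent figure falls straight out of the exponent: doubling the population multiplies the home range by 2^0.75, so the land needed per person shrinks accordingly:

```python
# after one population doubling, land per person is (2 ** 0.75) / 2 of what it was
land_per_person_ratio = 2 ** 0.75 / 2
print(land_per_person_ratio)  # about 0.84, i.e., roughly 15 percent less land per head
```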

Update: The figure in this post originally reported 10.8 sq km for a group of 50 people. It should have been 18.8 sq km. The figure has been updated.

Source:

Hamilton, M., Milne, B., Walker, R., & Brown, J. (2007). Nonlinear scaling of space use in human hunter-gatherers. Proceedings of the National Academy of Sciences, 104(11), 4765-4769. DOI: 10.1073/pnas.0611197104

Photo by Gruban.

Related posts:

The curious relationship between place names and population density

Density in the pre-Columbian United States: A look at Cahokia

When it's too crowded to have kids

Fushimi Inari Taisha, head shrine of Inari, the Japanese kami of fertility, among other things

Density can have profound effects on fertility. Population biologists call this phenomenon density dependence, and they’ve witnessed it in everything from single-celled organisms to elephants. It can influence fertility positively—individuals are more likely to meet mates in dense populations—or negatively—increased stress or lower food availability may drive fertility rates down. But despite evidence of the phenomenon in the natural world, little has been said about its role in declining human fertility rates.

The relative paucity of studies examining density dependence in humans may be due in part to our persisting belief (conscious or unconscious) that in the course of developing culture, we have isolated ourselves from natural forces. Many previous studies of fertility rates tended to overlook what biologists see in the wild.

One study did, though. A demographic survey of around 150 countries, it uncovered strong evidence that population density is driving down human fertility rates. The authors accounted for all the usual variables found in fertility studies—infant mortality, gross domestic product per capita, percentage of women in the workforce, female literacy, and degree of urbanization. While those oft-studied factors all still play a role, population density stood out as a new addition.

The researchers found that people throughout the world tend to have fewer kids when population densities are high, a pattern that repeated itself over the course of forty years. There were a few outliers—Australia has low population densities and low fertility while the Maldives has the opposite—but population density remained significant even when variables like infant mortality and GDP per capita were included.

Density dependence was apparent even in the number of children people wanted, hinting that the cause may be more than just environmental. The authors used the Eurobarometer survey to see if people’s desires were aligned with population density. By and large, they were. People in sparsely populated Scandinavian countries desired more children, while people in the Netherlands wanted fewer. There were outliers, of course. The Irish continue to want larger families than average, the Germans fewer.

The exact mechanisms at work are still unknown. Density could be making food scarcer, or stress could be reducing fertility biochemically. Pollution may also be to blame. The psychological effects of crowding might be lowering libidos. Economics could be another driver. After all, many things are more expensive in higher-density areas—food, shelter, child care, and so on. The truth is, we just don’t know at this point. But what should be clear is that culture and society have not insulated us from the forces of nature.

Source:

Lutz, W., Testa, M., & Penn, D. (2007). Population Density is a Key Factor in Declining Human Fertility. Population and Environment, 28(2), 69-81. DOI: 10.1007/s11111-007-0037-6

Photo by Miguel Michán.

The curious relationship between place names and population density

Political map with toponyms

Giving a name to a place is an important act. It says a place has meaning, that it should be remembered. For thousands of years, the way we kept track of place names—or toponyms—was by using our memory. Today, we’re not nearly so limited, and the number of toponyms seems to have exploded. Yet oddly enough, the number of places we name in a given area follows a trend uncannily similar to one seen in hunter-gatherer societies.

Eugene Hunn, now a professor emeritus of anthropology at the University of Washington, stumbled upon what appears to be a fundamental relationship between toponyms and population density when he published a paper on the subject in 1994. His discovery stemmed from a literature survey of twelve hunter-gatherer societies from around the globe. Hunn tabulated each society’s toponym repertoire and the size of their home territory to calculate the number of toponyms per square mile, or toponymic density. From this data, he distilled two trends.

First, the average number of toponyms converged on what he called the “magic number 500”. Hunn found that trend in a few other papers on topics like folk taxonomies of plants and animals, and he posited that the number was an inherent limitation of the human mind—that when relying on memory alone, individuals tend to retain names for about 500 items per category. A hunter-gatherer, for example, may be able to name 500 different types of plants. Unfortunately, Hunn’s “magic number 500” wasn’t all that magical given the variability around it—individuals in the hunter-gatherer groups he studied actually recalled between 200 and 1,000 toponyms. The concept doesn’t appear to have caught on in the academic world.

Hunn’s second finding, though, is more compelling. When he arranged the toponymic and population densities of the twelve hunter-gatherer groups on a graph, a clear relationship stood out. Where people lived closer together, the number of place names per square mile skyrocketed. Where they lived farther apart, they named fewer places per square mile. The figure, which I’ve reproduced below, appears to show a linear relationship. That’s an artifact of the logarithmic scale of the axes, which compresses the data as you move away from the origin. The scale is hiding a subtle curve, one that bends down as though the x-axis has roped the line and is pulling it closer.

Toponymic and Population Density of Twelve Hunter Gatherer Groups

The general trend in Hunn’s figure—that we name more places when living at higher densities—makes such good sense that I knew there had to be a modern corollary. Despite all our sophisticated maps and petabytes of computer storage, I suspected that we still hew to the same basic pattern as our hunter-gatherer forebears. So I dove into a simple yet relatively modern set of toponyms—the U.S. Postal Service’s ZIP code system.

First proposed in the 1940s, ZIP codes were meant to speed the processing of mail at sorting facilities. Most major cities at the time were already divided into postal zones, like “Milwaukee 4”, but small towns and rural areas had no such system. Mail volume swelled after World War II, so the postal service introduced the Zone Improvement Plan in 1963. From what I can tell, there don’t appear to be any hard and fast rules about the size of ZIP codes. Exactly how they are delineated seems to be a postal service secret and one that likely depends on their logistical needs. They can even overlap. But none of that really matters, because ZIP codes give names to places. They’re toponyms. I suspected that the more densely populated states had a higher density of ZIP codes, just like in hunter-gatherer societies. And sure enough, they do.

ZIP code and population density by state

The wrinkle lies in the trend line’s curve, which is masked by logarithmic axes the same way the curve in Hunn’s figure is hidden. The best way to read both graphs is backwards, from right to left, from high population density to low population density, paying special attention to the scale of the axes. Before we start, we should assume one thing: that people name places at the same rate per square mile regardless of population density. In other words, people will name seven things per square mile regardless of whether they live at ten or 100 people per square mile. Returning to the graphs, if we start at high population densities on the right and move left to lower population densities, the curve drops below our straight-line assumption. Not only do people name fewer things at lower population densities, they name fewer things per square mile than our fixed-rate assumption would have predicted. In other words, a hypothetical group living at ten people per square mile will name only four things per square mile, compared with the seven named if the population density were 100 people per square mile.

That’s key. There are plenty of gullies and hillocks of grass in the Great Plains, for example, but few people. As such, we name fewer things per square mile. It makes navigation easier—fewer waypoints to remember when traveling—and keeps us focused on the resources that matter. After all, population density is often driven by resource availability, whether that be food, water, shelter, or some other necessity. It’s as though our minds can’t cope with vastness, and so we name fewer things to compress the interstitial space.

The intriguing part is that ZIP codes and Hunn’s hunter-gatherer toponyms are described by one particular mathematical relationship (a power law, for the interested math-types). Not only that, they’re following the trend in a strikingly similar way.¹ As humans, we seem to have settled on a comfortable way of describing the world regardless of whether we remember it with neurons or silicon.


  1. Toponymic density = 0.3675 × (population density)^0.8388
    ZIP code density = 0.0005 × (population density)^0.6944
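A quick way to see how the two power laws behave is to evaluate them side by side. Here is a minimal sketch using the fitted coefficients from the footnote above; the sample densities are illustrative, not observations from either dataset:

```python
# Sketch: evaluate the two fitted power laws from the footnote.
# Coefficients come from the post; the sample densities are
# illustrative, not observations from either dataset.

def toponymic_density(pop_density):
    """Hunter-gatherer place names per square mile (Hunn's fit)."""
    return 0.3675 * pop_density ** 0.8388

def zip_density(pop_density):
    """ZIP codes per square mile (fit to U.S. state data)."""
    return 0.0005 * pop_density ** 0.6944

# Both exponents are less than 1, so a tenfold jump in population
# density yields less than a tenfold jump in naming density -- the
# downward-bending curve hidden by the log-log axes.
for d in (1, 10, 100, 1000):
    print(f"{d:5d} people/sq mi: "
          f"{toponymic_density(d):8.2f} toponyms, "
          f"{zip_density(d):8.4f} ZIP codes")
```

Because a power law plots as a straight line on log-log axes, the shared functional form (differing only in coefficient and exponent) is what makes the two graphs look so strikingly alike.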

Source:

Hunn, E. (1994). Place-Names, Population Density, and the Magic Number 500. Current Anthropology, 35(1). DOI: 10.1086/204245

Photo by Tim De Chant.

Related posts:

Density in the pre-Columbian United States: A look at Cahokia

Thinking about how we think about landscapes

Scientific American Guest Blog: Why we live in dangerous places

Coaxing more food from less land

wheat ears

It’s easy to forget amidst the concern over sprawl that agriculture is still the dominant human impact on the land. Perhaps that’s because it’s easy to rationalize the consequences of agriculture’s land use—it feeds us, after all. But that shouldn’t dissuade us from finding ways to improve farm efficiency. Global population growth shows no signs of stopping before 2050, and rising standards of living mean everyone will be consuming more calories than ever. And why shouldn’t many of them? Malnutrition still plagues much of the developing world.

That’s not to say we haven’t made progress. The Green Revolution boosted crop production by between 250 and 300 percent while only using about 12 percent more acreage. This put a serious dent in starvation rates, but it hasn’t been enough to eradicate the problem, nor will it be enough to keep it at bay in the future. Troublingly, crop yields have begun to level off, raising concerns that the only way to meet the inexorably rising demand will be to put more land under cultivation.
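A back-of-envelope calculation makes the efficiency gain concrete. Reading a “boost of 250 to 300 percent” literally as an increase (so total production ended at 3.5 to 4 times its starting level) against 12 percent more acreage, nearly all of the gain had to come from higher per-acre yields. A rough sketch:

```python
# Rough arithmetic on the Green Revolution figures cited above.
# A "boost of 250-300 percent" is read as an increase, i.e. final
# production is 3.5-4x the starting level; acreage grew ~12 percent.
for pct_increase in (250, 300):
    production_ratio = 1 + pct_increase / 100  # 3.5x or 4.0x total
    acreage_ratio = 1.12                       # 12% more land
    yield_ratio = production_ratio / acreage_ratio
    print(f"{pct_increase}% boost -> yield per acre up {yield_ratio:.1f}x")
```

In other words, output per acre roughly tripled; expanding cultivated land contributed only a sliver of the total gain.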

As a humanitarian and conservationist, I’m alarmed by both prospects. I’m not alone. Jason Clay, a vice president at the World Wildlife Fund, published an essay in the latest issue of Nature raising many of the same concerns. He offers eight strategies to alleviate the problem, all of which are forward-thinking but only some of which will be easy to implement. Clay also focuses intensely on how these strategies can help Africa, a continent in dire need of more productive agriculture, as you can see in a worldwide map of crop yields (cereal yields are mapped below). He also rightly points out that those strategies need to be implemented in the developed world. But Clay fails to say how doing so will benefit both developed and developing nations. That’s where I’d like to step in.

World cereal yields (2009)

view larger
view interactive version

Clay’s eight strategies run the gamut. The careful study of genomes can lead to greatly improved yields. But his approach differs in a subtle yet important way from the one behind many genetically modified crops. Rather than inserting genes from other organisms, he proposes geneticists speed up the old process of selective breeding, where the best traits are kept and the rest discarded. He also supports training farmers in best practices, rehabilitating degraded land, reducing waste from field to table, raising the efficiency of inputs like fertilizer and irrigation, improving soil organic matter, and reducing consumption in developed nations (which would have obvious benefits for their citizens). Clay also says giving farmers title to their land—something often absent in developing nations—would raise yields by encouraging stewardship.

Poor practices and low yields can lead to a cycle of cultivation and abandonment, which I think is part of the concern in Africa. Unless that cycle is broken, some of the world’s most important ecosystems will be destroyed. Developed nations have been pushing conservation in developing nations, hoping they won’t repeat the mistakes many of us made decades or centuries ago. However, many people in developing nations have more urgent concerns, like food. Here’s where improvements in the developed world could help. Further raising crop yields in developed nations would not only allow us to save more of our land for conservation—increasing total protected area worldwide—we could also direct the surpluses toward a food-for-conservation effort, similar to those proposed for carbon offsets. Such programs would require careful implementation to encourage self-sufficiency and prevent developed nations from lording over the poor.

Developed nations should also look inwards to expand their crop production before going abroad. That’s not to say developing nations should abandon the export market. Crop exports do provide poor nations with cash. But there is a growing trend of foreign interests purchasing cropland and exporting the harvests, removing local farmers and reducing the value of exports to the local economy. For example, China, India, and other countries have purchased or are leasing large tracts of land in Africa for that purpose. While there are good arguments for the globalization of the food supply—increased efficiency can offset the need for new tillage—it shouldn’t be done at the expense of local farmers or virgin land.

In essence, Europe, China, North America, and other developed regions need to further raise their agricultural efficiency and lend a hand to those who are struggling to do so. That can include food aid, but should also include training, research into more sustainable agricultural techniques, and further technology transfers. Many of these already take place, but need to be more creative and larger in scale.

Implementing the same strategies in developed nations that Clay suggests for the developing world would be sensible international policy. Rather than exhorting developing nations to “make better choices” and not repeat the mistakes we made in the past, we should be putting these strategies into action ourselves. It would help fight the appearance of imperialism and perhaps lead to more trusting international relationships, sending the signal that we’re all in this together.

Sources:

Clay, J. (2011). Freeze the footprint of food. Nature, 475(7356), 287-289. DOI: 10.1038/475287a

Foley, J., et al. (2005). Global Consequences of Land Use. Science, 309(5734), 570-574. DOI: 10.1126/science.1111772

United Nations Food and Agriculture Organization. 2011. FAOSTAT 2009 Crop Data. (available online)

Photo by five blondes.

Related posts:

Can we feed the world and save its forests?

Small farms in modern times

16,422 people per square mile

cardboard box

Nostalgia is not something I avoid easily. It usually lies dormant until woken by some change, distinct or subdued. And when it does, it takes charge. Small details and insignificant landmarks shout out, begging me to remember everything about this place and this time, from the noteworthy to the not-so-historic. And something as momentous as a cross-country move hurtles me back even further as I sift through the scraps of paper that invariably define my life. A ticket stub from a tram in Germany. A Polaroid with an Elvis impersonator from college. A note my wife scribbled, explaining how to write her name in Chinese, and how its homonym is “Little Zero”.

And so every time I—now we—move, I am reminded of how this last place and the one before it and so on were good to me. Each time, I found friends, opportunities, and new perspectives. And each time, I am sad to leave.

The first time in my life I moved I was 16, and we were only moving three miles away on the same side of town. My school didn’t change and neither did my friends. But I was moving away from the only home I knew, and I was desperate to feel some attachment to our new house. I remember closing the linen closet door one night and noting how it made a whooshing sound. I thought, this is something I will remember. I belong here, because I will always remember how the linen closet makes a whooshing sound when you close it. It’s a silly detail, but I still notice it fourteen years later.

And so every time I move to a new place, I start cataloging those silly details. The first year is always the most difficult. With a sense of place like mine that is anchored in hundreds, even thousands of details on small, subtle scales, it can take a while. Necessities dominate at first—the path to the grocery store, a running route, a good place to get pizza—but the later details are what tell me I’m at home—the sound of the train, the smell after a rainstorm, the putterings of the neighbors.

And so here I sit, waxing nostalgic about my time in Chicago, and by extension the Midwest. The scraps of paper from Chicago and places previous are all safely tucked away in boxes, waiting to be shipped, again, across the country. 16,422 people per square mile. The population density of Cambridge, Massachusetts, our soon-to-be new home. It’s a statistic nearly identical to that of our current Chicago neighborhood. But it’s only a number, and much as I’d like to think that I’ll quickly find my groove because the numbers align, deep down I know better. I know that it won’t feel like home until I know that number and the people and the trees and the streets and the sounds and the smells and more.

Photo by Justin Shearer.

Atlantropa, or how to grow a continent

Atlantropa, dams at the Strait of Gibraltar

The 1920s were a time of great lows and tremendous highs. The specter of the Great War hung like a shadow over Europe, and financial ruin imposed by said war engulfed the belligerents (the United States being the lone exception). Adding to the calamity, Europe’s empires were wheezing under the weight of their colonies.

It is perhaps because of all that gloom that some people turned towards breathlessly optimistic ideas. After all, it was an age of wonder in many ways. Aeroplanes and dirigibles drifted and swooped from city to city and continent to continent. The Industrial Revolution had firmly cemented itself throughout the Western world, producing a surfeit of landmarks to human progress. Many big thinkers were dreaming up dams and skyscrapers and subways of unimaginable scale for the time. But one man was busy out-thinking the rest. Herman Sörgel had a plan so grand that it would have changed the face of the Earth.

Sörgel was a German architect living in Munich. A follower of the Bauhaus school, he did not produce any notable architectural works in his lifetime. Instead, he is remembered for devising an audacious plan to dam the Strait of Gibraltar and lower the Mediterranean Sea by 200 meters. Known as Atlantropa, the project would have generated enormous amounts of electricity, irrigated much of northern Africa (though how the water would be desalinated is unclear), flooded the heart of the African continent, and exposed over 250,000 square miles (660,000 square kilometers) of land in a bathtub ring around the Mediterranean.

Atlantropa had a solution for every European problem. Feeling crowded? The plan would add another France and Austria’s worth of land. Need more electricity? Various dams would have generated 365,000 megawatts of power by some estimates. Short on food? Just import it from the new African breadbasket, now a mere train shipment away.

There were, however, numerous problems that Sörgel overlooked. Workaday harbor towns would have been left high and dry. Dropping the Mediterranean Sea by 200 meters would have drastically altered its ecosystem, and the flooding of Africa would have inundated some of the world’s most diverse forests. Local climate regimes would have almost certainly changed in unpredictable ways. The plan also reflected the decidedly colonial and Euro-centric attitudes of the times. The proposed Saharan breadbasket may seem noble at first blush, but flooding central Africa in the process smacks of indifference. But perhaps Sörgel’s most glaring oversight was a simple question: how would they pay for it all? Europe was flat broke following World War I, and many countries would have had to work together to see it through. Such tight international cooperation is elusive even in today’s relatively peaceful times.

At its heart, the Atlantropa plan seems to be a reaction to the continent’s crowded landscape and thinning resources. Land was in short supply, mineral wealth was being mined at a rapid pace, and energy sources were hotly contested. The losers of the Great War—Germany among them—were in a deeper hole, having lost both territory on the continent and their colonies abroad. Sörgel hoped his massive project would change all that, ending resource shortages and bringing Europe and Africa together. Sörgel felt Europe could not compete with the land- and resource-rich continents of Asia and the Americas unless something drastic was done. His solution embodied a simple, if flawed, maxim: If you can’t import resources from faraway lands, bring the land needed to you. Sörgel’s fatal error, though, was the plan’s sheer scale. Securing resources for Europe’s future required spending a continent’s worth of resources today. Ultimately, Atlantropa was little better than a perpetual motion machine.

Sources and other links:

Edit Suisse Group. “Atlantropa.” Cabinet Magazine, Spring 2003.

Ptak, John F. 2011. “A Monumental and Fantastically Bad Idea: Draining the Mediterranean.” Ptak Science Books.

Jacobs, Frank. 2008. “Dam You, Mediterranean: The Atlantropa Project.” Big Think.

Morales, Michel. 2005. “Atlantropa: Der Traum vom neuen Kontinent.” TV documentary, German. (Parts 1, 2, 3, 4, 5, and 6 on YouTube.)

Urbanites leave the car behind, but not as often as you might think

Night traffic in New York City

It’s generally accepted as fact that people in big cities drive less. Things are closer together there, making it easier to walk to the store for a gallon of milk. For longer trips, mass transit is also an option. But boiling all that common sense down to a single number is difficult. And though our data-happy society can overdo it at times, numbers can reveal some surprising facts that simple observations otherwise wouldn’t—like the fact that increasing density reduces the amount people have to drive, but not by as much as you might suspect.

Determining precisely how far people drive—vehicle miles traveled (VMT) in the parlance of the field—is important for a number of reasons. With it, we can gauge greenhouse gas emissions from automobiles, allocate resources for road repairs or expansions, and refocus efforts to reduce car dependency. VMT is especially important in California, where the Global Warming Solutions Act (also known as AB 32) calls on the state to reduce greenhouse gas emissions to 1990 levels by 2020, a cut of roughly 25 percent from projected levels. Cities and counties without a plan to reduce emissions within their jurisdictions risk losing transportation funding. With this in mind, two researchers from the University of California, Berkeley, set out to measure VMT per person in 370 urban areas around the United States and see how the built environment affects that number.

They discovered that people in more urbanized areas do drive less, but not that much less. Their lowest estimate for VMT was based on population density, but their more realistic result, which came from a model with more parameters, shows less of a reduction. Ironically, greater density is to blame for the lower than expected drop in VMT. As population density increases, VMT drops, but the densities of roads, shops, and services rise, all of which encourage automobile trips.

Urban areas need more roads because they have more traffic, which in turn tends to encourage more traffic. The researchers call it the “Los Angeles effect.” As Los Angeles filled the San Fernando Valley, it didn’t follow central travel corridors. Rather, it oozed out like a blob. This not only hindered mass transit in the city, it also required more roads to accommodate people’s varied travel patterns. The result is a “thicket of criss-crossing freeways and major arteries that form a dense road network,” the study’s authors write. Though Los Angeles is an extreme example of the road/population density relationship, other cities suffer from the malady, too.

The second curiosity that drives VMT in cities is the density of shops and services. It’s often easier to find a wider variety of goods and services closer to home in a city, which encourages people to leave home more often. So while people in cities still drive less, they don’t drive as little as they could.

Source:

Cervero, R., & Murakami, J. (2010). Effects of built environments on vehicle miles traveled: evidence from 370 US urbanized areas. Environment and Planning A, 42(2), 400-418. DOI: 10.1068/a4236

Photo by Josh Liba.

Related post:

Tell me how much you drive, and I’ll tell you where you live

The counterintuitive case of suicide and population density

Blueberry farm in winter

Suicide often raises one question more than any other: why? The answers are often varied, but that hasn’t stopped epidemiologists, psychiatrists, and other experts from trying to find some common threads. They may include anything from mental health to financial condition to gun ownership. Population density plays a role, too, though not the one you might suspect.

Sociologists in the 1930s speculated that the mayhem of the modern city drove people to take their own lives. On the surface, it sounds logical. Cities can be large, impersonal places. It’s easy to imagine a single person becoming lost in a swarm of millions with no safety net of friends or family to prevent him or her from falling into deep despair. Yet research seems to have proven that theory wrong. Many studies have discovered that people in rural areas—not cities—seem to have higher rates of suicide.

In Japan, a nation with a culture steeped in ritual suicide, suicide rates for men living in cities dropped between 1970 and 1990. Over roughly the same period, rates increased in rural areas. Suicide rates among Japan’s rural elderly are much higher than among its urban elderly, too. Similar trends show up on the other side of the globe. In England and Wales, people between the ages of 15 and 44 took their own lives at higher rates in rural areas than in cities. Many studies in the United States have discovered the same.

Suicide is also a significant problem in the Australian outback, where rates among men are two, even three times higher than among their metropolitan counterparts. At fault may be the consolidation of farms that took place in the latter half of the 20th century, leaving young men in the country with fewer employment opportunities. Combine that with easy access to firearms and pesticides (a very common method of suicide in agricultural areas around the world), and you have a recipe for disaster. Indeed, between 1964 and 1988, suicide rates for 15- to 19-year-old boys living in the outback increased nearly fivefold, and the use of firearms in the act also increased fivefold.

Access to firearms is a recurring theme in the literature on suicide. Experts think easy access to firearms is partly behind the high rates observed in the countryside. Part of the problem is the lethal reliability of guns—an attempt with a gun is more often fatal than an attempt by other methods. Guns and suicidal tendencies are such a lethal combination that more people in the United States kill themselves with guns than by any other method.

Blaming guns would be a convenient way to wrap up this story, but the reality is that they are merely a means—albeit a very effective means—of committing suicide. Rather, there are deeper issues behind high rural suicide rates, most of which revolve around how mental health issues are handled. Mental health disorders are one of the main factors that lead people to suicide—as many as 70 percent of cases involve someone with a mental illness. Rural ideologies of self-reliance and hard work can lead to stigma against people with mental illness and discourage them from seeking help. Furthermore, the vastness of rural areas often means mental health services are few and far between. Simple isolation is also a factor. Long distances mean fewer social bonds that could help pull someone back from the brink, especially for elderly people who have a hard time getting around.

That’s not to say the picture is hopeless. Education focused on reducing the stigma of mental illness can go a long way, especially considering that the majority of suicides occur among people with mental illness. Traveling counselors, crisis lines, and even educating clergy about risk factors can help. Though some of these proposed solutions are speculative—rural suicide is still greatly understudied—many are based on proven models from urban areas. The barrier, as always, is money. Providing services to rural areas is notoriously expensive, and in this age of budget cutting, social programs like these are often first in line for the axe.

Sources:

Dudley, M., Waters, B., Kelk, N., & Howard, J. (1992). Youth suicide in New South Wales: urban-rural trends. The Medical Journal of Australia, 156(2), 83-88. PMID: 1736082

Hirsch, J. (2006). A Review of the Literature on Rural Suicide. Crisis: The Journal of Crisis Intervention and Suicide Prevention, 27(4), 189-199. DOI: 10.1027/0227-5910.27.4.189

Strong, K., Trickett, P., Titulaer, I., & Bhatia, K. (1998). Health in rural and remote Australia: the first report of the Australian Institute of Health and Welfare on rural health. Report: 9780642247827

Photo by rkramer62.

The rural-urban fringe, circa 1942

Sears House No. 115

It’s cliché to say, “Everything that’s old is new again,” but boy if it isn’t true sometimes. I recently unearthed a monograph from 1942 about the conflict between urban and rural land uses, and a number of sections read like they were written yesterday.

George Wehrwein, the author of the monograph and a well-respected land economist in his time, speaks with a voice that sounds distinctly modern. He assails unguided development as “suburban slums.” He points out farmland’s unfortunate role in absorbing willy-nilly growth. He mentions the more than 2,000 cities that, even in 1942, were utterly dependent on automobiles—“these cities have no street cars or buses of any type.” Even in that austere time, the beginnings of the automobile’s coming golden age were evident.

While the automobile and the “modern highway” were accelerating the pace of suburbanization, they didn’t start the trend—streetcars and interurban lines were initially responsible. “Almost as soon as railways became established, industries began to ‘decentralize’ by seeking locations in the suburban areas,” Wehrwein writes. While both trains and automobiles drove decentralization, they invaded rural spaces in distinctly different ways. Trains left a pattern of hub-and-spoke development, drawing some industries far out of the metropolis while leaving closer yet less accessible land under the plow. Automobiles allowed this decentralization to diffuse across the landscape even further while also invading the interstitial spaces left by train-focused development. “As a result, cities have not merely expanded, they have ‘exploded,’ ” he wrote.

Cars and highways spread development more evenly across the landscape, but much of the growth came at the expense of valuable farmland, something that clearly rankled the economist. Large tracts of land weren’t developed immediately, leaving empty lots set amidst trafficless roads, both of which became financial burdens on the local government. In his paper, Wehrwein condemns speculators that drove such slapdash development and rails against weak rural governments that did little to check them. He wasn’t universally panning the suburbs, but he was dismayed at what he saw as a waste of land and resources.

If Wehrwein’s lamentations sound distinctly modern, then so too do his solutions. He calls for large scale regional planning in his paper and advocates granting counties the power to guide development in unincorporated areas. Thanks to his earlier efforts, that experiment had already begun in a few places. Twenty-five of Wisconsin’s 72 counties had zoning laws, and the state of California granted local authorities the power to do the same. But Wehrwein also realized that granting authority does not ensure a desired outcome. “Mere power does not carry with it the desire, courage, or the wisdom necessary to make for a well planned rural-urban region.”

Source:

Wehrwein, G. (1942). The Rural-Urban Fringe. Economic Geography, 18(3). DOI: 10.2307/141123

Image in the public domain.

Tourism’s carrying capacity

Landing approach to Princess Juliana International Airport on Sint Maarten

Tourism can be a real boon to a local economy, propping up otherwise sleepy towns with an influx of cash. But as with many things, there’s a point where “just enough” becomes “too much.” Popular tourist towns with a few thousand residents can practically burst under the pressure of peak season. They’re like an experiment in population growth and decline repeated on an annual basis, and the strain can quickly become evident.

Perhaps nowhere has this experiment been more intensive than on the islands of the Caribbean. Between 1970 and 2010, the number of tourists soared from 4.6 million to 17.3 million, and that number doesn’t include the 18 million cruise passengers that traipsed around an island for an hour or three. It’s a staggering increase, and one that brings to mind the growth trends that populations exhibit as they near carrying capacity, the limits of their habitat.

Indeed, tourism for a particular region generally follows the same S-shaped—or logistic—curve. The initial number of visitors is a slow trickle, but once the place is “discovered,” the number of arrivals soars. Eventually, prices rise, the environment suffers, and crowding becomes an issue, all of which discourage further growth. Unless a new attraction is built or another natural wonder discovered, the annual number of tourists will either level off if appropriately managed or decline if not.
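The S-shaped curve described above is the standard logistic growth model from population ecology. Here is a minimal sketch with made-up parameters (a carrying capacity of 18 million arrivals and a growth rate of 0.25 per year, chosen only to echo the Caribbean figures above, not taken from the study):

```python
import math

def logistic(t, K=18.0, r=0.25, N0=1.0):
    """Tourist arrivals (millions) in year t under logistic growth:
    a slow trickle at first, a boom once the place is 'discovered,'
    then a leveling off as the carrying capacity K is approached."""
    return K / (1 + (K / N0 - 1) * math.exp(-r * t))

# The three phases of the S-curve show up in a few sample years.
for t in (0, 10, 20, 30, 40):
    print(f"year {t:2d}: {logistic(t):5.1f} million arrivals")
```

The leveling off is built into the model; the interesting empirical question, which the 2005 study takes up, is where each island currently sits on the curve.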

Carrying capacity for tourism is similar in some ways to ecological carrying capacity and different in others. Ultimately, physical resources limit both population and tourist levels. A dearth of land, shortage of food, or decline in water quality and availability will quickly put the brakes on both types of growth. But for tourism, cultural resources are also a concern.¹ Unique cultures that once attracted visitors can become spoiled or diluted, losing the draw they once had.

Some islands in the Caribbean may be in danger of soon reaching or exceeding their carrying capacities, according to a 2005 study. For example, Bermuda, St. Kitts, and Barbados, to name a few, had relatively high tourist densities and low increases in arrivals—the classic tapering off of the S-curve as the limit of carrying capacity approaches. The authors did discover a few anomalous islands that may show the way for the others. The Caymans, for example, have both high tourist densities and growing arrival rates, which the authors attribute to modern resorts and tourism development that limits environmental degradation.

Since that study, annual tourist arrivals for the entire Caribbean have been leveling off. While the number of cruise passengers shot up from 11 million to 17.3 million between 1996 and 2010, traditional arrivals only rose by about 600,000. While cruise ship passengers may substitute for other vacationers in some ways, they are not equal replacements. Their stays are shorter—typically only a few hours—meaning they spend less. And while cruise ships may require fewer local resources, they leave other environmental burdens, including sewage pollution from older ships and problems resulting from harbor dredging.

I never would have suspected a plausible link between limits on tourist capacity and ecological carrying capacity, but the comparison seems apt. Tourists’ demands are similar to those of their everyday lives, only ratcheted up a few notches. On vacation, service is expected where it otherwise wouldn’t be, and consumption is often higher (how many people wash their towels daily at home?). Vacationers are experimental treatments of sorts, testing the limits of their destination’s infrastructure and environment.


  1. Though you could argue that a lack of culture could, in a way, lead to a decline in population growth as well.

Source:

Thomas, R. N., Pigozzi, B. W., & Sambrook, R. A. (2005). Tourist Carrying Capacity Measures: Crowding Syndrome in the Caribbean. The Professional Geographer, 57(1), 13-20.

Photo by ToddonFlickr.

Keep your eyes to yourself

Crowded sidewalk in New York City

There’s an unwritten rule followed by nearly all city dwellers—never make eye contact. If you attempt to do so, your glance will be met with utter disregard. You do not exist, other than being an object to avoid. I learned this the hard way. Upon moving to San Francisco from Minnesota—the friendliest of all possible places—I would attempt to make eye contact with strangers on the street out of courtesy. In Minnesota, this is commonplace. There, my glances were often met with a polite smile or a courteous “hello.” In San Francisco—even on streets that were anything but crowded—they were met with complete indifference.

Imagine, then, my surprise when I learned of San Francisco’s reputation as a friendly city. If San Francisco is considered friendly, I thought, then I’m steering clear of New York. I mused that such indifference to others must be an artifact of city life. That’s not to say there aren’t friendly people there—it’s true that San Franciscans are a generally genial bunch once you get them off the sidewalk, as are the New Yorkers I’ve met and nearly every other person from a big city. But when I’m in a small town, things sure do feel different. Walking down the street is no longer a sterile affair. It’s no family reunion, but it is degrees warmer than in cities. Still, my own experiences weren’t enough to convince me that this could be a universal trend.

Luckily, my hunch was proved correct the other day by a study that compared rates of eye contact among people in central Philadelphia, suburban Bryn Mawr, and rural Parkesburg. The study’s authors parked two college students—a guy and a girl—outside a post office and a store in each location for two hours. The students counted how many passersby made eye contact and noted whether anyone said “hello,” “how are you,” or the like. Lo and behold, rural Parkesburg held true to the small-town stereotype. Between 70 and 80 percent of passersby glanced at the stationary students in Parkesburg, while just 10 to 20 percent did in Philadelphia. Bryn Mawr’s pedestrians fell predictably in the middle, with around 40 to 50 percent making eye contact.

The rural types were also much more likely to speak to the strangers. One quarter of the people in Parkesburg offered a greeting, while just three percent did in Bryn Mawr and Philadelphia combined. (The city center was by far the least friendly—at both the post office and the store, only one person said anything to either student.) And everyone who said something also made eye contact.

The study’s authors contemplated a few possible explanations for why the city dwellers were so hesitant to make eye contact. They favored the sensory overload hypothesis—that people in big cities are surrounded by too many people, noises, and other distractions—though they also speculated that city folk may fear strangers more or that small town people may be more curious about strangers. They also touched on the idea that city people are more hurried than either suburban or small town people. This notion has been covered both before and since by a number of different researchers. In general, people in larger cities do tend to walk faster, so there may be some truth to this.

Whatever the reason, I admit I exhaled a slight sigh of relief when I discovered that science confirmed my suspicions. San Franciscans, New Yorkers, Londoners—no matter how friendly they are underneath—suffer the same aversion to eye contact as the residents of any other big city. Small towns do feel friendlier.

Sources:

Newman, J., & McCauley, C. (1977). Eye Contact with Strangers in City, Suburb, and Small Town. Environment and Behavior, 9(4), 547-558. DOI: 10.1177/001391657794006

Bornstein, M., & Bornstein, H. (1976). The Pace of Life. Nature, 259(5544), 557-559. DOI: 10.1038/259557a0

Bornstein, M. (1979). The Pace of Life: Revisited. International Journal of Psychology, 14(1), 83-90. DOI: 10.1080/00207597908246715

Wirtz, P., & Ries, G. (1992). The Pace of Life – Reanalysed: Why Does Walking Speed of Pedestrians Correlate with City Size? Behaviour, 123(1), 77-83. DOI: 10.1163/156853992X00129

Photo by Susan NYC.

Why we live in dangerous places

Great Wave off Kanagawa, Hokusai

This post originally appeared on Scientific American’s Guest Blog.

Natural disasters always seem to strike in the worst places. The Sendai earthquake has caused over 8,000 deaths, destroyed 450,000 people’s homes, crippled four nuclear reactors, and inflicted over $300 billion in damage. And it’s only the latest disaster. Haiti will need decades to rebuild after its earthquake. New Orleans still hasn’t repopulated following Hurricane Katrina. Indonesia still feels the effects of the 2004 tsunami. The list could go on and on. The unfortunate lesson is that we live in dangerous places.

We have built civilization’s cornerstones on amorphous, impermanent stuff. Coasts, rivers, deltas, and earthquake zones are places of dramatic upheaval. Shorelines are constantly being rewritten. Rivers fussily overtop their banks and reroute themselves. With one hand, earthquakes open the earth, and with the other they send it coursing down hillsides. We settled those places for good reason. What makes them attractive is the same thing that makes them dangerous. Periodic disruption and change are the progenitors of diversity, stability, and abundance. Where there is disaster, there is also opportunity. Ecologists call it the “intermediate disturbance hypothesis.”

The intermediate disturbance hypothesis is one answer to an existential ecological question: Why are there so many different types of plants and animals? The term was coined by Joseph Connell, a professor at UC Santa Barbara, in 1978.¹ Connell studied tropical forests and coral reefs, and during the course of his work, he noticed something peculiar. The places with the highest diversity of species were not the most stable. In fact, the most stable and least disturbed locations had relatively low biodiversity. The same was true of the places that suffered constant upheaval. But there, in the middle, was a level of disturbance that was just right. Not too frequent or too harsh, but also not too sparing or too light. Occasional disturbances that inflict moderate damage are, ecologically speaking, a good thing.

To see how this works, let’s imagine a hypothetical forest, one that escapes disturbance for thousands, even millions, of years. Eventually, it will be dominated by two species—a tree species that is best adapted to the type of soil, quantity of water, and amount of sunlight, and an understory species that can best cope with the limited sunlight under the canopy. No other species could compete with the two plants best suited to those conditions. While it’s a gross generalization, it illustrates the point: Stable environments can stifle diversity.

This would not be the case in a more realistic forest, however, one that suffers from the periodic fallen tree, occasional fire, or odd tornado. In the window opened by disturbances, other species would have ample opportunity to gain a foothold. If a tree falls, other species could bolt toward the sun. After a fire, herbs that sprout vigorously would have a leg up on previously dominant plants that bud languorously. Life explodes into the openings when given new opportunities.

Biodiversity has flourished where the occasional disturbance kicks open a door. These places are also all the more stable for it. Diversity breeds stability. They are also richer in food and resources, two qualities that attracted our ancestors. The natural bounty of those places made the occasional hurricane or tsunami tolerable.

Today, many of us don’t have the same problems our forebears did. We don’t need to live next to our food. Our water comes from a tap. We can drop our packages off at the post office. But the past is hard to escape. While the requirements of the last century may have disappeared, our cities have not. We are creatures of habit.

Yet social inertia is not the only reason we still live in dangerous places. As aesthetically tuned creatures, we crave dramatic landscapes forged by catastrophe. California is celebrated for its tectonic rocky shores. Mount Saint Helens almost certainly has more visitors now than before it blew its top. The Mississippi River is responsible for untold hardship, yet it’s held up as an honored piece of Americana.

That’s not to say that the Sendai earthquake or Australia’s Black Saturday bushfires will be lauded in the future. They are more than intermediate disturbances—they are real disasters. Yet in ecological terms, each would have been a small speed bump. What turns intermediate disturbances into natural disasters is population density. Earthquakes didn’t kill when our buildings didn’t require stairs. And though tsunamis have always been devastating, they caused few human casualties before we built cities. Avalanches in remote parts of Alaska don’t usually raise eyebrows, but they are a constant concern for many villages in the Swiss Alps. Much of the handwringing over sea level rise is precisely because so much of the world’s population lives near the ocean.

That’s not to say we should flee the coasts or abandon the breathtaking but dangerous places. Our fear of change may seem like a hindrance, but our stubbornness is also one of our greatest assets. Without overcoming intermediate disturbances like floods or sandstorms, there would be no Rome or Cairo. We live on a tumultuous planet where life has thrived under a regime of constant upheaval. Adapting is—and always has been—our last, best hope.


  1. Though he coined the term, two previous studies had described essentially the same concept.

Source:

Connell, J. H. (1978). Diversity in tropical rain forests and coral reefs. Science, 199(4335), 1302-1310. PMID: 17840770

Are wildlife diseases cities' next public health problem?

Raccoon chows down a pumpkin

Cities were nasty, filthy places to live until very recently. For many people in slums around the world, this remains a cruel part of life. The place that holds the most opportunity also harbors disease and illness. People have been grappling with the ill effects of population density for thousands of years, and most of the effort has focused on how to stop one person from getting another sick. But as cities’ populations boom, there’s another less considered and seemingly unlikely source of disease—wildlife.

People and animals have a long history of trading diseases. The Black Death was caused by bubonic plague, a bacterial disease carried by fleas that typically infest rats and other rodents. HIV is widely believed to have first infected humans through contact with other primates. And though animals have given us a few beneficial diseases—cowpox, for example, which helped end smallpox—by and large, they’re something we’d rather avoid.

The collision of urban and rural areas has brought a number of diseases to our attention. Lyme disease has become a household name in much of North America thanks to expanding suburbs. As cities pushed out into old farm fields and forests, more people came into contact with the deer ticks that carry the bacteria that causes the ailment. A bite from a host tick causes a telltale bullseye rash followed by fever, headaches, and if left untreated, joint inflammation and nerve damage.

Rabies is another disease that can be problematic in cities, but unlike Lyme disease, it has a worldwide reach. Though dogs are the primary vector for humans, other mammals such as raccoons provide a reservoir for the disease. As anyone who has woken up to ransacked garbage cans knows, raccoons are a big problem in North American cities (and in Germany, Japan, and parts of the former USSR, thanks to introductions). Raccoons thrive in human-dominated landscapes: Our cities and towns are largely free of predators, they offer a variety of housing opportunities, and they serve up a weekly buffet—garbage night. The masked bandits have made the most of it, and their birth rates have skyrocketed. As their populations rise, so do social contact and the probability of disease transmission. Garbage-night feasts only compound the problem by bringing crowds of raccoons together. Rabies has become such a problem among raccoons that a study in Connecticut in the early 1990s found almost half the study population infected.

For each common critter that carries a disease, it seems there is a rarer, more vulnerable species threatened by it. Raccoon roundworm, for example, has taken its toll on the endangered Allegheny woodrat. Gray squirrels, which are not native to the United Kingdom, carry a virus that is lethal to native red squirrels. As gray and red squirrels meet at the same food sources in cities and suburbs, the likelihood of cross-species transmission increases. Even pets pose a threat to native fauna. Sea otters off the coast of California have come down with toxoplasmosis, the source of which was traced to urban runoff tainted with cat feces.

People once thought that cleaning up cities involved brooms, sewer systems, and potable water supplies. While those advances have gone a long way to making cities healthier, our inadvertent assistance of certain animals has raised an entirely new set of problems. Because of the wide range of diseases and hosts, the solutions will have to be varied. In the case of rabies, some areas are experimenting with oral vaccines for raccoons. But for other, more vulnerable species, the best solution may be to promote biodiversity and native landscapes. Landscapes filled with native plants would not only be less stressful for the animals—giving their immune systems a boost—they would also support more diverse fauna. A wider variety of animals within city limits would likely reduce the spread of some diseases by introducing new hosts, some of which may be better at fighting off pathogens.

Sources:

Bradley, C., & Altizer, S. (2007). Urbanization and the ecology of wildlife diseases. Trends in Ecology & Evolution, 22(2), 95-102. DOI: 10.1016/j.tree.2006.11.001

Wilson, M. L., Bretsky, P. M., Cooper, G. H., Jr., Egbertson, S. H., Van Kruiningen, H. J., & Cartter, M. L. (1997). Emergence of raccoon rabies in Connecticut, 1991-1994: Spatial and temporal characteristics of animal infection and human contact. The American Journal of Tropical Medicine and Hygiene, 57(4), 457-463. PMID: 9347964

Photo by clarkmaxwell.

Do people follow trains, or do trains follow people? London’s Underground solves a riddle

Notting Hill Station, London Underground

Transit oriented development is all the rage in urban planning these days. Proponents claim new transit coupled with mixed-use zoning will ignite growth in otherwise struggling areas. Detractors claim running new lines to low-density neighborhoods will leave cities burdened with white elephants. Overall, reality is probably somewhere in between, but transit and population density is a real chicken-or-the-egg problem. Which comes first?

Greater London is perhaps the perfect region to explore this question. Home to the world’s first metro system, London was also one of the first cities to explore transit oriented development. The Metropolitan Railway (a predecessor of today’s Underground) had the authority to coordinate rail lines with housing development, which it leveraged to the tune of 15,000 houses on 2,200 acres. It also built lines to serve neighborhoods already teeming with people.

But the Metropolitan Railway did not have a monopoly on rail transport between London and the suburbs. Other companies both served existing towns and built new lines to otherwise underpopulated regions. Some teamed up with developers in the hopes of ensuring a steady stream of riders. Other lines were purely speculative, with owners hoping that development would follow.

Fortunately for the Metropolitan Railway and other companies, the majority of speculative rail lines were successful in spurring growth, according to a 2007 research paper. The study examined two hypotheses: one, that transit oriented development works, and two, that transit follows population density. Both proved to be true. The paper’s author found that population density was driven by the presence of train stations, and that the presence of train stations could be explained by population density. For each one percent increase in rail capacity, population density increased nearly a quarter of a percent. And each one percent increase in population density over ten years led to about a one quarter percent increase in train station density. “Train service led to a suburbanization of countryside and increased population of new developments, which attracted more railways,” he wrote.
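Read as constant elasticities of roughly 0.25 in each direction (a simplifying assumption on my part, not the paper’s exact specification), those quarter-percent figures imply responses like the following:

```python
# Back-of-envelope constant-elasticity sketch of the two findings:
# population density responds to rail capacity, and station density
# responds to population density, each with an elasticity near 0.25.
ELASTICITY = 0.25  # approximate value quoted in the text

def pct_response(pct_change, elasticity=ELASTICITY):
    """Percent change in the dependent variable implied by a given
    percent change in the driver, under constant elasticity."""
    return ((1 + pct_change / 100) ** elasticity - 1) * 100

# A 10 percent boost in rail capacity implies roughly a 2.4 percent
# rise in population density (and likewise for station density).
print(round(pct_response(10), 1))  # 2.4
```

The feedback loop the paper describes follows from chaining the two relationships: more capacity raises density, and higher density in turn attracts more stations.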

Mass transit, at least in the case of rail, appears to both drive development and benefit from it. Furthermore, the study claims, the Underground has helped build London’s city center into the commercial powerhouse it is today by fostering commuting from the city’s periphery. With more people commuting from the suburbs, commercial space could expand within the city center. And as people moved out of the city center, the sorts of traffic the Underground carried began to change too, adding more business-to-business traffic than before. The shifting uses of the London Underground can also inform future transit planning. Building systems to merely serve existing commuting traffic will likely result in an overburdened system.

Source:

Levinson, D. (2007). Density and dispersion: The co-development of land use and rail in London. Journal of Economic Geography, 8(1), 55-77. DOI: 10.1093/jeg/lbm038

Photo by drewleavy.

Related posts:

Paying for proximity: The value of houses near train stations

Proximity sans convenience: Houses near train tracks and freeways

Can we feed the world and save its forests?

corn pile

Nine billion is the number that will define the 21st century. That’s the number of people expected to live on this planet by 2045. But 9 billion mouths are a lot to feed, and ideally each will have more than enough to eat. Achieving both goals—feeding 9 billion and feeding them properly—will be a herculean task. It’s also one that could eliminate the world’s great remaining forests, taking with them ecosystem services like carbon storage, reliable rainfall, biodiversity, and the magnificence of miles upon miles of wilderness.

So what if someone said it might be possible to ramp up food production while expanding forested land? Sounds like a perpetual motion machine, right? It smacks of the impossible. But that’s just what Eric Lambin and Patrick Meyfroidt are proposing in their paper which appears in the Proceedings of the National Academy of Science. They think that boosting agricultural intensity can not only increase agricultural output, but also reverse deforestation. In theory, doing more with less is a great way to solve the world’s problems, but these theories usually come with a catch. And Lambin and Meyfroidt’s catch is a big one—the radical globalization of the world’s food markets. It’s an intriguing proposal with some hard data to back it up, but the authors are also quick to brush aside the potential problems.

Globalization, Lambin and Meyfroidt claim, can focus farming where it is most productive. In their vision, different regions will specialize in different crops, eking the most out of every acre of cultivated land. Those crops will then be whisked across borders to where they are needed. It’s the exact opposite of every locally-harvested, free-range, organically-grown mantra that’s proposed to save the world.

The authors outline four facets of globalization that could help or harm the cause. The first, displacement, could go either way. Displacement moves crop production and timber harvesting from one place to another. As rich nations seek to protect their forests, for example, they must import their timber from elsewhere. The same goes for foodstuffs. In Switzerland, for instance, growing the country’s imported food requires one and a half times as much land as Switzerland currently cultivates. But displacement may not be all bad, they say. For every 20 hectares of forest protected in North America and Europe, only about one hectare of primary forest is logged in Russia or the tropics.

The second proviso they cite is rebound. As a new technology becomes cheaper, demand for it will increase as it becomes cost effective for more industries. Cheap gasoline, for example, has led to a proliferation of gas-powered devices (think leaf blowers, lawn mowers, and so on). More efficient agriculture, Lambin and Meyfroidt state, could also be more lucrative agriculture, driving an expansion of cropland rather than a reduction. They point to soybeans in Brazil and oil palm in Indonesia and Malaysia as examples. But they also counter that agricultural intensification since 1961 has reduced the amount of land that otherwise would have been needed to feed the world’s people.

The third caveat of globalization is cascade effects, whereby one crop displaces another onto previously uncultivated land. Biofuels are a case in point. As some farmers use their land to cash in on the craze, other land is put under the plow to produce foodstuffs.

Remittances are the last side-effect of globalization, and one that seems to be a net positive. Foreign workers often send money back to family members still in their home country, and the added influx of cash reduces the need for farmland in that country. With supplemented incomes, people can afford to purchase more food as opposed to growing all of it. Since subsistence farming is not very intensive, people’s remittance-assisted diets rely on less land.

Lambin and Meyfroidt offer four examples of countries in which agricultural output has risen alongside both population and the amount of forested land—China, Costa Rica, El Salvador, and Brazil. Each case appears to be unique, though, and is not enough to convince me that globalization and intensification are the best solution. China, for example, has turned to Africa to help feed its 1.3 billion people, locking up over 28,000 square miles of farmland and counting. Costa Rica’s success has depended on foreign groups purchasing land for conservation (a sort of highfalutin remittance), while El Salvador relies on small-scale remittances. Statistics on Vietnam’s and China’s forests have also been flattered by plantations, which some experts have called “ecological deserts” and poor substitutes for the real thing. And Lambin and Meyfroidt admit that many other countries in similar circumstances have not seen an uptick in forested land.

The authors also gloss over a major pitfall of globalization—pollution. Currently, shipping releases 1.12 billion metric tons of CO₂ per year into the atmosphere, or more than Germany, the world’s sixth largest emitter. Transporting food all around the world will only drive that number up. With climate change threatening to upend farming as we know it, pumping more CO₂ into the atmosphere may not be the best idea. That’s not to say the proposal is worthless—nine billion people are an awful lot to feed, after all—but there are some big questions that need to be answered before it should be seriously considered.

Source:

Lambin, E., & Meyfroidt, P. (2011). Global land use change, economic globalization, and the looming land scarcity. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1100480108

Photo by ConanTheLibrarian.

Related posts:

Small farms in modern times

The slings and arrows of geography and clean water

If the world’s population lived in one city…