Monday, October 15, 2007

I confess: I can't do statistics--and that's what my prof wanted for this project!

Summary of “Service Changes in a Central-Place System”
Most studies on rural service provision have focused on one period of time and within the context of rural depopulation, ignoring functional complexity and possible relationships between service provision and rural development policy. Though waning, the relationship between rural areas and the towns which provide their services is important for understanding rural life and economic policies intended to protect it. This 1986 study by Hourihan and Lyons analyzes the evolution of the functional complexity of rural service provision within a stable population, and amid governmental policies designed to maintain that population and its areal distribution.
Technology shifts, increased specialization, and economic rationalization led to declines in established services, while new technologies, increased affluence, and changing consumer tastes gave rise to new ones. Increased mobility correlated with a greater range of travel during which goods could be purchased; combined with multi-purpose shopping, this is associated with the demise of intermediate levels of the service-provision hierarchy.
When Ireland joined the European Economic Community (forerunner of the EU) in 1973, it had to adopt Community policies that subsidized rural/agricultural populations, significantly raising farm-family incomes and standards of living. The Irish national government also created economic policies specifically designed to attract multi-national firms to small towns and villages, providing new sources of income to the rural/agricultural populations. This resulted in rising population levels, rapid economic growth, greater affluence, and the emergence of a more sophisticated consumer market.
The original study of County Tipperary, conducted by O’Farrell in 1966, classified 143 central places, all of which were revisited for the 1986 study; no new central places were identified in the latter. Both studies recorded every functional unit provided by service establishments, noting the number and spatial distribution of the outlets, and thus indicate the direction of change over time.
O’Farrell’s original list of 43 services was retained, along with his still valid four-tier hierarchy, though a number of new services were identified and added. All 143 central places in the O’Farrell study still were active during the 1986 study period, and the range of services remained relatively stable, although the total number of functional units decreased by 13.9%, mostly due to a decline in the number of grocery outlets. In general, lower-order services contracted and higher-order services expanded. The differences between the two study periods reflect changes in the economic and social organization of Irish society: larger-scale operations, changing technologies, rising affluence, and changing consumer tastes.
To examine these changes, the 1966 services were categorized as contracting, expanding or relocating.
♣ 15 services declined; for 7 of these functions, proportional losses were heaviest at the village (intermediate) level, 4 in major towns (highest level), 3 in minor towns (2nd highest level), and 1 at the hamlet (lowest) level.
♣ 17 services expanded, 13 concentrated at the town levels, and 4 at the village level, reflecting the services’ higher order, and their consequent requirement of larger markets.
♣ 8 services relocated, half moving up, half down the order.
This indicates a broad pattern of service provision migrating from the village level to that of the major town. Between 1966 and 1986, 23 of the 41 services measured became less concentrated at any one level, 13 more so, and 5 did not change.
Specialization in functions was most common at the lower levels of the hierarchy, indicating a decrease in their overall availability; increasing functions suggested wider availability. Functions that decreased due to economic rationalization showed no significant change in concentration.
Of the new services that emerged, less than 10% of their functional units occurred in villages and hamlets, a figure in keeping with the rising affluence and cosmopolitanism during the study period.

Three major conclusions were drawn:
1. Overall, services remained relatively stable
2. Decreases in service provision occurred at the village level; increases at the major town level
3. New services emerged as two broad groups: leisure-hobby and business-personal

The stability of the overall central-place system is the result of rising affluence and increased consumer demand from the sustained rural population base—suggesting that the EU and Irish national government policies were effective. However, their costly implementation may have contributed to problems within the national economy—recession, unemployment, and rising emigration.
The decline in service availability can be explained by the increased mobility of rural residents: Commuting from village to town for work drew villagers to the markets of the larger towns. Thus demand at the village level waned, reducing the number of customers to below the minimum threshold. However, this trend was somewhat mitigated by the farm-families’ propensity to patronize local establishments. Perhaps more a social function than a purely economic one, this suggests that policy should be aimed at supporting farm-families rather than rural industrialization.
A second unintended consequence of rural development policy—and its effectiveness—is new consumer demand. As rural affluence rose, demand for new and better services increased. These were satisfied at the town level, where access to larger potential markets reduced economic risk for the service provider, and which accommodated the larger minimum threshold required by new services. Also, economies of scale can be generated at the town level, undermining the economic competitiveness of the village-level providers.
Yet increased demand draws higher-order services to the lower levels of the hierarchy (e.g., gas stations became more diffused), and may do the same for new services such as video rentals. This trend may act to stabilize the economic viability of the village level. Of course, these policies marginalize non-commuters such as the poor or the elderly, who must pay higher prices for the remaining services, or do without.
-30-

More on Alternative Energy

Suggestions for a Regional Development Plan Utilizing Three Renewable Energy Sources for Texas


Texas is faced with energy policy issues unique among the 50 states due to its geographic size, current infrastructure, population, and available natural resources. Though at present the state is poised to shed itself of any tie to a public electric utility, this paper constructs a geographic argument in support of a regional development plan for renewable energy in west Texas. State level input in both regulatory and policy-making capacities would greatly benefit residential and commercial electric customers, local municipalities, and the state’s ecology.
Texas has three sources for electric power generation that are underutilized: solar, wind, and biogas. Through a combination of large- and small-scale power generation facilities utilizing these sources, Texas can continue to expand its supply of electricity, better secure electricity transmission through multiplicity, and improve air quality without placing a prohibitive economic burden on for-profit retail providers. However, changing current practices would require well-coordinated efforts between the various levels of government, especially urban municipalities, power generators, residential developers, and private individuals. For these reasons, there is a need for state-level policy implementation.

CURRENT INFRASTRUCTURE
Texas has its own electricity grid supplied by power generated mostly from locally available resources, namely coal, natural gas, hydroelectric and wind. Nuclear power accounts for roughly 9% of Texas electricity generation, but is not powered by a local resource. Together, coal and natural gas account for nearly 88% of power generation in Texas, according to the Texas Energy Planning Council’s most recent report, from 2004. However, it states, Texas hit peak production of its coal reserves in 1996, and now imports almost half of its supply for power generation, mainly from Wyoming. This represents a significant transport cost, and since the 1990s, power generated from coal has declined in favor of locally available natural gas. But demand for natural gas, according to the report, is expected to keep prices high, and therefore to discourage further growth in that sector. The limited supplies of coal and natural gas make them non-viable for meeting an expanding demand for electric power. To meet that demand, new sources of energy must be tapped.
No one form of energy will be enough to meet continuing demand; therefore, this paper proposes developing a coordinated set of policies designed to foster development of renewable forms of energy. A comprehensive set of policies would coordinate conservation efforts with building practices and codes, land use planning, and, most importantly, public transportation, requiring very long-term construction and redevelopment efforts. Austin Energy has set a goal of producing 30% of its power from renewable sources, part of the city of Austin’s Climate Protection Plan. Austin’s comprehensive plan addresses the energy problem in the holistic way necessary to solve it.
To transition toward that necessary overall goal, this paper focuses on three renewable energy sources that have proven technologies ready for immediate implementation at state, county, city, and individual levels by connecting to existing physical and organizational infrastructures. Wind, solar, and biogas sources already are being developed by Austin Energy, supplying power to the state capital, and demonstrating the viability of such renewable energy.

WIND POWER
Texas has become the largest producer of wind-generated electricity in the U.S., with an installed capacity of 2,768 megawatts, according to the American Wind Energy Association, and has another 1,000 MW of production capacity under construction. Refinements in system design have dropped the price of wind power to between 3 and 5 cents per kilowatt-hour, roughly what coal or natural gas costs, at 2.9 to 3.6 cents per kilowatt-hour; the latter figures come from the state of Oregon’s Energy Review, which compared the costs of the various methods of power generation commercially available.
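To show how close those per-kilowatt-hour figures really are, here is a back-of-the-envelope comparison. The 12,000 kWh/year household usage is an illustrative assumption of mine, not a figure from the AWEA or the Oregon review.

```python
# Back-of-the-envelope annual cost comparison using the cents-per-kWh
# figures cited above. The 12,000 kWh/year household usage is an
# illustrative assumption, not taken from either source.
ANNUAL_KWH = 12_000

def annual_cost_dollars(cents_per_kwh, kwh=ANNUAL_KWH):
    """Annual electricity cost in dollars at the given per-kWh rate."""
    return cents_per_kwh * kwh / 100

wind = (annual_cost_dollars(3.0), annual_cost_dollars(5.0))
fossil = (annual_cost_dollars(2.9), annual_cost_dollars(3.6))

print(f"Wind:     ${wind[0]:.0f}-${wind[1]:.0f} per year")
print(f"Coal/gas: ${fossil[0]:.0f}-${fossil[1]:.0f} per year")
```

At these assumed rates, a household's "wind premium" is at most a couple hundred dollars a year, and at the low end of the wind range there is effectively no premium at all.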
Texas already is being developed for wind. This presents an opportunity to address an unrelated problem: school funding. In a presentation to the Texas Energy Planning Council in Houston, The Wind Coalition showed that wind power had attracted more than one billion dollars of private investment. Local school taxes paid by wind power plants were more than three times those paid by natural gas plants and almost twice those paid by coal plants, generating more than 23 million dollars from 2002-2004. Furthermore, the industry created, according to the TWC, more than 2,500 jobs statewide.
However, their research also showed a significant time lag in the implementation of wind power. Because large-scale wind-power facilities require sustained high wind speeds (class 4 or above), their utilization is geographically limited to the west Texas and Oklahoma panhandle regions. Getting power from those parts of the state to the population centers in the north central, eastern, and Gulf Coast regions will require extensive power transmission lines. The financial cost of a project to create the needed lines, TWC estimates based on ERCOT and Public Utility Commission sources, is low, at 5 cents per month per customer. But the cost in time is daunting: constructing a wind-power plant takes only one year, but connecting it to consumers takes five years or more.
One possible policy goal to solve this problem would be the creation of a Tennessee Valley Authority-style public works project. For the sake of argument, call it the Texas Sky Power Authority. Its mission would be to develop property locations; establish ownership through the issuance of permits; oversee standards and quality regulation in conjunction with the Texas Commission on Environmental Quality, the U.S. Environmental Protection Agency, the PUC, and other state and federal agencies; and conduct other necessary and prudent operations to install and maintain the needed transmission lines. Small-scale wind power generation for residential or commercial uses should be overseen by local municipalities under guidelines developed by the TSPA.
A less grandiose goal would be to develop financial incentives such as bonds or loans, or an increase in the state tax cap so that city governments can participate in capitalization efforts. For a model, one need only look as far as McKamey and Sweetwater, where Austin Energy gets the supply for most of its 665 million kWh in subscriptions. Austin Energy is developing another 225 MW of wind power according to its website, austinenergy.com.

SOLAR POWER
On average, Texas receives 72% of the sunshine possible (that is, if every day of the year were perfectly sunny with no cloud cover), and as much as 80% in far west Texas; even the lower end of the range, 60% on the eastern half of the state, is somewhat offset by the solar intensity there, according to data from the State Climatologist’s Office. A 1995 report to the Texas Sustainable Energy Development Council prepared by Virtus Energy Consultants states that “Solar radiation is available throughout the state in sufficient quantity to power . . . water heaters and off-grid photovoltaic cells,” and that solar power of various kinds “can become major contributors to [sic] satisfying the future energy needs of Texas.” The report calculates that approximately 4,300 quads (one quad is one quadrillion BTUs) of solar energy strike the state annually, on average, while Texas consumes only about 10 quads. Modern solar panels convert approximately 20% of the radiant energy into electricity, representing a potential energy source of 860 quads, nearly nine times what the U.S. as a whole consumes (Time Magazine estimated U.S. consumption at 97.6 quads, based on government data).
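The report's arithmetic is easy to verify; the sketch below uses only the figures quoted above.

```python
# Sanity check on the solar figures quoted above (Virtus report and
# Time Magazine's U.S. consumption estimate).
incident_quads = 4300   # solar energy striking Texas annually (report)
texas_use_quads = 10    # Texas annual consumption (report)
us_use_quads = 97.6     # Time Magazine's U.S. estimate

# 20% panel efficiency, written as 20/100 to keep the math exact:
recoverable = incident_quads * 20 / 100

print(recoverable)                           # 860.0 quads
print(recoverable / texas_use_quads)         # 86x Texas consumption
print(round(recoverable / us_use_quads, 1))  # ~8.8x U.S. consumption
```

Note that 860 quads is about 8.8 times the 97.6-quad U.S. figure, which is why the potential is best described as "nearly nine times" national consumption.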
Like any energy source, not all of it is recoverable, but by utilizing even that twenty percent of this resource through commercial and residential applications, base load on the electric grid can be reduced, which is especially beneficial during the peak usage months of summer. Commercial rooftop panels are available (residential ones cost around $20,000), and some businesses, such as Whole Foods, are adopting them to reduce the amount of electricity they purchase to run their facilities. For residential purposes, the report states, homes with less than half of their rooftops covered by 10%-efficient solar cells would generate enough electricity for their own usage, again reducing the demand on the grid during peak usage. Rooftop solar technologies, according to Nova, could provide up to 40% of a city’s peak electric demand. Widespread installation would reduce costs to customers during the most expensive usage period and provide extra grid capacity.
The recent episode of Nova entitled “Saved by the Sun,” which reported the above, also highlighted Germany, which has created financial incentive programs to encourage homeowners to invest in rooftop solar applications. Here in the U.S., so has the state of Oregon, whose initiative is much like the plan from Austin. In all, fifteen states, Nova says, offer similar incentives. The episode reported a single home eliminating 70 tons of CO2 emissions during its eleven-year use of solar panels.
The German program also installed panels alongside existing highways. Texas has a great number of rooftops and highway miles available for such installations, but lacks specific legislation defining the sale of solar electric power from the residential or commercial producer to the retail provider. Oregon developed a “net metering” law that ensures electric customers with solar production capacity are entitled to sell their surplus power back to the utility, in a sense only paying for the net difference between their consumption and production over a month’s billing period.
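The net-metering idea described above is simple billing arithmetic. The sketch below is a minimal illustration; the kWh figures and the 10-cent rate are hypothetical, and the math is done in integer cents to keep it exact.

```python
# Minimal sketch of Oregon-style net metering: over a billing period
# the customer pays only for the net difference between consumption
# and on-site solar production. All kWh figures and the 10-cent rate
# are hypothetical, for illustration only.
def net_metered_bill_cents(consumed_kwh, produced_kwh, rate_cents_per_kwh=10):
    """Monthly bill in cents; a negative result is a credit to the customer."""
    return (consumed_kwh - produced_kwh) * rate_cents_per_kwh

print(net_metered_bill_cents(900, 600))  # 3000 cents: customer owes $30
print(net_metered_bill_cents(700, 800))  # -1000 cents: a $10 credit
```

The design question a Texas statute would have to settle is exactly the one this function glosses over: whether surplus production is credited at the full retail rate, as here, or at some lower wholesale rate.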
The second policy goal herein recommended is the development of laws, codes, and regulations that define and monitor the sale of surplus power and the rights of both residents and commercial or retail operators to sell surplus power. To complement such legislation, a statewide solar loan and solar rebate program could be created, modeled on that now offered by Austin, Texas, to provide the market.

BIOGAS
Biogas sources are sometimes lumped together under the umbrella category of biomass, which includes energy sources from plant matter as well as reclaimed methane. However, because plant fuel sources require space to grow the crops and because their agricultural production is input-intensive, there is little advantage to investing in these sources. As technology for utilizing plant debris left over from agricultural operations improves, this situation may change, making biomass an affordable combustible fuel. Methane reclamation, on the other hand, taps an energy reserve that already exists, and offers the additional benefit of treating waste streams in an ecologically sustainable way and more cheaply than conventional systems.
Biogas is the broad term for methane reclamation processes, of which two are viable for Texas in the long term; these systems would be most practical operating at the municipal level, and perhaps best in private operations. For example, individual ranching operations or whole cities might invest in digester units that process human or animal waste.
Millions of small-scale digesters, according to the Virtus report, are in use in the developing world, mainly in India and China. Various sources report on two test facilities that have been in use in South Africa since 2001. The first was a household of eight family members (and two cows), the second, a school of 1,000 students (with access to other local cows). The technology also is being adapted to dairy and beef cattle operations (no human input needed), and can be adapted to housing developments. The process has been experimented with since the 1950s, and in a wide range of rural settings in Africa, India, Nepal, and China, so the technology is mature—and cheaper than solar power electrification. In mainly rural west Texas, where some of the largest cattle lots are, anaerobic digesters could be installed piecemeal by their operators, and later incorporated into the grid to sell surplus power.
At the municipal level, the city of Austin powers its Hornsby Bend wastewater treatment plant from recovered methane. Austin Energy is harvesting methane from three landfills in Austin and San Antonio. Both systems could be employed more broadly across Texas. Some cities are already supplementing their budgets through the sale of fertilizer made from treated sewage, such as Denton’s Dyno Dirt and Austin’s Dillo Dirt. The Austin facility treats 45 tons of primary and secondary waste each day. Upgrading plants to capture methane from the solid wastes already being produced would require only capital improvements to existing facilities.
Turning wastewater into fuel not only generates new, clean energy, it also capitalizes on a collection system that is currently in place—the sewer system. With even smaller cities like Denton having hundreds of miles of pipe in place, the wastewater transport systems are a significant physical infrastructure that is underutilized. Because that system is in place, development efforts can focus on the water treatment facilities, speeding the implementation of this large resource.
Mandating the conversion of solid wastes to biofuels of the various kinds, and providing information and financial assistance to municipalities, should eventually offset the costs of treating urban effluents before they re-enter surface and groundwater streams. In addition, concentrating efforts on the urban centers in the northern parts of the state would also improve water quality downstream.



CONCLUSIONS
There is a geographic connection that links the renewable energy sources described in this paper. The western portions of Texas have high sustained wind speeds, making the region the natural choice for wind farms. Cattle ranching is widespread in this region, so establishing biogas operations there also is a practical location choice. These two operations can coexist in the same space, maximizing the yield of the area. Finally, insolation rates are highest in the west, and the mostly undeveloped land can accommodate large-scale solar power plants.
In sum, developing the west Texas region as an energy production corridor provides economic resources that may allow the local population to remain on land that is under threat from water rights speculators and from a changing livestock and agriculture market. And as the installations can be done in an ecologically sustainable way, a popular base of support can be built between environmental groups, business, industry, government, and private citizens.
Investing in these sources of renewable energy will provide needed energy and improve environmental quality of the air, water, and land as fossil fuel sources are gradually supplemented and supplanted. By developing a combination of large- and small-scale facilities, present investments in existing fossil fuel sources will not be jeopardized, but perhaps energized to participate in the energy source transition without significant reinvestment capital expenses or the wholesale loss of currently marketed commodities.
For more on the potential of sustainable energy sources in Texas, see maps depicting their geography from the 1995 Virtus report to the Texas Sustainable Energy Development Council.

from the field of economic geography 3

Alternative Energy Pipedreams and End-of-Pipe Realities


1. "The days of cheap energy are over." Critically evaluate the nature of fossil fuel supplies and the role of alternative energy sources.

Robert A. Heinlein dealt with this troublesome issue by dreaming up a device that tapped energy from a parallel universe. That solution created problems of its own, but that flaw, and even its far-fetched nature, are not unlike the energy solutions being considered by real-life policymakers. When it comes to energy, as with environmental degradation, people have a tendency to bury their heads in the sand. It is a difficult truth to face: our way of life is coming to an end.
Industrialized and post-industrialized societies depend on cheap, abundant energy. It drives the economy in a metaphoric sense the way it drives machines in the literal one. We use energy to get to and from places, to do work, to light the areas in which we work, to play, to procreate, even to exercise. What once was done by human or animal power is now done by electrical power, or is powered directly from petroleum. “Labor-saving” devices clean dishes and clothing, reap weekly harvests of useless lawn clippings, blow leaves back onto the grass from whence the wind blew them, clean our teeth, and so on. Many of these things we can live without.
But energy also heats and cools our homes, provides our food and clean drinking water, medicines, medical diagnostics, and non-surgical treatments for myriad ailments, including cancer; we use energy to build our environment physically, economically, and socially. The radical change in education alone is mind-boggling to me—in my college career we went from typewriters to computers, and now whole lecture courses are presented online instead of in class. Enormous power is generated and consumed in the first world, it is endemic, it is systemic, it’s becoming pandemic, and we can’t live without it. Well, not the way we’ve been living. That is because, like Heinlein imagined, we’re getting it from the wrong source.
The industrial revolution was fueled first by coal and then by oil, and after 150 years of thriftless use, oil reserves are now running out. So are the other so-called fossil fuels, coal and gas. The hope of the 1950s, nuclear energy, has the same problem of limited supply. Unlike Doritos, which we can crunch all we want because they’ll make more, fossil fuels are exhaustible—no one can make any more.
SUPPLY
Petroleum has been one of the most useful things humankind ever encountered. It has provided power to generate electricity, heat homes, fuel machines, to lubricate and cool those machines, even to grow crops (it’s used to make fertilizers), and, literally, put roofs over our heads. Its handiness has allowed it to pervade most every niche of the world economy. But it comes at a high price, and not just one that’s figured per barrel. In fact, that is one of the problems with oil: it has many hidden costs. Subsidies from government defray the costs of almost everything from extraction to insurance, and whole armies secure its pumps and pipelines. One thing, however, is clear—there’s only just so much of it.
Petroleum geologists estimate the quantity of oil in two ways: what’s in the ground, and what we can get out of the ground. Because the oil business is profit-oriented, production goes on at only the largest fields, most companies investing only in operations that can recover one billion barrels over the life of the oil field. There are many smaller reserves, and as supplies dwindle it is conceivable that tapping into them will become profitable. Let’s assume then, that eventually all of the oil will be recovered, the most optimistic stance. How much is there?
Estimates vary. And, of course, the people most in the know are tight-lipped about things like quantity and location, because they don’t want competitors to use their hard-won and costly information. But scientists have ways of getting around such matters—they use government estimates.
The U.S. Department of Energy (DOE) figures released in 2004, and used by Paul Weisz to estimate total world supply in the journal Physics Today, yielded an “optimistic estimate” of 2200-3900 billion barrels—twice the “proven reserve,” which seems to mean that there is really only half that much. Weisz continued estimating, figuring in rates of population growth, and concluded that the oil will run out “well within a human lifespan,” thirty years or so. Others give it 45. Not long, in any case—between, say, the term of a new home mortgage and the expected life of a commercial retail building. This may be an inappropriate remark for Economic Geography, but forget the price of oil—there just isn’t enough of it for that to matter anymore. Weisz did not stop his estimating at petroleum; he estimated the potential supplies of several other energy sources, so let’s look at those, too.
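The shape of that conclusion can be sketched with simple depletion arithmetic. The consumption rate of roughly 30 billion barrels per year and the 2% annual growth rate below are my own round-number assumptions, not Weisz's exact inputs, but they land in the same few-decades range.

```python
# Depletion sketch in the spirit of Weisz's estimate. The ~30 billion
# barrels/year consumption rate and the 2% annual growth rate are
# illustrative assumptions, not figures from the Physics Today article.
def years_until_exhaustion(reserves_gbbl, annual_use_gbbl=30.0, growth=0.02):
    """Years until cumulative consumption exceeds reserves, with
    consumption compounding at the given annual growth rate."""
    years, consumed, use = 0, 0.0, annual_use_gbbl
    while consumed < reserves_gbbl:
        consumed += use
        use *= 1 + growth
        years += 1
    return years

# Weisz's "optimistic estimate" band, in billions of barrels:
for reserves in (2200, 3900):
    print(reserves, years_until_exhaustion(reserves))
```

Under these assumptions, even the optimistic 2200-3900 billion barrel band is gone in roughly 46 to 65 years, consistent with "well within a human lifespan."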
World reserves of natural gas might last 45-60 years, but Weisz notes that 58% of those reserves are in Russia, Iran, and Qatar. Obviously, there would be difficulties in utilizing those reserves, not the least of which is old-fashioned geography—how do we get the gas to the places we want to use the gas? Energy, it must always be remembered, is lost during transmission, and moving gas from one continent to another means a lot of transmission.
Coal is relatively abundant—there’s 90 to 120 years’ worth of it. But that assumes that current rates of consumption do not change, and that all grades of it are used, two unlikely assumptions. Not all grades of coal are useful with current technology, although new processing techniques are being developed that remove pollutants and make coal cleaner and more efficient. That makes the first assumption the less likely one: consumption is likely to increase.
Unlike with Congress, there is a real nuclear option in energy production. Weisz is unenthusiastic about its use, though, noting that it currently supplies only 8% of total U.S. energy production. Weisz estimates that U.S. reserves of uranium will last 35-58 years—were it to replace coal use. That seems unlikely in the foreseeable future, and besides, “technology can save us.”
In a paper by Leon Walters and Dave Wade, both of Argonne National Laboratory, it is speculated that the combination of nuclear power and hydrogen fuel cells can extend energy production indefinitely. The authors propose that the new generation of nuclear reactors, sodium-cooled fast reactors, can be used to create hydrogen while they also generate electricity. The hydrogen would be used in fuel cells for transportation, replacing gasoline or diesel.
Presently, most nuclear reactors do not run coolant water hot enough to produce hydrogen as a by-product, and furthermore, they leave most of their fuel, uranium with an atomic weight of 238, unfissioned (conventional reactors consume mainly the rarer isotope U-235). However, the sodium-cooled fast reactor would breed net fissile fuel from U238, extending the reserves anywhere from “decades to thousands of years.” But more importantly to their proposal (and the DOE’s Generation IV Plan), the coolant system would heat water to 900°C (that’s 1652°F for the Americans), hot enough to produce hydrogen by thermochemical water splitting, whereas lower-temperature coolant water requires additional processing like electrolysis to make hydrogen, robbing it of any utility because of the extra energy used.
Walters and Wade credit nuclear power with 17% of energy production, more than twice Weisz’s estimate, but it’s apples to oranges: world production versus U.S. production. The scale is further confused when Walters and Wade suggest that, because there are already 435 nuclear reactors in the world (103 in the U.S.), an additional 241 to supply the needs of U.S. transportation “is certainly achievable.” They do not even hint at how many China will need.
Although these new reactors might provide energy for a long, long time, they are not indefinite. Only natural geoprocesses can make that claim, so Walters and Wade calculate that the equivalent to their 241 nuclear reactors is 640,000 windmills. Though utilizing that number of windmills could present spatial problems, the windmills do have a certain advantage over nuclear reactors: they do not require burying anything in concrete bunkers for 500,000 years. It leaves one to wonder what the people of Nevada would choose, a Don Quixote obstacle course across Yucca Mountain, or a national cache of the most toxic material on the planet—too bad they don’t get to make that choice themselves; over the state’s protests, the Yucca Mountain plan was approved by President Bush in 2002, and the application is being processed by the Nuclear Regulatory Commission. Mr. Bush is right about one thing on the matter, one location is better than the 126 in use now.
Probably, no one source of energy will meet all the need, and combinations of energy sources will be utilized as is the situation today. If we add up all of the current energy sources in use—the higher range of estimates—the energy won’t run out for 283 years. Sounds like a long time, but compare it to the reserve of wind and sea-waves (there’s a new kind of windmill for that, a kind of wavemill). It will be another billion years or so before the Earth’s orbit around the Sun decays to the point where the seas boil off and there is no more wind: 283 years of dirty energy versus 1,000,000,000 of clean energy—what kind of idiot would opt for the former?

CONSERVATION AND THE PROBLEM OF TIM ALLEN
Another 300 years or so is a long time, and with energy conservation efforts, the number could be greater. Usually, I’m for conservation, but if it means 500 years of dirty air instead of 300, I say to hell with it. Conservation won’t happen anyway, because of people like Tim Allen.
As a parody of a home improvement guru, comedian Tim Allen developed a certain catchphrase that epitomizes the American mentality: “More power!” Invariably, Tim would soup up anything with a motor, with accidents and comedy ensuing. What’s disturbing is that the joke is an American mantra—consumer products become more powerful, often unnecessarily so, and they use more energy. The fuel-efficient foreign cars that rocked Detroit City in the 1970s and ’80s have been supplanted by monstrous and gas-thirsty SUVs, for example. Gillette now markets a battery-powered razor that stimulates the skin, making hair stand upright, but not running the razor.
The process has been ubiquitous and pervasive: wind-up phonographs became laser-using CD players, pointing sticks became laser pointers, pedal toy cars became motorized toy cars, iron steam irons became plastic electric steam irons, brooms became vacuums. Almost everything has been electrified: sewing machines, baby swings, toothbrushes, fans, screwdrivers, can openers, carving knives; Ben-Wa balls became “electric massagers,” so that now even masturbation requires electricity. Timothy Wirth et al. said in Foreign Affairs that over “the past decade, total world electricity demand grew by 29 percent, and it is likely to continue growing.”
Conservation is good for many things, like animals, trees, and water. And it is for energy, as well, but only when it conserves clean energy. If it conserves fossil fuel energy, then conservation only draws out the problems. At best, conservation efforts might keep things running while the energy economy transitions from fossil fuels to alternative, renewable energy sources. At worst, conservation efforts can delay that transition past the point where the planet can absorb the pollution from fossil fuels.

ALTERNATIVE ENERGY SOURCES
American culture is wasteful and stupid, and nature eventually eliminates such things. One way or another, energy usage and sources will change. But things are not entirely without hope, for there are alternative energy sources already available. The aforementioned wind power is the cleanest of all sources, closely followed by solar power, but not to be outdone by geothermal, biogas, and the claims of hydroelectric. The last is the most problematic ecologically, not because of production waste but because of its land surface disruption. This is cause for concern from the standpoints of both ecology and social justice, as it displaces many farming communities and inundates agricultural land.
Hydroelectric power generation requires massive dams to be built (think of Hoover Dam) and back up the world’s largest rivers to form their reservoirs (think of Lake Mead—or France, for that matter, as one estimate found the size of that country comparable to the amount of land covered by dam reservoirs, about 545,630 square km). This creates problems for riparian vegetation and wildlife as it disrupts nature’s cycle of ebb and flow. Since that was learned, many dam operators have taken steps to simulate their rivers’ flood cycle, but this causes other problems, like erosion, as massive amounts of water are jetted out of a valve. But worse, most of the outlets drain from the bottom of the reservoir where water temperatures can approach freezing. From the piscine point of view, it’s like someone flushing the toilet during your shower, only in reverse. The fish get too cold to live near the dam, and so does vegetation, so the artificial flooding is no better in some ways than no flooding.
Mechanical problems like those are relatively easy to fix, but dams have hidden environmental costs that rival or exceed the problems associated with fossil fuel sources. As vegetation washes into the reservoir from upstream, the dams accidentally generate methane gas, which has a greenhouse effect twenty times that of CO2. So, while it is a renewable source of energy, hydroelectric power is not as “green” as had been believed, even for the “micro” facilities, which supply about 30 megawatts of electricity or less.
A new way to take advantage of water power is through wave capture. Jeff Nesmith reports for the Cox News Service that a “prototype has been installed at Orkney, Scotland . . . a single ‘wave farm’ occupying a square kilometer—about one-fourth of a square mile—of ocean surface [which has] a capacity of 30 megawatts . . . enough electricity to supply around 20,000 homes.” But these are, of course, geographically limited—in the U.S., only the northeastern seacoast has waves powerful enough to generate electricity with current technology. But the price is right: comparable to wind power, according to Nesmith’s sources, at around 4 cents per kilowatt-hour (kWh). Coal-fired plants operate at about 3½ cents per kWh.
Geothermal sources hold promise, but have a similar problem of geographic limitation. It’s a great resource in Iceland, as the volcanism of that area releases great amounts of steam. Not so expedient in, say, Arizona. Being a sci-fi buff, though, I have the cinematic fear of a Crack in the World. Imagine poking food with a fork to get it to cool, then imagine poking the Earth on an industrial scale. And geothermal power is not reliable anyway—even Old Faithful, the famous geyser in Yellowstone National Park, is running out of steam, the average interval between eruptions having gone from 76 to 80 minutes.
The misnomer on the alternative energy scene is solar power. Indeed, all other energy sources are alternatives to it. The power of the Sun was utilized by Roman site planners to help heat the baths, but the first mechanical devices to capture the Sun’s energy were steam-driven motors developed around 1860. Solar power is a proven source of energy, and the costs of its production have dropped dramatically since the days of the first photovoltaic cells, which were used to power satellites. Currently, “solar farms” are in place and producing power all over the world, with large operations in the American southwest.
Arizona is the first state to mandate that a portion of its power portfolio come from renewable sources, but the largest facility produces only one megawatt (MW) of power. By 2007, they’ll need to produce ten megawatts to meet state requirements. Unfortunately, solar cells lose efficiency in the heat, so capitalizing on the Sonoran sun is difficult. Across the California border, however, cooler climes reign, and provide a home for the world’s largest solar power generation facility, in Kramer Junction. The facility has a maximum output of 354 MW. That’s better than ten times the power of one of the state’s micro hydroelectric dams.
But biogas is the really clever shit—literally. This should not be confused with the phonetically similar biomass type of electricity production, which is inherently wasteful because it burns specially grown organic material to drive turbines. Biogas is more of a reclamation process, harvesting methane from human and animal feces. It is energy production cum sanitation: toilets feed into a “digester,” which is part septic tank and part Petri dish. In it, bacteria digest the waste and produce methane which is used to fuel cook stoves and AC power generators. The by-product of the process is a nitrogen-rich sludge that can be used as fertilizer.
Biogas is a wonderful alternative for many reasons: it is not geographically limited, and it requires no regional or local infrastructure—no power grid, that is—so it can be introduced piecemeal. That also makes it less of a security risk, because no single event, like a bombing, can disrupt power to all—as such an event was feared to have caused the recent blackout in the northeastern U.S. and the border provinces of Canada. But also, biogas is a renewable, clean energy source. It removes sanitation waste from the potable water system and eliminates the need for wastewater treatment plants (saving that energy). And it can operate at different scales.
Two test facilities have been in use in South Africa since 2001, a household of eight family members (and two cows), and a school of 1000 students (with access to other local cows). It also is being adapted to dairy and beef cattle operations (no human input needed), and can be adapted to housing developments. The process has been experimented with since the 1950s, and in a wide range of rural settings in Africa, India, Nepal, and China, so the technology is mature—and cheaper than solar power electrification.
Shakespeare wished for a muse of fire, but I’ll take wind. Harnessing the power of the wind is one of those quintessential human glories, like mastery over fire or weaving textiles. Humans are great mimics: we learn from other animals, and we invent substitutes for their natural abilities. We must have studied birds for millennia, and eventually we learned to fly. But before that, we captured wind in sails to push ourselves across the great oceans, heated it in sacks to rise above the Andes, used it to pump water for our fields. Now that technology—a windmill—can make electricity.
Windmills have had their problems, with bird kills and some unfortunate sluicing effects, but new designs and better site planning have eliminated those. They do take up quite a bit of room, but nothing like a solar farm or a nuclear plant. They haven’t been exactly cheap to run, but now are running close to fossil fuels in cost per kWh. Most importantly, windmills have been criticized for not producing much electricity, but that’s changed, too. Currently, “the workhorse” of windmills is GE’s 1.5 MW turbine, but the company is testing a downwind model (its blades face away from the wind instead of into it) that can produce 3.6 MW. The windmills generally are grouped together on farms that now reap the wind as well as crops. The “largest wind farm east of the Mississippi” has twenty turbines and a capacity of 30 MW. Just as good as one of the micro hydroelectric dams, but without all the ecological problems or expense.
A combination of locationally appropriate energy sources can be developed to wean society off of fossil fuels. But this must also be coordinated with efforts in sustainable development. Planning communities that reduce energy costs by scaling to human walking distances, that use biogas as part of their waste treatment and energy production systems, and take advantage of geothermal qualities in building, such as underground housing or heat pumps, all must take place if we want to continue having things like televisions and refrigerators and ice cream. Otherwise, it’s back to the plow and the cow, and the sweat of the brow.
I often wonder what the future will bring for humanity. Like most Trekkies, I envision a united Earth, peaceful and prosperous and pretty. But in Star Trek’s future history, that came after the third world war. It is difficult to imagine a future without such an event. Often in science fiction those last battles are called the Resource Wars.
In the wee hours as I watch the overnight news, filled with stories of the Iraq War, terrorist bombings, and American imperialism, I think the Resource Wars have already started. How far will it go before the switch is made, before we stop squabbling over some goo that we know can’t last and isn’t good for any of us? How many people will die because power-mad fatcats want to make as much money as they can before the oil runs out? Will it be all of us?
I apologize for straying so far from the academic voice, but for what it’s worth, here’s my critical evaluation: Before the days of fossil fuel energy are over we will fight a nasty and devastating world war, masked by religious fervor to conceal the greed of world leaders in government and business, because their type has always held human life to be the cheapest resource of all.
-30-

from the field of economic geography 2

1. Is the world overpopulated?
Yes—but it’s hard to prove.
However, expert opinions vary. The absolute limits to human population vary as well, depending on what is measured. Some physical estimates explore the limits of things such as carbon or heat, and yield numbers high enough to require scientific notation—but under preposterous assumptions such as total algae food production and sinking all earthly carbon in human flesh. General approaches have lower estimates, 100 billion and less, but consider factors in a more holistic way that better represents real-world scenarios. A meta-analysis by geographers Van den Bergh and Rietveld obtained a median of 7.7 billion.
There are three distinct camps on the issue, the pessimists, the optimists, and the Marxists, and at least two are readily understandable. Every fact has an ambiguous interpretation and so consensus on any viewpoint is difficult to gather. Broadly, there are three components to consider when making calculations about human population limits: space, resources, and quality of life.

SPACE____________
Even with six billion humans crawling around it, most of the Earth remains uninhabited. While it is true that our species has settled on every continent, the total area of human habitation is extremely low compared to the total area of Earth, some 57,500,000 square miles. The population density of the entire human population on Earth, then, is the number of people divided by the square miles of area: 6,430,794,815 / 57,500,000 = 111.8399, or about 112 people per square mile. However, most estimates drop Antarctica out of the equation, reducing the total area to 52.5 million square miles, raising the density to 122 per square mile. This seems like a low number, suggesting that there is still room for expansion, but the habitable area of Earth is much smaller, 25% of the total land area, according to economic geography author Timothy J. Fik. So now we have to divide by only 14,375,000 square miles, and the population density is roughly 447 people per square mile.
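For what it’s worth, the density arithmetic above is easy to check. A quick Python sketch using the same population and area figures quoted in the text:

```python
# Population-density back-of-envelope, using the figures quoted above.
population = 6_430_794_815              # world population used in the text
total_land_mi2 = 57_500_000             # total land area, square miles
land_minus_antarctica_mi2 = 52_500_000  # same, with Antarctica dropped
habitable_mi2 = 0.25 * total_land_mi2   # Fik's 25% habitable estimate

print(round(population / total_land_mi2))             # prints 112
print(round(population / land_minus_antarctica_mi2))  # prints 122
print(round(population / habitable_mi2))              # prints 447
```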
However, this estimate is considered crude, and rightly so. People are not evenly distributed across the Earth, and the population densities for the major concentrations of people vary widely, clustering around riparian areas. As Fik points out for example, the Nile Valley supports 3200 people per square mile, “30 times greater than the world average,” while cities like Hong Kong and Singapore have more than 10,000 people per square mile, and Manhattan exceeds 50,000.
That sounds like quite a large number, but 50,000 people per square mile is still less than two-thousandths of a person per square foot in the Naked City, perhaps the strangest out of its eight million stories. By that math, Manhattan seems more like a city in The Day After instead, but one hardly can wander through its streets screaming “Where is everybody?” Before the next line, “Where has everybody gone?” could be uttered, one would be swallowed up by the crowd on the sidewalk, though likely to go unnoticed. Fik makes the point that physical space is not the problem, dramatically noting that all six billion of us could stand at arm’s length from one another in Texas, but think how bad rush-hour traffic would be then.
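For the curious, the square-foot conversion works out like this, a sketch assuming the standard 5,280 feet to the mile:

```python
# Convert Manhattan's 50,000 people per square mile to people per square foot.
people_per_mi2 = 50_000
ft2_per_mi2 = 5_280 ** 2          # 27,878,400 square feet in a square mile
people_per_ft2 = people_per_mi2 / ft2_per_mi2
print(f"{people_per_ft2:.4f}")    # prints "0.0018"
```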
Well, if space isn’t the problem, what is?

RESOURCES__________
More important than where to put everybody is the question of how to feed them. Currently, it is estimated that one billion people go hungry. As a rough measure, then, it could be said that there are one billion people too many already. But again, such a crude estimate does not paint the whole picture.
The frightening fun of population pressure theory began in a churchyard in 1798, with the English scholar Thomas Malthus, who postulated a mathematical limit to human population. He reasoned that because humans reproduce at a geometric rate while human food sources were replenished at the lower arithmetic rate, starvation would provide a natural check on overpopulation—when we get overpopulated a bunch of people will starve to death (a lot more will die at the hands of the other two horsemen of the Apocalypse: pestilence and war). That hypothetical point has not yet been reached, but most concede it has to happen some time.
The idea is not all that new, really. Band societies have struggled with the issue since time immemorial, developing the harsh, but least-of-all-evil practices of infanticide, suicide, and sacrifice. As horrible as any one may seem, it was at least mitigated by usually being limited to a single individual, because the alternative of having too many people—who consequently exhaust the food supply—spreads the horror to all.
For today, however, scarcity is not the reason people go hungry. Lappe and Collins estimate enough food supplies to provide the entire world population with 3000 calories a day. While such a distribution system would be a boon to a fledgling global weight-loss industry, it seems unlikely to develop. Famine is a tool of geopolitics, of warfare and dominance. And those industries are firmly entrenched in human civilization.
Take, for example, the current crisis of the Sudanese refugees now encamped near Darfur. The invading Arab militias not only destroyed wells and crop fields, they timed their attacks so that the Sudanese exodus would occur during the brief planting season. With no one to plant, there would be no food for the entire year, a famine that would kill far more than the militiamen could with rifles alone. It is an ancient military strategy: disrupt supplies.
Assuming the distribution of food was entirely equitable, it still must be determined what number of people is sustainable, a much trickier calculation. While current levels of production may yield plenty for the current population, can that yield be maintained? Land degradation due to overproduction, erosion, or salinization is a growing problem.
According to Fik, 17% of the world’s vegetated land has been degraded, and the figure is higher in some areas—a whopping 42% in Africa, with Asia at 36%. Considering the concentration of people in Asia—two-thirds of all human population is there—the problem is alarming. A ghastly demonstration of Malthusian theory could take place in this region.
Here again, though, it is not the sheer force of numbers that is the problem, but the management of resources. Horticultural societies have long known that a single patch of land cannot be tilled perpetually. Plots in cultivation were rotated so that each could lie fallow for a time and recover its fertility. But as more of the world gets developed into an urban environment, which is permanent, impervious, and immobile, there is less land available for cultivation, much less to lie around unused at all. Fik cites the example of China, where only 10% of the total land area is cultivable, roughly 96 million hectares. In the 1990s, more than 400,000 hectares went from cultivation to urbanization.
David Pimentel of Cornell University and Mario Giampietro of the Istituto Nazionale della Nutrizione in Rome estimate that the U.S.—the largest food exporter in the world—loses one million acres a year to urbanization. Their study averaged the loss at one acre per person added to the U.S. population. The U.S. has almost 190 million hectares of cropland, and adds one person to the population every 10 seconds or so. Here’s another frightening statistic from the Negative Population Growth website: 10 million acres of U.S. forest have been suburbanized.
That’s important because of something far graver in consequence than running short of snacks: breathable air. Forests produce most of the oxygen humans need as well as remove carbon dioxide and some atmospheric pollutants. The Forest Products Association of Canada quotes a 1990 Report to Parliament: “one acre of healthy forest produces about 4 tonnes of O2 per year. On average, we estimate that one acre of mature forest contains 400 trees, therefore: 4 tonnes @ 2,200 lbs/tonne = 8,800 lbs. 8,800 lbs divided by 400 trees = 22 lbs/tree/year.”
The estimates used in managed forests on U.S. tribal Indian lands are somewhat different. Developed in conjunction with NASA scientists, seemingly strange bedfellows, these put an average tree’s output as high as 260 pounds of oxygen per year, while the average person will use 400. To sustain the tribal population, then, each person needs two trees. For all of humanity to breathe, we need approximately 13 billion trees.
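Both trees-per-person figures follow from simple division. A sketch with the numbers quoted above (the text’s 115 billion comes from rounding 400/22 down to 18 trees per person before multiplying):

```python
# Trees needed for human oxygen under the two per-tree estimates quoted above.
population = 6_430_794_815
o2_per_person_lbs = 400                 # lbs of oxygen a person uses per year

# NASA/tribal figure: up to 260 lbs per tree, which the text rounds
# up to a flat two trees per person.
low = population * 2

# Canadian figure: 22 lbs per tree, about 18 trees per person.
high = population * (o2_per_person_lbs / 22)

print(f"low:  {low / 1e9:.0f} billion trees")   # prints "low:  13 billion trees"
print(f"high: {high / 1e9:.0f} billion trees")  # prints "high: 117 billion trees"
```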
The two estimates give a range: 13 billion to 115 billion trees necessary for oxygen production alone. At an estimated 11.84 billion acres of forested land worldwide, with 400 trees per acre (the Department of Energy uses an estimate of 700 trees per acre), there are roughly 4.7 trillion trees on Earth.
That sure seems like a lot of trees, many more than we need just for oxygen, and even for the most incontinent of dogs. However, the rate of their loss is staggering: 50 acres per minute. At that rate, the remaining forests could be completely cleared in 450 years. Coincidentally, it’s been about that long since the age of exploration until now. Knock a decade or two off the time limit because we have to maintain 287,500,000 acres of forest at 400 trees per acre just to sustain the number of people there are now, many more than that to sustain 10 billion, the projected world population in 2050.
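The depletion math checks out. A sketch with the quoted rates (the half-year difference from the round 450 is just rounding):

```python
# Deforestation timeline at the quoted rate of 50 acres per minute.
forest_acres = 11.84e9                     # forested land worldwide, acres
acres_lost_per_year = 50 * 60 * 24 * 365   # 26.28 million acres per year
years_to_clear = forest_acres / acres_lost_per_year
print(f"{years_to_clear:.0f} years")       # prints "451 years"

# Forest needed just for present-day human oxygen, at 400 trees per acre
# and the high estimate of 115 billion trees.
acres_for_oxygen = 115e9 / 400
print(f"{acres_for_oxygen / 1e6:.1f} million acres")  # prints "287.5 million acres"
```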
We do not know how much acreage the forests need to sustain themselves. And of course, rates of oxygen production vary among the different species of trees. Other plants make oxygen, as well, but the forests are where most of the other plants live, too. Common sense would suggest that the forests are valuable resources that require conservation for our very survival; even if there seem to be plenty at the moment, it would be foolish to squander the only sources of fresh oxygen.
A good estimate of population would consider other factors, too, such as fresh water and energy. The real problem with resources, though, is not how much we need to sustain ourselves, but how much nature needs to sustain itself. Minus that, there still should be plenty for humanity. The question of overpopulation is not one of setting an absolute limit but an optimal one.

QUALITY OF LIFE___________
There are many ways to measure the quality of life, and such subjective observations sometimes are of little scientific value. Yet there is one basic truth that is incontestable: the more people that share, the smaller a share there is for each. There are a finite number of resources on the planet, but a growing number of people to divide them among, so we all will have to make do with less.
For example, Pimentel and Giampietro projected trends for the U.S. that suggest “only 0.6 acres of arable land per person will be available in 2050. Agronomists, however, stress that more than 1.2 acres per person are needed for a productive agriculture, one that produces a varied diet of plant and animal products.” Currently, the U.S. cultivates 1.8 acres per person. So, while people may not starve, they will not have as good a diet as they once had. The U.S., with around 5% of the world’s population, consumes roughly one quarter of the world’s resources at present. That kind of luxury cannot last.
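Pimentel and Giampietro’s trend can be roughed out from the rates already quoted: one million acres urbanized per year and one person added every 10 seconds. The starting population (293 million) and start year (2004) below are my own illustrative assumptions, not inputs from their study, and this crude version ignores the erosion and salinization losses their fuller model counts, so it lands near one acre rather than 0.6. Either way, the result falls below the 1.2 acres agronomists say a productive agriculture needs.

```python
# Back-of-envelope projection of U.S. cropland per capita to 2050,
# using only the loss and growth rates quoted above. Starting population
# and year are illustrative assumptions, not the study's inputs.
ACRES_PER_HECTARE = 2.471
cropland_acres = 190e6 * ACRES_PER_HECTARE    # ~470 million acres of cropland
population = 293e6                            # assumed U.S. population, mid-2000s
people_added_per_year = 365 * 24 * 3600 / 10  # one person every 10 seconds

for year in range(2004, 2051):                # 47 years
    cropland_acres -= 1e6                     # urbanization loss
    population += people_added_per_year

print(f"{cropland_acres / population:.2f} acres per person in 2050")
# prints "0.96 acres per person in 2050"
```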
With quality of life issues it is perhaps too easy to concentrate on concrete matters like the consumption of resources. There is another fact of nature I wish to consider, and that is humankind’s social evolution. We have not evolved to cope with so many people being around.
Humans live in extremely high population concentrations compared to other primate species. But this state is relatively new. Early humans evolved in low population concentrations. Band societies, considered to be the oldest form of political organization, typically comprise less than one hundred members, and commonly spend time in smaller groups of twenty or so. As the band is an extended kin group, all individuals within it are personally known to one another.
More importantly, they all must work to get along with one another. The band’s decision-making process has “the participation of all its adult members, with an emphasis on achieving consensus,” as William A. Haviland puts it in the standard anthropology text. Contrast such a notion with a presidential election or even a city council meeting.
The amusing zoologist of the human animal, Desmond Morris, said of band society
it is easy enough for the . . . hierarchy to work itself out and become stabilized, with only gradual changes as members become older and die. In a massive city community the situation is much more stressful. Every day exposes the urbanite to sudden contacts with countless strangers, a situation unheard-of in any other primate species.
Morris further explains that because it is impossible to enter into social relationships with all of them “although this would be the natural tendency,” the city dweller instead develops avoidance behaviors. “By carefully avoiding staring at one another, gesturing in one another’s direction, signalling in any way, or making physical bodily contact, we manage to survive in an otherwise impossibly overstimulating social situation.”
What can be the outcome for a social animal of evolving behaviors that avoid social contact? It hardly seems good. And it begs a subsequent question:
Of what quality is a life in the lonely crowd?
The behaviors attributed to population stress include aggression and depression. Skimming the daily news reports of wars, murder-suicides, and television prime-time viewing schedules leads to the inevitable conclusion that the world must be overpopulated.


2. Critically evaluate Lappe and Collins’ arguments regarding the reasons for hunger in the world.
Fik summarizes the arguments in Lappe and Collins’s World Hunger: Twelve Myths extensively. Though each myth has a particular genesis, all fall into one of two larger reasons for existence: the way we do things and the way we think about things—distribution practices and cultural beliefs. Ultimately, they reason that all hunger is due to human activities, not nature’s. People go hungry because we let them—we have not decided to do otherwise, not because we can’t feed them.
Myths 1-3, 6, 8, 10, and 12 all have to do with the way food is distributed. The remaining deal with how the mentality of core economies has dominated and ultimately misguided efforts to reduce hunger. The mistakes are rooted in the core’s blind faith in the tenets of its own distribution system, free-market capitalism. Both forces combine in ways deliberate and unintended to exacerbate and perpetuate a problem that is solvable from a practical standpoint.
Myths 1-3: Not enough food to go around; Natural hazards are the leading cause; Too many people
These go together naturally, and can be considered as different aspects of the same problem. The world is still undergoing a transition from purely subsistence orientation to a commercial one. As transportation systems improve, food can be moved from greater distances. Economies that once were isolated are now part of the global market, and must choose between growing food crops and cash crops.
Even so, as Fik points out from work by Grigg, the growth in global agricultural output exceeded the growth in consumption due to population. Quite simply, the food existed but it did not go to the people who needed it. There can be no other reason than the distribution system.
From time to time, a story appears in the media about how food aid is hijacked by warlords to be sold on the black market. But the real answers lie in the structural reform of the food market. Fik quotes at length from Bradley and Carter, noting the commodification of food and how it blinds social institutions to actual needs. With food treated as a saleable item “the forces that govern its production are . . . buying and selling at a profit. . . . production and distribution will fail to adjust” to the needs of the poor.
Many of the periphery economies suffer the throes of this change, mixtures of subsistence farming, cash crops, and foreign aid. Their delicate balance is easily upset when natural disasters like flood or drought do come along. Coupled with the effects of other human foolishness like war or “misguided government policy” and a region’s ability to sustain itself may become crippled. But to recognize that productive capacity is insufficient is not the same as to say there are too few resources to go around.
Myths 6 and 8: Subsistence farming lowers food output; and Free trade is the answer to ending world hunger
Western-style agriculture reaps enormous bounties. Yields are large, business is predictable. The why-doesn’t-the-rest-of-the-world-work-this-way view so permeates thinking that obvious local solutions to the problem of hunger are dismissed in deference to the Western model being introduced. And so large tracts of cultivable land are switched to commercial applications when they could provide a local food source instead. Commercial uses tend to be spatially inefficient compared to intensive subsistence agriculture because they focus on a single crop. The intensive subsistence practices tend to plant multiple crops in combinations and rotations that both nurture soils and maximize yields per unit of land.
Moreover, Fik warns against Archer Daniels Midland Corporation’s dream: the world as one giant greenhouse. Unrestricted free trade would encourage the local production of exports to the neglect of local subsistence crops. “Global free trade would bias . . . the SOUTH toward the production of cash crops for export. . . . this has only made matters worse in the periphery.”
Myth 10: More U.S. aid is needed
That cure may be worse than the disease. Because the problem is one of distribution and not supply, throwing more food at it hardly makes sense. Historically, the U.S. has done more harm than good, the aid it provides is “a tool of foreign policy and manipulation.” Fik lists seven generalizations about U.S. aid that suggest hope of it solving world hunger is misplaced.
Among these are the facts that aid is usually provided to allies rather than the needy, that their governments are often “fundamentally opposed to economic reforms on behalf of the poor,” and that it often goes “to those who need it least—wealthy businessmen and landowners.” Aid tends to perpetuate the status quo rather than drive regional economic growth. It may actually be a disincentive to production when it allows governments “to rely on cheap and plentiful external food sources.”
Myth 12: There is a “food-versus-freedom tradeoff”
Some people believe that their right to dominate the game is greater than someone else’s right to eat. Any belief that poverty and deprivation are the natural and consequential flip side of ability and drive in a free-market system is just Social Darwinism revisited. “Redistribution schemes go against the right to secure wealth,” as Fik summarizes the argument. There is a certain truth to the statement that gives it considerable weight. What responsibility do productive members of society have to support non-productive ones?
None really, but such a concern fails to question the basic assumption that the needy are inferior and that no one has an initial advantage over the rest. Like so many things, the question more rightly is of what is practical than what is idealistic. “Providing the means to basic survival will eventually allow the needy to provide for themselves.”
The deepest held belief of the West is that its way is always best. Only in light of the growing devastation from its unsustainable practices has there been any progress toward sustainable development. Fortunately, we believe change is good—efforts to change the track of development toward conservation are taking root.
Myths 4 and 5: Producing increasing amounts of food destroys the systems that produce it; the green revolution harmed regional production systems
Those who do not learn from history are doomed to repeat it, so already efforts are underway to preserve capacity and ecological diversity, to limit ecological destruction, and to protect against invasive species. Pesticide use can be lightened by the introduction of predator species or by crop rotation. Seed banks set aside stores to hedge against the loss of native cultivars and wild species as land is cleared. While this only mitigates the damage done by habitat loss, it provides a means to correct it later.
But growing more food crops has not been the problem. Often, food crops were abandoned in favor of cash crops or livestock as LDCs integrated with core economies. Worse, land might be cleared for resource extraction, like timber. However, if supported by higher yields of bio-engineered crops, there is less need to clear virgin land for crops, cash or otherwise. At the global level, it is pointed out, the green revolution “helped preserve land and important genetic resources.” Though these gains from the green revolution were great, they may have run their course, as population pressures continue to mount.
Myth 7: Market forces alone can end world hunger
Preposterous. To quote Fik again, many “regions produce for use value and not exchange value.” It is well known that you can’t eat money, and where infrastructure focuses on feeding people instead of on trading between people, the invisible hand of the market is unseen for a different reason—it does not work.
Again, the unbridled belief in something does not make something so. Market forces work best in market economies, not subsistence ones. It is difficult to see how a lower tariff on coffee would directly translate into a bowl of beans for a plantation worker.
But the idea is not entirely without merit. Market forces in conjunction with a realistic evaluation could stimulate economic growth. If money from exports went to build schools or local manufacturing instead of ready-made goods as imports, the benefits from producing the original export would stay local instead of flowing back outside.
Myth 9: The poor and hungry are too weak to revolt
And the wild animal is backed too far into the corner to bite back.
Marx said it all—they have nothing to lose but their chains. With no investment in the current system, and no expectation of benefit from it, the rational choice is to revolt, if there is a chance to benefit from a new system.
Fik does well to point out that the poor are disadvantaged, not incapable; indeed, their hardships perhaps make them stronger. Ultimately, the might of sheer numbers makes them “a formidable force,” and it seems naïve to expect people to meekly accept the yoke forever.
Myth 11: Core economies in the NORTH benefit from hunger in the SOUTH
Prices are cheap because hungry people around the world are willing to work for less. To keep leather jackets cheap at the store, people need to be starving in Pakistan, so they’ll work for the lowest wage. It can hardly be denied that people favor high wages over low, but do people have to go hungry?
Wages compete against wages. If wages are lower in one place, and employers move there to take advantage of it, the original locale loses jobs, or at least income as wages fall to remain competitive. Workers there then spend less money on the goods produced by others. The result is a self-reinforcing downward spiral: the area becomes more and more depressed as money flows out. It might also be argued that core economies lure immigrant workers to the jobs their own members will not do. But it is likely that these jobs would get filled if wages were higher.
Debunking myths is a thankless job. All the more reason to appreciate Lappe and Collins. Disparities will always exist among people. But there is no reason to let people go unfed. Dispelling the myths that let us believe there is nothing to be done about it is only the first step toward ending hunger—Lappe and Collins have taken it and there can be no excuse for lagging behind. Humanity must soon commit to ensuring the basic necessities for all, or resign itself to hypocrisy, now that it has been demonstrated that hunger is avoidable.
-30-

From the field of economic geography 1

2. Critically evaluate Weber’s model of industrial location and examine the roles of labor, capital, and other “location factors” on industrial location.


The Weber Model
Writing at the turn of the Twentieth Century, Alfred Weber naturally focused on the then-prevalent forms of industry, which at the time were heavy industries such as steel manufacturing. Weber sought to explain the location of such industrial firms through an economic interpretation of space that isolated the transport costs of business for both procuring raw materials and delivering finished products to market. Thus Weber indexed two explanatory variables, weight and distance, into the unit of ton-miles.
Every industry operates in the same general way: raw materials are gathered, processed into a consumer product, and sold in the marketplace. If economic rationality can be assumed, then the successful firms will minimize costs and maximize profits. At Weber’s time, the greatest single cost was transport. Transport costs can be reduced by eliminating unproductive shipping—the goal being to ship as little extra weight as possible.
What “extra” weight? Weber reasoned that some production processes take bulky raw materials and refine them to make a finished product, what is referred to as a weight-losing process. For example, much of the iron ore used in steel production becomes waste material in the production process. By Weber’s logic of “least-cost location,” the steel manufacturing plant would locate near the source of the iron ore, thereby achieving “the minimum transport cost,” because only the finished steel then needs to be shipped to market.
Other products, however, are weight-gaining. Light-weight raw cotton may be shipped to a textile mill and woven into a heavy canvas. As weight is gained during the production process, it is economically logical to locate the mill near the market and ship the canvas as short a distance as possible. In both examples, the heavier item is shipped the shorter distance. The proof of his logic was a weight ratio: weight of the raw material/weight of the finished product, what Weber called the “material index.”
The materials themselves are either “ubiquitous” or “localized.” Weber recognized that certain resources, like the aforementioned ore, are not evenly distributed across the Earth, whereas others are readily available at many locations. So, the consideration of locality or ubiquity also has a push-pull effect: the commonly found ubiquitous materials draw production sites toward the market to avoid extra shipping, while the localized resource draws them toward the point of the material’s extraction.
The localized materials can be divided further as “pure” or “gross,” and assigned a numeric value for the material index. A pure resource has a value of 1 because all of it is consumed in the production process with no waste. Gross materials, however, lose weight during the production process, and therefore have material index values greater than 1.
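The decision rule embedded in the material index can be sketched in a few lines of Python. This is a minimal illustration of Weber's ratio and the orientation it implies; the weights are hypothetical, not figures from the text:

```python
def material_index(raw_weight, product_weight):
    """Weber's material index: weight of localized raw materials
    divided by the weight of the finished product."""
    return raw_weight / product_weight

def least_cost_orientation(raw_weight, product_weight):
    """Weight-losing processes (index > 1) pull the plant toward the
    material source; weight-gaining processes (index < 1) pull it
    toward the market, so the heavier item travels the shorter distance."""
    mi = material_index(raw_weight, product_weight)
    if mi > 1:
        return "material source"
    if mi < 1:
        return "market"
    return "indifferent"

# Steel: most of the ore becomes waste, so the index exceeds 1.
print(least_cost_orientation(raw_weight=3.0, product_weight=1.0))  # material source
# Canvas: light raw cotton becomes a heavier finished good, so the index is below 1.
print(least_cost_orientation(raw_weight=1.0, product_weight=2.0))  # market
```

A pure material, consumed entirely with no waste, yields an index of exactly 1 and exerts no locational pull of its own.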

Figure 1 Source: Joseph H. Butler, Economic Geography
Diagrammatically, then, the problem of location selection “is to find the location of P that minimizes xa + yb + zc” (Butler, p. 86). Weber’s model clearly has explanatory value, and was subsequently confirmed by observations in Mexico and the United Kingdom (Lloyd, p. 62). But Weber did not rest on this laurel, theorizing that labor costs would also influence location selection.
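The minimization Butler describes can be approximated numerically. The sketch below is a crude grid search over candidate plant sites, not Weber's own graphical method; the coordinates and ton-weights are hypothetical:

```python
import math

def total_ton_miles(p, sites):
    """Sum of weight * distance from candidate point p to each fixed site.
    sites is a list of ((x, y), weight) pairs: material sources and the market."""
    return sum(w * math.dist(p, xy) for xy, w in sites)

def least_cost_location(sites, step=0.1, extent=10.0):
    """Brute-force grid search for the point P minimizing xa + yb + zc."""
    best, best_cost = None, float("inf")
    steps = int(extent / step) + 1
    for i in range(steps):
        for j in range(steps):
            p = (i * step, j * step)
            cost = total_ton_miles(p, sites)
            if cost < best_cost:
                best, best_cost = p, cost
    return best, best_cost

# A heavily weight-losing material source dominates the pull:
# the least-cost point sits on top of it.
sites = [((0.0, 0.0), 10.0), ((10.0, 0.0), 1.0), ((5.0, 8.0), 1.0)]
location, cost = least_cost_location(sites)
```

With the dominant weight at the origin, the search converges there, mirroring the pull of a gross material on the plant site.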
Weber broke with earlier theory by recognizing that labor costs, like natural resources, may be unevenly distributed across space. Therefore, transport costs may be offset by a savings in labor costs. While placing the production site at the location of low-cost labor may increase the transport costs of either the raw materials or finished product—or both—the savings in the cost of labor may be so great that the total cost of production is less than it would be near the point of extraction or the market.

Figure 2 Source: Joseph H. Butler, Economic Geography

Weber called the lines connecting points of equal transport costs “isodapanes,” with the “critical isodapane” being C in the diagram above. The critical isodapane is where total transport costs for materials and products equals the savings in labor costs. If the location of the low-cost labor lies beyond the critical isodapane, there is no savings. Notice in the diagram that L1 is within the critical isodapane, and its selection would lessen the total cost of production, even though P is the point of minimum transport cost.
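The trade-off the critical isodapane encodes reduces to a one-line comparison. This is a sketch with hypothetical cost figures, not values from Butler's diagram:

```python
def prefer_labor_site(transport_at_P, transport_at_L, labor_savings_at_L):
    """Relocate from the minimum-transport point P to the low-cost-labor
    site L only if L lies inside the critical isodapane, i.e. the labor
    savings exceed the added transport cost of moving away from P."""
    added_transport = transport_at_L - transport_at_P
    return labor_savings_at_L > added_transport

# A labor site inside the critical isodapane: savings of 30 beat the extra 20 in transport.
print(prefer_labor_site(100, 120, 30))  # True
# A labor site beyond the critical isodapane offers no net savings.
print(prefer_labor_site(100, 120, 15))  # False
```

On the critical isodapane itself the two quantities are equal and the firm is indifferent between the sites.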
Later investigators were able to enhance Weber’s isodapane mapping technique into a “space-cost curve” that allowed for a range of possible locations, and this more accurately reflected real-world observations. (Lloyd, 65-66 passim) Though criticized for its simplicity, the Weber model holds great explanatory value, as is illustrated by the very fact that later investigators built upon it rather than dispensing with it.
Weber’s theory also is criticized for its assumption of perfect economic rationality. Of course, no model can tell the whole story where human selection is involved, but such a criticism may be unfair. Weber’s model explains site selection as related to an independent variable—distance. His focus on that variable did not preclude the existence of other variables. Indeed, as Butler points out, Weber glimpsed the benefits of agglomeration, but lacked the tools to investigate its influence.
Though unfair to criticize a model for not doing what it was not intended to do, it is instructive to realize its limits. In that vein, one can say that the Weber model completely neglects spatially differential capital costs such as property values, building costs, utility costs, power supply, taxes, municipal or national regulation—a pack of fardels no one model could bear.
Nor does the Weber model examine locational benefits that might outweigh transport costs. Yet one can easily imagine a scenario where a particular location may offer a relative savings over another. A mill might locate near a river to utilize a water wheel, or with more modern technology, a firm might locate in a desert region to utilize solar power to offset the costs of purchasing electricity. So, too, might a company choose a site less suitable geographically in order to defray costs by moving into a facility that had originally been built for other purposes.
Yet any criticism of these sorts are really reflections of the limited human understanding of the entire problem of location selection. Again it should be noted that Weber’s model is an attempt to define the extent of only one such variable—distance. A comprehensive, dynamic model would require a similar attention to detail for all variables.
Though transport costs have decreased since Weber’s time, the core problem remains, and will persist: raw materials must be processed into a consumable product, the product must reach a market, and ultimately a consumer. The materials and products must travel over distance to be processed and consumed. So long as distance remains, so will the utility of Weber’s model.

4. Critically evaluate the concept of agglomeration economies. Under what circumstances are agglomeration economies likely to operate?
Economies of scale can be enjoyed by many separate firms when they are spatially clustered. The physical proximity of firms to one another helps to create linkages of many kinds such as information and knowledge flows, supply feeds, and financial services. Furthermore, the costs of utilities, worker training, and transport can be reduced as an agglomeration economy forms.
The linkages between firms not only grow and strengthen over time, but foster the development of more linkages. Broadly, the linkages are of three types: production, service, and market, and they allow firms to lower production costs, increase revenues, or both. (Lloyd, 131) These linkages function as a sort of division of labor across companies rather than within them, reducing the individual firm’s cost of doing business, while providing the benefits of a larger operation.
Agglomeration economies are of two kinds, localized and urbanized. Localized agglomeration economies refer to the benefits a firm obtains within a single industry. An example of this form is Silicon Valley, where a single industry dominates the landscape. The economies arise from the output of the industry as a whole, rather than from the dynamism of the locale.
Under the urbanized umbrella, however, companies derive benefits by clustering in a particular location through something like symbiosis. One firm’s waste product is another’s raw material, and transfer costs for both are minimized due to their proximity. Another benefit of proximity is that businesses may “share” pools of services. Because one service provider can accommodate many customers in a single area, it can afford to specialize, offering a unique service that otherwise might not be available. Increased specialization reduces costs, as well. Or, a supplier might be able to lower prices, thereby extending the benefits of volume buying to companies that individually buy smaller amounts.
Also, in the urbanized setting, companies can defray costs to the public sector. For example, wastes may be processed through municipal sewerage, or workers will have received training in universities or trade schools instead of on-the-job. The density of economic activity allows a diverse range of businesses to emerge.
Agglomeration works as a feedback loop. Through its economies, firms can specialize, specialization develops diversity, diversity attracts skilled workers who garner higher pay, and who then pay higher costs for goods and services, meaning greater profits for local businesses, who invest more in the local area. But more, the clustering reduces risk in two important ways.
Though there may be more turnover in particular jobs, workers can more readily find a job in an agglomerated economy. This is not lost on employers, who are more free to engage workers on a temporary basis, as a pool of available and comparably skilled workers is at hand. The company is not obligated to keep unengaged workers on the payroll during periods of low production, and can reduce its costs.
Similarly, businesses need not keep capital “immobilized in inventories.” (Lloyd, p.133) Because of the greater volume of activity with the agglomeration economy, individual firms are more likely to have access to outside vendors who can supply them. The convenience of having the supplies readily available is no longer a capital cost for the individual company, but becomes a fringe benefit of location.
But there is a limit to the advantages of agglomeration. At some point, “net diseconomies should set in as unit production costs begin to rise again.” (Lloyd, ibid.) At this imprecise point, competition within the area increases, counteracting the scale economies. For instance, the pool of skilled workers may shrink as more firms emerge, or rents may increase as demand for real estate increases. Indeed, as Jacobs points out, the very success of an area may be what kills it, if too much agglomeration leads to too little diversity.
However, cities offer a thick insulation against stagnation. The constancy of change in metropolitan areas coupled with their attractiveness to in-migration mitigates such diseconomies. Cities are places of innovation, where “knowledge spillovers” tend to increase the productivity and versatility of the work force. The linkages between businesses are formed through human interaction.
But that is precisely the criticism of agglomeration. The gathering of empirical evidence is difficult because “the location decisions of firms and workers are interdependent, which makes it difficult to ascribe causal influence to any particular factor.” (Clark, p.490) The dynamism that perpetuates success muddies the waters of investigation.
-30-


Research Materials
Boyce, Ronald R. The Bases of Economic Geography. 1978. Holt, Rinehart and Winston. New York.
Butler, Joseph H. Economic Geography. 1980. John Wiley and Sons. New York.
Clark, Gordon L., Maryann Feldman, and Meric S. Gertler, eds. The Oxford Handbook of Economic Geography. 2000. Oxford University Press. Oxford.
Fik, Timothy J. The Geography of Economic Development. 2000. McGraw-Hill. Boston.
Hayter, Roger. The Dynamics of Industrial Location. 1997. John Wiley and Sons. New York.
Knox, Paul, ed. The Geography of the World Economy. 2003. Hodder Arnold. London.
Lloyd, Peter E. and Peter Dicken. Location in Space. 1972. Harper & Row. New York.

Review of the classic work by Jane Jacobs

The Death and Life of Great American Cities
Jane Jacobs, Vintage Books, New York, 1961, 448 pp.
Review

Sincere efforts to invigorate the declining areas of cities have been foiled by conventional planning. The idea of the city as an evil, a megalopolis of pollution, poverty, and disease has obscured the view of the city as what it actually is—a space that facilitates the economic and social interdependence of people with independent goals. The city is a place where enough people are concentrated so that very specialized avenues of interest and of employment are opened. It is a place full of economic opportunity and cultural diversion, where public life and private life can blend and enhance the quality of each other. Cities are not inherently evil; they are the haven of all that makes human social life possible and good. The mutually supporting diverse activities, public and private, necessary to social life, however, are being eroded by bad city planning. Jane Jacobs takes these planning faux pas to task in this classic work, what she called “an attack on current city planning and rebuilding.” Obviously, the attack was rebuffed quite readily, for the unfortunate trends that Jacobs so clearly identified still are apparent today, and her criticism is as timely now as when it was written, over two generations ago.
Part One, “The Peculiar Nature of Cities,” describes the social life of the city, concentrating on the street. “Streets and their sidewalks, the main public places of a city, are its most vital organs,” Jacobs begins, “if a city’s streets are safe from barbarism and fear, the city is thereby tolerably safe from barbarism and fear.” Indeed, feeling safe to walk the streets is the criterion that can distinguish between economic vitality and depression. Jacobs makes the point that “the public peace—the sidewalk and street peace—of cities is not kept primarily by the police,” but by people themselves. Crimes occur in areas where the opportunity for discovery does not exist, what Jacobs calls “lonely places.” The solution is to have “more eyes on the street.” Jacobs compares tenement housing in places like Boston’s North End to public housing projects (unnamed) in New York. The important difference is street activity. In tenement housing there are more “public characters” who perform, in an unofficial capacity, the job of policing. These public characters take a variety of specific forms: street vendors, pedestrians, shopkeepers, children at play and the mothers who watch the children from their open apartment windows. There is constant activity on the street, much of it foot traffic. The foot traffic is prevalent because of the mixed use of the street; it is not merely a thoroughfare to travel from one area to another, it is the destination point for residents and workers, for diners and for customers, even patrons of the arts. On Jacobs’s street “in front of the tenement, the tailor’s, our house, the laundry, the pizza place, and the fruit man’s, twelve children were playing on the sidewalk in front of fourteen adults.”
The streets are a public forum wherein people may experience limited social contact. Familiar faces need not become intimate acquaintances to serve social functions like security. Jacobs illustrates the idea with two contrasting anecdotes. In Jacobs’s neighborhood, it was then a common practice to leave spare keys with a local shopkeeper. Guests could pick up the key from—in Jacobs’s case—the local deli, the proprietor serving as doorman in this capacity. It is safer than leaving the key under the mat, but less socially intrusive than leaving the key with a neighbor. The deli-man is one such public character who can be on rather intimate terms with his customers but not intrude on their private lives, as would the neighbor who shares the private space of the apartment building. As Jacobs put it, he “considers it no concern of his whom we permit in our place and why.”
This is contrasted by the story of Mrs. Kostritsky, who lived on a street near a public park, where mothers brought their children to play. The street was, save for the park, entirely residential. Lacking public amenities like coffee shops with public restrooms and public telephones, Mrs. Kostritsky found herself entangled in an accidental social world which she resented, as park users dropped in to use her telephone or bathroom or to get out of the cold. “If only we had a couple of stores on the street,” she complained, “then the telephone calls and the warming up and the gathering could be done naturally in public.”
In Part Two, “The Conditions for Diversity,” Jacobs gives the main thrust of her argument. In her own words, it is the most important part of the book. The economics of the city are broadly outlined as the author addresses this most salient point: “How can cities generate enough mixture among uses—enough diversity—throughout enough of their territories, to sustain their own civilization?” The chapters that follow specifically develop the four “indispensable” conditions that will generate such diversity: mixture of primary and secondary uses, short blocks, varied economic yields of buildings, and a dense concentration of people.
Why these conditions are important is the premise on which Jacobs’s entire argument is predicated. They afford different levels of opportunity for work, housing, and diversion. A mixture of opportunities helps to assimilate immigrants, and also allows upward social mobility for native citizens. The mixture means greater variety for all, for survival is not a function of volume business.
Smaller operations are possible only in great cities, where a population is concentrated densely enough to support its limited appeal or specialty. Having multiple primary and secondary uses in a district attracts more customers as passersby stop in on their ways to other errands. The passersby may notice a specialty theater, for instance, and return later for the show. When all four conditions are present, Jacobs says, the combination will “create effective economic pools of use.” But, she admonishes: “All four in combination are necessary . . . the absence of any one of the four frustrates a district’s potential.”
“Forces of Decline and Regeneration” are discussed in Part Three, the beginning of which contains Jacobs’s most astute observational point that—
most city diversity is the creation of incredible numbers of different people and different private organizations, with vastly differing ideas and purposes, planning and contriving outside the formal framework of public action. The main responsibility of city planning and design should be to develop . . . cities that are congenial places for this great range of unofficial plans, ideas and opportunities to flourish.
Many well-intentioned building plans and building codes serve to destroy the diversity that is so needed for city vitality. These may seek to provide a scenic consistency of the local structures or to renew an area with the additions of cultural monuments, or to replace slums with middle-income housing. However, by focusing on one type of use these plans eliminate any chance for true vitality.
The most shocking form of this singularization of use is success. Inane competition sometimes leads to duplication of use. Jacobs recounts how a streetcorner that had a good mixture of uses, one of which was a bank, came to be described as a “100 percent location.” Three other banks moved into that same corner, driving out the other ventures that had attracted the wide mix of people to the area originally. Without that mixture, business at the banks trickled away.
The same phenomenon is observable in other cases, such as residential neighborhoods, restaurants, or office districts. Entrepreneurs and investors and lenders focus on what has been shown to be the most profitable ventures in an area. Through repetition, they monopolize the area, eventually eliminating leases of different yields. It is a downward spiral: the more banks that come into an area, the more it becomes profitable only if banks are the lessees. This concentration of use eliminates the cross traffic found when the area had been of mixed uses, and the diminished volume of customers coming into the area cannot support the saturated market. Eventually, the entire area declines. Then people move away.
The pattern of regression and retreat leads to slumming. Jacobs says the “key link in a perpetual slum is that too many people move out of it too fast—and in the meantime dream of getting out.” The residents are trapped there by a lack of affordable choices. When they acquire the means to have greater choice, they choose to move away rather than to invest in the slum area. This leads to slum shifting by means of what Jacobs calls “cataclysmic money.”
Cataclysmic money is used to replace slums with varying types of projects that are assumed will yield higher tax or business revenues. The slum populations are forced to relocate into other areas and the pattern repeats itself. What is needed, Jacobs argues, are programs that encourage slum populations to stay in their neighborhoods, to attract economic opportunities for the residents, and to promote investment in the area. Without investment capital such as mortgage loans or small business loans that stimulate renovation and new construction that will support a diverse mixture of uses, no slum can unslum itself.
The last part of the book, “Different Tactics,” recommends specific methods for generating diversity in the city. Jacobs wisely deals with housing subsidies by suggesting the formation of the Office of Dwelling Subsidies. ODS would be a guarantor or lender, according to needs of individual builders, as well as a kind of planning office for development. The second aspect of ODS would be to act as a liaison between landlords and tenants. Instead of building project housing for the dependent poor, ODS would supplement rents in ODS buildings until such time as the tenants could pay the full amount on their own; when the dependent tenants achieve financial independence, they would not be required to move out of the ODS building. Tenants in residence are thus continuing investors in the neighborhood with strong roots in the community.
Her second tactic is to reduce the absolute number of vehicles using city streets. This would be accomplished primarily through attrition as the city develops diversity. Easy foot traffic and difficult automobile traffic will favor the attrition of automobiles. When combined with changing elements of topography, another tactic for diversity, travel by automobile will become even less convenient. But furthermore, changing topography will enhance the visual stimulation of pedestrian traffic by affording different views of the city. Rather than disconnecting the city dweller from an accurate mental map, landmarks and vistas will become integral parts of his mental map and will help him negotiate travel within and use of the diverse city.
Finally, Jacobs goes into great detail about the possibility for informal government. In her example of Chicago’s Back-of-the-Yards Council, Jacobs emphasizes the utility of district coalitions as intercessors between local interests and the larger, formal political bodies of municipal government. Often planners of road systems or city-wide projects are ignorant of local uses and community values. Local residents lack the special influence on government that powerful and affluent developers enjoy. An informal group—one whose composition and domain is not formally ordained and thereby limited—can be more effective at coordinating the efforts of builders, the maintenance of utility services such as water or electricity, and the interests of residents and local businesses.
Jacobs concludes with “The Kind of Problem a City Is,” an overview of the historical thought-processes that have gone into urban planning and Western science. The city is a problem of organized complexity—one with many interconnected variables. The flaw of previous planning has been to treat the city as a simple problem, ignoring the fact that the varied and interconnected economic, personal, and social relationships of a city’s inhabitants are the generators of its vitality.
This book is brilliant. By examining the functional aspects of a living city it does attack the problems of conventional city planning. City plans must be more than artists’ renderings of grand architecture, more than demographic investment risk analyses, more than voting districts. The plans must also incorporate the very lives of its citizens. The plans of cities must allow citizens the freedom of movement through space, time, and finance to build better lives for themselves. It is a recognized axiom that diversity makes us strong; city diversity is no exception, as Jacobs’s insightful book so clearly indicates. I recommend The Death and Life of Great American Cities to undergraduate and graduate students of geography and of urban planning, as well as professors and teachers in those fields. I think the book would be of special importance to realtors and developers. If these lessons could be learned by those involved with the physical construction of cities, perhaps the social construction of them also would be enhanced.
-30-

My current girlfriend's great, too, but I still like this poem

On the Subject of Women

Maybe there’s something wrong with me
My tastes run counter to society’s
Its iconic dames I like fairly well
But I really go for girls that smell

You can trust a woman that hasn’t showered
Painted over, perfumed, and powdered;
No façade masking her exterior
Reflects her lack of artifice, interior

What care I for rose-water
Or alabaster skin?
What good are press-on nails?
None at all for workin’
A love that’s true
Ties up her hair and shares the work with you
And will kiss you full on the mouth
Though labor’s made your faces wet
And she smells as fresh as sunshine
Though she’s dripping sweat

I like the smell of sweat
And soap mixing in hair still wet
From a shower too quick to rinse away
Everything she did that day

I like faces just washed clean
Not pancaked ones with Revlon sheen
The smoothness of a shaven leg is nice
But no better than soft hair on hippie thighs

What say saline sacks of a woman’s character?
Can you believe words whispered by a collagen injector?
Does skin that’s tan
Reveal as much as a strong, rough hand?
Laughlines, wrinkles, crows’ feet
Are more comely to meet
Than the Stepford smoothness of Botox brows
A fleshy hip to grab much more sweet
Than the hard bones seen through designer gowns

Once a woman loved me
Who smelled of toil and coffee
She picked her nose and farted openly
Never shaved or trimmed that I could see
She never hid herself from me
And when she’d say “I love you,” she said it truly