Chapter 5: Sustainability, Technology, and Economic Pragmatism—A View into the Future
Jonathan Hujsak
SUSTAINABILITY
Once familiar only in the lexicon of bioscience professionals, the word sustainability is one that CIOs have increasingly come to understand, embrace, and employ to realize strategically renewable IT systems capable of operational diversity and healthy productivity over the long term. Sustainability is defined as:
1. The ability to sustain something.
2. (ecology) a means of configuring communities, systems, and human activity so that cultures, their members, and their economies are able to meet their needs and reach their greatest potential in the present, while preserving resources, biodiversity and natural ecosystems, planning and acting for the ability to maintain these ideals for future generations.
Sustainability is one of the most misunderstood topics in today’s media, often considered to be anti-business and anti-technology, and frequently associated with phrases like “green anarchism,” “social ecology,” and “eco-socialism.” It’s widely believed that sustainability negatively impacts the bottom line, lowers standard of living, reduces enterprise agility, and strangles innovation. Nothing could be farther from the truth. Sustainability in practice is not much different from familiar cost reduction and efficiency moves, and has much in common with Business Process Reengineering (BPR). Sustainability improvements are grounded in economic pragmatism and often turn cost centers into profit centers by reusing resources and consolidating business processes. These improvements apply not only to businesses of all shapes and sizes, but to society in general, regardless of culture, industrialization, or geography. We all live in a closed system (the Earth) and therefore we all share a finite set of resources for which there are no replacements.
Unless the world becomes more sustainable, the forces of globalization will eventually exhaust all known resources, and industrialized society as we know it, with its globally distributed supply chains, will simply grind to a halt. Millennia of human progress will be suddenly reversed in a chaotic, violent process of deglobalization. If we can reach a state of sustainable equilibrium, however, we can ensure that resources, natural or man-made, are carefully managed to preserve the delicate balance within our sealed biosphere. We can ensure that future generations will inherit a world even richer than before, make room for additional population growth, support the advancement of both undeveloped and developing nations, increase our standard of living, and, in effect, "have our cake and eat it too." Information technology has long been a driver of globalization, and in the future it must play a pivotal role in achieving the global sustainability now needed: the reach of human influence and our species' ability to radically alter our planetary ecosystem demand immediate agency and foresighted stewardship. By the same token, the role of the CIO will become ever more important in the global enterprises that survive and thrive in this highly integrated, globalized, sustainable world of the future.
Information technology (IT) organizations are the distributed neural network of the global, enterprise organism that now reaches the most remote regions of the world, providing these areas with unprecedented connectivity to sophisticated information, trading, and financial systems. The scope and scale of global enterprise IT operations are growing at an exponential rate, as Ray Kurzweil’s monumental work, The Singularity is Near, illustrates with its exhaustive collection of technology trend lines.[1] As these trends indicate, information technology will be one of the most pivotal elements in the global enterprise, not just for the sustainability improvements it can provide, but for the transformative influence it will have on the rest of enterprise operations. Those who lead these organizations, CIOs and CTOs, will play an increasingly important role that will soon overshadow many other high-level executive functions.
GLOBALIZATION, DECENTRALIZATION, AND SUSTAINABILITY
Globalization is increasing at a quickening pace, transforming the most remote corners of the world with a flood of technology, finance, information, and cultural memes. The rate at which new products are developed, tested, manufactured, and marketed into this environment is now increasing at an exponential rate, as revealed by Kurzweil's The Singularity is Near. In this chaotic, accelerating global market, segments such as mobile computing and communications are experiencing product obsolescence so rapid that it can overtake product release. Cell phones are prime examples of this trend and today have product lifecycles of less than a year. Much of this product churn is driven by the wide availability of powerful electronic design automation (EDA) tools that automate the design of increasingly miniaturized electronics, simulating the final functionality and performance of the product before a prototype is ever built. The end result is a lowering of technology barriers that reduces time to market, drives "feature explosion," and fuels consumer expectations. More and more technologies are becoming commoditized almost instantly under the intense pressure of lowered trade and financial barriers, highly efficient supply chains, improved communications, and a growing number of global competitors. A highly visible example of this effect is the emergence of multiple competitors to the revolutionary Apple iPhone within months of its release.
The Shifting Landscape of IT
Concurrent with the rapid pace of globalization is the explosive growth in offshoring of manufacturing and business processes. The global business process offshoring (BPO) market is currently worth $30 billion, with an upside potential of more than $250 billion. It includes a complex mix of finance, accounting, customer interaction and support, credit card processing and billing, telecom billing, legal process, and a variety of emerging knowledge services.[2] Information technology offshoring (ITO), which preceded BPO, is moving at an even faster pace (more than 30 percent growth per year) and is currently estimated to be worth over $50 billion worldwide.[3] India now leads the BPO market with an estimated 37 percent market share. Canada follows closely at 27 percent, and then the Philippines at 15 percent. A number of other countries are rapidly gearing up to follow them into the market, including Ireland, Mexico, Central and Eastern European countries, South Africa, and China. Both the ITO and BPO markets are driven by large transnational customers such as Thomson Reuters, Dell, HSBC, American Express, and Citigroup that are reaping substantial benefits including lower costs, increased flexibility, and access to a growing global pool of talent. Keeping up with these rapidly shifting global opportunities will require unprecedented agility from future global IT organizations.
Sustainability, Theory, and Practice
To understand the profound impact that information technology will have in the future and the pivotal role that IT leadership will play, the CIO must first understand sustainability principles, sustainability frameworks, and how all of this fits into enterprise strategic planning. From the literal definition at the beginning of this chapter, dozens of sustainability initiatives have sprung up over the last 20 years or so, providing well-grounded theories, principles, and standards for the enterprise to follow. Several of the most widely known frameworks introduced in this chapter include Carbon Footprint, Ecological Footprint, and The Natural Step. Many references are readily available for the large number of lesser-known frameworks not mentioned here. This section provides the essential information a CIO needs to guide the C-Suite toward a forward-thinking, factually researched menu of applied enterprise sustainability models. It also introduces the terminology a best practice CIO requires to follow emerging sustainability research as a basis for ecologically and financially sustainable IT practices that enable enterprise strategy.
Carbon Footprint
One of the best known metrics of sustainability is the Carbon Footprint, a measure of the direct and indirect greenhouse gas (GHG) emissions caused by the enterprise, its services, and its products. Six primary GHG types are defined by the Kyoto Protocol of the UN Framework Convention on Climate Change, which sets emission targets for 37 industrialized countries and the European Union. GHGs interfere with the radiation of heat into space as part of the Earth's natural temperature regulation process, contributing to global warming. The atmospheric heating potential of GHGs is measured using the relative Global Warming Potential (GWP) scale, which assigns carbon dioxide the reference value of one.[4] A GHG with a larger GWP value has a greater impact on global warming. The six greenhouse gases identified by the Kyoto Protocol are:
1. Carbon dioxide (CO2) is a trace gas in the atmosphere (0.038 percent) that is used by plants in the process of photosynthesis to make sugars used in plant respiration and growth. Estimates of the natural lifetime of CO2 in the atmosphere vary, but generally range up to 100 years.
2. Methane (CH4) is the principal component of natural gas and is eventually oxidized in the atmosphere to produce carbon dioxide and water. It's a very potent GHG, with a GWP value of 72 over a 20-year horizon. Methane has an estimated atmospheric lifetime of about 12 years.
3. Nitrous oxide (N2O) is a colorless gas with a slightly sweet odor that is a strong oxidizer, with effects not unlike those of molecular oxygen. It has a GWP of 298 and has long been linked to ozone layer depletion. N2O has an estimated atmospheric lifetime of 114 years.
4. Hydrofluorocarbons (HFCs) are common compounds used as refrigerants in commercial, industrial, and consumer applications. GWPs for these compounds range from 140 (HFC-152a) to an amazing 11,700 (HFC-23).[5] HFC lifetimes in the atmosphere range up to 260 years.
5. Perfluorocarbons (PFCs) have extremely stable chemical structures and are not broken down in the atmosphere like other GHGs, remaining in the atmosphere for several thousand years. Two of the most important PFCs, tetrafluoromethane (CF4) and hexafluoroethane (C2F6), are byproducts of aluminum production and semiconductor manufacturing and have atmospheric lifetimes of 50,000 and 10,000 years, respectively. They have respective estimated GWPs of 6,500 and 9,200.
6. Sulfur hexafluoride (SF6) is a colorless, odorless gas widely used in the power industry as a high-voltage dielectric insulator for electrical equipment such as circuit breakers and switchgear. It has an estimated lifetime in the atmosphere of 3,200 years and GWP of 23,900.
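The GWP values quoted in this list can be used to express a mixed inventory of greenhouse gases as a single CO2-equivalent figure, which is how aggregate footprints are reported. A minimal sketch in Python (note the methane figure here is the 20-year-horizon value quoted above):

```python
# GWP values as quoted in the list above (CO2 = 1 by definition).
GWP = {
    "CO2": 1, "CH4": 72, "N2O": 298,
    "HFC-23": 11_700, "CF4": 6_500, "C2F6": 9_200, "SF6": 23_900,
}

def co2_equivalent(emissions_tonnes):
    """Convert a per-gas emissions inventory (tonnes of each gas) to tonnes of CO2e."""
    return sum(GWP[gas] * tonnes for gas, tonnes in emissions_tonnes.items())
```

For example, 1,000 tonnes of CO2 plus 2 tonnes of methane works out to 1,144 tonnes of CO2e, which makes clear why even small leaks of high-GWP gases dominate a footprint.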
Carbon Footprint is a location-specific metric that takes into consideration how "clean" the energy sources are in a specific region (i.e., GHG production). The energy portion of the footprint is typically calculated by summing the total direct and indirect fuel consumption of the enterprise facility or activity in the region, including fixed assets, transportation, heating and cooling, and other sources, and multiplying the total by a regional emissions factor that converts the value to the equivalent mass of CO2. One gallon of gasoline, for example, is equivalent to 8.7 kg of emitted CO2. A diesel-powered delivery truck that gets 10 miles per gallon and drives 50,000 miles per year burns 5,000 gallons of fuel; at a diesel emissions factor of roughly 9.95 kg of CO2 per gallon, this produces about 49,740 kg of CO2 per year, which converts to 54.83 short tons per year. To put this in perspective, a fleet of 100 such trucks would emit 5,483 short tons of CO2 per year.
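The truck arithmetic can be reproduced in a few lines. The diesel emissions factor of roughly 9.95 kg CO2 per gallon is inferred from the chapter's own result (the text quotes 8.7 kg per gallon for gasoline; diesel emits somewhat more per gallon):

```python
KG_PER_SHORT_TON = 907.185

def annual_co2_short_tons(miles_per_year, mpg, kg_co2_per_gallon=9.95):
    """Annual CO2 from fuel burned: gallons consumed times an emissions factor,
    converted from kilograms to short tons."""
    gallons = miles_per_year / mpg
    return gallons * kg_co2_per_gallon / KG_PER_SHORT_TON

truck = annual_co2_short_tons(50_000, 10)   # about 54.8 short tons per year
fleet = 100 * truck                          # about 5,483 short tons per year
```

The same function handles any vehicle class by swapping in the appropriate regional emissions factor.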
Corrections for non–energy related GHG emissions such as HFCs must be made to this value to get an accurate aggregate value for enterprise Carbon Footprint. According to the Greenhouse Gas Protocol Initiative (www.ghgprotocol.org) Scope 3 Accounting and Reporting Standard, all GHG emissions data must be accounted for and converted to equivalent CO2 tonnage.[6]
There are three distinct scopes that must be addressed in calculating Carbon Footprint:
• Scope 1: Direct emissions from the enterprise facility including electrical generators (emergency and cogeneration systems), boilers, water heaters, gas-powered ovens, kilns, or dryers, or similar equipment, vehicles, and refrigeration systems.
• Scope 2: Electrical energy purchased from an external utility or source.
• Scope 3: Indirect emissions resulting from your supply chain including purchased materials, delivery vehicles and freight services, employee commuting, employee travel (car rentals, train, airlines), outsourced services, and offshoring.
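A scope-by-scope roll-up of the kind the GHG Protocol requires can be sketched as a simple aggregation; every figure below is invented for illustration, and all values are metric tonnes of CO2e:

```python
# Hypothetical enterprise GHG inventory, in metric tonnes of CO2e.
INVENTORY = {
    "scope1": {"fleet_diesel": 5_483.0, "boilers": 820.0, "refrigerant_leaks": 140.0},
    "scope2": {"purchased_electricity": 12_400.0},
    "scope3": {"employee_travel": 3_100.0, "freight": 7_900.0, "offshored_services": 2_600.0},
}

def footprint_by_scope(inventory):
    """Total CO2e per scope; the enterprise footprint is the sum of all three."""
    return {scope: sum(sources.values()) for scope, sources in inventory.items()}

totals = footprint_by_scope(INVENTORY)
grand_total = sum(totals.values())
```

Keeping the per-source breakdown, rather than a single total, is what lets the CIO see that (in this invented example) Scope 3 supply-chain emissions rival purchased electricity.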
Ecological Footprint
Carbon Footprint is just one component of the more comprehensive Ecological Footprint, a metric first defined by William Rees at the University of British Columbia in 1992,[7] and codified as a set of standards that apply to national and subnational population groups by the Global Footprint Network Standards Committee.[8] The Ecological Footprint is an estimate of the areal extent of biologically productive land and sea necessary to offset resource consumption and to assimilate any resulting waste. The original nation-level, population-centric approach has been adapted to calculate footprints for a variety of enterprise systems and products. Calculating the footprint for manufactured goods, however, can be combinatorially explosive and is further complicated by lack of information about proprietary materials and processes, as demonstrated by Sibylle Frey's analysis of a typical mobile phone.[9]
Closely related to Ecological Footprint is biocapacity, an aggregate measure of the production of ecosystems in a specific area, which may include arable land, pasture, forest, ocean, river, or lake. Land or water area is normalized to world average biocapacity equivalents in global hectares, using yield factors (global hectares/hectares [gha/ha]) that take into account the degree to which the local productivity is greater or less than the world average for that usage type. Biocapacity increases with the amount of biologically productive area and with increasing productivity per unit area. The total biocapacity of the world is currently estimated to be in the range of 11.5 to 12.5 billion gha. When the Ecological Footprint of civilization exceeds this value, the world is no longer sustainable. It’s estimated that humanity’s Ecological Footprint exceeded this value in 1997 by over 30 percent.
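The normalization of local land areas to global hectares with yield factors can be sketched as follows. The parcel areas and yield factors are invented, and the full method also applies per-use-type equivalence factors, which this sketch omits:

```python
# (use type, local hectares, yield factor in gha/ha) -- values invented.
PARCELS = [
    ("cropland", 1_000.0, 1.8),
    ("forest",   4_000.0, 1.1),
    ("pasture",  2_500.0, 0.6),
]

def biocapacity_gha(parcels):
    """Sum local areas normalized to world-average productivity (global hectares)."""
    return sum(hectares * yield_factor for _, hectares, yield_factor in parcels)

total = biocapacity_gha(PARCELS)   # about 7,700 gha
```

Note how the below-average pasture (yield factor 0.6) contributes fewer global hectares than its physical area, while the productive cropland contributes more.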
One of the standard methods for calculating Ecological Footprint uses a matrix technique not unlike the input-output multiplier method widely used to make economic projections. The highly similar Consumption Land Use Matrix (CLUM) translates row inputs that represent standard categories of consumption to column outputs that represent Ecological Footprint land use types such as crop land, grazing land, or forest. CLUM matrices are generated in several different ways. Process-based CLUMS are created by gathering data that associates land use type to consumption categories, such as the amount of forest land required to produce the wood for manufacturing printer paper used in a billing process. Input-output–based CLUMs are constructed by extending existing physical or monetary input-output tables with Ecological Footprint data. The monetary input-output table, for example, maps an activity to its direct and indirect economic impact on other business sectors of the community. The resulting economic impacts can then be mapped to their respective Ecological Footprints based on land use types.
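Mechanically, a CLUM reduces to a small matrix multiplication: a vector of consumption by category times a matrix of footprint coefficients yields footprint by land-use type. A toy sketch with invented coefficients and spend figures:

```python
# Toy Consumption Land Use Matrix: rows are consumption categories,
# columns are land-use types, in gha per $1,000 of spend (all invented).
LAND_USES = ["cropland", "forest", "built_land"]
CLUM = {
    "food":     [0.40, 0.02, 0.01],
    "paper":    [0.00, 0.35, 0.02],
    "services": [0.01, 0.03, 0.05],
}
spend_k = {"food": 120.0, "paper": 40.0, "services": 300.0}  # $ thousands

def footprint_by_land_use(clum, spend):
    """Multiply the consumption vector through the CLUM to get gha per land-use type."""
    totals = [0.0] * len(LAND_USES)
    for category, row in clum.items():
        for j, coeff in enumerate(row):
            totals[j] += coeff * spend.get(category, 0.0)
    return dict(zip(LAND_USES, totals))

fp = footprint_by_land_use(CLUM, spend_k)
```

In an input-output-based CLUM the coefficients come from extending monetary input-output tables, so the same multiplication captures indirect impacts on other business sectors as well.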
The Natural Step
One of the most well known and widely applied sustainability frameworks is The Natural Step (TNS), founded by Karl-Henrik Robèrt in Sweden in 1989. TNS is an internationally recognized organization with offices located in Australia, Brazil, Canada, Israel, Japan, New Zealand, South Africa, Sweden, United Kingdom, and the United States. It’s been successfully applied across a wide variety of industries, including real estate, metals, appliances, utilities, food, retail, apparel, fast food, healthcare, paints, chemicals, furniture, and more. TNS has been adopted and successfully used by over 100 major companies including Bank of America, McDonald’s, Nike, Interface, Starbucks, Home Depot, and Ikea.
TNS represents a systems approach to sustainability analysis and is based on four fundamental Principles of Sustainability.[10]
1. “Prevent the progressive buildup of substances extracted from the Earth’s crust.” For example, mining, extracting, and refining of the naturally occurring and highly toxic element mercury from cinnabar ore and preventing its subsequent accumulation in our environment through sustainable, lossless reuse.
2. “Prevent the progressive buildup of chemicals and compounds produced by society.” For example, avoiding the buildup of highly toxic man-made compounds in the environment such as dioxins, PCBs and DDT, by using non-toxic, sustainable alternatives.
3. “Prevent progressive physical degradation and destruction of nature and natural processes.” Examples include sustainable harvesting and replanting of forest regions, avoiding depletion of soil nutrients by using sustainable agriculture practices, and preserving fisheries and diversity of species through sustainable commercial fishing practices.
4. “Promote conditions that empower people’s capacity to meet their basic human needs (for example, safe working conditions and sufficient pay to live on).” Other examples include offshoring and outsourcing operations that promote safe working conditions and standard wage levels, which combat the causes of social unrest, violence, and political instability.
The TNS Framework employs a practice of "backcasting," or working backwards, from a desired future state, one of ideal sustainability. In backcasting, you start with a vision of the future, and iteratively plan actions that take you ever closer to your future ideal. This is similar to traditional strategic planning methodologies, such as SWOT (Strengths, Weaknesses, Opportunities, and Threats) and SCAN (Strategic Creative Analysis), that begin with a desired end state or objective, identify internal and external factors that are favorable or unfavorable to achieving success, and then plan incremental steps to reach the desired objective. There are now many successful case studies of this process to draw from.
Ashforth Pacific, Inc., is a typical example of a successful TNS framework application that produced a wealth of tangible and intangible benefits. Ashforth Pacific has a total of 55 employees and provides third-party property management, construction, and parking management services to markets throughout the Western United States. The company currently manages over 15 million square feet of office space. By applying the four TNS Principles, Ashforth developed a variety of sustainability programs that focused on energy, water, waste, and toxic materials. Over a five-year period, Ashforth saved a total of $654,000 by reducing energy consumption through adjustments to lighting, heating, and cooling systems throughout its building inventory. They saved over $43,000 annually through several water conservation projects that included sustainable improvements to their landscape irrigation systems. They also instituted a variety of waste reduction programs, including increased emphasis on electronic communication, double-sided copying, use of recycled paper ($15,000 annual savings), recycling of construction materials, and centralization of trash collection.
At the other end of the spectrum is Nike, Inc., with annual revenues of over $18 billion and nearly 800,000 workers in contract factories spread across the world. Nike senior management, led by CEO Phil Knight, began adopting the principles of TNS in 1997. Nike has continued to infuse TNS principles into their product lifecycle, strategic decision process, and employee culture for over ten years. By 2003, Nike manufacturing operations reduced their solvent use by 95 percent by instituting the use of alternative water-based cements, cleaners, and similar materials. The reduction in the use of hazardous chemicals not only improved worker safety, it significantly reduced the environmental impact of Nike manufacturing operations. The resulting annual cost savings on materials alone amounted to nearly $4.5 million in 2003. In another example, Nike introduced improved machine technology for manufacturing its shoeboxes (which were already using recycled materials) in 2008 and realized not only a material savings of 4,000 tons per year, but an additional annual cost savings of $1.6 million. Over the years Nike has increasingly woven the principles of sustainability into its product line and design philosophy, and earned rewards in the form of tangible cost savings and intangible social capital, market positioning, and competitive advantage.
These are just a few examples drawn from a much larger collection of case studies that span the entire spectrum of industries worldwide. They consistently demonstrate not only tangible cost benefits but a broad range of intangible benefits resulting from increased sustainability. The next section narrows this focus to a particular aspect of the enterprise, IT.
FUTURE OPPORTUNITIES FOR IMPROVING GLOBAL IT SUSTAINABILITY
Information technology holds the promise of revolutionary improvements in global enterprise sustainability that will dramatically enhance enterprise agility, increase operational efficiency, and even turn cost centers into profit centers. Until recently, this potential was largely unexplored while basic improvements in material recycling and facility energy efficiency were the primary focus. In the last several years, however, IT’s growing impact on the sustainability of a broad spectrum of industries and institutions in commercial, government, and academic sectors has been felt. These improvements have a far-reaching global effect and are driven by advances in data center consolidation, server, storage, desktop, and network virtualization, cloud computing, workforce mobility, ubiquitous computing, energy and environmental management, disaster recovery, information assurance, and physical security. They physically manifest as deferred construction, downsized buildings, reduced floor space, lower energy consumption, savings in heating/cooling, reduced peak electrical usage, co-generation, employee well-being, customer satisfaction, and many other tangible and intangible benefits. IT is now a key enabler of global enterprise sustainability, and its direct and indirect influence will be increasingly felt in all facets of global enterprise operations.
IT server and data center operations account for a significant portion of worldwide energy consumption, and every key sector of the world economy now depends on them. According to the U.S. Environmental Protection Agency, in 2006 over 1.5 percent of total U.S. electricity consumption, shown in Exhibit 5.1, was attributable to data center operations. This amounts to over 61 billion kilowatt-hours (kWh) of energy consumed, with an aggregate value exceeding $4.5 billion, and is projected to grow to over 120 billion kWh by 2011 (see Exhibit 5.2). A single enterprise-grade data center consumes enough energy to power 25,000 households.[11] These numbers are projected to double by the year 2011[12] and reflect a doubling of energy consumption since the year 2000. Of the total energy consumption, about 50 percent is currently attributable to power and cooling infrastructure alone. Data center Carbon Footprint, if unchecked, will increase by a factor of four by the year 2020. Fortunately, we have the means available to prevent this.
Exhibit 5.1: Total U.S. Energy Consumption by Segment
Source: EIA.
Exhibit 5.2: Annual U.S. Data Center Electricity Usage
On the commercial side, data center growth is being driven by a number of factors including the growth of global electronic financial transactions, expansion of the Internet, rise of electronic healthcare systems (e.g., a single high-quality digital chest X-ray can consume 20 megabytes or more; a CT scan of a heart can consume 200 megabytes or more), increase in global commerce and services, and the impact of satellite navigation and commercial package tracking services.[13] On the government side, similar expansion is being driven by digital records retention, Internet publishing of government information, growing defense and national security systems, disaster recovery preparations, information security initiatives, and the impact of digital health and safety requirements.
Overall, data centers now account for as much as 25 percent of corporate IT budgets, and operational costs are rising by as much as 20 percent per year. In a recent Sun Microsystems poll, 68 percent of IT managers reported that they were not even responsible for their data center power bills. These numbers will rapidly spiral out of control unless these systems become more sustainable, ultimately resulting in widespread service interruptions due to energy or infrastructure shortages. The solution, however, lies not in curtailing growth, but in embracing an entirely new paradigm of high performance, energy-conserving, and sustainability-enhancing enterprise IT technologies.
Virtualization and Cloud Computing
This section extends the best practices discussion of Cloud Computing in Chapter 3 to include essential elements of sustainability best practices for this rapidly emerging information technology, with a focus on storage and virtualization practices.
Data Center Consolidation and Virtualization
Data center consolidation and virtualization efforts are closely related and have a major impact on enterprise Ecological Footprint. Deferred construction of a single large-scale data center, for example, can offset tens of thousands of gha of Ecological Footprint. When a hundred or more global data centers are consolidated down to just a few, the reduction in footprint can be substantial. New innovations in high-density, modular data centers significantly increase the capacity and utilization of these assets, while considerably reducing operating costs. Relocation of data centers near sources of renewable energy and cooling such as hydroelectric dams further reduces cost and enhances Carbon and Ecological Footprints. Virtualization amplifies this effect by significantly increasing asset utilization levels and reducing overall power consumption and cooling load.
The total Ecological Footprint of a data center is calculated by first enumerating the fixed and recurring resources needed to construct, equip, commission, and operate the facility. Each component and process used in the project must be traced back to the original source. The calculation must account for the amount of biocapacity, or amount of biologically productive land or sea (measured in gha), dedicated to the production of the item and for the assimilation of any wastes resulting from the use of that item. A cell phone, for example, has an Ecological Footprint of about 32 global square meters. An average PC has a footprint of about 764 global square meters. (CFOs love this stuff.)
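Using the per-unit footprints quoted above, and remembering that one global hectare is 10,000 global square meters, a first-order footprint for an equipment inventory can be estimated as follows; the inventory counts are hypothetical:

```python
GLOBAL_M2_PER_GHA = 10_000   # 1 global hectare = 10,000 global square meters

# Per-unit Ecological Footprints in global square meters; the phone and PC
# values come from the text, and the inventory counts below are hypothetical.
FOOTPRINT_GM2 = {"cell_phone": 32.0, "pc": 764.0}

def fleet_footprint_gha(counts):
    """First-order Ecological Footprint of an equipment inventory, in gha."""
    total_gm2 = sum(FOOTPRINT_GM2[item] * n for item, n in counts.items())
    return total_gm2 / GLOBAL_M2_PER_GHA

office = fleet_footprint_gha({"pc": 5_000, "cell_phone": 5_000})  # 398.0 gha
```

Even this crude roll-up shows how quickly endpoint hardware alone accumulates hundreds of global hectares before any facility, power, or staffing impacts are counted.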
Most of the waste and inefficiency associated with commercial products results from the basic material processes, energy, and emissions used in their production, not their disposal. For example, every metric ton of gold used to make electrical contacts in a rack server requires 350,000 tons of ore to be mined. One metric ton of platinum requires 950,000 tons of ore. Obviously, recycling materials without loss of value has a huge impact on sustainability when the effects of the entire supply chain are considered. Every pound of aluminum that is recycled, for example, saves 8 pounds of bauxite, 4 pounds of chemical products, and 14 kilowatt-hours of energy. Exhibit 5.3 illustrates the explosive problem of tracing sources of Ecological Footprint from a complex system such as a data center back to their origins. Consequently, few detailed studies have looked at systems more complex than a single cell phone,[14] and even those studies were limited by materials and processes considered proprietary by the manufacturer. Even the most basic first order analysis shows that the footprint of a large, multi-building, enterprise class data center complex can amount to tens of thousands of global hectares (>10,000 gha). This value, to make matters worse, does not include the additional impact of powering and cooling the data center, or the impact of staffing the data center to operate it. Obviously, every time construction of a data center is avoided, or an existing data center is decommissioned, there is a tremendous reduction in enterprise Ecological and Carbon Footprint, and an equivalent increase in enterprise sustainability, graphically illustrating the impact of IT on overall enterprise sustainability.
Exhibit 5.3: The Recursive Nature of Calculating Ecological Footprint
The best practice CIO can be certain of one thing: more and more C-Suite teams are actively managing these issues, because shareholder activists have all this information at their fingertips. Robert Stephens warned in Chapter 1 that it is only a matter of time until each and every corporation has its pants pulled down on the Internet. The real blow to enterprise profitability comes when shareholders expose this kind of information to an executive management team that is ignorant of it.
Conventional data centers operate at power densities of about 150 to 300 Watts per square foot.[15] These are now starting to be replaced by a new generation of sustainable, modular data centers packaged in integrated "containerized" units that can be installed outdoors with minimal shelter. A single Sun Modular Data Center (MDS), for example, contains 200 kW of IT server capacity in 160 square feet, or 8 times the density of conventional data centers. The integrated closed loop water cooling system used in the MDS is 40 percent more efficient than conventional data center HVAC systems.[16] Power conditioning systems are removed from the MDS container and packaged as external transformer units with power busses extending into the container, further reducing cooling requirements. The similar Hewlett-Packard (HP) POD (Performance Optimized Data Center) is packaged in a 40 foot container and replaces 4,000 square feet of conventional data center space. The POD is 50 percent more power efficient than conventional data center build-outs according to HP and can support loads of 1,800 Watts per square foot, or 27 kW for each of its twenty-two 50U racks. This equates to a Power Usage Effectiveness (PUE) of 1.25 compared to 1.7–2.0 for a conventional facility. A 40-foot POD can house as many as 3,520 computer nodes and is equipped to accept chilled water for cooling. Containerized data centers such as the MDS and POD can be more easily relocated to cooler, higher latitudes, allowing the external atmosphere or adjacent rivers to be used to augment server cooling, saving 10 percent or more in cooling costs, while simultaneously reducing Carbon and Ecological Footprint (i.e., sustainability and economic pragmatism go hand-in-hand).
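PUE itself is simply the ratio of total facility power to the power delivered to IT equipment, so the overhead saved by a lower-PUE design is easy to quantify. A small sketch, using the 27 kW rack load and the PUE figures from the discussion above (the 33.75 kW facility draw is a derived illustration, not a vendor figure):

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power divided by IT load."""
    return total_facility_kw / it_load_kw

RACK_KW = 27.0                      # one POD-class rack, per the text

# A facility drawing 33.75 kW in total to run a 27 kW rack has PUE 1.25.
pod_pue = pue(33.75, RACK_KW)       # 1.25

# Non-IT overhead per rack at a given PUE is IT load x (PUE - 1).
overhead_pod = RACK_KW * (pod_pue - 1.0)        # 6.75 kW of overhead
overhead_conventional = RACK_KW * (1.9 - 1.0)   # about 24.3 kW at PUE 1.9
```

Per rack, the containerized design spends roughly a quarter of the cooling and power-conditioning overhead of a conventional facility, which is where the claimed operating savings come from.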
There are many ongoing examples of major data center consolidation to point to at the time of this writing. HP Corporation is currently in the process of consolidating 85 worldwide data centers into just 6 by converting to a virtualized blade server-based infrastructure. This consolidation move will save HP an estimated $1 billion annually and will be phased in over three to four years. Three of the six data centers will be dedicated to disaster recovery. The consolidation effort will be used as a showcase for HP technologies, reaping not only tangible savings from reduced footprint, but a variety of intangible benefits in the form of market positioning, competitive advantage, social capital, and technical discriminators.
Emerson Network Power recently consolidated over 100 data centers worldwide into just four, while reducing high-cost peak energy demand at its main corporate data center by using a 550-panel, 100-kW rooftop solar photovoltaic array.[17] Although powering an entire data center with photovoltaic panels would be quite expensive, a hybrid approach can produce substantial savings through “peak shaving” during times when utilities are forced to buy energy in the expensive spot markets (and pass these costs on to the enterprise). Alternative renewable energy sources can also be employed, in some cases, by utilizing waste from nearby operations as feedstock for anaerobic digesters or gasification systems. The CAPEX associated with a 2.5-MW digester-based generation system today is roughly $2.50 per installed Watt, about half the installed cost of an equivalent solar PV system.
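A rough sketch of the peak-shaving economics follows. The 100-kW array size is taken from the Emerson example; the demand charge and the fraction of array capacity that coincides with the utility peak are illustrative assumptions.

```python
# "Peak shaving" economics for a rooftop PV array offsetting demand charges.
PV_KW = 100.0                        # array size from the Emerson example
DEMAND_CHARGE_PER_KW_MONTH = 15.0    # assumed $/kW-month demand charge
PEAK_COINCIDENCE = 0.6               # assumed fraction of nameplate output at peak

monthly_saving = PV_KW * PEAK_COINCIDENCE * DEMAND_CHARGE_PER_KW_MONTH
annual_saving = monthly_saving * 12

print(f"Demand-charge savings: ~${annual_saving:,.0f}/yr")
```

This captures only the demand-charge component; spot-market energy savings during peak pricing events would come on top of it.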
Other “greenfield” data center projects are being driven as much by enterprise growth as by needed improvements in sustainability. Amazon, for example, is building a new $100 million, 116,700 square foot data center complex near the Columbia River in Oregon. The facility, scheduled for completion in the third quarter of 2010, provides a sustainable answer to the sizeable problem of cooling hundreds of thousands of servers. River water is piped directly into the site after being processed through a water treatment system. Power from the Columbia River hydroelectric sources provides a large-scale renewable energy source to offset the complex’s Carbon Footprint.[18] Google recently built a similar 30-acre complex in The Dalles, Oregon, also along the Columbia River.[19] One of the major attractions of this small-town location is the nearby Dalles Dam hydroelectric complex. With an overall length of 8,875 feet and height of 260 feet, the dam includes a powerhouse with a total capacity of 1,779 megawatts, ample renewable capacity for the local Google operations. Also making this an ideal location is the local fiber optic hub tied to the coastal PC-1 landing of the 640 Gbps fiber optic network, linking the United States with Japan and Asia.
Similar river-cooled data centers have sprung up 130 miles to the north built by Yahoo and others. In yet another twist on sustainability, Microsoft’s new $550 million, 477,000-square foot data center in San Antonio, Texas, uses 602,000 gallons a day of recycled municipal wastewater to cool the facility during peak cooling months. Water cooling, however, is not the only approach to improve data center sustainability. Microsoft’s Dublin data center uses the year round cool ambient air found at the higher latitude to eliminate the need for chillers entirely. Waste heat generated from large data center operations can also be used for a variety of sustainable applications, including space heating of buildings, facility hot water pre-heating, industrial heating of local commercial greenhouse operations, and heat for sustainable wastewater treatment (anaerobic digestion).
Server Virtualization
Virtualization (see Exhibit 5.4) typically goes hand-in-hand with data center consolidation, and contributes significantly to enterprise sustainability by increasing asset utilization, reducing energy consumption, and decreasing Ecological Footprint. Server utilization in conventional data centers can be as low as 6 percent, and the same servers can draw as much as 74 percent of peak power when idle.[20] By running multiple virtual machines on a single hardware platform, physical server energy consumption can drop as much as 80 percent, while server utilization can increase to as much as 60 to 80 percent. This means that as much as a 15:1 reduction in the number of physical servers is possible, with an associated drop in Ecological Footprint (i.e., more than 700 global square meters per server). Blade servers, in particular, can reduce overall energy usage by as much as 35 percent over conventional servers. For each physical server that is virtualized, 4 metric tons per year of CO2 emissions are eliminated, equivalent to taking a gas-guzzling 15-mpg SUV off the road.
Exhibit 5.4: Consolidation of Server Functions through Virtualization
________________________________________
________________________________________
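The consolidation arithmetic above can be sketched directly, using the 15:1 consolidation ratio and the ~4 metric tons of CO2 avoided per virtualized server. The 300-server estate in the example is an illustrative assumption.

```python
# Consolidation savings at a 15:1 physical-to-virtual ratio, with ~4 metric
# tons of CO2 avoided per year for each physical server retired.
def consolidation_savings(physical_servers: int, ratio: int = 15,
                          co2_tons_per_server: float = 4.0):
    hosts_after = -(-physical_servers // ratio)        # ceiling division
    servers_retired = physical_servers - hosts_after
    return hosts_after, servers_retired, servers_retired * co2_tons_per_server

hosts, retired, co2_tons = consolidation_savings(300)  # assumed 300-server estate
print(f"{hosts} hosts remain, {retired} servers retired, "
      f"~{co2_tons:,.0f} t CO2/yr avoided")
```

By the SUV equivalence cited above, retiring 280 servers is comparable to taking 280 gas-guzzling SUVs off the road.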
Resource impacts often manifest in non-intuitive ways. A single Google search, for example, involving several related queries, is estimated to produce about 7g of CO2. By comparison, boiling a teapot of water produces about 15g. To further put this into perspective, the world IT industry generates as much GHG as all of the world’s airlines put together, according to Evan Mills at the Lawrence Berkeley National Laboratory.[21]
Different types of standalone virtualization (as opposed to hosted) can have different impacts on data center sustainability, depending on the nature of the workload. Where applications are CPU bound or cannot run under the same operating system, there is little difference in efficiency between the different virtualization approaches. In cases where applications are IO bound and can run the same kernel and similar operating system versions, however, the differences can be substantial. Hypervisor and container-based virtualization are two of the most common forms and illustrate both ends of the spectrum. At one end (hypervisor), we have VM isolation at the hardware abstraction layer (HAL). At the other end we have VM isolation at the application binary interface (ABI)/system call layer.
In the hypervisor, or full virtualization model (see Exhibit 5.5), applications execute within a virtual machine: a complete, unmodified operating system image hosted on a fully abstracted hardware layer. The hypervisor, also known as a Virtual Machine Monitor (VMM), runs directly on the base hardware, trapping privileged instructions issued by the individual Virtual Machines (VMs) and rewriting the machine code on the fly, a sophisticated binary translation technique patented by VMware that maintains isolation between the VMs, preventing exceptions that would otherwise crash the system. Contemporary hypervisors often leverage hardware-assisted virtualization such as Intel VT™ or AMD-V™ to trap privileged calls, removing the need for binary translation and significantly increasing system performance. The VMMs used in virtualization products such as VMware ESX Server, Citrix XenServer, KVM, and Microsoft Hyper-V are typically built from Linux, Solaris, Microsoft, or custom kernels that leverage these hardware extensions. In theory, any OS that can run natively on the actual hardware can run within a VM hosted on a hypervisor platform. In practice, however, specialized device drivers are often unavailable.
Exhibit 5.5: Hypervisor Virtualization Model
________________________________________
________________________________________
Paravirtualization offers a significantly higher-performance, more scalable alternative to the hypervisor model by providing an isolated application execution environment with virtual instructions, virtual registers, and a simpler interface to virtual I/O devices (i.e., virtual drivers). This, however, requires customized “virtualization-aware” versions of the guest OS, as opposed to the “off-the-shelf” versions hosted by normal hypervisors. Paravirtualized guest OSs share access to the underlying hardware, eliminating the need for the VMM to trap, replace, and rewrite protected instructions, thereby increasing efficiency. Experimental paravirtualization systems such as Denali (University of Washington) have demonstrated the ability to host hundreds of simultaneous lightweight virtual machines, compared to 5 to 10 VMs for a conventional hypervisor system such as VMware ESX.
In container-based virtualization (see Exhibit 5.6), as typified by Solaris Containers, Parallels Virtuozzo, and Linux-VServer, the virtualization environment provides protected application areas, or resource partitions, within a single, shared OS image, not unlike the partitioning practiced on mainframes years ago. The partitions provide a complete execution environment for software applications, allowing them to share a root filesystem, system executables, and shared libraries, in effect sharing an instance of the base OS. For this reason, container-based virtualization solutions do not support heterogeneous collections of different hosted OSs as hypervisor systems do. Applications running in a container-based “VM” still see a single, bootable OS that behaves just like an ordinary, non-virtualized OS, and can be easily migrated from one physical server to another. Elimination of the complex HAL layer, however, significantly reduces virtualization overhead compared to hypervisor-based systems. Benchmarking studies have shown that container-based systems run close to the performance of unvirtualized systems, with 1 to 10 percent overhead, depending on operating system and workload. Hypervisor systems, in comparison, often incur overhead as high as 40 percent. On overhead alone, the most sustainable solution with the smallest Ecological Footprint would be container-based virtualization for I/O-dominated workloads. For large, highly homogeneous enterprise data center operations such as Google or Amazon, there are obvious advantages to this form of virtualization.
Exhibit 5.6: Container-Based Virtualization Model
________________________________________
________________________________________
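For an I/O-bound workload, the overhead figures cited above translate directly into effective capacity. A minimal comparison, using the 1 to 10 percent container range and the 40 percent hypervisor worst case (the raw capacity unit is arbitrary):

```python
# Effective throughput under different virtualization overheads, using the
# benchmark ranges cited in the text for I/O-bound workloads.
def effective_capacity(raw_units: float, overhead_fraction: float) -> float:
    return raw_units * (1.0 - overhead_fraction)

RAW = 100.0
container_best = effective_capacity(RAW, 0.01)    # 1% container overhead
container_worst = effective_capacity(RAW, 0.10)   # 10% container overhead
hypervisor_worst = effective_capacity(RAW, 0.40)  # 40% hypervisor worst case

print(f"container: {container_worst:.0f}-{container_best:.0f} units, "
      f"hypervisor worst case: {hypervisor_worst:.0f} units")
```

In the worst cases, the container retains 90 units of every 100 where the hypervisor retains 60, which is the efficiency gap driving the Ecological Footprint argument.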
Current trends indicate a continued evolution of lighter weight virtualization layers, in some cases leading to the embedding of “flash” hypervisors directly into the motherboard hardware. These embedded hypervisors will load network bootable VMs from the cloud into local memory and access virtualized disk storage, entirely eliminating the requirement for local disk capacity and associated energy consumption and hardware footprint. This, of course, is just a natural evolution of the diskless node, or hybrid client, which employs a network bootable OS and remote storage.
Choice of virtualization approach is a subtle and complex decision process that involves consideration of cost (recurring and non-recurring), performance, reliability, maintainability, scalability, availability, workload characteristics, security, and ultimately sustainability. In some cases, the cost of porting legacy code to a common OS is simply prohibitive, precluding shared resource approaches such as container-based virtualization. In other cases, reliability concerns dictate maximum isolation between VMs, requiring a robust hypervisor approach. In still other cases, trading performance, efficiency, and cost for isolation is advantageous and can yield significant savings and gains in sustainability and a lowered Ecological Footprint. Virtualization in its many forms and variations is clearly here to stay and will be a key element of cloud computing into the future.
Storage Virtualization
Storage virtualization (see Exhibit 5.7), evolving from today’s Network Attached Storage (NAS) and Storage Area Networks (SAN), is yet another approach to hardware consolidation and utilization optimization. This approach is used for “pooling” distributed, heterogeneous storage resources to make them appear as a single, uniform, logical unit of storage. This has the effect of increasing device utilization levels and reducing capacity requirements and subsequent footprint. Typical data center environments with widely distributed storage assets can have storage utilization levels as low as 20 percent. By adding storage virtualization, redundant hardware can be eliminated, data center floor space reduced, energy consumption lowered, and cooling requirements reduced. Storage management costs can often be reduced by half or more. Payback periods as short as 12 months have been demonstrated with return on investment (ROI) exceeding 180 percent over a typical three-year period.
Exhibit 5.7: Storage Virtualization Model
________________________________________
________________________________________
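The payback and ROI claims above can be reproduced with simple arithmetic. The $400,000 pre-virtualization cost base is an assumed figure for illustration; the halved management cost and 12-month payback come from the text.

```python
# Storage virtualization ROI sketch: management costs cut in half, with a
# 12-month payback period. The annual cost base is an assumed figure.
annual_cost_before = 400_000.0
annual_saving = annual_cost_before * 0.5   # "reduced by half or more"
investment = annual_saving                 # 12-month payback => invest = 1 yr of savings

three_year_gain = annual_saving * 3 - investment
roi_percent = 100.0 * three_year_gain / investment

print(f"3-year ROI: {roi_percent:.0f}%")
```

At a clean 12-month payback the three-year ROI works out to 200 percent, consistent with the “exceeding 180 percent” figure cited above.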
Desktop Virtualization
Taking the virtualization paradigm even further, desktop software is transformed into a managed virtualized service and removed from the remote client platform. This natural evolution from remote desktop and network workstation technology allows the user to do more with less by facilitating access to complex enterprise cloud applications with no more than a lightweight Netbook or smartphone platform (e.g., Citrix Nirvana Phone—”desktop in your pocket”). Combine the newer smartphones featuring built-in MEMS projectors (which can project a 50-inch diagonal image) with flexible, roll-up Bluetooth keyboards, and you get a highly virtualized, high-performance workstation that fits in your pocket. At the current pace of technology, cell phones with 1080p projector performance will arrive before you know it. Country-specific internationalizations (e.g., language, legal, contractual, cultural) will be instantly delivered to the desktop, application user interfaces, document editors, and other resources courtesy of the cloud, and adjusted as the GPS-tracked worker moves fluidly from country to country. The result is a tremendous increase in workforce mobility and global enterprise agility that will be needed to adapt to the rapid pace of globalization and the opportunities it brings for lowering cost, increasing enterprise performance, and diversifying the supply chain. IT isn’t just an automation enabler anymore—it’s the key to global enterprise survival.
Network Virtualization
Just as server and storage virtualization map physical resources to logical groups, network virtualization (see Exhibit 5.8) can be used to reduce the physical and hence Ecological Footprint of network services, while preserving access control and path isolation. Networks that were formerly physically separated can now be virtualized through Generic Routing Encapsulation (GRE) tunnels or Multiprotocol Label Switching (MPLS) that create separate, virtual networks over a single, physical IP backbone. Routers, switches, and firewalls can be consolidated and virtualized in the same way as servers to support independent logical networks with virtual routing tables and security features, eliminating redundant hardware, increasing utilization, saving energy, and lowering Carbon and Ecological Footprint.
Exhibit 5.8: Network Virtualization Model
________________________________________
________________________________________
Virtualization as a Sustainability Strategy
From the preceding discussion it’s apparent that data center consolidation/virtualization moves require detailed analysis of the planned workload to optimize energy efficiency, reliability, availability, performance, security, and other factors that realize the benefits of sustainability. The end result, however, has proven to be worth the effort, as illustrated by the many compelling case studies that are freely available. The benefits include not just lower operating costs (CAPEX and OPEX), but a host of intangible benefits for both the global enterprise and its host communities.
Cloud Computing, Agility, and Sustainability
Server, storage, and network virtualization are the basic building blocks of cloud computing and enable a potent, sustainable mix of Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Cloud computing gives global IT organizations unprecedented agility, enabling them to quickly respond to dynamic market conditions, shifting sources of supply and demand and new global opportunities with little or no change to physical, Ecological, or Carbon Footprints. Enhanced supply chain agility is key to reengineering the enterprise to deal with rapidly changing markets while maintaining competitive advantage and preserving barriers to entry. In the future, cloud computing will be instrumental in coping with demanding time-to-market challenges such as those currently found in the mobile phone industry. Given the recent trend in shifting Supply Chain Management (SCM) from vertical integration within the enterprise to horizontal integration with highly dynamic supply chain partners, cloud computing and the agility it provides will be increasingly important. The resulting real-time supply chain communication will improve efficiency, reduce waste, minimize excess inventory, and lower operating costs—all resulting in improvements to enterprise sustainability.
The combination of virtualization and service-oriented architecture (SOA) enables rapid provisioning of resources to the most remote, IT-challenged environments. Best practice CIOs already know that entrepreneurs in developing countries can tap into global SaaS providers with no more than a smartphone, gaining instant, on-demand, pay-as-you-go access to powerful financial and information resources. In the same manner, mobile members of the global enterprise can access enterprise grade applications such as customer relationship management (CRM), enterprise resource planning (ERP), business intelligence (BI), human resources (HR), enterprise performance management (EPM), and computer integrated manufacturing (CIM) with no need for in-country, physical, brick-and-mortar (B&M) presence. Cloud computing facilitates scalable provisioning as a single representative in a foreign location expands to a field office, to a branch office, and ultimately a small division, all without significantly increasing in-country IT footprint.
In cases where on-site IT software customization support is required, PaaS can be leveraged to deliver a virtualized platform with dedicated storage, applications, and development tools from the cloud to users anywhere in the world. From the user’s perspective, they have a unique, dedicated development platform that’s no different from a local PC. Instead, a lightweight, resource-constrained mobile platform can be used in the remote location. No local IT support personnel are needed—everything is managed remotely in the cloud. An IT “strike” team working remotely for a limited period with an offshore supplier, for example, can implement customized supply chain integration solutions on the spot without the need for a local office, thereby eliminating space, energy, landline connectivity, legal, janitorial, security, and insurance requirements and their logistical intricacies.
For more permanent remote operations that need to support a sophisticated mix of sales, marketing, accounting, engineering, and other functions on-site, an entire scalable virtual data center can be stood up through cloud IaaS. Using a potent combination of server, desktop, storage, application, and network virtualization, a comprehensive, customized, country-specific suite of enterprise functionality including CRM, ERP, HR, payroll, email, Voice over IP (VoIP), collaboration, backup, disaster recovery, and other branch office functions can be rapidly provisioned.
As discussed in previous sections, deferring construction of a physical in-country data center can save upwards of 10,000 gha of Ecological Footprint. On the pragmatic side, IT resource utilization more than quadruples in many cases, the cost of operations drops precipitously, and the end result is a workplace environment better suited to emerging Generation Y workforces, shareholders, stakeholders, and generations to come.
MOBILITY
As a running theme throughout this book, best practice CIOs have become aware that the workplace is undergoing radical change, driven by cultural shifts brought by new groups entering the workforce such as Generation Y. These groups have grown up with high-performance personal computers, laptops, and mobile phone technology, all evolving at an exponential rate. The physical changes they are effecting in the workplace have potentially far-reaching consequences for enterprise sustainability, and for Ecological and Carbon Footprint.
According to “career doctor” Randall S. Hansen, PhD, this newest crop of workers “has no interest whatsoever in working in a cubicle—not because it is beneath them, but because they feel advances in technology should let them be able to choose to work from home, Starbucks, or anywhere there is a Wi-Fi connection.”[22] They are attracted by Results Only Work Environments (ROWE) that encourage workers to “work wherever they want, whenever they want, as long as the work gets done.” Generation Y workers are not afraid to challenge the status quo and prize employers that encourage creativity and independent thinking. Employers also are finding that recruiting such employees requires benefits such as flexible work schedules, emphasis on work/life balance, telecommuting, and office environments more akin to college campuses than traditional business interior design. In addition, these new generation workers expect from their IT organizations computing capacity on demand, anywhere, anytime—the hallmark of cloud computing.
Offices of the Future
Generation Y workers prefer “open plan” flexible offices, which emphasize shared, open spaces with few private, dedicated offices. Shared hot-desking stations, touch-down areas, computing islands, and a limited number of enclosed, “private harbor” offices are preferred instead of floors of dedicated, identical cubicles. Strongly resembling college campus environments, these spaces are used by Generation Ys to facilitate shared conversations, rapid-issue solving, intense and open information sharing, and pervasive tacit learning. The flexibility they encourage requires sophisticated, high-bandwidth wireless computing and movable resources and furnishings that can be re-arranged on the fly to support ad hoc task requirements. From a sustainability perspective, open office environments require less space because less area is wasted in corridors and dead spaces. In a sense, open office environments represent a “virtualization” of the physical work environment and in fact produce a highly similar increase in utilization and smaller Ecological Footprint. Less space means less energy to support it and fewer materials consumed in the construction and commissioning of the building.
These changes in office environments are now becoming visible in major corporate settings, and so are the benefits. IBM reports that it is now saving $100 million a year in real estate costs because less office space is needed.[23] The workforce at Accenture, a major management consulting firm, is so mobile that not even the CEO has a dedicated office. The Crayon marketing firm has even gone so far as to put its headquarters entirely in cyberspace. Its workers, scattered across multiple cities, rarely meet in the physical world but hold routine weekly meetings in their virtual headquarters.
Another benefit of moving from traditional B&M operations to more virtual settings is that talent can easily be recruited from anywhere in the world, not just the local geography. Costly relocation packages (often exceeding $100,000 or more) are no longer needed, and the disruptions of home and family associated with relocation can be avoided, increasing employee satisfaction.
Evolving Mobile Computing and Telecommunications
Growing numbers of global enterprise workers in developed and developing countries alike are embracing the mobile computing and communications technologies that reduce or eliminate the need for conventional office space, allow the work to roam with the worker, and directly connect workers with sophisticated international enterprise financial and information services. These improvements are creating a world-spanning, sustainable fabric of transactions, collaboration, social networking, data sharing, and knowledge management that increasingly blurs the boundaries of the global enterprise, its supply chain, and its customers. In a similar manner, it blurs the boundaries between developed and developing countries, accelerating the process of globalization. This fabric has been compared by a number of authors to a growing, increasingly sophisticated neural network, which will gradually acquire distributed intelligence and cognitive abilities as it evolves and supports its human users.[24]
Evolving smartphones, combined with virtualized SaaS, PaaS, and IaaS services delivered through 4G LTE and WiMAX wireless broadband, enable radical changes in mobile workforce capabilities. CIOs everywhere are witnessing business applications formerly requiring fully provisioned desktop workstations now appearing on mobile phones, thanks to mobile desktop virtualization. MEMS-based, built-in DLP projectors are appearing in mobile phones such as the Samsung W7900 and currently provide 400×240 resolution on a 50-inch screen. HD projection resolution is, as you might expect, just around the corner. Today, standalone versions of these embedded projectors support 720p resolution and are available off-the-shelf. We’ll see 1080p projectors within a year or two. Toss in a roll-up Bluetooth keyboard and mouse, and you have an enterprise workstation with a 50-inch screen that fits in your pocket.
These game-changing developments are further fueled by seamless vertical handoffs in mobile communications systems. Vertical handoff refers to automatic handover from one communications medium to another, for example, between a carrier’s 3G/4G network and an enterprise wireless WiMAX or WiFi network. This is similar to the horizontal handoff we see today as users roam from one cell to another using the same technology, except the implications are far greater. For example, in the future, when a mobile worker engaged in a complex online collaboration session enters a campus building, their multimode handset will seamlessly switch from the external 4G LTE or 802.16m network to the campus building-hosted wireless VoIP system without dropping any of the concurrent voice, video, or data streams or incurring noticeable latency. As the worker enters an office telepresence environment, the mobile multimedia/multimodal session will instantly and seamlessly transfer to the fixed environment, taking advantage of local resources tied in through pervasive computing, all transparently and dynamically configured according to user preferences and enterprise administrative, economic, and security policies.
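At its core, a vertical handoff policy is a selection rule over the networks currently in range. The sketch below is a toy version of such a rule; the network names, relative costs, and bandwidth floor are all illustrative assumptions.

```python
# Toy vertical-handoff policy: prefer the cheapest in-range network that
# still meets the session's bandwidth requirement.
from dataclasses import dataclass

@dataclass
class Network:
    name: str
    in_range: bool
    cost_per_mb: float       # relative cost, not a real tariff
    bandwidth_mbps: float

def select_network(networks, min_mbps: float):
    candidates = [n for n in networks
                  if n.in_range and n.bandwidth_mbps >= min_mbps]
    return min(candidates, key=lambda n: n.cost_per_mb, default=None)

nets = [
    Network("carrier-4G", True, 1.00, 20.0),
    Network("campus-WiFi", True, 0.01, 54.0),
    Network("telepresence-LAN", False, 0.00, 1000.0),  # not yet in range
]
best = select_network(nets, min_mbps=5.0)
print(best.name)   # entering the building, the session migrates to campus WiFi
```

A production handoff policy would also weigh user preferences and enterprise security and administrative policies, as described above, but the selection structure is the same.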
As we’ve already seen with today’s basic SIP and H.323 VoIP systems, significant cost savings are reaped by simply transferring a call from the external carrier 3G network to the enterprise wireless VoIP system, particularly when the called number is already in the enterprise domain. If the original call was to an external number, considerable savings can still be realized by re-routing an external call through a much lower cost campus landline calling plan. Increasingly widespread roaming between public cellular networks and private VoIP systems will have the effect of reducing the load on public networks, allowing them to do more with less physical infrastructure, conserving limited carrier bandwidth, and reducing their Ecological and Carbon Footprints in the process.
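The least-cost routing logic described here reduces to a simple decision rule. In this sketch, the per-minute rates and the domain names are illustrative assumptions; the ranking (on-net VoIP cheapest, campus landline next, carrier network most expensive) follows the text.

```python
# Least-cost call routing: keep on-net calls on the enterprise VoIP system,
# break out off-net calls over the cheaper campus landline plan.
RATES = {  # assumed $/minute, for illustration only
    "enterprise_voip": 0.00,
    "campus_landline": 0.02,
    "carrier_3g": 0.25,
}

def route_call(callee_domain: str, enterprise_domain: str) -> str:
    if callee_domain == enterprise_domain:
        return "enterprise_voip"      # on-net: free internal call
    return "campus_landline"          # off-net: cheapest external breakout

def call_cost(route: str, minutes: float) -> float:
    return RATES[route] * minutes

internal = call_cost(route_call("acme.com", "acme.com"), 30)
external = call_cost(route_call("example.org", "acme.com"), 30)
carrier = call_cost("carrier_3g", 30)
print(internal, external, carrier)  # 30-minute call under each route
```

At these assumed rates, a 30-minute off-net call costs $0.60 over the campus plan versus $7.50 if left on the carrier network, which is the kind of savings the roaming scenario above captures automatically.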
In particular, if we consider the electromagnetic spectrum to be a limited natural resource just like air or water, we have now conserved “electromagnetic footprint” at the same time. The iPhone, which a year after its introduction represents only 3 percent of AT&T’s subscriptions, has significantly strained its 3G network,[25] in some cases causing delays of as much as 15 minutes to load applications. It’s predicted that sometime in 2010 the number of mobile broadband users will exceed the number of fixed-service users and climb precipitously from there (see Exhibit 5.9). This problem is aggravated by a natural bandwidth resource so strained that carriers are forced to use a discontinuous patchwork of pieces of spectrum to provision their networks. Exhibit 5.10 shows the projected growth for the different media elements eating up this mobile spectrum. Projections of 4G data rates as high as 1 Gbps obviously raise serious sustainability questions considering the limitations in natural electromagnetic spectrum. Extensive use of horizontal and vertical handoff, data compression, cognitive radio, and other emerging technologies for bandwidth conservation will be needed to avoid exhausting this precious natural resource long before the promised benefits are realized.
Exhibit 5.9: Broadband Usage by Fixed and Mobile Segments
________________________________________
Exhibit 5.10: Projected Growth of Broadband Media Elements
________________________________________
________________________________________
To really put these developments in perspective, consider that worldwide PCs in use topped 1 billion in 2008 and will surpass 2 billion by 2015, with the majority of this growth coming from BRIC countries (Brazil, Russia, India, and China). These numbers, however, are quickly being overtaken by the proliferation of ever more powerful mobile phones. The first billion cell phones took 20 years to sell, the second billion only four years, and the third billion only two. This is even more significant when you consider that over half the world’s population now owns a cell phone.[26] Clearly, PCs are no longer the driver of change in the global workplace. This is true because much of the action is now occurring in developing countries that lack the basic market and utility infrastructure needed for PCs and their networks (e.g., stable power, hardware/software distribution channels, and landline network connectivity.)[27] Mobile phones, however, allow users to easily bypass entire phases of traditional infrastructure development, such as postal services, landline telecommunications, and even road networks that would otherwise take decades to build out. Mobile phones are deployed through vastly simpler distribution networks and support direct access to sophisticated enterprise applications hosted in global clouds. These observations are supported by a recent Forrester survey of emerging markets that shows: (1) a strong correlation between mobile phone adoption and a rise in per capita GDP; and (2) that people in developing countries are spending a higher percentage of their income on cell phones and related equipment than their counterparts in developed countries.[28]
The most basic rural economies can now stand up sophisticated, current generation wireless communication networks in a fraction of the time required for traditional infrastructure. Self-contained, low cost, cellular networks in a box or trailer can be readily obtained from a host of worldwide suppliers that feature built-in gateways to international landline circuits and sophisticated features such as automatic call handoff to in-building VoIP systems. These low-cost cellular solutions are rapidly being deployed in developing countries and supplanting functions that were formerly handled by primitive traditional services that were risky, expensive, and inconvenient. These new mobile services are having profound and far-reaching effects on developing countries that are rapidly improving their standard of living, reducing corruption, and fueling growth of local economies. Automation of business services has in many cases eliminated corruption by removing traditional middlemen illegally profiting from the market.
In India, users can now wire funds to friends and family, using a mobile phone and Visa card, and are now paying bills by text message. Fishing boats in Kerala, southwestern India, are able to phone ahead to retail markets with the details of their catch, stabilizing markets and prices. Text messaging is used to monitor elections in Africa by observers who instantly report evidence of election tampering, reducing political corruption. In economies throughout the third world, mobile phones are now used to manage the distribution of perishable commodities, reducing waste and improving the diets and health of distant cultures. Beyond these local improvements, mobile phones provide a platform for mobilizing and integrating a developing country’s citizens directly into the world’s financial systems, bypassing complex traditional phases of national financial system evolution.[29]
Telepresence, Teleoperation, and Robotics
Some of the most interesting new developments in the global enterprise have to do with different forms of virtual presence. Simple conference calls and Video Teleconferencing (VTC) sessions are rapidly giving way to more immersive telepresence systems. Even more advanced are teleoperation systems that are being used to extend expert human skills to tasks in remote locations. Fully automated robotic systems are already becoming commonplace in many areas of manufacturing and material handling. These developments are having major impacts such as increased efficiency, better utilization of human and machine capital, lower Ecological and Carbon Footprints, and ultimately improved enterprise bottom line.
Telepresence systems are a significant improvement over their earlier VTC counterparts. VTC equipment was often installed after the fact in non-optimized conference rooms, resulting in poorly lit, heavily shadowed participants on the other end of the connection. Instead of scaled-down, hard-to-resolve, and often dark visual displays, telepresence offers a more integrated, optimized solution where participants are rendered as bright, high resolution, life-size images, imparting a comfortable face-to-face ambiance to the meeting. Telepresence meetings can combine participants from as many as 48 different locations, significantly reducing air travel and ground transportation incurred by employees. A single deferred round-trip cross-country flight for one person, for example, saves over 1600 pounds of CO2 emissions, to say nothing of the high cost and inconvenience of air travel (i.e., both tangible and intangible benefits are realized). The resulting decrease in the number of air travelers will result in fewer aircraft needed by the airlines, smaller traffic hubs, and so on as you recurse down through the supply chain.
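The travel-avoidance arithmetic above can be sketched in a few lines. The emission factor and trip distance below are rough, assumed values chosen only to illustrate the calculation; they are consistent with, but not taken from, the chapter's "over 1600 pounds" figure.

```python
# Back-of-envelope estimate of CO2 avoided by replacing air travel with
# telepresence. The emission factor and distances are illustrative
# assumptions, not figures from the chapter.

LBS_CO2_PER_PASSENGER_MILE = 0.35   # rough per-passenger average (assumed)

def flight_co2_lbs(round_trip_miles: float) -> float:
    """Approximate CO2 emissions (lbs) for one passenger's round trip."""
    return round_trip_miles * LBS_CO2_PER_PASSENGER_MILE

def annual_savings_lbs(trips_avoided_per_year: int, round_trip_miles: float) -> float:
    """Total CO2 (lbs) avoided by deferring flights in favor of telepresence."""
    return trips_avoided_per_year * flight_co2_lbs(round_trip_miles)

if __name__ == "__main__":
    # One cross-country round trip, assumed ~5,000 miles:
    print(f"Per deferred round trip: {flight_co2_lbs(5_000):,.0f} lbs CO2")
    # 200 employees each skipping 4 such trips a year, in short tons:
    print(f"Annual: {annual_savings_lbs(200 * 4, 5_000) / 2_000:,.0f} tons CO2")
```

The same recursion the text describes (fewer travelers, fewer aircraft, smaller hubs) would compound these first-order savings down the supply chain.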
Automotive transportation between different enterprise locations can be reduced, creating additional cost savings (especially when the price of gas may soon rise again above $5 per gallon) and has the same, recursive effect on Ecological Footprint. In the near future, home telecommuters and travelers will have portable telepresence systems that tie back into the enterprise social fabric. Cisco, a leading manufacturer of telepresence systems, uses them extensively in their own operations. They hold about 7,000 meetings a week using telepresence, saving over $300 million annually in travel costs and offsetting several hundred metric tons of annual CO2 emissions.
Telepresence benefits extend far beyond simple business meetings. Remote medicine applications instantly bring specialists to remote areas that lack sufficient population to justify the cost of local experts. It can be used in teleoperation applications to allow personnel to operate equipment at great distance, or in locations that are extremely hazardous, as commonly done today by NASA in robotic space exploration or by the military in battlefield UAV (unmanned aerial vehicle) operations. In a sense, teleoperation allows us to “virtualize” highly skilled employees, eliminating the need for redundant staffing at different global sites, and maximizing their utilization across the enterprise.
Robotics extends the idea of virtualizing skilled employees one step further, by capturing skills into rudimentary (i.e., typically rule-driven), autonomous machine intelligence, which can be deployed in large scale throughout production line operations, as is now done with robotic assembly systems in automotive plants, largely eliminating human workers. Computer Integrated Manufacturing (CIM) operations integrate robotics into completely automated production operations, tying together design, planning, purchasing, inventory control, and manufacturing. Robots are often combined with multiple machine tools in “flexible manufacturing” cells (FMCs) or flexible manufacturing systems (FMSs) that are used as building blocks for production lines. These cells can be quickly reconfigured and adapted to different tasks, in effect virtualizing physical manufacturing capabilities, all controlled through CIM. FMCs and FMSs significantly raise the utilization of manufacturing assets and provide enhanced agility to respond to rapidly changing market demands.
Looming world food shortages resulting from climate change–driven desertification, water shortages, and weather anomalies will mandate even larger scale, highly efficient, and increasingly robotic megafarm operations. Already, large-scale farming efforts utilize agricultural equipment automatically guided by Global Positioning System (GPS) satellite data fused with airborne remote sensing–based maps of crop conditions.[30] Soon, this equipment will no longer have a human in the cab; it will be fully autonomous, monitored from a remote command center with control of hundreds of square miles of crop production. Robotic operations will be driven by large, complex data sets that will have evolved from today’s Precision Farming Data Sets (PFDS) and Precision Agriculture Data Sets (PADS), which combine spatial, navigation, agronomy, and other data layers to support operations such as seeding, fertilizing, weeding, and harvesting. Sophisticated agricultural analytics driven by vast, high-dimensional spatial data warehouses will be used to compare and predict variety performance based on characteristics such as soil type, fertility level, pH, and other current and historical factors. These data sets will be provisioned and delivered through virtualized cloud resources managed by teams that may live and work on another continent.
Both autonomous and teleoperated robotics will significantly change the future enterprise while reducing its Ecological Footprint. Many business units in the future will operate with limited or no human presence. This trend will impact many different industries including agriculture, construction, mining, prospecting, offshore oil drilling and production, medicine, manufacturing, and education. These robotic systems will take over both mundane and dangerous tasks, allowing the enterprise to function with greatly enhanced safety and significantly reduced human overhead. The integration of such machine intelligence into the global enterprise poses a significant challenge for future IT leadership, and entails taking on additional responsibilities that will blur the distinction between manufacturing, production, service delivery, and traditional IT.
UBIQUITY: PERVASIVE COMPUTING, UBIQUITOUS SENSORS, AND AD HOC COMMUNICATIONS
Pervasive and ubiquitous sensing, computing, and communications will dominate the global enterprise of the future. Nanotechnology-enabled MEMS (microelectromechanical systems) sensors are already diffusing throughout the enterprise, sensing force, shock, vibration, location, temperature, chemistry, and other environmental conditions. In the future they will record, preprocess, analyze, and transmit their findings into the enterprise cloud for processing. Ordinary office equipment, such as printers, will evolve beyond today’s rudimentary and reactive self-diagnostics to employ fully predictive maintenance management, supplies requisitioning, load balancing, and cost optimization. Instead of operating as isolated units, they will form self-organizing peer networks for load balancing, resource optimization, fault recovery, and cost effectiveness.
Today’s radio-frequency identification (RFID) devices will soon be superseded by new generations of active, intelligent devices that harvest kinetic, electromagnetic, acoustic, or thermal energy from their surroundings to power their electronics. These “always on” devices will constantly feed information into the cloud to support real-time tracking, decision support, quality analysis, and cost optimization. They will drive revolutionary improvements in efficiency and resource utilization that are felt throughout all levels and reaches of the global enterprise.
Future RFID sensors that detect vibration, temperature, and chemical spectra will identify equipment maintenance issues before they become critical. The historical data they produce will drive warehouse analytics that yield highly optimized maintenance and procurement strategies, prolonging the lifetime of equipment and thereby avoiding needless, unsustainable waste that might otherwise wind up in landfills. The low cost of these sensors will allow them to permeate every corner of the enterprise, and the eventual adoption of the 128-bit (16-octet) IPv6 address space, with its 2^128 possible addresses, will allow each sensor to have its own unique IP address, much as we assign unique MAC addresses to network interface cards (NICs) today.
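The scale of the IPv6 address space is easy to verify with simple arithmetic; the trillion-sensor deployment figure below is an assumed round number for illustration.

```python
# Illustration of the IPv6 address space: a 128-bit (16-octet) address
# allows 2**128 unique endpoints, enough to give every sensor its own
# routable address. Everything below is plain arithmetic.

IPV6_BITS = 128
IPV6_ADDRESSES = 2 ** IPV6_BITS            # ~3.4e38 addresses

def addresses_per_sensor(sensor_count: int) -> float:
    """IPv6 addresses available per deployed sensor."""
    return IPV6_ADDRESSES / sensor_count

if __name__ == "__main__":
    print(f"{IPV6_ADDRESSES:.2e} total addresses")
    # Even an assumed trillion-sensor world barely dents the space:
    print(f"{addresses_per_sensor(10**12):.2e} addresses per sensor")
```

By contrast, the 32-bit IPv4 space (2^32, about 4.3 billion addresses) could not even cover today's cell phone population one-for-one.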
Evolving tracking devices based on nanotechnology and molecular electronics will rapidly supersede today’s rudimentary hybrid polymer tags. They will be a composite of extremely low-power microcontrollers, MEMS sensors, GPS navigation, ad hoc networking devices, near-field multiband communications transceivers, and sophisticated energy harvesters. These devices will continue to perform their traditional package tracking functions but will extend their reach to other unpowered applications such as asset tracking and location, physical security, hazard detection, material inventory, and safety functions.
As packages move between different shippers and transshipment points, their RFID devices will opportunistically reach out for wireless network access points and report back to the cloud with current status. To reach access points they will build ad hoc, self-organizing near-field networks with other packages in the vicinity to eventually reach a package within range of a wireless network leading to the Internet. Packages within range will advertise the resource on the ad hoc network, helping to build and propagate routing information for other packages. Patents, for example, have already been issued covering this specific capability (e.g., U.S. Patent 7126470—Wireless adhoc RFID tracking system).
Interesting scenarios immediately come to mind. A package containing hazardous material could periodically upload data reports to the cloud on physical handling (mechanical shock), orientation, and chemical spectra (leaking chemicals). If one or more of these parameters drifted out of tolerance, virtual assets would autonomously be allocated in the cloud to perform multi-sensor fusion on the data, looking at events in the recent history. If a potential safety hazard were detected, both the company and the shipper would be immediately alerted and provided with the package location and other relevant data. Similar scenarios are possible for fragile or perishable goods. Criminal diversion of high-value items could also activate sophisticated tracking and locating functions that invoke law enforcement action.
Looking farther into the future, other examples of ambient computing in the workplace begin to emerge. Ambient computing leverages intelligent agents and ad hoc networks in the enterprise to tie together distributed sensors and processors that track people, machinery, vehicles, tools, buildings, environmental conditions, and other aspects of the work environment. Where today’s rudimentary sensors merely switch on the lights when someone enters a work area, tomorrow’s ambient computing environments will continuously adapt the work environment to maximize worker comfort, efficiency, and safety.[31] Ambient agents will use eye tracking and gaze analysis to sense where the user’s attention is focused. They will notice that the user has missed an urgent on-screen alert, an important change in a graph, or a recalculation in a spreadsheet and discreetly bring it to the user’s attention, or simply fix the problem, depending on the user’s profile and preferences. Agents will be used to track users in safety-critical, high-liability tasks such as air traffic control, heavy equipment operation, or hazardous manufacturing operations to prevent accidents caused by distraction, fatigue, and other causes. They will detect and assess signs of discomfort, fatigue, stress, or anxiety, and adjust the environment to make the person more comfortable, or alert supervision if safety becomes an issue.
Ambient computing agents will need the ability to reason with highly dynamic, uncertain, and ambiguous contextual information. Reasoning will be distributed and include aspects of biomimetic emergent intelligence. Ambient computing lies at the nexus of artificial intelligence, cognitive science, human factors analysis, psychology, neuroscience, and biomedical science, and is often augmented by a number of different AI disciplines such as case-based, probabilistic, spatial, and temporal reasoning. Agents will have different levels of sociability and perception, and may communicate with varying semantics according to the situation. They will need robust reasoning abilities capable of dealing with imperfect descriptions of context that will result from the inevitable failures in communication paths and sensor nodes.
ENERGY: SMART BUILDINGS, RENEWABLES, AND CAMPUS SUSTAINABILITY
Buildings are the largest consumers of energy in the United States (40 percent) and consume 72 percent of all electricity generated. They produce 39 percent of total U.S. CO2 emissions and incur $572 billion in annual operating costs. The cement alone that goes into their construction accounts for an additional 5 to 8 percent of all CO2 emissions: producing one ton of cement from calcium carbonate (limestone) releases roughly an equivalent ton of CO2 from the high-temperature rotary cement kiln. The magnitude of the problem is in the numbers: 130 million tons of cement are used in the construction of buildings each year in the United States (2.3 billion tons globally). The National Science and Technology Council has established the Net Zero Energy High Performance Green Buildings R&D agenda to address this growing problem. The objective of this agenda is to develop technologies that will result in buildings that produce as much energy as they consume, hence “Net Zero Energy,” while significantly reducing greenhouse gas emissions. Another objective of the council is the reduction of building water consumption by 50 percent, to approximately 50 gal/day/person, through conservation, recycling, and rainwater harvesting.
Buildings today are a far cry from the static structures of the past. They now incorporate sophisticated controls for HVAC, lighting, energy conservation, air quality, safety, security, and maintainability. While many of these systems today have limited integration with enterprise information systems, they are rapidly evolving sophisticated analytics and reporting capabilities that will soon support global real-time building management and optimization analytics and interface to traditional executive Decision Support Systems (DSS) and Executive Information Systems (EIS). The benefits will include higher energy efficiency, improved asset utilization, consolidation of physical floor space, and lower Ecological and Carbon Footprint.
Data warehouse–driven analytics that integrate sophisticated fault prediction and diagnostic capabilities now tie directly into building automation systems. These applications analyze patterns of building energy usage over time and identify faults in electrical and mechanical systems that cause a gradual drop in efficiency. These accumulating minor faults typically account for a 17 percent drop in building energy efficiency over a two-year period.[32] Given an average electricity cost of $2 per square foot, this loss can quickly become significant. Scientific Conservation, makers of the SaaS application SCIWatch, recently saved one Santa Clara, California building $126,000 in annual energy costs, as well as an additional $93,000 in energy rebates, through advanced building analytics. Similar solutions are being adopted by well-established enterprises such as Harley-Davidson, Neiman Marcus, and Santa Clara County, California.
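To see why the drift matters, the cost of doing nothing is a one-line calculation. Only the 17 percent figure and the $2 per square foot cost come from the text; the building size below is an assumed example.

```python
# Rough cost of the efficiency drift described above: small accumulating
# faults erode building efficiency ~17% over two years (figure from the
# text). The campus size is an illustrative assumption.

BASE_COST_PER_SQFT = 2.00      # $/sq ft/year electricity at full efficiency
TWO_YEAR_DRIFT = 0.17          # fractional efficiency loss after 2 years

def annual_waste(sq_ft: float, drift: float = TWO_YEAR_DRIFT) -> float:
    """Extra dollars spent per year once efficiency has fully drifted."""
    return sq_ft * BASE_COST_PER_SQFT * drift

if __name__ == "__main__":
    # A hypothetical 500,000 sq ft campus after two years of neglect:
    print(f"${annual_waste(500_000):,.0f} per year wasted")
```

Numbers on this order make clear how an analytics package that merely restores baseline efficiency can pay for itself, which is consistent with the SCIWatch savings cited above.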
Larger scale Enterprise Energy Management Software (EEMS) solutions consolidate data from the entire enterprise real estate portfolio into a real-time data warehouse managed by an Energy Network Operations Center (ENOC). This center supervises lower-level control systems and building automation systems and provides a variety of remote services including building diagnostics, monitoring, control, continuous monitoring-based commissioning, efficiency analysis, energy usage optimization, demand response program participation, energy cost allocation, budgeting and forecasting, utility accounting, performance measurement and verification, and data trending. A single shopping mall in San Diego, California, for example, saved over $400,000 in energy costs over a three-year period by instituting real-time enterprise energy management.[33]
PHYSICAL SECURITY AND INFORMATION ASSURANCE
Just as the future of commerce is the transnational enterprise, the future of crime and terrorism is one of stateless, transnational organizations. These organizations pose one of the greatest threats to the sustainability of global enterprises. Industrial infrastructure is a tempting high-value target for these organizations, particularly in cases where catastrophic environmental damage and loss of life would result from an attack. In the future, these attacks will not resemble military assaults. They will be initiated from across the globe in cyberspace, with perhaps occasional on-site infiltration to compromise systems not directly connected to the outside world. Such infiltrations are already carried out by malicious hackers who use social engineering to gain physical access, compromise networks and network devices, remove data, and manipulate security and badging systems. The next generation of terrorist attacks will focus on gaining control of low-level process control and automation systems that can be used to manipulate physical systems such as chemical process plants, nuclear reactors, natural gas pipelines, electrical grids, and other vital infrastructure to produce catastrophic effects.
December 3, 2009 was the 25th anniversary of the Bhopal disaster, which eventually took the lives of over 25,000 people when the Union Carbide plant in Bhopal released an estimated 42 tons of methyl isocyanate into the atmosphere. Adverse effects have been identified in over 170,000 survivors. Documents seized in recent years from Middle Eastern terrorist groups have confirmed that similar U.S. chemical plants have been identified as targets, particularly those with highly toxic chemicals that are located in close proximity to high-density population areas.[34] According to the Environmental Protection Agency (EPA), there are 125 plants in the United States alone capable of inflicting at least one million casualties and another 3,000 plants capable of inflicting at least 10,000 casualties. Reports have emerged of an undisclosed study by the Army surgeon general that estimates potential terrorist attacks against toxic chemical plants in the United States currently threaten 2.4 million people. Other reports exist of a plant in New Jersey with sufficient hydrogen chloride on site (100,000 pounds) to potentially kill 7.3 million people within a radius of 14 miles, including much of New York City. Official assurances that adequate fences and security guards have been put in place since 9/11 ignore the fact that the attack will most likely occur in cyberspace and will be delivered from the other side of the globe without terrorists ever stepping on U.S. soil.
There are over 470,000 miles of oil and gas transmission pipelines in the United States, with a high concentration along the heavily populated East Coast. Pipeline control and safety systems are vulnerable to a variety of cyber-attack vectors. Security exploits in industrial automation equipment can go unaddressed for years, as compared to the near real-time patch rate of PC operating systems. Complicating matters, the attack may not be direct. An indirect cyber attack on an adjacent electrical or telecommunications grid can produce the same end result potentially resulting in a pipeline breach and large-scale oil spill or gas explosion. Serious disruption of dependent downstream systems such as power plants, aircraft, transportation networks, military bases, and heating will also occur, adding to the collateral damage. Outside of the United States terrorists frequently target pipelines. The Occidental Petroleum Caño Limón pipeline has been bombed 950 times by terrorists since 1986 resulting in over $2.5 billion in revenue loss. A thwarted Al Qaeda attack in Saudi Arabia in 2002 would have disrupted 6 percent of the world’s oil supply, creating an environmental catastrophe. Federal authorities report that Middle Eastern hackers have extensively penetrated U.S. energy infrastructure going back as far as 2001.
Even the destruction of buildings such as the World Trade Center has a serious impact on the environment. In that case, hundreds of tons of asbestos, which during construction had been mixed into a cement slurry and sprayed throughout the buildings as insulation, were dispersed into the environment. Hundreds of first responders to the attack have been diagnosed with cancer traceable to the uncontrolled demolition and combustion of the building structures.
Given the enormous environmental risks and potential economic losses involved, future global enterprises will need to operate sophisticated intelligence gathering, analysis, forecasting, and response centers to protect global assets from potential attack. Closely resembling government around-the-clock counter-terrorism “watch desks,” these centers will leverage cloud computing to facilitate data ingest, processing, analysis, correlation, and alert generation from a vast number of sensors and systems distributed throughout the global enterprise. Information sources will include building security systems, energy monitoring systems, SCADA control systems, automation systems, manufacturing information systems, building automation systems, plant monitoring, and incursion detection systems (e.g., video monitoring, motion detection, acoustics, thermal), “open source” news information, employee reports and debriefings, and assessments from external security contractors. This information will be “fused” to provide enterprise management with real-time 24/7 global “situational awareness” in the form of a “common enterprise operating picture” that globally summarizes remote plant status, security alerts, stability assessments of foreign countries, travel advisories, attacks on similar targets, and known threat profiles (e.g., terrorists, insurgencies, criminal groups, corrupt government groups).
Next generation global enterprise security command centers will evolve from today’s site-specific centers that handle medical emergencies, HAZMAT response, plant monitoring, and physical security control. Already we’re seeing the adoption of some of these capabilities in global customer support centers such as Dell’s Enterprise Command Centers (ECC), which were patterned after crisis response centers designed for 9/11-scale incidents. What’s required next is the adoption of collection, data analysis, reporting, and alerting practices that have been successfully employed by government and law enforcement agencies for gathering data and producing predictive intelligence. If global enterprises, particularly their IT organizations, fail to stand up these global security capabilities, the results will be grave. As graphically illustrated by the examples in this section, the potential for massive loss of life, catastrophic environmental damage, and incalculable financial loss is quite real and growing. In February 2010, when leaders of the U.S. Intelligence Community were asked by Congress, “What is the likelihood of another terrorist-attempted attack on the U.S. homeland in the next three to six months? High or low?” they replied, referring to another 9/11-scale incident, “An attempted attack, the priority is certain.” Consider also that ongoing conflicts in different parts of the world have produced an entire generation of battle-hardened foreign fighters that will no doubt be looking to inflict damage on global enterprise targets of opportunity far into the future.
INTEGRATING SUSTAINABILITY INTO STRATEGIC PLANNING
The key to achieving sustainability in the global enterprise is tight integration with mainstream enterprise strategic planning. If sustainability planning is relegated to out-of-band processes and isolated staffs, the many tangible (e.g., cost) and intangible benefits simply won’t be realized. This is complicated by the fact that for the majority of sustainability frameworks there is a gap between theory and actionable tools and artifacts, requiring organizations to develop ad hoc extensions to their existing processes and tools. While this has not held back initiatives such as TNS, much more could be accomplished by a sustainability framework based on broadly accepted strategic planning methodologies that are scalable and integrate easily with existing strategic planning tools and processes. Fortunately a new sustainability planning framework has emerged that is based on a solid foundation derived from balanced scorecard and strategy map methodologies that are widely used by organizations around the world.
Strategic planning in many global organizations includes the use of strategy maps describing the corporate strategy, and different versions of scorecards to measure strategic performance.[35] Organizations are awash in detailed tactical scorecards that give them enormous amounts of data to interpret, and it is difficult to discern the true strategic forest from this mass of tree detail. Strategy maps are an important tool for describing the strategy, and balanced scorecards are important tools for measuring and managing movement in the desired direction. Drs. Robert Kaplan and David Norton developed the balanced scorecard (BSC) approach as a result of an early project on performance management (PM).[36]
Their early publications took hold with a PM audience, even though they were promoting what became a particularly effective approach to strategic management. Hence the legacy of their early work is that BSC is synonymous in many circles with PM. That linkage carries a burdensome legacy of cascading performance objectives through the chain of command, job families, and functional aspects of the organizational structure. Such PM cascades of individual objectives reinforce functional silos, impede authentic strategic alignment, and are logically guaranteed to yield sub-optimal strategic performance.
Operational measurement and the ensuing operationally oriented scorecards have been used for decades, arising from various management technologies, including total quality management, statistical process control, and business process re-engineering. Balanced scorecards stimulate a more comprehensive approach to strategic performance in the context of increasing shareholder value by enabling consideration of the short and the long term, the financial and the nonfinancial, the quantitative and the qualitative. This comprehensive enterprise perspective is the explicit intent behind the use of the term “balance” in this approach. Predictably, though, many so-called balanced scorecards are oriented heavily toward quarterly financial results to hit annual executive bonus targets; as a logical consequence of demanding financial markets, what some enterprises pass off as balanced scorecards are short on balance. Likewise, as a consequence of objective cascades within corporate PM systems (rather than authentic strategic cascades through a strategy execution system), performance scorecard alignment with enterprise strategy is almost guaranteed to be sub-optimal.
Strategy maps are one-page graphic illustrations of strategic objectives. This management tool has proven to be enormously useful for clarifying and communicating strategy throughout the entire enterprise. Risk management is normally not meaningfully included in corporate strategy maps, if it is included at all. The financial meltdown was a result, in part, of PM incentives that were completely out of balance with a strategy that should have included longer-term consideration of financial risk. The root cause of this problem could be debated, but dysfunctional PM design is a good bet for being included in the final diagnosis.
Corporate risk analysis for the global enterprise in the twenty-first century requires consideration of climate change impacts, their geographic variability, intensity, likelihood, and rate of change. This requires a modification of the traditional corporate scorecards and strategy map, as well as the process by which these tools are initially deployed and then used within a twenty-first century corporate governance system.
Recently, a new sustainability planning framework, the Comprehensive Framework for Resilient Sustainability[37] (CFRS), has emerged based on the strategy execution and PM methodology described above that has been used by business, industry, and government and validated throughout the world for decades.[38] CFRS was developed by Dr. Irv Beiman, who has authored numerous articles and two books on strategy execution, and serves as a virtual advisor from Shanghai, China. Dr. Beiman has spent over 15 years in the Far East helping China to develop more efficient, strategic, and risk-sensitive operating models for the twenty-first century.
According to Dr. Beiman, the CFRS framework integrates the three key domains of strategy execution, sustainability, and resilience. The CFRS methodology is unique in its ability to enable the description, measurement, management, and adjustment of strategy execution for resilient sustainability at five distinct stakeholder levels:[39]
• L1: Global
• L2: Regional/National
• L3: Organizational
• L4: Cities and Communities
• L5: Individuals and Families
In the context of CIO and CTO influences, this discussion focuses on the L3: Organizational stakeholder level, which applies to the global business enterprise. The CFRS L3 strategy map template enables enterprises to design a path toward their own resilient sustainability. The L3 objectives are generally consistent with the output of the 2009 COP15 United Nations Climate Change Conference—the Copenhagen Accord. To the extent that sufficient alignment yields sufficient results across national borders and organizational boundaries, the CFRS will mitigate the serious concerns raised by the Copenhagen Diagnosis.
CFRS takes strategy maps and balanced scorecards to a more authentic strategic level by enabling risk management for the global enterprise. CFRS leverages these tools to drive the execution of risk-sensitive strategy from headquarters level down to the lowest levels of the organization. The process begins with a basic reformulation of the enterprise strategy and business model to encompass the tangible and intangible benefits and objectives of resilient sustainability. The output from this process is the CFRS L3 strategy map. Exhibit 5.11 is a generic template for an organization’s resilient sustainability (RS), a one-page graphic template for designing the corporate/enterprise RS strategy. Areas for strategic objective setting are identified for three layers of outcomes, plus two critical additional layers, enablers and drivers. These five layers of objective setting for an organization’s resilient sustainability are called strategic perspectives:
1. Resilience Outcomes
2. Organizational Outcomes
3. Stakeholder Outcomes
4. Sustainability Drivers
5. Learning and Growth Enablers
The design process runs from the top to the bottom of the map, while the execution and causality process runs from the bottom to the top. Directional arrows clarify strategic hypotheses about how to achieve key outcomes. The corporate/enterprise strategy map objectives are further clarified via measures, targets, and initiatives. Objectives clarify what is desired or intended, and measures clarify movement in the desired direction. Initiatives are defined projects that are resourced (time, people, equipment, space, funding, as requested and approved). Initiatives take place over a planned time line. They are normally focused on achieving a specific strategic objective, although they may impact multiple such objectives. By definition, if an objective is in a strategy map, it is strategic.
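The relationships among perspectives, objectives, measures, targets, and initiatives described above can be made concrete with a small data model. All names and sample values below are hypothetical; this only illustrates how the pieces relate, not any actual CFRS artifact.

```python
# Minimal data model for strategy-map elements: an objective sits in one of
# the five CFRS L3 perspectives and is clarified by measures (with targets)
# and resourced initiatives. Sample values are invented for illustration.
from dataclasses import dataclass, field

PERSPECTIVES = [
    "Resilience Outcomes", "Organizational Outcomes", "Stakeholder Outcomes",
    "Sustainability Drivers", "Learning and Growth Enablers",
]

@dataclass
class Initiative:
    name: str
    budget_usd: float          # part of the approved resourcing

@dataclass
class Measure:
    name: str
    target: float              # clarifies movement in the desired direction
    actual: float = 0.0

@dataclass
class Objective:
    name: str
    perspective: str           # must be one of PERSPECTIVES
    measures: list[Measure] = field(default_factory=list)
    initiatives: list[Initiative] = field(default_factory=list)

    def on_track(self) -> bool:
        """True when every measure has reached its target."""
        return all(m.actual >= m.target for m in self.measures)

if __name__ == "__main__":
    obj = Objective(
        "Cut campus energy intensity", "Sustainability Drivers",
        measures=[Measure("kWh/sq ft reduction (%)", target=15, actual=11)],
        initiatives=[Initiative("Deploy building analytics", 250_000)],
    )
    print(obj.perspective in PERSPECTIVES, obj.on_track())  # True False
```

A cascade, as described later in the section, would link each organizational unit's scorecard of such objectives to its parent's, rather than cascading through the PM system.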
Exhibit 5.11: CFRS L3: Resilient Sustainability for Organizations
________________________________________
Copyright © Irv Beiman, 2009. All Rights Reserved.
________________________________________
This information can be organized in various ways through the illustration of different RS scorecard views. A strategic balanced scorecard for resilient sustainability should not be confused with an operational dashboard, which is normally tactically oriented toward such measurement areas as timeliness, efficiency, quality, and velocity rather than execution of the enterprise strategy in all its aspects.

Just as there are levels of RS from global (L1) down to organizational (L3) and city (L4) levels, there are levels of deployment for the CFRS through an enterprise for the purpose of strategy execution. The strategic planning process can be adjusted to include a cascade of the corporate/enterprise strategy map and balanced scorecard objectives down to the next meaningful level of the organizational structure. This often includes strategic business units or product divisions and shared services or functional departments as a next step. The cascading process can be continued in subsequent steps through multiple layers of the organizational structure, eventually reaching individual employees. Vertical alignment from the top to the bottom of the enterprise structure can be achieved during this cascading process. Note that an RS cascade does not take place through the performance management (PM) system. Instead, it takes place through the structure of organizational units, with each individual's PM linked to that individual's unit RS scorecard. This is accomplished by the individual and manager examining their unit scorecard and selecting those objectives the individual can enable or otherwise support, then choosing simple metrics or performance outcomes consistent with enabling those particular objectives.
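The cascade described above can be sketched as a recursive walk over the organizational structure, with each unit selecting from its parent's scorecard the objectives it can enable. The unit names and the all-pass selector below are hypothetical; a real cascade would filter per unit, as the individual/manager review just described does.

```python
# Minimal sketch of cascading a corporate RS scorecard down through
# the structure of organizational units (not through the PM system).

def cascade(unit, parent_objectives, selector):
    """Give this unit the parent objectives it can enable, then recurse."""
    unit["scorecard"] = [o for o in parent_objectives if selector(unit, o)]
    for child in unit.get("children", []):
        cascade(child, unit["scorecard"], selector)

corporate_objectives = [
    "Reduce GHG Emissions",
    "Improve Resource Efficiency",
    "Accelerate Innovation and Collaboration",
]

# Hypothetical organizational structure: corporate -> divisions -> departments
org = {"name": "Corporate", "children": [
    {"name": "IT Shared Services", "children": [
        {"name": "Data Center Ops", "children": []}]},
    {"name": "Product Division A", "children": []},
]}

# For illustration every unit adopts every parent objective; replacing the
# selector lets each unit keep only the objectives it can actually support.
cascade(org, corporate_objectives, selector=lambda unit, obj: True)
```

Because each unit's scorecard is derived from its parent's, vertical alignment from the top to the bottom of the enterprise falls out of the traversal itself.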
Along with vertical alignment, it is important to also improve horizontal alignment across organizational units during the cascading process. Horizontal misalignments between organizational units are a common cause of flawed strategy execution. This is a particularly salient issue for an organization’s resilient sustainability. Every enterprise’s RS will benefit from dramatic improvements in resource efficiency across all units. This requires more than the usual operational improvements in work flow and supply chain management arising from business process re-engineering or exceptional improvements arising from significant innovations. It is often the case that intangible misalignments remain among strategy, policy, budgeting, and HR practices. These intangible misalignments can be a continuing source of strategic sub-optimization. Horizontal alignment adjustments during the cascading process can be focused on designing strategic adjustments that create solutions to such issues. Validation of these solutions can be accomplished during the RS governance process.
Strategic governance is focused on the strategic plan. Current resilient sustainability templates identify risk issues at a high, generic level, and these risks deserve explicit consideration in the plan. The templates enable RS to be folded into the strategic planning process in a systematic and transparent manner. Some enterprise planning processes use a bottom-up process for building investment and profitability plans, with lower levels submitting their investment and funding requests to higher levels for approval. With some enterprises, this can be a back-and-forth iterative process involving a degree of negotiation. Strategy maps, objective targets, and initiative funding requests/approvals create significant improvements throughout the entire enterprise in clarity about the strategy and how to execute it. This enables a rational consideration of strategic budgeting, rather than the more typical iterative changes in operating budgets.
Enterprise governance for RS involves multiple operational aspects, including periodic review meetings, establishing linkages of the RS strategy with budgeting, and HR practices, as well as the establishment of an office of resilient sustainability. The RS office is tasked with creating the necessary enterprise-wide focus on resilient sustainability, organizing the strategic resources, coordinating across unit boundaries, and driving the RS strategy execution process. The RS office mission includes achieving vertical and horizontal alignment for resilient sustainability across all organizational units. To carry out its critical mission, the RS office requires the support and involvement of the CEO and top leadership team. RS office staff work with the CIO and staff to ensure valid and timely data collection of both quantitative and qualitative information related to executing the organization’s RS strategy. This information is presented during review meetings for analysis, decision making, planning of corrective actions, and subsequent review of results.
Classic business strategy has been focused on growth, profitability, and branding, with little to no attention paid to risk management. The economic meltdown has put financial risk management on corporate radar screens, but establishing resilient sustainability requires a broader, longer term, and more comprehensive approach toward risk management. Resilient sustainability will remain a moving target, mercurial in its status, evolving as our biosphere evolves. This requires an ongoing strategic examination of how to achieve resilient, sustainable business success amidst dynamically changing conditions for climate and the global economy, as well as infrastructure, energy, food, and water. For example, water shortages will increasingly affect cost and the regulatory environment for that precious resource. National security concerns over jobs, infrastructure, food, energy, and water are already beginning to shape government policies differently from country to country, based on each government's priorities. Governments will face increasing pressure to impose regulations that directly affect current and future business strategy, creating both opportunity and risk for national and transnational organizations.
The driver and enabler perspectives of the L3 RS template enable global and national enterprises to manage risk while optimizing resource consumption, building brand, and moving toward more resilient and sustainable business success. The Learning and Growth perspective forms the foundation of the strategy map by enabling the four other strategic perspectives. This perspective focuses on arenas of intangible objective setting for human, information, and organizational capital.
Human capital objectives capture sustainability education, training, and motivation targets for suppliers, customers, management, and employees. In the context of the current discussion, IT competencies in data center consolidation, virtualization, and cloud computing would be emphasized as objectives to drive Improve Resource Efficiency objectives in the next higher perspective.
Information Capital objectives capture elements of sustainability measurement and best practices sharing. These elements would include the establishment of metrics for measuring progress in sustainability, such as Ecological Footprint and Carbon Footprint. These can be used to evaluate progress in higher-level perspectives and their respective objectives, such as Reduce GHG Emissions. Techniques for evaluating Ecological Footprint, for example, are complex and enterprise specific, requiring the development of unique metrics and computation methodologies. As enterprise resource efficiency improves, best practices evolve and are captured and reused, accumulating intellectual capital in the process.
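At their simplest, such metrics roll activity data up into a single comparable figure. The sketch below illustrates a carbon-footprint roll-up; the emission factors and activity quantities are placeholder assumptions, not authoritative values, since real methodologies are, as noted, enterprise specific.

```python
# Illustrative carbon-footprint roll-up: multiply activity data by
# emission factors and sum to tonnes of CO2 equivalent (CO2e).

EMISSION_FACTORS = {            # kg CO2e per unit of activity (assumed values)
    "grid_electricity_kwh": 0.5,
    "diesel_litre": 2.7,
    "air_travel_km": 0.15,
}

def carbon_footprint_tonnes(activity: dict) -> float:
    """Sum activity * factor across all tracked activities, in tonnes CO2e."""
    kg = sum(qty * EMISSION_FACTORS[name] for name, qty in activity.items())
    return kg / 1000.0

# Hypothetical annual activity data for one business unit
annual = {
    "grid_electricity_kwh": 1_200_000,
    "diesel_litre": 40_000,
    "air_travel_km": 500_000,
}
print(round(carbon_footprint_tonnes(annual), 1))
```

Tracking this figure year over year is what lets a Reduce GHG Emissions objective in a higher perspective be evaluated against its target.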
Organizational Capital objectives capture elements of the EPM process, its adjustment, and the establishment of new organizational elements to support resilient sustainability. Together they capture fundamental cultural change necessary to institute resilient enterprise sustainability.
Above Learning and Growth, the Sustainability Drivers perspective identifies the key sustainability processes, risks, and strategies the enterprise must excel at to continue improving how it adds value for its customers and shareholders. Three strategic themes are identified for operational focus: (1) accelerate innovation and collaboration; (2) optimize resource efficiency; and (3) accelerate resilience strategy formulation and execution. These themes clarify generic strategic priorities for the CIO and IT staff, subject to further revisions and refinement by the CEO and top team. Critical to this perspective is the development and/or acquisition of sustainable innovations. Collaboration across organizational boundaries can be used to accelerate the deployment and commercialization of these innovations. Organizational ecosystems of multiple previously disconnected enterprises are being created for commercialization of new sets of products and services. These organizational ecosystems are creating new requirements for integration across organizational boundaries.
Optimizing resource efficiency is a twenty-first century view of the older cost management theme of improving operational efficiency. This strategic theme has the potential to stimulate significant step change improvements in long-term costs that will flow from the acquisition and development of sustainable innovations. Short-term CAPEX may rise, but longer-term OPEX should drop, relative to the increases forecasted for energy cost spirals. Accomplishing the acquisition and development of sustainable innovations will require the enabling support and technical guidance of the CIO, CTO, and staff. The ongoing need to reduce GHG emissions, combined with the principle of “using waste as food,” can lead to significant improvements in resource efficiency. Accelerating the formulation and execution of RS strategy will require access to a wider range of information content delivered from a wider variety of information sources. The CIO’s strategic role in this is crucial: to raise the IT organization’s level of support to that of enabling the formulation and execution of RS strategy. This may require an adjustment of budgeted resources so that transactional and operational needs are still met, while expanding information resources for clarifying and executing the organization’s strategy for resilient sustainability.
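The CAPEX-versus-OPEX tradeoff described above can be made concrete with back-of-the-envelope arithmetic. All figures below—retrofit cost, operating costs, and the energy escalation rate—are assumed for illustration only.

```python
# Comparing a sustainability retrofit (higher CAPEX now, lower OPEX later)
# against the status quo, under compounding energy cost escalation.

def cumulative_opex(base_annual_opex, energy_escalation, years):
    """Total operating cost over `years` as energy costs spiral upward."""
    return sum(base_annual_opex * (1 + energy_escalation) ** y
               for y in range(years))

YEARS = 10
ESCALATION = 0.08               # assumed 8% annual energy cost growth

status_quo = cumulative_opex(base_annual_opex=1_000_000,
                             energy_escalation=ESCALATION, years=YEARS)

retrofit_capex = 2_500_000      # short-term CAPEX rises...
retrofit = retrofit_capex + cumulative_opex(base_annual_opex=600_000,
                                            energy_escalation=ESCALATION,
                                            years=YEARS)

# ...but under these assumptions longer-term OPEX savings more than cover it.
print(retrofit < status_quo)
```

The point of the exercise is not the particular numbers but the shape of the curve: the steeper the forecast energy cost spiral, the sooner the retrofit pays back.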
Alignment is key to maximizing the tangible and intangible returns from sustainability initiatives. Scrap material from one business unit or enterprise, for example, can be used to fuel a sustainable solution in another business unit or enterprise. CO2, a potent greenhouse gas, is produced in enormous quantities by industrial processes, such as lime calcining or cement production. In some cases as much CO2 is produced (by mass) as the product itself. The accepted industry practice is to simply vent CO2 to the atmosphere after removing particulate matter. This potent GHG, however, has many useful and valuable industrial applications. It’s a critical nutrient for algae growth, for example, and represents the primary cost driver in the algae biofuel production process. CO2 is also used to pressurize pneumatic control systems, decaffeinate coffee, produce urea and other chemicals, and as a lasing medium in high power industrial laser systems. In a recent compelling example, flue gas from coal plants, which has a high CO2 content, has been combined with fly ash and brine to produce an entirely new form of concrete that completely sequesters the carbon dioxide in a biomimetic process similar to coral formation in the ocean.[40] Information technology is key to creating knowledge alignment within and across enterprises to promote the identification of such opportunities and facilitate cross-boundary collaboration and information sharing that leads to sustainable material reuse.
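The cross-boundary matching of one unit's waste to another's inputs lends itself to exactly the kind of information sharing IT enables. The toy matcher below uses hypothetical units and streams, loosely modeled on the CO2 examples above, to flag reuse opportunities.

```python
# A toy "waste as food" matcher: pair one unit's waste outputs with
# another unit's resource inputs to surface reuse opportunities, such as
# routing cement-kiln CO2 to algae biofuel or green concrete production.

waste_outputs = {                       # hypothetical producers and streams
    "Cement Plant": ["CO2", "fly ash", "waste heat"],
    "Desalination Unit": ["brine"],
}
resource_inputs = {                     # hypothetical consumers and needs
    "Algae Biofuel Unit": ["CO2", "waste heat"],
    "Green Concrete Unit": ["CO2", "fly ash", "brine"],
}

def reuse_opportunities(outputs, inputs):
    """List (producer, stream, consumer) triples where waste meets need."""
    matches = []
    for producer, wastes in outputs.items():
        for consumer, needs in inputs.items():
            for stream in set(wastes) & set(needs):
                matches.append((producer, stream, consumer))
    return sorted(matches)

for producer, stream, consumer in reuse_opportunities(waste_outputs,
                                                      resource_inputs):
    print(f"{producer} -> {stream} -> {consumer}")
```

At enterprise scale the producers and consumers would span business units and supply chain partners, which is precisely where shared information systems earn their keep.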
The Customer Perspective objective template embodies the value proposition offered to customers and reflects the associated customer expectations. According to Jenny Dawkins, head of corporate responsibility research at Ipsos MORI Reputation Center, an organization with over 40 years' experience in enterprise reputation management, "Our research shows that as of September [2008], three-quarters of the public say it is more important for a company to be responsible in tough economic times."[41] In addition, she states "Also, the level of ethical purchasing has continued to rise year-on-year, so there is now an onus on companies to continue to behave responsibly in line with consumer expectations, while also delivering at affordable prices." As discussed in preceding sections, deliberate sustainability improvements can achieve these results without adverse financial impacts. Information technology has the ability to create not just cost savings that translate to more affordable products and services, but intangible social capital that adds to the value proposition. For example, when Emerson Electric consolidated 100 data centers into just four, they saved hundreds of thousands of gha (global hectares) of Ecological Footprint and a tremendous amount of GHG emissions. At the same time, they earned a substantial amount of social capital that was used to improve market positioning and competitive advantage in a world where corporate environmental responsibility is expected by a growing majority of consumers.
The Organizational Outcomes perspective captures both traditional and nontraditional objectives. The traditional ones for any enterprise are revenue growth, profitability, increased shareholder value, and a growing, thriving organization that creates new jobs. Less traditional are sustainability objectives such as lower Ecological and Carbon Footprints, reduced GHG emissions, and a more robust, resilient enterprise. The purpose is to create a global enterprise and its associated organizational ecosystem that can survive the rippling effects of climate change, extreme weather events, rising sea levels, and unstable infrastructure, as well as drought, desertification, and melting permafrost. Unprecedented evolution of the biosphere is creating significant changes in the supply-demand equations for energy, food, water, and raw material commodities. The probability of dynamic supply-demand imbalances must be factored into the global enterprise’s strategic planning process. The CFRS enables the enhanced approach to planning that is required for creating the global enterprise’s resilient sustainability.
The top perspective in the L3 RS strategy map pulls critical objectives from the top perspective of L1 Global and L2 Regional/National RS strategy maps. The top perspective template identifies three primary arenas that no single global enterprise can influence (although Wal-Mart may be an exception). Each organization, however, should keep these primary arenas on its radar screen for monitoring and subsequent adjustment of lower-level objectives the enterprise can more directly control. The three primary RS arenas have powerful potential to directly impact global and local enterprises. To the extent that global and national green economies flourish, climate change impacts are likely to be less severe. To the extent the resilient sustainability of local, national, and international infrastructure is significantly improved, there will be fewer and less severe disruptions in local, national, and global supply chains. To the extent that local, national, and international supplies of food, energy, water, and shelter are better established, these areas are less likely to negatively impact the business environment, markets, supply chain partners, and direct customers of the global enterprise.
Information technology, as shown by the examples in previous sections, has a tremendous impact on the global enterprise not only as a direct source of resilient sustainability, but also as a key enabler of other technologies, groups, and processes that indirectly promote sustainability. The CIO role will clearly be pivotal in driving this transformative process. To gain optimum benefit from the use of the CFRS and its associated methodology, it is critical to establish vertical and horizontal alignment of objectives and incentives across the enterprise. CFRS facilitates this by extending the traditional uses of balanced scorecards and strategy maps to provide a multi-level framework and set of actionable tools for integrating sustainability planning with traditional strategic planning activities. More importantly, CFRS crosses the chasm between high-level sustainability theories, such as TNS, and actionable enterprise strategic planning, leading to a standard, open, cross-domain set of scalable sustainability planning tools, frameworks, and methodologies.
THE FUTURE LIES BEFORE US
The global IT organization and its leadership can have a profound effect on enterprise sustainability by making the right choices, beginning with the strategic planning process. Well-grounded planning frameworks such as CFRS are critical to propagating these changes across and down through the enterprise, while carefully maintaining organizational alignment. As we formulate our strategic objectives we are faced with critical choices, some leading to increased sustainability and others not. If both tangible and intangible benefits are considered, game-changing improvements are possible in enterprise sustainability as well as operating efficiency, global agility, market positioning, and competitive advantage. These changes, however, do not stop at the enterprise boundary but diffuse across supply chain elements as we’ve seen, accruing benefits along the way. They contribute to a growing wave of globalization that opens new markets, creates new supply chain opportunities, and opens doors to new and unexpected sources of intellectual capital and innovation.
In The Singularity Is Near, Ray Kurzweil revealed profound and accelerating trends in technology growth that we are now experiencing throughout the developed world. More rapid, however, is the assimilation of this technology in developing countries and the resulting changes in their cultures, economies, and ecosystems. Mobile technology, IT consolidation, and cloud computing are key drivers that are enabling these countries to leapfrog over entire phases of unsustainable industrialization and infrastructure development that older, developed countries have already passed through. These emerging countries are not burdened by heavy foundations of inefficient, large footprint (Ecological and Carbon), and slowly depreciating, unsustainable substratum. They can simply "plug into" the developed world's infrastructure through the global compute cloud and ascend directly from rural, agrarian economies to futuristic, sustainable, knowledge-driven city states. The underlying trends can already be seen today in the rapid migration from rural areas to growing megacities. Asia will have at least 10 hypercities with at least 20 million residents each by the year 2025, according to the Far Eastern Economic Review. Examples include Shanghai (27 million), Mumbai (33 million), Karachi (26.5 million), Dhaka (25 million) and Lagos (25 million+).[42] More importantly, this change represents a more fundamental evolution of the global city-state, distinct from the nation state construct that has dominated global diplomacy and commerce in the past. It also means that if the singularity hypothesis is true, its most profound effects may not be in the developed world, but in the rise of these developing global city states. Keeping up with this spiraling change and the inestimable opportunities it presents will require unprecedented "virtual agility" as well as "resilient sustainability" of the global enterprise—a considerable challenge for today's global CIO.
As the future unfolds, the CIO of tomorrow will have to bridge and master many formerly “stovepiped” organizational responsibilities, skills, and talents that will rapidly lead to an ascendancy of the CIO position in the organizational hierarchy. The technology already exists to make this global transition and adapt the enterprise to the quickening pace. As information technology rapidly morphs, the human dimension must also be addressed to ensure organizational change, maintain enterprise alignment, reinforce relevant incentives, and enhance shareholder value. Solutions exist here as well, such as the CFRS framework, based on tried-and-true methodologies proven throughout the world. The only remaining ingredient necessary for this dramatic transformation is visionary CIO leadership. In the words of the author Alan Cohen:
It takes a lot of courage to release the familiar and seemingly secure, to embrace the new. But there is no real security in what is no longer meaningful. There is more security in the adventurous and exciting, for in movement there is life, and in change there is power.[43]