Part 1 of this series outlined how high concentrations of total dissolved gas (TDG) can occur downstream of high head dams when their spillways are open, and how this TDG can be harmful or even fatal to fish. Alden has been involved in several recent projects whose objective was to reduce TDG downstream of high head dams. For these projects, Alden performed the hydraulic and structural design of roughness elements that break up the high velocity jet discharged from the spillway. The roughness elements reduce TDG production because they cause the jet to spread out, reducing its plunge depth in the receiving water. The roughness elements work very well at reducing plunge depth, but they can cause cavitation, which can damage the spillway surface and the blocks themselves. The design and implementation of the roughness elements will be the topic of another article; the present article focuses on reducing the potential for cavitation on the roughness elements.
Alden-designed roughness elements have been installed on spillways at Cabinet Gorge and Boundary Dams. Cabinet Gorge Dam is shown in Figure 1. The first set of roughness elements installed at Cabinet Gorge Dam performed well at reducing TDG but suffered cavitation damage (Figure 2). Cavitation can occur in high velocity flows on steep spillways, especially when roughness on the spillway surface causes flow separation. Air supply ramps are often used on spillways to lift the nappe from the spillway surface and supply air to the void underneath it; introducing this air reduces the potential for cavitation.
Figure 1. Cabinet Gorge Dam and 3D model for CFD (Dunlop, et al., 2016)
Figure 2. Cavitation Damage on Roughness Element (Paul, 2015)
Air ramps were installed upstream of the first row of roughness elements for one bay each at Cabinet Gorge and Boundary Dams, both to supply air and to lift the horseshoe vortices that form around the base of the blocks off the spillway surface. The Cabinet Gorge air supply ramps are shown in Figure 3. Alden performed the hydraulic design of the air ramp and the air supply ducts at Boundary and Cabinet Gorge Dams based on the aerator design chapter (Chapter 5) of Dr. Hank Falvey’s “Cavitation in Chutes and Spillways – Engineering Monograph No. 42.” A conceptualization of the air ramp is shown in Figure 4. The flow over the ramp follows a trajectory influenced by the velocity of the flow at the location of the ramp, the angle of the ramp, the angle of the spillway, and the air pressure in the air pocket underneath the nappe. The underside of the nappe entrains air, which generates negative pressure in the air pocket beneath it. Air is supplied to this pocket through the ramp, and the energy losses through the air supply system can be significant. The volume of air drawn through the ramp, the shape of the trajectory, the pressure in the air pocket, and the energy losses of the air flow are all interdependent, so there is no closed-form solution for the air flow rate. Alden developed an Excel VBA program to iteratively solve for the air flow rate for a given ramp geometry and spillway flow rate. Alden designed the air ramp and supply ducts to ensure that the high velocity air through the ramps would not cause sonic shocks, and so that the unit flow rate of air on the spillway was approximately 10% of the unit flow rate of water, a ratio that has been shown to reduce the potential for cavitation (Falvey, 1990).
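Although Alden’s solver was written in Excel VBA, the structure of that iteration is easy to sketch. Below is a minimal Python illustration of the fixed-point loop, not Alden’s design tool: the closure relations (cavity_length, air_demand, duct_subpressure) and every coefficient in them are hypothetical placeholders standing in for the correlations in Falvey (1990).

```python
import math

# Minimal sketch of the iterative air-demand calculation described above.
# All closure relations and coefficients are illustrative placeholders,
# not Alden's design values or Falvey's exact correlations.

RHO_AIR = 1.2       # kg/m^3, air density
RHO_WATER = 1000.0  # kg/m^3, water density
G = 9.81            # m/s^2, gravitational acceleration

def cavity_length(v_water, ramp_angle_deg, spillway_angle_deg, dp_cavity):
    """Projectile-style estimate of the nappe cavity length (m). Cavity
    sub-pressure pulls the nappe down and shortens the trajectory; the
    correction factor here is an assumed form, not Falvey's."""
    theta = math.radians(ramp_angle_deg + spillway_angle_deg)
    k_dp = 1.0 - dp_cavity / (0.5 * RHO_WATER * v_water**2)
    return max(k_dp, 0.0) * v_water**2 * math.sin(2.0 * theta) / G

def air_demand(v_water, length, beta=0.03):
    """Entrained unit air discharge (m^2/s per m of width), assumed
    proportional to jet velocity and cavity length (beta is a placeholder)."""
    return beta * v_water * length

def duct_subpressure(q_air, duct_area, k_loss=1.5):
    """Cavity sub-pressure (Pa) needed to drive q_air through the supply
    duct, using a lumped loss coefficient k_loss (assumed value)."""
    v_air = q_air / duct_area
    return k_loss * 0.5 * RHO_AIR * v_air**2

def solve_air_flow(v_water, ramp_angle_deg, spillway_angle_deg, duct_area,
                   tol=1e-6, max_iter=200):
    """Fixed-point iteration: guess the cavity sub-pressure, compute the
    trajectory and the air it entrains, recompute the sub-pressure from
    duct losses, and repeat until the two estimates agree."""
    dp = 0.0  # initial guess: cavity at atmospheric pressure
    for _ in range(max_iter):
        length = cavity_length(v_water, ramp_angle_deg, spillway_angle_deg, dp)
        q_air = air_demand(v_water, length)
        dp_new = duct_subpressure(q_air, duct_area)
        if abs(dp_new - dp) < tol:
            return q_air, dp_new
        dp = 0.5 * (dp + dp_new)  # under-relax for stability
    raise RuntimeError("air flow iteration did not converge")

q_air, dp = solve_air_flow(v_water=25.0, ramp_angle_deg=6.0,
                           spillway_angle_deg=50.0, duct_area=1.0)
print(f"unit air flow ~ {q_air:.2f} m^2/s, cavity sub-pressure ~ {dp:.0f} Pa")
```

In a real design, the loop would also check the air velocity in the ducts against compressibility limits, which is where the sonic-shock constraint described above enters.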
The air ramps installed at Cabinet Gorge and Boundary Dams successfully supplied air, and cavitation damage has not been observed since their installation.
Figure 3. Roughness Elements and Air Supply Ramp at Cabinet Gorge Dam (Dunlop, et al., 2016)
Figure 4. Conceptualization of Air Ramp (Based on Falvey, 1990)
During spill season at hydroelectric dams, more water flows into the upstream reservoir than can be used to generate electricity in the powerhouses. This excess flow must pass through flow release structures in order to bypass the dam and powerhouse; spillways, diversion tunnels, and low-level sluice gates are commonly used to route flow past dams. Open channel spillways are among the most common flow release structures at high head dams, and they create a highly aerated, turbulent jet of water that exits the spillway up to 150 feet above the river downstream of the dam. This waterfall of aerated flow can plunge to the bottom of the tailwater pool, where the bubbles of atmospheric gases slowly dissolve into the water. The deeper the jet plunges, the more pressure the water exerts on the bubbles, dissolving them faster and preventing them from rising to the surface. The result is the frothy white plume that can stretch up to half a mile downstream of a dam when flow is being released, as shown in the photo of the Boundary Dam spillway below.
Once the water has dissolved more gas than it can hold in equilibrium at the surface (its saturation level), the river becomes supersaturated with dissolved gases. The degree of supersaturation is a function of the pressure exerted on the gas bubbles at depth and the travel time of the bubbles to the surface. The sum of all the gases dissolved in solution is called the Total Dissolved Gas (TDG) concentration. High TDG is a hazard for aquatic life, especially migratory fish such as salmon and steelhead. When fish are exposed to high TDG concentrations at depth, their tissues absorb the gases. When they later swim to the surface, the gases come out of solution and form bubbles, which can cause trauma around the gills and fins. This condition is known as Gas Bubble Trauma and has become a major problem for fish populations that must use fish passage systems to bypass dams. Some good photos showing Gas Bubble Trauma in fish can be found in this linked article in the Billings Gazette.
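To put rough numbers on the pressure effect, the sketch below estimates the TDG level of water equilibrated with bubbles held at a given depth using hydrostatic pressure alone. It is a simplified illustration, not a production TDG model; real TDG exchange also depends on bubble size, transit time, and temperature.

```python
# Simplified hydrostatic illustration of supersaturation (not a TDG model):
# pressure at depth raises the local saturation pressure, so gas dissolved
# at depth is supersaturated relative to the surface.

P_ATM = 101325.0  # Pa, atmospheric pressure at the water surface
RHO_W = 1000.0    # kg/m^3, water density
G = 9.81          # m/s^2, gravitational acceleration

def tdg_percent_at_depth(depth_m):
    """TDG level, as a percent of surface saturation, for water fully
    equilibrated with bubbles at the given depth."""
    p_depth = P_ATM + RHO_W * G * depth_m
    return 100.0 * p_depth / P_ATM

# Bubbles held only ~3 m down already push water to ~130% of surface
# saturation, well above the 110% regulatory limit discussed below.
print(f"{tdg_percent_at_depth(3.0):.0f}%")
```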
The Environmental Protection Agency (EPA) has introduced regulations that limit TDG to 110% of saturation. This limit is enforced regardless of the TDG level of the water entering the reservoir, which may already be at or near saturation due to upstream dams, waterfalls, or other conditions. High head dams and hydroelectric projects must be relicensed with the Federal Energy Regulatory Commission (FERC), and must demonstrate that they meet the EPA regulations before being granted a license. The owners of a number of affected projects have reached out to Alden to help them find cost-effective solutions to TDG problems, which we have explored extensively using computational fluid dynamics and physical modeling, as well as structural and operational changes to the projects.
In Part II we’ll explore the use of energy-dissipating devices to reduce spillway plunge depth and bubble transit time – stay tuned!
Photo 1. Super-cavitating roughness element, before installation at Cabinet Gorge Dam
The flat face plate of the baffle block causes clean flow separation as water travels over and around the block, forming a “cavitation cloud” that envelops the roughness element so that the collapsing bubbles stay away from the block’s surfaces, preventing damage. In Part 2 we discussed the use of air ramps to ensure that flow is fully aerated as it passes the roughness elements, further preventing cavitation damage and lifting the horseshoe vortices that form around the block bases off the spillway surface.
The roughness elements are anchored to the spillway surface or underlying rock abutment using post-tension anchors, and are socketed into the spillway to increase bearing capacity. Impact loads from debris or logs traveling down the spillway generally govern the design loads for anchors. Shear keys with additional shear reinforcement can be used if the blocks are designed to be placed very near the spillway lip, where there is not adequate concrete thickness for the required bearing strength.
Photo 2. Roughness elements and air ramps installed at Boundary Dam
Depending on the type of dam and spillway arrangement, post-installation stability analyses may be necessary to ensure that Federal Energy Regulatory Commission (FERC) stability criteria are met. Spillway capacity calculations are also performed to ensure that the spillways are adequate to pass Probable Maximum Flood (PMF) flowrates after modifications. Alden has teamed with several public utilities to model, design, and test super-cavitating roughness elements at their high head dams, and has successfully implemented TDG-reducing measures while meeting all FERC criteria for stability and spillway capacity.
As we discussed in our first blog post, there are many challenges facing the nuclear industry. One of the greatest is the current energy climate. There are many contributing factors to the general state of flux in energy production, which we would like to explore today. These challenges don’t just impact the nuclear industry, but also affect energy producers across generation types.
It may surprise you, but US energy consumption has effectively plateaued over the last 15 years. Below is a plot generated with the US Energy Information Administration Open Data Embedded Visualization Library. The EIA provides a wide range of information and data products covering energy production, stocks, demand, imports, exports, and prices; and prepares analyses and special reports on topics of current interest.
There are four sectors included in total energy consumption: Residential, Commercial, Industrial, and Transportation, all of which are shown in the figure. As you can see, Total Energy Consumption has plateaued since around the year 2000. The largest change in trend has occurred in the Industrial sector, which shows a significant decrease in consumption over that period. This is most likely attributable to a major focus on energy efficiency, which is improving consistently. There are still challenges, however, outlined in this US Department of Energy report, which provides information on barriers to industrial energy efficiency.
The way energy is produced in the United States has changed dramatically over the last 15 years. Another plot from the EIA is provided showing the change in net generation for coal, natural gas, nuclear, hydroelectric and renewables. Each supply type is zeroed relative to its 2001 value for comparison.
It is obvious from this plot that while nuclear and hydroelectric production has remained relatively constant on an absolute basis, coal has declined significantly while natural gas and renewables have risen.
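For readers who want to reproduce that view of the data, the “zeroed relative to 2001” normalization is just the arithmetic sketched below; the generation figures used here are round placeholder numbers, not EIA data.

```python
# Sketch of the normalization used in the plot: each generation series is
# shifted so its 2001 value reads as zero (placeholder values, not EIA data).

net_generation = {  # TWh by year, illustrative round numbers only
    "coal":        {2001: 1900, 2008: 1990, 2016: 1250},
    "natural_gas": {2001: 640,  2008: 880,  2016: 1380},
}

def zero_to_base(series, base_year=2001):
    """Return the series as the change relative to its base-year value."""
    base = series[base_year]
    return {year: value - base for year, value in series.items()}

for fuel, series in net_generation.items():
    print(fuel, zero_to_base(series))
```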
Solar Growth/Capacity Issues
Utility scale solar is the fastest growing renewable power generation source in the US on a percentage basis, as shown below. The figure shows the growth of various renewables as a percentage change from 2001.
The growth of solar, particularly in Massachusetts, the home of Alden’s headquarters, has been significant. Below is a plot from ISO New England showing the projected cumulative growth in New England solar power. In January 2010 there was only a minor amount of PV capacity in New England; by 2025, ISO New England predicts 3.27 gigawatts of PV capacity.
Next week, we will continue this thread with a discussion of power prices and power storage, and how these affect the changing energy climate.
In the last installment of this series, we discussed energy demand, energy supply, and the impact of the rapid growth of solar power on changing energy sources. Today, we continue with the effects of power prices, the importance of power storage, and offer some conclusions.
Price of Power
A significant portion of the United States’ electricity market is split into hubs. Each hub is an independent energy market in which supply and demand set the price of electricity in real time. A map of United States hub zones is shown in the following figure.
Daily electricity demand is driven primarily by time of day and weather. As shown in Part I, more electricity is used during daylight and evening hours than at night, and very cold or very warm weather adds demand for heating and cooling. Constraints on fuel costs and available supply add a further layer of complexity: the cost of environmental mitigation for coal-fired power plants, the lack of sufficient natural gas supply in certain markets such as New England, and the recent addition of large amounts of intermittent renewables all add significant uncertainty and instability to real-time power pricing. If supply exceeds demand in certain electricity hubs, prices can even go negative.
These factors combine to create wide swings in real-time open market electricity prices, as shown in the following plot from ISO New England. The figure shows five-minute open market pricing for the New England Hub on 03/02/2017; prices that day ranged from -$150.13/MWh to $71.93/MWh.
For energy sources that have a high capital cost to construct and a limited ability to throttle output in real time, highly volatile real-time energy markets with periods of negative pricing create a large amount of uncertainty. This uncertainty about future prices makes new and continued investment in large power generating infrastructure unattractive.
In order to limit the impact of our changing energy demand and production, energy storage will need to be a major priority going forward. One major type of energy storage is hydroelectric pumped storage, in which water is pumped to a higher elevation during periods of high output and low demand, then later released through hydropower turbines to generate energy during peak demand periods. Alden has previously covered pumped storage on the blog, so check that out for more information.
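For a sense of scale, here is a back-of-the-envelope sketch of the energy a pumped storage reservoir can return; the volume, head, and 75% round-trip efficiency are illustrative assumptions, not figures for any particular plant.

```python
# Back-of-the-envelope pumped-storage example (illustrative numbers only):
# energy recoverable from an upper reservoir is roughly rho * g * h * V,
# discounted by the round-trip efficiency of the pump/turbine cycle.

RHO = 1000.0  # kg/m^3, water density
G = 9.81      # m/s^2, gravitational acceleration

def recoverable_energy_mwh(volume_m3, head_m, round_trip_eff=0.75):
    """MWh recoverable from pumping volume_m3 of water up head_m
    (the 0.75 round-trip efficiency is an assumed typical value)."""
    joules = RHO * G * head_m * volume_m3 * round_trip_eff
    return joules / 3.6e9  # J -> MWh

# e.g. one million m^3 lifted 100 m stores roughly 200 MWh of usable energy
print(f"{recoverable_energy_mwh(1e6, 100.0):.0f} MWh")
```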
Massachusetts has an Energy Storage Initiative which intends to promote and support energy storage within the state. Late last year the state released the State of Charge, a Massachusetts Energy Storage Initiative Study. If the full length report seems daunting, there is an Executive Summary available which gives a good overview of the problem and some of the policy goals to help foster power storage in Massachusetts.
We have provided an introductory look into some of the factors that affect the changing energy climate in the United States. As we discussed in our first blog post, nuclear plants are facing major challenges, with many shutdowns on the horizon. These closures will continue to change the landscape of how power is produced in the United States. Optimal solutions to this problem will need to include the growth of renewables and the next generation of nuclear, coupled with significant amounts of power storage, if we are to avoid continued dependence on carbon-emitting power options. Let us know in the comments below if there are any particular items you would like us to expand upon in future posts!
Recognition and Sources
All of our plots above came from either the US Energy Information Administration or ISO New England. Both have a wealth of information, and we highly recommend checking them out if you are interested in more information on these topics.
Special thanks to Will Fay for his assistance in the development of this post. He works in our Hydraulic Modeling and Consulting Group and is directly involved in Massachusetts power generation as an owner and operator of three hydropower plants.
This video shows the support sculling technique used to propel a swimmer's legs out of the water while upside down.
Some computational studies specific to synchronized swimming have investigated ways to improve lift and power, and therefore height. In one study by Shinichiro Ito of the National Defense Academy in Yokosuka, Japan, the hydrodynamic characteristics of five hand shapes were investigated in a steady-state flow field to determine the configuration that produces the maximum force, and therefore the best performance. The study found that the greatest lift was produced by a cupped hand (rather than flat) with straight fingers (rather than naturally bent) and no gaps between the fingers.
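For a feel for the forces involved, the sketch below applies the standard quasi-steady lift equation to a sculling hand; the hand area, speed, and lift coefficient are assumed illustrative values, not measurements from Ito’s study.

```python
# Illustrative estimate of the quasi-steady lift generated by a sculling
# hand, using the standard lift equation L = 0.5 * rho * C_L * A * v^2.
# The area and lift coefficient below are placeholder values.

RHO_WATER = 1000.0  # kg/m^3, water density

def sculling_lift_n(hand_area_m2, speed_m_s, lift_coeff):
    """Quasi-steady lift force in newtons."""
    return 0.5 * RHO_WATER * lift_coeff * hand_area_m2 * speed_m_s**2

# A ~0.015 m^2 hand sculling at 2 m/s with C_L ~ 1.0 (assumed values)
# produces on the order of 30 N of lift, which is why small changes in
# hand shape can measurably change how high a swimmer rides.
print(f"{sculling_lift_n(0.015, 2.0, 1.0):.0f} N")
```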
Amie and her teammates competing at the 2010 U.S. Masters Synchronized Swimming Championships in La Mirada, California.
Please share photos or videos of yourself involved in fluid dynamics related hobbies, and stay tuned for more.
Over the years, end users have expressed the general sentiment that data management isn’t worth their time. In all fairness, this misconception is understandable given the low and continually decreasing cost of consumer-grade disk drives. Ultimately, firms should work toward changing that mindset, because the true cost is far greater in an enterprise’s production environment. To identify the associated costs, I have prepared the following analysis of raw storage consumption, performance impacts, and the resources needed to store data in a locally hosted Microsoft Windows server environment.
As Figure 1 shows, a 1GB file not only consumes raw storage across multiple storage platforms (e.g. local storage, backup volumes, etc.), it consumes more raw capacity than its original size. This is a direct result of the high availability and fault tolerance achieved by using a Redundant Array of Independent Disks, better known as RAID.
One requirement of this essential feature with RAID levels 5 and 6, as depicted in Figure 1, is the additional raw storage capacity needed to maintain parity across all drives. RAID parity is redundancy information computed from the data blocks (for RAID 5, an XOR across the corresponding blocks) and striped (i.e. spread) across all drives in the array. Parity allows a RAID volume to continue operating, unimpeded, if a single drive fails (or two drives with RAID 6), and protects against unrecoverable sector read errors. It’s worth noting that there are other RAID levels designed to improve fault tolerance and/or performance, such as RAID 1, 10, 50, and 60. Each level has its own distinct advantages; however, no matter which RAID type is chosen, additional raw capacity is needed to support it.
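To make the parity idea concrete, here is a toy demonstration of RAID-5-style XOR parity. Real controllers work on fixed-size stripes with rotating parity placement, so treat this as a conceptual sketch only.

```python
# Toy demonstration of RAID-5-style XOR parity (conceptual sketch only).

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three "data drives" holding one stripe each, plus a computed parity block.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d1, d2, d3)

# If drive 2 fails, its stripe is rebuilt from the survivors plus parity.
rebuilt_d2 = xor_blocks(d1, d3, parity)
assert rebuilt_d2 == d2
print("drive 2 rebuilt:", rebuilt_d2)
```

Note that with N drives, one drive’s worth of capacity goes to parity under RAID 5 (two under RAID 6), which is the extra raw storage referred to above.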
In order to create and manage the RAID structure, a hardware controller or software utility is needed. Many different factors determine whether a software or hardware solution is appropriate for a firm’s RAID needs, but in either case, managing an array of disks requires resources. For a RAID hardware controller, these resources come in the form of a physical controller, with its own dedicated CPU, RAM, and specialized firmware designed to manage the disk array. Similarly, a software-based RAID needs the same resources, but instead places this burden on the server’s CPU and RAM. With either solution, computational resources are needed to operate these systems – the cost of each is directly driven by the size and number of disk drives in the array, which in turn is determined by the quantity of data stored.
The cost of storing a single 1GB file is further compounded by the price of the enterprise-class drives used in servers and storage arrays. These drives, such as server-grade SATA or NL-SAS drives, are designed for 24/7 operation in a production environment and cost approximately four times more per GB than consumer-grade drives. SATA and NL-SAS drives are generally used when capacity is needed over performance; firms that require the highest level of I/O performance must employ enterprise-class SSD or SCSI drives, which come with a substantially higher price per GB.
One might think, “Well, these costs only apply to firms hosting large files; small files don’t matter.” This is another common misconception among end users. Although small files impact raw storage in a different manner (explained below), their biggest cost comes in the form of processing individual file records. The New Technology File System, or simply NTFS, is the file system used by Windows operating systems on servers and workstations to store data on disk drives. This file system relies on the Master File Table (MFT), the heart of the NTFS volume structure. The MFT contains a record (metadata) for every single file stored on the volume, consuming an additional 1KB of raw storage per file, and defines the file’s name, attributes, security descriptor, object ID, etc. Whenever an operation is performed on NTFS, each of those records must be processed. You have likely seen the impact this has on performance when, for example, copying a single 1GB file vs. several thousand smaller files that consume the same quantity of storage.
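You can get a feel for this per-record overhead with a quick experiment like the sketch below. The payload is scaled down so the demo runs quickly, and absolute timings vary with the disk and cache state; on an NTFS volume, the gap between the two cases reflects the per-file metadata work described above.

```python
# Quick-and-dirty benchmark sketch of per-file overhead: writing one large
# file vs. the same number of bytes spread across many small files.

import os
import tempfile
import time

TOTAL = 64 * 1024 * 1024  # 64 MB total payload (kept small for a demo)
SMALL = 4 * 1024          # 4 KB small files

with tempfile.TemporaryDirectory() as tmp:
    # Case 1: one large file.
    start = time.perf_counter()
    with open(os.path.join(tmp, "big.bin"), "wb") as f:
        f.write(b"\0" * TOTAL)
    one_file = time.perf_counter() - start

    # Case 2: the same bytes as thousands of small files.
    start = time.perf_counter()
    for i in range(TOTAL // SMALL):
        with open(os.path.join(tmp, f"small_{i}.bin"), "wb") as f:
            f.write(b"\0" * SMALL)
    many_files = time.perf_counter() - start

print(f"one 64MB file: {one_file:.2f}s, "
      f"{TOTAL // SMALL} x 4KB files: {many_files:.2f}s")
```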
Small files also increase data fragmentation, which can reduce read/write speeds due to additional seek time, although this is a greater factor for mechanical drives than for solid state. Nevertheless, storing unnecessary small files can notably degrade performance when carrying out even the most basic operations.
Another way small files impact performance and drive cost is in how they’re stored within the file system. When a disk drive is formatted, raw storage is broken into chunks called clusters, each representing a fixed number of bytes. When a file is stored on a disk drive, it is allocated as many clusters as it needs. To maintain optimal disk drive performance, cluster size should be increased as drive capacity increases; fewer clusters to seek means less seek time. One consequence of a larger cluster size is its impact when storing small files. For example, many drives today use a 4KB cluster size due to their high capacity, which means a 4KB file needs exactly one cluster to store its data. However, a 1-byte file also consumes 4KB, because 4KB is the smallest logical unit available for storage, leaving the remainder of the cluster unused. This may seem negligible, but on a storage volume with millions of files smaller than the cluster size, it consumes precious storage space; and don’t forget the performance impact related to the MFT!
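The arithmetic adds up quickly. The sketch below combines the 4KB cluster size with the roughly 1KB-per-file MFT record mentioned earlier; the file count is illustrative.

```python
# Illustrative slack-space calculation: bytes wasted when small files are
# stored in fixed-size clusters (4KB here, a common NTFS default).

import math

CLUSTER = 4 * 1024  # bytes per cluster

def allocated_bytes(file_size: int) -> int:
    """Raw bytes a file actually occupies: whole clusters, rounded up."""
    return max(1, math.ceil(file_size / CLUSTER)) * CLUSTER

# A million 1-byte files nominally hold ~1 MB of data...
n_files, file_size = 1_000_000, 1
logical = n_files * file_size
physical = n_files * allocated_bytes(file_size)
print(f"logical: {logical / 1e6:.0f} MB, allocated: {physical / 1e9:.1f} GB")
# ...but consume ~4.1 GB of raw capacity, before counting the ~1 GB of
# MFT records (about 1KB per file) described earlier.
```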
The direct cost of large and small files on a disk drive is worth noting, but another consequence comes in the form of indirect costs: the labor and computational resources consumed in managing, indexing and searching, backing up, and otherwise processing the data, all of which place an unnecessary burden on equipment and personnel.
In conclusion, the cost of storing unnecessary files, small or large, can’t be overstated. Firms that make a concerted effort to manage their data appropriately inevitably reduce their IT expenditures and associated management costs. End users may find it difficult to see the value of efficiently managing the data they create, but when the costs are aggregated across a firm’s entire IT systems, it’s apparent that data management really does matter.
Patterson, David; Gibson, Garth A.; Katz, Randy (1988). "A Case for Redundant Arrays of Inexpensive Disks (RAID)". SIGMOD Conference. Retrieved 2018-06-20.
Chen, Peter; Lee, Edward; Gibson, Garth; Katz, Randy; Patterson, David (1994). "RAID: High-Performance, Reliable Secondary Storage". ACM Computing Surveys. 26: 145–185. Retrieved 2018-06-20.
Lowe, Scott (2009-11-16). "How to protect yourself from RAID-related Unrecoverable Read Errors (UREs)". TechRepublic. Retrieved 2018-06-21.
Intel Corporation (2017-10-02). "Defining RAID Volumes for Intel Rapid Storage Technology". Retrieved 2018-06-10.
Code Idol (2010-09-17). "NTFS On-Disk Structure". Retrieved 2018-03-02.
Microsoft Corporation (2009-10-08). "How NTFS Works". Retrieved 2018-03-02.