Searching for eels is an activity reserved for those who like to stay up late: only under the cover of darkness does one have the best chance of finding these nocturnal fish. Surveying for American eels (Anguilla rostrata) with lanterns at night, or “shining,” is a common method to document their presence in rivers across eastern North America. Juvenile American eels often congregate downstream of obstacles that block their upstream movement (eels are catadromous fish, meaning adults spawn in saltwater and the young move into freshwater to rear and mature before returning to the marine environment to complete the reproductive cycle). Hydroelectric dams are common impediments to upstream migrants along river courses. Finding optimal areas to establish passage routes for eels to move upstream is a primary reason to shine for eels.
Eels, like all fish, become more active as the water temperature rises. When the temperature is above 10 degrees C, which generally occurs from May to October in New England, eels of all ages and sizes are on the move. The recently born glass eels (so called because of their transparent appearance) are carried northward by Atlantic Ocean currents, floating along like crystalline feathers until tides pull them ashore, where they begin a metamorphosis into the more recognizable elongated fish. These eels, now called elvers, grow and mature to the point where they are capable of swimming against the river current to seek inland freshwater and a chance to grow to appreciable sizes (mature female eels in the St. Lawrence River can reach 1 meter in length and weigh up to 8 kilograms). Eels are long-lived fishes; some individuals are almost 30 years old before returning to the central Atlantic Ocean to complete their life cycle and give life to the next generation.
After nightfall during the warm spring or summer months, surrounded by complete darkness, you throw on a pair of waders and the search for the secretive eel begins. A careful inspection of the area of interest, such as a dam tailwater, using flashlights or lanterns is best achieved from the shore or among the rocks and boulders of the riverbed. If the river is inaccessible for wading, aim a strong spotlight at the suspected congregation area and view it from a distance with a pair of binoculars. Eels do not need a lot of flowing water to stimulate them to ascend. They will seek to move upstream through even the smallest trickle of water if that is all that is available, so be sure to pay attention where these conditions exist. Look for rivulets of water flowing among boulder fields that offer a constant and uninterrupted path. Eels may try to lift themselves with their tails through a plunging waterfall, but they need substrate to support their long muscular bodies and push against to get up, over, and through the point of passage they seek. Eels even have the extraordinary ability to travel overland if the drive to move past an obstruction is strong enough. During warm, humid nights, eels can be observed doing just this.
Figure 1: Migrating eels in a New England River
Just because it’s nighttime and it’s the month of June doesn’t mean that eels will be on the move. Environmental conditions such as air and water temperature, precipitation, percent cloud cover, and lunar phase influence eel behavior. Variability in these conditions during the spring and summer months can cause eel activity levels to increase or decrease. Keeping track over a season of the exact locations and environmental conditions under which the greatest numbers of eels congregate will provide the best information for deciding where to establish an upstream passage route. There are many ways to provide upstream passage for eels (ladders and traps are common), but it is critical to know the optimal location to place these facilities. Eel shining is a proven way to identify these locations by observing the conditions that eels prefer at a site of interest. Alden advocates night-time surveys as a low-tech yet effective method to shine the light on upstream eel behavior.
Attendees at the 2017 Alden Forum on Hydropower and Fish Passage
Based on presentations given by the various speakers, the primary takeaways from the forum include the following:
Cake consumed during a break at the 2017 Alden Forum, showing companies and agencies in attendance
The format and content of the forum were highly rated by the attendees and led to many in-depth and productive discussions. The setting appeared to be more conducive to open dialogue among all of the participants than typical relicensing meetings and agency consultations.
Alden staff are planning additional forums addressing other relevant topic areas related to fish passage and other environmental issues, which may include hosting events in other regions of the U.S.
You might never know when one seemingly minor decision could change your life.
One summer weekend, just before entering my third year in the Civil & Environmental Engineering program at Tufts, I found myself in a whitewater kayaking class for beginners run by volunteer instructors with the Appalachian Mountain Club. A friend recruited me to join at the last minute; they needed more new “boaters” to reach their minimum capacity.
Some combination of perfect weather, good company, and new challenges that weekend got me hooked on the sport. The more time I spent on the river, the more folks I met who had degrees and careers related to hydrology or engineering. That would eventually include me, too – my love for this hobby & fluid dynamics led me to work here at Alden.
When I returned to school in the fall, I took my first fluid dynamics course. The coursework and the new hobby complemented each other – spending time in a boat made it easier for me to understand certain fluid mechanics topics.
One of those topics is the concept of a stagnation point: an obstruction in a flow field, like a rock or bridge abutment in a river, will cause the fluid to slow down to a velocity of zero at the object’s surface, resulting in high static pressure.
For a boater, the stagnation point is a dangerous place to be. You and your boat can get pinned on the upstream side of an obstruction, and if you can’t get free quickly, you could be injured or drown.
What makes the stagnation point so dangerous? Consider conservation of energy and the Bernoulli equation, a key concept in fluid dynamics:
The Bernoulli equation applies only to steady, incompressible, frictionless flow along a streamline. Under these conditions, the total mechanical energy per unit mass of water (the sum of pressure (p/ρ), kinetic (V²/2), and potential (gz) energies) is constant along a streamline: p/ρ + V²/2 + gz = constant. This concept clearly explains why a kayaker doesn’t want to get stuck at the stagnation point.
Figure 1: Bridge Abutment Stagnation Point
Follow the light blue dotted streamline through the middle of the river in Figure 1. If we assume negligible friction and minimal change in elevation as the streamline approaches the bridge abutment, Bernoulli’s equation says the kinetic energy of the fast-moving water upstream of the abutment is converted into high static pressure as the velocity drops to zero at the stagnation point.
For a boater, this means that if you float head on into an obstruction and become pinned, you will have a velocity of zero, and high water pressure will hold you in place against the obstruction. Yikes! It is best to avoid this situation by navigating around the obstruction, but boaters should always be prepared for the worst by staying up to date on swift water rescue techniques and carrying appropriate safety equipment. Thanks for the safety tip, Bernoulli!
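To put a rough number on that pressure, here is a back-of-the-envelope sketch (the function name and the 3 m/s current are my own illustration, not a hydraulic analysis): Bernoulli's kinetic term, ½ρV², converts entirely to static pressure when the flow stops.

```python
# Back-of-the-envelope sketch: the pressure rise at a stagnation point.
# Bernoulli along a streamline (negligible friction and elevation change):
# the kinetic term V**2/2 converts to static pressure as V -> 0.
RHO_WATER = 1000.0  # density of fresh water, kg/m^3

def stagnation_pressure_rise(velocity_m_s: float, rho: float = RHO_WATER) -> float:
    """Static pressure rise (Pa) when flow of the given speed stops."""
    return 0.5 * rho * velocity_m_s ** 2

# A modest 3 m/s river current pressing a pinned boat against an abutment:
print(stagnation_pressure_rise(3.0))  # 4500.0 Pa (~4.5 kPa) above ambient
```

Spread over the area of a boat hull, even a few kilopascals adds up to a serious holding force, which is part of why a pinned boater cannot simply push off.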
Whitewater kayaking was also helpful in understanding my favorite dimensionless parameter: the Froude number! This parameter compares inertial to gravitational forces. It is also a ratio of the fluid’s velocity to the speed that a surface wave travels across the fluid (AKA wave celerity).
Froude numbers less than 1 indicate subcritical flow: the water is deep and moving relatively slowly. Froude numbers greater than 1 indicate supercritical flow: the water is shallow and moving quickly. At a Froude number of exactly 1, we have critical flow: gravitational and inertial forces are equivalent. Critical flow occurs at hydraulic control sections, such as over the top of a weir. If you measure the depth of water over the top of the weir (the length scale L in the Froude number) as well as the width of the river along the weir, you can determine the flow area, the flow velocity (using the Froude number), and ultimately the total flow rate of the river. The Froude number also helps us understand how waves form, and why kayakers are able to surf on them.
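The weir measurement described above can be sketched in a few lines of Python. This is a hedged illustration: the function names are mine, and it leans entirely on the Fr = 1 assumption at the crest rather than a calibrated weir rating formula.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(velocity: float, depth: float) -> float:
    """Fr = V / sqrt(g * L): flow velocity over shallow-water wave celerity."""
    return velocity / math.sqrt(G * depth)

def weir_flow_rate(depth: float, width: float) -> float:
    """Assume critical flow (Fr = 1) at the crest, so V = sqrt(g * depth)."""
    velocity = math.sqrt(G * depth)   # critical velocity
    return velocity * depth * width   # Q = V * A, with A = depth * width

# 0.5 m of water over a 10 m wide weir crest:
print(round(weir_flow_rate(0.5, 10.0), 1))  # ~11.1 m^3/s
```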
When conditions are just right in a river, standing waves and holes (shown in Figure 2) can form; they stay in the same place and don’t move with the flow of the water. Kayakers can have fun with these features by surfing on them, balancing on top of the wave or hole, and staying in the same location relative to the river bank in a fast-moving river.
Figure 2: Whitewater Hole & Wave [1]
In fluid dynamics, this phenomenon is called a hydraulic jump.
Hydraulic jumps suitable for surfing sometimes occur naturally in rivers, and can also be designed and installed where river flow and gradient (along with local stakeholders and regulatory agencies) permit. To create a hydraulic jump, a weir-like structure in a river (shown as a “play feature” in Figure 2) can be used to make the flow transition from critical to supercritical (i.e., the water is shallow and moving quickly). An abrupt drop leading into the downstream pool creates a discontinuity along the river bed. The flow immediately becomes subcritical, getting deeper and slowing down. This abrupt change is mirrored at the river surface, shown in these pictures as a “seam” in a hole or as a “trough” in a wave, where the green water of the supercritical upstream flow meets the hydraulic jump.
Figure 3: Me, trying to throw a loop, in Tariffville CT. Photo credit goes to Andrew Nitchske.
The velocity of the subcritical flow is very low, and can in some cases reverse so the water is flowing upstream, which makes it possible to surf and do tricks in a wave or hole without floating downstream.
The flow is critical at the seam or trough, which means that the Froude number is equal to 1. The wave/hole will not move around too much since the flow velocity is equal to the wave celerity.
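The depth change across a hydraulic jump can be estimated with the classic Bélanger (sequent depth) relation. The numbers below are my own illustration, not tied to any particular river feature:

```python
import math

def sequent_depth(upstream_depth_m: float, upstream_froude: float) -> float:
    """Belanger equation: subcritical depth downstream of a hydraulic jump."""
    return upstream_depth_m * 0.5 * (math.sqrt(1.0 + 8.0 * upstream_froude**2) - 1.0)

# Supercritical flow 0.3 m deep at Fr = 2 deepens through the jump to roughly:
print(round(sequent_depth(0.3, 2.0), 2))  # ~0.71 m
# At Fr = 1 (critical flow) there is no jump and the depth is unchanged:
print(sequent_depth(0.3, 1.0))  # 0.3
```

The stronger the upstream Froude number, the more abrupt the depth change, and the "stickier" the resulting hole feels to a surfing kayaker.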
Kayak surfing (made possible by the Froude number) is my favorite thing to do!! Check it out:
Figure 4: Katelyn Green (yellow kayak) and me (blue kayak), surfing in Tariffville CT.
So… what’s the difference between a wave and a hole? Tune in next time for a post from my colleague Ben Mater!
Remember to use the comment field to share your favorite fluid dynamics hobbies with us.
[1] McLaughlin Whitewater Design Group, Ben Nielsen, April 15, 2014 presentation, “Recreational Whitewater: Keys to Successful Management,” available online at https://www.slideshare.net/rshimoda2014/nielsen-ben-rms-recwwworkshop2014submittededit (slide 38 of 66). McLaughlin is one of a small number of companies that create whitewater parks for surfing, and Alden has been lucky enough to collaborate with McLaughlin on occasion!
During spill season at hydroelectric dams, more water flows into the upstream reservoir than can be used to generate electricity in the powerhouses. This excess flow must pass through a number of different flow release structures in order to bypass the dam and powerhouse. Spillways, diversion tunnels, and low-level sluice gates are commonly used to route flow past dams. Open channel spillways are one of the most common flow release structures at high head dams, and create a highly aerated, turbulent jet of water that exits the spillway up to 150 feet above the river downstream of the dam. This waterfall of aerated flow can plunge to the bottom of the tailwater pool, where the bubbles of atmospheric gases are slowly dissolved into solution with the water. The deeper the jet plunges, the more pressure is exerted by the water on the bubbles, dissolving them faster and preventing them from rising to the surface. This is why we see a frothy white plume of flow that can stretch up to half a mile downstream of a dam when flow is being released, as shown in the photo of Boundary Dam spillway below.
Once the water has dissolved all the gas it can hold at equilibrium (its saturation level), continued dissolution at depth pushes the river past that level, and it becomes supersaturated with dissolved gases. The degree of supersaturation is a function of the pressure exerted on the gas bubbles at depth and the travel time of the bubbles to reach the surface. The sum of all the gases dissolved in solution is called the Total Dissolved Gas (TDG) concentration. High TDG is a hazard for aquatic life, especially migratory fish such as salmon and steelhead. When fish come into contact with high TDG concentrations at depth, their tissues absorb the gases. When they later swim to the surface, the gases come out of solution and form bubbles, which can cause trauma around the gills and fins. This trauma is known as Gas Bubble Trauma and has become a major problem for fish populations that must use fish passage systems to bypass dams. Some good photos showing Gas Bubble Trauma in fish can be found in this linked article in the Billings Gazette:
Click on image for full article
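As a rough illustration of why plunge depth matters, here is a first-order sketch under hydrostatic assumptions (solubility proportional to absolute pressure, ignoring temperature, bubble dynamics, and mixing); this is my own back-of-the-envelope, not Alden's modeling approach. Every ~10.3 m of fresh water adds about one atmosphere of pressure on a bubble.

```python
# First-order sketch: hydrostatic pressure compensation of gas saturation.
# Assumes solubility proportional to absolute pressure (Henry's law) and
# ignores temperature, bubble dynamics, and mixing -- illustration only.
ATM_EQUIV_DEPTH_M = 10.3  # ~1 atmosphere of fresh-water head

def max_tdg_percent(plunge_depth_m: float) -> float:
    """Upper-bound TDG (% of surface saturation) for bubbles held at depth."""
    return 100.0 * (1.0 + plunge_depth_m / ATM_EQUIV_DEPTH_M)

print(max_tdg_percent(0.0))         # 100.0 -> no supersaturation at the surface
print(round(max_tdg_percent(5.0)))  # ~149 -> bubbles held 5 m deep can drive TDG well above saturation
```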
The Environmental Protection Agency (EPA) has introduced regulations that limit TDG production to 110% of saturation. This limit is enforced regardless of the TDG level of the water coming into the reservoir, which may be at or near saturation due to upstream dams, waterfalls, or other conditions. High head dams and hydroelectric projects are required to be relicensed with the Federal Energy Regulatory Commission (FERC), and must prove that they meet the new EPA regulations before being granted a license. The owners of a number of affected projects have reached out to Alden to help them find cost-effective solutions to TDG problems, which we have explored extensively using computational fluid dynamics and physical modeling, as well as structural and operational changes to the projects.
In Part II we’ll explore the use of energy-dissipating devices to reduce spillway plunge depth and bubble transit time – stay tuned!
Each year, National Engineers Week falls on the week of February 22 (George Washington’s actual birthday), in part to commemorate a man who is considered the nation’s first engineer. But not only that: the week is meant to highlight the contributions engineers have made to the world as we know it. Just think about that for a minute as you read this on a display screen that wouldn’t exist if not for engineering ingenuity. The list of contributions engineers have made to our society and the history books is massive.
From our perspective, we can highlight many areas in which Alden engineers have contributed to the annals of history. From testing airplane propellers and missile ballistics to the work on dam safety and fish passage and protection programs, we’ve had a hand in shaping our world throughout our 125 years of continual operation.
But trying to find a singular project to discuss for this week? That task is nearly impossible. So I asked Dave Anderson, Senior Vice President and Chief Technology Officer, to weigh in. Besides wanting to know where his present for National Engineers Week was (in the mail, of course), he offered some great insight.
“If I had to point to one thing we’ve done at Alden that has had the biggest impact on society – each and every one of us – it would be our work on power plant emission controls,” Dave says. “While we have made incredible contributions in so many areas, nothing is as important as the air you and I breathe.”
Dave is referencing the flow modeling work we’ve done to help design, integrate, and optimize the performance of emission control systems to meet clean air standards.
The Clean Air Act has evolved over the years along with the research and techniques for controlling and monitoring power plant emissions. As programs and provisions were rolled out, the role of flow modeling and the subsequent design work needed to make emission control systems run efficiently became even more critical. And that’s where our engineers coupled their technical expertise with laboratory modeling techniques that use state-of-the-art computational fluid dynamics (CFD) modeling and traditional reduced-scale physical modeling to provide realistic, reliable solutions for each and every project.
Our engineering design, investigation, and evaluation of flow-related systems includes experience with NOx, SOx, and Hg control; particulate collection system design and operation; carbon capture and sequestration; stack liquid discharge; dust deposition and entrainment; and system optimization and pressure loss reduction.
For instance, we used computational and scaled physical modeling to simulate a planned Selective Catalytic Reduction (SCR) system. The objective of the project was to design internal flow controls and an ammonia injection system to optimize the NH3:NOx ratio entering the catalyst layers, ensure uniform flue gas velocity and temperature distributions within the catalyst, minimize potential ash deposition, and reduce the non-recoverable pressure losses through the SCR system.
In another study, Alden engineers used CFD and scaled physical modeling to evaluate and optimize the performance of a planned Wet Flue Gas Desulphurization (WFGD) design by simulating the flue gas flow distributions entering and throughout the WFGD spray tower. Modifications to the inlet ductwork and within the WFGD were made to improve the gas flow and SO2 removal efficiency. The results of the study provided flow controls and a spray nozzle injection grid design to minimize liquid pullback while providing uniform spray coverage, which resulted in optimized SO2 removal.
Another client contracted us to design a quench spray header system to reduce stack inlet temperatures in order to protect a Pennguard lining during Flue Gas Desulphurization (FGD) bypass mode. Our team used CFD simulations to design a quench system that not only lowered the stack inlet gas temperature, but ensured full evaporation of the injected liquid, avoided wall wetting, and minimized gas temperature gradients entering the stack. Read more about the bypass quench system design here.
We have also used scaled physical modeling to simulate a planned Electrostatic Precipitator (ESP) upgrade and to design flow controls and perforated plates to optimize the flue gas velocity distribution entering the collection fields, minimize ash re-entrainment from the collection hoppers, and reduce the non-recoverable pressure losses throughout the system.
So what’s the end result of all this work and countless other emission control projects that have passed through our doors over the years? You’re breathing it. And for that, we can say it’s truly our contribution to making the world a better place for all of us.
Ice chunks the size of Volkswagens falling from the sky! Sounds like a Hollywood special effects scene in an action movie, right? Unfortunately, this is a real-life danger that can occur just about anywhere, but especially on wet stacks running in cold weather.
It should go without saying that ice falling from a tall stack can have damaging, even catastrophic effects on process equipment and personal safety. But with some situational knowledge and attention to design details, you can prevent ice build-up on wet-stacks before it becomes a problem.
Essentially, to operate an ice-free wet stack system, you need to properly handle the discharge of wet flue gas during prolonged exposure to cold temperatures. Units running at low loads on cold, windy days can see dangerous icing develop from an effect called plume downwash.
Plume downwash occurs when a cross-wind at the top of the stack deflects the plume from its vertical path. This phenomenon is more likely to happen when flue gas exits at a lower velocity—like, for example, when units aren’t running at full capacity. As the wind impacts the plume, the plume is pushed downward onto the stack, causing the liquid within the flue gas to deposit on the stack’s surfaces.
And what happens when moisture is allowed to build up on cold surfaces? You guessed it—ice forms.
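A commonly used screening rule of thumb, the Briggs stack-tip downwash criterion that appears in EPA dispersion models, flags downwash as likely when the exit velocity drops below about 1.5 times the crosswind speed at the stack top. The sketch below is a hedged illustration of that rule (the function and example velocities are mine), not a substitute for CFD analysis.

```python
# Hedged screening sketch: stack-tip downwash becomes likely when the flue
# gas exit velocity is less than ~1.5x the wind speed at the stack top.
DOWNWASH_RATIO = 1.5

def downwash_likely(exit_velocity_m_s: float, wind_speed_m_s: float) -> bool:
    return exit_velocity_m_s < DOWNWASH_RATIO * wind_speed_m_s

print(downwash_likely(20.0, 10.0))  # False: a full-load exit velocity clears a 10 m/s wind
print(downwash_likely(8.0, 10.0))   # True: a low-load unit in the same wind risks downwash
```

This is exactly the low-load scenario described above: the wind doesn't change, but the weaker plume can no longer punch through it.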
Ultimately, all stacks can experience downwash if wind speeds are high enough. The only questions are when downwash will occur and where ice will form.
Thankfully, much of the guesswork can be eliminated by using computational fluid dynamic (CFD) modeling. CFD modeling is extremely well-suited to simulate the stack plume over a range of plant operating and atmospheric conditions to predict the potential for plume downwash. And if you haven't already identified icing with the naked eye, CFD simulations can be used to predict not only if icing can occur, but where it can form on the stack.
If conditions are right for plume downwash, the following areas are most likely to experience problems, including potential ice-buildup:
These areas are exposed directly to plume downwash and, in the right conditions, to icing, some more so than others. Heat tracing is often recommended for some of these surfaces to eliminate snow accumulation and excessive ice build-up, but care should be taken to ensure the drainage run-off doesn’t create a secondary icing problem.
More details about the icing potential in these areas can be found in the EPRI Revised Wet Stack Design Guideline, section 1.4.9.
According to the EPRI Revised Wet Stack Design Guideline, the potential for icing can be reduced by employing the following steps:
Any uncertainty in any of these recommendations can be discussed with our Gas Flow and Wet-Stack Design experts.
Icing can occur at below freezing conditions all winter long, every winter, creating potentially dangerous conditions for both people and property. If you're running a wet stack at a low load in cold, windy weather, icing is probably going to be a concern for you. Contact us for details and recommendations in order to ensure proper performance and reliable operation of your wet stack located downstream of a wet flue gas desulfurization system (WFGD).
And in any instances where ice is present, be careful!
Over the years, end users have expressed the general view that data management isn’t worth their time. In all fairness, this misconception is understandable given the inexpensive and continually decreasing cost of consumer-grade disk drives. Ultimately, firms should work toward changing that mindset, because the true cost is far greater in an enterprise’s production environment. To identify the associated costs, I have prepared the following analysis of raw storage consumption, performance impacts, and resources needed to store data in a locally hosted Microsoft Windows server environment.
Referencing Figure 1, a 1GB file not only consumes raw storage across multiple storage platforms (e.g., local storage, backup volumes, etc.), but consumes more raw capacity on each than its original 1GB. This is a direct result of the high availability and fault tolerance achieved by using a Redundant Array of Independent Disks, better known as RAID.
One requirement of this essential feature with RAID levels 5 and 6, as depicted in Figure 1, is the additional raw storage capacity needed to maintain parity across all drives. RAID parity is redundancy data computed mathematically from the data on the other drives and striped (i.e., spread) across the array, allowing the contents of a failed drive to be reconstructed. This lets a RAID volume operate continuously and unimpeded if a single drive fails (or two drives with RAID 6), and protects against unrecoverable sector read errors. It’s worth noting there are other RAID levels designed to improve fault tolerance and/or performance, such as RAID 1, 10, 50, and 60. Each level has its own distinct advantages; however, no matter which RAID type is chosen, additional raw capacity is needed to support it.
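The parity overhead is easy to quantify with a back-of-the-envelope sketch (the drive counts and sizes below are hypothetical): RAID 5 gives up one drive's worth of raw capacity to parity, and RAID 6 gives up two.

```python
# Sketch: usable vs. raw capacity for parity RAID (RAID 5 = 1 parity unit,
# RAID 6 = 2). Real arrays also reserve capacity for hot spares and metadata.
def usable_capacity_gb(drive_count: int, drive_size_gb: int, parity_units: int) -> int:
    if drive_count <= parity_units:
        raise ValueError("need more drives than parity units")
    return (drive_count - parity_units) * drive_size_gb

# Eight 4000 GB drives (32000 GB raw):
print(usable_capacity_gb(8, 4000, 1))  # 28000 -> RAID 5
print(usable_capacity_gb(8, 4000, 2))  # 24000 -> RAID 6
```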
In order to create and manage the RAID structure, a hardware controller or software utility is needed. Many different factors determine whether a software or hardware solution is appropriate for a firm’s RAID needs, but in either case, managing an array of disks requires resources. For a RAID hardware controller, these resources come in the form of a physical controller, with its own dedicated CPU, RAM, and specialized firmware designed to manage the disk array. Similarly, a software-based RAID needs the same resources, but instead places this burden on the server’s CPU and RAM. With either solution, computational resources are needed to operate these systems – the cost of each is directly driven by the size and number of disk drives in the array, which in turn is determined by the quantity of data stored.
The cost of storing a single 1GB file is further compounded by the price of the enterprise-class drives used in servers and storage arrays. These drives, such as server-grade SATA or NL-SAS drives, are designed for 24/7 operation in a production environment and cost approximately 4 times more per GB than consumer-grade drives. SATA and NL-SAS are generally used when capacity matters more than performance; firms that require the highest level of I/O performance must employ enterprise-class SSD or SCSI drives, which come at a substantially higher price per GB.
One might think, “Well, these costs just apply to firms hosting large files; small files don’t matter.” This is another common misconception among end users. Although small files impact raw storage in a different manner (explained later), their biggest cost comes in the form of processing individual file records. New Technology File System, or simply NTFS, is the file system used by Windows operating systems on servers and workstations to store data on disk drives. This file system relies on a Master File Table (MFT), which is the heart of the NTFS volume structure. The MFT contains a record (its metadata) for every single file stored on the disk, consuming an additional 1KB of raw storage for each, and records the file’s name, attributes, security descriptor, object ID, and so on. Whenever an operation touches files on an NTFS volume, each affected file’s record must be processed. You have likely seen the impact this has on performance when executing various operations, such as copying a single 1GB file vs. many smaller files that consume the same total quantity of storage.
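To make the metadata cost concrete, here is a hedged back-of-the-envelope using the ~1KB record size described above (actual MFT record sizes can vary with volume configuration):

```python
# Sketch: raw storage consumed by MFT file records alone, assuming the
# ~1 KB-per-file record size noted above (volume settings can differ).
MFT_RECORD_BYTES = 1024

def mft_overhead_gb(file_count: int) -> float:
    return file_count * MFT_RECORD_BYTES / 1024**3

# Ten million files consume roughly 9.5 GB in records alone, before
# storing a single byte of file data:
print(round(mft_overhead_gb(10_000_000), 1))  # 9.5
```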
Small files also increase data fragmentation, which can reduce read/write speed due to additional seek times, though this is a greater factor with mechanical drives than with solid state. Nevertheless, storing unnecessary small files can have a notable detrimental effect on performance when carrying out even the most basic operations.
Another way small files impact performance and drive cost is how they’re stored within the file system. When a disk drive is formatted, raw storage is broken into chunks called clusters, with each cluster representing a fixed number of bytes. When a file is stored on a disk drive, it is allocated as many clusters as it needs. To maintain optimal disk drive performance, cluster size should be increased as drive capacity increases: fewer clusters to seek means less seek time. One consequence of a larger cluster size is the impact it has when storing small files. For example, many drives today use a 4KB cluster size due to their high capacity, which means a 4KB file needs exactly one cluster to store its data. A 1-byte file, however, also needs 4KB, because 4KB is the smallest logical unit available for storage, leaving the remainder of the cluster unused. This may seem negligible, but when a storage volume holds millions of files smaller than the cluster size, it consumes precious storage space; and don’t forget about the performance impact related to the MFT!
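The slack-space effect is easy to quantify. This is a hedged sketch assuming the 4KB cluster size from the example above; it ignores that NTFS can store very small files resident in the MFT record itself.

```python
import math

CLUSTER_BYTES = 4096  # the 4 KB cluster size from the example above

def on_disk_bytes(file_size_bytes: int, cluster: int = CLUSTER_BYTES) -> int:
    """A file occupies whole clusters; even a 1-byte file takes a full cluster."""
    return max(1, math.ceil(file_size_bytes / cluster)) * cluster

print(on_disk_bytes(1))     # 4096: a 1-byte file still consumes 4 KB
print(on_disk_bytes(4097))  # 8192: one byte over the boundary adds a whole cluster

# A million 1 KB files waste ~2.9 GB of slack space in partially used clusters:
slack = 1_000_000 * (on_disk_bytes(1024) - 1024)
print(round(slack / 1024**3, 1))  # 2.9
```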
The cost of large and small files on a disk drive is worth noting, but another consequence comes in the form of indirect costs. This includes the labor and computational resource consumption associated with managing, indexing and searching, backing up, and other types of data processing, which puts an unnecessary burden on equipment and personnel.
In conclusion, the cost of storing unnecessary files, small or large, can’t be overstated. Firms that make a concerted effort to appropriately manage their data inevitably reduce their IT expenditures and associated management costs. End users may find it difficult to see the value of efficiently managing the data they create, but when the costs are aggregated across a firm’s entire IT systems, it’s apparent that data management really does matter.