Step 1: Calculate the LMTD
Step 3: Read hHot from 0.25 < NTU < 2.0 chart for hydrocarbons
Although there is not a viscosity line for 215 cP, the line representing “100 cP” can be used for viscosities up to
about 400-500 cP. The heat exchanger will be pressure drop limited and the heat transfer coefficient will
not change appreciably over this viscosity range for plate and frame exchangers. Reading from the
chart, a pressure drop of 15 psig corresponds to hHot ≈ 50 Btu/h ft2 °F
Step 4: Read hCold from 0.25 < NTU < 2.0 chart for water based liquids
Again, you will note that the exact viscosity line needed for pure water (0.33 cP) in this case is not
available. However, the “1.0 cP” line on the chart will provide a very good estimate of the heat transfer
coefficient that pure water will exhibit. Reading from the chart, a pressure drop of 15 psig corresponds to
hCold ≈ 3000 Btu/h ft2 °F
Step 5: Calculate the Overall Heat Transfer Coefficient (OHTC)
Assume a stainless steel plate with a thickness of 0.50 mm is being used. 316 stainless steel has a
thermal conductivity of 8.67 Btu/h ft °F.
As before, the LMTD is calculated to be 38.5 °F. NTUHot and NTUCold are calculated as 2.59 and 3.14 respectively.
Reading hHot and hCold from the chart for 2.0 < NTU < 4.0 (water based), gives about 2000 Btu/h ft2 °F and 2500
Btu/h ft2 °F respectively. Although the material of choice may be Titanium or Palladium stabilized Titanium, we will
use the properties for stainless steel for our preliminary sizing. Calculating the OHTC as before yields 918 Btu/h
ft2 °F.
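The overall coefficient calculation can be sketched in a few lines of Python (a minimal sketch, neglecting fouling allowances; the film coefficients and plate properties are the ones quoted in the text):

```python
# Overall heat transfer coefficient for a plate exchanger, no fouling allowance:
#   1/U = 1/h_hot + t/k + 1/h_cold

def ohtc(h_hot, h_cold, t_ft, k):
    """Overall heat transfer coefficient, Btu/h·ft2·°F."""
    return 1.0 / (1.0 / h_hot + t_ft / k + 1.0 / h_cold)

MM_PER_FT = 304.8
t = 0.50 / MM_PER_FT            # 0.50 mm 316 SS plate thickness, in ft
U = ohtc(h_hot=2000.0, h_cold=2500.0, t_ft=t, k=8.67)
print(round(U))                 # -> 918
```

With hHot = 2000 and hCold = 2500 Btu/h ft2 °F this reproduces the 918 Btu/h ft2 °F quoted above.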
We have seen that alternative technologies have a significant size advantage over shell-and-tube heat
exchangers. Now let’s consider the implications of this. The first advantage is a smaller plot plan for the
process plant. The spacing between process equipment can be reduced. So, if the plant is to be housed
in a building, the size of the building can be reduced. In any event, the amount of structural steel used to
support the plant can be reduced and given the weight saving, the load on that structure is also reduced.
The weight advantage extends to the design of the foundations used to support the plant.
Since the spacing between individual equipment items is reduced, expenditure on piping is reduced.
Once more we stress the savings associated with size and weight
reduction can only be achieved if these advantages are recognized at
the earliest stages of the plant design.
As we will briefly show, the use of alternative exchanger technologies can result in significant reduction in
plant complexity. This not only reinforces the savings associated with reduced size and weight (reduced
plot space, structural cost savings, piping cost reduction, etc.) but also has safety implications. The
simpler the plant structure, the easier it is for the process operator to understand the plant. The simpler the
plant structure, the safer, easier and more straightforward the plant maintenance (the fewer the pipe
branches that must be blanked, etc.).
The alternative technologies result in reduced complexity by reducing the number of heat exchangers.
This is achieved through the use of pure counter-flow (avoiding the splitting of duties across multiple
shells) and through multi-streaming.
Mechanical constraints play a significant role in the design of shell-and-tube heat exchangers. For
instance, it is common to find that some users place restrictions on the length of the tubes used in such a
unit. Such a restriction can have important implications for the design. In the case of exchangers requiring
large surface areas the restriction drives the design towards large tube counts. If such tube counts then
lead to low tube side velocity, the designer is tempted to increase the number of tube side passes in order
to maintain a reasonable tube-side heat transfer coefficient.
Thermal expansion considerations can also lead the designer to opt for multiple tube passes, for the cost
of a floating head is generally lower than the cost of installing an expansion bellows in the exchanger
shell.
The use of multiple tube passes has four detrimental effects. First, it leads to a reduction in the number of
tubes that can be accommodated in a given size of shell (so it leads to increased shell diameter and
cost). Second, for bundles having more than four tube passes, the pass partition lanes introduced into the
bundle give rise to an increase in the quantity of shell-side fluid bypassing the tube bundle and a
reduction in tube-side heat transfer coefficient. Thirdly, it gives rise to wasted tube side pressure drop in
the return headers. Finally, and most significantly, the use of multiple tube passes results in the thermal
contacting of the streams not being pure counter-flow. This has two effects. The first is that the Effective
Mean Temperature Driving Force is reduced. The second, and more serious effect, is that a ‘temperature
cross’ can occur.
If a ‘temperature cross’ occurs, the designer must split the duty between a number of individual heat
exchangers arranged in series. Figures 8 and 9 below illustrate the difference between temperatures that
are said to be ‘crossing’ and those that are not.
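The check itself is simple: a temperature cross exists when the cold stream is heated to a temperature above the hot stream outlet. A minimal sketch, with hypothetical terminal temperatures:

```python
def temperature_cross(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """True when the cold stream outlet is hotter than the hot stream outlet,
    i.e. the terminal temperatures 'cross' and the duty must be split across
    shells in series (or handled in pure counter-flow)."""
    return t_cold_out > t_hot_out

# Hypothetical terminal temperatures, °F:
print(temperature_cross(300, 150, 100, 200))   # crossing
print(temperature_cross(300, 200, 100, 180))   # no cross
```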
Many of the alternative heat exchanger technologies allow the application of pure counter-flow across all
size and flow ranges. The results are better use of available temperature driving force and the use of
single heat exchangers.
Figure 8: No temperature cross. Figure 9: Deep temperature cross.
Let’s now consider multi-streaming. The traditional shell-and-tube heat exchanger only handles one hot
and one cold stream. Some heat exchanger technologies (most notably plate-fin and printed circuit
exchangers) can handle many streams. It is not uncommon to find plate-fin heat exchangers transferring
heat between ten individual process streams. Such units can be considered to contain a whole heat exchanger
network within the body of a single exchanger. Distribution and recombination of process flows is
undertaken inside the exchanger. The result is a major saving in piping cost.
Engineers often overlook the opportunities of using a plate and frame unit as a multi-stream unit. (Again,
this will be a regular oversight if exchanger selection is not made until after the flow sheet has been
developed).
A good example of multi-streaming is the use of a plate heat exchanger serving as a process
interchanger on one side and a trim cooler on the other. This arrangement is particularly useful for
product streams that are exiting a process and must be cooled for storage. Another popular function of
multi-streaming is in lowering material costs. Often, once streams are cooled to a certain
temperature, they pose much less of a corrosion risk. Half of the exchanger can then be built from a higher
alloy, while the other half can utilize stainless steel or a lower alloy.
In Figure 10 we show how a plate and frame unit has been applied to a problem involving three process
streams. The heat transfer properties used for styrene are given in Table 1. Just one unit is used and this
unit has 1,335 sq.ft. of effective surface area.
In Figure 11 we show the equivalent shell-and-tube solution. In order to avoid temperature crosses we
need six individual exchangers: the cooler having two shells in series (each having 1,440 sq.ft of effective
surface); the heat recovery unit having four shells in series (each having 2,116 sq.ft. of surface).
So, our plate-and-frame design involves the use of 1,335 sq.ft. of surface in a single unit. The equivalent
shell-and-tube design has 11,344 sq.ft. of surface distributed across six separate exchangers.
Figure 10: A multi-stream plate exchanger serving as an interchanger and a trim
cooler
Table 1: Heat Transfer Properties Used for Styrene in the Multi-Stream Example
                              100 °F    150 °F    200 °F
Density (lb/ft3)              55.5      53.9      52.3
Specific Heat (Btu/lb °F)     0.427     0.447     0.471
Viscosity (cP)                0.590     0.428     0.329
Thermal Cond. (Btu/ft h °F)   0.077     0.074     0.070
Turbocompressors, either centrifugal or axial, are the heart of many industrial processes. Often, these
compressors are critical to the operation of the plant, yet they are seldom installed with a spare unit.
Surging represents a major threat to compressors and these processes. Surge prevention is an important
process control problem in these environments, as surging can result in costly downtime and mechanical
damage to the compressors. An effective anti-surge control system is critical for every turbocompressor.
Understanding Surge
Many believe that surging is analogous to cavitation in a centrifugal pump, but this is not the case.
Surging is defined as a self-oscillation of the discharge pressure and flow rate, including a flow reversal.
Every centrifugal or axial compressor has a characteristic combination of maximum head and minimum
flow. Beyond this point, surging will occur. During surging, a flow reversal is often accompanied by a
pressure drop.
Surging is best illustrated by observing the movement of the compressor operating point along its
characteristic curve, as shown in Figure 1.
Consider a compressor system as shown in Figure 2. The discharge pressure is marked Pd and the
downstream vessel pressure is Pv.
Point B is not a stable operating point. When the flow reversal occurs, the discharge pressure drops.
This forces the operating point to move from Point B to Point C. At Point C, the flow rate is insufficient to
build the necessary pressure to return to Point A. Thus, the operating point moves to Point D where the
flow rate is in excess of the load demanded and the pressure builds until Point A is finally reached. This
completes a single surge cycle. The next cycle begins again with another flow reversal and the process
repeats until an external force breaks the surge cycle.
Consequences of Surging
Radial bearing load during the initial phase of surging: a side load is placed on the rotor which
acts perpendicular to the axis.
Thrust bearing load due to loading and unloading.
Seal rubbing.
Stationary and rotating part contact if the thrust bearing is overloaded.
Anti-Surge Control
The only way to prevent surging is to recycle or blow down a portion of the flow to keep the compressor
away from its surge limit. Unfortunately, compressing extra flow results in a severe economic penalty.
Thus, the control system must be able to accurately determine the compressor's operating point so as to
provide adequate, but not excessive, recycle flow.
A Surge Limit Line (SLL) is the line connecting the various surge points of a compressor at varying
RPMs. The set point of the anti-surge controller is represented on the
compressor map shown in Figure 4 by a line which runs parallel to the
surge limit line. This line is called the Surge Control Line (SCL). The
controller is then able to calculate the deviation of the operating point from the SCL.
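As a rough illustration of that deviation calculation, the sketch below assumes a parabolic surge limit (head proportional to flow squared) and a fixed 10% flow margin between the SLL and the SCL; both the coefficient and the margin are invented for illustration:

```python
import math

A_SLL = 2.5e-3        # assumed coefficient of the surge limit parabola, h = A*q**2
MARGIN = 0.10         # assumed 10% flow margin between the SLL and the SCL

def scl_flow(head):
    """Flow on the Surge Control Line at a given polytropic head."""
    q_surge = math.sqrt(head / A_SLL)      # flow on the SLL at this head
    return q_surge * (1.0 + MARGIN)

def deviation(q_op, head):
    """Distance (in flow units) from the operating point to the SCL.
    Negative values mean the point has crossed the SCL: open the recycle."""
    return q_op - scl_flow(head)

print(deviation(q_op=2400.0, head=10000.0) > 0)   # safely right of the SCL
```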
The compressor surge limit is not fixed with respect to any one measured variable such as compression
ratio or pressure drop across the flow meter. Instead, it is a complex function that is dependent on the gas
composition, RPM, suction temperature, and pressure. A closed-loop PI controller alone would be unable
to prevent surge during large or fast disturbances; rather, it would simply cycle the recycle valve open
and closed in response to successive surge cycles. For a PI controller to act quickly, the "b" value would
need to be high. This would result in a decreased operating region for the compressor when the recycle
valve is closed.
Thus, an open loop control is used in conjunction with the closed loop in an anti-surge controller. The
overall configuration is shown in Figure 5. A Recycle Trip Line (RTL) is used between the SLL and the
SCL. Small or slow disturbances are managed by the closed loop controller which keeps the compressor
operating point to the right of the RTL. For large or fast disturbances, the compressor operating point will
reach the RTL. At this point, the open loop control will be initiated. This will add a step change which is a
function of the compressor operating point at the moment it reaches the RTL. In this manner, the fast
opening valve will be sufficient to stop surging.
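The combined closed-loop/open-loop behaviour might be sketched as follows; the step sizing and the coefficients are hypothetical, not taken from any particular controller:

```python
def antisurge_action(dev_scl, dev_rtl, pi_output):
    """Blend of closed- and open-loop anti-surge control (illustrative sketch).

    dev_scl  : distance of the operating point right of the SCL (negative = crossed)
    dev_rtl  : distance right of the RTL (negative = crossed)
    pi_output: recycle-valve opening demanded by the closed-loop PI controller (0..1)
    """
    valve = pi_output
    if dev_rtl <= 0.0:
        # Large/fast disturbance: apply an open-loop step sized from how far
        # the operating point has moved when it reached the RTL.
        step = min(1.0, 0.3 + 0.5 * abs(dev_rtl))   # assumed step law
        valve = max(valve, step)
    return min(1.0, max(0.0, valve))                # clamp to valve range
```

In normal operation (operating point right of the RTL) the PI output passes through unchanged; crossing the RTL forces a step opening on top of it.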
Adaptive gain is also used in the anti-surge controller. When the operating point moves quickly toward
the SCL, the adaptive gain moves the SCL toward the operating point.
Figure 5: Compressor Anti-Surge Control Scheme
1. The valve must be large enough to prevent surging under all possible operating conditions. However,
an oversized valve will result in poor control.
5. One or more volume boosters are required to ensure fast response and equal opening and closing
time.
PINCH TECHNOLOGY
While oil prices continue to climb, energy conservation remains the prime concern for
many process industries. The challenge every process engineer is faced with is to seek
answers to questions related to their process energy patterns. A few of the frequently
asked questions are:
1. Are the existing processes as energy efficient as they should be?
2. How can new projects be evaluated with respect to their energy requirements?
3. What changes can be made to increase the energy efficiency without incurring any cost?
5. What is the most appropriate utility mix for the process?
6. How can energy efficiency and other targets, like reducing emissions, increasing plant
capacities, and improving product quality, be put into one coherent strategic plan for the overall site?
All of these questions and more can be answered with a full understanding of Pinch
Technology and an awareness of the available tools
for applying it in a practical way. This article aims to provide the basic knowledge of
the concepts in pinch technology and how they can be applied across a wide range of process industries.
Composite Curves
DTmin and Pinch Point
Grand Composite Curve
Energy Cost, Capital Cost, and Total Cost Targeting
Energy Cost and Capital Cost Trade-Off
Plus/Minus Principle of Process Modification
Appropriate Placement Principles for Key Process Equipments
Total Site Analysis
Conclusions
References
Web links
What is Pinch Technology?
The term "Pinch Technology" was introduced by Linnhoff and Vredeveld to represent
a new set of thermodynamically based methods that guarantee minimum energy levels
in design of heat exchanger networks. Over the last two decades it has emerged as an
unconventional development in process design and energy conservation. The
term ‘Pinch Analysis’ is often used to represent the application of the tools and
algorithms of Pinch Technology for studying industrial processes. Developments of
rigorous software programs like PinchExpressTM, SuperTargetTM, Aspen PinchTM have
proved to be very useful in pinch analysis of complex industrial processes with speed
and efficiency. Check out Link-1 at the end of the article for their demos.
Consider the following simple process [Figure 1(a)] where the feed stream is heated before entering a
reactor and the product stream is to be cooled. The heating and
cooling are done by use of steam (Heat Exchanger -1) and cooling water (Heat
Exchanger-2), respectively. The Temperature (T) vs. Enthalpy (H) plot for the feed
and product streams depicts the hot (Steam) and cold (CW) utility loads when there is
no vertical overlap of the hot and cold stream profiles.
An alternative, improved scheme is shown in Figure 1(b) where the addition of a new
‘Heat Exchanger–3’ recovers product heat (X) to preheat the feed. The steam and
cooling water requirements also get reduced by the same amount (X). The amount of
heat recovered (X) depends on the ‘minimum approach temperature’ allowed for the
new exchanger. The minimum temperature approach between the two curves on the
vertical axis is DTmin and the point where this occurs is defined as the "pinch".
From the T-H plot, the X amount corresponds to a DTmin value of 20 °C. Increasing
the DTmin value leads to higher utility requirements and lower area requirements.
The estimation of optimum economic value of DTmin is discussed in Steps of Pinch
Analysis.
When the process involves single hot and cold streams (as in above example) it is
easy to design an optimum heat recovery exchanger network intuitively by heuristic
methods. In any industrial set-up the number of streams is so large that the traditional
design approach has been found to be limiting in the design of a good network. With
the development of pinch technology in the late 1980’s, not only was optimal network
design made possible, but considerable process improvements could also be
discovered. Both the traditional and pinch approaches are depicted in Figure 2.
Figure 2: Graphic Representation of Traditional and Pinch Design Approaches
Traditional Design Approach: First, the core of the process is designed with fixed
flow rates and temperatures yielding the heat and mass balance for the process. Then
the design of a heat recovery system is completed. Next, the remaining duties are
satisfied by the use of the utility system. Each of these exercises is performed
independently of the others.
Pinch originated in the petrochemical sector and is now being applied to solve a wide
range of problems in mainstream chemical engineering. Wherever heating and cooling
of process materials takes place, there is a potential opportunity. Thus initial
applications of the technology were found in projects relating to energy saving in
industries as diverse as iron and steel, food and drink, textiles, paper and cardboard,
cement, base chemicals, oil, and petrochemicals.
Most industrial processes involve transfer of heat either from one process stream to
another process stream (interchanging) or from a utility stream to a process stream. In
the present energy crisis scenario all over the world, the target in any industrial
process design is to maximize the process-to-process heat recovery and to minimize
the utility (energy) requirements. To meet the goal of maximum energy recovery or
minimum energy requirement (MER) an appropriate heat exchanger network (HEN)
is required. The design of such a network is not an easy task considering the fact that
most processes involve a large number of process and utility streams. As explained in
the previous section, the traditional design approach has resulted in networks with
high capital and utility costs. With the advent of pinch analysis concepts, the network
design has become very systematic and methodical.
A summary of the key concepts, their significance, and the nomenclature used in
pinch analysis is given below:
Combined (Hot and Cold) Composite Curves: Used to predict targets for minimum
energy requirements (both hot and cold utility) for the process prior to design.
Energy and Capital Cost Targeting: Used to calculate total annual cost of
utilities and capital cost of heat exchanger network.
Total Site Analysis: This concept enables the analysis of the energy usage for
an entire plant site that consists of several processes served by a central utility
system.
With further research, new topics like ‘Regional Energy Analysis’, ‘Network Pinch’,
‘Top Level Analysis’, ‘Optimisation of Combined Heat & Power’, ‘Water Pinch’, and
‘Hydrogen Pinch’ are being developed. These basic terms and concepts have
become the foundation of what we now call Pinch Technology.
In any Pinch Analysis problem, whether a new project or a retrofit situation, a well-
defined stepwise procedure is followed (Figure 3). It should be noted that these steps
are not necessarily performed on a once-through basis, independent of one another.
Additional activities such as re-simulation and data modification occur as the analysis
proceeds and some iteration between the various steps is always required.
‘Cold Streams’ are those that must be heated e.g. feed preheat before a
reactor.
For example, when a gas stream is compressed the stream temperature rises because
of the conversion of mechanical energy into heat and not
by any fluid to fluid heat exchange. Hence such a stream
may not be available to take part in any heat exchange. In the context of pinch
analysis, this stream may or may not be considered to be a process stream.
For each hot, cold and utility stream identified, the following thermal data is extracted
from the process material and heat balance flow sheet:
CP = m × Cp, where m is the mass flow rate and Cp is the specific heat.
W = 0 (zero)
** Here the specific heat values have been assumed to be temperature independent
within the operating range.
The stream data and their potential effect on the conclusions of a pinch analysis
should be considered during all steps of the analysis. Any erroneous or incorrect data
can lead to false conclusions. In order to avoid mistakes, the data extraction is based
on certain qualified principles. For details on principles of data extraction, check
out Link-2 at the end of the article. The data extracted is presented in Table 1.
The design of any heat transfer equipment must always adhere to the Second Law of
Thermodynamics that prohibits any temperature crossover between the hot and the
cold stream i.e. a minimum heat transfer driving force must always be allowed for a
feasible heat transfer design. Thus the temperature of the hot and cold streams at any
point in the exchanger must always have a minimum temperature difference (DTmin).
This DTmin value represents the bottleneck in the heat recovery.
Hot stream temp. (TH) − Cold stream temp. (TC) >= DTmin
The value of DTmin is determined by the overall heat transfer coefficients (U) and the
geometry of the heat exchanger. In a network design, the type of heat exchanger to be
used at the pinch will determine the practical DTmin for the network. For example, an
initial selection for the DTmin value for shell and tube exchangers may be 3-5 °C (at best) while
compact exchangers such as plate and frame often allow for an initial selection of 2-
3 °C. The heat transfer equation, which relates Q, U, A and LMTD (Log Mean
Temperature Difference) is depicted in Figure 4.
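A minimal sketch of that heat transfer equation, rearranged for area (the duty, U value, and terminal temperature differences below are hypothetical):

```python
import math

def lmtd(dt1, dt2):
    """Log mean of the terminal temperature differences."""
    if math.isclose(dt1, dt2):
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def area(q, u, dt1, dt2):
    """Required area from Q = U * A * LMTD."""
    return q / (u * lmtd(dt1, dt2))

# Hypothetical duty of 1.0e6 Btu/h, U = 200 Btu/h·ft2·°F,
# terminal temperature differences of 60 and 20 °F:
print(round(area(1.0e6, 200.0, 60.0, 20.0), 1))   # -> 137.3 (ft2)
```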
For a given value of heat transfer load (Q), if smaller values of DTmin are chosen, the
area requirements rise. If a higher value of DTmin is selected the heat recovery in the
exchanger decreases and demand for external utilities increases. Thus, the selection
of DTmin value has implications for both capital and energy costs. This concept
will become clearer with the help of composite curves and total cost targeting
discussed later.
Just as for a single heat exchanger, the choice of DTmin (or approach temperature) is
vital in the design of heat exchanger networks. To begin the process an initial
DTmin value is chosen and pinch analysis is carried out. Typical DTmin values based
on experience are available in literature for reference. A few values based on Linnhoff
March’s application experience are tabulated below for shell and tube heat
exchangers.
For more details on typical DTmin values, check Link-3 at the end of the article.
In general any stream with a constant heat capacity (CP) value is represented on a T -
H diagram by a straight line running from stream supply temperature to stream target
temperature. When there are a number of hot and cold streams, the construction of hot
and cold composite curves simply involves the addition of the enthalpy changes of the
streams in the respective temperature intervals. An example of hot composite curve
construction is shown in Figure 5(a) and (b). A complete hot or cold composite curve
consists of a series of connected straight lines; each change in slope represents a
change in the overall stream heat capacity flow rate (CP).
Figure 5: Temperature-Enthalpy Relations Used to Construct Composite Curves
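The interval-summation just described can be sketched directly (the stream data here are hypothetical; each stream is given as supply temperature, target temperature, CP):

```python
def hot_composite(streams):
    """Construct the hot composite curve as (T, H) points.
    streams: list of (T_supply, T_target, CP) with T_supply > T_target."""
    temps = sorted({t for s in streams for t in s[:2]})
    curve = [(temps[0], 0.0)]          # start at the lowest temperature, H = 0
    h = 0.0
    for t_lo, t_hi in zip(temps, temps[1:]):
        # Sum the CPs of every stream spanning this temperature interval.
        cp = sum(s[2] for s in streams if s[1] <= t_lo and s[0] >= t_hi)
        h += cp * (t_hi - t_lo)
        curve.append((t_hi, h))
    return curve

# Two hypothetical hot streams:
print(hot_composite([(250, 40, 0.15), (200, 80, 0.25)]))
```

Each change in slope of the returned curve corresponds to a stream entering or leaving a temperature interval, exactly as described above.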
For heat exchange to occur from the hot stream to the cold stream, the hot stream
cooling curve must lie above the cold stream heating curve. Because of the ‘kinked’
nature of the composite curves (Figure 6), they approach each other most closely at
one point defined as the minimum approach temperature (DTmin). DTmin can be
measured directly from the T-H profiles as being the minimum vertical difference
between the hot and cold curves. This point of minimum temperature difference
represents a bottleneck in heat recovery and is commonly referred to as the
"Pinch". Increasing the DTmin value results in shifting the curves horizontally
apart resulting in lower process to process heat exchange and higher utility
requirements. At a particular DTmin value, the overlap shows the maximum possible
scope for heat recovery within the process. The hot end and cold end overshoots
indicate minimum hot utility requirement (QHmin) and minimum cold utility
requirement (QCmin), of the process for the chosen DTmin.
Thus, the energy requirement for a process is supplied via process to process heat
exchange and/or exchange with several utility levels (steam levels, refrigeration
levels, hot oil circuit, furnace flue gas, etc.).
Graphical constructions are not the most convenient means of determining energy
needs. A numerical approach called the "Problem Table Algorithm" (PTA) was
developed by Linnhoff & Flower (1978) as a means of determining the utility needs of
a process and the location of the process pinch. The PTA lends itself to hand
calculations of the energy targets. For more details on PTA see Link-4 at the end of
the article.
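A compact sketch of the PTA for sensible-heat streams follows; the stream data and DTmin in the usage line are hypothetical, and the returned pinch is a shifted (interval) temperature:

```python
def problem_table(hot, cold, dt_min):
    """Problem Table Algorithm (after Linnhoff & Flower): returns
    (QHmin, QCmin, pinch shifted temperature).
    hot/cold: lists of (T_supply, T_target, CP)."""
    half = dt_min / 2.0
    sh = [(ts - half, tt - half, cp) for ts, tt, cp in hot]    # hot shifted down
    sc = [(ts + half, tt + half, cp) for ts, tt, cp in cold]   # cold shifted up
    temps = sorted({t for s in sh + sc for t in s[:2]}, reverse=True)
    residuals = [0.0]                      # heat cascade with zero hot utility
    for t_hi, t_lo in zip(temps, temps[1:]):
        cp_hot = sum(cp for ts, tt, cp in sh if ts >= t_hi and tt <= t_lo)
        cp_cold = sum(cp for ts, tt, cp in sc if tt >= t_hi and ts <= t_lo)
        residuals.append(residuals[-1] + (cp_hot - cp_cold) * (t_hi - t_lo))
    qh_min = max(0.0, -min(residuals))     # fix the most negative residual
    cascade = [r + qh_min for r in residuals]
    qc_min = cascade[-1]
    pinch = temps[cascade.index(min(cascade))]   # zero-residual point
    return qh_min, qc_min, pinch

# One hypothetical hot and one cold stream, DTmin = 10:
print(problem_table([(150, 50, 2.0)], [(50, 150, 2.5)], 10.0))
```

The corrected cascade values, plotted against the shifted temperatures, are also the points of the Grand Composite Curve discussed below.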
To summarize, the composite curves provide overall energy targets but do not clearly
indicate how much energy must be supplied by different utility levels. The utility mix
is determined by the Grand Composite Curve.
The information required for the construction of the GCC comes directly from the
Problem Table Algorithm developed by Linnhoff & Flower (1978). The method
involves shifting (along the temperature [Y] axis) of the hot composite curve down by
½ DTmin and that of cold composite curve up by ½ DTmin. The vertical axis on the
shifted composite curves shows process interval temperature. In other words, the
curves are shifted by subtracting part of the allowable temperature approach from the
hot stream temperatures and adding the remaining part of the allowable temperature
approach to the cold stream temperatures. The result is a scale based upon process
temperature having an allowance for temperature approach (DTmin). The Grand
Composite Curve is then constructed from the enthalpy (horizontal) differences
between the shifted composite curves at different temperatures. On the GCC, the
horizontal distance separating the curve from the vertical axis at the top of the
temperature scale shows the overall hot utility consumption of the process.
Figure 7 shows that it is not necessary to supply the hot utility at the top temperature
level. The GCC indicates that we can supply the hot utility over two temperature
levels TH1 (HP steam) and TH2 (LP steam). Recall that, when placing utilities in the
GCC, interval temperatures, and not actual utility temperatures, should be used. The total
minimum hot utility requirement remains the same: QHmin = H1 (HP steam) + H2
(LP steam). Similarly, QCmin = C1 (Refrigerant) +C2 (CW). The points TH2 and TC2
where the H2 and C2 levels touch the grand composite curve are called the "Utility
Pinches." The shaded green pockets represent the process-to-process heat exchange.
In summary, the grand composite curve is one of the most basic tools used in pinch
analysis for the selection of the appropriate utility levels and for targeting of a given
set of multiple utility levels. The targeting involves setting appropriate loads for the
various utility levels by maximizing the least expensive utility loads and minimizing
the loads on the most expensive utilities.
If the unit cost of each utility is known, the total energy cost can be calculated as the
sum, over all utility levels, of each utility duty multiplied by its unit cost:
Total Energy Cost = Σ (QU × CU), where QU is the duty and CU the unit cost of utility U.
The capital cost of a heat exchanger network is dependent upon three factors: the number of
exchangers, the overall network area, and the distribution of that area between the exchangers.
Pinch analysis enables targets for the overall heat transfer area and minimum number
of units of a heat exchanger network (HEN) to be predicted prior to detailed design. It
is assumed that the area is evenly distributed between the units, since the actual area
distribution cannot be predicted ahead of detailed design.
Area = Q / [ U x dTLM ]
The composite curves can be divided into a set of adjoining enthalpy intervals such
that within each interval, the hot and cold composite curves do not change slope. Here
the heat exchange is assumed to be "vertical" (pure counter-current heat exchange).
The hot streams in any enthalpy interval, at any point, exchange heat with the cold
streams at the temperature vertically below them. The total area of the HEN (Amin) is
given by the formula in Figure 8, where i denotes the ith enthalpy interval,
j denotes the jth stream, and dTLM denotes the LMTD in the ith interval.
The actual HEN total area required is generally within 10% of the area target as
calculated above. With inclusion of temperature correction factors area targeting can
be extended to non counter-current heat exchange as well.
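The counter-current area target can be sketched as follows; the intervals, duties, and film coefficients passed in the usage line are hypothetical:

```python
import math

def lmtd(dt1, dt2):
    """Log mean of the temperature differences at the two ends of an interval."""
    return dt1 if math.isclose(dt1, dt2) else (dt1 - dt2) / math.log(dt1 / dt2)

def area_target(intervals):
    """Counter-current area target:
        Amin = sum_i [ (1/dTLM_i) * sum_j (q_ij / h_j) ]
    intervals: list of (dt_end1, dt_end2, [(q_j, h_j), ...]), where the first
    two entries are the hot-cold temperature differences at the two ends of
    the enthalpy interval, and the list holds each stream's duty and film
    coefficient within that interval."""
    total = 0.0
    for dt1, dt2, loads in intervals:
        total += sum(q / h for q, h in loads) / lmtd(dt1, dt2)
    return total

# A single hypothetical interval: dT of 60/20 °F at the two ends, one hot
# stream (q = 1.0e6 Btu/h, h = 500) and one cold stream (q = 1.0e6, h = 1000):
print(round(area_target([(60.0, 20.0, [(1.0e6, 500.0), (1.0e6, 1000.0)])]), 1))
```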
Nmin,MER = [Nh + Nc + Nu − 1]AP + [Nh + Nc + Nu − 1]BP
Where:
Nh = number of hot streams, Nc = number of cold streams, and Nu = number of utility
streams; the subscripts AP and BP denote above and below the pinch, respectively.
For the Exchanger Cost Equation shown above, typical values for a carbon steel shell
and tube exchanger would be a = 16,000, b = 3,200, and c = 0.7. The installed cost
can be considered to be 3.5 times the purchased cost given by the Exchanger Cost
Equation.
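Putting the units target and the cost equation together (the a, b, c values and 3.5 installation factor are the ones quoted above; the stream counts and area target in the usage lines are hypothetical):

```python
def units_target(above, below):
    """Nmin,MER = [Nh + Nc + Nu - 1]_AP + [Nh + Nc + Nu - 1]_BP
    above/below: (Nh, Nc, Nu) stream counts on each side of the pinch."""
    return (sum(above) - 1) + (sum(below) - 1)

def capital_target(a_min, n_min, a=16000.0, b=3200.0, c=0.7, install=3.5):
    """Network capital cost target, assuming the area target a_min is spread
    evenly over n_min units. Purchased cost per unit = a + b * (A/N)**c;
    installed cost = 3.5 x purchased."""
    per_unit = a + b * (a_min / n_min) ** c
    return install * n_min * per_unit

n = units_target(above=(2, 2, 1), below=(1, 2, 1))   # hypothetical stream counts
print(n)                                             # -> 7
print(round(capital_target(a_min=7000.0, n_min=n)))  # hypothetical 7,000 ft2 target
```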
To arrive at an optimum DTmin value, the total annual cost (the sum of total annual
energy and capital cost) is plotted at varying DTmin values (Figure 9). Three key
observations can be made from Figure 9:
a. An increase in DTmin values results in higher energy costs and lower capital
costs.
b. A decrease in DTmin values results in lower energy costs and higher
capital costs.
c. An optimum DTmin exists where the total annual cost of energy and
capital costs is minimized.
The heat exchanger network designed on the basis of the estimated optimum DTmin
value is not always the most appropriate design. A very small DTmin value, perhaps
8 °C, can lead to a very complicated network design with a large total area due to low
driving forces. The designer, in practice, selects a higher value (15 °C) and calculates
the marginal increases in utility duties and area requirements. If the marginal cost
increase is small, the higher value of DTmin is selected as the practical pinch point for
the HEN design.
The pinch divides the process into two separate systems each of which is in enthalpy
balance with the utility. The pinch point is unique for each process. Above the pinch,
only the hot utility is required. Below the pinch, only the cold utility is required.
Hence, for an optimum design, no heat should be transferred across the pinch. This is
known as the key concept in Pinch Technology.
To summarize, Pinch Technology gives three rules that form the basis for practical
network design:
No heat should be transferred across the pinch.
There should be no external cooling above the pinch.
There should be no external heating below the pinch.
Violation of any of the above rules results in higher energy requirements than the
minimum requirements theoretically possible.
Plus/Minus Principle: The overall energy needs of a process can be further reduced
by introducing process changes (changes in the process heat and material balance).
There are several parameters that could be changed such as reactor conversions,
distillation column operating pressures and reflux ratios, feed vaporization pressures,
or pump-around flow rates. The number of possible process changes is nearly infinite.
By applying the pinch rules as discussed above, it is possible to identify changes in
the appropriate process parameter that will have a favorable impact on energy
consumption. This is called the "Plus/Minus Principle."
Applying the pinch rules to the study of composite curves provides us the following
guidelines:
Increase (plus) the total hot stream heat duty above the pinch.
Decrease (minus) the total cold stream heat duty above the pinch.
Decrease (minus) the total hot stream heat duty below the pinch.
Increase (plus) the total cold stream heat duty below the pinch.
These simple guidelines provide a definite reference for the adjustment of single heat
duties such as vaporization of a recycle, pump-around condensing duty, and others.
Often it is possible to change temperatures rather than the heat duties. The target
should be to shift hot streams from below the pinch to above it and cold streams from
above the pinch to below it.
The process changes that can help achieve such stream shifts essentially involve
changes in following operating parameters:
reactor pressures/temperatures
distillation column temperatures, reflux ratios, feed conditions, pump-around
conditions, intermediate condensers
evaporator pressures
storage vessel temperatures
For example, if the pressure of a feed vaporizer is lowered, the vaporization duty can
shift from above the pinch to below it. This leads to a reduction in both the hot and
cold utilities.
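The arithmetic behind this double saving can be sketched in a few lines (the function and the numbers are hypothetical, for illustration only): the shifted duty no longer has to be supplied by hot utility above the pinch, and below the pinch it absorbs heat that would otherwise be rejected to cold utility.

```python
def shift_cold_duty_below_pinch(q_hot_min, q_cold_min, q_shift):
    """Move q_shift of a cold duty (e.g. feed vaporization) from above the
    pinch to below it. Above the pinch, hot utility no longer supplies it;
    below the pinch, it soaks up heat previously rejected to cold utility."""
    q_shift = min(q_shift, q_hot_min, q_cold_min)  # cannot save more than either target
    return q_hot_min - q_shift, q_cold_min - q_shift

# e.g. targets of 7.5 MW hot / 10.0 MW cold; shifting a 3 MW vaporizer duty
# reduces both targets by 3 MW:
print(shift_cold_duty_below_pinch(7.5, 10.0, 3.0))
```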
In addition to the above pinch rules and principles, a large number of factors must also
be considered during the design of heat recovery networks. The most important are
operating cost, capital cost, safety, operability, future requirements, and plant
operating integrity. Operating costs are dependent on hot and cold utility requirements
as well as pumping and compressor costs. The capital cost of a network is dependent
on a number of factors including the number of heat exchangers, heat transfer areas,
materials of construction, piping, and the cost of supporting foundations and
structures.
With a little practice, the above principles enable the designer to quickly scan
through 40-50 possible modifications and choose the 3 or 4 that will lead to the best
overall cost effects.
The essence of the pinch approach is to explore the options of modifying the core
process design, heat exchangers, and utility systems with the ultimate goal of reducing
the energy and/or capital cost.
The design of a new HEN is best executed using the "Pinch Design Method
(PDM)". The systematic application of the PDM allows the design of a good network
that achieves the energy targets within practical limits. The method incorporates two
fundamentally important features: (1) it recognizes that the pinch region is the most
constrained part of the problem, so it starts the design at the pinch and develops the
network by moving away from it, and (2) it allows the designer to choose between
match options.
In effect, the design of the network examines which "hot" streams can be matched to
"cold" streams via heat recovery. This is achieved by employing "tick-off"
heuristics to set the heat loads on the pinch exchangers. Every match brings one
stream to its target temperature. As the pinch divides the heat exchange system into
two thermally independent regions, the networks above and below the pinch are
designed separately. Once heat recovery is maximized, the remaining thermal
needs must be supplied by the utilities.
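A greedy sketch of the above-pinch matching step is shown below. The stream names, CP values, and duties are hypothetical, and applying the CP inequality to every match (rather than only to the pinch matches, as the full method requires) is a deliberate simplification; the tick-off heuristic sets each match load to the smaller remaining duty so that every match exhausts one stream.

```python
def match_above_pinch(hot, cold):
    """Greedy above-pinch matching sketch. Streams are (name, CP, duty)
    tuples, with duties measured from the pinch to the target temperature."""
    hot = [[n, cp, q] for n, cp, q in hot]
    cold = [[n, cp, q] for n, cp, q in cold]
    matches = []
    for h in hot:
        for c in cold:
            # CP inequality for feasible matches above the pinch: CP_hot <= CP_cold
            if c[2] > 0 and h[1] <= c[1]:
                q = min(h[2], c[2])  # tick-off heuristic: exhaust one stream
                matches.append((h[0], c[0], q))
                h[2] -= q
                c[2] -= q
                if h[2] == 0:
                    break
    residual_hot = sum(h[2] for h in hot)  # must be 0: no cold utility above the pinch
    heater_duty = sum(c[2] for c in cold)  # leftover cold duty goes to hot utility
    return matches, residual_hot, heater_duty

hot = [("H1", 0.15, 15.0), ("H2", 0.25, 12.5)]
cold = [("C1", 0.20, 8.0), ("C2", 0.30, 27.0)]
matches, residual, heater = match_above_pinch(hot, cold)
print(matches, residual, heater)
```

All hot-stream duty is recovered (residual of zero) and the cold duty left over after matching is exactly the hot utility requirement, consistent with the rule that no cold utility is used above the pinch.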
The graphical method of representing flow streams and heat recovery matches is
called a ‘grid diagram’ (Figure 10).
All the cold (blue lines) and hot (red lines) streams are represented by horizontal lines.
The entrance and exit temperatures are shown at either end. The vertical line in the
middle represents the pinch temperature. The circles represent heat exchangers.
Unconnected circles represent exchangers using utility heating and cooling.
The design of a network is based on certain guidelines like the "CP Inequality Rule",
"Stream Splitting", "Driving Force Plot" and "Remaining Problem Analysis". The
stepwise procedure can be understood better with the help of an example problem
(Link-5).
Having made all the possible matches, the two designs above and below the pinch are
then brought together and usually refined to further minimize the capital cost. After
the network has been designed according to the pinch rules, it can be further subjected
to energy optimization. Optimizing the network involves both topological and
parametric changes of the initial design in order to minimize the total cost. For more
details on HEN Design check the Link–6 at the end of the article.
One of the main advantages of Pinch Technology over conventional design methods is
the ability to set energy and capital cost targets for an individual process or for an
entire production site ahead of design. Therefore, in advance of identifying any
projects, we know the scope for energy savings and investment requirements.
Conduct Process Simulation Studies: Pinch replaces the old energy studies with
information that can be easily updated using simulation. Such simulation studies can
help avoid unnecessary capital costs by identifying energy savings with a smaller
investment before the projects are implemented.
Set Practical Targets: By taking into account practical constraints (difficult fluids,
layout, safety, etc.), theoretical targets are modified so that they can be realistically
achieved. Comparing practical with theoretical targets quantifies opportunities "lost"
by constraints - a vital insight for long-term development.
Decide what to do with low-grade waste heat: Pinch analysis shows which waste-heat
streams can be recovered and lends insight into the most effective means of recovery.
Industrial Applications
A Case Study: When Pennzoil added a residual catalytic cracking (RCC) unit, the gas
plant associated with the RCC, and an alkylation unit at its Atlas Refining facility in
Shreveport, energy efficiency was one of the major considerations in engineering the
refinery expansion. Rather than simply incorporating the standard process packages
provided by licensors, the Electric Power Research Institute (EPRI) and Pennzoil's
energy provider, SWEPCO, used pinch technology to carry out an optimization study
of the new units and the utility systems that serve them. The pinch
study identified opportunities for saving up to 23.7% of the process heating through
improved heat integration. Net savings for Pennzoil were estimated at $13.7 million
over 10 years.