Increased throughput
Improved forecasting.
Commercial installations. Installed in 16 olefins sites
worldwide.
Licensor. Emerson Process Management, Austin, Texas;
www.emersonprocess.com/solutions/aat. Contact: Emer-
son Process Management, Tim Olsen, Process and Perfor-
mance Consultant, Advanced Applied Technologies, tel:
(641) 754-3459, e-mail: Tim.Olsen@EmersonProcess.com.
Advanced Process Control and Information Systems 2003
[Figure: real-time optimizer (RTE) with MINLP solver, rigorous models and data validation on DeltaV via an OPC interface, coordinating heater APC (severity, compression, constraints), column APC (separation, quality, constraints) and compressor APC (suction pressure, constraints)]
Olefins
Applications. Olefins plants are generally well suited
for advanced process control and real-time optimization
applications. These plants are ideal candidates to bene-
fit from: energy reduction, increased capacity, optimiza-
tion of yields and feedstock selection, and for providing
valuable information to operators and engineers to run
the plant at optimum conditions. Model-based advanced
control enforces the optimum setpoints while respect-
ing changing operating constraints.
Control strategies. Furnaces, the quench area, distillation
columns, and the acetylene and MAPD converters are
controlled with Honeywell's multivariable Profit Controllers.
Furnace controls. Each controller is responsible for
achieving optimization targets for the furnace while pre-
venting constraint violations. The optimizer calculates a
severity, feed rate and steam-to-hydrocarbon ratio target
for each furnace that will then be implemented by the
individual furnace Profit Controller.
Separation area. Where possible, the column controls
are implemented as independent controllers. However, in
many applications, controls for the cold fractionation
columns (e.g., demethanizer, deethanizer and ethylene
fractionator) are coupled with the refrigeration controls
due to the energy link between these columns.
Converters. The acetylene and MAPD converters are
controlled with Profit Controllers. The outlet acetylene or
MAPD is controlled by adjusting the hydrogen-to-diolefin
ratio. The Profit Controller sets the inlet bed tempera-
ture and first bed outlet conversion while maintaining
the converters within constraints.
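The ratio strategy above can be sketched in a few lines. This is an illustrative Python sketch only; the flows, gains and targets are hypothetical, not Profit Controller internals:

```python
def hydrogen_setpoint(diolefin_flow, ratio_target, outlet_acetylene_ppm,
                      acetylene_target_ppm, trim_gain=0.02):
    """Feedforward ratio control with feedback trim (illustrative only).

    Hydrogen flow tracks the measured diolefin flow at the target
    hydrogen-to-diolefin ratio; a proportional trim on measured outlet
    acetylene nudges the ratio when the converter drifts. All names,
    units and gains here are hypothetical.
    """
    error_ppm = outlet_acetylene_ppm - acetylene_target_ppm
    trimmed_ratio = ratio_target + trim_gain * error_ppm / 100.0
    return diolefin_flow * trimmed_ratio
```

At target (zero error) the hydrogen setpoint is simply flow times ratio; above-target acetylene raises the ratio slightly.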
Optimization. Honeywell's plant optimizer uses furnace
yield models, material and energy balances and
constraint models to calculate the optimum targets. The
optimization hierarchy has four layers. The first layer con-
tains the Profit Controller, which holds the process at
specified setpoints with minimum energy input. Each
controller has a dynamic process model. Typically, a unit
operation is the basis for the controller.
The next layer in the hierarchy is Profit Optimizer, which
uses the controller models to coordinate furnace opera-
tion with constraints in the plant separation area. Plant
constraint information, along with feed and product
prices, is the input to the distributed quadratic optimization
function. Profit Optimizer resets the furnace
feedrates, severity, charge gas compressor suction pressure
and soft composition targets for some cold-side columns.
The third layer utilizes Profit Bridge to interface with
rigorous furnace kinetic models used to update the fur-
nace yield gains in Profit Controller and Profit Optimizer.
These nonlinear gains properly account for changing
feed compositions and coke profiles.
The top layer may be ProfitMax optimization, a rigor-
ous, first-principles mathematical model for the entire
plant that realistically represents the complex relationships
that exist between plant operating conditions, plant prof-
itability and plant constraints. ProfitMax is a self-tuning,
steady-state process model. The solution determines the
optimum steady-state operating conditions passed down
to Profit Optimizer. The table below summarizes the sim-
ilarities and differences between Profit Controller, Profit
Optimizer, Profit Bridge and ProfitMax.
Name               Model type               Scope                Run-time interval   Function
Profit Controller  Dynamic, linear          Single unit          1-2 min             Local control and optimization
Profit Optimizer   Dynamic, linear          Multiunit            1-2 min             Multiunit control and optimization
Profit Bridge      Dynamic, nonlinear       Single or multiunit  2-5 min             Nonlinear gain updating for nonlinear control and optimization
ProfitMax          Steady-state, nonlinear  Single or multiunit  1-2 hours           Global steady-state optimization
Economics. Typical improvements from advanced controls
and optimization in an ethylene plant are: 3-8%
increased ethylene production, 8-12% reduced energy
usage and 20-30% increased furnace run lengths. Typical
paybacks range from 10 to 20 months.
Commercial installations. This technology has been
implemented in 16 olefin plants around the world. Nine
Profit Optimizers have been installed, and two more are
in progress.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
Olefins
Applications. Integrated olefin process plant Design
Simulation Analysis (DSA) and Operation Simulation Analysis
(OSA) combines rigorous kinetic, knowledge-based
cracking furnace and acetylene removal reactor models
with rigorous model-based downstream compressor and
fractionation recovery system OSA for CIM and DCS
applications. The aim is to maximize daily ethylene and
propylene reactor yields and recovery while minimizing
energy use and off-spec waste, supporting process
improvement and debottlenecking.
This system can also be used for preventive mainte-
nance and accident, emergency shutdown and startup
simulation for safety and loss prevention, and supply
chain and TQM cost reductions.
Strategy.
Information knowledge base development. Olefin furnace,
acetylene reactor and downstream recovery unit
DSA and OSA have been developed and implemented
using, as the information knowledge base: the past 10
years' global fuel oil, LPG, naphtha and gas oil feedstock
procurement, inventory and supply chain costs; olefin
product spot and contract prices (DeWitt market
newsletter); full corporate/plant operating history
(including normal and emergency operations and upsets);
process unit design data; the latest literature and patent
searches; and management and plant operators' expertise.
Design and operations simulation models development.
Rigorous kinetic theories, fuzzy logic, neural networks
and chaos theory, supported by reactor and downstream
compression and fractionation train recovery expert
systems, cover the full range of operating loads and
severities for the latest licensors' designs.
The furnace reactor models track the full coking run
length to accurately predict, across the full range of gas
feeds (ethane, propane, butane and LPG), naphtha and
gas oil, the impact of feed composition, operating severity,
operating load (from 60% to 120%), steam-to-HC ratio
and outlet pressure changes on olefin yields. This
minimizes energy consumption and olefin loss while
maximizing product recovery.
Operations management implementation. The OSA
consultant, Dr. Huang, sets up cost-, quality- and
market-share-driven, performance-oriented, cross-departmental
strategic execution OSA teams to conduct design and
operation reviews and to define goals and objectives.
The teams develop and implement reactor simulations
and tie them into the downstream recovery units for
integrated olefin process system operations simulation,
optimal control and debottlenecking cost reduction.
Economics. Up to a 3% olefin yield increase over design
can be achieved, together with up to 20% over design
capacity and a 15% reduction in unit energy consumption.
This is achieved by integrating the olefin and acetylene
removal reactors into the downstream recovery units'
OSA without any equipment retrofit. Up to a $30 million
reduction in feedstock and energy unit costs, with
improved quality and market share and without staff
reduction, is possible.
Commercial installations. Four integrated olefin plant
operation improvements have been implemented by
olefin plant OSA teams and Dr. Warren Huang. Twenty
cost reduction workshops have been offered.
References. All by Dr. Warren Huang, OSA: "Capitalize
on LPG feed changes," Oil & Gas Journal, April 1979;
"Improve process by OSA," Hydrocarbon Processing, May
1980; "Improve naphtha cracker operations," Hydrocarbon
Processing, February 1980; a 12-paper series in Oil & Gas
Journal and Hydrocarbon Processing, 1980-1983; "Control
of cracking furnace," US patents, 1981 and 1982; "Energy
and resource conservation in olefin plant design and
operation," presented to World Congress, Montreal and
Tokyo, 1982 and 1986; "Refinery, petrochemical process
improvement, debottleneck on PC," ISA, Philadelphia,
1989; Large Chemical Plant conference, Antwerp, Belgium,
1992 and 1995; INTER PEC CHINA '91, Beijing, 1991 and
1995; "OSA decision-supported TQM," Quality Productivity
Conference by Hydrocarbon Processing, Houston, 1993;
"Goal, mission performance-oriented design/operations
simulation analysis predictive control maximizes
refinery/petrochemical productivity, flexibility," Dallas,
1999; "Supply chain strategy maximizes oil, chemical
profits," conference/workshops, Singapore, April 26-27, 2001.
Licensor. OSA Int'l Operations Analysis, San Francisco,
California; Website: www.osawh.com; e-mail:
wh3928@yahoo.com.
Olefins
Application. Economic optimization of olefins plant
operations is based on a combination of the NOVA opti-
mizer and STAR multivariable predictive controllers. Both
NOVA and STAR are part of the DOT Products advanced
process modeling and control suite.
Optimization strategy. NOVA consists of a solution
engine for nonlinear optimization and equation solving,
a library of equation-based unit operations models and
a pure component physical property system.
A fully rigorous equation-based plant model is typi-
cally solved first in parameter estimation mode to match
the model to current plant operation. This parameter
estimation problem is posed so as to attain 100% solution
robustness.
After the model is matched to the actual plant, it is
then run in economic optimization mode. The modeling
approach and fidelity are selected to ensure accurate
prediction of dependent variables to reflect plant con-
straints. Independent variables for online optimization
typically include furnace feed rates and severities; con-
trolled pressures for main compressors (cracked gas, ethy-
lene, propylene); soft specifications, controlled pressures
and feed distribution, preheat and side reboil for sepa-
ration columns.
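The two-step sequence above, parameter estimation followed by economic optimization on the matched model, can be sketched with a toy single-unit model. All coefficients, data and prices below are illustrative assumptions, not the NOVA model library:

```python
import numpy as np

def yield_model(severity, eff):
    # Toy quadratic yield-vs-severity curve; coefficients are invented
    # for illustration, not taken from any rigorous furnace model.
    return eff * (1.2 * severity - 0.6 * severity ** 2)

# 1) Parameter-estimation mode: match the model to current plant data.
sev_meas = np.array([0.7, 0.8, 0.9])
yld_meas = np.array([0.55, 0.58, 0.59])
g = yield_model(sev_meas, 1.0)
eff_hat = float(g @ yld_meas / (g @ g))      # linear least squares for eff

# 2) Economic-optimization mode: maximize margin on the matched model.
price, feed_cost = 600.0, 250.0              # $/t, illustrative economics
sev_grid = np.linspace(0.6, 1.1, 501)
margin = price * yield_model(sev_grid, eff_hat) - feed_cost * sev_grid
sev_opt = float(sev_grid[np.argmax(margin)]) # optimum severity target
```

A grid search stands in here for the equation-based nonlinear solver; the point is the sequence (fit, then optimize the same model), not the solution method.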
Online optimization is scheduled by a real-time exec-
utive that deals with data and task management.
Control strategy. Results from the optimization act as
setpoints and limits for STAR multivariable predictive
controllers that run every 1-3 minutes to ensure that
equipment constraints are honored as the optimization
results are implemented in the plant.
STAR multivariable predictive controllers are imple-
mented on the cracking furnaces, quench towers,
demethanizer and deep chilling, C2 and C3 separation,
and compressors in the separation train. The multivariable
applications are designed respecting the significant inter-
actions and complex dynamics of the separation area.
STAR is an adaptive multivariable predictive controller
designed to make large applications easier to implement
and maintain. STAR implementation only requires steady-
state gain relationships. Calculations to synthesize pro-
cess dynamics are then performed by the controller at
each control cycle. STAR thus captures the benefits of
multivariable predictive control technology while mini-
mizing the difficulties and disadvantages.
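A minimal sketch of gain-only control in the spirit described, assuming a hypothetical 2x2 steady-state gain matrix and a first-order filter standing in for the controller-synthesized dynamics (this is not STAR's actual algorithm):

```python
import numpy as np

# Steady-state gain matrix: dCV/dMV for 2 CVs x 2 MVs; values invented.
G = np.array([[0.8, -0.3],
              [0.2,  0.6]])

def control_move(cv_error, alpha=0.3):
    """One control cycle: the full steady-state MV move implied by the
    gain model, applied fractionally. The fraction alpha is an assumed
    tuning that plays the role of synthesized dynamics."""
    full_move = np.linalg.solve(G, cv_error)   # MV change zeroing the error
    return alpha * full_move

err = np.array([1.0, -0.5])      # CV setpoint minus measurement
dmv = control_move(err)          # this cycle's manipulated-variable moves
```

The appeal of the approach is visible even in the sketch: only steady-state gains need identifying, while the per-cycle filtering supplies gradual, dynamic-like behavior.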
Benefits. Gross margin improvements range from 3-10%,
depending on economics, feedstock type and flexibility,
and whether the plant is market- or production-constrained.
Commercial installations. This technology has been
implemented in 14 olefins units around the world.
Licensor. PAS, Inc., Houston, Texas. Contact: e-mail:
sales@pas.com; Website: www.pas.com; tel: (281) 286-
6565.
Olefins
Application. Ethylene is a very competitive business, and
advanced control/optimization strategies can give the
user a competitive edge. Ethylene is produced by a
pyrolysis reaction in multitube cracking furnaces. Model-
based control strategies and real-time optimization can
have significant impacts on yields and economics.
Control strategy. The control philosophy applied to
modern ethylene plants addresses both the hot and cold
sides of the plant and involves four distinct levels:
Distributed control. The first control level is imple-
mented on the DCS level. Both regulatory and advanced
regulatory control strategies are implemented at this
level.
Advanced constraint controls. This level involves
application of multivariable model-based constraint con-
trollers. These multivariable controllers maintain stable
operation during upsets and keep areas within the plant
operating against their local constraints.
Plantwide constraint control. Plantwide LPs provide
shifting constraints for the multivariable controllers.
These LPs operate in real time and serve to coordinate
the operation of the multivariable controllers. This appli-
cation layer keeps the plant operating against several
constraints in multiple plant areas.
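The coordinating LP layer can be illustrated with a small linear program. The margins and constraint coefficients are hypothetical, and SciPy is used here as a stand-in for any vendor LP engine:

```python
from scipy.optimize import linprog

# Plantwide LP sketch: choose furnace feed rates x1, x2 (t/h) to
# maximize margin subject to shared downstream constraints.
c = [-50.0, -40.0]                      # negated $/t margins (linprog minimizes)
A_ub = [[1.0, 1.0],                     # cracked-gas compressor load limit
        [0.6, 0.9]]                     # cold-box refrigeration duty limit
b_ub = [100.0, 75.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 60), (0, 60)], method="highs")
# res.x holds the feed-rate targets handed down to the multivariable
# controllers; re-solving as constraints shift keeps them coordinated.
```

In practice the active constraint set shifts in real time, which is exactly why the LP is re-run rather than solved once.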
Plantwide rigorous optimization. A plantwide rig-
orous model of the ethylene plant is employed to pro-
vide optimal targets to the plantwide constraint control
LPs. The model combines rigorous kinetic models with
thermodynamic property models and equipment models.
This model is also periodically parameterized or updated
using data from the plant. This top level optimization
allows changing operation based on different objectives
such as maximizing plant profit or olefin production or
minimizing costs at a fixed olefin production.
Economics. Typical benefits have been reported from
$1 million to $3 million per year.
Commercial installations. The control and optimization
philosophies have been implemented at six different
sites. Some of these installations involve multiple ethylene
units.
Licensor. Yokogawa Corporation of America, Systems
Division, Stafford, Texas, info@us.yokogawa.com.
[Figure: utility system schematic: gas turbines GT1 and GT2 with waste-heat boilers (WHB), boiler house and water treatment feeding HPS, MPS and LPS steam headers; letdown (LD) and turbines T1 and T2 serving process users, with electricity import and export]
Olefins (inline laboratory)
Application. Use of near infrared spectroscopy (NIR) as
an inline, real-time laboratory to provide control and
decision support systems with timely and accurate qual-
ity information.
Strategy.
High-frequency analysis of naphtha feeds to the
cracking furnaces: specific gravity, molecular weight,
PIONA per carbon atom to be used by the Technip SPYRO
technology, distillation curve and coking index
Feed analysis to hydrogenation units: PIONA, dienes,
BTX
Pyrolysis gasoline analysis: RON, MON, Rvp, PIONA.
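The inline-laboratory idea, mapping spectra to properties, can be sketched with a plain least-squares calibration standing in for the multivariate (e.g., PLS) models typically used with NIR; all data below are synthetic:

```python
import numpy as np

# Synthetic calibration set: absorbance at 3 wavelengths -> a property
# (say, specific gravity). Coefficients and noise level are invented.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -0.2, 0.1])
spectra = rng.normal(size=(40, 3))            # 40 samples x 3 wavelengths
sg = spectra @ true_w + 0.85 + rng.normal(scale=0.001, size=40)

X = np.column_stack([spectra, np.ones(40)])   # add intercept column
coef, *_ = np.linalg.lstsq(X, sg, rcond=None) # fit calibration model

def predict_sg(absorbance):
    """Real-time property prediction from a new spectrum."""
    return float(np.r_[absorbance, 1.0] @ coef)
```

Once calibrated, each new spectrum yields an immediate property estimate, which is what makes the feed-forward severity adjustment described above possible.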
Economics.
Real-time assessment of feed quality variations
for feed-forward adjustment of furnace severity control
and plant optimization using SPYRO as yield predictor
Dienes hydrogenation optimization
Safe naphtha quality swings
Optimal evaluation of pyrolysis gasoline selling price.
Commercial installations. Several steam crackers in
Europe and South America.
Licensor. Technip France on behalf of ABB Automation.
Contact: Marc Valleur, Manager, ASE Paris (Advanced
Systems Engineering), Technip; tel: (33) 1 47 78 21 83; fax:
(33) 1 47 78 28 16; e-mail: mvalleur@technip.com; Web-
site: www.technip.com.
Online controller
maintenance
Application. Long-term economic benefit of an APC
system strongly depends on the success of the controller
maintenance activities. Without adequate maintenance,
controller performance can slowly deteriorate, resulting
in an erosion of APC benefits and loss of operator
confidence. Controller maintenance activities, however,
can sustain APC system value, and provide substantial
benefits:
Optimal economic performance of the controller
over its full life cycle
More effective leverage of control system support
resources
Improved economic benefit due to higher onstream
factor.
Strategy. Advanced model-predictive control systems
are now deployed in thousands of applications world-
wide, delivering substantial financial benefits. The process,
however, is subject to multiple and frequent changes.
Seasonal variations, changes in operational mandates
and process degradation as well as process improvements
can all adversely impact controller performance.
For this reason, the long-term economic benefit of an
APC system depends on the success of controller main-
tenance activities. Aspen Sustained Value consists of two
primary software tools, Aspen Watch and Aspen SmartStep,
combined with practical training and input from
AspenTech's process control domain experts. The solu-
tion can dramatically improve controller performance,
while providing a significant reduction in the number of
people required to support the application.
Aspen Watch is AspenTech's premier technology for
control system performance monitoring and diagnosis. A
layered product for DMCplus, it tightly integrates
advanced control and database technology into a revo-
lutionary new tool. Aspen Watch provides full uncom-
pressed historization and visualization of all controller-cal-
culated data on a cycle-to-cycle basis.
This allows support engineers, using accumulated history,
to identify trends, problems and potential areas
for improvement. Aspen Watch also features an expand-
ing range of performance monitoring and diagnostic
application modules, including PID controller perfor-
mance monitoring and tuning technology. This technol-
ogy leverages limited engineering resources, providing
prioritization of engineering effort and reducing sup-
port requirements.
Aspen SmartStep is used to audit and optimize DMC-
plus performance. Based on a patent-pending constrained
step-testing algorithm, Aspen SmartStep automatically
generates high-quality closed-loop step test data with
reduced engineering supervision, while observing all pro-
cess unit operability constraints. Support engineers use it
to conduct focused retests whenever performance
degrades due to changes in the process unit.
Economics. Aspen Sustained Value can help a typical
refinery increase APC benefits by as much as 15% to 20%.
Commercial installations. Aspen Watch and Aspen
SmartStep are licensed at over 70 commercial sites world-
wide.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Online controller
maintenance:
Regulatory and MPC
Application. ProcessDoctor Online provides a complete
solution for constantly monitoring and diagnosing the
health of regulatory controls, identifying and prioritizing
problem loops, and monitoring advanced controllers
to ensure that the return on investment for regulatory
and advanced control technology is sustained over time.
Strategy. ProcessDoctor Online is a server-based appli-
cation that uses standard process data (in normal closed-
loop mode) to constantly monitor plant control asset
performance. Scheduled and on-demand reports are
Web-delivered, and provide information to all plant
personnel levels, from supervisory to engineering to
technicians, to allow for effective deployment of technical
personnel, and to provide the appropriate information
and recommendations to address maintenance issues
(provides tuning, valve and model information). Top-10
lists of worst-performing loops are provided, and helpful
key performance indicators (KPIs) are generated, such as
a Relative Performance Index benchmarking against best
past performance, Six Sigma information, valve stiction and
many other indicators of control loop health.
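One of the KPIs named above, a benchmark against best past performance, might be computed along these lines. The formula is an assumption for illustration; Matrikon's actual Relative Performance Index is not published here:

```python
import statistics

def relative_performance_index(best_period_pv, current_pv):
    """KPI sketch: ratio of the loop's best historical error variance
    to its current error variance. A value of 1.0 means the loop is
    performing as well as its best past period; values near 0 flag
    deterioration. Name mirrors the text; the formula is assumed."""
    best = statistics.pvariance(best_period_pv)
    now = statistics.pvariance(current_pv)
    return best / now if now > 0 else float("inf")
```

Ranking loops by such an index is what makes a "top 10 worst" list cheap to produce from ordinary closed-loop data.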
The model predictive controller (MPC) module helps sus-
tain performance of multivendor MPC technologies,
including Honeywell's Profit Controller and Aspen's
DMCplus, identifying problems such as large model errors and
more. Additional add-on functionality is available for
process modeling.
ProcessDoctor Online has universal connectivity to all
plant control systems and process historians. Standard
templates for all major systems and configurations allow
for fast installation and instant value.
Economics. ROI is often seen in 3-6 months. Example
benefits: increased throughput, closer operation to plant
constraints, reduced equipment wear, improved quality
control, more stable plant operation due to reduced vari-
ability, increased effectiveness of technical personnel and
better in-service factor for model-based multivariable
predictive controllers.
Commercial installations. ProcessDoctor was one of
the first control loop assessment products available,
and has been installed at over 50 sites worldwide, includ-
ing North America, the Middle East, Asia, Europe and
Australia.
Licensor. Matrikon Inc., Houston, Texas, and Edmonton,
Alberta, and 15 offices worldwide. Contact: e-mail:
processdoctor@matrikon.com; Website: www.matrikon.com.
Operational excellence
solutions
Application. The Value-added Information Sourced
Applications (VISA) suite from Yokogawa is a set of
software applications designed to optimize operations
by delivering timely and accurate information to operational
and management staff. VISA is a core product delivered as
part of the Yokogawa Enterprise Technology Solutions
concept. VISA integrates information from many soft-
ware packages in readily accessed, user-contextual, self-
configuring management level reports. These applica-
tions include, but are not limited to:
Mass balancing: provides hourly, shift and daily
mass or energy balances based on volume, mass or mole
at the plant or unit level.
Production accounting: expands on the validated
balance data to provide a suite of production reports
relating to inventories, utilizations, consumptions,
receipts, sales and similar.
Performance monitoring: combines mass balance
data with additional operational data to calculate key
performance indicators such as yields and efficiencies,
which are then presented in actual-versus-plan reports.
Environmental monitoring: provides real-time
monitoring, calculation, alarms and reports for all
emissions, reducing the risk of legislative noncompliance
and demonstrating due diligence in emissions
management strategies.
Operations Activity Management (OAM): provides
a log-book to ensure improved workflow and to track key
operator instructions and actions from initial automatic
notification through completion.
Data reconciliation: provides process data validation
and measurement inference to guarantee data accuracy
and quality.
Laboratory information management system (LIMS):
VISA provides a structured solution to integrate LIMS
data into the business environment, providing analysis
of quality giveaway, costs, etc.
Plant information management system (PIMS): VISA
is PIMS-impartial, accepting data from a wide range of
PIMS systems. The VISA engine provides business-intelligent
preprocessing of PIMS data, long-term storage of
low-granularity data and presentation of information
from process control systems.
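The data-reconciliation application in the list above can be illustrated with the classical least-squares balance closure; the stream layout, flows and variances are made up for the example:

```python
import numpy as np

# Data-reconciliation sketch: adjust flow measurements so they close
# the linear mass balance A @ x = 0, weighting adjustments by each
# meter's variance (noisier meters absorb more of the correction).
A = np.array([[1.0, -1.0, -1.0]])        # feed = product + byproduct
x_meas = np.array([100.0, 60.0, 35.0])   # raw meter readings (t/h)
V = np.diag([4.0, 1.0, 1.0])             # measurement variances

correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ x_meas)
x_rec = x_meas - correction              # reconciled flows, balance closed
```

The raw readings show a 5 t/h imbalance; after reconciliation the balance closes exactly, with the least-trusted meter (the feed, variance 4) taking most of the adjustment.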
Strategy. VISA is based on data gathered from the history
modules of the contributing systems lower in the
industrial software pyramid. Once in VISA, this data may
be freely combined to create new values of direct relevance
to operational optimization.
VISA is the essential link between plant-level data and
the demand for derived and reconciled plant management
data at the ERP and business optimization levels.
Economics. Business benefits achieved with the successful
implementation of VISA include:
Reduced unplanned downtime
Improved decision making
Optimized performance
Improved yield
Empowerment of operators
Improved operator response
Improved plant utilization
Effective emissions management
Optimized planning and scheduling cycles
Better stock control
Integrated planning cycles
Commercial installations. Yokogawa has over 200 sites
where information management systems have been
implemented. In excess of 150 of these are in the
hydrocarbons arena.
Licensor. Yokogawa Electric Corporation, Tokyo,
Japan, e-mail: info@ymx.yokogawa.com, Website:
www.ymx.yokogawa.com.
[Figure: VISA architecture: a PCS interface (OLE for Process Control) feeds mass balancing, data reconciliation, production accounting, performance monitoring, environment monitoring, laboratory management, operations activity management, warehouse management, EDMS and maintenance management modules above the plant information management system, linked to POP, SOP and finance systems]
Phenol
Application. Phenol is produced by acid cleavage of cumene
hydroperoxide (CHP), derived from catalytic oxidation
of cumene through several reactors in series. The
byproduct of this reaction is acetone. The catalytic oxi-
dation process is a slow reaction and can result in uneven
compositions of CHP in the reactor product, which sig-
nificantly affect the phenol and acetone product distri-
bution.
The distillation side of the phenol plant separates and
purifies a crude mixture of phenol, acetone, cumene and
other materials. Phenol and acetone leave this area as
purified products. Aspen Technologys DMCplus multi-
variable control technology can significantly reduce prod-
uct variability and, thus, increase phenol plant profitability
by controlling the unit at the optimum level, subject to
constraints.
Control strategy. A DMCplus controller on the front
end of the plant can control the CHP concentration in
the reactor product by manipulating reactor feeds, oxy-
gen flows and reactor outlet temperatures. This ensures
consistent CHP in the feed for the acid cleavage tower.
The crude acetone tower is usually the first tower; it
separates the crude into an unreacted cumene and phenol
stream, and an acetone stream. The crude acetone tower oper-
ation involves azeotropic separations and it is critical to
maintain a constant temperature profile in the tower.
To recover most of the acetone into the overhead stream,
it is required to shift the water azeotrope from the ace-
tone stream into the cumene stream. The DMCplus con-
troller for the phenol tower can adjust bottoms temper-
ature, feed and reflux to maintain a stable temperature
profile. The acetone tower and AMS towers can also be
included.
A typical DMCplus controller for this unit can have as
many as 10-15 manipulated variables, 5-6 disturbance
variables and 25-30 controlled variables.
Economics. A typical increase in the phenol production
rate is 5-7%, with a payback period of 3-4 months.
Commercial installations. AspenTech has completed
four phenol projects.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Phenol
Application. The multistage reactors in a phenol plant
offer excellent opportunities to lower costs while improving
quality control of the final phenol product.
Phenol plants can have as many as four reactors in series
with the final product purity being affected by all of the
upstream reactors. This highly interactive process with
its extremely long time constants makes Emerson's model
predictive control (MPC) a valuable advanced control
application. MPC technology is one of the tools that
power Emerson's PlantWeb digital plant architecture to
improve throughput and quality, while reducing costs.
Inferential property sensors can predict concentrations
of reactor effluent for operator guidance or feedback
measurements to the MPC block. The inferential prop-
erty estimates are updated by laboratory results or online
analyzers. These predictive models must be generated
from plant historical process and laboratory data. The
product quality predictions run in real time for operator
display, trending and alarms.
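The lab-update mechanism described, inferential estimates corrected by laboratory results, might look like the following sketch. The linear model form and filter factor are assumptions for illustration, not PlantWeb internals:

```python
def update_bias(bias, lab_value, model_value, filter_factor=0.3):
    """When a lab result arrives, filter the model-vs-lab offset into
    the running bias term (filter_factor is an assumed tuning)."""
    return bias + filter_factor * (lab_value - model_value - bias)

def inferred_quality(temps, weights, bias):
    """Real-time inferential estimate between lab samples: a linear
    combination of process measurements plus the lab-updated bias.
    Weights would come from regression on plant/lab history."""
    return sum(w * t for w, t in zip(weights, temps)) + bias
```

Filtering the bias, rather than jumping to each lab value, keeps the online estimate smooth while still tracking slow drift in the process.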
MPC controls can also be implemented on the frac-
tionation section of the plant for additional benefits.
These controls help reduce process variability and lower
energy costs in the distillation columns.
Advanced control strategies are designed to achieve
a number of operating objectives:
Maximize feed rate against unit constraints while
maintaining product quality (when desired)
Stabilize and control reactor effluent concentrations
to desired targets
Minimize excess air
Minimize unit energy consumption per barrel feed.
Strategy. A single MPC application is used to manipulate
reactor temperatures and air flow to each of the
reactors to control reactor effluent concentrations and
offgas O2 concentrations. Constraints include valve,
pump, temperature and reaction rate limits. The
embedded optimization in the MPC controller algorithm
allows costs to be used to drive the unit to the
most profitable region, which is normally at minimum
air and energy consumption.
Commercial installations. MPC control on a phenol
unit implemented by Emerson has been operating on
one site for over five years.
Benefits. Phenol plant advanced controls typically pro-
duce economic savings from the following sources:
Additional capacity from operating closer to actual
process equipment limits (when desired)
Better average conversion across the reactor sys-
tem
More stable product quality
Reduced quality giveaway
Lower energy cost per barrel of feed.
The nominal value for these benefits is normally in the
range of $0.01-0.05/barrel of feed, depending on the plant's
incentives for phenol capacity, product prices and fuel
costs.
Licensor. Emerson Process Management, Austin, Texas;
www.emersonprocess.com/solutions/aat. Contact: Emer-
son Process Management, Tim Olsen, Process and Perfor-
mance Consultant, Advanced Applied Technologies, tel:
(641) 754-3459, e-mail: Tim.Olsen@EmersonProcess.com.
[Figure: phenol reactor section: four oxidation reactors R-1 to R-4 in series; fresh feed flow and recycle feed concentration are disturbance variables (DV); air flow and temperature to each reactor are manipulated variables (MV); per-reactor excess oxygen and concentration are controlled variables (CV)]
Planning and scheduling
Applications. Business.FLEX PKS software applications
provide Process Knowledge Solutions (PKS) that unify
business and production automation. Business objectives
are directly translated into manufacturing targets, and
validated production data are returned to close the
loop on the business planning cycle. Business.FLEX PKS
applications for planning and scheduling enable opti-
mal, robust production plans to be created and dis-
tributed to automation systems for execution.
The SAND module is a supply chain optimization tool
that determines the optimal method of producing prod-
ucts and satisfying customer demand with multiple man-
ufacturing facilities. A multiperiod modeling capabil-
ity is most valuable when product demands or
manufacturing capabilities are significantly different
between periods.
The ASSAY2 module is an integrated crude selection
and evaluation application to support rapid, effective
decision making about which crudes to buy, sell or trade.
ASSAY2 generates yield and quality data that are essen-
tial for evaluating crude oils selected for processing and
for preparing production plans.
The Production Planner (RPMS) module is a planning
tool that supports evaluating and selecting raw materials,
formulating optimal production plans, evaluating
capital investments, and evaluating processing and
exchange agreements.
The Production Scheduler module prepares a detailed
schedule for operations such as crude scheduling and
blending scheduling. It enables a scheduler to rapidly
respond to events such as equipment outages, and sup-
ply and distribution changes, all while maintaining a
robust, feasible and profitable schedule. It prepares an
optimal blend plan with the most economical blending
recipes for intermediate component blending to meet
the final product demand on time and without quality
giveaway.
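The "no quality giveaway" objective has a simple core: blend intermediate components so the finished product just meets its specification at minimum cost. A minimal two-component sketch of that idea follows; the component names, octane numbers and costs are hypothetical, and real blend optimizers handle many components, many qualities and nonlinear blending indices.

```python
def lever_blend(spec, low, high):
    """low, high: (name, octane, cost_per_bbl), with low octane < spec < high octane.
    Linear octane blending assumed; blending exactly to spec avoids giveaway."""
    # Lever rule: fraction of the high-octane component that hits spec exactly.
    x_high = (spec - low[1]) / (high[1] - low[1])
    recipe = {high[0]: x_high, low[0]: 1.0 - x_high}
    cost = x_high * high[2] + (1.0 - x_high) * low[2]
    return recipe, cost

# Hypothetical 91-octane blend from two components.
recipe, cost = lever_blend(91.0, ("cat naphtha", 88.0, 30.0), ("reformate", 98.0, 38.0))
```

Blending above spec would only add cost: every extra barrel of the expensive high-octane component beyond the lever-rule fraction is pure giveaway.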
The Production Analyst module enables comparing
planned performance to actual results to continuously
improve overall performance.
Strategy. The Advanced Planning and Scheduling solu-
tion suite aligns production planning with corporate
objectives, prepares an optimal plan, transforms the plan
into a production schedule and establishes operational
targets for meeting that schedule. Multi-site planning is
supported. The solution acts as the interface between
planning and control and provides better feedstock selec-
tion, yields and margins, and feasible schedules that max-
imize throughput. Focusing on economics, the Advanced
Planning and Scheduling solution addresses crude
scheduling, operations planning, supply and distribution
optimization, operations scheduling, blending optimization, and performance monitoring, as well as other
requirements.
Economics. Benefits are realized from effective unification of business and production automation. As a result, companies can typically increase production by 2–5% and decrease costs by 0.5–1%. Major benefit areas are
improved operational effectiveness, market responsive-
ness, quality control, personnel productivity, customer
satisfaction, conformance to environmental controls and
reduced working capital requirements, operating costs,
raw material utilization, utility consumption, product
returns and inventory levels.
Commercial installations. Over 1,000 Business.FLEX
PKS licenses have been installed throughout the world,
including at refineries, offshore platforms, chemical plants
and petrochemical complexes.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
[Figure: Business.FLEX PKS planning and scheduling modules. SAND (supply and distribution), ASSAY2 (feedstock evaluation), Production Planner (RPMS), BLEND (blend planning), Production Scheduler and Production Analyst exchange production balances and operating instructions with advanced process control and optimization.]
Planning and scheduling
(olefins)
Applications. The Invensys nonlinear planning system,
NL Planner, can be applied to provide economic decision
support, such as feedstock evaluation and planning, as
well as day-to-day guidance for olefins plant operations.
The system, therefore, closes the traditional gap
between planning and operations. In the past, planning
systems were not accurate enough to provide daily guid-
ance to plant operations. In olefins plants, as well as in a
number of other important HPI facilities, this level of
accuracy can only be achieved by a nonlinear system such
as NL Planner.
Strategy. Although linear programming (LP)-based plan-
ning tools are widely used in the oil refining industry,
they have found limited acceptance in ethylene plants
and other process applications. NL Planner provides a
unique ability to accurately model these highly nonlinear
processes. Key system features are:
First-principles, equation-based modeling
Graphical user interface (GUI) for model building
Microsoft Excel interface for planners and
schedulers.
These features, combined with fast system execution
and sophisticated case management, enable a broad
range of process facilities to improve their profitability. NL
Planner is based on elements of Invensys SimSci process
simulation and nonlinear optimization technology. For
ethylene plants, the full capability of the Spyro furnace
yield program from Technip-Coflexip is included.
The system's proven technology, modern software
architecture and intuitive GUI result in improved return
on investment and reduced cost of ownership. These
benefits are achieved through a shortened learning curve,
faster application implementation, easier long-term main-
tenance, broader use of the applications and increased
application life span.
Because of the system's unique open, equation-based optimization, petrochemical plant operators can develop very accurate, credible models of their facilities for economic
decision support. Key advantages of the nonlinear
approach to planning and scheduling are:
Accurate over a broad operating range
Rigorous treatment of constraints
Full kinetic reactor models
Accurate utility calculations based on heat and
material balances.
NL Planner can be used in the office for feedstock eval-
uation and production planning. It can also be used in
the plant for daily optimization and to provide accurate
yield projections for production scheduling.
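The nonlinearity argument can be seen with a toy model: ethylene yield flattens at high cracking severity while severity-dependent costs keep rising, so per-tonne margin has an interior optimum that a fixed-yield LP vector cannot find. The coefficients and prices below are illustrative assumptions, not Spyro kinetics.

```python
def contribution(severity):
    """Margin per tonne of feed at a given cracking severity, USD/t.
    Quadratic yield curve, flat byproduct credit and quadratic utility
    cost are illustrative assumptions only."""
    ethylene_yield = 0.25 + 0.30 * severity - 0.18 * severity ** 2  # t C2H4 / t feed
    revenue = 600.0 * ethylene_yield + 175.0   # ethylene price + byproduct credit
    utilities = 40.0 * severity ** 2           # fuel/steam cost rises with severity
    return revenue - 300.0 - utilities         # feed cost 300 USD/t

# Coarse one-dimensional search over the allowed severity range [0, 1];
# the optimum lies strictly inside the range, not at a constraint.
best_severity = max((s / 1000.0 for s in range(1001)), key=contribution)
```

An LP built around a single yield vector would always drive severity to a bound; the nonlinear model finds the interior maximum.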
Economics. Economics vary depending on the specific
circumstances of each installation; however, benefits
from these systems are typically found in the following
areas:
Improved feedstock selection
Improved yield slate
Reduced utility consumption.
For many olefins plants, feedstock and utility costs can
be in the range of 70–80% of variable operating costs. By
reducing these costs, NL Planner can provide potential
benefits in the range of $5–15/ton of ethylene product,
which can add up to millions of dollars per year in savings.
These systems often pay for themselves in a few months.
Commercial installations. The Invensys NL Planner
technology has been applied at four olefins sites.
Licensor. Invensys Performance Solutions, Foxboro, Mas-
sachusetts. Contact: david.geddes@invensys.com.
[Figure: NL Planner structure. A first-principles, equation-based model of the furnaces and fractionation takes feedstock availabilities and costs, plant constraints, and product prices and constraints as inputs, and produces feedrates, product slate and operating conditions.]
Planning and scheduling
(planning)
Application. Aspen Process Industry Modeling System (PIMS), the foundation of AspenTech's powerful, easy-to-use family of supply chain solutions, is a state-of-the-art production planning and optimization system that enables refiners and petrochemical producers to achieve dramatic productivity increases while simultaneously improving overall supply chain agility and profitability.
Aspen PIMS is a key component of AspenTech's solution
for production execution, seamlessly providing unit activ-
ities information and planned recipes to the Aspen Orion
scheduling system.
Aspen PIMS employs linear programming (LP) tech-
niques to solve both simple and complex models, and
offers capabilities for recursion and interfacing with both
crude assay and process unit databases. The system gives
users the ability to construct optimal planning models
that balance the complexities of today's environment
with maximum fidelity, and provides benefits that include:
Increased profits through accurate and flexible
models that reflect true modeling of key planning pro-
cesses, including model analysis, crude and feedstock
selection, production planning, operations planning
and blending
Reduced operating costs through a streamlined
planning process that enables improved asset utilization,
utility right-sizing, utilities reduction and loss reduction
Sustained value through common process models,
consistent model validation and calibration methods,
and custom reporting.
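To illustrate the kind of decision such LP models automate, consider crude selection with a single crude-unit capacity limit and per-crude availability bounds. For that special case the LP optimum reduces to a greedy fill in descending margin order (the fractional knapsack); the crude names, margins and volumes below are hypothetical.

```python
def select_crudes(capacity, crudes):
    """crudes: list of (name, margin_per_bbl, available_bbl).
    With one capacity constraint and availability bounds, filling in
    descending-margin order is the LP optimum (fractional knapsack)."""
    plan, remaining = {}, capacity
    for name, margin, avail in sorted(crudes, key=lambda c: -c[1]):
        take = min(avail, remaining)          # partial cargoes are allowed
        if take > 0:
            plan[name] = take
            remaining -= take
    return plan

# Hypothetical 100,000 bbl/day crude unit and three crude offers.
plan = select_crudes(100_000, [("Brent", 4.2, 60_000),
                               ("Urals", 5.1, 30_000),
                               ("Arab Heavy", 3.0, 80_000)])
```

Real planning LPs add many simultaneous constraints (unit capacities, qualities, blending), which is why a general solver replaces the greedy rule.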
Additional capabilities. The capabilities of Aspen PIMS
can be extended with a set of layered products that allow
users to link together a number of single-plant Aspen PIMS
models to form a complex multisource, multiplant, multi-
market supply/demand/distribution network; solve com-
plex multitime period problems and model production
planning applications where inventory considerations are
important; and solve multiple plant and multiple time
period models with interplant transfer and distribution to
meet marketing demands and feed distribution to plants.
Economics. Conservative estimates for a simple 100 Mbpd
refinery indicate that Aspen PIMS can generate potential
annual benefits in excess of $10 million from improved
crude selection, improved unit performance, and improved
blending. PIMS-SX has demonstrated tangible and consistent savings in the range of $0.10–0.15 per barrel.
Commercial installations. The industry standard for
petroleum industry planning, Aspen PIMS is licensed at
over 400 sites, and is used by more than 75% of the
refineries and more than 60% of all petrochemical plants
in the world.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
[Figure: AspenTech production-execution workflow. Assays and submodels feed Aspen PIMS (refinery planning), which passes unit activities and planned recipes to Aspen Orion (refinery scheduling); Orion passes rundowns, qualities and shipments to Aspen MBO (product blend planning and scheduling); optimized blend recipes go to Aspen Blend (blend control and online optimization), which sends BRC setpoints to the instrumentation layer (DCS). Supporting inputs come from the lab information system, yield accounting, opening inventories, shipments and receipts, tank qualities and the ERP system.]
Planning and scheduling
(refining)
Applications. PETRO is state-of-the-art software
designed to enhance productivity of refinery planners.
The system is unparalleled in terms of ease of use, speed
and accuracy.
PETRO can be used for the full range of refinery plan-
ning applications. This includes feedstock evaluation,
production planning and strategic planning. The robust
solution algorithm not only allows more accuracy through
use of very detailed process representations, but also
facilitates uncertainty planning to account for a range of real-world uncertainties, from crude oil variations and market pricing/demand to reliability issues.
Strategy. The PETRO user interface enables planners to
easily define, run and analyze cases without having to
learn the details of the PETRO model. The interface facil-
itates both data input and output results.
Model building in PETRO is done in a Microsoft Excel environment: model builders provide information about process units and blending operations in a series of spreadsheets. PETRO's unique design enables model builders to work at the matrix level for maximum flexibility, which greatly simplifies the learning process.
Once model building is complete, the PETRO system
can be used to read the spreadsheets, perform diagnos-
tics and then generate the matrix for the model. The sys-
tem includes a comprehensive diagnostic procedure to
ensure model building integrity. If a problem is found,
diagnostics messages are created to enable the model
builder to quickly resolve potential problems.
Facilities are included in PETRO to easily tune the pro-
cess models. Tuning may be required, for example, to
update crude oil assay data or change process yields as a
result of catalyst replacement.
One of the key advantages of PETRO is the modeling expertise developed over many years of real-world refinery applications. This expertise yields PETRO models with the following features:
Highly accurate
Simulation quality over a broad operating range
Long lasting
Easily tuned to account for new catalysts, etc.
Rapid convergence
Avoids local optima.
PETRO's modern system design, combined with modeling expertise, produces a competitive edge.
Economics. Refinery planning, particularly feedstock
evaluation, is a key business process in the refining indus-
try. Proper crude oil purchase decisions are vital to remain
competitive in todays global economy. In addition to
providing a significant increase in planning department
productivity, PETROs increased modeling accuracy can
yield potential benefits in the range of 5–10 cents/barrel.
Additional economic benefits are also often realized
through improved shutdown planning. In one shutdown
planning example, PETROs multiperiod system resulted
in an estimated savings of approximately $2 million.
Commercial installations. The PETRO LP system is cur-
rently licensed at 10 locations. The system is currently
used in North America, Asia and the Middle East.
Licensing agent. Invensys Performance Solutions,
Foxboro, Massachusetts. Contact: david.geddes@inven-
sys.com.
Planning and scheduling
(scheduling)
Application. Aspen Orion is a petroleum refinery and
petrochemicals scheduling application that supports com-
prehensive scheduling of all plant activities. A single sys-
tem that integrates crude and feedstock scheduling, unit
operations, product blending, and product shipping,
Aspen Orion helps operating companies achieve greater
profitability through more accurate scheduling. With
Aspen Orion, the entire scheduling process can be stream-
lined and automated, making it possible to generate a
more detailed and accurate schedule in less time.
Aspen Orion helps refining and petrochemicals com-
panies to:
Coordinate scheduling among a comprehensive
range of plant activities
Improve supply chain efficiency
Develop more accurate business and operations
planning for critical factors such as target inventories
and production throughput
Close the gap between plant planning and schedul-
ing functions
Improve overall supply chain agility
Develop more accurate economic plans
Identify and solve scheduling issues before they
occur.
As a part of AspenTech's solution for production execution, Aspen Orion is the critical link between a plant's monthly
plan (provided by a modeling system such as Aspen PIMS)
and plant operations. Aspen Orion receives information
about unit activities and planned recipes from Aspen PIMS,
and provides information about rundown, qualities and
shipments to the Aspen MBO multiblend optimization tool.
Strategy. Aspen Orion enables users to meet their specific
needs by providing event-based scheduling, ease of imple-
mentation, interactive graphics for increased productivity,
embedded LP optimization and schedule automation,
and also graphical model building. Its built-in function-
ality includes crude distillation based on assay data, prod-
uct blending optimization and pipeline batch tracking.
The client-server and database architecture provides mul-
tiuser capability, enables intranet publishing, and facili-
tates integration with applications such as planning, yield
accounting and multiperiod blending and oil movements.
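One core calculation behind event-based scheduling is rolling a tank's inventory forward through scheduled receipts and shipments so that infeasibilities are flagged before they occur. The events and tank limits below are hypothetical.

```python
def project_inventory(opening, events, lo=5_000, hi=80_000):
    """events: list of (day, delta_bbl); delta > 0 is a receipt,
    delta < 0 a shipment. Applies events in time order and returns the
    inventory trajectory plus any tank-limit violations."""
    level, trajectory, violations = opening, [], []
    for day, delta in sorted(events):
        level += delta
        trajectory.append((day, level))
        if not lo <= level <= hi:
            violations.append((day, level))   # schedule is infeasible here
    return trajectory, violations

# Hypothetical schedule: the day-4 cargo would overfill the tank.
traj, bad = project_inventory(
    20_000, [(1, 50_000), (3, -15_000), (4, 30_000), (6, -40_000)])
```

A scheduler seeing the day-4 violation can move the cargo or advance a shipment before the conflict reaches the field.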
Reporting capabilities. Aspen Orion reporting capa-
bilities include built-in reports, MS Access-based stan-
dard reports and queries, customized reports and MS
Excel-based reports.
Scheduling approaches. Aspen Orion incorporates the
three typical approaches to refinery and petrochemical
scheduling: simulation, linear programming and expert
systems.
Flow simulation. The Aspen Orion flowsheet simula-
tor predicts plant performance based on crude runs, pro-
cess operations, and other refinery and petrochemical
plant decision variables. It also ensures accurate repre-
sentation and prediction of plant performance.
Economics. Real client experiences include capture of
economic benefits from improved scheduling with Aspen
Orion in the following areas:
Increased throughput
Improved block switching
Reduced quality giveaway
Eliminating crisis decision-making
Reduced inventory
Ability to evaluate and capture special opportunities.
Commercial installations. Aspen Orion is licensed by
over 100 sites throughout the world at both corporate
and individual levels.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Plant information (alarm
and event collection and
analysis)
Application. ProcessGuard provides a complete solu-
tion for critical condition and situation management.
Many operators have become desensitized to critical
alarms due to the sheer number of alarms that are now so easy to configure on most distributed control systems (DCSs).
Both safety and profitability can be affected by improper
critical situation management, and the industry has seen
losses of millions of dollars from: damage to equipment,
lost production or reduced safety. ProcessGuard is an
alarm historian and strategizer, collecting all alarms and
events from all major control systems, and analyzing this
information to identify alarm strategy issues, as well as
quickly perform incident reviewsimproving plant oper-
ations and safety.
Strategy. ProcessGuard is an online server-based appli-
cation that collects alarm and event information from
any DCS via a network or serial printer port connection.
ProcessGuard analysis reports are viewable by any autho-
rized PC on the sites network, allowing engineers or
technicians to access information on any current or past
crises from their officeenabling them to enter the con-
trol room prepared with recommendations, instead of
questions.
The many analysis functions include top-10 lists of the most frequently occurring alarms, helpful key performance indicators (KPIs) and sequence-of-events views, to name a few. ProcessGuard was developed following EEMUA guidelines and industry feedback.
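An alarm-frequency KPI of this kind reduces to counting tag occurrences in the collected alarm/event journal. The journal entries below are made up for illustration.

```python
from collections import Counter

# Hypothetical alarm journal rows: (timestamp, tag, alarm type).
log = [
    ("2003-06-01 04:12", "FC101", "PVHI"),
    ("2003-06-01 04:13", "TI205", "PVLO"),
    ("2003-06-01 04:15", "FC101", "PVHI"),
    ("2003-06-01 05:02", "FC101", "PVHI"),
    ("2003-06-01 05:40", "PC310", "PVHI"),
]

# "Top N most frequent alarms" is a simple frequency count over tags.
top = Counter(tag for _, tag, _ in log).most_common(3)
```

The tags at the top of such a list are the usual starting point for alarm rationalization, since a handful of nuisance tags often dominate the total alarm load.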
ProcessGuard has universal connectivity to all plant
control systems and is integrated with process historians,
allowing simultaneous alarm and process data viewing
clearly showing the sequence of events. Standard tem-
plates for all major systems and configurations allow for
fast installation and instant value.
Economics. Example benefits include: 50–70% reduction in alarms; identification of costly operations (such as incorrectly opened valves); increased throughput; closer operation to plant constraints; more stable plant operation through identification of variability or poor control (shown by frequent alarming and/or frequent controller mode changes); increased effectiveness of technical personnel; and better in-service factors for regulatory and model-based multivariable predictive controllers.
Commercial installations. ProcessGuard is installed
and licensed at over 100 sites worldwide, including facil-
ities in North America, the Middle East, Asia, Europe and
Australia.
Licensor. Matrikon Inc., Houston, Texas, and Edmonton,
Alberta, and 15 offices worldwide. Contact: e-mail: pro-
cessguard@matrikon.com; Website: www.matrikon.com.
Plant information (alarm
and quality management)
Applications. Geometric Process Control (GPC) uses mul-
tidimensional geometry to provide the first mathemati-
cal unification of process control, product quality con-
trol and process alarm management. It includes a
completely new operator display, shown above with an
alarm rectification example, that is intuitive, easily under-
standable and provides operators with information not
previously available. It uses a multivariable Best Operat-
ing Zone (BOZ) identified by Curvaceous Visual Explorer
(CVE) as its basis for distinguishing between normal and
abnormal operation. The BOZ is converted by Curvaceous
Process Modeller (CPM) into an equation-less multivari-
able and nonlinear model that can contain knowledge
derived from both process and laboratory quality histo-
ries. It can be built and updated in minutes without math-
ematical knowledge, making it practical and affordable
even for small plants.
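A deliberately simplified sketch of the BOZ idea: learn per-variable operating ranges from known-good history and report which variables have left the zone. The real BOZ is a multivariable geometric region identified in parallel coordinates, not a set of independent ranges, and the variables and values below are hypothetical.

```python
def learn_boz(good_rows):
    """good_rows: list of {variable: value} snapshots from normal operation.
    Returns per-variable (min, max) ranges - a crude stand-in for the BOZ."""
    keys = good_rows[0].keys()
    return {k: (min(r[k] for r in good_rows), max(r[k] for r in good_rows))
            for k in keys}

def out_of_zone(boz, row):
    """Names of variables currently outside their learned range."""
    return [k for k, (lo, hi) in boz.items() if not lo <= row[k] <= hi]

history = [{"temp": 180.0, "press": 4.1},
           {"temp": 195.0, "press": 4.6},
           {"temp": 188.0, "press": 4.3}]
boz = learn_boz(history)
alarms = out_of_zone(boz, {"temp": 201.0, "press": 4.4})
```

The multivariable refinement is what distinguishes GPC: a point can sit inside every single-variable range yet outside the geometric zone, which fixed per-tag alarm limits cannot detect.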
Capabilities. Curvaceous Process Camera (CPC) includes
a BOZ for your process and an operator display. It allows
replay of freshly gathered process data and is intended for
user familiarization and desktop checkout of a new BOZ
before it is put into operation, although in practice much
new knowledge of multivariable process behavior is
revealed as well. CPC visually shows alarms and quality vio-
lations as they occur and generates logs, including a tra-
ditional alarm log for ease of comparison between the
present and BOZ-possible modes of plant operation.
CPM contains additional facilities to build new BOZ models from the data extracted in CVE, and has different license conditions to CPC.
Economics. The ASM Consortium has estimated losses due to incidents at 3–8% of capacity per year. The new
multivariable methods GPC employs for process alarm
management are expected to reduce the number of
minor incidents that would have previously escalated
into major incidents. Integration of quality control pro-
vides new economic benefits and additional incentive.
Integrating MPC will increase benefits already provided
by MPC applications. GPC won Curvaceous the European Process Safety Centre (EPSC) Award for the Biggest Contribution to Improved Process Safety in 2003.
Commercial installations. After the success of field
trials supported by a Smart grant in 2001, GPC is currently operational in six process plants. Many more systems are being commissioned or under investigation, covering a wide range of industries from oil refineries down to bakeries.
Licensor. Curvaceous Software Limited, Gerrards Cross,
UK; Website: www.curvaceous.com; e-mail: enquiries@
curvaceous.com.
Plant information
(batch/lot tracking)
Application. The Batch/Lot Tracking solution provides
complete support for tracking batches of material
through a continuous or discrete production environ-
ment. It is capable of full forward and backward lot
genealogy, meeting the most stringent industry lot trace-
ability requirements. RISnet forms enable site-specific lot
tracking work processes to be modeled, making for effec-
tive data capture with minimum user input.
Strategy. The Batch/Lot Tracking solution is built on the
RESOLUTION database and makes use of the production,
commodity, inventory and movement management mod-
ules. Adding the document management module pro-
vides coordinated access to associated documentation
stored locally or in external systems. The key performance
indicator module enables nonconformance alerting and
reporting to be done when batch characteristics are not
within specifications.
As with all RESOLUTION modules, data can be obtained
through external interfaces to coordinate information
such as real-time measurements, lab analyses and other
plant operational conditions with the batch tracking
records. The result is the most complete batch/lot track-
ing system in the business.
Specific capabilities:
Individual identification of all batches
Batches can be in many locations over time, and
can be in multiple locations at the same time
Multiple batches can be in the same location at the
same time
Batches can be tracked through processing equip-
ment and silos/tanks
Batches can be split, combined and renamed
Quality data can be attached to a batch or a batch
transfer, or lab measurements attributed to the batch
can be obtained by collecting data at batch locations
Complete forward and backward genealogy main-
tained between batches/locations
Specific actions on batches tracked as events, e.g.,
classification and reclassification
Supports batch activity planning
ISA SP88 capable.
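Forward and backward genealogy is, at its core, a reachability query over a directed graph of parent-to-child lot transfers. A minimal sketch with hypothetical lot names:

```python
def trace(edges, start, forward=True):
    """edges: set of (parent_lot, child_lot) transfer records.
    Returns every lot reachable from `start` downstream (forward=True,
    'where did this batch go?') or upstream (forward=False,
    'what went into this batch?')."""
    step = {}
    for parent, child in edges:
        a, b = (parent, child) if forward else (child, parent)
        step.setdefault(a, set()).add(b)
    seen, stack = set(), [start]
    while stack:                       # simple depth-first traversal
        for nxt in step.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

edges = {("crude-batch-7", "naphtha-12"),
         ("naphtha-12", "gasoline-3"),
         ("butane-5", "gasoline-3")}
downstream = trace(edges, "crude-batch-7", forward=True)
upstream = trace(edges, "gasoline-3", forward=False)
```

Splits, combinations and renames all map onto the same edge structure, which is why one genealogy store can answer both recall ("which products contain this lot?") and root-cause ("which inputs made this product?") queries.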
Benefits. Complete forward and backward lot traceability is immediately available, along with all associated plant conditions and documentation. The system can meet the traceability requirements of the US Bioterrorism Act. Forms can be generated to support plant work operations, enabling effective data capture with minimum user input. RESOLUTION's integration capabilities can make maximum use of legacy applications.
Commercial installations. RESOLUTION Lot Tracking
modules have been installed on five sites, covering refin-
ing, petrochemicals, dairy and food industries.
Licensor. Resolution Integration Solutions, Inc., Solon,
Ohio. Contact: PeterLawrence@ris-resolution.com, Web-
site: www.ris-resolution.com, tel: (440) 519-1256.
Plant information (critical
condition management)
Application. Critical conditions result from process dis-
turbances with potential outcomes extending from minor
upsets through catastrophic incidents. Industry data con-
tinue to show that most of these can be minimized in
impact or completely avoided with timely and accurate
operator actions. AMO Suite, AMO Plus and PlantState
Suite provide a complete operator-centric solution to
critical condition management (CCM) addressing the key
areas of:
Alarm management
Control loop performance
Early fault detection & diagnostics
Transition management
Procedural automation.
Strategy. Benchmark your plant performance in critical condition management against the PAS body of knowledge and industry best practices. Then, take a prescriptive approach to improvement with a detailed, customized improvement plan employing Six Sigma concepts.
Economics. The economic incentive is large considering
the impact of lost production, equipment and facility
damage, environmental excursions and endangerment to
human life. Industry-focused research groups estimate
losses due to incidents at 3–8% of overall production capacity annually, or over $20 billion/year in the US.
Commercial installations. There are currently over 100
CCM installations with a variety of software components
from AMO Suite, AMO Plus and PlantState Suite.
Licensor. PAS, Inc., Houston, Texas. Contact: e-mail:
sales@pas.com; Website: www.pas.com.
Plant information
(data reconciliation)
Application. Reconciler is RESOLUTIONs reconciliation
engine. It is accurate, fast, robust and handles complex
reconciliation problems involving linear and nonlinear
balance equations and constraints.
It can be used as a DLL procedure and incorporated
into the client's existing applications or as a Web-based
application totally integrated into the RESOLUTION
database. Results can be stored back to the database for
automatic generation of material balance and compari-
son reconciliation reports.
Strategy. The Reconciliation Solution is composed of
two distinct components: the Reconciliation Engine and
the Web-based User Interface. The Reconciliation Engine
can be run separate from the user interface.
Reconciliation Engine:
Memory-economic solution procedure
Ability to express complex constraints (linear, non-
linear, equality and inequality)
Inequality constraints eliminate the meaningless negative flows that may appear in the reconciled results
Can create complex mass, volume and energy con-
straints
Fast, robust and accurate
Quickly see the reconciled results
Can be fully integrated into client applications, e.g.,
run from within Microsoft Excel
Use in real-time mode from within a real-time
database application.
Web-based User Interface:
Uses existing plant flow-sheet configuration defined
in the RESOLUTION database, so any configuration
changes are automatically reflected in the equations
Has direct access to plant measurements, either
stored within the Repository or in any interfaced real-
time database
Measurement quality information such as toler-
ances and maximum/minimum values are also obtained
from the database
Solving procedure enables adjusting measurement
values or tolerances directly on the grid that displays the
solution results
Results can be directly stored back into the database
with automatic material balance report generation
Comes with a Reconciler Explorer, which enables a
user to navigate through the entire plant configuration,
showing actual and reconciled values.
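The classic single-balance case has a closed form worth showing: minimize the weighted squared adjustments subject to one linear balance a·x = 0, and each measurement is corrected in proportion to its variance. The stream values and uncertainties below are hypothetical.

```python
def reconcile(meas, sigma, a):
    """Weighted least-squares reconciliation against one linear balance
    sum(a_i * x_i) = 0. Minimizes sum(((x_i - m_i)/sigma_i)**2); the
    closed form below follows from the Lagrangian of that problem."""
    imbalance = sum(ai * mi for ai, mi in zip(a, meas))
    denom = sum(ai * ai * si * si for ai, si in zip(a, sigma))
    lam = imbalance / denom
    # Each value moves in proportion to its variance: trust good meters more.
    return [mi - si * si * ai * lam for mi, si, ai in zip(meas, sigma, a)]

# Feed measured 100.0 t/h, products 60.0 and 45.0 t/h: 5 t/h imbalance.
# Balance: feed - product1 - product2 = 0, i.e. a = (1, -1, -1).
x = reconcile([100.0, 60.0, 45.0], [2.0, 1.0, 1.0], [1.0, -1.0, -1.0])
```

The least accurate meter (the feed, sigma = 2.0) absorbs most of the correction, and the reconciled values close the balance exactly.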
Benefits. Fast and robust reconciliation of complex mod-
els. Configuration is driven directly from the plant oper-
ational database, so there is no need to maintain a sep-
arate reconciliation model. Operational database provides
all measurements, movements, inventories and instru-
ment configuration informationeliminating the com-
plex data collection task.
Commercial installations. RESOLUTION Reconciler has
been installed on three sites.
Licensor. Resolution Integration Solutions, Inc., Solon,
Ohio. Contact: PeterLawrence@ris-resolution.com, Web-
site: www.ris-resolution.com, tel: (440) 519-1256.
Plant information
(data reconciliation)
Application. The real-time data reconciliation software
technology, DATREC, is used to improve accuracy of mea-
surements and/or generate missing values in case of insuf-
ficient or faulty field instruments.
This software is designed for fully automatic opera-
tion on process units and utility networks. It improves
availability of process control strategies by online detec-
tion of instrumentation errors and provides consistent
data for applications such as: process optimization,
scheduling, equipment diagnosis, plantwide mass bal-
ance reconciliation, unit performance monitoring and
instrumentation maintenance.
Strategy. Using advanced statistical techniques, DATREC reconciles raw measurement values through the redundancy relations linking these measurements, taking into account instrument accuracies.
The latest release of DATREC provides the following
features:
Automatic processing of gross errors on measure-
ments
Generation of an instrumentation guide for instru-
ment maintenance
Linear and nonlinear mass, enthalpy and composition
balances
Dynamic accounting of nonmonitored or out-of-
scale instruments
Automatic system reconfiguration to match changes
of process unit operating modes.
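Gross-error processing of this kind can be sketched as a normalized-residual test: compare a redundancy relation's residual with its propagated uncertainty, and flag the relation when the ratio is improbably large. The threshold and readings below are illustrative assumptions, not DATREC's actual statistics.

```python
import math

def gross_error(meas, sigma, a, z_crit=3.0):
    """Test one linear redundancy relation sum(a_i * x_i) = 0.
    Returns (z, flagged): z is the residual in units of its propagated
    standard deviation; z > z_crit suggests a faulty instrument."""
    residual = sum(ai * mi for ai, mi in zip(a, meas))
    std = math.sqrt(sum((ai * si) ** 2 for ai, si in zip(a, sigma)))
    z = abs(residual) / std
    return z, z > z_crit

# Balance feed - product = 0: a 12 t/h residual against ~1.4 t/h
# of combined meter uncertainty is a clear gross error.
z, flagged = gross_error([112.0, 100.0], [1.0, 1.0], [1.0, -1.0])
```

Routing such flags to an instrumentation guide, as the feature list above describes, turns statistical screening into a concrete maintenance work list.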
The DATREC software has two modes of operation:
An automatic online mode to provide data to other
computer systems
An offline mode to build reconciliation applications,
as well as for instrumentation studies.
Economics. DATREC is used to improve process moni-
toring and enhance performance of downstream opti-
mization applications. It simplifies instrumentation main-
tenance, contributes to increased sensor accuracy, provides
reliable information to real-time optimization and opti-
mizes sensor implementation through instrumentation
studies.
Commercial installations by Technip. DATREC online has been installed on more than 120 refinery process units, ethylene plants and utility networks at various sites in Europe, the Middle East and the USA.
Licensor. Total. Contact: Marc Valleur, Manager ASE
ParisAdvanced Systems Engineering, Technip; tel: (33)
1 47 78 21 83; fax: (33) 1 47 78 28 16; e-mail:
mvalleur@technip.com; Website: www.technip.com.
[Figure: DATREC example flowsheet around drum 10-D-01 and exchanger 10-E-01. Redundancy relations around 10-D-01: mass balance (x1), component balances (x2), sum of component fractions = 100% (x2), liquid/vapor equilibrium (x2), outlet temperature equality (x1) and heat balance (x1). Around 10-E-01: heat balance (x1) and saturated steam P/T relationship (x1). Instruments shown include flow (FC 1, FC 2, FC 17, FI 3), temperature (TI 10, TI 12, TI 13, TI 18, TC 11) and pressure (PC 21) tags, plus component A/B fractions and liquid, vapor, steam and water enthalpies per mass unit.]
Plant information
(equipment monitoring)
Application. Knowing the health of your critical plant equipment in the field is crucial to meeting contractual and business targets. Emerson has developed Equipment Performance Monitor, a PlantWeb technology that helps maximize profits by minimizing unscheduled equipment downtime.
Used to monitor critical pieces of equipment such as
compressors, gas and steam turbines, boilers, pumps, heat
exchangers and furnaces, Equipment Performance Mon-
itor:
Enables operators to troubleshoot equipment prob-
lems remotely and determine when to plan maintenance
or cleaning schedules to extend run times and maximize
throughput.
Tracks operating performance against targets and
highlights potential causes of downtime and production
inefficiencies.
Pinpoints any performance degradation, enabling
preventive action, thus assisting in optimizing the plant's
planned production.
The technology helps meet the needs of customers
switching from routine to targeted maintenance pro-
grams, thereby maximizing utilization of process assets.
Strategy. Web-based Equipment Performance Monitor
collects, models, processes and presents performance
information about critical equipment to operators around
the world.
Process data is collated and uploaded periodically from
the data historian and applied to mathematical and sta-
tistical calculations including data reconciliation and
parameter estimation to eliminate adverse data. A cal-
culation engine (the model) generates the monitoring
results. Performance indicators, customized reports and
graphical representations are presented within a secure
Website providing a fast and easy mechanism for main-
tenance technicians, engineers, service support and man-
ufacturers to access performance data from the field.
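The data reconciliation step described above can be illustrated as a weighted least-squares adjustment: redundant measurements are nudged, in proportion to their sensor variances, until they satisfy a balance constraint exactly. This is a minimal sketch of the general technique; the splitter balance and the numbers are hypothetical, not Equipment Performance Monitor's actual model.

```python
# Minimal linear data reconciliation: adjust redundant flow
# measurements so they satisfy one mass balance exactly,
# weighting each adjustment by the sensor's variance.
# (Hypothetical splitter balance f1 - f2 - f3 = 0.)

def reconcile(measured, variances, balance):
    """Least-squares reconciliation for one linear constraint.

    measured  : measured values m_i
    variances : sensor variances v_i
    balance   : coefficients a_i of the constraint sum(a_i * x_i) = 0
    Returns x_i = m_i - v_i * a_i * r / s, where r = sum(a_i * m_i)
    is the balance residual and s = sum(a_i^2 * v_i).
    """
    r = sum(a * m for a, m in zip(balance, measured))
    s = sum(a * a * v for a, v in zip(balance, variances))
    return [m - v * a * r / s
            for m, v, a in zip(measured, variances, balance)]

# Feed split into two products: f1 = f2 + f3.
measured = [100.0, 60.0, 45.0]   # raw readings (t/h); imbalance = -5
variances = [4.0, 1.0, 1.0]      # less accurate feed meter adjusts more
balance = [1.0, -1.0, -1.0]      # f1 - f2 - f3 = 0

reconciled = reconcile(measured, variances, balance)
print(reconciled)                # balance now closes exactly
```

Note how the feed meter, having the largest variance, absorbs most of the correction; this is the mechanism by which reconciliation "eliminates adverse data" from poorly performing sensors.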
Equipment Performance Monitor enables engineers
to:
Exchange data/intelligence between local, out-
sourced and corporate divisions
Respond faster to changing conditions
Benchmark operational performance vs. sister units
or plant sites
Analyze the operational history to determine the
root causes of equipment problems.
Economics. Equipment Performance Monitor:
Increases throughput, availability and reliability
Prevents unnecessary downtime and costly shut-
downs
Increases operating performance
Reduces operating / unplanned maintenance expen-
ditures
Optimizes cleaning and maintenance cycles
Detects faulty / poorly calibrated instrumentation.
Commercial installations. Equipment Performance
Monitor has been successfully implemented on over 100
process units.
Licensor. Emerson Process Management, Austin, Texas;
www.assetweb.com. Contact: Emerson Process Manage-
ment, Darren Greener, Asset Optimization, tel: +44 (1642)
773000, e-mail: greenerd@mdctech.com.
[Figure: Equipment Performance Monitor architecture. At the process plant, the DCS, LAN and logging devices feed a data historian; process data flow through a firewall (e.g., an ftp or modem link) to the solution provider's servers, where a calculation engine generates the mathematical model, incorporating OEM design data and data validation. Performance indicators are published via the secure Website (www.e-fficiency.com) and e-mail to end users with various degrees of access at maintenance headquarters, maintenance offices and customer Sites A, B and C.]
Plant information (event
monitoring and notification)
Application. Alarm and event management systems can
be used in the manufacturing process to detect, at the
earliest possible point, problems that increase product
variability. Aspen Technology has developed an alarm and
event management system to record alarms and events
related to product quality as well as the actions taken to
deal with these events.
Based on the InfoPlus.21 basic statistical process control
software, the application includes the capability to ana-
lyze the events and their causes to make process improve-
ments. The system addresses only product quality alarms;
process safety alarms are handled by DCS alarm man-
agement functions and other systems.
Strategy. InfoPlus.21 is the foundation system for infor-
mation management that enables deployment of other,
layered applications for advanced information manage-
ment activities such as alarm and event management for
product quality purposes. This application provides man-
ufacturers with a comprehensive system that:
Alerts operators to alarms and events in the pro-
cess, including both traditional alarms (something bad
has already happened) and statistical alarms (if noth-
ing changes, something bad is likely to happen soon)
Records the operator's understanding of the cause
of each alarm
Records the operator's choice of corrective action
to deal with the alarm.
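The distinction between traditional and statistical alarms drawn above can be shown with a simple control-chart check. This is a sketch only: the 3-sigma Shewhart rule stands in for whatever SPC logic InfoPlus.21 actually applies, and the numbers are illustrative.

```python
# Traditional alarm: fires only once a hard limit is already violated.
# Statistical alarm: fires when a reading drifts outside the normal
# band (mean +/- 3 sigma) estimated from history, i.e. it warns that
# "if nothing changes, something bad is likely to happen soon".

from statistics import mean, stdev

def alarms(history, reading, hard_limit):
    mu, sigma = mean(history), stdev(history)
    return {
        "traditional": reading > hard_limit,
        "statistical": abs(reading - mu) > 3 * sigma,
    }

history = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7]
print(alarms(history, 51.5, hard_limit=55.0))
# The reading is still well below the hard limit, but far outside
# the normal band, so only the statistical alarm fires.
```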
Alarm and event data enters the system from various
sources: AspenTech software, including Aspen Multi-
variate, Q or Aspen IQ, third-party software, or manual
entry by operators or process engineers. The data can be
analyzed to determine what kinds of alarms occur most
frequently and what actions the operators usually take to
deal with each kind of alarm. Armed with this information, plant engineers can make improvements to the process machinery, the operating procedures and the operators' training.
InfoPlus.21, Aspen Multivariate, Q and Aspen IQ are
components of Aspen Plantelligence, the Production
Optimization module of the Aspen ProfitAdvantage solu-
tion, that enables process manufacturers to identify and
maximize profit opportunities throughout the entire
process industry value chain.
Economics. Benefits from applying an alarm and event
management system are achieved through eliminating
many of the causes of alarms at their root, and mitigat-
ing the consequences of those alarms that still occur.
Commercial installations. AspenTech has completed
one alarm and event management system, and a func-
tional design for a second. Several more applications are
under consideration.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Plant information (inbound
chemical management)
Application. Purchased chemical inventory levels are
collected and displayed for secure viewing by chemical
vendors. This information is used by vendors to manage
chemical deliveries to the site and supports a just-in-
time purchasing model in which the vendor retains
chemical ownership until the time of consumption by
the site. This reduces working capital at the plantsite,
while also minimizing supply risks and purchase costs.
Strategy.
Remote-hosted solution: The chemical inventory data
are collected from existing onsite systems and transferred
to a central database where individual vendors can access
their inventory information as authorized by the plant.
Data collection into the central database is from exist-
ing PLCs, control networks, real-time databases, etc., with
secure, encrypted communications across available chan-
nels (Internet, telephone, paging network, satellite, etc.).
Secure AnyWhere/AnyTime access: Vendor access is
by secure password-protected web pages only, from any
internet-connected PC or wireless device. All user access
is to the Web pages only; no user access is granted to any
site systems.
User-configurable electronic alerting: Each applica-
tion comes with the ability to automatically alert plant
and/or vendor personnel of changes in inventory lev-
els versus specified targets or limits. Alerts are set on a
per-user basis and can be received via e-mail, cellphone,
pager, etc.
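The per-user alerting described above amounts to comparing each inventory level against user-specific limits and dispatching over that user's chosen channel. The sketch below uses hypothetical names throughout; the real service's rules engine is not published.

```python
# Per-user inventory alerting sketch: each subscription carries its
# own limits and delivery channel. send() is a stub standing in for
# the e-mail / SMS / pager gateways mentioned in the text.

def send(channel, address, message):
    print(f"[{channel} -> {address}] {message}")

subscriptions = [
    {"user": "vendor_rep", "tank": "TK-101", "low": 20.0, "high": 95.0,
     "channel": "email", "address": "rep@vendor.example"},
    {"user": "plant_buyer", "tank": "TK-101", "low": 10.0, "high": 90.0,
     "channel": "sms", "address": "+1-555-0100"},
]

def check_alerts(levels, subs):
    """Alert every subscriber whose limits the current level violates."""
    fired = []
    for s in subs:
        level = levels[s["tank"]]
        if level < s["low"] or level > s["high"]:
            send(s["channel"], s["address"],
                 f'{s["tank"]} at {level}% (limits {s["low"]}-{s["high"]})')
            fired.append(s["user"])
    return fired

print(check_alerts({"TK-101": 15.0}, subscriptions))
# 15% is below the vendor rep's 20% low limit but above the plant
# buyer's 10% limit, so only the vendor is alerted.
```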
Works with existing business systems: The applica-
tion interfaces to most enterprise systems, allowing
automated reorder information to drive product ship-
ments. The application can also transfer inventory data
to business systems for product optimization, logistics
scheduling, etc.
Inventory management services from any vendor: The
Inbound Chemical Management application is delivered
as a monthly service, suitable for use by 1 or 100 vendors.
Each vendor is able to take on complete inventory man-
agement responsibility, eliminating the working capital
and plant personnel otherwise required for procuring
and storing chemicals onsite.
Economics. The Inbound Inventory Management appli-
cation provides the following benefits:
Reduced working capital
Guaranteed inventory supplies
Avoids rush shipments
Reduced inventory reconciliation and transaction
effort.
Commercial installations. As of mid-2003, the Inven-
tory Management application is being delivered to over
250 sites around the world.
Licensor. Industrial Evolution, Inc., Phoenix, Arizona,
and ChemLogix LLC, Blue Bell, Pennsylvania; Websites:
www.industrialevolution.com or www.chemlogix.com;
e-mail: contact@industrialevolution.com; tel. (602)
867-0416.
[Figure: Inbound chemical management. Onsite data sources at the oil refinery (DCS, PLC, lab, database, etc.) connect via a secure VPN connection or telephone dial-up to the Industrial Evolution data center (real-time database and catalyst vendor applications), which gives authorized chemical and additive suppliers secure access to the inventory data.]
Plant information
(key performance
indicator management)
Application. KPI is a solution for managing key perfor-
mance indicators (KPIs). It provides comprehensive dis-
plays and charting options for KPI performance review,
and a drill down capability to facilitate identifying
problems. Included in the KPI solution is full nonconfor-
mance reporting and alerting capability. This enables
mail messages to be sent to responsible parties when
KPIs violate limits, or when escalation is required.
Strategy. A KPI is linked to a business goal. In general,
every KPI will have a target value that may change over
time. The actual value of the KPI is compared to the tar-
get value to determine how much progress has been
made toward achieving the business goal. RISnet's KPI
Web-based forms (or interface), including a KPI explorer
and Resolving navigation, provide a very flexible environment for managing and analyzing KPIs.
KPI data can be entered manually, extracted directly
from the attached real-time databases, or computed from
complex calculations implemented using the Recalculator
module.
Web forms are available specifically for monitoring the
progress of KPIs: The user is able to browse the KPI hier-
archy and drill down to reveal values for dependent KPIs.
Charting tools are available to plot the KPI versus its limits.
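The target-versus-actual comparison and drill-down described above can be sketched with a small KPI tree. The structure, names and numbers are hypothetical; RISnet's actual data model is not published.

```python
# Minimal KPI tree: each node holds an actual value, a target, and
# optional child KPIs. Drill-down walks the hierarchy to find which
# dependent KPIs are off target when a top indicator is out of line.

kpi_tree = {
    "name": "plant_margin", "actual": 88.0, "target": 95.0,
    "children": [
        {"name": "energy_cost", "actual": 102.0, "target": 100.0,
         "children": []},
        {"name": "throughput", "actual": 97.0, "target": 96.0,
         "children": []},
    ],
}

def off_target(node, tol=0.0):
    """Return names of KPIs whose actual deviates from target by > tol."""
    hits = []
    if abs(node["actual"] - node["target"]) > tol:
        hits.append(node["name"])
    for child in node["children"]:
        hits.extend(off_target(child, tol))
    return hits

print(off_target(kpi_tree, tol=1.5))
# plant_margin is out of alignment; drill-down shows energy_cost is
# the dependent KPI responsible, while throughput is within tolerance.
```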
Benefits. KPI allows users to see the wood for the
trees. Instead of being swamped by vast quantities of
information, KPI distills the information down into a few
indicators that are easy to watch. As soon as one indica-
tor is out of alignment, the drill down capability allows
focus to be brought on the problem area. Since KPI is
built on the RESOLUTION database, KPIs have access to all
plant data: safety, engineering, operational, economic
and more.
In combination with RESOLUTION's Target-Setting solution, one installation reported breaking 18 operating
records the month after installation. Benefits were esti-
mated at between $8 million and $20 million/yr.
Commercial installations. RESOLUTION KPI modules
have been installed on 13 sites.
Licensor. Resolution Integration Solutions, Inc., Solon,
Ohio. Contact: PeterLawrence@ris-resolution.com, Web-
site: www.ris-resolution.com, tel: (440) 519-1256.
Plant information
(mass balance)
Application. The GERA mass balance reconciliation sys-
tem is used to interactively generate daily plantwide
mass balances, providing coherent data to decision sup-
port systems.
Strategy. Plant facilities are described as a simplified
process flow sheet (the GERA network) comprising nodes
(process units, tanks, blenders, receipt/shipment facilities) and the flows between them. The GERA network is
represented graphically and provides facilities to manage
temporary flows.
GERA reconciles cumulated flow measurements, tank
inventories and estimated losses together with their associated uncertainties.
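The reconciliation just described can be illustrated on a toy single-node balance: close the node's daily balance and test the residual against the combined measurement uncertainty. This is a sketch of the general principle, not GERA's actual algorithm; all names and figures are hypothetical.

```python
# Toy GERA-style balance check: for one node, compare the daily
# cumulated in/out flows (plus inventory change and estimated loss)
# and test the residual against the combined meter uncertainty.

import math

# flows: name -> (measured quantity, standard uncertainty), t/day
flows = {
    "crude_in":    (1000.0, 10.0),
    "naphtha_out": (400.0,   5.0),
    "residue_out": (590.0,   6.0),
}
node = {"in": ["crude_in"], "out": ["naphtha_out", "residue_out"],
        "inventory_change": 2.0, "estimated_loss": 3.0}

def node_residual(node, flows):
    total_in = sum(flows[f][0] for f in node["in"])
    total_out = sum(flows[f][0] for f in node["out"])
    residual = (total_in - total_out
                - node["inventory_change"] - node["estimated_loss"])
    # combined standard uncertainty of all meters around the node
    sigma = math.sqrt(sum(flows[f][1] ** 2
                          for f in node["in"] + node["out"]))
    return residual, sigma

residual, sigma = node_residual(node, flows)
print(f"residual = {residual:+.1f} t/d, 2-sigma band = {2*sigma:.1f} t/d")
# A residual within roughly 2 sigma is consistent with metering noise;
# beyond that, suspect a meter fault or an unaccounted loss.
```

Flagging residuals against uncertainty is what turns a mass balance into the instrumentation-monitoring and loss-location tool the Economics paragraph below refers to.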
The latest release of GERA includes:
Full graphical generation of the mass balance equa-
tions
Direct visualization of the balance reconciliation
results on the plant graph
Full compliance with Windows NT standards and
ORACLE
Multiuser access for consulting validated results.
Economics. Benefits generated by GERA are essentially
derived from a better day-by-day knowledge of the plant
operations from feed receipts to finished products ship-
ments. In particular, the benefits are associated with a
coherent and timely set of data being used by various
plant departments, improved instrumentation monitor-
ing and consequent savings in maintenance and mass
balancing computation workload. GERA also provides
coherent tank farm inventory and product movements
reporting, reliable process unit yield analysis as well as
timely and better knowledge of magnitude and location
of the losses. Typically, benefits amount to $0.5 to $1.5
million/yr in complex refineries or ethylene plants with
high capacity.
Commercial installations by Technip. GERA has been
and is being implemented at several sites in Europe, Asia
and the Middle East.
Licensor. Total. Contact: Marc Valleur, Manager ASE
Paris, Advanced Systems Engineering, Technip; tel: (33)
1 47 78 21 83; fax: (33) 1 47 78 28 16; e-mail:
mvalleur@technip.com; Website: www.technip.com.
Plant information (offsite
data management)
Application. RESOLUTION's Offsite Data Management
solution provides tools for reviewing and changing tank
compositions, consolidated inventory reporting by area
and stock category, movement data entry and report-
ing, and inventory/material movement balancing.
Strategy. This solution provides all of the tools required
to define material line-ups, plan and execute material
movements, plan and record storage tank contents, and
produce a variety of reports.
RESOLUTION enables management of movements
throughout their entire life cycle from within a single
solution. Allowable routes can be defined using the Line-
up Editor. The Movement Editor, Movement Manager
and Movement Entry allow a user to define movements
that will later be scheduled via line-ups. Alternatively,
planned movements can be imported from a planning
and scheduling tool using the Relayer XML interface.
Movement times can be manually recorded using the
Movement Start/Stop application. Alternatively, the Automatic Movement Detector infers this information from a
combination of the tank's state and planned movements.
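Inferring movement start and stop times from a tank's state can be sketched as a deadband test on the level trend. This is a simplified illustration; the product's actual detection logic and tuning are not published, and the thresholds below are invented.

```python
# Sketch of automatic movement detection: a movement is "active"
# while the tank level changes faster than a noise deadband between
# consecutive samples. Deadband and data are illustrative only.

def detect_movements(levels, deadband=0.5):
    """levels: list of (timestamp, level). Returns (start, stop) pairs."""
    movements, start = [], None
    for (t0, l0), (t1, l1) in zip(levels, levels[1:]):
        moving = abs(l1 - l0) > deadband
        if moving and start is None:
            start = t0                      # movement begins
        elif not moving and start is not None:
            movements.append((start, t0))   # movement ends
            start = None
    if start is not None:                   # still moving at end of data
        movements.append((start, levels[-1][0]))
    return movements

levels = [(0, 50.0), (1, 50.1), (2, 52.0), (3, 54.1), (4, 56.0),
          (5, 56.1), (6, 56.0)]
print(detect_movements(levels))             # one movement, hours 1 to 4
```

In practice the inferred interval would then be matched against the planned movements, as the text describes, to identify which transfer actually took place.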
The Unit Line-up Viewer (white board) shows all current and planned movement routes to and from a unit.
This provides the control room with the look-ahead
required for potential line-up switches.
The Movement Viewer allows a user to view the sched-
ule of movements on a Gantt-like time scale.
Contents of a tank (planned or actual) over time can be
viewed and managed via the Item-Commodity Editor.
Benefits. Offsite Data Management provides the complete solution for offsite data management throughout
the entire life cycle of a material transaction: from planning and scheduling through execution, reporting and
reconciliation.
RESOLUTION also provides complete management of
stocks: planned stocks by category and location, actual
stocks, different stock ownership, composition of the
stocks and reconciled quantities.
Offsite Data Management enables both unit material
balance reports to be generated regularly, as well as a
planned-versus-actual production report. By tracking
these figures, confidence can be improved on the pro-
jected stock figures and, hence, the projected stocks at
each of the depots. This allows for better inventory man-
agement.
Commercial installations. RESOLUTION Offsite Data
Management modules have been installed on 10 sites.
Licensor. Resolution Integration Solutions, Inc., Solon,
Ohio. Contact: PeterLawrence@ris-resolution.com, Web-
site: www.ris-resolution.com, tel: (440) 519-1256.
Plant information (online
downtime reporting)
Application. ProcessMORe for Automated Missed
Opportunity Reporting tracks the causes of production
downtime, delays or reduced rates: in summary, missed
opportunities to make targeted production. Thus, any
time decisions need to be made on where to focus money
or resources for the best ROI, the information is at hand,
telling engineers and management where their issues lie,
whether in maintenance, equipment restrictions or other
areas.
Strategy. ProcessMORe is an online thin-client application integrated with a site's existing plant control
and information systems to provide complete information
on the causes and costs of missed opportunities to achieve
targeted production. Production, financial and event information are taken from all DCS and plant information
systems, providing Web-based ProcessMORe analysis
reports viewable by any authorized user ID on the
company's intranet. These reports are used by plant operations, maintenance and control departments to measure,
understand and address the top items that are limiting
the site's profitability. In the past, these reports took
maintenance and production departments weeks to
assimilate; now automated, they are available continuously and immediately.
The many analysis functions include: top 10 lists of the
most costly and frequently occurring production limita-
tions; mechanical availability key performance indicators
(KPIs); and sequence of events, to name a few.
Economics. Example benefits include improving mechan-
ical availability by 7%.
Commercial installations. ProcessMORe is installed
and licensed at over 10 facilities.
Licensor. Matrikon Inc., Houston, Texas, and Edmonton,
Alberta, and 15 offices worldwide. Contact: e-mail: pro-
cessmore@matrikon.com; Website: www.matrikon.com.
Plant information (OPC data
management)
Application. Matrikon's OPC Data Manager (ODM) is a
software application that transfers data from one OPC
server to another. Use ODM when you need to share data
between two or more control systems (e.g., PLC and a
DCS). With ODM, this connectivity can be accomplished
with standard off-the-shelf software.
Strategy. Traditional OPC-enabled systems share data
by implementing one application as an OPC client and
another as an OPC server. But sometimes neither appli-
cation is an OPC client; instead, both are servers. Two
OPC servers cannot exchange data directly, since each is
designed to respond to a client's requests and cannot
generate requests of its own. Matrikon's ODM solves this problem by
acting as a double-headed or thin OPC client to both
servers. It requests data from one server and immedi-
ately sends it to the other OPC server.
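The double-headed client pattern can be sketched generically: the bridge is a client of both servers, reading tags from one and writing them to the other. The `Server` class below is a stand-in abstraction, not an actual OPC API; a real bridge would use an OPC client library in its place.

```python
# Generic sketch of ODM's "double-headed client" pattern: the bridge
# acts as a client to both servers, requesting data from the source
# and immediately writing it to the destination. The Server class
# merely stands in for an OPC server's read/write interface.

class Server:
    def __init__(self, tags):
        self.tags = dict(tags)
    def read(self, tag):             # server answering a client read
        return self.tags[tag]
    def write(self, tag, value):     # server answering a client write
        self.tags[tag] = value

def bridge(source, destination, tag_map):
    """Copy each mapped source tag's value to the destination tag."""
    for src_tag, dst_tag in tag_map.items():
        destination.write(dst_tag, source.read(src_tag))

plc = Server({"FIC101.PV": 42.7})    # e.g., a PLC's OPC server
dcs = Server({"AI.FLOW_101": 0.0})   # e.g., a DCS's OPC server
bridge(plc, dcs, {"FIC101.PV": "AI.FLOW_101"})
print(dcs.read("AI.FLOW_101"))       # 42.7 - value transferred
```

The drag-and-drop configuration the text mentions essentially builds the `tag_map`; swapping the two server arguments gives the bidirectional read/write listed under benefits.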
Benefits include:
No programming (use drag-and-drop operation
instead)
Bidirectional read/write
Support for OPC 1.0a and 2.0
Runs as a Windows service.
Economics. ODM is an off-the-shelf software application
that connects control systems that have OPC servers.
With ODM, users avoid the need to use proprietary hard-
ware solutions to bridge their control systems. Since no
programming is required, users can get the connectivity
quickly.
Commercial installations. The OPC Data Manager has
been used in over 100 applications.
Licensor. Matrikon Inc., Houston, Texas, and Edmonton,
Alberta, and 15 offices worldwide. Contact: e-mail:
OPCInfo@matrikon.com; Website: www.matrikon.com.
Plant information
(outbound inventory
management)
Application. Manufactured product inventory levels are
collected from storage vessels at customer sites and dis-
played for secure viewing by product management per-
sonnel. Centralized field inventory data viewing allows
product personnel to optimize production schedules and
product shipments, and to enhance customer service. Customers benefit from a just-in-time purchasing model
that reduces their working capital and product restock-
ing efforts.
Strategy.
Remote-hosted solution: Product inventory data are
collected from new or existing inventory measurement
devices and transferred to a central database where they
are accessible to customer service representatives and
other product management personnel. Data are securely
collected across available channels (Internet, telephone,
paging network, satellite, etc.).
Secure AnyWhere/AnyTime access: Inventory data
access is typically via a set of user-specific password-pro-
tected Web pages, allowing anywhere/anytime access
from any Internet-connected device. Alternatively, col-
lected data can be forwarded back to plant or corporate
systems for integrating with local production planning
or optimization tools. All user access is to the Web pages
only; no user access is granted to any site systems.
User-configurable electronic alerting: Each application
comes with the ability to automatically alert product
and/or operations personnel of changes in inventory lev-
els versus specified targets or limits. Alerts are set on a
per-user basis and can be received via e-mail, cellphone,
pager, etc.
Works with existing business systems: The applica-
tion interfaces to most enterprise systems, allowing
automated reorder information to drive product ship-
ments. The application can also transfer inventory data
to business systems for product optimization, logistics
scheduling, etc.
Multi-customer inventory management: The Outbound
Product Management application is delivered as a
monthly service, suitable for use by any number of prod-
ucts at multiple customer sites. Automated data collection
takes place for each site, allowing product personnel to
take on complete inventory management responsibility
for their customers. This reduces working capital and
product restocking costs for the customer, and often leads
to exclusive multiyear supply arrangements.
Economics. The Outbound Inventory Management appli-
cation provides the following benefits:
Increased customer loyalty
Improved product management, eliminating cus-
tomer rush shipment requests
Improved product and operations planning
Increased visibility into customer consumption and
projected product demand
Reduced inventory reconciliation and transaction
effort.
Commercial installations. As of mid-2003, the Inven-
tory Management application is being delivered to over
250 sites around the world.
Licensor. Industrial Evolution, Inc., Phoenix, Arizona,
and ChemLogix LLC, Blue Bell, Pennsylvania; Websites:
www.industrialevolution.com or www.chemlogix.com;
e-mail: contact@industrialevolution.com; tel. (602)
867-0416.
[Figure: Outbound chemical management. Onsite control and information systems at the refinery or chemical plant connect (optionally via VPN) to the Industrial Evolution data center (real-time database and catalyst vendor applications); customer sites and terminals receiving product deliveries, and product management personnel, gain secure Internet access to the inventory data.]
Plant information
(recipe management)
Application. The Recipe Management solution provides
for a structured library of plant recipes, covering material
components and operational procedures. It can support
a complete ISA SP88 configuration and more.
Strategy. Recipe Management supports multistep, mul-
tifeed and multiproduct recipes. All data are stored as
values able to be manipulated or downloaded indepen-
dently. All values can have associated instructions and
complete audit trail associated with them. All recipes can
have associated documentation.
Recipe templates facilitate new recipe construction.
Recipes can be synchronized with ERP planning systems.
Recipe management is integrated with the Batch
Scheduling so that operations can be given specific rele-
vant instructions related to their schedule.
Benefits. Comprehensive recipe management is able to
provide operations with current instructions coordinated
with the schedule.
Commercial installations. RESOLUTION Recipe Man-
agement modules have been installed on two sites.
Licensor. Resolution Integration Solutions, Inc., Solon,
Ohio. Contact: PeterLawrence@ris-resolution.com, Web-
site: www.ris-resolution.com, tel: (440) 519-1256.
Plant information
(reliability/operations
management system)
Applications. Reliability and Operations Management
(R/OM) Solutions provides clients with real-time process
and equipment diagnostics through their existing operator's console. Nexus builds its integrated Nexus Oz solution framework into its solutions to significantly improve
plant operations performance, reliability and safety.
The role of Nexus Oz is not to provide solutions which
react to Abnormal Situations, but to help clients avoid
them. Nexus Oz informs the operator of process prob-
lems and enables the operator to correct the problems
before they become critical. Nexus Oz enables clients to
capture and deploy the best operations practices. This
function is extremely important since the knowledge of
the most experienced operators and engineers is transferred to new operators, while also building a consolidated global portfolio of best practices.
One of the benefits of Nexus Oz software is providing
the framework for a range of applications. The initial
project scope would include sensor validation and diag-
nostics for the equipment such as the instrumentation,
pumps, vessels, furnaces, etc. The application would also
include the event response procedure documents for the
critical failures for these pieces of equipment. Docu-
mentation of this information is generally available as a
result of the OSHA mandated HAZOP process. This con-
figuration of Nexus Oz enables the operator to call up
the corresponding response procedure for a detected
process upset or failure scenario. This scope of application
would enable the client to have the system quickly
installed and operational.
Reliability and operations management. Nexus Oz
enables integrating configuration information from the
control system database to quickly and efficiently pro-
vide the sensor validation and process operations advisory
functions for the process units. As a potential failure is
diagnosed, a message is propagated through a message
board on the operator's DCS console, specific to the operating area. Selecting the message calls up the appropriate DCS schematic, highlights the affected piece of
equipment and displays the appropriate operations
response for the situation.
The reliability management aspects of Nexus Oz integrate the dynamic sensor information defined above
with specialty data from systems like vibration analyzers
to include equipment health logic at the process unit
level. Each of the specific unit models, like cat crackers,
batch digesters or steam generators, is then integrated
within the Nexus Oz equipment object models to yield a
plant topology data model for the plant. The results of
these models are integrated with the client's maintenance management and predictive maintenance applications for improved asset management benefits.
The rule- and procedure-based reasoning and inferential logic features of Nexus Oz facilitate information
management between the offline planning and optimization models and the online unit operations. The
production management applications include the sys-
tems addressing shared resources such as fuel gas, steam,
hydrogen and amine systems for the complex. Additional
operations management applications are process unit
specific such as operator advisories for diagnostics on
furnaces, distillation columns, and other process opera-
tions. The operations management applications are
scoped with clear economic benefits based on their direct
impact on the process operations.
The combined knowledge of the organization about
the process, its normal and abnormal operations (includ-
ing startup and shutdown) and all documentation is
embedded in Nexus Oz and made available in real time
for operator assistance. It is also transferred to all unit
operators and other sites as best practices.
Benefits. Benefits of integrated Reliability and Operations Management applications can be very significant,
including fewer process upsets through the abnormal-situation applications, improved process performance
during normal operations, and the establishment and
implementation of best operating practices that drive
lower operating costs. Typical paybacks for the systems
are less than six months.
Commercial installations. Nexus Oz has been installed
at a number of refinery, petrochemical and chemical plants.
Licensor. Nexus Engineering, Kingwood, Texas,
www.nexusengineering.com.
Plant information
(Solomon benchmarking)
Application. The IndustryBest Performance Bench-
marking application automates key aspects of the bench-
marking process established by Solomon Associates to
deliver real-time feedback on plant operations versus
established performance goals, sister operating sites or
peer group competitors. Operating data are validated
against Solomon-defined norms to provide competitive
insight and a basis for measurable and sustainable
increases in operational efficiency and productivity, lead-
ing to strengthened profitability and market share.
Strategy.
Solomon benchmarks: Real-time operating data from
individual plants are collected from existing onsite sys-
tems and transferred to a secure, central database at
Industrial Evolution for integration with the Solomon
benchmarking application. Application results are vali-
dated versus past Solomon Studies and experience and
sent back to the plant site for display to management
and operations personnel.
Rigorous data security: Data security is key to this
application. In accordance with Solomon Associates'
years of benchmarking experience, no data are made
available to any individual or company outside of those
authorized by the operating company. All data communication is via Virtual Private Network only, with data
encryption and compression used to further secure individual data transfers or application results.
User-configurable electronic alerting: Each IndustryBest
application comes with the ability to automatically alert
plant and/or vendor personnel of changes in performance
benchmarking results versus specified targets or limits.
Alerts are set on a per-user basis and can be received via
e-mail, cellphone, pager, etc.
Interfaces to existing systems: The IndustryBest appli-
cation is able to collect data from over 350 types of plant
systems and devices for secure data transfer. Collected
data can be optionally reviewed by plant personnel prior
to application execution. Application results can be stored
back in the onsite control system, real-time database,
etc., for access and use by plant personnel, per their estab-
lished access privileges.
Service delivery model: The IndustryBest Performance
Benchmarking application is available as a monthly ser-
vice from Solomon Associates and Industrial Evolution.
Customers can select from a range of calculated perfor-
mance indicators to be benchmarked versus appropriate
peer group(s) in their market or geography.
Economics. IndustryBest brings the high-value com-
ponents of the well-established Solomon Associates
biennial performance benchmarking studies to the plant
as real-time performance and competitive indicators.
This increases awareness of operating costs, plant effi-
ciencies and overall plant performance, resulting in
heightened competitive awareness and sustainable
plant profitability.
Commercial installations. As of mid-2003, various com-
ponents of the IndustryBest solution have been installed
in seven hydrocarbon processing sites in North America.
Licensor. Industrial Evolution, Inc., Phoenix, Arizona,
and Solomon Associates, Dallas, Texas; Websites:
www.industrialevolution.com or www.solomonon-
line.com; e-mail: contact@industrialevolution.com; tel.
(602) 867-0416.
[Figure: IndustryBest performance benchmarking. Onsite and corporate data sources at the oil refinery connect via secure VPN connection(s) to the Industrial Evolution data center (real-time database and catalyst vendor applications), where Solomon Associates experts review the benchmarking results.]
Plant information
(target setting and non-
conformance monitoring)
Application. The RESOLUTION Target Setting solution
includes entry of the unit operating targets, both oper-
ating characteristics and material movements, plan or
target review, and adopting these targets as settings for
the control systems. The nonconformance monitoring
application automatically detects deviations from targets
and captures the reasons for the nonconformance.
Strategy. The target setting application area is con-
cerned with transferring target values to the operators,
first for information and then into the control system,
so that deviations from the targets can be tracked.
Target setting starts with the units and their plans.
There are several areas of detail:
What operating conditions are expected for the
duration of the plan: a target coil outlet temperature on
the furnace, a maximum recycle ratio on the tower
overhead, etc.
What material consumption and production are
expected during execution of this plan
Associated ad-hoc details about this particular plan.
Operations will want to review the plan, and if con-
sidered acceptable, download this plan as targets into
the control system.
A feature of the target setting solution is the ability
to detect nonconformance. RESOLUTION is constantly
examining the key performance indicators and deter-
mining which ones are out of specification. If one is
detected, a message is sent. The message must be
acknowledged, and the application requires users to
identify why they were not conforming to the plan for
that period.
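The detect-acknowledge-explain loop described above can be sketched as follows; `Target`, `Nonconformance` and `scan` are hypothetical names for illustration, not RESOLUTION's actual API:

```python
from dataclasses import dataclass

@dataclass
class Target:
    kpi: str
    low: float
    high: float

@dataclass
class Nonconformance:
    kpi: str
    value: float
    reason: str = ""           # operator must fill this in to close the event
    acknowledged: bool = False

def scan(targets, readings):
    """Return a nonconformance event for every KPI outside its target band."""
    return [Nonconformance(t.kpi, readings[t.kpi])
            for t in targets
            if not (t.low <= readings[t.kpi] <= t.high)]

# Example plan: a coil-outlet-temperature band and a recycle-ratio ceiling.
targets = [Target("coil_outlet_temp", 835.0, 845.0),
           Target("recycle_ratio", 0.0, 1.2)]
events = scan(targets, {"coil_outlet_temp": 851.2, "recycle_ratio": 1.1})
```

Each event would then generate a message that stays open until an operator acknowledges it and records a reason.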
Benefits. Accurately communicating the plan or opera-
tional instructions allows performance against this plan
to be accurately measured. Improved planning or
improved business processes can then remedy any devi-
ations from the plan.
Capturing nonconformance events is an essential part of
the feedback to the scheduling and planning process. Are
the feedstock assays incorrect? Are the simulation mod-
els inaccurate? Are there plant equipment limitations?
In combination with RESOLUTION's Key Performance
Indicator Management solution, one installation reported
breaking 18 operating records the month after installa-
tion. Benefits were estimated at between $8 million and
$20 million/yr.
Commercial installations. RESOLUTION Target Setting
modules have been installed on six sites.
Licensor. Resolution Integration Solutions, Inc., Solon,
Ohio. Contact: PeterLawrence@ris-resolution.com, Web-
site: www.ris-resolution.com, tel: (440) 519-1256.
Advanced Process Control and Information Systems 2003
Plant information (Web-
based decision support)
Application. ProcessNet is a leading Web-based indus-
trial decision-support system, integrating all data sources
(vendor-independent relational and time-series data)
into a common and responsive view of your plant oper-
ations. Plant key performance indicators (KPIs), along
with modern graphical and visual elements and connec-
tivity to legacy information sources, are only some of the
many pieces that ProcessNet brings together to provide
a total USER-focused industrial decision-support envi-
ronment, not a vendor-focused one. ProcessNet provides value
to all process enterprise levels.
Strategy. ProcessNet is often used as an enterprise por-
tal to production-based data, but is also scalable down to
a point solution: a thin-client, bi-directional front end
for existing or new applications. This, coupled with
advanced ProcessNet functionality such as event notifi-
cation and automated reporting, enables ProcessNet to
lever existing IT infrastructure and applications to pro-
vide users the ability to get more value out of their exist-
ing software investments.
ProcessNet acts as a virtual data warehouse, access-
ing and leaving data at its source, without duplication
into any additional database. This eliminates manage-
ment-of-change issues and always provides current
and accurate information. Data exporting functions
into standard file formats allow for data consolidation
from multiple sources, useful for further analytical appli-
cations or integrated reporting.
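The leave-data-at-source idea can be illustrated with a minimal sketch: hypothetical adapter functions stand in for live historian and LIMS connections, and consolidation happens only at export time, into a standard CSV format:

```python
import csv
import io

# Hypothetical adapters: each fetches on demand from its own live source
# (historian, LIMS, ...); nothing is copied into an intermediate database.
def historian_fetch(tag):
    return [("2003-06-01 08:00", 41.2), ("2003-06-01 09:00", 40.8)]

def lims_fetch(tag):
    return [("2003-06-01 08:30", 0.92)]

SOURCES = {"FI-101": historian_fetch, "LAB-C2": lims_fetch}

def export_csv(tags):
    """Pull current values straight from each source and consolidate
    them into one standard-format CSV string for downstream analysis."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["tag", "timestamp", "value"])
    for tag in tags:
        for timestamp, value in SOURCES[tag](tag):
            writer.writerow([tag, timestamp, value])
    return buf.getvalue()

report = export_csv(["FI-101", "LAB-C2"])
```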
ProcessNet is central server-based, providing thin-client
tools that enable a nontechnical user to both use and
administer the system. Users access ProcessNet through
their standard Web browser, a tool that they are already
familiar with. Therefore, ProcessNet training require-
ments are low, and acceptance high.
Economics. Return on investment is achieved in less
than one year. Typical benefits seen are: cost reductions
in client software licenses and vendor maintenance and
support agreements, reduced administration costs,
increased workforce efficiency, and the ability for pro-
cess enterprises to make timely decisions based on real-
time and accurate information from any source or loca-
tion in the enterprise.
Commercial installations. Over 150 installations across
12 countries worldwide.
Licensor. Matrikon Inc., Houston, Texas, and Edmonton,
Alberta, and 15 offices worldwide. Contact: e-mail: pro-
cessnet@matrikon.com; Website: www.matrikon.com.
Plant information (yield
accounting)
Application. The RESOLUTION Yield Accounting solu-
tion unifies unit material balance reporting with the
expected yields and the yields reported by offsites.
Strategy. Frequently, yield and unit reporting tasks are
largely unrelated: unit reports are created directly from
real-time database meter readings without reference to
actual charge and production movements; conversely,
yield reports rely heavily on tank gauges for charges and
productions. Unit reports are then used as the basis for
technological audits, simulation runs, LP vector genera-
tion, etc., despite possible discrepancies between them
and the yield reports. Additionally, the yield reports
provide useful information regarding actual feedstock
and product analyses.
Plant data reconciliation identifies anomalous move-
ments or meters. If unit personnel are involved in this
comparison as soon as possible, anomalies will likely
be recognized immediately. These two business processes
are unified within RESOLUTION Yield Accounting. The
next step would be to feed back the site-reconciled data
to the units and technical department so that they can
then use the same data.
The objective of this unification is to define a more
comprehensive unit-performance report that presents
the two versions of the data.
Benefits. Accurate yield information is key to success-
ful plant planning. Without accurate yield data, there is
always some doubt as to the source of deviations from
plan. Discrepancies invariably arise due to failure to accu-
rately report material movements. Most of these dis-
crepancies are easily resolved by the control staff when
presented with a clear comparison of the two versions
of the data.
Commercial installations. RESOLUTION Yield Account-
ing modules have been installed on 15 sites.
Licensor. Resolution Integration Solutions, Inc., Solon,
Ohio. Contact: PeterLawrence@ris-resolution.com, Web-
site: www.ris-resolution.com, tel: (440) 519-1256.
Plant information
(yield accounting)
Application. Aspen Advisor is AspenTech's client-server
application for yield accounting and data reconciliation.
It combines an easy-to-use GUI for modeling, reconcil-
ing and reporting with a robust object-oriented expert
system for guidance during the reconciliation process.
Its goal is to significantly improve the productivity of
ongoing performance control, scheduling and planning
activities.
Aspen Advisor boosts profitability by identifying prod-
uct loss and providing decisionmakers throughout the
plant with critical production data. In addition, it pro-
vides an important link between ERP and manufactur-
ing systems, generating transactional information from
continuous processes.
Strategy. Aspen Advisor prepares accurate, reconciled
material and utility balances that are passed both to the
ERP system and the planning and scheduling system. For
ERP connectivity, Aspen Advisor conditions and trans-
forms the raw plant data into transactional information.
Production and yield values are posted to the ERP sys-
tem through an interface that is automatically associ-
ated with the appropriate production order. This infor-
mation is also sent to the planning model for immediate
analysis of actual plant performance, while optimizing
the manufacturing operations by enabling faster, more
accurate decisionmaking.
As a stand-alone production accounting tool, Aspen
Advisor provides significantly more capability than tra-
ditional in-house custom spreadsheets. Data transfer
from the information management component is auto-
matic and preconfigured; nonroutine product move-
ments and permanent plant modifications are easily
entered and tracked via the GUI.
Aspen Advisor analyzes data and provides key results
with minimal user intervention. It first identifies and
aids the user in correcting any gross anomalies, and
then distributes any remaining random errors based on
flow values, instruments and tolerance. Its flexibility in
modifying reconciliation decisions, viewing interactive
reports and assisting users in error resolutions is unique
in the industry.
Its simultaneous, least-squares reconciliation engine
performs adjustments of measured movements to min-
imize unit imbalances and/or minimize adjusted devi-
ations of measured movements. The data reconciliation
engine is an integral component of the application;
its data reconciliation algorithm includes an object-
oriented paradigm for resolving gross errors, as well as
a mathematical error distribution algorithm for dis-
tributing random errors. Multiple strategies are avail-
able for reconciling discrepancies among inventory,
receipts/shipments, oil movements and process unit
readings, and data reconciliation strategies can be cus-
tomized to produce optimal results for a specific man-
ufacturing facility.
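As a rough illustration of the simultaneous least-squares idea (not Aspen Advisor's actual engine), consider one unit with a feed meter and two product meters; with a single balance constraint, the weighted adjustment has a closed form:

```python
def reconcile(measured, sigma):
    """Weighted least-squares reconciliation around one unit whose
    balance is feed - product1 - product2 = 0 (constraint a.x = 0).

    With a single constraint the solution is closed-form: each meter
    is adjusted in proportion to its variance, so the least-trusted
    meter absorbs most of the imbalance.
    """
    a = (1.0, -1.0, -1.0)
    imbalance = sum(ai * mi for ai, mi in zip(a, measured))
    s = sum(ai * ai * si * si for ai, si in zip(a, sigma))   # a Sigma a^T
    lam = imbalance / s
    return [mi - ai * si * si * lam
            for ai, mi, si in zip(a, measured, sigma)]

# Feed meter reads 100; products read 58 and 45: a 3-unit imbalance.
# The product-2 meter has twice the tolerance, so it moves the most.
flows = reconcile(measured=[100.0, 58.0, 45.0], sigma=[1.0, 1.0, 2.0])
```

The reconciled flows balance exactly, and the adjustment to each meter scales with its tolerance.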
For loss monitoring and early warning of operational
problems, Aspen Advisor provides reporting of adjust-
ments on a frequency basis, allowing problems to be
rapidly identified so corrective action can be taken.
Data in Aspen Advisor are stored in industry-standard
relational databases. The system is 100% ODBC-compli-
ant, and all major RDBMS solutions (including Oracle,
Microsoft SQL Server and IBM DB2) are supported.
Economics. Conservative estimates for a simple 100
Mbpd refinery indicate that Aspen PIMS can realize $10
million in annual benefits from improved crude selec-
tion, unit performance and blending.
Commercial installations. The industry standard for
petroleum industry planning, Aspen PIMS is licensed at
over 400 sites, and is used by more than 75% of the
refineries, and more than 60% of all petrochemical plants
in the world.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Plant information analysis
Application. To remain competitive, hydrocarbon pro-
cessing plants require the ability to analyze current pro-
cess data in real time. To this end, Aspen Technology's
Aspen Multivariate can be deployed to enable quicker
assessment and understanding of complex processes, ear-
lier and more reliable detection of faults, and increased
processing throughput and capacity.
Strategy. Aspen Multivariate is both an offline tool used
to analyze process data and build a model that captures
the data correlations and an online tool for monitoring
the process. Used online, an Aspen Multivariate model
can detect faults based on the reference model built
offline. The system uses current and historical process
data from AspenTechs InfoPlus.21 information man-
agement technology and generates information as graph-
ical plots that visually depict process conditions and fault
indications.
The latest version of Aspen Multivariate adds the capa-
bility to deploy models online with model results and
alarm conditions being written to an underlying Info-
Plus.21 database. The model results can be used for pro-
cess analysis and the alarms can be used to alert opera-
tions and engineering of process abnormalities.
Aspen Multivariate uses Principal Component Analy-
sis, a powerful statistical technique for transforming a
large number of process variables into a small number
of principal components, which can be used to analyze
the process. Models can be built in as little as a few hours
and can be the basis for significant operational improve-
ments. Examples of Aspen Multivariate models used for
process data analysis and resulting in process improve-
ments include:
Equipment fault diagnosis in a vinyl chloride
monomer stripper column based on a model constructed
from data gathered from InfoPlus.21. Data from three
days of operation was used and included 17 variables/tags
as well as inlet and outlet feed flow for the inlet surge
tank. The data described the column and its associated
equipment. The model identified a problem with the
propellers on the surge tank agitator, thus explaining
some extremely erratic behavior in the downstream col-
umn and allowing the column to be controlled more effi-
ciently. Correcting the problem increased column
throughput.
Instrument fault diagnosis in an olefins coproducts
furnace, based on a model constructed from data gath-
ered from the InfoPlus.21 system over 10 days of operation
for 21 furnace variables/tags, including propane feed
flow. Analysis of the Aspen Multivariate Dual-Principal
Component plot indicated that the propane feed con-
troller valve had malfunctioned. It was subsequently deter-
mined to be out of calibration. Correcting the calibration
allowed the furnace to be operated more stably, thus
putting far less stress on the downstream process units.
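A minimal two-variable sketch of the underlying Principal Component Analysis idea (not Aspen Multivariate itself): fit the principal direction to reference data, then use a new sample's distance from that direction as a fault indicator, as in the examples above.

```python
import math

def covariance(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def first_pc(xs, ys):
    """First principal component of two process variables
    (closed form for the 2x2 covariance matrix)."""
    sxx, syy = covariance(xs, xs), covariance(ys, ys)
    sxy = covariance(xs, ys)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)

def fault_indicator(sample, mean, pc):
    """Distance from a new sample to the principal-component line.
    A large residual means the normal correlation structure is broken."""
    dx, dy = sample[0] - mean[0], sample[1] - mean[1]
    proj = dx * pc[0] + dy * pc[1]
    return math.hypot(dx - proj * pc[0], dy - proj * pc[1])

# Reference data where the two tags normally move together (y ~ 2x).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.0, 8.1, 9.9]
mean = (sum(xs) / len(xs), sum(ys) / len(ys))
pc = first_pc(xs, ys)

normal = fault_indicator((6.0, 12.0), mean, pc)   # on the usual trend
faulty = fault_indicator((4.0, 3.0), mean, pc)    # correlation broken
```

In a real application the model is built offline over dozens of tags, and the residual (together with the scores) is written back to the historian for alarming.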
Economics. Aspen Multivariate applications can quickly
provide a payback of many times the installation costs
through process troubleshooting and upset avoidance.
Commercial installations. Aspen Multivariate is
installed and running in multiple process plants in the
refining, chemical and petrochemical industries. More
installations are in progress.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Plant information analysis
Applications. A root problem of engineering and sci-
ence has been the limit of two or three variables in a
graph. Suppose you could easily display 1,000 sets of
observations of 50 variables in one interactive graph.
This would be the equivalent of 1,225 conventional x-y
graphs of two variables! This bigger, better picture would help the
engineer gain new insight and understanding with which
to improve the process and/or its operation and/or con-
trol. Interactions between parameters and difficult
product qualities such as polymer color, melt index and
particle size would become apparent. Identical pro-
cess units could be compared (one is always better
than the other). Give-away and the conditions under
which it occurs could be seen!
Such a graph exists in an engineer-focused implemen-
tation, Curvaceous Visual Explorer (CVE). An example
of a food process operating with several different vari-
ables in different modes with five queries is shown above.
CVE is also a base component of Geometric Process
Control (GPC) where it is used to define a best operat-
ing zone (BOZ).
Capabilities. CVE allows display of, and interaction
with, a graph containing several hundred variables, which
brings a whole new power to finding cause-and-effect
relationships in large-scale process plants. Many thou-
sands of points (rows of a spreadsheet) can be displayed
and multiple focus levels allow refinement to smaller
sets of points for more effective visual analysis. CVE pro-
vides one- and two-dimensional graphic queries allow-
ing a user to quickly and nonmathematically focus on
interesting areas of plant behavior. Automatically gen-
erated algebraic and Boolean representation of queries
can be exported as rules for use in a rule-based system.
Rules generated by other means can be examined for a
true multivariate view of their consequences. Algo-
rithms for multivariable cluster analysis and paramet-
ric analysis are also included.
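The export of a graphical query as an algebraic/Boolean rule can be sketched as follows; the rule format shown is assumed for illustration, not CVE's actual export syntax:

```python
def rule_from_query(query):
    """Render a per-variable range query as an exportable Boolean rule."""
    return " and ".join(f"{lo} <= {var} <= {hi}"
                        for var, (lo, hi) in sorted(query.items()))

def matches(row, query):
    """Apply the same query to one observation (a spreadsheet row)."""
    return all(lo <= row[var] <= hi for var, (lo, hi) in query.items())

# A query over two awkward polymer qualities named in the text.
query = {"melt_index": (2.0, 4.0), "color": (80.0, 100.0)}
rule = rule_from_query(query)
good = matches({"melt_index": 3.1, "color": 92.0}, query)
bad = matches({"melt_index": 5.2, "color": 92.0}, query)
```

A rule exported this way can be dropped into a rule-based system, and rules from other sources can be evaluated against the displayed points in the same manner.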
Economics. All improvements ultimately stem from bet-
ter understanding of how a process really works. CVE
gives the user a much larger view than ever before of
the process and does not require any specialist knowl-
edge of mathematics. This at least doubles user pro-
ductivity compared to previous methods and means that
more engineers are willing to use it. Combined, these
factors multiply the amount of effective analysis being
performed and, with it, the number of economically
viable improvement discoveries. Curvaceous Process
Camera used in conjunction adds even more improve-
ment, especially in the understanding of how variables
really interact.
Commercial installations. CVE is operational on over
60 sites in the UK and North America. Applications
include process analysis and improvement in batch and
continuous processes ranging from oil refining to plas-
tic injection moulding, and from manufacture of solid
rocket fuel through drilling of oil wells to visualization
and decomposition of neural networks, commodities
trading and emissions reduction in CHP plants.
Licensor. Curvaceous Software Limited, Gerrards Cross,
UK; Website: www.curvaceous.com; e-mail: enquiries@
curvaceous.com.
Plant information
integration
Application. The Integrated Control and Information
Management System (ICIMS) streamlines operations and
enhances decision support by providing integration between control
system, safety system, plant information management
systems, technical information systems and business infor-
mation systems. The computerized dataflows between
these applications minimize customized integration and
enables businesses such as petrochemical producers to
monitor product costs on an activity basis across facility
and national boundaries. The system includes a complete
TCP/IP information network architecture, office automa-
tion, plant historian, laboratory information system,
CADD/electronic document and plant maintenance, plus
human resources, financials and business reporting based
on an ERP (SAP, BAAN, or JD Edwards) implementation.
Capabilities. This information technology solution pro-
vides a complete integration of real-time petrochemical
plant information with business/transactional systems to
coordinate all operations personnel, technical groups,
plant management and business management. The plant
information system portion combines all production sys-
tems, laboratory, security, safety and building systems
into a unified database. The ICIMS solution enforces the
best practices of petrochemical plant business pro-
cesses via automated workflow, document management
and business system integration. Furthermore, it ensures
that plant documentation is consistently current, that
training records and authorized procedures are main-
tained and that all other ISO practices are followed.
Real-time information integration to maintenance
asset management minimizes unnecessary maintenance
procedures and inventories, while ensuring plant equip-
ment and personnel availability. Automated business
reporting driven from production systems allows up-
to-the-minute production reporting that contains all
flow-through costs and profitability (labor, materials
and overheads). The real-time linkage between product
demand and inventory and distribution systems dra-
matically speeds product changeover and minimizes
on-hand inventories.
Economics. Field results indicate the following economic
benefits:
Reduction of tank farm safety-stock inventory from
two weeks to three days
150% increase in product changeover speed
30% reduction in maintenance-related expendi-
tures
5% increase in petrochemical plant uptime due to
real-time maintenance condition monitoring
50% reduction in manual paperwork
30% reduction in information network costs due
to streamlined computer architecture
40% reduction in ICIMS system maintenance costs
due to reduced suppliers and customer software inter-
faces.
Commercial installations. ICIMS for petrochemical
plants are installed at five plants in the US, Europe, Mid-
dle East and Asia.
Licensor. Invensys Hydrocarbons Solutions, Foxboro,
Massachusetts. Contact: pamela.williams@invensys.com.
Plant information
integration
Application. RESOLUTION provides a comprehensive
plant information system that has:
Specific business solutions for the chemical, petro-
chemical, gas processing, refining, food and process man-
ufacturing industries, e.g., production reporting, mass
balancing, key performance indicators, batch tracking,
data reconciliation and more.
Configurable workflow components to match your
business.
Strategy. RIS's RESOLUTION product line is a compre-
hensive plant information system that integrates your
plants isolated systems and software, providing effec-
tive solutions that adapt to your business needs. RESO-
LUTIONs configurable components include RELAYER,
REPOSITORY and RISnet.
RELAYER is an XML messaging system that utilizes intel-
ligent listeners to break down integration barriers, allow-
ing third-party applications to communicate via an
enhanced intelligent workflow. Interfaces to industry
standard products already exist: OSI PI, Honeywell PHD,
Baytek BLISS, PSDI Maximo, Aspen ADVISOR, OSI Sig-
mafine and more. RELAYER includes message-driven mod-
ules for scheduling activities, collecting operating data,
deriving and summarizing data, and producing complex
analyses.
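The listener pattern behind such XML messaging can be sketched as follows (hypothetical message schema and function names, not RELAYER's actual interfaces):

```python
import xml.etree.ElementTree as ET

# Listeners register for a message type and react when one arrives.
LISTENERS = {}

def listener(msg_type):
    def register(fn):
        LISTENERS.setdefault(msg_type, []).append(fn)
        return fn
    return register

@listener("lab_result")
def store_lab_result(msg):
    # A real listener would write to the plant data bank; here we just
    # return what would be stored.
    return ("stored", msg.findtext("tag"), float(msg.findtext("value")))

def relay(xml_text):
    """Parse one XML message and hand it to every listener for its type."""
    msg = ET.fromstring(xml_text)
    return [fn(msg) for fn in LISTENERS.get(msg.get("type"), [])]

results = relay('<message type="lab_result">'
                '<tag>LAB-C2</tag><value>0.92</value></message>')
```

New third-party integrations are then a matter of registering another listener, rather than writing a point-to-point interface.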
REPOSITORY, the plant data bank, provides an inte-
grated view of all plant data, facilitating knowledge
management and, in turn, enhanced financial insight.
RISnet is a Web-based user interface development envi-
ronment, with the ability to rapidly generate full trans-
action-capable Web forms matching client business pro-
cesses.
RIS also provides a set of standard application solutions
or RESOLUTIONS covering key plant business processes.
Adapting these components to meet specific business
requirements can be achieved by entering configuration
data and not programming. RESOLUTIONS include:
Comparative performance monitoring (Solomon)
Operator logbook
Laboratory management
Project management and tracking
Product specification management
Quality assurance
Production management
Equipment inspection and testing
Shipping
Planning and scheduling
Documentation management
Offsite data management
Equipment specification
Batch tracking
Key performance indicator (KPI)
Yield accounting.
Benefits. RESOLUTION provides for integrating ALL plant
data: operational, economic, engineering, planning, main-
tenance, documentation and more. This provides one-
stop shopping for information and eliminates data ambi-
guity and duplication, allowing efficient use of infor-
mation. This improves analysis of any aspect of the
business, in particular, comparison of plan versus actual.
Typical plants might have more than 100 different
applications. RESOLUTION allows the number of appli-
cations to be drastically reduced, hence, greatly reduc-
ing costs.
RESOLUTION is designed with integration in mind: its
RELAYER tools for third-party system integration greatly
reduce the cost of a systems integration project.
Commercial installations. RESOLUTION modules have
been installed on 20 sites.
Licensor. Resolution Integration Solutions, Inc., Solon,
Ohio. Contact: PeterLawrence@ris-resolution.com, Web-
site: www.ris-resolution.com, tel: (440) 519-1256.
Plant information
integration (ERP)
Applications. Corporate ERP/plant knowledge-based
integrated management information systems have been
developed and implemented for global refinery, olefin,
polyolefins, ethylbenzene, styrene/PS, EG/PTA/polyester
fibers, caprolactam and nylon fibers companies. They
support restructuring, reengineering, TQM cost-reduction
strategic decision analysis, expert-system model-based
e-business strategy and APC/DCS applications to
maximize supply chain productivity.
They integrate information, Internet and intranet
technology into finance, cost accounting, human
resources, feedstock and fuels procurement, inventory
supply chain and plant daily operating information,
equipment and instrumentation/DCS maintenance, and
emergency shutdown, startup and explosion-accident
information systems, and support on-the-job training
for operating and information technology staff.
Strategy.
Information knowledge base development. These infor-
mation knowledge systems have been developed from 20
years of daily US, Asian and European Wall Street Journal,
Business Week and Economist coverage; IMF economic and
NPRA data; DeWitt and Platt market newsletter data;
global central bank monetary policy, economics and
business information; extensive literature and patent
searches; daily Internet information on US, European,
Taiwan, China and Asia-Pacific crude oil, fuel oil, ethylene,
EG, PTA, polyester fiber and PET spot and contract prices;
the entire corporate/plant operating history (including
normal, crisis and emergency operations); and management
and plant operator expertise and market psychology, all
serving as the knowledge base supporting expert-system-
based decision simulators. Features include:
Global central banks monetary policy, financial mar-
kets interest rates, currency, commodities and deriva-
tives prices information
Global crude oil, fuel oils, gas oils, ethylene, EG, PTA
and benzene feedstock prices, inventory and procure-
ment information systems
Global refining products, olefin, styrene, polyester
and nylon fibers competitive spot, and contract pricing,
and marketing and sales information
Corporate/plant cost accounting (unit consumption)
information
Corporate/plant manpower, function and perfor-
mance information
Process plant operating/DCS management informa-
tion: process startup, emergency shutdown, trouble-
shooting, waste minimization, energy conservation,
equipment design, instrumentation and maintenance.
Operations management implementation. OSA con-
sultant Dr. Warren Huang will conduct the corporate/plant
operations, restructuring, reengineering and cost-reduc-
tion review and set up goal- and mission-performance-
oriented OSA teams to develop and implement the plant
strategic information knowledge management systems
supporting daily corporate/plant decision simulation anal-
ysis for supply chain e-commerce cost reductions.
Economics. Up to $1 billion saved without staff cuts
by these information knowledge-based OSA decision
simulators.
Commercial installations. Twelve refinery/petro-
chemical ERP/plant information integration installations,
and 100 corporate/plant integrated information manage-
ment applications workshops offered.
References. All by Dr. Warren Huang, OSA: "Improve
process by OSA," a 12-paper series in Hydrocarbon Pro-
cessing and Oil & Gas Journal, 1979–1983; "Goal, Mission
Performance Oriented Design/Operations Simulations
Analysis Predictive Control Maximize Refinery-Olefin,
Styrene, Polyester, Nylon Fiber Mills Productivity, Flexi-
bility," AIChE Diamond Jubilee (1983) and 1990 and 1999
annual meetings, Dallas; World Congress II, III and IV,
Canada, Tokyo and Germany, 1983, 1986 and 1991; Sin-
gapore, Beijing, Antwerp, Shanghai and Dallas, 1989,
1992, 1995, 1997 and 1999; international central bank
governors conferences, Macau, May 15, Taipei, May 29,
and Barcelona, June 3, 1999, and Washington, D.C., June
30, 1999; supply chain strategy to maximize oil, gas and
chemical profits conference/workshop, Singapore, April
26–27, 2001.
Licensor. OSA Intl Operations Analysis, San Francisco,
California; Website: www.osawh.com; e-mail
wh3928@yahoo.com.
Plant information
management
Application. Information management systems play a
key role in supporting entire enterprises, from the infor-
mation pumping out of the plant to the communication
among suppliers in the extended supply chain. These sys-
tems are used in the hydrocarbon processing industries to
improve operational reliability, enhance plant and envi-
ronmental performance, reduce costs and provide the
data platform for integrating plant systems to other plant
and business applications.
Strategy. In today's networked environment, it's not
enough to focus attention on the plant and the produc-
tion of a particular product. Processes must be optimized
across all critical value chains: engineering, manufactur-
ing, the extended supply chain and beyond into the realm
of digital marketplaces. To do this, information is required
from all levels, quickly. AspenTech's InfoPlus.21 infor-
mation management system provides the infrastructure
to capture, integrate, manage and display plant data,
just as ERP systems integrate business data. InfoPlus.21
also provides the infrastructure for integrated plant sys-
tem applications, such as production control, production
management and quality management systems.
InfoPlus.21 captures and integrates plant transaction
and real-time data from:
On-plant systems, such as control systems, instru-
ments and analyzers
Execution systems, such as those performing real-
time optimization, advanced process control, laboratory
systems and online quality management systems
Operators' interactions with the plant.
InfoPlus.21 also integrates plant data with the ERP sys-
tem to improve supply chain visibility, enhance plant deci-
sion support and provide better quality data across the
enterprise.
Economics. InfoPlus.21 information management solu-
tions enable users to make intelligent business decisions
quickly and accurately. The benefits arising from an inte-
grated information management system include:
Operational reliability through a holistic view of
how the plant is operating, which enables plant operators
and managers to identify and preempt situations that
might otherwise lead to costly plant shutdowns.
Enabling a real-time supply chain with real-time
production information. Plant managers can get updated
production schedules for the next few hours. This short-
term planning can be shared with suppliers who work
in just-in-time mode to optimize their production.
Capacity, yield and quality improvements. These
benefits arise from better decision-making and main-
taining stable operation closer to operating limits. Exam-
ples include reduced variability in product quality, faster
changeover and reduced rework.
Inventory cost reductions from improved operating
decisions.
Capital cost reductions through the use of histori-
cal operating data to investigate operating scenarios
when designing plant expansions/revamps. This, com-
bined with process modeling, can lead to avoiding sub-
stantial capital expenditures.
Improved personnel productivity through the use
of sound decision-support tools, reports and process anal-
ysis tools.
Regulatory compliance through automatically gen-
erated reports, tracking and early notifications to mini-
mize or prevent outages. Savings are achieved through
reduced efforts to prepare reports, avoiding penalties
from regulatory authorities and improved relations with
local residents.
Commercial installations. AspenTechs InfoPlus.21 has
been implemented in over 2,500 operating sites.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Plant information
management
Application. Exaquantum from Yokogawa is a plant
information management system (PIMS). It provides
business benefits to users in a wide range of industries
including hydrocarbons, power and chemicals. It is one of
the most comprehensive PIMS available for the process
industries. Exaquantum is suitable for continuous and
batch processes.
Exaquantum acquires process data and transforms it
into easily usable, high-value, widely distributed infor-
mation. This becomes an integral part of the tools used
in decision-making.
Strategy. To provide data capture, integration and
reporting, Exaquantum comprises the following features:
Process Control Systems interface. Exaquantum pro-
vides PCS data access using the OPC standard.
Data processing and storage. The Exaquantum real-
time database (RTDB) is tag-based. Quality codes, statis-
tical capabilities, data aggregations, data assembly into
function blocks and user-scripting of logic pathways are
integral to the informational tags.
Role-based view of resources. Exaquantum can be con-
figured so user groups have their own view of informa-
tion. This avoids lengthy searches through large volumes
of data. Tags are stored in folders, grouped with their
associated information. Data access and security is pro-
vided at this level.
Multiple server support. Multiple Exaquantum servers
can be configured so that information is available as a
single resource.
Data visualization. Exaquantum supports varied visu-
alization needs through Exaquantum Explorer and
Exaquantum/Web.
Exaquantum Explorer offers detailed graphics con-
figuration, including runtime support, trending, alarms
and events, data entry and write-back. Further advanced
features are available in addition to a comprehensive
Excel add-in.
Exaquantum/Web allows a wider variety of users to
access plant information using only their Web-browser.
Data availability through OLE DB/ODBC and a pub-
lished API, if required.
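As an illustration of the tag-based storage concept described above (the class and quality codes below are a simplified sketch, not Exaquantum's actual data model or API), a historian tag can carry a quality code with each sample so that aggregations skip bad data:

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative quality codes, loosely following the OPC convention
GOOD, UNCERTAIN, BAD = 192, 64, 0

@dataclass
class Sample:
    timestamp: float  # seconds since epoch
    value: float
    quality: int      # GOOD, UNCERTAIN or BAD

def aggregate_average(samples):
    """Average a tag's history, skipping samples whose quality is BAD."""
    good = [s.value for s in samples if s.quality >= UNCERTAIN]
    return mean(good) if good else None

history = [
    Sample(0.0, 10.0, GOOD),
    Sample(1.0, 12.0, GOOD),
    Sample(2.0, 999.0, BAD),  # failed sensor reading, excluded
]
print(aggregate_average(history))  # 11.0
```

Attaching quality at the sample level, rather than discarding bad readings outright, lets downstream consumers decide how strict each aggregation should be.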
Exaquantum/Batch. Exaquantum/Batch is an intelli-
gent, scalable S88-based Batch PIMS product. It pro-
vides analysis and reporting and collects, stores and dis-
plays current and historical data from batch production,
equipment and recipe formulation.
Exaquantum/SER. Exaquantum/SER is an event-driven
integrated reporting system that acquires alarm and
event messages and point data from plant monitoring
and control systems and then stores them in a single
database. For trip reports, a configuration tool is pro-
vided to set conditions and report content. Sequence of
events reports are generated on request, displaying mes-
sages from all available sources.
Commercial installations. Exaquantum PIMS has
been installed in over 200 plants worldwide. Exaquan-
tum is the PIMS of choice for the hydrocarbon industry,
with over 150 installations in this sector alone.
Licensor. Yokogawa Electric Corporation, Tokyo,
Japan, e-mail: info@ymx.yokogawa.com, Website:
www.ymx.yokogawa.com.
System overview diagram: Exaquantum/Web and Exaquantum/Explorer clients connect over the intranet and local area network to Exaquantum/PIMS servers (role-based view, administration tools, real-time database, historian, long-term archive, PCS interface), which access OPC servers (DCSs, PLCs, etc.) and external data such as ERP and LIMS.
Plant operations
management
Application. The Business.FLEX PKS software applica-
tions provide Process Knowledge Solutions (PKS) that
unify business and production automation. Business objec-
tives are directly translated into manufacturing targets,
and validated production data are returned to close the
loop on the business planning cycle.
Business.FLEX PKS applications for operations man-
agement support monitoring and analysis of process
operations, as well as integration with control
systems, including Honeywell's advanced control and opti-
mization solutions. When integrated with Honeywell's
alarm management applications, these applications help
to overcome abnormal situations, such as upsets, and
ensure safe and profitable production.
The Operating Instructions module manages oper-
ating targets and instructions for production steps.
Operating Instructions can serve as the link between
planning, scheduling and advanced control, ensuring
that business objectives are accurately translated into
production targets and properly communicated. The
Business.FLEX PKS planning and scheduling tools and
Honeywell's advanced control system can be integrated
to streamline the process of translating plans into pro-
duction.
Operations Monitoring compares operating targets to
actual results, and provides tools for explaining and ana-
lyzing the differences. Operations Monitoring helps
reduce production variability and cost, and improves
throughput and yields by showing where and why plans
were not achieved.
Event Monitoring detects, records and communicates
operating events. It is useful for detecting and record-
ing operating modes, unit and equipment
outages, and other occurrences that merit further
analysis.
Operations Logbook provides better access and man-
agement of operations information. Information from
different sources is consolidated in a common view to
give operators, supervisors and engineers a consistent,
up-to-date window into key operating data, including
shift reports, operator comments, daily shift orders and
daily shift task management.
Strategy. Business.FLEX PKS Operations Management
applications form an integrated solution suite that
enables improved operational performance. The solu-
tion systematically sets and communicates operating
plans, monitors process data against limits, and high-
lights priorities on deviations. It provides a better under-
standing of performance versus industry norms, and
knowledge of true operating limits for better reliability
and agility. The solution helps reduce energy use while
improving yield, product consistency and run lengths.
When combined with Honeywell's alarm management
solutions, these applications help to overcome abnormal
situations, and ensure safe and profitable production.
Economics. Benefits are realized from effective unifi-
cation of business and production automation. As a result,
companies can typically increase production by 2% to 5%
and decrease costs by 0.5% to 1%. Major benefit areas are
improved operational effectiveness, market responsive-
ness, quality control, personnel productivity, customer
satisfaction, conformance to environment controls and
reduced working capital requirements, operating costs,
raw material utilization, utility consumption, product
returns and inventory levels.
Commercial installations. Over 1,000 Business.FLEX
PKS licenses have been installed throughout the world,
including at refineries, offshore platforms, chemical plants
and petrochemical complexes.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
Diagram: Operating Instructions, Operations Monitoring, Event Monitoring and Operations Logbook link RPMS plant planning and scheduling, LIMS quality and product data, ASSAY feedstock selection, and advanced process control and optimization.
Plant operations
management
Applications. Rigorous kinetic and knowledge-based
expert-system models of refinery, olefin, polyolefin,
styrene, caprolactam, polyester and nylon fiber mill
process reactors and downstream recovery units support
design and operations simulation, improving daily plant
operating decisions and predictive control for advanced
process control (APC) applications.
The results are optimal feedstock allocation, blending
for full-range feed compositions, reactor yield improve-
ments across operating loads, process debottlenecking,
energy conservation, waste minimization, preventive
maintenance and safety management, quality assurance
for downstream customer processing, DCS/CIM system
design and integration, and reduced supply chain cost.
On-the-job training of technical and operating staff for
full-range feed variations, operating loads and severity
changes, supporting plantwide supply chain cost reduction,
product innovation and quality improvements, is also provided.
Strategy.
Information knowledge base development. These sys-
tems have been developed from past hourly plant oper-
ating history (including normal, crisis and emergency oper-
ations) and from the expertise of management and plant
operators.
Process plant unit OSA model development. The lat-
est statistical, thermodynamic and kinetic theories, and
artificial intelligence techniques (fuzzy logic, neural net-
works and chaos theory), have been applied to develop
expert-system-based decision simulators covering the
entire operating history and the expertise of technical
and operating staff. Features include:
Feedstock and fuel price forecasting, procurement,
inventory scheduling, blending and supply chain strate-
gic analysis
Reactor yield improvement and debottlenecking,
and polymer processing quality improvement, for full-
range feeds, loads and severity changes
Process troubleshooting and debottlenecking beyond
design
Process energy conservation, cutting fuel and steam
unit consumption
Process waste management, with tracking and simu-
lation of pollution sources for waste minimization
Maximized product recovery with minimized off-
spec losses
Process plant quality assurance and preventive equip-
ment safety and maintenance management
On-the-job training for process plant technical, oper-
ating, DCS and Internet e-business strategy staff.
Operations management implementation. OSA con-
sultant Dr. Warren Huang conducts a corporate/plant
operating cost reduction review and sets up goal-, mis-
sion- and performance-oriented cross-functional OSA
strategic execution teams to achieve a $20-million cost
reduction with improved quality and market share, with-
out staff cuts or hardware investment.
Economics. Over $20 million saved without staff cuts.
Commercial installations. Applied at over 30 refinery,
olefin, polyolefin, ethylbenzene, styrene and caprolactam
plants and nylon and polyester fiber mills; 140 TQM
cost reduction workshops offered to corporate and plant
managers and technical, operating and DCS staff.
References. All by Dr. Warren Huang, OSA:
"Improve process by OSA" and "Improve naphtha cracker
operations," February and May 1980, and "Optimize styrene
units," April 1983, Hydrocarbon Processing; "OSA maxi-
mizes ethylbenzene, styrene unit productivity, flexibility,"
January and March 1983, Oil & Gas Journal, and a 12-paper
series in Hydrocarbon Processing and OGJ, 1979-1983;
"Control of Cracking Furnace," US patents, 1981, 1982;
"Goal, Mission, Performance Oriented Design/Operations
Simulations Analysis Predictive Control Maximized
Refinery-Olefin, Fiber Mills Productivity, Flexibility,"
AIChE 1983 Diamond Jubilee and 1990, 1999 annual
meetings, Dallas; World Congress II, III, IV, Canada, Tokyo,
Germany, 1983, 1986, 1991; "Refinery Optimal Control,"
Singapore, Beijing, Antwerp, 1992, 1995, 1999; "OSA
Integrated Supply Chain Strategy Maximize Oil, Gas,
Chemical Profit," Singapore Supply Chain Conference/
Workshop, April 26-27, 2001.
Licensor. OSA Intl Operations Analysis, San Francisco,
California; Website: www.osawh.com; e-mail
wh3928@yahoo.com.
Plant optimization
Application. Geometric Process Control (GPC) provides
the first mathematical unification of process control,
product quality control and process alarm management.
It includes a completely new operator display that is intu-
itive and easily understandable, giving operators previ-
ously unavailable information on the currently usable
ranges of all variables. It uses a multivariable Best Oper-
ating Zone (BOZ): a business objective, such as efficient
operation or in-spec product, is used to extract a subset
of actual process capability from existing process and
LIMS historian data.
This step is performed by visual analysis using the Cur-
vaceous Visual Explorer. The BOZ is defined by a set of
representative points as its basis for distinguishing
between normal and abnormal operation. The BOZ is
converted by Curvaceous Process Modeller (CPM) in min-
utes into an equation-less multivariable and nonlinear
model containing knowledge derived from both process
history and laboratory quality history. Alarm correction
advice is given to the operator or to an advanced con-
trol system when the process or the predicted qualities are
outside the BOZ; optimizing advice is given when inside
the BOZ. Advice is generated by a same-for-everyone
algorithm entirely avoiding any need for a rule base and
its associated costs. Process safety is greatly improved as
a consequence of much better alarm definitions, fewer
false alarms and reduced annunciation rates.
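The BOZ advice mechanism can be illustrated with a deliberately simplified sketch (per-variable ranges rather than the true multivariable, nonlinear zone; the variable names are invented and this is not Curvaceous's algorithm):

```python
def boz_check(point, representative_points):
    """Return the variables of an operating point that fall outside
    the per-variable range spanned by the Best Operating Zone points.
    (A real BOZ is multivariable; independent ranges are a crude proxy.)"""
    violations = {}
    for var, value in point.items():
        values = [p[var] for p in representative_points]
        lo, hi = min(values), max(values)
        if not lo <= value <= hi:
            violations[var] = (lo, hi)
    return violations  # empty means inside the box approximation

# Invented representative points of known-good operation
boz = [
    {"temp": 350.0, "pressure": 12.0, "flow": 80.0},
    {"temp": 362.0, "pressure": 14.5, "flow": 95.0},
    {"temp": 355.0, "pressure": 13.0, "flow": 88.0},
]
print(boz_check({"temp": 370.0, "pressure": 13.5, "flow": 90.0}, boz))
# {'temp': (350.0, 362.0)}
```

A per-variable box overstates the true zone, since it ignores interactions between variables; the point of the representative-point formulation is precisely to capture those interactions without equations.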
This method won the European Process Safety Centre
(EPSC) Award for the biggest single contribution to
improving plant safety in 2003. The equivalent of EPSC in
the US is the CPSC.
Economics. Real-time optimizing models can be built
and updated without mathematical knowledge, mak-
ing RTO practical and affordable even for very small
plants with few engineers. In a chlorinated hydro-
carbons application, it produced a 2% improvement in
process efficiency in the first three weeks of use, reduced
false alarms from 49% to less than 10% at the first
attempt and cut plant startup time by a factor of six.
Commercial installations. Six full GPC systems are oper-
ational or in commission and several others are under
investigation covering industries from refining, chemi-
cals and glass through to semiconductors. Particular
applications include para-amino phenol, catalytic crack-
ing, monoethanolamine, propane-propylene separation,
para-xylene and benzene recovery. Over 60 large plants
are now using CVE to identify their BOZs as the neces-
sary first step in the real-time implementation of CPM.
Licensor. Curvaceous Software Limited, Gerrards Cross,
UK; Website: www.curvaceous.com; e-mail: enquiries@
curvaceous.com.
Plant optimization (refining)
Application. The PetroPlan refinery modeling system is
appropriate for precise nonlinear simulation of the whole
refinery for various applications involving:
Evaluating revamp/expansion options
Planning grassroots facilities
Valuating alternative feedstocks
Changed product specifications
Optimizing plant operations
Quick screening of processing options.
Description. The PetroPlan model is a block-by-block
simulation of the entire refinery encompassing crude
fractionation and product blending in one single model.
Each process/utility block calculates product yields and
properties as well as utility consumption based on feed
properties and parameters such as conversion, severity,
etc. Submodel correlations may be nonlinear and are
very visible and easy to modify.
The crude unit is a block integral to the main simulation
so its cut points can be varied on the fly. A blender block
using linear programming (LP) techniques blends all
intermediate products into up to 16 optimum blends of
desired property specifications.
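For the two-component case, the linear blending idea behind the blender block can be sketched without an LP solver using the lever rule (the component properties and spec below are invented; a real multi-blend, multi-spec problem requires the LP formulation PetroPlan uses):

```python
def blend_ratio(prop_a, prop_b, spec):
    """Fraction of component A that makes a linearly blending
    property hit the spec exactly (lever rule)."""
    if not min(prop_a, prop_b) <= spec <= max(prop_a, prop_b):
        raise ValueError("spec not reachable from these two components")
    return (spec - prop_b) / (prop_a - prop_b)

# Hypothetical octane numbers: component A at 98, component B at 70,
# blend target of 91
frac_a = blend_ratio(98.0, 70.0, 91.0)
print(frac_a)  # 0.75
```

With many components, many blends and several simultaneous property specifications, the same linearity assumption becomes the constraint matrix of an LP, which is why the blender block is LP-based.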
The user builds the flowsheet by connecting feeds
to desired blocks using a mouse. Products from a block
are inserted into the flowsheet by PetroPlan based on
the block type. A product may be recycled to an
upstream block. Block operation/design parameters
are entered on a simple form. If the user chooses, Petro-
Plan will vary the selected parameters (e.g., reformer
severity) to maximize global profit. In general, blocks
and streams in the simulation mimic their counterparts
in the real refinery, unlike LP-based simulators. The
easy-to-browse output clearly shows all the results of
each block on a single page including product yields
and properties.
Interaction with the other elements of the refinery's
planning and optimization system can be automated by
exchanging PetroPlan input/output in a Microsoft-Excel
compatible format.
Installations. 30 sites.
Licensor. AMI Consultants, Sugar Land, Texas; e-mail:
info@AmiConsultants.com.
Plant optimization and
information (refining)
Application. Operating and controlling a modern oil
refinery is now an extremely complex and demanding
business. As well as being highly interactive, processes
contain many operating variables and constraints often
subject to daily change. The range of feedstocks avail-
able and required product slates are usually wide, with
costs and values frequently updating as economic con-
ditions change. For this reason, the information flow
between plant and personnel and, more importantly,
how the data are used to improve unit profitability, is
now a key element within the refinery operating strategy.
Many sites have generated substantial benefits by invest-
ing to improve plant information, unit optimization and
process control. Emerson's PlantWeb digital plant archi-
tecture is a leading platform for improving refinery per-
formance through process and asset optimization, and
delivering secure information to those running the facil-
ity from onsite or remote access.
Strategy. A number of important functions can be
accomplished by implementing modern control systems
and technologies to improve process unit operating per-
formance and availability:
Sitewide networks for plant data acquisition and
distribution
Sitewide LP modeling
Unit simulation and optimization
Equipment performance monitoring
Advanced process control, including model-based
techniques
Process alarm management.
Implementation. Computer systems will generally be
constructed in a hierarchical manner, with information
and data transmitted in both upward and downward
directions. At the highest level, systems will consider data
on a sitewide basis, often including links to remote loca-
tions such as company headquarters or other sites. Such
systems allow multiple users to access and manipulate
data from different lower-level platforms and to return,
for example, operating targets back to these individual
platforms. At the next level, individual plant monitoring
and optimization systems are applied to ensure the plant
continuously operates in the most efficient and prof-
itable manner, within the operational and economic lim-
its of the unit. Finally, advanced control is utilized to
ensure the processes continue to operate at their required
optimum conditions when subject to internal and exter-
nal disturbances.
Benefits. Installation of an individual system can real-
ize substantial benefits very quickly, with payback periods
normally in the range of 6 to 12 months. Benefit magni-
tudes depend on the size and complexity of the system,
but can be up to $2 million/yr.
Commercial installations. Emerson's Real-Time Opti-
mizer, Equipment Performance Monitor and Model Pre-
dictive Control (MPC) have been successfully applied in
many refineries and other plants worldwide.
Licensor. Emerson Process Management, Austin, Texas;
www.emersonprocess.com/solutions/aat. Contact: Emer-
son Process Management, Tim Olsen, Process and Perfor-
mance Consultant, Advanced Applied Technologies, tel:
(641) 754-3459, e-mail: Tim.Olsen@EmersonProcess.com.
Plant performance
management
Application. Performance management provides a
means for closing the gap between expected and actual
performance. Aspen Technology's performance man-
agement solution comprises technology and workflows
for measuring and quantifying operating performance
and detecting, quantifying and correcting any deviation
in planned performance that may affect profitability.
Strategy. The automated performance management
solution combines software with common workflows to
develop plant performance information for utilization
by planning, scheduling and operations. Using predic-
tive software tools, users can establish a multifunctional
continuous improvement program across multiple busi-
ness processes. Components of an integrated perfor-
mance management program are:
Plan vs. actual reports that compare the operating
plan to the actual operation on a site-wide and unit-spe-
cific process and economic basis, for mass/volume bal-
ance, market vs. production variance, predicted versus
actual stream qualities, and plan/predictive/actual vs.
actual reconciled unit comparisons
Margin curves
Added value
Product quality giveaway analysis
Planning model accuracy
Unit performance analysis.
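The plan vs. actual comparison at the heart of such reports can be sketched as follows (a generic illustration with invented item names and figures, not AspenTech's implementation):

```python
def plan_vs_actual(plan, actual):
    """Build report rows comparing planned and actual figures:
    (item, plan, actual, variance, percent deviation)."""
    rows = []
    for item, p in plan.items():
        a = actual.get(item, 0.0)
        rows.append((item, p, a, a - p, 100.0 * (a - p) / p))
    return rows

# Hypothetical site figures (thousand barrels per day)
plan = {"crude_rate_kbd": 120.0, "gasoline_kbd": 55.0}
actual = {"crude_rate_kbd": 117.6, "gasoline_kbd": 56.1}
for item, p, a, var, pct in plan_vs_actual(plan, actual):
    print(f"{item}: plan {p}, actual {a}, variance {var:+.1f} ({pct:+.1f}%)")
```

Real performance-management tools add the mapping, scaling and aggregation steps mentioned above before any comparison is meaningful; this sketch shows only the final variance computation.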
Performance Management can be customized to meet
individual plant requirements, and provides significantly
more capability than traditional in-house custom spread-
sheets. Extensive data manipulation capabilities include
mapping among predictive models, data scaling and
aggregation and report distribution. AspenTechs Per-
formance Management solution supports comparisons
over different time periods (daily, weekly, month to date,
last 30 days, etc.).
Economics. The performance management technology
allows evaluating current plant performance to plan and
modify future plant targets, thus moving a plant closer to
its optimum. Benefits from closing the gap between
expected and actual performance depend on how well
existing business processes are executed, but a conser-
vative estimate is $0.02/bbl to $0.03/bbl, resulting from:
Consistent methodology to measure performance
Standardization across multiple installations
and sites
Reduced costs to determine performance
Reduced time for problem identification
Improved ability to monitor and identify LP and
simulator predictions
Standardization of the predictive model calibra-
tion process.
Commercial installations. AspenTech's performance
management solution has been implemented in three
refinery and four petrochemical sites.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Plant performance
management
Application. Business.FLEX PKS software applications
provide Process Knowledge Solutions (PKS) for innova-
tive performance management. The solution includes
a number of integrated applications that track and ana-
lyze performance results on a timely basis. It helps busi-
nesses better align employee actions with overall cor-
porate objectives, creating a performance-driven
enterprise.
KPI Manager is a Web-based application that auto-
matically tracks and analyzes Key Performance Indica-
tors (KPIs) at a production site, or across multiple sites.
It provides plant managers, supervisors and employees
with an interactive, real-time metrics environment in
which they can assess and improve performance of their
business on a timely basis (e.g., per shift). KPI Manager can
access multiple (third-party) data sources and related
Business.FLEX PKS applications to deliver a comprehensive
performance management solution. It utilizes Six Sigma
workflow methodology for monitoring and minimizing
deviations.
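The deviation-monitoring idea can be illustrated with a basic control-limit test (a generic sketch with invented KPI data, not Honeywell's implementation):

```python
from statistics import mean, stdev

def kpi_control_check(history, latest, n_sigma=3.0):
    """Basic control-limit test: flag a KPI value that deviates more
    than n_sigma standard deviations from its historical mean."""
    mu, sigma = mean(history), stdev(history)
    lower, upper = mu - n_sigma * sigma, mu + n_sigma * sigma
    return {"limits": (lower, upper), "in_control": lower <= latest <= upper}

# Hypothetical per-shift energy-per-ton KPI
shifts = [4.1, 4.0, 4.2, 3.9, 4.1, 4.0, 4.2, 4.1]
print(kpi_control_check(shifts, 4.15)["in_control"])  # True
print(kpi_control_check(shifts, 5.0)["in_control"])   # False
```

Running a check like this once per shift, rather than once per month, is what makes the "while there is still time to do something about them" claim below possible.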
Strategy. KPI Manager is part of a comprehensive solu-
tion for performance management, which includes Hon-
eywell's advanced historian (Uniformance PHD), ERP
integration link (Business Hiway) and related Busi-
ness.FLEX PKS applications, all sources of KPI data. For
example, KPI Manager is complementary to the Busi-
ness.FLEX PKS Operations Monitoring application for
real-time unit monitoring. KPI Manager can retrieve
prenormalized calculations directly from related appli-
cations such as Operations Monitoring, Production Ana-
lyst and Blend Management, vastly simplifying the KPI
configuration overhead. Industry (or corporate) bench-
marks can also be configured into the KPI system for
accurate comparisons.
Economics. Benefits are realized from consistent, timely
performance analysis. KPI Manager lets you calculate and
publish KPI results while there is still time to do some-
thing about them. Access to up-to-date KPI results enables
faster, more effective decision making. The easy-to-use
Web-based application provides improved visibility of
your organization's performance. Financial returns have
been estimated to provide a 2- to 3-month payback, based
on recent customer experience.
Commercial installations. Over 1,000 Business.FLEX
PKS licenses have been installed throughout the world,
including at refineries, offshore platforms, chemical plants
and petrochemical complexes.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
Diagram: KPI Manager draws on Production Balance, Business Hiway (ERP integration), Uniformance PHD, batch tracking/lot tracing, Operations Monitoring, Production Analyst, Blend Management and third-party data.
Plant production
management
Application. Emerson's PlantWeb digital plant archi-
tecture comprises an intelligent, information-rich plant
operations environment that delivers predictive process
and equipment performance information to higher level
management systems, enabling access via business sys-
tems, browsers and PDAs. Emersons modular software
applications are used at the process operations level and
at the business systems level of the plant.
At the process operations level, process automation
software monitors and optimizes performance of intel-
ligent instrumentation and the process itself; asset opti-
mization software monitors, manages and optimizes
machinery health. At the business systems level, software
provides links to production planning, economics, pur-
chasing and supply. Accurate up-to-date information on
actual production, inventories and plant performance is
provided. Web-enabled components permit the infor-
mation to be disseminated worldwide within the corpo-
ration and externally with suppliers and customers if
desired.
Strategy. The business systems level of software includes
the following modules:
Data Management: Includes links to multiple DCSs,
PLCs, real-time historians and databases, relational
databases and ERP systems. Enhanced data validation
and reconciliation are supported. The system provides
consistent unit and sitewide mass balances and produc-
tion data in a form that can easily be integrated with
modern higher-level business systems.
Cost Management: Provides calculation of produc-
tion costs by major equipment, major unit and mode of
operation. Actual results are calculated against a plan.
Performance indices and benchmarks are automatically
calculated, allowing corporations to compare perfor-
mance of different plants continuously.
Intelligent Performance Monitoring: Supports rigor-
ous performance monitoring of individual units and major
equipment. Both long-term trends and sudden changes
in performance can be detected. This helps identify likely
candidate equipment for preventive maintenance.
Quality Management: Laboratory data are associated
with the batch or lot produced and the process operating
conditions at the time of production. This facilitates prob-
lem solving and data retrieval for reporting purposes.
Process Analysis: Provides tools for advanced statisti-
cal analysis and trending of process and laboratory data.
This provides operations, technical and management
staff with the means to assess, improve and optimize
plant operation.
Historical Data Management: Efficient data archiving
and retrieval are provided. Very large databases, typical
of refining and chemical operations, are supported.
View Management: A variety of user interfaces are
supported with selectable data security settings. Perfor-
mance data can be published on the corporate WAN and
viewed with easy-to-use Web browsers.
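The data validation and reconciliation mentioned under Data Management can be illustrated for the simplest case, a single mass balance constraint, where the least-squares adjustment has a closed form (a generic sketch with invented flows, not Emerson's algorithm):

```python
def reconcile_flows(measured, coeffs, variances):
    """Least-squares data reconciliation for one linear balance
    constraint sum(coeffs[i] * flow[i]) = 0 (closed-form Lagrange
    solution; variances weight how much each flow may be adjusted)."""
    residual = sum(c * m for c, m in zip(coeffs, measured))
    denom = sum(c * c * v for c, v in zip(coeffs, variances))
    return [m - v * c * residual / denom
            for m, c, v in zip(measured, coeffs, variances)]

# Splitter: feed - out1 - out2 should be zero. Hypothetical flows in
# t/h with equal measurement variance; measured imbalance is -3 t/h.
rec = reconcile_flows([100.0, 58.0, 45.0], [1.0, -1.0, -1.0],
                      [1.0, 1.0, 1.0])
print(rec)  # [101.0, 57.0, 44.0]
```

The reconciled flows close the balance exactly while spreading the correction over the measurements in proportion to their assumed variances; plant-scale reconciliation solves the same problem with many simultaneous balances.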
Economics. Project paybacks of less than one year are
common. Savings occur through reduced operating,
inventory and maintenance costs and improved opera-
tional performance.
Commercial installations. More than 10 installations
of this technology have been completed worldwide.
Licensor. Emerson Process Management, Austin, Texas;
www.emersonprocess.com/solutions/aat. Contact: Emer-
son Process Management, Tim Olsen, Process and Perfor-
mance Consultant, Advanced Applied Technologies, tel:
(641) 754-3459, e-mail: Tim.Olsen@EmersonProcess.com.
Plant production
management
Application. Business.FLEX PKS applications provide
Process Knowledge Solutions (PKS) that unify business
and production automation. Business.FLEX PKS for pro-
duction management supports plant-level yield account-
ing, costing, material tracking, plan vs. actual analysis
and comprehensive performance monitoring.
KPI Manager improves performance monitoring by
automating the generation and collection of a rigorous
set of KPIs for a manufacturing site. It ensures that KPIs are
accurate, synchronized and visible across an organiza-
tion to enable consistent, timely analysis of business per-
formance.
Production Balance provides a consistent, accurate view
of production, resulting in improved inventory control,
planning and process condition monitoring. It efficiently
identifies and eliminates gross measurement errors. Users
can then rapidly identify unmeasured material move-
ments.
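Gross-error screening of this kind is commonly based on a global residual test; the following is a generic sketch with invented flows, not Honeywell's implementation:

```python
import math

def gross_error_test(measured, coeffs, variances, threshold=1.96):
    """Global test for a gross measurement error: the balance residual,
    normalized by its standard deviation, should behave like standard
    normal noise when only random measurement errors are present."""
    residual = sum(c * m for c, m in zip(coeffs, measured))
    sigma = math.sqrt(sum(c * c * v for c, v in zip(coeffs, variances)))
    return abs(residual / sigma) > threshold

# Splitter balance feed - out1 - out2 = 0, hypothetical flows in t/h,
# each with measurement variance 0.25 (std. dev. 0.5 t/h)
print(gross_error_test([100.0, 60.0, 45.0], [1.0, -1.0, -1.0],
                       [0.25, 0.25, 0.25]))  # True: 5 t/h imbalance
```

When the test fires, the imbalance is too large to attribute to normal meter noise, pointing at a failed instrument or an unmeasured material movement.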
Batch/Lot Tracking tracks process conditions, production
metrics and qualities, which helps reduce product vari-
ability and costs, and improves customer satisfaction by
quickly pinpointing problems.
Production Tracker reviews, monitors and manages
planned and actual material movements throughout a
plant, allowing planning, scheduling and movements
control to be linked, ensuring that movement orders
are properly communicated, executed and captured for
use by Production Balance.
Tank Composition Tracking tracks product components
anywhere products are mixed, helping to correlate oper-
ating performance to actual feedstock mixtures and to
track the origin of inventory.
Production Costing calculates production costs at each
processing step, including direct, variable and utility
costs, helping to reduce operating costs through under-
standing of true production costs.
Business Hiway integrates Business.FLEX PKS with ERP
systems, facilitating, for example, communication of pro-
duction plans to plants while returning production and
consumption quantities for closed-loop production.
Strategy. Production Management is a complete solution
to manage production output and quality. It provides a
detailed picture of what was made, including how,
when, and where it is located. It measures performance,
helps improve product quality, and increases customer
satisfaction. It improves collaboration within the pro-
duction site, as well as with the overall supply chain, by
responding to customer and market demands more effi-
ciently and by providing timely closure of the planning
cycle and available product inventories. Business Hiway
provides the essential link between plant and supply
chain systems.
Economics. Effective unification of business and pro-
duction automation can typically increase production
by 2% to 5%, and decrease costs by 0.5% to 1%. Major
benefits are improved operational effectiveness, market
responsiveness, quality control, personnel productivity,
customer satisfaction, environmental compliance and
reduced working capital, operating costs, raw material
utilization, utility consumption, product returns and
inventories.
Commercial installations. Over 1,000 Business.FLEX
PKS licenses have been installed throughout the world,
including in refineries, offshore platforms, chemical plants
and petrochemical complexes.
Licensor. Honeywell Industry Solutions, Phoenix, Ari-
zona. Contact: Susan.Alden@Honeywell.com.
Diagram: KPI Manager, Production Costing, Business Hiway (ERP integration), Production Balance, batch tracking/lot tracing, Tank Composition Tracking and Production Tracker connect scheduling, LIMS quality and product data, planning, movement control and automation, and inventory monitoring.
Plant scheduling (refining)
Application. FORWARD is an interactive system dedi-
cated to optimal scheduling of refinery operations. It
provides a single tool to solve refinery scheduling prob-
lems from crude receipts to finished products liftings.
Strategy. FORWARD combines the experience of the
scheduling team and the power of object-oriented pro-
gramming, constraint propagation, linear and mixed-
integer programming, simulation and efficient user inter-
face techniques.
FORWARD contains provisions to easily configure and
maintain the refinery model:
Flow-sheet information can be easily entered to pro-
vide the suitable detail of plant topology.
Process unit models can be configured or selected
from a library of process unit models.
The FORWARD interface is built around two main displays:
The Gantt Chart is used to build and visualize the
production scenario with a resolution of a few minutes.
The Refinery Graph view provides a snapshot of
refinery operations at any time.
A scenario is built by placing events either manually
or automatically on the Gantt chart and entering the
event attributes.
During the scenario simulation, FORWARD warns the
user of any infeasibility, takes action using pre-defined
rules and provides guidance for proper action.
The latest release of FORWARD includes provisions to
optimize crude unloading operations, mixing in tanks,
crude sequences to atmospheric distillation units and
sequence of finished product blending operations.
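As a rough illustration of the kind of feasibility checking described above (the event structure, pre-defined rules and all numbers are hypothetical, not FORWARD's actual data model), a scenario simulator can integrate tank levels over scheduled events and warn the user of violations:

```python
def simulate_tank(level, capacity, events):
    """Integrate a tank level over (duration_h, net_rate) events.

    Returns (final_level, warnings), where warnings lists the times at
    which the scenario would overfill or empty the tank."""
    warnings = []
    t = 0.0
    for duration, net_rate in events:
        level += duration * net_rate
        t += duration
        if level > capacity:
            warnings.append((t, "overfill", level))
            level = capacity      # pre-defined rule: hold at capacity
        elif level < 0.0:
            warnings.append((t, "empty", level))
            level = 0.0           # pre-defined rule: stop the run-down

    return level, warnings

# A 500-kbbl crude tank starting at 300 kbbl: a 24-h receipt at
# +15 kbbl/h overfills it, followed by 48 h of run-down at -8 kbbl/h.
final, warns = simulate_tank(300.0, 500.0, [(24, 15.0), (48, -8.0)])
```

A real scheduler would of course propagate such checks across every tank, unit and blender in the refinery graph; the point here is only the warn-and-apply-rule pattern.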
Economics. FORWARD bridges the gap between the
production plans and daily operations. It enables the user
to define the operating instructions for a short-term hori-
zon without losing track of the optimum monthly or
weekly plans. Its computational power enables the user
to identify potential problems and dynamically modify
the scenario to react quickly to new events. Benefits are
obtained from:
Increased throughput
Better adherence to the monthly plan
Better utilization of feedstocks and intermediate
streams
Better utilization of blending components
Reduced demurrage.
Commercial installations by Technip. FORWARD has
been implemented in several refineries in Europe and
East Asia.
Licensor. Technip France. Contact: Marc Valleur, Manager ASE Paris, Advanced Systems Engineering, Technip; tel: (33) 1 47 78 21 83; fax: (33) 1 47 78 28 16; e-mail: mvalleur@technip.com; Website: www.technip.com.
Plastics (product grade
switch)
Application. Engineering plastics (e.g., SAN, styrene-acrylonitrile, or ABS, acrylonitrile-butadiene-styrene) are typically produced in a wide variety of grades, that is, similar products with differing product quality specifications, such as viscosity. Depending on inventories and ever-changing customer requirements, switches from manufacturing one grade to another occur quite frequently (every few days) on the same production line. The product made during a switch is off-spec and must be sold as waivered material or as scrap. There are large incentives, then, to minimize the time required to make the switch.
Control strategy. The control hierarchy normally
includes lower-level advanced controls for the key oper-
ating parameters, including primary feed charge rate,
secondary feed charge rate or charge ratio, chain initiator or terminator rate or ratio, and reactor and recovery temperatures. The Product Grade Switch Control
ramps the targets of the key parameters to new values
needed to change the line from producing one product
grade to another. The parameters are ramped to new
targets according to a timing pattern established by oper-
ating experience. The ramps RATES are set to make the
switch as quickly as possible, while maintaining stable
operation. The operator is provided with a table of
default target values and timing patterns for each grade
switch.
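The ramp scheme described above can be sketched as follows (parameter names, units and the timing pattern are invented for illustration; the actual controller's default tables are site-specific):

```python
def ramp_profile(start, end, rate, t_start, dt=1.0, horizon=60.0):
    """Sample a ramped setpoint every dt minutes out to the horizon.

    The target holds at `start` until t_start, then moves toward `end`
    at `rate` units per minute, clamping at the new grade's target."""
    direction = 1.0 if end >= start else -1.0
    profile = []
    t, sp = 0.0, start
    while t <= horizon:
        profile.append((t, sp))
        if t >= t_start and sp != end:
            sp += direction * rate * dt
            # never overshoot the new target
            if (end - sp) * direction < 0.0:
                sp = end
        t += dt
    return profile

# Hypothetical timing pattern for one grade switch: the initiator ratio
# ramps immediately; the feed rate starts its ramp 10 minutes later.
initiator = dict(ramp_profile(start=1.0, end=2.0, rate=0.25, t_start=0.0))
feed = dict(ramp_profile(start=50.0, end=40.0, rate=0.5, t_start=10.0))
```

Each key parameter gets its own (rate, start-time) pair, which is how a timing pattern staggers the moves while keeping the overall switch as fast as stable operation allows.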
Economics. This set of controls installed in 2000 on two
SAN lines increased on-spec material yield by 0.5% (con-
firmed by six-sigma audit), providing a payback of less
than one year. Operator acceptance and controls uti-
lization are extremely high.
Commercial installations. Two SAN lines and two ABS
lines at one site.
Developer/licensor. C. F. Picou Associates, Inc., an affil-
iate of GE Automation Services, Baton Rouge, Louisiana,
(225) 293-3382.
Platforming
Application. The main target is satisfying the weighted
average inlet temperature (WAIT) of the three reactors,
which is the way to reach the specified octane number
(RON). The operating conditions meet more and more
constraints as the catalyst gets older. The specifications are
the result of a trade-off between handling catalyst aging,
heater constraints and capabilities, and reactor constraints
(high limit on temperature deviation between pairs of
reactors).
The main objective is satisfying the WAIT setpoint with
a defined closed-loop time response. Balancing reactor
inlet temperatures is considered as a secondary objec-
tive less important than satisfying the WAIT setpoint.
Nevertheless, the imbalance is limited by high limits.
Secondary objectives are minimizing the difference
between inlet temperatures of pairs of reactors (guaranteeing homogeneous catalyst aging) and satisfying the plant
nominal feed rate as long as the constraints do not make
it necessary to decrease it.
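As a sketch of the WAIT and imbalance calculations (the catalyst-weight fractions and temperatures below are hypothetical, not taken from the licensed application):

```python
def wait(inlet_temps, catalyst_fractions):
    """Weighted average inlet temperature over the reactor train,
    weighted by each reactor's share of the catalyst inventory."""
    assert abs(sum(catalyst_fractions) - 1.0) < 1e-9
    return sum(t * w for t, w in zip(inlet_temps, catalyst_fractions))

def imbalance(inlet_temps):
    """Largest inlet-temperature difference between consecutive
    reactors; the controller holds this below a high limit."""
    return max(abs(a - b) for a, b in zip(inlet_temps, inlet_temps[1:]))

temps = [495.0, 500.0, 505.0]        # degC, hypothetical
w = wait(temps, [0.2, 0.3, 0.5])     # heavier last catalyst bed
d = imbalance(temps)
```

The controller then moves the three heater outlet setpoints so that `w` tracks the WAIT target (the primary objective) while `d` stays within its limit (the secondary one).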
Control strategy. The control actions (manipulated vari-
ables) available are four setpoint values: heater outlet
temperature of each of the three heaters and the unit
feed rate.
Conflicts are managed by a hierarchy of the objectives.
They are sorted hierarchically corresponding to the con-
trol strategy defined by the producer:
Keep the process in a safe situation (heater con-
straints)
Limit unbalanced catalyst aging (constraints on the
inlet temperatures)
Satisfy the WAIT setpoint (to respect the RON target)
Keep the feed rate at its nominal value
Balance the inlet temperatures.
The principle of hierarchical objectives, specified to the
multivariable predictive controller, made it possible to take into account the control strategy defined by the producer: obtain the RON target while always respecting
the whole set of constraints and secondary objectives.
Commercial installations. Two applications.
Benefits. Stabilizing RON, especially in case of a feed
change. Its standard deviation decreased by a factor of
two. The catalyst life cycle is now 8% longer.
Licensor. Adersa, Palaiseau, France; Website:
www.adersa.com; e-mail: jacques.papon@adersa.com.
[Diagram: HIECON multivariable model-based predictive control acting on the feed rate and the three heater outlet temperatures, subject to heater constraints and reactor inlet temperatures.]
Polycarbonate monomers
Application. The principal route to polycarbonate pro-
duction uses carbonyl dichloride (CDC or phosgene) as
the carbonate monomer. CDC polymerization with
bisphenol-a (BPA) produces this important engineering
plastic. An alternate route to polycarbonate production
is BPA polymerization with another carbonate-donating
molecule, diphenyl carbonate (DPC). Newer plants utilize this technology because it avoids the use of phosgene.
Multivariable control (MVC) is especially suitable and
effective for these plants because of the highly interactive
nature of the specific processes and the relatively long
time constants. This application uses MVC, along with
inferred properties, to improve productivity of the CO
unit (the syngas unit), the dimethyl carbonate (DMC)
unit (reactor and distillation) and the DPC unit.
Control strategy. An important design decision for MVC
implementation across several related process units is
the number of controllers to be employed. Results of
preliminary step testing suggested three controllers cov-
ering: the CO unit, the DMC reactor and distillation, and
the DPC unit. Important manipulated variables (MVs)
include feeds to each unit, recycle streams, important
reactor and column temperatures, purge streams and
reboiler steam flows. Important controlled variables (CVs)
are reactor temperatures and compositions, column tem-
peratures and compositions, vent valve positions and key
inventories. Important inferred properties are the DMC
recycle acid organics composition, DMC azeotrope column overhead DMC composition, and bottoms methanol
composition.
Economics. The project was justified by a combination
of increased production, reduced reboiler steam con-
sumption and reduced raw material costs. Payback was
less than six months.
Commercial installations. Controllers recently installed
at one site in Europe, with excellent results and accep-
tance by operations.
Developer/Licensor. C. F. Picou Associates, Inc., an affil-
iate of GE Automation Services, Baton Rouge, Louisiana,
(225) 293-3382.
[Diagram: polycarbonate monomers plant with three MVC controllers covering the syngas (CO) unit, the DMC reactor and distillation, and the DPC unit; feeds include natural gas, O2, methanol and phenol, with H2, H2O and methanol/DMC recycle streams and HCl byproduct.]
Polycarbonate plant
Application. IntellOpt's Polycarbonate advanced process control applies advanced regulatory control to
achieve quality and economic goals while respecting
safety and equipment limitations.
Strategy. Advanced regulatory control applications are
implemented for the bisphenol A (BPA) melter, reactor
effluent (granulizer feed) handling, methylene chloride
(MC) strippers and carbon monoxide (CO) reformer. The
primary control strategies are:
Maintain free caustic and BPA concentration in
melter effluent
Maintain polycarbonate concentration in granulizer
feed
Minimize energy consumption by MC strippers
Maintain CO reformer tube temperature and excess
oxygen.
Melter effluent composition and granulizer feed com-
position are controlled using inferential models that are
updated with laboratory data. The MC strippers use feed-
forward control action to stabilize operation and ensure
that MC is recovered from wastewater. The CO reformer
controls include feedforward and feedback control action
for tube temperature, as well as analyzer feedback adjust-
ment of air/fuel ratio to control excess oxygen.
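The lab-update mechanism for the inferential models can be sketched as a simple bias filter (the model form, coefficients and filter gain are illustrative assumptions, not IntellOpt's implementation):

```python
class Inferential:
    """Linear inferential with a lab-updated bias term."""

    def __init__(self, coef, intercept, filter_gain=0.3):
        self.coef = coef
        self.intercept = intercept
        self.filter_gain = filter_gain
        self.bias = 0.0

    def predict(self, x):
        raw = self.intercept + sum(c * xi for c, xi in zip(self.coef, x))
        return raw + self.bias

    def lab_update(self, x, lab_value):
        # Move the bias a fraction of the way toward the lab/model
        # mismatch, so noisy lab results do not jerk the estimate.
        error = lab_value - self.predict(x)
        self.bias += self.filter_gain * error

m = Inferential(coef=[0.8, -0.2], intercept=5.0)
x = [10.0, 2.0]                  # e.g. a scaled temperature and ratio
before = m.predict(x)
m.lab_update(x, lab_value=13.6)  # lab reads 1.0 higher than the model
after = m.predict(x)
```

Between lab samples the inferential tracks the process measurements continuously; each new lab result only nudges the bias.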
Economics. Benefits include improved yields, energy
savings and increased throughput. Payback periods are
typically less than six months for these advanced regu-
latory control applications.
Commercial installations. This advanced process control application has been implemented on two polycarbonate units.
Licensor. Intelligent Optimization Group, Houston, Texas
(www.intellopt.com).
[Diagram: polycarbonate plant with BPA melter, CO reformer, CDC reactor, PC reactors, MC flusher, MC strippers and PC granulizer; streams include BPA feed and powder, caustic, methylene chloride, demineralized water, LPG, naphtha, steam, combustion air and flue gas.]
Polyethylene
Application. Nonlinear multivariable control and opti-
mization of polyethylene plants using a first-principles
engineering model. The integrated solution makes use
of both the equipment geometry and reaction kinetic
mechanisms to provide a dynamic model that can opti-
mize the process during grade runs and through grade
transitions.
Strategy. For most processes, the primary objective of
Profit NLC is direct control of key properties including
polymer melt flow index, density and production rate by
manipulating catalyst flow, hydrogen concentration and
comonomer/ethylene concentration ratio. Ethylene con-
centration in gas phase reactors is controlled by adjusting
reactor pressure through vent flows. The first-principles
engineering model combines a simultaneous heat and
material balance with polymer property estimation tech-
niques to provide a number of fundamental properties
including:
Polymer production rate
Instantaneous and bed-average melt index
Instantaneous and bed-average density
Number and weight average molecular weight
Reactor dew point calculations
Reactor monomer conversion
Reactor superficial gas velocities
Reactor space time yield
Catalyst productivity
Recycle gas compositions.
A desired response for the key calculations used as con-
trolled variables is combined with an economic objec-
tive function and solved using a large-scale open-equation
optimization system.
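Schematically (our notation, not Honeywell's), the single optimization problem combines the desired controlled-variable response, move suppression and the economic objective, all subject to the first-principles model equations:

```latex
\min_{u_1,\dots,u_N}\;
\sum_{k=1}^{N} \lVert y_k - y_k^{\mathrm{ref}} \rVert_Q^2
\;+\; \sum_{k=1}^{N} \lVert \Delta u_k \rVert_R^2
\;+\; \phi_{\mathrm{econ}}(x_N, u_N)
\qquad \text{s.t.} \quad x_{k+1} = f(x_k, u_k), \quad y_k = g(x_k)
```

Here \(f\) and \(g\) are the dynamic engineering model, \(y_k^{\mathrm{ref}}\) encodes the desired response, and \(\phi_{\mathrm{econ}}\) is the economic term, solved together by the open-equation optimizer.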
The same model is used for parameter estimation
when defining and calibrating the model, dynamic sim-
ulation for open-loop prediction and for online control
and optimization.
The controller can be used with a client's proprietary
model, either engineering or empirically based, and is
readily integrated with recipe management systems and
other production and quality management applications.
Usually, no step testing is required.
Profit NLC includes models for different reaction kinetic
mechanisms including Ziegler-Natta, chromium-based
and metallocene catalysts or free-radical kinetics used
for LDPE production.
Profit NLC is suitable for most bulk polymer processes
including Phillips Loop Reactors, Unipol, BP Innovene,
Spheripol, Mitsui Hypol, Novolen and LDPE autoclaves.
Economics. Typically Profit NLC will increase prime pro-
duction by as much as 5% by pushing the unit to capac-
ity limits. Grade switch transition times can be reduced by
as much as 30% and product quality variation reduced by
50%. The ability to simulate and control over a broad
range of operation allows for new product grades to be
rapidly moved into full production.
Commercial installations. These controls have been
implemented on over 19 polyethylene and polypropy-
lene reactors.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
Polyethylene
Application. Polyethylene is a plastic used to manufac-
ture a wide variety of consumer products.
Strategy. A dynamic information system forms the basis
of the polyethylene technology package. It calculates
the following vital reactor parameters in real time:
Dynamic concentration of ethylene
Dynamic polymer solids concentration
Dynamic concentration of comonomer
Reactor settling leg efficiency
Dynamic concentration of hydrogen
Cooling surface heat transfer coefficient
Dynamic polymer production rate
Catalyst productivity
Comonomer incorporation into polymer
Catalyst mud pot inventory.
The dynamic variables are calculated using real-time
process measurements, and the values are displayed to
the process operator on the process operator's console
and logged.
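One such calculation can be sketched as a real-time mass balance (a well-mixed, single-reactant model with invented numbers, far simpler than the licensed package):

```python
def step_concentration(c, feed_rate, feed_conc, volume, k_rxn, dt):
    """One Euler update of reactant concentration in a well-mixed
    reactor, from measured flows and an assumed consumption rate:

        dc/dt = (feed_rate / volume) * (feed_conc - c) - k_rxn * c
    """
    return c + dt * (feed_rate / volume * (feed_conc - c) - k_rxn * c)

# Repeated at each scan, the estimate settles to the steady state
# c_ss = (F/V) * c_feed / (F/V + k):  0.2 * 0.5 / 0.25 = 0.4
c = 0.0
for _ in range(10000):
    c = step_concentration(c, feed_rate=2.0, feed_conc=0.5,
                           volume=10.0, k_rxn=0.05, dt=0.1)
```

In the plant, `feed_rate` and `feed_conc` come from live measurements each scan, which is what makes the unmeasurable in-reactor concentration trackable.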
Many key operating variables calculated by the dynamic
information system are used in real-time, closed-loop
advanced control strategies to control polymer produc-
tion rate and product quality. Reactant concentrations,
solids concentration and production rate are thus con-
trollable even though their direct measurement within
the reactor is impractical. The closed-loop control strate-
gies may employ multivariable predictive control soft-
ware if a host computer is available, or they may be con-
figured in a modern DCS without a host.
Economics. Dynamic reactor calculations and controls
smooth the plant's operation by minimizing upsets and
maintaining reactor stability. Production rate of on-spec-
ification product is increased by operating closer to con-
straints. Other benefits include improved polymer density
and ash control, improved operability from reduction in
solids and ethylene variability, and a 20-40% reduction in
melt index off-specification polymer at the reactor. The
dynamically calculated reactor parameters can be related
to product specifications and are valuable for produc-
tion of various polymer grades.
Commercial installations. Our technology has been
implemented on about 30 polyethylene reactors in the
United States, Europe and the Far East.
Licensor. Yokogawa Corporation of America, Systems
Division, Stafford, Texas, info@us.yokogawa.com.
[Diagram: polyethylene reactor with flow controllers on ethylene, comonomer, hydrogen and recycle diluent; the dynamic information system feeds product quality correlations, advanced reactor controls (MVC or DCS), regulatory loops, operating reports and console displays.]
Polymers
Application. Aspen Apollo is the polymer industry's only
truly universal nonlinear controller. Although general in
nature, Aspen Apollo has been specifically designed with
polymer control applications in mind. This is reflected in
the type of models that it supports and the types of con-
straints that can be imposed on the models, based on
process knowledge.
Strategy. Aspen Apollo is based on dynamic models that
have guaranteed steady-state and dynamic gain response
qualities.
Aspen Apollo is able to safely extrapolate into oper-
ating regions that have little or no historical data. This
extrapolation capability is analytical, and therefore elegant (the extrapolation gradient is based on the known gradient at the extrapolation point), and robust, since the gains are globally bounded within the specified limits.
This capability of moving the process beyond that which
has been observed historically is essential if any true
benefit is to be achieved through advanced control.
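The idea of globally bounded gains can be illustrated in one dimension (a simplified sketch of a bounded-derivative construction, not AspenTech's actual network): build the model so its derivative is g_min plus a sigmoid term, and the gain can never leave [g_min, g_max], no matter how far the input extrapolates.

```python
import math

def bounded_gain_model(x, g_min, g_max, w=1.0, b=0.0):
    """1-D gain-bounded model. Its derivative,
    g_min + (g_max - g_min) * sigmoid(w*x + b), lies strictly between
    g_min and g_max for every x. The function is the integral of that
    derivative: f(x) = g_min*x + (g_max - g_min)/w * softplus(w*x + b).
    """
    z = w * x + b
    softplus = max(z, 0.0) + math.log1p(math.exp(-abs(z)))  # stable form
    return g_min * x + (g_max - g_min) / w * softplus

def gain(x, g_min, g_max, w=1.0, b=0.0, h=1e-6):
    """Numerical derivative, to show the bound holds everywhere."""
    return (bounded_gain_model(x + h, g_min, g_max, w, b)
            - bounded_gain_model(x - h, g_min, g_max, w, b)) / (2 * h)

# Gains stay inside [0.2, 1.5] even far outside any "training" region.
gains = [gain(x, 0.2, 1.5) for x in (-50.0, -1.0, 0.0, 2.0, 50.0)]
```

This is why inverting such a model for control is safe: the optimizer can never encounter a zero or sign-flipped gain in unexplored operating regions.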
Aspen Apollo is nonlinear in both steady state and
dynamics. It can model directional, positional and step-size
dynamic nonlinearities, and solves a nonlinear opti-
mization problem. A single-model philosophy is employed
where the steady-state and dynamic optimizations all
utilize the same universal model. In addition to this, the
inferential predictions can also utilize the same model if
required. This substantially reduces implementation and
maintenance costs, and produces superior optimization
performance when compared with alternative gain-sched-
uled approaches.
Features include the following:
Data management: A rich suite of data prescreen-
ing and analysis tools for data cleaning, filtering and
cause/effect analysis.
Deadtime and dynamics: Independent deadtime
alignment for each pair of relationships.
Guaranteed gain and extrapolation: State-space
bounded derivative networks guarantee gains will be
within specified bounds, ensuring that the models can
be inverted safely and reliably.
Consistent models: Steady-state optimization and
move plan optimization use consistent models, so the
controller can optimally move the process to targets it
knows it can achieve.
Multivariate nonlinear models: All models are mul-
tivariable, i.e., they are MISO not SISO transformations.
Unmeasured disturbance rejection: Configurable
extended Kalman filter update mechanism is used for
superior unmeasured disturbance rejection.
No complex tuning recipes: Powerful approxima-
tors and true nonlinear path optimization eliminate need
for gain adaptation, transforms or multiple tuning recipes.
Flexible tuning: Flexible tuning allows individual
manipulated and controlled variables to be tuned with
different aggressiveness levels, and supports widely dif-
fering dynamics within the same controller.
Constraint ranking: Constraint ranking capability
is included so that more important constraints get pri-
ority.
Process control web viewer: Online Web viewer
accessible by any PC with access to the process control
web server using Internet Explorer.
Economics. Applying Aspen Apollo in combination with
Aspen Transition Manager typically increases production
rate by 3-6% and reduces polymer grade transition time
by 30% or more. This leads to a significant reduction in
the amount of off-specification product being produced
during the transition.
Experience to date shows that the payback time is rapid, with 5-6 months as a typical expected range. In more than 10 cases evaluated, the payback
period has been less than one year. This rapid payback
is driven by substantial decreases in transition time, reduc-
tions in first-pass off-spec product and increased plant
capacity.
Commercial installations. The underlying bounded
derivative network technology has now been imple-
mented on over 27 polymer production lines worldwide,
making it one of the most widely-applied nonlinear con-
trol paradigms in the polymer industry. Aspen Apollo has
been successfully implemented for in-grade control and
product grade transitions on plants in the US, Germany,
China and South Korea.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Polymers
Application. Nonlinear optimizing multivariable con-
trol of polymer processes using rigorous, first-principles
models is achieved using NLC with excellent control
including during product grade transitions or new prod-
uct introduction campaigns. NLC is part of the DOT Prod-
ucts advanced process modeling and control suite.
Description. The NOVA NLC combines a description of
the desired closed-loop behavior of the process, an eco-
nomic objective function and a nonlinear dynamic process
model into a single optimization problem. DOT Products'
large-scale optimization engine, NOVA, is then used to
solve for the appropriate control action. The control that
can be achieved with this unique technology combination
is superior to controllers that use linear models or other
approximations of process behavior.
The NLC allows tuning to be implemented in terms
of specified controlled variable response rates. This pro-
vides tuning that is independent of process nonlinear-
ities, a key requirement for nonlinear control applica-
tions. As a result, one set of tuning parameters is suffi-
cient for all operations, so it is not necessary to define
sets of tuning parameters corresponding to different
operating conditions.
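Tuning by specified response rate amounts to asking each controlled variable to follow a first-order reference trajectory toward its setpoint; a sketch with invented numbers:

```python
import math

def reference_trajectory(y0, setpoint, tau, times):
    """First-order path from y0 to setpoint with closed-loop time
    constant tau. The controller is tuned to track this path, so one
    tau per CV suffices regardless of where the nonlinear process
    happens to be operating."""
    return [setpoint + (y0 - setpoint) * math.exp(-t / tau)
            for t in times]

# A melt-index CV moving from 2.0 to 4.0 with a 30-minute time constant.
path = reference_trajectory(2.0, 4.0, tau=30.0, times=[0, 30, 60, 90])
```

Because the specification is on the desired closed-loop response rather than on controller gains, the same tuning carries across operating regions where the process gains differ widely.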
The controller also provides simultaneous economic
optimization. An economic objective function is opti-
mized at every control cycle, so that multivariable control
and nonlinear optimization can be performed by the
same application.
In a typical application, controlled variables include
polymer product properties and process constraints.
Manipulated variables include setpoints for monomer
feed and composition controllers, and catalyst feed
controllers, which are typically implemented in the DCS
system.
The controller may be implemented using the clients
proprietary dynamic model, if available. Alternatively,
the NOVA Polymer Dynamic Modeling System may be
used to configure a model. The modeling toolkit includes
different reactor types, complete fundamental poly-
merization kinetics, interface to client-specific propri-
etary property methods and multiple monomer/active
site capability.
The NOVA NLC provides an environment for analyz-
ing process data, including the capability to use the NOVA
optimizer to fit model parameters using multiple sets of
process data over the desired operating range.
Economics. The controllers implemented thus far have
been very successful, controlling through product grade
transitions in which the process gains change by as much
as a factor of 100. Significant benefits are achieved by
reducing transition times, increasing capacity and reduc-
ing production of off-spec material.
Commercial installations. This technology has been
implemented in 23 polymer units (polyethylene, polypropylene and others) around the world.
Licensor. PAS, Inc., Houston, Texas. Contact: e-mail:
sales@pas.com; Website: www.pas.com; tel: (281) 286-
6565.
Process sequence manager
Application. Aspen Sequence Manager offers manu-
facturers the opportunity to focus on continuous pro-
cess improvement to enhance process efficiency, increase
profitability, reduce costs by significantly reducing their
transition times and off-spec material losses, and auto-
mate complex process sequences.
Also, risk management requires that best practices for
standard operating procedures be maintained in the
plant. Maintaining the hundreds of parameters required
to execute a process transition becomes a challenge. Prior
to an operating state change, production specifications,
alarm limits, compliance limits and other operating
parameters must be loaded into a DCS or PLC for con-
trolling the process. Improving ease, speed and consis-
tency of these transitions reduces process variability and
increases operating performance, providing the manu-
facturer with a significant business advantage.
Strategy. Aspen Sequence Manager integrates with a
real-time database to deliver process information to pro-
cess control systems and operators, thereby helping plant
personnel implement complex transition strategies. The
solution provides automated best practices for operat-
ing procedures, while reducing operating times. Aspen
Sequence Manager also includes an OPC client, allowing
it to integrate with other devices with an OPC server.
Aspen Sequence Manager has both design and run-
time modes. Design mode allows the user to develop
strategies and attach process sequences and equipment
information. Run-time mode allows the user to execute
and interact with recipes as they are being implemented.
Key features of the system include:
Easy configuration of transition strategies. Strat-
egy configuration is defined by a combination of a pro-
cess flow diagram and corresponding property dialog.
The flow diagram is utilized to illustrate logic in a
flowchart fashion using nodes and links between nodes.
The interface allows the user to graphically draw the
flow of procedural logic, which defines the execution
strategy.
Flexible units of measure. Users have the freedom
to define their own units of measure. When adding new
units of measure, Aspen Sequence Manager will auto-
matically save the new information to the database.
Failure recovery. Aspen Sequence Manager has the
capability to recover at the point of failure as soon as
communication is re-established. In addition, Aspen
Sequence Manager notifies the user when communica-
tion with the SQL Server or the DCS is interrupted and
when communication is regained. Error propagation
from the tag or script level up through the node to a
procedure and out to the execution strategy has been
made configurable so that the user can determine the
effect an error has on other simultaneous activities.
Operator guidance. For more complex sequences,
Aspen Sequence Manager guides the process operator
through the sequence using a preconfigured strategy.
This event-driven structure helps the operator more effec-
tively manage the sequence.
Common terminology. Aspen Sequence Manager
takes full advantage of reusable templates to create and
store recipe targets and sequence-based strategies uti-
lizing familiar S88 and SP95 terminology.
Greater analysis capabilities. Additional analysis
capabilities include tracking and comparing similar pro-
cess sequences. Examples of items used for comparison are
sequence execution times, process conditions, raw mate-
rial usage and second-quality material produced.
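The node-and-link execution strategy described above can be sketched as follows (the node names, actions and branch rule are invented for illustration, not Aspen Sequence Manager's actual object model):

```python
def run_strategy(nodes, links, start, state):
    """Walk the flowchart from `start`, applying each node's action to
    the run-time state. A link may be a fixed next node, a callable
    branch that picks the next node from the state, or None (stop)."""
    executed = []
    current = start
    while current is not None:
        action = nodes[current]
        if action is not None:
            action(state)
        executed.append(current)
        nxt = links.get(current)
        current = nxt(state) if callable(nxt) else nxt
    return executed

nodes = {
    "load_recipe": lambda s: s.update(target=s["grade_targets"][s["grade"]]),
    "ramp": lambda s: s.update(sp=s["target"]),
    "verify": None,                       # operator-guidance step
}
links = {
    "load_recipe": "ramp",
    # branch node: repeat the ramp until the setpoint matches the target
    "ramp": lambda s: "verify" if s["sp"] == s["target"] else "ramp",
    "verify": None,
}
state = {"grade": "A", "grade_targets": {"A": 4.0}, "sp": 0.0}
order = run_strategy(nodes, links, "load_recipe", state)
```

Design mode in the real product builds exactly this kind of graph on the process-flow-diagram canvas; run-time mode walks it while writing the recipe parameters to the DCS.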
Economics. Typical startup/shutdown/risk-management benefits are in excess of $1 million a year for a site implementation. Economics are highly dependent on the type
of process and risk management program undertaken.
Commercial installations. Aspen Sequence Manager is
installed at over 30 sites.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Product quality
management
Application. The Business.FLEX PKS software applica-
tions provide Process Knowledge Solutions (PKS) that
unify business and production automation. Business objec-
tives are directly translated into manufacturing targets,
and validated production data are returned to close the
loop on the business planning cycle. Business.FLEX PKS
applications for quality management make quality data
management integral to the overall automation solution.
The Lab Information Management System (LIMS) mod-
ule is a LIMS package designed for process plants. Most
third-party LIMS packages support the lab. Honeywell's
LIMS product makes the lab part of the overall plant
automation solution by ensuring that quality data from
the laboratory are fully integrated with other plant data
and are available throughout the enterprise.
The Product Specification Management module tracks
development of new products and specifications over
the life of a product. Product Specification Management
records formulation, composition and specification details,
along with distribution and use of specifications, pro-
viding control over how specifications are used in a plant.
The Recipe Management module compares actual
operation to expected performance defined by a recipe
and isolates information around grade transitions.
Selected recipe information can be downloaded to pro-
cess control applications. Recipe Management helps
ensure products are made to the correct specification
and lowest cost. The Business Hiway module integrates
Business.FLEX PKS data with ERP systems such as SAP R/3,
thereby enabling quality information such as product
certification, specifications and recipes to be exchanged
between plants and corporate business systems.
Strategy. Honeywell's LIMS is a modern laboratory infor-
mation management system designed especially for lab-
oratories in the process industries. LIMS is ideally suited for
process plants when industrial-strength lab management
is needed and integrating lab data with other business
systems is desired. LIMS is optimized with features impor-
tant to process plants, but without a lot of overhead.
LIMS is fully integrated with other Honeywell software
products, which greatly reduces initial configuration and
support requirements and eliminates need for custom
integration work. The robust Uniformance Plant Refer-
ence Model provides the foundation to share equipment,
products, specifications and related information with
other applications.
Economics. Benefits are realized from effective unifi-
cation of business and production automation. As a result,
companies can typically increase production by 2-5% and decrease costs by 0.5-1%. Major benefit areas are
improved operational effectiveness, market responsive-
ness, quality control, personnel productivity, customer
satisfaction, conformance to environmental controls and
reduced working capital requirements, operating costs,
raw material utilization, utility consumption, product
returns and inventory levels.
Commercial installations. Over 1,000 Business.FLEX
PKS licenses have been installed throughout the world,
including at refineries, offshore platforms, chemical plants
and petrochemical complexes.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
Product tracking
(homeland security)
Application. This application meets the requirements
of the US DOT's proposed amendments to HM-232, Security Requirements for Offerors and Transporters of Hazardous
Materials. The application tracks product movement from
lifting to delivery and maintains information on all par-
ties involved in each product transaction. Fixed and in-
transit inventories are tracked in real time to help opti-
mize operational and logistics planning while meeting
relevant security regulations.
Strategy.
Centrally hosted solution: The AnyWhere/AnyTime
Product Tracking application collects product inventory
and shipping data in real time from both fixed-location
storage/loading facilities, as well as tanker transport vehi-
cles. Data collected from transport vehicles includes
engine ignition status, GPS location and driver status. All
fixed-location and mobile data are stored in a central
database that allows product owners, transporters and
others involved in the transaction to track all informa-
tion specific to their business interests.
AnyWhere/AnyTime access: Owner/transporter access
is by secure password-protected Web pages only from
any office PC or truck-mounted wireless device. All data
access is via the central data repository at Industrial Evolution only; separate access to each of the individual
source data locations is not required.
User-configurable electronic alerting: Each application
comes with the ability to automatically alert product
management, loading or trucking personnel of changes
in storage inventory levels versus specified targets or lim-
its. Alerts are set on a per-user basis and can be received
via e-mail, cellphone, pager, etc.
Interfaces to existing business systems: The applica-
tion interfaces to most enterprise systems, allowing col-
lected inventory and shipping data to also be integrated
with existing corporate and product management sys-
tems, in accordance with the data access rights for that
transaction partner.
Comprehensive inventory and shipping management:
The AnyWhere/AnyTime Product Tracking application
can be configured for use with one or more products,
and any number of shipping agents or transaction part-
ners. Partners are able to view live, real-time data as nec-
essary to optimize their portion of the transaction, minimizing
transport delays and increasing efficiencies, all
while meeting the latest homeland security requirements.
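The user-configurable alerting described above can be sketched as a simple per-subscription limit check. This is an illustrative sketch only; the `check_alerts` helper, tag names and users are invented, not the product's actual API.

```python
# Minimal sketch of per-user inventory alerting: each subscription
# carries its own low/high limits and delivery channel.
# Illustrative only; not Industrial Evolution's actual interface.
def check_alerts(inventory, subscriptions):
    """Return (user, tank, message) triples for limits violated now.

    inventory: {tank: current level}
    subscriptions: list of dicts with user, tank, low, high, channel
    """
    alerts = []
    for sub in subscriptions:
        level = inventory.get(sub["tank"])
        if level is None:
            continue  # no current reading for this tank
        if level < sub["low"]:
            alerts.append((sub["user"], sub["tank"],
                           f"level {level} below low limit {sub['low']}"))
        elif level > sub["high"]:
            alerts.append((sub["user"], sub["tank"],
                           f"level {level} above high limit {sub['high']}"))
    return alerts

subs = [
    {"user": "ops@example.com", "tank": "TK-101", "low": 20.0, "high": 90.0,
     "channel": "e-mail"},
    {"user": "555-0100", "tank": "TK-101", "low": 10.0, "high": 95.0,
     "channel": "pager"},
]
alerts = check_alerts({"TK-101": 94.0}, subs)
```

Because limits are stored per subscription, two users watching the same tank can receive different alerts, matching the per-user basis described above.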
Economics. The AnyWhere/AnyTime Product Tracking
application provides the following benefits:
Homeland security regulatory compliance
Reduced shipment wait times
Increased logistics efficiency
Reduced shipment transaction effort.
Commercial installations. As of mid-2003, the Prod-
uct Tracking application has been deployed to approxi-
mately 50 shipping agents.
Licensor. Industrial Evolution, Inc., Phoenix, Arizona,
and BeversHughes, Houston, Texas; Website: www.indus-
trialevolution.com; e-mail: contact@industrialevolu-
tion.com; tel. (602) 867-0416.
Product tracking for homeland security
Delivery network participants
Secure web access
Industrial Evolution
data center
Real-time database,
mapping software,
tracking software
Product delivery network
Steam methane reformer
Application. Steam reforming of natural gas (primar-
ily methane) and, less commonly, naphtha and other
hydrocarbons, is an essential step for many processing
units since hydrogen is required for most refining units,
as well as many chemical and petrochemical plants. The
steam methane reforming process is also widely used in
methanol production.
Because the process is multivariable, interactions can be
very significant. Advanced process control improves pro-
cess performance and stability and leads to more effi-
cient operations. Optimizing steam reformers is possible
with rigorous models, since appropriate trade-offs among
throughput, conversion (methane slip), steam-to-carbon
ratio, coil outlet temperature, pressure and fuel con-
sumption are not intuitive. Optimization is best done on
a plant-wide basis to take into account the true value of
the hydrogen.
Strategy. Applying Aspen Technology's DMCplus online
multivariable constrained controller on steam methane
(and other hydrocarbons) reformers ensures superior unit
stability, reduced fuel consumption, improved reformer
furnace excess oxygen control, and locally optimum selec-
tion of steam-to-carbon ratio and coil outlet tempera-
ture. The controller responds to the major process dis-
turbances and variations in fuel and feed gas composition,
operating as closely as possible to the true process con-
straints and maintaining desired hydrogen purity.
Hydrogen production targets can be incorporated in
the controller and, with proper tuning, the controller
will adjust plant capacity via timed coordination of the
manipulated variables to meet hydrogen demand. The
controller's variable-gain feature allows online adjust-
ments of the dynamic model gains as a function of pro-
duction rates. Also, variable transformation will extend
the range over which the controller model can predict
process response, thereby improving closed-loop per-
formance and constraint-handling capabilities.
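The variable-gain idea above can be sketched as rescaling a linear model's gain with throughput. This is a hypothetical illustration of the general technique, not DMCplus's actual variable-gain mechanism; the functions and numbers are invented.

```python
# Sketch of gain scheduling: the effective gain of a linear model is
# rescaled as a function of production rate before each prediction.
# Illustrative only; not the DMCplus implementation.
def scheduled_gain(base_gain, rate, base_rate):
    """Many furnace responses scale roughly inversely with feed rate:
    at higher throughput the same input move produces a smaller
    output change, so the model gain is reduced proportionally."""
    return base_gain * base_rate / rate

def predicted_response(gain, delta_mv):
    """Steady-state prediction of a linear model: dCV = K * dMV."""
    return gain * delta_mv

# At 120% of the base rate, the gain identified at 100% is scaled down.
K = scheduled_gain(base_gain=2.0, rate=120.0, base_rate=100.0)
dcv = predicted_response(K, delta_mv=3.0)
```

The same structure extends to variable transformation: a nonlinear measurement can be transformed (e.g., logarithmically) so the controller's internal model stays linear over a wider operating range.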
The Aspen Plus Optimizer rigorous modeling and opti-
mization system provides a superior tool for real-time
process simulation. Aspen Plus Optimizer determines in
real time the optimum operating conditions for increas-
ing profitability by trading off increased hydrogen pro-
duction and purity, and reducing energy consumption.
The models can also be used to develop the appropriate
functional form for transforming nonlinear variables to
be used in linear controllers, and to develop strategies
for plant testing and controller tuning. Additionally, the
models can be used to explore and optimize design
changes.
Optimization models include the catalyst-filled tube,
radiant firebox and convection section. The catalyst
tube model includes heterogeneous kinetics for each
feed component, from methane through light naph-
tha. Prereformers (adiabatic) can also be modeled. The
models have been validated over a very wide range of
conditions, including low pressure (3.5 bar) to high
pressure (over 40 bar), and feeds including natural gas,
naphtha, butane, recycled purge gas and CO2-rich feeds
(for 2-ethyl-hexanol plants). The effluent is typically
over 65 dry mole percent hydrogen, but in the case of
2-ethyl-hexanol plants is a 1:1 H2:CO product for the
downstream Fischer-Tropsch reactors.
Economics. Typical benefits of implementing DMCplus
controllers on steam methane reformers are in the range
of 2% to 4% efficiency improvement or reduced hydro-
gen production cost. The improved efficiency is a result
of reduced fuel gas consumption, optimum steam usage,
and higher furnace efficiency. Optimization benefits
can be significantly more, and are highly dependent on
the downstream products and constraints. Often, the
steam reformer, at optimal overall plant conditions, is at
very different conditions than if optimization is only
applied to the reformer.
Commercial installations. The control technology has
been installed on more than a dozen steam methane
reformers (stand-alone and integrated in refineries and
chemical plants), and the optimization model has been
applied to four steam reformers.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Styrene
Application. The styrene plant comprises two compo-
nents: an ethylbenzene section and a styrene section.
Each component consists of a reactor section and a sep-
aration section. In the ethylbenzene reactor section, ethy-
lene and benzene react to make ethylbenzene, while in
the separation section the reactor effluent is split into
unreacted benzene and polyethylbenzene, both recy-
cled, and ethylbenzene. The produced ethylbenzene is
sent to the downstream styrene section in which styrene
is made by ethylbenzene dehydrogenation. In the distil-
lation section, the ethylbenzene is separated from the
styrene and recycled; the styrene is further purified.
The highly integrated design of a styrene plant, catalyst
deactivation and the recycle streams make it very
difficult to determine the true optimum, and to operate
at this optimum. Implementing the Aspen Plus Optimizer
in conjunction with DMCplus multivariable constrained
control improves process performance monitoring and
allows operating the unit as close as possible to the true
process constraints, increasing high-purity styrene pro-
duction.
Strategy. Operating a styrene plant is a balancing act
between a number of independent variables: reactor
temperature and pressure, steam-to-hydrocarbon ratio
and throughput all affect catalyst life, conversion and
selectivity. Determining optimum targets for these inde-
pendent variables is either done by an offline or online
optimizer. The targets are then sent to the DMCplus
advanced process control application, which will move
the unit to the optimum without violating process con-
straints. To obtain full benefits from the control system,
all key manipulated variables in the feed, distillation
columns and reaction system must be included. This allows
the DMCplus controller to maintain the plant at the true
optimum.
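The hand-off described above, where the optimizer supplies targets and the control layer moves the unit toward them without violating limits, can be sketched as a rate-limited, clamped setpoint update. This is a conceptual illustration only, not the DMCplus algorithm; all variable names and limits are invented.

```python
# Sketch of an APC layer stepping manipulated variables toward
# optimizer targets, limited by per-move sizes and high/low limits.
# Hypothetical illustration of the concept, not the DMCplus algorithm.
def step_toward_targets(current, targets, limits, max_move):
    new = {}
    for mv, value in current.items():
        lo, hi = limits[mv]
        desired = targets[mv]
        # Limit the size of each move, then clamp to operating limits.
        move = max(-max_move[mv], min(max_move[mv], desired - value))
        new[mv] = min(hi, max(lo, value + move))
    return new

current = {"reactor_T": 600.0, "steam_ratio": 1.2}
targets = {"reactor_T": 630.0, "steam_ratio": 1.0}
limits = {"reactor_T": (580.0, 620.0), "steam_ratio": (0.9, 1.5)}
max_move = {"reactor_T": 5.0, "steam_ratio": 0.05}
new = step_toward_targets(current, targets, limits, max_move)
```

Note that the reactor temperature target of 630 lies above the 620 limit, so the controller can only approach the constraint, never cross it; this is the sense in which the unit runs "as close as possible" to the true constraints.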
Economics. A capacity increase of 3% can be achieved by
implementing an advanced control system that includes
the reactors.
Commercial installations. AspenTech has completed an
application on one styrene unit, and several more appli-
cations are under consideration.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Sulfur complex
Application. The sulfur complex, which typically consists
of several amine recovery units (ARUs), sour water
strippers (SWSs), sulfur recovery units (SRUs) and tail
gas treating units (TGTUs), is one of the most important
and integral parts of refining and gas processing. How-
ever, it is often overlooked for potential APC improve-
ments to relieve operational bottlenecks and help meet
stringent environmental regulations. Interactions
between sulfur complex units, changes in sour gas pro-
cessing demand, feedstock changes and the need to
balance multiple sulfur processing trains present a com-
plex control application.
Aspen Technology's sulfur complex control package,
based on DMCplus, is ideally suited for this application,
and can be applied to and integrated across the entire
sulfur complex operation, including the ARU, SWS, SRU and
TGTU. The DMCplus constrained multivariable application
significantly improves sulfur complex operations by
maximizing capacity to each SRU, improving H2S-to-SO2
ratio control, minimizing TGTU recycle, and balancing
the acid gas demand between parallel ARU/SRU/TGTU trains.
Control strategy. Individual DMCplus controllers are
configured for the entire sulfur complex plant, includ-
ing ARUs, SWSs and SRUs/TGTUs. All significant con-
straints are handled explicitly. The controller responds
to all significant unit interactions, accounts for unit con-
straints, handles both fast- and slow-controlled vari-
able dynamics, compensates for changes in sour gas/acid
gas production load changes, maximizes available
throughput, improves sulfur recoveries, improves con-
trol of rich and lean amine loadings, improves operat-
ing stability, reduces upsets and improves environmental
regulatory compliance.
The controller performs a thorough constrained opti-
mization calculation at each controller execution. Oper-
ating simultaneously at the optimal lean amine loading,
SWS bottoms pH, thermal reactor pressure, reactor dew
point approach, H2S/SO2 ratio, TGTU hydrogen concentration,
incinerator O2 and SO2 concentration and
hydraulic constraints maximizes sulfur complex capacity
and profitability. The DMCplus controller adjusts SRU
acid gas flow, SRU O2 flow, SRU acid gas air ratio, SRU
reactor reheater temperatures, TGTU H2 flow, TGTU incinerator
air flow, ARU reflux flow, ARU rich amine flow
and ARU reboiler duty.
SRU dewpoint approaches, lean amine H2S loadings,
and product quality models are implemented using the
Aspen IQ inferential sensor package. The flexible
client/server allows the user to plug in a variety of
engines (empirical, rigorous, fuzzy logic, neural net, cus-
tom, etc.) to generate the online models. Analyzer vali-
dation and update, as well as SQC techniques for labo-
ratory validation and update, are seamlessly incorporated
into Aspen IQ. Amine H2S loading is calculated using
AspenPlus and HYSYS modeling technology to account
for the nonideal solution behavior.
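Inferential (soft) sensors of the kind described above typically combine an online model with a bias that is updated whenever a validated laboratory result arrives. The sketch below shows that general technique; the linear model form, coefficients and filter gain are invented for illustration and are not Aspen IQ's implementation.

```python
# Sketch of an inferential sensor with laboratory bias updating,
# the general technique behind inferential-sensor packages.
# Model form and numbers are invented for illustration.
class InferentialSensor:
    def __init__(self, coeffs, intercept):
        self.coeffs = coeffs        # {input name: model coefficient}
        self.intercept = intercept
        self.bias = 0.0             # correction from lab feedback

    def predict(self, inputs):
        raw = self.intercept + sum(
            self.coeffs[k] * inputs[k] for k in self.coeffs)
        return raw + self.bias

    def lab_update(self, inputs, lab_value, filter_gain=0.3):
        """Filtered bias update: move the bias a fraction of the way
        toward the latest model-versus-laboratory error, so a single
        noisy lab sample cannot jerk the online estimate."""
        error = lab_value - self.predict(inputs)
        self.bias += filter_gain * error

sensor = InferentialSensor({"temp": 0.1, "press": -0.05}, intercept=2.0)
x = {"temp": 50.0, "press": 20.0}
before = sensor.predict(x)           # model-only estimate
sensor.lab_update(x, lab_value=7.0)  # validated lab result arrives
after = sensor.predict(x)            # biased estimate moves toward lab
```

The SQC-style laboratory validation mentioned above would sit in front of `lab_update`, rejecting lab samples that fail statistical checks before they are allowed to move the bias.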
Economics. Benefits in the range of 2% to 4% sulfur capac-
ity increase, with project payouts less than one year, are
typical.
Commercial installations. AspenTech has commis-
sioned more than 3 sulfur complex applications, totaling
over 14 individual SRU/TGTU trains.
Reference. "Performance Improvements of a Sulfur
Complex Using Model Predictive Control," NPRA, November
2001 (Motiva Enterprises LLC, Convent, Louisiana).
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Syngas generation plant
Application. MVC-based advanced process control has
been applied to optimize steam and gas flows to the
steam/methane reformer, resulting in increased car-
bon monoxide production. Three process areas, the
steam/methane reformer, carbon dioxide removal and
cold box areas have MVC modules installed.
Control strategy. The first control objective is to maxi-
mize steam and feed gas flow to the steam/methane
reformer while optimizing operation of the cold box and
other process units. The second control objective is to
remove the carbon dioxide from the carbon monoxide and
optimize the methane and carbon dioxide content while meeting
the carbon monoxide product specifications.
MVC is a nonlinear multivariable control and economic
optimization technology that incorporates predictive
and adaptive algorithms derived from rigorous simula-
tions and field tests calibrated to match actual plant per-
formance. MVC resides within a Windows 2000 or RISC
workstation interfaced to or integrated within the plant
control system.
Economics. MVC has achieved a net increase in car-
bon monoxide production by increasing yield and max-
imizing plant throughput. Unscheduled plant shut-downs
have been reduced while improving carbon monoxide
product quality through better cold box performance.
Commercial installation. This technology has been
implemented at one domestic plant.
Licensor. GE Drives & Controls, Inc., Houston, Texas; e-mail:
gecci@indsys.ge.com; tel: (832) 296-7699.
MVC
FIC
TIC
PIC
Fuel
gas
Bank of heat
exchangers
Regenerator
gas heater
Hydrotreater/
desulfurizer
BFW
Feed preheater
Natural
gas fuel
Recycle
gas
Natural
gas feed
BFW Steam
generator
CO concept
CO-CH4 tower
H2 strip tower
CO2 stripper
CO2 absorber
KO drum
KO drum
Separators
CO2 compressor
Coldbox
heat
exchanger
Expander
Amine
soln.
CH4
CH4 to fuel
Steam
Steam
FIC
FIC
SC
AI
Calc
FIC
FIC
TI
MVC
MVC
MVC
MVC
MVC
CO
product
CH4
Steam
Reboiler
Terephthalic acid
Application. ABB's advanced process control (APC) pack-
age combines the use of DCS block-based advanced con-
trol applications and multivariable model predictive con-
trols (MVPC) for improving terephthalic acid plants'
productivity and profitability.
Strategy. By applying advanced controls to the feed
blending and reaction process, TPA plant productivity
and profitability are improved. Applications applied to the
feed blending are designed to maintain proper reactant
to solvent ratios, minimize fresh catalyst consumption
and maximize recycle stream utilization. Control of the
reaction process requires fast reaction time, thus a com-
bination of conventional DCS-based APC and MVPC is
applied, with the MVPC handling the constraint controls
and feed maximization. Advanced control logic is applied
to the reslurrying process to ensure that the proper
weight percent solids in the slurry are maintained and
requires intelligent controls capable of making proper
adjustments depending on purification unit load require-
ments. In addition, the package includes advanced appli-
cations for:
Solids concentration handling in the crystallization
area
Moisture control in the drying area
Reactant and solvent recovery
Temperature control and steam minimization in
the purification unit preheat section
Hydrogen to terephthalic acid feed ratio in the
hydrogenation reactors.
Economics. Benefits studies show a payback of 6 to 12
months, depending on product pricing and raw mate-
rial costs.
Commercial installations. This APC application has
been installed on one TPA unit.
Licensor. ABB Inc., Simcon Advanced Application Ser-
vices, Sugar Land, Texas; Website: www.abb.com.
Terephthalic acid
Application. Advanced control applications are applied
to both the crude terephthalic acid (CTA) and purified
terephthalic acid (PTA) sections, including the CTA reac-
tors and crystallizers, the CTA dehydration tower, the
PTA hydrogenation reactors, PTA crystallizers and hot oil
furnace. Profit Controllers based on Robust Multivari-
able Predictive Control Technology (RMPCT) are used in
these applications for online control and optimization.
This advanced control algorithm minimizes tuning
requirements and maintains good control under chang-
ing conditions and model error. Model identification,
controller building, testing and simulation are available
in the Windows environment. These individual Profit
Controllers can be dynamically integrated by using an
upper-level Profit Optimizer to coordinate control strate-
gies across the complex.
Strategy. Profit Controllers are applied to each of the
major areas of the CTA and PTA plant sections.
CTA reactors and crystallizers. The controller will
maximize CTA production subject to unit constraints
and control key quality specs including 4-CBA and opti-
cal density. The controller adjusts reactor and crystallizer
air-to-feed ratios, catalyst/feed and solvent/feed ratios,
and water withdrawal to maintain reactor tempera-
ture, pressure, excess O2, CO/CO2 concentration and
water content.
CTA dehydrator tower. The controller will maximize
acetic acid recovery subject to tower constraints. The
controller adjusts tower reflux and steam to maintain
stable water content in the tower bottoms and mini-
mize loss of acetic acid overhead.
PTA hydrogenation reactors and crystallizers.
The controller will control the PTA 4-CBA content by
control of the hydrogen to CTA feed, and reactor level
and pressure.
PTA crystallizer pressure controls are adjusted to main-
tain the desired delta-P across adjacent crystallizers and
minimize loss of demineralized water.
Hot oil furnace. The controller will control furnace out-
let temperature and furnace excess O2 and minimize fuel
gas usage.
Economics. Benefits from implementing advanced con-
trols come from increased production rate across the
complex, and reduced consumption of raw materials, p-
xylene, acetic acid and utilities. Paybacks from projects
are typically between 5 and 12 months.
Commercial installations. These controls have been
implemented on over eight units.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
Terephthalic acid
Application. IntellOpt's Advanced Process Control (APC)
technology comprehensively covers the process areas of
feed mix, oxidation reactors, dehydrator, hot oil heater,
slurry feed mix and PTA hydrogenation reactor, to
improve profitability while honoring safety limits.
Strategy. The TA/PTA unit APC applications are com-
posed of DCS-based advanced regulatory controls, cou-
pled with GMAXC-based Multivariable Predictive Con-
trol (MVPC) of the TA oxidation reactors. Typical control
strategies include:
TA Feed Preparation Control: Maintain the catalyst
concentration in the feed, and the feed drum level.
TA Oxidation Reactors MVPC: Maintain the prod-
uct qualities (4CBA and Transparency), excess oxygen,
burn rate and reactor temperatures by simultaneously
adjusting the feed rate, air rate, reactor level, reactor
pressure and water withdrawal rate.
Dehydrator APC: Fuzzy Logic Control to minimize
acid loss in the overhead and maintain water concen-
tration in the bottoms acid stream.
Hot Oil Heater Control: Maintain the heater out-
let temperature and excess oxygen.
PTA Slurry Feed Control: Maintain slurry drum level
and percent solids in slurry feed.
PTA Feed Preheat Energy Minimization: Maximize
energy recovery from process streams while maintain-
ing reactor feed temperature.
Crystallizer Level Control: Valve flushing logic with
user selectable frequency and severity to avoid line plug-
gage.
PTA Hydrogenation Reactor Control: Maintain
desired conversion of 4CBA by proper control of reactor
level, pressure and hydrogen concentrations.
Economics. Payback period is about 6 to 9 months, with
improvements in feed rate capacity, lower acetic acid
consumption (burn rate), stable product qualities and
lower energy consumption.
Commercial installations. This technology has been
implemented on four units.
Reference. APC Improves TA/PTA Plant Profits, Hydro-
carbon Processing, October 1997.
Licensor. Intelligent Optimization Group, Houston, Texas
(www.intellopt.com).
Reactants
Crude
reaction
Crystallization
section
Product
separation
section
Crude
TA
recovery
section
TA
catalysts
recovery
section
TA
solvent
recovery
section
PTA
product
storage
PTA
recovery
section
PTA
preheat
section
PTA
purification
reaction
section
PTA
feed mix
section
Solvent
Catalysts
Water
Terephthalic acid
dehydrator (fuzzy
logic controller)
Application. IntellOpt's Fuzzy Logic Controller (Z-Way)
models uncertainty by converting operating heuristics
and experience into a quantifiable model for top tem-
perature control.
Strategy. The TA azeotropic dehydrator requires tight
control of not only top tray temperatures, but also of
the temperature difference between two specific and
sensitive trays. The Z-Way:
Allows (online) selection of tray pairs for tempera-
ture control
Models the qualitative tray temperature deviations
from their targets (such as high, high-high, low, low-low)
into mathematical possibilities (membership functions)
Models the qualitative rate of change of tray tem-
peratures (such as up or down) into mathematical possi-
bilities (membership functions)
Fuzzifies the tray temperatures and their slopes
into quantitative confidence values for rule inferencing
Evaluates/computes all rules (e.g., "If top tray temperature
is high and the rate of the top tray temperature is
up, then increase reflux by a medium amount.")
Combines the conclusion of all rules (different rules
can have different conclusions for the same observations)
Defuzzifies the conclusion into a quantitative num-
ber for change in the reflux flow setpoint.
Other enhancements, such as maximum cumulative
change of reflux flow in a specified time period (to take
care of process delays), and wait-and-hold features were
added in the Z-Way algorithm to minimize adjustments
while allowing the dehydrator to settle down.
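The fuzzify, rule-inference and defuzzify cycle described above can be sketched in a few lines. The membership shapes, rules and numbers below are invented for illustration; this is the general fuzzy-control technique, not IntellOpt's Z-Way algorithm.

```python
# Minimal sketch of fuzzify -> rule inference -> defuzzify for a
# tray-temperature-to-reflux controller. Illustrative only; the
# membership functions, rules and units are invented.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_reflux_change(temp_dev, temp_slope):
    """temp_dev: top tray temperature minus target.
    temp_slope: rate of change of that temperature.
    Returns a reflux setpoint change (illustrative units)."""
    # Fuzzify inputs into confidence values (membership degrees).
    high = tri(temp_dev, 0.0, 2.0, 4.0)
    high_high = tri(temp_dev, 2.0, 4.0, 6.0)
    going_up = tri(temp_slope, 0.0, 0.5, 1.0)
    # Rules: each pairs a firing strength with a crisp conclusion.
    rules = [
        (min(high, going_up), +0.5),       # high and rising: medium increase
        (min(high_high, going_up), +1.0),  # very high and rising: large increase
    ]
    # Defuzzify: weighted average of conclusions (centroid of singletons).
    total = sum(w for w, _ in rules)
    if total == 0.0:
        return 0.0  # no rule fired; make no move
    return sum(w * out for w, out in rules) / total

delta = fuzzy_reflux_change(temp_dev=2.0, temp_slope=0.5)
```

The wait-and-hold and cumulative-move-limit enhancements mentioned above would wrap this function, suppressing further moves until earlier ones have had time to work through the process deadtime.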
Economics. Improvements in process stabilization and
reduction of acetic acid carryover in the overhead. Also,
reduced engineering costs as plant testing for dynamic
models (typically used in MVPC technology) are not
required. Z-Way technology appears to fill the technol-
ogy gap between typical advanced regulatory control
(not sufficient for this process) and MVPC (too costly).
Commercial installations. This technology has been
implemented on three terephthalic acid azeotropic
dehydrators.
Licensor. Intelligent Optimization Group, Houston, Texas,
(www.intellopt.com).
To NBA decanter
To HAc tank
To high pr. abs
Caustic
H2O, %
LC
LC
AI
TC
PI
PC
FC
Reflux from MA and NBA/PX strippers
Water draw from reactors
5 Kg
steam
Spray from HP abs.
Spray from atm. abs.
Vent
Urea
Application. Emerson's solution for urea plant control is
one of several applications targeting the nitrogen-based
fertilizer manufacturing industry. It combines both tra-
ditional advanced regulatory control solutions with mul-
tivariable predictive constraint controls. These tech-
nologies power Emerson's PlantWeb digital plant
architecture to improve plant throughput and reduce
operating costs.
Control strategy. Primary control functions consist of:
Ammonia/CO2 ratio control. Controller ratios ammonia
to the sum of all CO2 flows. The ammonia provides
heat to the reactor, and the reactor temperature control
is coupled with the NH3/CO2 control.
Carbamate strength control. Controller varies con-
densate to the wash column to keep recycled carbamate
strength at target.
CO2 feed rate pusher. Controller will drive production
rate against constraints including compressor, feed avail-
ability, heating and cooling, condensate availability, pres-
sure and valve position limits.
Evaporator controls. Controller stabilizes urea con-
centration controls and ultimately minimizes steam con-
sumption in the evaporators.
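The ammonia-to-CO2 ratio control described above can be sketched as computing the ammonia flow setpoint from the sum of the measured CO2 flows and a target ratio. The tag names, ratio value and limits below are invented for illustration, not Emerson's implementation.

```python
# Sketch of ratio control: the ammonia flow setpoint tracks the sum
# of all CO2 flows multiplied by a target ratio, clamped to limits.
# Illustrative only; tags, ratio and limits are invented.
def ammonia_setpoint(co2_flows, target_ratio, sp_lo, sp_hi):
    """co2_flows: {flow tag: measured CO2 flow}.
    Returns the clamped ammonia flow setpoint."""
    total_co2 = sum(co2_flows.values())
    sp = target_ratio * total_co2
    return min(sp_hi, max(sp_lo, sp))  # respect valve/equipment limits

co2 = {"FIC-101": 40.0, "FIC-102": 35.0}
sp = ammonia_setpoint(co2, target_ratio=3.0, sp_lo=0.0, sp_hi=300.0)
```

Summing all CO2 flows (rather than ratioing against a single stream) is what keeps the ratio honest when production is pushed against compressor or feed-availability constraints and flows shift between trains.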
Economics. Typically urea production can be increased
by 2% to 4% and steam usage can be reduced by 0.5 Mlb/ton
of urea. More stable operation also allows a wider oper-
ating range (greater turndown).
Commercial installation. This technology has been
applied in at least one urea plant.
Licensor. Emerson Process Management, Austin, Texas;
www.emersonprocess.com/solutions/aat. Contact: Emer-
son Process Management, Tim Olsen, Process and Perfor-
mance Consultant, Advanced Applied Technologies, tel:
(641) 754-3459, e-mail: Tim.Olsen@EmersonProcess.com.
Utilities
Application. To manage a utility system at lowest cost,
many decisions need to be made under many constraints.
The challenge is the ability to consider all the constraints
and aspects of the problem simultaneously in an ever-
changing economic environment. Aspen Utilities is a
modeling and flowsheeting application, combined with
an optimization capability, specially developed for util-
ity system design, operation and management within or
linked to process plants. It addresses the key issues in the
purchase, supply and use of fuel, steam and power within
environmental constraints; and provides a single tool to
optimize energy business processes and substantially
improve financial performance.
Aspen Utilities provides a model-centric approach,
whereby a single rigorous utilities system model is used
to address all the important business processes associ-
ated with the purchase, generation, use and distribution
of utilities at industrial sites. This ensures that all deci-
sions are made on the same basis and are, therefore,
mutually consistent and compatible. Business processes
include demand forecasting, utilities production plan-
ning, purchasing, trading, optimal plant operation, con-
tract management, performance monitoring, emissions
monitoring/constraints, cost accounting and investment
planning.
Strategy. Aspen Utilities incorporates a library of mod-
els specifically developed for utility systems, which can
be tuned to reflect actual unit performance on an oper-
ating site; and allows graphical flowsheet construction
from these models and a powerful set of solution tech-
niques to solve applications in steady-state simulation,
parameter estimation and data reconciliation.
Incorporating a mixed-integer linear optimizer that
enables any utility system flowsheet to be optimized to
define the minimum system cost and operation, Aspen
Utilities also allows the cost objective to be customized to
accurately reflect complex gas or electricity supply con-
tracts featuring multitier structures. Constraints such as
equipment availability and maximum emissions limits
are easily incorporated, together with issues such as dif-
ferent tariffs; alternative fuels; optimum boiler and tur-
bine loading, equipment choice, electricity import, self-
sufficiency or export, and drive choice (motor or turbine).
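The mixed-integer character of the problem, with on/off equipment decisions alongside continuous load decisions, can be illustrated with a toy example. The sketch below enumerates boiler on/off combinations by brute force and loads the cheapest units first; real systems use a MILP solver, and all data here is invented.

```python
# Toy illustration of mixed-integer utility optimization: choose
# which boilers to run (integer decisions) and how to load them
# (continuous decisions) to meet a steam demand at minimum cost.
# Brute-force enumeration for illustration only; not Aspen Utilities.
from itertools import product

boilers = [  # (minimum load, maximum load, cost per unit steam) - invented
    {"name": "B1", "min": 20.0, "max": 100.0, "cost": 1.0},
    {"name": "B2", "min": 30.0, "max": 120.0, "cost": 1.3},
]
demand = 150.0

def best_schedule(boilers, demand):
    best = None
    for on_flags in product([0, 1], repeat=len(boilers)):
        on = [b for b, f in zip(boilers, on_flags) if f]
        if (sum(b["min"] for b in on) > demand
                or sum(b["max"] for b in on) < demand):
            continue  # this on/off combination cannot meet demand
        # Continuous loading: run every selected unit at minimum,
        # then fill remaining demand into the cheapest units first.
        loads, rest = {}, demand
        for b in on:
            loads[b["name"]] = b["min"]
            rest -= b["min"]
        for b in sorted(on, key=lambda b: b["cost"]):
            extra = min(b["max"] - b["min"], rest)
            loads[b["name"]] += extra
            rest -= extra
        cost = sum(loads[b["name"]] * b["cost"] for b in on)
        if best is None or cost < best[0]:
            best = (cost, loads)
    return best

cost, loads = best_schedule(boilers, demand)
```

Minimum-load constraints are why the problem is genuinely mixed-integer: a boiler cannot simply be throttled to zero, so turning one off changes the feasible region, not just the cost.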
Aspen Utilities can be used to identify optimum util-
ity plant setup to take advantage of available flexibility
in purchase, generation, use and distribution of utilities
at an industrial site. This can be used both offline (for
planning) and online (to provide guidance to operators).
Aspen Utilities helps users identify the most cost-effec-
tive utility suppliers and contract parameters for a site,
applicable for daily, monthly or annual contract nomi-
nation, from a fixed supplier or for contract evaluation
studies to determine the long-term gas or electricity sup-
pliers. It can also be used to determine marginal gas
and/or electricity price at the site, and to guide decisions
relating to the sale or purchase of electricity and/or gas on
the spot market.
Economics. Typical benefits are a 3% to 8% reduction in
sitewide energy bills. These benefits are obtained from
better purchasing (lower contract price), better adher-
ence to contract/tariff terms, maximized use of the most
efficient equipment, correct choice and use of fuels,
reduced equipment on standby and steam venting, and
faster response to (and better targeting of) problems.
Return on investment is less than one year, and typically
as little as six months.
Commercial installations. AspenTech has installed
Aspen Utilities at nearly 30 sites worldwide.
Licensor. Aspen Technology, Inc., Cambridge, Mas-
sachusetts, US; Houston, Texas, US; and approximately
50 offices worldwide. Internet: www.aspentech.com.
Utilities
Application. In many refineries and chemical plants,
power and utilities are the second largest operating cost
component (after feedstocks). Proper management of
modern cogeneration/utilities plants can provide signif-
icant cost savings for any site with a requirement for
efficient heat and electrical power.
Factors such as ambient air conditions, electricity prices,
process demands and equipment degradation can greatly
affect the optimal operating points.
Tightening environmental limits on NOx and CO emis-
sions further complicates the picture for most plants.
Strategy. Emerson's PlantWeb digital plant architecture
provides real-time performance monitoring and opti-
mization technologies, including a complete suite of rig-
orous unit models complemented by a proven online
optimization layer. Emerson's robust online environment
enables close to 100% uptime.
PlantWeb integrates the digital automation system
with the data acquisition and historian system. The real-
time executive layer of system software manages data
acquisition, filtering, validity checking and data substi-
tution. Rigorous data reconciliation is performed to iden-
tify bad inputs that can be replaced with estimated val-
ues, default values or last good values by the parameter
estimation package. Predicting NOx, CO and other com-
ponents in the exhaust gases is a standard feature of the
models using kinetic reaction equations.
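The validity checking and data substitution described above can be sketched as a per-tag range check with last-good-value fallback. The limits, tag names and `validate` helper below are invented for illustration, not Emerson's actual software.

```python
# Sketch of measurement validity checking with substitution: an
# out-of-range or missing reading is replaced by the last good
# value, falling back to a configured default.
# Illustrative only; limits and tag names are invented.
def validate(tag, value, limits, last_good, default):
    """Returns (value to use, whether the raw reading was valid)."""
    lo, hi = limits[tag]
    if value is None or not (lo <= value <= hi):
        # Bad input: prefer the last good value, else the default.
        return last_good.get(tag, default[tag]), False
    last_good[tag] = value  # remember the latest good reading
    return value, True

limits = {"TI-100": (0.0, 800.0)}
last_good, default = {}, {"TI-100": 450.0}
v1, ok1 = validate("TI-100", 520.0, limits, last_good, default)   # in range
v2, ok2 = validate("TI-100", 1200.0, limits, last_good, default)  # out of range
```

In a full system the rigorous data reconciliation layer mentioned above would go further, using model balances to flag readings that are in range individually but inconsistent with each other.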
The system has the capability to run multiple opti-
mization cases simultaneously and present various
results to the operator; for example, a step-limited
solution, a global optimum case, or a case with "day
zero" or clean parameters to evaluate the cost of equipment
degradation.
The plant optimization application uses Emerson's
advanced optimization system, incorporating a variety
of solution algorithms including LP, SLP, SQP and mixed
integer options that can be selected with a click of the
mouse. Results can be implemented within the digital
system automatically, or passed back as supervisory targets
for the operator. Total plant optimization is achieved by
employing a tiered system:
Continuous optimization allows current equipment to
operate at minimum cost for a given demand and within
the emissions and equipment constraints.
Configuration optimization performs the optimal
equipment selection with current equipment perfor-
mance and penalties to prevent excessive equipment
starting and stopping.
Look-ahead optimization predicts future plant oper-
ation based on profiled demands/prices.
Benefits. This system can typically save 310% of the
energy costs, depending on the size and age of the plant.
Systems generally pay for themselves in less than eight
months.
Commercial installations. This system has been
installed in over 15 sites around the world.
Licensor. Emerson Process Management, Austin, Texas;
www.emersonprocess.com/solutions/aat. Contact: Emer-
son Process Management, Tim Olsen, Process and Perfor-
mance Consultant, Advanced Applied Technologies, tel:
(641) 754-3459, e-mail: Tim.Olsen@EmersonProcess.com.
HPS
LPS
WHB
WHB
DS
Fuel Boiler
house
Water
treatment
GT1
GT2
Stack
MPS
LD
T2
T1
Air
Air
Process Process
Process
Export elec.
Import elec.
Process
Utilities
Application. Steam systems represent a large but nec-
essary portion of the operating expense of most oil
refineries, chemical plants and other large industrial
plants. Steam systems have been notoriously difficult to
manage. They are large, with many miles of piping spread
over many utility plants, process plants and offplot areas. Steam
system upsets can cause shutdowns of individual process
plants and even whole facilities. Steam system design
changes for new or modified process plants are frequently
not built at the least capital cost and best energy effi-
ciency. Accounting for steam production and use has
been difficult because of system complexity and poor
metering, and has led to large amounts of wasted steam.
This wasted steam is not only a large energy cost, it is
also wasted capacity that could supply all or part of the
new demand for new process plants.
Strategy. Visual MESA is an online, graphical steam sys-
tem management tool that can solve the problems
described above.
Visual MESA protects your steam system by moni-
toring all variables and providing alerts on important
changes.
It tracks key operating parameters including eco-
nomics. It helps in emergencies with directed load shed
advice.
Visual MESA finds how to run the steam system at
minimum operating cost using optimization. Visual MESA
also optimizes starting and stopping of individual spared
equipment, like turning on a motor and turning off a
turbine. Optimization is customized to your facility so
no infeasible or unsafe moves are recommended.
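The spared-equipment side of this optimization is discrete: each driver is on or off, and cycling is penalized. For a handful of drivers it can be illustrated by plain enumeration (driver names, duties and costs below are invented, and the penalty stands in for the customized anti-cycling rules):

```python
# Sketch of the discrete (on/off) optimization: choose which spared
# drivers run (e.g. a motor vs. a steam turbine on the same service) by
# enumerating combinations, with a penalty for changing state so that
# equipment is not started and stopped excessively.
from itertools import product

def best_lineup(drivers, duty, switch_penalty=50.0):
    """drivers: list of (name, capacity, cost_per_h, currently_on).
    Returns (cost, {name: on}) covering duty at minimum cost."""
    best = None
    for states in product([False, True], repeat=len(drivers)):
        cap = sum(d[1] for d, on in zip(drivers, states) if on)
        if cap < duty:
            continue  # this lineup cannot carry the load
        cost = sum(d[2] for d, on in zip(drivers, states) if on)
        # Charge each start or stop relative to the current lineup.
        cost += switch_penalty * sum(on != d[3] for d, on in zip(drivers, states))
        if best is None or cost < best[0]:
            best = (cost, {d[0]: on for d, on in zip(drivers, states)})
    return best

drivers = [("motor A", 100.0, 60.0, True),
           ("turbine B", 100.0, 45.0, False),
           ("motor C", 60.0, 40.0, True)]
print(best_lineup(drivers, 120.0))
```

With these numbers the current lineup (A + C) stays in service: turbine B is cheaper per hour, but not by enough to pay the switching penalty.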
Visual MESA is used to predict how your steam system
will respond to a proposed change such as a new plant,
a process change, a shutdown or whatever YOUR facility
needs to understand.
Using Visual MESA's data validation techniques, you
can accurately account for steam use and track down
waste and inefficiencies where they exist.
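The core of such data validation is reconciling raw meter readings so they close the steam balance, trusting accurate meters more than poor ones. A minimal sketch for a single header balance, with closed-form weighted least squares (meter names and readings are illustrative, not Visual MESA's actual method):

```python
# Reconcile flow-meter readings around one steam header so the balance
# closes exactly. Minimizes sum(((xhat - x)/sigma)^2) subject to
# sum(sign * xhat) = 0; the Lagrange solution adjusts each meter in
# proportion to its variance.

def reconcile(meters):
    """meters: list of (name, sign, reading, sigma); sign is +1 for flow
    into the header, -1 for flow out. Returns {name: reconciled_flow}."""
    imbalance = sum(s * x for _, s, x, _ in meters)
    weight = sum(sg * sg for _, _, _, sg in meters)
    lam = imbalance / weight
    return {name: x - s * sg * sg * lam for name, s, x, sg in meters}

meters = [("boiler out", +1, 102.0, 2.0),   # readings in t/h, sigma in t/h
          ("turbine in", -1, 60.0, 1.0),
          ("process in", -1, 37.0, 3.0)]
fixed = reconcile(meters)
print(fixed, "closure:", sum(s * fixed[n] for n, s, _, _ in meters))
```

The 5 t/h of "lost" steam in the raw readings is redistributed mostly to the least accurate meter, which is exactly where an audit should look first.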
Key features.
Visual MESA is an online, graphical, nonlinear model
of your entire steam and electric systems. Visual MESA
steam system models are easy to build and maintain.
Icons that represent each component of a steam system
are used to build or modify a model. These icons are con-
nected together on a series of hierarchical drawings that
allow the model to be easily used for monitoring, opti-
mization, performing what-if cases and auditing and
accounting for steam production and use.
The component icons are object-oriented. For exam-
ple, a steam turbine icon contains all of the data that
describes its performance. The drawings and data are in
one database which makes the model easier to maintain.
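The component-object idea can be pictured as each icon carrying its own performance data and calculations. A back-pressure turbine as a self-describing object, using a simple linear Willans-line model (the class and all numbers are illustrative, not Visual MESA internals):

```python
# A steam-system component as an object: the turbine "icon" owns the
# data that describes its performance and the calculation that uses it.
from dataclasses import dataclass

@dataclass
class SteamTurbine:
    name: str
    willans_slope: float   # t/h of steam per MW of shaft power
    no_load_flow: float    # t/h of steam at zero power

    def steam_flow(self, power_mw):
        """Steam demand at a given shaft power (linear Willans line)."""
        return self.no_load_flow + self.willans_slope * power_mw

t1 = SteamTurbine("T1", willans_slope=4.2, no_load_flow=3.0)
print(t1.steam_flow(5.0))  # steam demand in t/h at 5 MW
```

Because the drawing object and its data live together, moving the icon between headers or retuning its slope never leaves a second copy to fall out of date.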
Visual MESA is written in a graphical, object-ori-
ented environment for building and deploying real-
time systems.
Visual MESA uses MESA (MESA Co.), the most accu-
rate and reliable steam system modeling program. This
combination makes it possible to manage the entire
steam system of an industrial complex.
Economics. Typical savings of over $1.5 million per year
have been achieved through optimizing and eliminat-
ing waste in a 200 kbpd refinery. Savings from monitor-
ing and performing steam system studies are not included
in this estimate although they can result in significant
capital and cost reductions.
Commercial installations. Visual MESA is currently
installed online at 28 sites around the world including
11 oil refineries, 10 large combined refinery/chemical
sites, five chemical plants, one gas plant and one com-
bination air separation plant/cogeneration plant. The
MESA program, on which Visual MESA is based, has been
installed at over 200 sites in 90 companies worldwide
over the past 17 years.
Licensor. Nelson & Roseme, Inc., Walnut Creek,
California, tel: (925) 280-0327; e-mail:
gary.roseme@nelson-roseme.com.
Utilities
Applications. Rigorous integrated process plant utility
Design Simulation Analysis (DSA) and Operations Simu-
lation Analysis (OSA) systems have been developed and
implemented for refineries; olefin, styrene, caprolactam,
nylon and polyester fiber plants; and pulp and paper mills,
supporting daily energy conservation operations manage-
ment and plant operations control for daily plant cost
reduction.
They provide reduced unit energy consumption and
cost through improved utilities and process energy con-
servation operations and debottlenecking, and provide
utilities operating staff with on-the-job training simulators
and DCS control for supply chain TQM cost reduction.
Strategy.
Energy information knowledge base development.
Utility (boilers, heaters, turbine compressors, steam lines)
and process energy users (reactors, heat exchangers,
pumps) design and full operating history and mainte-
nance data, and heating, oil, gas, and electricity costs
and unit consumption data, and operators' expertise are
the utility information knowledge base.
Energy usage audit, conservation OSA model devel-
opment. Process plantwide and offsite utility energy
usage audits are conducted. These artificial intelligence,
expert-based integrated systems' rigorous models have
been developed from the entire operating history.
These systems cover all the offsite utilities' and process
unit energy users' normal and emergency operations, and
the full range of feedstock compositions, operating load and
severity changes, with average error below 1.5%. Features include:
Feedstock and fuel price simulation forecasts, pro-
curement, inventory, scheduling, blending and supply
chain strategic analysis
Process and utilities units energy usage auditing and
goal setting
Boiler and furnace optimum firing, improved high-
and medium-pressure steam utilization and maximized
condensate return
Reactors and recovery units energy usage improve-
ment and debottlenecking
Process startup, emergency shutdown and trou-
bleshooting
Process plant energy equipment preventive safety
and maintenance to maximize energy efficiency
Maximum product recovery at minimum energy and
waste
Process, utility, DCS and pollution control staff on-
the-job training.
The system is available on PCs for on/offline
CIM/APC/DCS.
Operation management implementation. Goal-, mis-
sion- and performance-oriented cross-departmental
energy OSA teams develop and implement daily deci-
sion simulators for process units and for offsite boiler and
heater fuel conservation and steam consumption, to
maximize product yields and recovery at minimum
energy usage, simultaneously with OSA reactor-yield frac-
tionation system operations improvement.
Economics. Up to 15–50% of energy saved, worth mil-
lions of dollars in energy costs annually, without hard-
ware retrofit.
Commercial installations. Five refinery, three olefin,
three caprolactam, two styrene, two polyolefin, 12 fibers
and pulp and paper mill systems have been applied. Over
200 energy conservation workshops have been offered to
plant managers, senior technical and operating staff.
References. All by Dr. Warren Huang, OSA:
"Capitalize on LPG Feed Changes," Oil & Gas Journal,
April 1979; "Improve naphtha cracker operations,"
"Improve process by OSA" and "Improve demethanizer
operation," Hydrocarbon Processing, February, May and
December 1980; "Control of Cracking Furnace," US
patents, 1981, 1982; "Energy Conservation in Deetha-
nizing" and "OSA Saves Energy in C2 Splitter Operations,"
Oil & Gas Journal, June and September 1980; "Energy and
Resource Conservation in Olefin Plant Design and Oper-
ation," World Congress, Montreal, Tokyo, Karlsruhe, 1982,
1986, 1991; "Refinery, Petrochemical Process Improve-
ment, Debottleneck on PC," ISA, Philadelphia, 1989; Large
chemical plant conference, Antwerp, Belgium, 1992,
1995; INTER PEC CHINA 91, Beijing, 1991, 1995; AIChE,
Dallas, 1999; "Supply chain strategy: maximize oil, chem-
ical profits" workshops, Singapore, April 26–27, 2001.
Licensor. OSA Int'l Operations Analysis, San Francisco,
California; Website: www.osawh.com; e-mail:
wh3928@yahoo.com.
Value chain management
Application. The Value Chain Management solution
suite enables supply chain planning, execution and pro-
cess automation solutions to work in harmony. The solu-
tion overcomes supply chain complexity by making rele-
vant knowledge easily accessible for effective decisions.
It includes a suite of Internet-enabled supply chain man-
agement applications that dynamically model the sup-
ply chain, and improve profitability through measurable
cost reductions and optimization of operations. It deliv-
ers true collaboration with your suppliers and customers,
as well as their suppliers and customers.
Strategy. Integration of supply chain decisions with
those of suppliers, distributors and customers is a vital
step to building e-business capability. The Value Chain
Management solution architecture is designed to sup-
port an integrated e-business network. It takes you a
giant step closer to harnessing the power of the Inter-
net by supporting collaborative planning, Advanced
Available-to-Promise (ATP) capability and information
sharing with your trading partners. Use of XML technol-
ogy enables real-time messaging capability to allow col-
laborative decision-making with trading partners.
Economics. The Value Chain Management solution more
than pays for itself in the first year of use through
increased plant yields, lower inventories, enhanced cus-
tomer service and optimized production cycles. Addi-
tional savings are generated from reduced transporta-
tion, procurement and transition costs.
Commercial installations. Over 1,000 Business.FLEX
PKS licenses have been installed throughout the world,
including at refineries, offshore platforms, chemical plants
and petrochemical complexes.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
Vinyl chloride monomer
Application. ABB's advanced process control (APC) pack-
age combines use of conventional APC techniques and
multivariable, model-predictive controls (MVPC) for
improving vinyl chloride monomer (VCM) plant prof-
itability.
Strategy. Conventional APC functions are configured
in standard DCS blocks for areas requiring fast response
times with minimum interactions. MVPC is used for appli-
cations with highly interactive processes, most constraint
control and reasonably long steady-state times. The over-
all application comprises:
VCM unit:
Chlorine vaporizer superheat control
Furnace conversion, throughput maximization
and coil outlet temperature balancing
Product quality and loss control in the quench
tower, absorber/stripper and VCM product still
Steam minimization in the absorber/stripper and
VCM product still.
EDC unit:
Reactor feed cross limiting, ethylene vent con-
centration and feed maximization
Fractionation section product quality and energy
minimization.
OHC unit:
Reactor controls with HCl feed disturbance han-
dling and cost minimization.
Economics. Benefit studies show a payback period of
6–12 months depending on product pricing and raw
material costs.
Commercial installations. The package has been fully
installed and commissioned for several units in one site.
Licensor. ABB Inc., Simcon Advanced Application Ser-
vices, Sugar Land, Texas; Website: www.abb.com.
[Figure: Principal steps in the balanced vinyl chloride process. Direct chlorination (ethylene + Cl2) and oxychlorination (ethylene + HCl + air or O2, with water byproduct) both feed ethylene dichloride purification; purified EDC goes to ethylene dichloride pyrolysis, yielding vinyl chloride, which is purified with light ends and heavy ends removed, while unconverted ethylene dichloride is recycled.]
Vinyl chloride monomer
Application. Advanced process control (APC) and opti-
mization can provide large economic benefits for vinyl
chloride monomer (VCM) plants. They are ideal candi-
dates to benefit from energy reduction, increased capac-
ity and optimization of yields, and from valuable infor-
mation that helps operators and engineers operate the plant
at optimum conditions. Model-based advanced control
enforces the optimum setpoints while respecting chang-
ing operating constraints. Applications normally include
the following plant sections:
Oxyhydrochlorination
Direct chlorination
EDC purification
EDC cracking furnaces
HCl and VCM purification.
Applications can be adapted to all reactor configura-
tions, including loop reactors, tubular reactors, packed
and fluidized beds, and to all furnace and distillation col-
umn configurations.
Control strategy. Reactors, furnaces and distillation
columns are controlled and locally optimized using Hon-
eywell's multivariable Profit Controller. Profit Controller
is based on the Robust Multivariable Predictive Control
Technology (RMPCT) algorithm. This advanced algorithm
minimizes tuning requirements and maintains good con-
trol under changing conditions and model error. Model
identification is available in the Windows environment.
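The RMPCT algorithm itself is proprietary, but the predictive-control calculation it generalizes can be sketched in miniature: given a step-response model, choose the input move that best tracks the setpoint over the prediction horizon. A one-move, single-variable illustration (the model coefficients are invented, and this is a generic DMC-style sketch, not Honeywell's algorithm):

```python
# One-move predictive control on a step-response model: pick the single
# input move du that minimizes the squared setpoint error over the
# prediction horizon, in least squares.

def mpc_move(step_resp, y_now, setpoint):
    """step_resp: step-response coefficients a_1..a_P of the process.
    Assumes the plant is currently at steady state y_now, so the free
    response stays flat at y_now. Returns the optimal single move du."""
    err = setpoint - y_now
    num = sum(a * err for a in step_resp)   # sum of a_k * (sp - y)
    den = sum(a * a for a in step_resp)
    return num / den

# Illustrative step response settling at a steady-state gain of 2.0
a = [0.5, 1.0, 1.4, 1.7, 1.9, 2.0, 2.0, 2.0]
du = mpc_move(a, y_now=50.0, setpoint=53.0)
print(round(du, 3))
```

The full controller repeats this at every execution, with many inputs and outputs, move suppression and constraint handling; robustness to model error is what the commercial algorithm adds on top.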
The following focuses on particular plant areas:
Oxychlorination: Control solutions are designed to
improve stability and reduce the effect of disturbances
from varying HCl flow. Local optimization will reduce
operating costs by improving reactor conversion, and
reduce energy consumption and losses.
Direct chlorination: Control solutions are designed
to improve stability and minimize undesirable side reac-
tions. Local optimization minimizes ethylene loss to the
vent and excess ethylene in the feed through effective
reactor pressure control.
EDC purification: Advanced control improves sta-
bility and fractionation in the columns. EDC loss in the
light-ends column is minimized, as are heavy-boiling
byproducts in the EDC that can cause excess coking in
the furnaces.
EDC cracking furnaces: Control and optimization
on the cracking furnaces focuses on effective cracking
depth control while minimizing fuel gas and coking. By
using yield and coking models, such as those provided
by Technip's EDC crack models, nonlinearities can be
accounted for in the controller models, and cracking
depth and coking profiles can be controlled on a per-
pass basis.
HCl and VCM purification: Control and optimiza-
tion solutions focus on maintaining stability and improv-
ing fractionation, thereby maintaining or improving the
quality of the VCM product and of the recycled EDC and HCl.
Optimization. In addition to local optimization per-
formed by individual controllers, global optimization can
be achieved using Honeywell's Profit Optimizer, a cost-
effective, dynamic optimization solution. Multiple Profit
Controllers can be dynamically coordinated by an upper
level Profit Optimizer, which also uses RMPCT algorithms.
Global optimization in VCM plants would focus on the
balance between furnace run lengths, EDC recycle costs
and cracking depth.
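The tradeoff the global optimizer works can be made concrete with a toy model: deeper cracking cuts EDC recycle cost but cokes the coils faster, shortening runs and adding decoke downtime, so the best conversion lies in between. Every coefficient below is invented purely for illustration:

```python
# Toy furnace economics: scan EDC conversion (cracking depth) and pick
# the value that maximizes time-averaged daily profit, trading product
# value and recycle cost against run length and decoke downtime.

def daily_profit(conv, decoke_days=2.0):
    vcm_value = 1000.0 * conv              # $/day of product made
    recycle_cost = 400.0 * (1.0 - conv)    # $/day recycling unconverted EDC
    run_days = 40.0 * (0.55 / conv) ** 8   # coking accelerates sharply with depth
    onstream = run_days / (run_days + decoke_days)
    return (vcm_value - recycle_cost) * onstream

best = max((c / 100.0 for c in range(45, 81)), key=daily_profit)
print(best, round(daily_profit(best), 1))
```

The interior optimum is the point of the exercise: pushing conversion to its upper limit maximizes instantaneous margin but not time-averaged profit once decokes are counted.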
Economics. Typical improvements from advanced con-
trols and optimization in a VCM plant are: a 3–6% increase
in VCM production, an 8–12% reduction in energy usage
and a 20–30% increase in furnace run lengths. Typical
paybacks range from 9 to 18 months.
Commercial installations. Control solutions have been
implemented in six VCM plants.
Licensor. Honeywell Industry Solutions, Phoenix,
Arizona. Contact: Susan.Alden@Honeywell.com.
Waste incinerator load
optimization
Application. IntellOpt's waste incinerator load opti-
mization application uses the Gensym/G2 expert system
with a mixed integer optimizer to maximize loading of
multiple incinerators.
Strategy. To reduce the combinatorial problem to a fea-
sible size for real-time optimization, a G2-based expert sys-
tem application is used to infer the most preferable oper-
ating combinations from the existing process conditions.
These modes are then formulated as mixed integer opti-
mization problems with the following constraints:
Only one incinerator connection per vent flow
Only one incinerator connection per liquid waste
flow
Vent flowrates cannot be adjusted
Total vent flow per incinerator constraint
Total liquid flow per incinerator constraint
Specific waste component total loading constraint
Unit-specific liquid and vapor flow mixing
constraints
Total heat load per incinerator constraint
Incinerator stack emission constraints.
The mode offering the highest economic objective
function value is then selected for allocating the multiple
liquid and vapor streams into multiple incinerators.
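For a small number of streams the allocation step can be illustrated by plain enumeration in place of the mixed-integer solve: each stream goes to exactly one incinerator (or is held back), subject to per-incinerator flow and heat-load limits, maximizing waste destroyed. All stream and incinerator data below are invented:

```python
# Enumerative stand-in for the mixed-integer allocation: assign each
# waste stream to one incinerator (or hold it in storage) subject to
# per-incinerator total-flow and heat-load constraints, maximizing the
# waste flow destroyed.
from itertools import product

streams = [  # (name, flow t/h, heat load MW)
    ("vent 1", 4.0, 2.0), ("vent 2", 3.0, 1.5), ("liquid 1", 6.0, 5.0)]
incins = [  # (name, max total flow, max heat load)
    ("INC-A", 8.0, 6.0), ("INC-B", 7.0, 4.0)]

def allocate(streams, incins):
    """Returns (flow_burned, {stream: incinerator or None})."""
    hold = len(incins)  # extra index = stream held in storage
    best = (0.0, None)
    for assign in product(range(hold + 1), repeat=len(streams)):
        feasible = True
        for i, (_, fmax, qmax) in enumerate(incins):
            batch = [s for s, a in zip(streams, assign) if a == i]
            if sum(s[1] for s in batch) > fmax or sum(s[2] for s in batch) > qmax:
                feasible = False
                break
        if not feasible:
            continue
        burned = sum(s[1] for s, a in zip(streams, assign) if a != hold)
        if burned > best[0]:
            best = (burned, {s[0]: (incins[a][0] if a != hold else None)
                             for s, a in zip(streams, assign)})
    return best

print(allocate(streams, incins))
```

The expert-system preprocessing described above matters because this enumeration grows exponentially with the number of streams; pruning to the preferable operating combinations is what keeps the real-time solve tractable.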
Economics. The observable benefit is increased waste-
throughput handling capacity, which can also help production
of upstream units that are constrained by side-reaction
waste production. Benefits depend on the num-
ber and types of waste streams and the number of incin-
erators.
Commercial installations. This application has been
installed at one site.
Licensor. Intelligent Optimization Group, Houston, Texas
(www.intellopt.com).
[Figure: Multiple liquid and vapor waste flows, from process units and from waste storage, routed to multiple waste incinerators.]
Advanced Control and Information Systems Handbook - 2003
Emerson Process Management, an
Emerson business, is a global leader
in helping to automate production,
process and distribution in the
refining, oil and gas, chemical, pulp
and paper, power, food and
beverage, pharmaceutical and other
industries. Emerson is known
worldwide for its superior products
and technology combined with
industry-specific engineering,
consulting, product management,
and maintenance services.
Articles
Reducing operations &
maintenance costs with PlantWeb digital
plant architecture (September 2003)
Improving availability with PlantWeb digital plant
architecture (May 2003)
Diagnostics capabilities of FOUNDATION fieldbus
pressure transmitters (April 2003)
The smart refinery: Economics
and technology (March 2003)
Fieldbus improves control and asset management
(January 2002)
Handbook Entries
Adiponitrile
Alkylation
Amine treating
Ammonia
Catalytic reformer
Cracking furnace
Crude unit
Fractionator (vacuum distillation)
Hydrotreater
Olefins
Phenol
Plant information (equipment monitoring)
Plant optimization and information (refining)
Plant production management
Urea
Utilities
Honeywell Industry Solutions is
part of Honeywell's Automation
and Control Solutions group, a
global leader in providing product
and service solutions that improve
efficiency and profitability, support
regulatory compliance, and
maintain safe, comfortable
environments in homes, buildings
and industry. For more information
about Industry Solutions, access
http://www.acs.honeywell.com/.
Articles
Crude oil blend scheduling optimization: An
application with multimillion dollar benefits
- Part 1 (June 2003)
- Part 2 (July 2003)
Future trends in safety
instrumented systems (May 2003)
Advanced Control Methods
- Part 1: Purpose and characteristics
- Part 2: Optimization – maximization or
minimization?
Handbook Entries
Alkylation
Ammonia
Blend management
Blending
Catalytic reformer
Delayed coker
FCCU
Fractionator (crude)
Fractionator (FCCU)
Fractionator (light products)
Hydrocracker
Hydrogen production
Oils movements
Olefins
Planning and scheduling
Plant operations management
Plant performance management
Plant production management
Polyethylene
Product quality management
Terephthalic acid
Value chain management
Vinyl chloride monomer
Invensys, a global leader in
production technology, works
closely with customers to increase
performance of production assets,
maximize return on investment in
production and data management
technologies and remove cost from
the supply chain. The Invensys
Production Management division
includes APV, Avantis, Eurotherm,
Foxboro, IMServ, SimSci-Esscor,
Triconex and Wonderware.
Articles
The validation of an on-line nuclear magnetic
resonance spectrometer for analysis of
naphthas and diesels (April 2003)
Integrated data reconciliation, process modeling
and performance monitoring online
Handbook Entries
Ammonia
Blending
Catalytic reformer
FCCU
Fractionator (crude)
Fractionator (NGL)
Oil movement management
Planning and scheduling (olefins)
Planning and scheduling (refining)
Plant information integration
Contributor Index
ABB Inc., Simcon Advanced
Application Services
Adersa
AMI Consultants
Aspen Technology
Applied Manufacturing Technologies
C. F. Picou Associates
Curvaceous Software Limited
Emerson Process Management
GE Drives and Controls
Honeywell Industry Solutions
Industrial Evolution
Intelligent Optimization Group
Invensys Performance Solutions
Matrikon
Nelson & Roseme
Nexus Engineering
OSA Int'l Operations Analysis
PAS
Resolution Integration Solutions
Soteica Ideas & Technology L.L.C.
Technip
Yokogawa Corporation of America
ABB Inc., Simcon Advanced
Application Services
Alkylation
Blending
Catalytic reformer
Crude unit
Delayed coker
Energy management
Ethyl benzene/styrene monomer (EB/SM)
Ethylene oxide/ethylene glycol
FCCU
Gas plant
Hydrocracker/hydrotreater
Olefins
Oil movements and storage
Terephthalic acid
Vinyl chloride monomer
Adersa
Chemical reactor
Platforming
AMI Consultants
Plant optimization (refining)
Applied Manufacturing
Technologies
Aromatics (automated plant testing)
Crude unit (model predictive control
productivity)
FCCU (model predictive control productivity)
Aspen Technology
Alkylation
Ammonia
Blending
Blending (planning and scheduling)
Catalytic reformer
Crude unit
Cyclohexane
Delayed coker
FCCU/RCCU
Hydrocracker/hydrotreater
Linear alkyl benzene
Olefins
Online controller maintenance
Phenol
Planning and scheduling (planning)
Planning and scheduling (scheduling)
Plant information (event monitoring and
notification)
Plant information (yield accounting)
Plant information analysis
Plant information management
Plant performance management
Polymers
Process sequence manager
Steam methane reformer
Styrene
Sulfur complex
Utilities
C. F. Picou Associates
Catalytic Reformer Octane
Cracking furnace
Delayed coker
Ethylene oxide
Fractionator (soft analyzer)
Lube oil plant
Plastics (product grade switch)
Polycarbonate monomers
Curvaceous Software Limited
Plant information (alarm and quality
management)
Plant information analysis
Plant optimization
Emerson Process Management
Adiponitrile
Alkylation
Amine treating
Ammonia
Catalytic reformer
Cracking furnace
Crude unit
Fractionator (vacuum distillation)
Hydrotreater
Olefins
Phenol
Plant information (equipment monitoring)
Plant optimization and information (refining)
Plant production management
Urea
Utilities
GE Drives and Controls
Ammonia
Bisphenol A
Cogeneration plant
Methanol plant
NGL plant
Syngas generation plant
Honeywell Industry Solutions
Alkylation
Ammonia
Blend management
Blending
Catalytic reformer
Delayed coker
FCCU
Fractionator (crude)
Fractionator (FCCU)
Fractionator (light products)
Hydrocracker
Hydrogen production
Oils movements
Olefins
Planning and scheduling
Plant operations management
Plant performance management
Plant production management
Polyethylene
Product quality management
Terephthalic acid
Value chain management
Vinyl chloride monomer
Industrial Evolution
FCCU (catalyst monitoring)
Hydrotreating (catalyst monitoring)
Plant information (inbound chemical
management)
Plant information (outbound inventory
management)
Plant information (Solomon benchmarking)
Product tracking (homeland security)
Article:
Collaboration across company boundaries:
shared inventory management
Intelligent Optimization Group
Acrylonitrile recovery
Delayed coker
LPG plant
Polycarbonate plant
Terephthalic acid
Terephthalic acid dehydrator (fuzzy logic
controller)
Waste incinerator load optimization
Invensys Performance Solutions
Ammonia
Blending
Catalytic reformer
FCCU
Fractionator (crude)
Fractionator (NGL)
Oil movement management
Planning and scheduling (olefins)
Planning and scheduling (refining)
Plant information integration
Matrikon
Online controller maintenance: Regulatory
and MPC
Plant information (alarm and event
collection and analysis)
Plant information (online downtime
reporting)
Plant information (OPC data management)
Plant information (Web-based decision
support)
Nelson & Roseme
Utilities
Nexus Engineering
Plant information (reliability/operations
management system)
OSA Int'l Operations Analysis
FCCU/ROC/DCC
Fractionator
Hydrocracker/hydrotreater
Olefins
Plant information integration (ERP)
Plant operations management
Utilities
PAS
Olefins
Plant information (critical condition
management)
Polymers
Resolution Integration Solutions
Laboratory data entry and management
Plant information (batch/lot tracking)
Plant information (data reconciliation)
Plant information (key performance indicator
management)
Plant information (offsite data management)
Plant information (recipe management)
Plant information (target setting and
nonconformance monitoring)
Plant information (yield accounting)
Plant information integration
Soteica Ideas & Technology
Oil movement management
Technip
Blending
Crude mix quality identification
Environmental monitoring
Gasoline pool management
Heavy hydrocarbon stream identification
Middle distillate pool management
Olefins (inline laboratory)
Plant information (data reconciliation)
Plant information (mass balance)
Plant scheduling (refining)
Yokogawa Corporation
of America
Alkylation
Ammonia
Catalytic reformer
Cracking furnace
Delayed coker
FCCU
Fractionator (heavy oil)
Fractionator (light products)
Hydrotreating
MTBE
Oil movements and blending
Olefins
Operational excellence solutions
Plant information management
Polyethylene
Control & Instruments Directory
ABB Instrumentation
AMI Consultants
Belco Technologies Corp.
Emerson Process Management
Honeywell Industry Solutions
Intelligent Optimization Group
Invensys
Swagelok Company
ABB Instrumentation
125 Country Line Road
Warminster, PA 18974
USA
Phone: 215-674-6001
Fax: 215-674-6394
instrumentation@us.abb.com
www.abb.com/instrumentation
Description: ABB Instrumentation is a global leader building on the heritage of Fischer & Porter, Bailey, Hartmann &
Braun, Sensycon, TBI and Kent-Taylor. Whether you need to: measure flow, pressure or temperature; analyze; record;
control; actuate or position . . . ABB provides FOUNDATION Fieldbus, PROFIBUS & HART enabled instrumentation with
the added benefit of its Industrial IT enabled products.
Executives: Dave Barnes, VP Sales & Marketing
Dane Maisel, Sr VP & General Manager
AMI Consultants
4102 Tremont Ct.
Sugar Land, TX 77479
USA
Phone: 281-565-4745
Fax: 281-565-1196
info@amiconsultants.com
amiconsultants.com
Description: AMI Consultants provides PetroPlan(SM) software for petroleum refinery planning and economics. PetroPlan makes it easy to accurately set up a whole-refinery simulation through a truly user-friendly graphical interface. The company's customer base consists of operating and E&C companies as well as educational institutions, with installations at 30 sites.
Executives: Indira Patell, President
Vibhu Sharma, Marketing Director
Belco Technologies Corp.
7 Entin Road
Parsippany, NJ 07054
USA
Phone: 973-884-4700
Fax: 973-884-4775
info@belcotech.com
www.belcotech.com
Description: World leader in the design, supply and installation of systems to reduce particulate, SO2, SO3 and NOx emissions from refinery applications such as FCC regenerator off-gas, fluid cokers, SRU tail gas, refinery waste incinerators and power generation. The most reliable and efficient systems available on the market.
Executives: Kevin R. Gilman, President
Nicholas Confuorto, Vice President
Emerson Process Management
8301 Cameron Rd
Austin, TX 78754
USA
Phone: 512-832-3235
Fax: 512-834-7600
info@emersonprocess.com
www.emersonprocess.com
Description: Emerson Process Management, an Emerson business, is a global leader in helping to automate production, process and distribution in the refining, oil and gas, chemical, pulp and paper, power, food and beverage, pharmaceutical and other industries. Emerson is known worldwide for its superior products and technology combined with industry-specific engineering, consulting, product management, and maintenance services.
Executives: John Berra, President
U.S. Sales Offices:
AL, GA 770-495-3100
AL, LA, MS 504-887-8550
AK, OR, MT, NV, UT, WA, ID 425-487-9600
AZ, CA, NV, NM 877-827-8131
CA 510-846-9300
CT, NJ, NY 201-934-9200
FL 813-626-1126
MI 734-459-0040
IN, KY, OH 513-489-2500
ME, VT 508-339-5522
MN, SD, IL, IN, ND, WI 847-956-8020
MS, TN, KY 901-386-5020
OH 440-248-9400
OK, IL, MO, KS 636-681-1500
OK, TX, NM 972-389-5700
PA, WV 724-746-3700
RI, MA, NY 518-664-6600
SC, NC 704-375-4464
SD, CO, MT, WY 303-799-9300
VA 804-858-5800
HI 808-487-7717
PA, MD, Delaware 610-495-1835
SD, NE 515-753-5557
TX 281-240-2000
TX 409-842-5950
Canada Sales Offices:
Alberta, BC 403-207-0700
British Columbia 604-422-3700
Manitoba 204-633-9197
Ontario 905-629-0340
Quebec 514-697-9230
Saskatchewan 306-721-6925
International Offices:
Baar, Switzerland +41 (0) 41 768 6111
Bognor Regis, West Sussex, UK +44 (0) 1243 863121
Bron Cedex, France +33 (0) 4 72 15 98 00
Singapore +65 6777 8211
Wessling, Germany +49 (0) 8153 939-0
Measurement and Analytical Instruments
Pressure, temperature, level, and flow
measurement:
Brooks Instrument
Flow Computer Division
Daniel (gas or liquid measurement)
Micro Motion
Rosemount
Rosemount Nuclear Instruments
Saab Rosemount
Liquid analyzers:
Rosemount Analytical - Liquid Div.
Gas and combustion analyzers:
Rosemount Analytical - Gas Div.
Final Control Devices
Control valves and valve-related instrumentation:
Fisher
Baumann
Valve Automation Division
- Bettis
- Dantorque
- El-O-Matic
- FieldQ
- Hytork
- Shafer
Regulators:
Fisher
Fisher LP Gas Regulators
Systems and Software
Process management systems:
Process Systems
(formerly Fisher-Rosemount Systems)
- DeltaV
- PROVOX
- RS3
Power & Water Solutions
(formerly Westinghouse Process Control)
Process control, automation, and optimization
software:
EnTech
Process Systems (formerly Fisher-Rosemount Systems)
MDC Technology
Power & Water Solutions (formerly Westinghouse
Process Control)
Asset management, monitoring, maintenance, and
optimization software:
Asset Optimization
Mechanical equipment
Process equipment
Instruments and valves: AMS and HART 275
Automation Architecture:
PlantWeb digital plant architecture
[Return to Control & Instruments Directory]
Honeywell Industry Solutions
2500 W. Union Hills Drive
Phoenix, AZ 85027
USA
Phone: 602-313-5000
Fax: 602-313-4040
www.acs.honeywell.com
Description: Honeywell Industry Solutions is part of Honeywell's Automation and Control Solutions group, a global leader in providing product and service solutions that improve efficiency and profitability, support regulatory compliance, and maintain safe, comfortable environments in homes, buildings and industry. For more information about Industry Solutions, access http://www.acs.honeywell.com/.
Executives: David Cote, Chairman & CEO
Kevin Gilligan, President of ACS
Bo Anderson, Vice President
Ramon Baez, Vice President
Michael Bartschat, Vice President
Jack Bolick, President of System Solutions
Roger Fradin, President of Automation & Control Products
Intelligent Optimization Group
P.O. Box 79162
Houston, TX 77279
USA
Phone: 713-269-2340
Fax: 713-849-0455
info@intellopt.com
www.intellopt.com
Description: IntellOpt is an Advanced Automation Solutions company specializing in advanced process control, multivariable predictive control (MVPC), expert systems and neural networks for petrochemical, chemical and refining processes. Our products include GMAXC, a multivariable controller offering MVPC technology at a commodity level, and Z-Way, a multivariable fuzzy logic controller, with several projects completed worldwide.
Executives: Ravi Jaisinghani, Principal
Invensys
33 Commercial Street
Foxboro, MA 02035
USA
Phone: 508-549-2424
Fax: 508-549-4999
www.invensys.com
Description: Invensys, a global leader in production technology, works closely with customers to increase performance of production assets, maximize return on investment in production and data management technologies and remove cost from the supply chain. The Invensys Production Management division includes APV, Avantis, Eurotherm, Foxboro, IMServ, SimSci-Esscor, Triconex and Wonderware.
Executives: Rick Haythornthwaite, CEO
Leo Quinn, COO
Swagelok Company
31400 Aurora Road
Solon, OH 44139-2764
USA
Phone: 440-349-5934
Fax: 440-349-5806
Robert.fleig@swagelok.com
www.swagelok.com
Description: Headquartered in Solon, Ohio, U.S.A., Swagelok Company is a major developer and manufacturer of fluid system component technologies for the research, analytical instrumentation, process instrumentation, pharmaceutical, oil and gas, power, petrochemical and semiconductor industries. More than 25 Swagelok manufacturing, research, technical support and distribution facilities support a global network of more than 200 independent, local sales and service centers on six continents.
Executives: Bob Fleig, Industrial Market Communications Leader
Control & Information Systems Articles
Reducing operations & maintenance costs with
PlantWeb digital plant architecture (Emerson
September 2003)
Crude oil blend scheduling optimization: An
application with multimillion dollar benefits
(Honeywell)
Part 1 (June 2003)
Part 2 (July 2003)
Improving availability with PlantWeb digital plant
architecture (Emerson May 2003)
Future trends in safety instrumented systems
(Honeywell May 2003)
Diagnostics capabilities of FOUNDATION fieldbus
pressure transmitters (Emerson April 2003)
The validation of an on-line nuclear magnetic
resonance spectrometer for analysis of naphthas
and diesels (Invensys April 2003)
The smart refinery: Economics and technology
(Emerson March 2003)
Fieldbus improves control and asset management
(Emerson January 2002)
Advanced Control Methods (Honeywell)
Part 1: Purpose and characteristics
Part 2: Optimization - maximization or minimization?
Collaboration across company boundaries
Shared inventory management (Industrial Evolution)
Integrated data reconciliation, process modeling
and performance monitoring online (Invensys)
CONTRIBUTIONS NOT INCLUDED
White paper: Reducing Operations & Maintenance Costs (September 2003)
Reducing operations & maintenance costs with PlantWeb
One benchmark of maintenance productivity is annual spending as a percentage of Replacement Asset Value (%RAV). The graph below shows typical as well as worst- and best-in-class %RAV.[1] For a plant with $250,000,000 in assets to maintain, moving from typical to best-in-class status could mean over $10,000,000 in annual savings.
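The benchmark arithmetic above can be sketched in a few lines. The typical and best-in-class %RAV figures below are assumed values chosen for illustration only; the paper does not state them.

```python
# Illustration of the %RAV benchmark: annual maintenance spend as a
# percentage of Replacement Asset Value. The two percentages are ASSUMED,
# chosen so the gap matches the paper's "over $10,000,000" figure.
ASSETS = 250_000_000       # replacement asset value ($), from the text
TYPICAL_RAV_PCT = 0.055    # assumed typical spend: 5.5% of RAV
BEST_RAV_PCT = 0.015       # assumed best-in-class spend: 1.5% of RAV

typical_spend = ASSETS * TYPICAL_RAV_PCT
best_spend = ASSETS * BEST_RAV_PCT
annual_savings = typical_spend - best_spend
print(f"Annual savings: ${annual_savings:,.0f}")   # Annual savings: $10,000,000
```

A four-percentage-point difference in %RAV on a $250 million asset base is what produces the $10 million annual figure.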
Of course, you still have to keep the plant running smoothly and safely.
The goal is to use your maintenance budget and personnel more
efficiently so you can spend less and maintain or even improve plant
performance.
Recent data shows that 86% of maintenance is reactive (too late) or preventive (unnecessary).[2] In fact, typical maintenance practices for reactive, preventive, and predictive maintenance have not changed in over 15 years.[1] This is primarily due to a lack of tools powerful enough to fundamentally improve maintenance practices.
Span of control. For operations, one measure of productivity is the
number of loops each operator manages.
A typical plant might have 125 loops per operator, so managing 1500 loops would require 48 operators to staff four shifts. In a best-in-class plant, on the other hand, each operator might handle 250 loops, requiring only 24 operators over the same number of shifts. At a fully burdened cost of $80,000 per year for each operator, the savings would approach $2,000,000 annually.
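The staffing arithmetic above can be checked directly; all figures come from the text.

```python
# Span-of-control staffing calculation using the figures in the text:
# 1500 loops, four shifts, $80,000 fully burdened annual cost per operator.
TOTAL_LOOPS = 1500
SHIFTS = 4
COST_PER_OPERATOR = 80_000

def operators_needed(loops_per_operator: int) -> int:
    """Operators required across all shifts, rounding up per shift."""
    per_shift = -(-TOTAL_LOOPS // loops_per_operator)  # ceiling division
    return per_shift * SHIFTS

typical = operators_needed(125)   # 12 per shift x 4 shifts = 48
best = operators_needed(250)      # 6 per shift x 4 shifts = 24
savings = (typical - best) * COST_PER_OPERATOR
print(typical, best, savings)     # 48 24 1920000
```

Doubling loops per operator halves the headcount, and the $1.92 million result is what the text rounds to "approach $2,000,000."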
Even greater productivity and economic benefits are possible when
operators also have the tools and information to continuously optimize
energy use, feedstocks, and other economic factors for the loops they
control, as well as to reduce costs in related areas such as safety, health,
and environment; utilities; and waste and rework.
So why aren't more plants getting these savings and productivity gains
today?
© Emerson Process Management 2003. All rights reserved. www.PlantWeb.com
Misdirected maintenance
Too much of the work done by maintenance teams is unnecessary,
unproductive, or even counterproductive.
Unnecessary work. Over half of typical maintenance activities are unnecessary. This includes routine equipment checks as well as preventive maintenance on equipment that doesn't need it.
One analysis showed that 63% of all instrument work orders did not
result in corrective action, because there was nothing wrong with the
equipment.
A study of 230 valves scheduled for rebuilding during a shutdown found that only 31% needed such extensive service.
Many plants re-calibrate transmitters before installation and then once
or twice a year after that, even when the original factory calibration is
more accurate and (for some transmitters) stable for 5-10 years.
Unproductive work. In a typical plant, the maintenance department averages about 30% "wrench time." The rest of the time they're doing data entry and retrieval, work-order reporting, and other paperwork. Best-practices plants use automated tools to manage this information more efficiently, increasing wrench time to 50% or more.[3]
Counterproductive work. Some maintenance actually reduces
equipment reliability. Problems can result from incorrect re-assembly,
incorrect tightening, misalignment, or other errors. In fact, as many as
70% of equipment failures happen shortly after initial installation or major
preventive maintenance.[1]
Inefficient maintenance strategies
Many of these problems could be reduced by adjusting the mix of reactive, preventive, predictive, and proactive maintenance strategies, so workers can focus on doing the right things at the right time.
1. Reactive maintenance. Also described as "fix it when it breaks," this is the most basic maintenance strategy. Its major drawback is obvious: the cost to repair (or replace) equipment that's run to failure is typically much higher than if the problem were detected and fixed earlier, not to mention the cost of lost production during extended downtime.
2. Preventive maintenance. A preventive strategy assumes equipment is relatively reliable until, after some period of time, it enters a "wear-out zone" where failures increase. To postpone this wear-out, equipment is serviced on a calendar- or run-time basis, whether it needs it or not. On average, this "fix it just in case" approach is about 30% less expensive than reactive maintenance.
Emerson Process Management 2003. All rights reserved.
www.PlantWeb.com
White paper: Reducing Operations & Maintenance Costs
September 2003 Page 4 PlantWeb
A preventive maintenance strategy attempts to service equipment before it enters an assumed wear-out zone, but most equipment doesn't follow this failure pattern.
However, determining when the wear-out zone might begin has
traditionally been an inexact science, relying on estimates and averages
rather than actual equipment condition. Because of this uncertainty,
preventive maintenance schedules are usually very conservative.
As a result, maintenance often takes place too soon, when there's nothing wrong and service can actually create new problems. In fact, about 30% of preventive maintenance effort is wasted, and another 30% is actually harmful.[1]
But there's an even bigger problem: only about 6% of equipment follows a time-based wear-out pattern. For most other equipment, over 90%, failures typically result from the cumulative effects of events or conditions that can occur at any time.[1] That means schedule-based preventive maintenance can also come too late, after the damage has begun.
Because it's time-based instead of condition-based, preventive maintenance often takes place before there's a problem, or after the damage has grown.
3. Predictive maintenance. The third strategy overcomes these drawbacks by constantly monitoring actual equipment condition and using the information to predict when a problem is likely to occur. With that insight, you can schedule maintenance for the equipment that needs it, and only what needs it, before the problem affects process or equipment performance. That's a great way to improve maintenance productivity, as well as reduce costs for repairs and unexpected downtime.
A best-practices plant uses predictive maintenance for most equipment where condition-monitoring is practical, limiting reactive and preventive strategies to equipment that's not process-critical and will cause little or no collateral damage if run to failure.
Best-practices plants improve
productivity and reduce costs by
emphasizing a predictive maintenance
strategy.
Despite the benefits of predictive
maintenance, typical practices have
not changed in over 15 years.
4. Proactive maintenance. The next strategy is proactive maintenance,
which analyzes why performance is degrading and then corrects the
source of problems. The goal is not just to avoid a hard failure, but to
restore or even improve equipment performance.
For example, a valve failure might be caused by excess packing wear,
which in turn was caused by poor loop tuning that caused the valve to
cycle continuously. Retuning the loop will prevent further failures while
also improving process performance.
The best-practices plant of the future will actually spend more on maintenance to include this proactive approach in its arsenal, and more than regain the investment in increased plant efficiency.
Overwhelmed operators
Operators typically have extensive real-world knowledge of the plant and the process. But instead of using this know-how to improve operations, they spend much of their time and talent reacting to unexpected situations, a productivity drain that limits the number of loops they can manage effectively.
This productivity problem often begins with instruments, valves, and process equipment, or entire loops, that don't perform as they should, requiring intense operator intervention to maintain control.
When something does go wrong, the flood of data and alarms that
operators have to deal with can make it harder for them to find and fix the
problem, or even obscure other process conditions and events that need
their attention. Better alarm and alert management is needed to ensure
that the right people get the right information at the right time to guide their
actions.
Some plants rely on abnormal situation management programs to provide this guidance. But greater productivity gains are possible by focusing on abnormal situation prevention, using predictive maintenance and
similar strategies to correct or avoid potential problems before they require
operator intervention.
Operators on the run
Many facilities have remote areas ranging from tank farms and water and waste treatment to well heads, remote platforms, and pipelines. In the ideal world, operators can run remote areas from a central location. If a remote area experiences a condition which requires a temporary onsite operator, predictive intelligence and diagnostics should provide the operator with all the information needed to have proper supplies, equipment, and procedures on hand to address the situation.
If remote areas require onsite operators, operator span of control is
significantly reduced, and operations expense significantly increased.
The increase in cost includes the operator, but it also includes control
room space suitable for continuous operations and transportation costs to
potentially distant sites.
In addition to cost, transportation to and from the remote site may bring the operator through potentially hazardous or remote areas, affecting personnel safety. Effective remote operations can reduce direct
operations cost, reduce capital cost for remote operating areas, reduce
logistics cost, and increase operator safety.
Missed opportunities for
economic optimization
Many of the factors that affect plant economics change frequently, from raw material costs to market demand for process outputs. In an ideal world, operators would constantly adjust energy and feedstock sources, product mix, equipment used, and other variables to optimize the economic performance of the plant.
In the real world, however, operators seldom get any real-time feedback on the economic effect of their actions. They could be unaware that they're losing millions of dollars by running the plant at sub-optimal operating points.
Even if they have the information, they may not have the tools needed to
evaluate complex interactions between variables, or to determine the best
operating points before conditions change again.
A limited view
Predictive maintenance, abnormal situation prevention, economic optimization, and similar strategies offer clear productivity and cost benefits. But predicting potential problems and the effect of changing conditions requires a constant flow of real-time information, not just about the process, but also about the myriad pieces of equipment that make it work.
That's something traditional automation architectures can't easily provide. The control system can't show you much more than the process variable and any associated trends or alarms. There's no way to monitor equipment health, and thus no way to detect the early-warning signals of potential problems.
For example, any analog instrument signal between 4 and 20 mA is assumed to be good, when in fact there could be any number of problems. The signal could have drifted, a sensor could be fouled, or a valve may be sticking. Unless an experienced operator notices that something doesn't look right, the problem could grow until it causes a process upset or equipment failure.
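A tiny sketch makes the point concrete: a naive range check accepts any in-band current, so a drifted or stuck transmitter still looks "good." The scaling function and values below are illustrative only.

```python
# Why "in range" is not "healthy": a naive 4-20 mA validity check accepts
# any current in band, even from a drifted or stuck sensor.
def ma_to_engineering(ma: float, lo: float, hi: float) -> float:
    """Scale a 4-20 mA signal to engineering units, rejecting out-of-band."""
    if not 4.0 <= ma <= 20.0:
        raise ValueError("signal outside 4-20 mA band: sensor fault")
    return lo + (ma - 4.0) / 16.0 * (hi - lo)

# A transmitter stuck at 12 mA always reads mid-span and passes the check,
# no matter what the process is actually doing.
print(ma_to_engineering(12.0, 0.0, 200.0))   # 100.0 regardless of the process
```

Only device-level diagnostics, not the analog value itself, can reveal that the 12 mA reading is stale.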
What's needed is a way to detect (or predict) such problems before they increase operational and maintenance costs, and the tools to leverage that information so you can do more with the resources you have, or with even less.
The answer: predictive intelligence
What makes PlantWeb different from other automation architectures?
It's engineered to efficiently gather and manage a new wealth of information, including equipment health and diagnostics, from a broad range of field devices and other process equipment.
It provides not only process control, but also asset optimization and integration with other plant and business systems.
It's networked, not centralized, for greater reliability and scalability.
It uses standards at every level of the architecture, including taking full advantage of FOUNDATION fieldbus.
It's the only digital plant architecture with proven success in thousands of projects across all industries.
For more about the architecture and what it can do for you, visit www.PlantWeb.com.
Emerson's PlantWeb digital plant architecture is built on intelligent instruments and valves that monitor their own health and performance, as well as the process, and signal when there's a potential problem or maintenance is needed.
But PlantWeb doesn't stop with instruments and valves. It also captures information on the condition of rotating equipment, such as motors and pumps. And it monitors the performance and efficiency of a broad range of plant equipment, from compressors and turbines to heat exchangers, distillation columns, and boilers.
Information integration. PlantWeb uses communication standards like HART, FOUNDATION fieldbus, and OPC, as well as our AMS Suite of integrated software, to make this information available in the control room, the maintenance shop, or wherever else it's needed for analysis and action.
The equipment information is also integrated into PlantWeb's DeltaV and Ovation automation systems, which combine it with process data to deliver accurate, reliable control and optimization, and to manage alarms and alerts.
The power to predict and improve. With the ability to see what's actually happening, and about to happen, in your process and equipment, your team no longer has to spend as much of their time reacting to unexpected events (caused by problems they didn't know about), or trying to find and fix problems that may not even exist.
Instead, they can focus on more productive tasks, like heading off problems they know are on the way, and finding new ways to reduce costs and improve performance.
Let's take a closer look at some examples of how PlantWeb makes this possible, both for maintenance and for operations.
More productive maintenance
PlantWeb's predictive intelligence enables you to gain the benefits of predictive and proactive maintenance across the thousands of pieces of equipment in your operation, from instruments and valves to mechanical and process equipment.
Instruments and valves. The proven reliability of PlantWeb's Fisher valves and Rosemount, Rosemount Analytical, and Micro Motion transmitters reduces maintenance needs right from the start. But process conditions and events can lead to problems in even the best equipment. That's when these devices' built-in performance monitoring and diagnostics help focus your maintenance efforts where they're most productive.
For example, transmitters can fail if the electronics are exposed to
excessive temperatures. But built-in temperature-monitoring and
alarming in PlantWeb instruments can alert you to the problem in time to
find and remedy the cause.
Similarly, the sensor-fouling detection diagnostic in our pH transmitters can trigger a maintenance request before fouling causes process problems, or even automatically initiate cleaning of the sensor.
And valve diagnostics can tell you (while the valve is still in service) if
conditions like seat wear, packing friction, or air-supply leakage are
approaching the point where maintenance is needed.
This valve diagnostic indicates that friction will exceed the recommended limit in one month, enabling you to schedule replacement of the valve packing before process quality, availability, or throughput is affected.
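The kind of forecast shown in that diagnostic can be illustrated with a simple linear extrapolation. The readings and limit below are invented for illustration; real ValveLink diagnostics are considerably more sophisticated.

```python
# Trend-based forecast sketch: fit a straight line to friction readings
# and estimate when the recommended limit will be crossed. All data values
# and the limit are HYPOTHETICAL, chosen only to illustrate the idea.
days = [0, 7, 14, 21, 28]             # sample times (days)
friction = [410, 425, 436, 452, 465]  # hypothetical friction readings
LIMIT = 520                           # assumed recommended friction limit

# Ordinary least-squares fit of friction vs. time.
n = len(days)
mean_x = sum(days) / n
mean_y = sum(friction) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, friction))
         / sum((x - mean_x) ** 2 for x in days))
intercept = mean_y - slope * mean_x

# Days remaining until the fitted trend crosses the limit.
days_to_limit = (LIMIT - intercept) / slope - days[-1]
print(f"Friction expected to exceed the limit in about {days_to_limit:.0f} days")
```

With these made-up readings the fitted trend crosses the limit in roughly a month, matching the caption's "one month" warning.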
The ability to forecast service needs can reduce the need for a large in-house spare-parts inventory. One PlantWeb user has reported cutting valve and instrument parts inventories 70%, saving over $500,000.
Knowing exactly which devices need work, and what kind of work, also lets maintenance technicians plan their work more efficiently, taking the right tools and parts into the field, for example.
Just as important, PlantWeb diagnostics can tell you which devices don't need maintenance, reducing unnecessary equipment checks, shortening shutdowns, and avoiding the cost and risk of unneeded preventive maintenance. Experience has shown that monitoring the
performance and condition of critical valves with PlantWeb's ValveLink diagnostic software can reduce their maintenance costs by 50%.
The AMS Suite's Intelligent Device Manager software consolidates valve and instrument information for easy access, as well as providing a robust but user-friendly tool for many maintenance tasks, from initial device configuration through troubleshooting and recordkeeping.
For example, the software's remote monitoring and diagnostic capabilities dramatically speed equipment checks. What might have been a 25-minute check in the field becomes a 2-minute task done online from the maintenance shop or control room, without exposing workers to hazardous environments.
Intelligent Device Manager software also helps cut instrument calibration
time almost in half, from an average of 47 to 25 minutes. And its
automatic documentation of maintenance tasks virtually eliminates the
manual data entry that eats up so much wrench time.
Combined with new work practices to reduce unproductive work, taking full advantage of these tools over a broad spectrum of tasks can on average reduce maintenance time 65% compared with traditional methods.
Mechanical equipment. Half of equipment failures that cause downtime
typically involve mechanical equipment such as pumps, motors,
compressors, and turbines. PlantWeb can help here, too.
The AMS Suite's proven Machinery Health Manager software combines online monitoring information with data from a wide range of analytical tools, so you can see which equipment will need service soon, and which won't.
Bearing failure, for example, is a common problem with rotating equipment. But PeakVue software can detect and identify the very high-frequency noise associated with the earliest stages of bearing wear. You get maximum warning of future problems, before increasing damage significantly increases the cost (and possibly the time) for repairs.
The Machinery Health Manager uses
vibration monitoring, IR thermography,
oil analysis, ultrasonics, and motor
diagnostics to give you a better view of
actual equipment condition.
Tools for laser alignment and equipment balancing also play an important
role in proactive maintenance of rotating equipment. Used to ensure that
shafts are coupled center-to-center and that vibration levels are low at
operating speeds and loads, they can substantially extend equipment life
and reduce maintenance costs.
Process equipment. Performance of larger process equipment such as
boilers, compressors, heat exchangers, and distillation towers often
degrades gradually. Repairs or overhauls can restore the lost efficiency,
but at the cost of lost production while the equipment is out of service.
PlantWeb helps you pinpoint the right time to service such equipment.
The AMS Suite's Equipment Performance Monitor uses thermodynamic
models to show you changes in equipment efficiency over time. It then
calculates the financial impact of these changes, so you can weigh the
cost of sub-optimal performance against the cost of shutting down for
maintenance.
The Equipment Performance Monitor
alerts you to long-term changes in
equipment performance and their
economic impact.
You can also use Equipment Performance Monitor to measure maintenance effectiveness, verifying that the equipment is again delivering the needed performance, or even comparing the economic impact of different maintenance methods, such as in-place cleaning or complete equipment overhaul.
Enabling operators to do more
PlantWeb increases operator productivity by reducing the time operators spend in "reactive" mode, scrambling to deal with unexpected situations and problem loops that threaten process stability and safety. With fewer abnormal situations, and better tools and guidance for dealing with those that do occur, operators can manage more loops in both local and remote locations, and focus on improving production.
Abnormal situation prevention and management. Much of the gain comes from the maintenance improvements discussed above. Because many potential problems can be predictively sensed (and the maintenance team notified) before they affect process performance, they never even hit the operators' dashboard.
PlantWeb's integration of equipment and process information helps keep things running smoothly in situations like these. As our intelligent FOUNDATION fieldbus instruments constantly check for problems, they use what they learn to label the data they send as good, bad, or uncertain.
PlantWeb's DeltaV and Ovation automation systems monitor this signal status (something not every system can do) to constantly verify that the data is valid for use in control algorithms. If it's not, the systems can automatically modify control actions as appropriate.
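A minimal sketch of status-aware control, under an entirely made-up API (this is not DeltaV or Ovation code): the controller acts only on measurements flagged good, and holds its last output otherwise.

```python
# Sketch of status-aware control: a toy PI controller that ignores
# measurements whose fieldbus status is not "good". Class and field names
# are ILLUSTRATIVE, not any real automation-system API.
from dataclasses import dataclass

@dataclass
class Measurement:
    value: float
    status: str  # "good", "uncertain", or "bad", set by the field device

class StatusAwarePI:
    def __init__(self, kp: float, ki: float, setpoint: float):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0
        self.last_output = 0.0

    def update(self, m: Measurement, dt: float) -> float:
        if m.status != "good":
            return self.last_output  # degrade gracefully: hold last output
        error = self.setpoint - m.value
        self.integral += error * dt
        self.last_output = self.kp * error + self.ki * self.integral
        return self.last_output

pid = StatusAwarePI(kp=0.8, ki=0.1, setpoint=50.0)
out_good = pid.update(Measurement(48.0, "good"), dt=1.0)  # acts on good data
out_bad = pid.update(Measurement(0.0, "bad"), dt=1.0)     # holds last output
print(out_good, out_bad)
```

Without the status check, the bogus 0.0 reading on the second scan would have driven a large, spurious control move.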
Operators can also easily check equipment condition to anticipate and
adjust for potential problems. The AMS Suite's Asset Portal provides an
integrated, high-level view of information from valves and instruments,
rotating equipment, and process equipment in a single browser-based interface. This access to predictive diagnostics and other asset data also enables operators to determine when equipment health is (or, more likely, isn't) causing process problems.
Asset Portal provides a consolidated
view of instrument and valve, rotating
equipment, and process equipment
health.
When process or equipment problems do occur, PlantWeb Alerts notify the right people without flooding operators with nuisance alarms. This capability relies on powerful software in Emerson field devices, AMS Suite software, and DeltaV and Ovation systems to immediately analyze the incoming information, categorize it by who should be told, prioritize it by severity and time-criticality, and then not only tell the recipients what's wrong but also advise them what to do about it, in clear, everyday language.
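The categorize-and-prioritize idea can be sketched with a simple priority queue. The alert fields and example values are hypothetical, not the PlantWeb Alerts implementation.

```python
# Sketch of alert triage: each alert carries a recipient role, a severity,
# and plain-language advice, and is delivered most-urgent-first. All names
# and values are HYPOTHETICAL illustrations of the concept.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Alert:
    priority: int                        # lower number = more urgent
    source: str = field(compare=False)   # originating device
    recipient: str = field(compare=False)
    advice: str = field(compare=False)   # plain-language guidance

queue: list[Alert] = []
heapq.heappush(queue, Alert(3, "FT-101", "maintenance",
                            "Recalibrate flow transmitter at next opportunity."))
heapq.heappush(queue, Alert(1, "PV-203", "operations",
                            "Valve sticking; switch loop to manual now."))
heapq.heappush(queue, Alert(2, "PUMP-7", "maintenance",
                            "Bearing wear detected; schedule service this week."))

ordered = []
while queue:                             # deliver most urgent first
    a = heapq.heappop(queue)
    ordered.append(a)
    print(f"[P{a.priority}] to {a.recipient}: {a.source} - {a.advice}")
```

Routing by recipient role is what keeps maintenance advisories off the operators' alarm display while still escalating time-critical items to them.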
With the advance warning provided by predictive intelligence, combined with effective information integration for both control and asset health information, operators and maintenance personnel have more information and more lead time to deal with potential problems. This reduces overall operations and maintenance cost and may reduce or eliminate staffing requirements at remote locations.
Simulation software such as DeltaV Simulate can also improve operator
efficiency by providing a safe but realistic environment where operators
can practice dealing with both normal and abnormal process events.
Better control. PlantWeb also improves productivity by reducing
process variability, so operators don't have to spend time managing
problem loops manually.
This better control begins with the intelligent instruments and valves that
form the foundation of PlantWeb architecture. They include transmitters
with fast dynamic response, digital valves that respond to signals of 1% or
less, and the world's most accurate Coriolis flowmeters.
DeltaV and Ovation systems integrate equipment and process information
to add rock-solid regulatory and advanced control. And because
advanced controls such as Model Predictive Control are embedded in the
system controllers, they're easier to configure and use, and have better
availability than traditional host-based systems.
When the problem is a poorly tuned loop, it's easy to get back on track
with DeltaV Tune software, which uses patented relay oscillation
principles that minimize process disturbances and tuning time.
OvationTune, a system-wide tuning package, also smoothes out
variability by monitoring and adaptively tuning loops for optimal
performance.
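Relay-oscillation tuning of this general kind (the Åström–Hägglund relay-feedback family) estimates the process's ultimate gain from the relay amplitude d and the observed oscillation amplitude a, then applies tuning rules. A minimal sketch using classic Ziegler–Nichols PI constants (illustrative only; not Emerson's patented algorithm):

```python
import math

def relay_tune_pi(d, a, Pu):
    """Estimate ultimate gain Ku = 4d / (pi * a) from a relay test with
    relay amplitude d, oscillation amplitude a and period Pu, then
    return Ziegler-Nichols PI settings (controller gain Kc, reset Ti)."""
    Ku = 4.0 * d / (math.pi * a)
    return 0.45 * Ku, Pu / 1.2   # classic Z-N PI rules

# Hypothetical relay test: d = 1.0, measured a = 0.5, period 60 s
Kc, Ti = relay_tune_pi(d=1.0, a=0.5, Pu=60.0)
print(round(Kc, 3), round(Ti, 1))  # 1.146 50.0
```

The appeal of the relay test is that the induced oscillation is small and bounded, which is why such methods "minimize process disturbances" compared with open-loop step tests.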
(For more on how PlantWeb reduces variability, visit www.PlantWeb.com
and click the Quality link under Operational Benefits.)
Process optimization. As your operators shift their focus to improving
process performance, PlantWeb provides the tools that help them make it
happen.
AMS Real-Time Optimizer software identifies optimum setpoints to
achieve best performance without violating constraints. Like PlantWeb's
other advanced controls, Real-Time Optimizer is an integral part of the
architecture, making implementation of optimum setpoints easy.
For power applications, SmartProcess plant optimization software
improves throughput and efficiencies by maximizing boiler performance,
improving heat rate, and minimizing steam temperature variations.
These applications allow operators to better optimize each loop or unit,
without violating interacting constraints that can cause process upsets or
downtime.
Extending the savings
Many of the PlantWeb capabilities that enhance maintenance and
operator productivity also help reduce other operational costs. Although
a full discussion of these other benefits is beyond the scope of this paper,
here are a few highlights:
Safety, health, and environment. With PlantWeb's predictive
intelligence and information integration, you can:
Maintain mechanical integrity by detecting, predicting, and preventing
equipment failures or unsafe process conditions.
Use remote monitoring to reduce personnel exposure to hazardous
environments.
Streamline regulatory compliance through automatic documentation
of maintenance and engineering activities.
Utilities. PlantWeb helps reduce energy use, which can be a major
contributor to operating costs.
Tight, consistent control helps improve conversion of fuels to energy
by 6-10%.
Real-Time Optimizer and SmartProcess software can optimize the
mix of fuels and energy-producing assets.
Equipment Performance Monitor helps you identify when and where
maintenance will most reduce energy use.
Machinery Health Manager can alert you when corrective action is
needed to restore motor efficiency.
Waste and rework. Costs rise when you must reprocess or dispose of
off-spec product. PlantWeb can help here, too.
Predictive intelligence alerts you to conditions that lead to waste,
while superior control smoothes out variability so you meet specs
even at higher production rates.
DeltaV and Ovation can automate startups and grade changes,
bringing the process to full production faster.
Real-Time Optimizer can constantly find the best operating points
for minimizing waste and rework.
For more on each of these areas, see the Operational Benefits section of
www.PlantWeb.com.
Maximizing and sustaining
the gains
Gaining the full benefits of a new architecture means adopting new
technologies and work practices, but finding the time and resources to
make improvements can be challenging in today's short-staffed plants.
With Emerson you can maximize the gains and sustain them for
improved financial performance over the life of your plant.
Emerson makes it easy. Experience shows that customers gain the full
value from their technology investments by complementing these
technologies with PlantWeb Services. Whether you're using PlantWeb in
a new facility or adding it to your current operation, our expertise helps
ensure a successful implementation.
Best-practices consultation. Emerson service experts will conduct an
assessment and benchmarking program so you know where your plant
stands relative to your goals and industry best practices.
Expert implementation. We will apply AMS Suite technology to your
plant needs. To help ensure the success of your project, our service
experts will define and document modified work practices, integrate real-
time process and equipment health information with your enterprise
applications, and provide education and certification for your plant
personnel.
We offer a full range of training at your location or ours, or in video, PC-,
and web-based courses to help your operations and maintenance staffs
come up to speed quickly. Courses include condition-monitoring and
predictive-maintenance techniques, as well as product-specific classes for
predictive maintenance across all assets.
Sustain the gain. If you choose, we can also provide PlantWeb-enabled
expert services to supplement your in-house resources. Emerson's ongoing
services include monitoring and analysis, diagnostic services, and
program management to help ensure long-term results.
Real projects, real results
PlantWeb has proven its value in thousands of installations, in all
industries, and around the world. Users are seeing the benefits every day.
Here are just a few examples:
"We are saving $300,000 in labor costs alone, and are running more
efficiently than ever." - Power plant
"The diagnostics are fast and precise when identifying what is
generating a malfunction." - Electric service utility
"Automated documentation of instrument tests saves us an average of
40%." - Pharmaceutical maker
"The time it takes to troubleshoot problems has been reduced nearly
50%, and predictive diagnostics tell us when our valves are starting to
deteriorate so we can plan our maintenance activities instead of
reacting to process problems and failures." - Chemical producer
"We used to go to the field, hook up to the device, and look to see what
was wrong. Now we can see immediately from the DeltaV what is
wrong with the instrument. What used to take 40 to 45 minutes now
takes 5 to 10 minutes." - Tank-farm operator
"We have eliminated 25% of our maintenance time since PlantWeb
was installed two years ago. We have kept the same number of
people, but those people are now able to do other things to make our
plant more productive." - Food processor
"We were able to keep the same number of personnel in spite of
doubling our size." - Regulated-waste treatment facility
For case histories and additional proofs of PlantWeb architecture's
capabilities, visit www.PlantWeb.com and click on Customer Proven.
Taking the next steps
PlantWeb can help you meet the ever-increasing demand for lower costs
by increasing operations and maintenance productivity. But with such a
wide range of opportunities for improvement, how do you get started?
1. Decide where you want to go. What are your goals for operations
and maintenance cost? What are your goals for uptime? How do you
want your operators and maintenance departments to work together?
How would you like your plant to run? Establish your vision and goals,
and then get ready to achieve them.
2. Assess where you are. How have your costs changed over the last
two or three years? What is your maintenance budget as a percent of
replacement asset value (%RAV)? What is your current mix of
maintenance strategies? How many loops does each of your operators
manage? How do these figures compare to industry benchmarks?
3. Look for specific pain points or opportunities. Do some units or
equipment types have more problems than others, especially unexpected
ones? Is equipment-health information from HART and fieldbus devices
available to operators and maintenance technicians? Do you have
automated maintenance management or process optimization tools? Are
they being used?
4. Plan the changes that offer the most benefit. Usually, this involves
greater use of predictive maintenance to avoid problems that affect both
maintenance and operations productivity. Consider changes in work
practices as well as technology, and be sure to involve management and
engineering as well as operations and maintenance teams in the planning
process.
5. Work with your local Emerson team. We'll help you identify which
PlantWeb technologies and related services will meet your goals, and how
we can put them to work for you. If you want, we can even help you with
the assessment and planning phases of this process, as well as providing
implementation services and ongoing support.
Emerson Process Management
8301 Cameron Road
Austin, Texas 78754
T 1 (512) 834-7328
F 1 (512) 834-7600
www.EmersonProcess.com
Other resources
Reducing operations and maintenance costs is just one of the ways
PlantWeb architecture helps improve process and plant performance.
The PlantWeb web site offers a wealth of information including
additional white papers on reducing costs while also improving
process quality, throughput, and availability.
www.PlantWeb.com click on "Operational Benefits."
Emerson Process Management's free online learning environment,
PlantWeb University, offers several courses on improving
maintenance effectiveness. Additional courses on reducing
operations and maintenance costs by increasing productivity are in
development and will be available soon.
www.PlantWebUniversity.com
The AMS Work Processes Guide outlines maintenance-practice
changes to maximize the benefits outlined in this white paper.
www.emersonprocess.com/ams/solutions
click on "Saving Money" and then "AMS Work Processes Guide"
The contents of this publication are presented for informational purposes only, and while effort has been made to ensure their accuracy, they are
not to be construed as warranties or guarantees, express or implied, regarding the products or services described herein or their use or
applicability. All sales are governed by our terms and conditions, which are available on request. We reserve the right to modify or improve the
designs or specifications of our products at any time without notice.
PlantWeb, Fisher, Rosemount, Micro Motion, RBMware, e-fficiency, Ovation, SmartProcess, and DeltaV are marks of Emerson Process
Management. All other marks are the property of their respective owners.
030903
2003 Emerson Process Management. All rights reserved.
Crude oil blend scheduling optimization: an application with
multimillion dollar benefits - Part 1
The ability to schedule the crude oil blendshop more effectively
provides substantial downstream benefits
J. D. KELLY* and J. L. MANN, Honeywell Industry Solutions, Toronto, Ontario, Canada
Hydrocarbon Processing, June 2003, Special Report: Process/Plant Optimization

Economic and operability benefits associated with better crude
oil blend scheduling are numerous and significant. The various
crude oils that arrive at a refinery to be processed into the dif-
ferent refined products must be carefully handled and mixed before
they are charged to the atmospheric and vacuum distillation units or
pipestill. Many details are involved in optimizing scheduling of a
refinerys crude oil feedstocks from receipt to charging of the pipestills.
Every refinery that charges a mix of crude oils to a pipestill has a
crude oil blend scheduling optimization opportunity. Producing
and updating a schedule of when, where, what and how much crude
oil to blend can be difficult. Although crude oils are often planned,
purchased, procured and have a delivery schedule set long before
they arrive at the refinery, details of scheduling crude oil off-load-
ing, storing, blending and charging to meet pipestill feed quantity and
quality specifications must always be prepared based on current
information and very short-term anticipated requirements. How-
ever, the rewards for performing better crude oil blend scheduling opti-
mization can be substantial depending on complexity and uncer-
tainty of the particular crude oil blendshop** operation.
Crude oil blendshop scheduling has historically been carried out
using fit-for-purpose spreadsheets and simulators with most deci-
sions made manually. The user attempts to create a feasible schedule
that meets flowrate and inventory bounds, operating practices and
quality targets over a near-term horizon, typically 1 to 10 days (up to
30). Actual schedule length depends on the reliability or certainty
of the crude oil delivery quantities and timing, and the pipestill pro-
duction mode runs. Farther into the future, the input data become
more uncertain and the scheduling work is typically cut off.
The goal of production scheduling optimization is to automate
many of these manual decisions by taking advantage of recent
advances in computer power and mathematical programming codes
and solving techniques. The main advantages of this approach are that
many thousands of scheduling scenarios can be evaluated as part of
the optimization in comparison to perhaps only one schedule found
by a user, a substantial reduction in time required to generate better
schedules and the ability to incrementally rerun the optimization
when different what-if scenarios are required (e.g., evaluating dis-
tressed crude oil cargos).
The focus of this article is four-fold. First, we delineate the busi-
ness problem of crude oil blend scheduling using a simple but reveal-
ing motivating example. Second, we highlight the hard and soft ben-
efit areas of improved blend scheduling to whet the appetite for
the impending details. Third, we provide a description of the new
scheduling approach for crude oil blending including the theory,
explicit problem formulation and related aspects such as how to seg-
regate crude oils when there are not enough tanks for dedicated stor-
age. And fourth, we discuss key elements of the scheduling solution
to enlighten the reader on the nuances and the challenges of solv-
ing large combinatorial problems.
Before further discussion, it is important to highlight the differ-
ence between production planning and scheduling and to discuss
the underlying need for continuous improvement of the schedul-
ing function. There will always be a planning activity and a schedul-
ing activity. Together, they form a hierarchical decision-making
framework that is very much a part of the organizational structure of
every corporation.
Planning is forecast driven and typically aggregates resources such
as equipment, materials and time to model and solve the breadth of
the problem. Planning generates simplified activities or tasks con-
sistent with these aggregations. Scheduling is order driven and uses the
decomposed equipment, materials and time to model and solve the
depth of the problem. It should be appreciated that a great number
of planning decisions are made long before any scheduling decisions
are generated. This implies that good scheduling can only result
from good planning.
The subtlety between forecasts and orders must also be appreci-
ated. At the planning stage firm and reliable customer orders are
rarely available over any significant planning time horizon except
for wholesale agreements or contracts. Forecast or best-guess aggre-
gate demands and capabilities are used to optimize projections of
plant operations. Thus, the plans generated are used to set direc-
tions and not production orders. Scheduling on the other hand is
primarily based on orders. Orders are much more concrete both in
quantity and time and have the highest reliability for the immediate
future. Scheduling generates detailed tasks and activities to meet the
immediate orders and scheduling is typically updated whenever sig-
nificant changes to the order or plant capabilities occur.
There is always a requirement to continuously improve schedul-
ing utility. The underlying driving force for this is related to the
notion of innovation in industry. Three known innovations are out-
lined by Norman et al.1 The first is the manufacture of replaceable or
interchangeable parts that comprise the bill-of-materials of any given
product. The second is to produce many products within a single
facility and is sometimes referred to as product diversification. And
the third innovation, the one being implemented today in industry,
is production for final demand or demand-driven production (DDP).
This is the prevailing concept that we should only produce product
that satisfies actual product demand (demand orders). Speculative or
provisional production must be inventoried and, hence, can be con-
sidered to be inefficient and potentially risky because a real cus-
tomer's purchase order is not secured. Many popular production
paradigms such as just-in-time (JIT), Kanban, takt time, single-
minute exchange of die (SMED), theory-of-constraints (TOC), etc.,
are all examples of striving toward the goal of DDP. The schedul-
ing optimization solution presented in this article is a critical step
in the on-going struggle to improve efficiency and profitability of
an oil refinery or petrochemical plant with respect to the DDP inno-
vation.
Blendshop example. Marine-access blendshops are usually char-
acterized by having a set of storage or receiving tanks and a set of
feed or charging tanks with either a continuous- or batch-type blend
header in the middle. Pipeline-access blendshops often have only
receiving tanks, because the settling for free-water removal that is
needed after a marine vessel unloads is not required.
Fig. 1 shows a small blendshop problem with one pipeline, two
receiving tanks, one transferline (or batch-type blend header), two
charging tanks and one pipestill. Tank inventory capacities and
names are shown inside the tank objects and flowrate capacities of the
semicontinuous equipment are shown above the
pipeline, transferline and pipestill objects. The
bold arrows in the figure indicate the flow or
movement variables for the blendshop schedul-
ing problem. Connections between equipment
are typically material based in the sense that only
certain crude oils or mixtures are allowed to flow
between a source and destination. These material
based connections permit use of crude oil segre-
gations or pooling by directing certain crude oils
to be stored into specific tanks. Segregations are
useful when controllable equipment or move-
ments can be used to prepare a blend recipe or
formulation to be specified for charging the
pipestill.2 Segregations are useful to reduce prob-
lem complexity by reducing the number of deci-
sions to be made in terms of where crude oils should be stored. In our
example we do not impose any batch recipe such as a 50:50 blend for
flow out of each segregation. That is, we do not impose a 50% vol-
ume fraction from the light crude oil pool (TK1) and a 50% fraction
from the heavy crude oil pool (TK2). We have omitted this detail so
as not to detract from the main focus of the article.
Table 1 provides the information on crude oil receipts for four
different types of crude oils that are segregated into light and heavy
crude oil pools (i.e., crude oils #3 and #4 are light and #1 and #2
are heavy). The start and end times are in hours from the start-of-
schedule, which is set at the zero hour; the end-of-schedule is on the
240th hr (10 days). Crude oil mixture liftings from the charging
tanks to the pipestill are continuously set at 5 Kbbl/hr and the
pipestill's fuels production mode is unchanged over the scheduling
horizon. This defines the crude oil mixture demand schedule.
Fig. 1 also notes various operating rules that must be respected
for the blendshop to operate properly (more details on these and
others are given in Part 2). For this blendshop we
only allow one flow out of or in to the pipeline and transferline at a
time. Also, the tanks must be in standing-gage operation, where there
can be flow either in or out at a time, but not both. A generalization of
standing gage is the mixing-delay restriction. It imposes a time
delay after the last flow into a tank has finished before a movement
out is allowed. The receiving tanks have a 9-hr delay and the charg-
ing tanks a 3-hr delay.
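A schedule checker for the mixing-delay restriction can be sketched in a few lines (a hypothetical helper; times in hours from start-of-schedule):

```python
def violates_mixing_delay(last_inflow_end, outflow_start, delay_hr):
    """Mixing-delay rule: a movement out of a tank may not start until
    delay_hr after the last flow into the tank has finished. Standing
    gage is the special case of no simultaneous in/out flow."""
    return outflow_start < last_inflow_end + delay_hr

# Receiving tanks carry a 9-hr delay, charging tanks a 3-hr delay.
print(violates_mixing_delay(16, 20, 9))  # True: hr 20 is before 16 + 9 = 25
print(violates_mixing_delay(16, 25, 9))  # False: the delay is satisfied
```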
The last logic constraint for this example is that all flows are semi-
continuous or disjunctive. This means that a flowrate must either
be zero or lie between lower and upper bounds.
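For illustration, the semicontinuous condition and its usual binary-variable MILP encoding (the bounds here are hypothetical, not the article's data):

```python
def semicontinuous_ok(flow, lower, upper, tol=1e-9):
    """A semicontinuous (disjunctive) flowrate is feasible when it is
    exactly zero or lies between its lower and upper bounds."""
    return abs(flow) <= tol or (lower - tol <= flow <= upper + tol)

# In a MILP the same disjunction is modeled with a binary u:
#   lower * u <= flow <= upper * u,   u in {0, 1}
print(semicontinuous_ok(0.0, 2.0, 16.0))   # True: equipment shut off
print(semicontinuous_ok(10.0, 2.0, 16.0))  # True: within operating range
print(semicontinuous_ok(1.0, 2.0, 16.0))   # False: below minimum rate
```

These disjunctions are one of the main sources of combinatorial difficulty in the scheduling problem, since each movement in each time period carries its own binary decision.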
Table 2 displays a subset of the assay information for the four
crude oils stored in and delivered into the blendshop. Only three
cuts are included: whole crude oil, kerosene and heavy gas oil. Blend-
ing of the cuts and assigned properties are based on volume or weight
depending on the property. Blending numbers or indices would be
used for those properties that blend nonlinearly such as Reid vapor
pressure (Rvp) and viscosity. Synergistic and antagonistic nonlinear
blending such as evidenced for octane are not considered further for
the crude oil blendshop problem.
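As a small illustration of linear volumetric blending (the volumes and property values below are hypothetical, not taken from the article's assay tables):

```python
def blend_by_volume(volumes, props):
    """Volume-weighted average of a linearly blending property
    (e.g., specific gravity or sulfur on a volume basis)."""
    total = sum(volumes)
    return sum(v * p for v, p in zip(volumes, props)) / total

# 60 Kbbl of a lighter crude (0.851 SG) blended with 40 Kbbl of a
# heavier crude (0.870 SG). Nonlinear properties such as Rvp and
# viscosity would first be converted to blending indices, averaged
# the same way, then converted back.
mix_sg = blend_by_volume([60.0, 40.0], [0.851, 0.870])
print(round(mix_sg, 4))  # 0.8586
```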
Table 3 displays the minimum, target and maximum quality
specifications for the mix of crude oils required by the fuels operat-
ing mode. For this example we do not concern ourselves with bounds
on the quality variables though the scheduling optimizer can be con-
figured to respect these bounds over the scheduling horizon. The
target values are those typically found in the planning optimizer.
Table 4 completes the required information for the blendshop by
SPECI ALREPORT PROCESS/ PLANT OPTI M I ZATI ON
48
I
JUNE 2003 HYDROCARBON PROCESSING
TABLE 1. Crude oil receipt or supply orders over the scheduling
horizon (cycle data): crude oil #, start time (hr), end time (hr),
flowrate, duration and valid flow destination for each receipt.

FIG. 1. This blendshop problem involves one pipeline, two receiving
tanks (TK1, TK2), a transferline (batch-type blend header), two
charging tanks (TK3, TK4) and one pipestill.
providing the opening tank inventories and crude oil compositions
in each tank.
The information provided for the example characterizes a simple
yet typical crude oil blendshop scheduling optimization problem.
It has been deliberately organized into two main data themes: model
and cycle. Model data are generally static and do not change within
the scheduling time horizon. They define the material/flow/capac-
ity network, desired operating logic and crude oil assay information
and mixture quality specifications. Cycle data define the dynamic
data that can change every time the next schedule is made such as tank
opening inventories and compositions, supply, demand and main-
tenance orders, and any actual or logged movements. The problems
model and cycle data can be further segmented into what we call
the quantity, logic and quality aspects of the problem and will be dis-
cussed in Part 2.
Use of the word cycle is taken from the well-known hierarchical
planning and scheduling philosophy of Bitran and Hax,3 who advo-
cate a rolling-horizon framework or scheduling cycle to mitigate
cate a rolling-horizon framework or scheduling cycle to mitigate
uncertainty due to such effects as order reliability, measurement inac-
curacies and execution errors. For more recent details, see S. C. Graves
in the Handbook of Applied Optimization, 2002.4
Potential benefits. Of course it makes sense to pursue only
those aspects of a solution to a problem that have value. Before we
describe how we formulate and solve the crude oil blend schedul-
ing optimization problem it seems prudent to analyze why we would
want to solve it. This leads into the discussion of expected benefits.
Three major types of disturbances arguably affect a refinery at
any time during its production or operation: crude oil mixture qual-
ity variability, ambient temperature changes and unreliable or faulty
equipment. Processing equipment malfunctions can cause serious
production outages and safety concerns, and are normally mitigated
by sound maintenance practices. Seasonal and diurnal ambient tem-
perature swings also disturb stability of operations and are mitigated
by providing increased cooling or heating capability and improving
controls.
The molecules from each crude oil receipt are eventually pro-
cessed at every unit within a refinery. Variation in crude oil mixture
quality charged to the pipestill is perhaps the single most influen-
tial disturbance to a refinery. It is the foremost reason why reducing
variability around the many quality targets by blending crude oils
is of tremendous importance.
Crude oil blendshop scheduling optimization is a relatively inex-
pensive and timely way to seriously improve performance of almost
any refinery. Five benefit areas are all aided by applying better blend
scheduling:
Reducing quantity and quality target variability: As men-
tioned, reducing quality target variance should be at the top of the list
for refinery improvements. Deviations from quality targets should be
minimized to charge the pipestill with a steady mixture of crude oil.
Steadier quality crude oil mixtures charging the pipestills will also
translate into steadier operations for downstream production units.
It also makes good sense to run pipestills with a constant flowrate
for as long as possible. An example of an improved quality target or
key planning proxy is shown in Fig. 2.
Improving the ability to generate more than just feasible
schedules: For those blendshops that are tightly resource con-
strained due to previous cost-cutting initiatives, it may be arduous to
generate a feasible schedule for the immediate future. For these blend-
shops it is valuable to have an automated scheduling application
generate in seconds what would take a human scheduler hours to
construct. Multiple better-than-feasible or what we call optimized
schedules may be presented that meet the production goals for selec-
TABLE 2. Crude oil assay information (model data): whole crude oil,
kerosene and heavy gas oil cut/property values for crude oils #1
through #4.

TABLE 3. Fuels production mode cut/property specifications (model
data): minimum, target and maximum values.

TABLE 4. Crude oil opening inventories and compositions (cycle
data) for each tank.

FIG. 2. Quality variability was reduced with automated crude oil
blend scheduling.
tion by the scheduler. The effect of not being able to generate feasi-
ble schedules means that either the supply scenario must be changed
by distressing crude oil deliveries or demand must be altered by
decreasing or increasing the flowrate of crude oil mixtures charging
the pipestill. Unfortunately, both of these alternatives are undesir-
able for various reasons.
Rapid acceptance and insertion of spot supply and demand
opportunities: Typically, a refinery will be some mix of contract
(wholesale, nondiscretionary, strategic or base) versus spot (retail,
discretionary, tactical or incremental) crude oil purchases. The faster
and better a refinery can assess whether a particular spot crude oil
purchase will result in a feasible operation, the better it can
capitalize on short-term market opportunities.
Consistency of schedules: A common problem in production
scheduling is that usually only one skilled scheduler can schedule a
refinery's crude oil blendshop effectively. When the main scheduling
individual is sick or on vacation, it is very difficult to backfill with
another appropriately trained scheduler. Hence, if this occurs, sched-
ules generated by the two individuals can differ widely, to the
point where schedules generated by the relief may actually be
infeasible. Using an automated scheduling tool alleviates some of
these issues, since schedules are made to satisfy the same business
logic and reflect the same constraints and limitations.
Production schedule visibility throughout the refinery:
Finally, given the use of Internet technologies, it should be standard
now and in the future to disseminate the official schedules online
so that managers, operators and engineers can all view the same pro-
duction program for the next several days or weeks. Although this can
be easily accomplished with spreadsheets and simulators, it is not
always possible with these solutions to look out into the future many
days or weeks and show the longer-term schedules to those who can
take advantage of greater look-ahead.
If we take for the purpose of discussion a medium-sized 100,000
bpd refinery, or equivalently 35,000,000 bpy with a 350-day yr pro-
duction schedule, it is possible to make a list of some of the expected
benefits and their value. Here we only detail the tangible benefits.
However, a range of intangible benefits can translate into significant
value. Each benefit cited is incremental over what would be achiev-
able using spreadsheets or simulators.
Quality target variability improvement such as whole crude
oil sulfur: $2,000,000/yr. This number was estimated from the ben-
efits identified when a similar application for crude oil blend schedul-
ing was developed and applied by the first author to a sweet crude oil
processing 100,000 bpd refinery. The benefits were captured at the
planning feedstock selection activity level because the proxy con-
straint on bulk crude oil sulfur was raised from 0.55 to 0.85 %wt sul-
fur over a three-month period (Fig. 2). This resulted in a cheaper
slate of crude oils being purchased while still being able to meet the
quality specifications for all of the finished products refined and
blended.
Reduced chemical injection: $100,000/yr. Further and unex-
pected savings on corrosion control chemicals were also observed
for the refinery over a one-year period due to the fact that less inhibitor
needed to be added given the improved regulation of crude oil bulk
sulfur concentration.
Distressed sale of crude oils from the refinery: 5 incidents/yr
× $1.30/bbl × 50,000 bbl = $325,000/yr. Here we assume that five
times in one year the refinery needs to distress a 50,000 bbl batch of
crude oil at a loss, in terms of lost netback production (opportunity)
and loss in selling price of $1.30 per bbl. This means that the refin-
ery lost the opportunity to process the crude oil and make netback
$1.00/bbl and sold the crude oil batch at a loss of $0.30/bbl.
Pipeline penalty charge for changes in sequence or timing:
3 incidences/yr × $25,000/incidence = $75,000/yr. A penalty of
$25,000 per incidence is realized for altering the start or end time of
a batch of crude oil before it can be received at the refinery.
Spot opportunity for crude oil trades: 5 opportunities/yr
× 50,000 bbl × $1.00 net margin/bbl = $250,000/yr. There are five
extra opportunities per year to run batches of 50,000 bbl at a netback
or net margin of $1.00/bbl due to better crude oil blendshop schedul-
ing.
Reduced working capital (decommissioning of a crude oil
tank): 50,000 bbl cycle stock × $20/bbl × 10% cost of capital/yr =
$100,000/yr. This is a result of sustaining a long-term reduction of
safety or cycle stocks of reserve crude oil in the crude oil blendshop.
Total savings: $2,000,000 + $100,000 + $325,000 + $75,000
+ $250,000 + $100,000 = $2,850,000/yr! These hypothetical and
somewhat anecdotal benefit calculations translate into over two-
and-a-half million dollars' worth of potential savings to the refin-
ery profit and loss that would not have been achieved with manual
scheduling alone. It must be emphasized that this number only pro-
vides a benchmark or yardstick to Pareto-rank, at least qualitatively, the
priorities in terms of choosing between other possible and compet-
ing capital investment projects at the refinery. From the perspective
of overall crude oil costs of the refinery over one year, this $2.85 mil-
lion in savings is less than 0.41% of the total feedstock cost (i.e.,
$2,850,000 / (35,000,000 bbl) / ($20/bbl) × 100 = 0.407%).
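As a quick cross-check, the tangible-benefit arithmetic above can be reproduced directly; all figures are the article's own estimates for the hypothetical 100,000-bpd refinery:

```python
# Tangible annual benefits, using the article's own estimates ($/yr).
benefits = {
    "quality target variability (crude sulfur)": 2_000_000,
    "reduced chemical injection": 100_000,
    "distressed crude sales": 5 * 1.30 * 50_000,        # 5 incidents x $1.30/bbl x 50,000 bbl
    "pipeline penalty charges": 3 * 25_000,             # 3 incidences x $25,000/incidence
    "spot crude trade opportunities": 5 * 50_000 * 1.00,  # 5 batches x 50,000 bbl x $1.00/bbl
    "reduced working capital (tank)": 50_000 * 20 * 0.10,  # 50,000 bbl x $20/bbl x 10%/yr
}
total = sum(benefits.values())  # $2,850,000/yr

# Savings as a fraction of annual feedstock cost: 35,000,000 bbl/yr at $20/bbl.
feedstock_cost = 35_000_000 * 20
pct_of_feedstock = total / feedstock_cost * 100  # roughly 0.407%
```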
Formulating problem logistics and quality details.
Modeling the crude oil blendshop is the cornerstone of being able to
capture the potential benefits outlined. Although the modeling must
ultimately reside as a collection of complex mathematical expres-
sions relating variables and constraints to offer some level of opti-
mization, we supply a qualitative description of the model only. We
do this so as not to detract from the general understanding of the
overall problem and reasons for solving it. Some of the more spe-
cific details around the mathematical modeling of the crude oil
blendshop scheduling optimization problem can be found in Lee
et al.,5 Shah6 and Jia et al.7
At the core of our formulation is the hierarchical decomposi-
tion of the problem into logistics and quality subproblems. The logis-
tics subproblem is very similar to the supply chain logistics problem
except that our logistics problem has less spatial scope. It considers only
the crude oil blendshop (inside the production chain) and not the
entire supply chain but has more of an in-depth operational view of
the crude oil handling and blending. The logistics subproblem only
considers the quantity and logic variables and constraints of the prob-
lem and ignores the quality variables and constraints. The qual-
ity subproblem is solved after the logistics subproblem whereby the
logic variables are fixed from the logistics solution and the quantity
and quality variables are adjusted to respect both the quantity and
quality bounds and constraints. The quality optimizer is very simi-
lar to commercially available oil refinery and petrochemical plan-
ning software formulations which are used to select the crude oils, for
example, that will be processed at the refinery.
The reason for the logistics and quality subproblem decompo-
sition is three-fold. First, commercially available optimization soft-
ware, optimization theory and computer horsepower have not pro-
gressed to the point where we could solve a full-blown simultane-
ous quantity, logic and quality crude oil blendshop scheduling prob-
lem in reasonable time. Second, theory tells us that if you cannot
find a feasible solution to the logistics subproblem then you will not
be able to find a feasible solution to the quality subproblem. Hence,
the decomposition provides a very useful problem-solving aid because
if the logistics subproblem is infeasible for whatever reason (i.e., bad
input data, overly aggressive production plans, etc.) then there is no
use spending time solving the quality subproblem until something has
changed in terms of the quantity and logic aspects of the problem.
However, even if the logistics subproblem is feasible there is no guar-
antee that the quality subproblem is feasible. Third, theory also tells
us that if the logistics subproblem is globally optimal (i.e., the very best
solution has been found) and the quality subproblem is feasible then
we have found the global optimum for the overall problem. Hence,
our decomposition provides a very powerful structure given that it is
easier to check for quality feasibility than quality global optimality after
the best logistics solution has been found. Unfortunately if the qual-
ity subproblem is infeasible then there must be a mechanism to send
back special constraints to the logistics subproblem to force it away
from those regions of the search space that are known to cause qual-
ity infeasibilities.
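The logistics-then-quality decomposition with feedback cuts can be sketched as a simple loop. This is a toy illustration of the structure described above, not the authors' solver: each candidate logic solution is just a labeled objective value, and `quality_feasible` stands in for the nonlinear quality subproblem.

```python
# Toy sketch of the logistics/quality decomposition with feedback cuts.
# Hypothetical stand-ins: candidates represent logic solutions of the
# logistics subproblem; quality feasibility is supplied as a callback.

def solve_logistics(candidates, cuts):
    """Return the best logic solution not excluded by a cut, or None."""
    feasible = [c for c in candidates if c["name"] not in cuts]
    if not feasible:
        return None  # logistics subproblem infeasible -> overall infeasible
    return max(feasible, key=lambda c: c["objective"])

def solve(candidates, quality_feasible):
    cuts = set()  # regions known to cause quality infeasibilities
    while True:
        logistics = solve_logistics(candidates, cuts)
        if logistics is None:
            return None  # no schedule satisfies both subproblems
        if quality_feasible(logistics["name"]):
            # Best logistics solution that is quality-feasible: by the
            # property cited in the text, the overall optimum.
            return logistics
        cuts.add(logistics["name"])  # send a cut back to the logistics level

candidates = [{"name": "A", "objective": 10.0},
              {"name": "B", "objective": 8.0},
              {"name": "C", "objective": 5.0}]
best = solve(candidates, quality_feasible=lambda name: name != "A")
```

Here the best logistics solution "A" is cut off by a quality infeasibility, and the loop returns the next-best solution "B".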
We find that this breakdown of the overall problem into two
subproblems is in fact very intuitive for the scheduling users who
are using spreadsheets, especially the quantity and quality aspects.
The logic details are known by the users but rarely included
in formulating their spreadsheets due to their discrete nature (i.e.,
they require some search mechanism and/or trial-and-error). Logic aspects
are usually resolved ad hoc after a quantity-quality solution has been
circumscribed, but only for the immediate near term of the schedule.
There could also be further decomposition within each of the
logistics or quality subproblems. For example, in the logistics sub-
problem it is possible to decompose the scheduling into assignment
and sequencing stages. The assignment stage can be solved relatively
easily to optimality by assigning orders or jobs to equipment and
ignoring the equipment job sequencing. The assignment decisions
are then fixed and the sequencing stage is solved. If a feasible sequenc-
ing solution can be found then the overall logistics subproblem is
feasible. If the lower-level sequencing is infeasible then extra con-
straints are added to the assignment stage problem to guide the
higher-level solution away from those assignments that are known to
cause problems for the sequencing.8
A strong parallel to the logistics and quality decomposition is
found in discrete parts manufacturing refined by the Japanese. That
is the decomposition of JIT, Kanban, SMED, TOC, etc., with the sta-
tistical quality control philosophies of W. E. Deming and S. Taguchi.
A clear separation between the two is brought together in the end to
guide the manufacturing machine to produce quality products effi-
ciently, effectively and punctually.
Finally, formulating crude oil blend scheduling optimization is in
the class of production scheduling known as a closedshop.
Definitions of a closedshop and its counterpart, an openshop, can be
found in the review paper by Graves.9
In an openshop all production orders
are by customer request and no inventory is necessarily stocked. In
a closedshop all customer requests are serviced from inventory and a
production activity is generally a result of inventory replenishment
decisions. These definitions really state that closedshops involve
quantity variables and inventory balances whereas openshops typically
don't. Closedshops are generally associated with lot-sizing problems
(requiring a flow path or network to be defined) and almost always
are formulated using some form of scheduling horizon segmenta-
tion into time periods. Even when continuous-time closedshop for-
mulations are used (see Jia et al.7), the number of time-event points
is required as an input. Segmenting the time horizon is necessary to
perform inventory or material balances. Solving the industrial-scale
closedshop problem has been attempted by first determining lot,
batch or blend sizes and then making the assignment, sequencing
and timing decisions (or logic decisions) using an openshop frame-
work. However, this decomposition has been met with limited suc-
cess. A unique formulation of the logistics subproblem is to model and
solve the blendshop as a closedshop explicitly by including both
quantity and logic decisions simultaneously in one optimization.
LITERATURE CITED
1 Norman, A., K. Mahmood and M. Chowdhury, "The need for a paradigm for
innovation," http://www.eco.utexas.edu/homepages/faculty/Norman/long/
InnParadigm.html, Department of Economics, University of Texas at Austin,
August 1999.
2 Kelly, J. D. and J. F. Forbes, "Structured approach to storage allocation for
improved process controllability," AIChE Journal, 44, 8, 1998.
3 Bitran, G. R. and A. C. Hax, "On the design of hierarchical production plan-
ning," Decision Science, 8, 28, 1977.
4 Pardalos, P. M. and M. G. C. Resende (editors), Handbook of Applied
Optimization, Oxford University Press, London, U.K., 2002.
5 Lee, H., J. M. Pinto, I. E. Grossmann and S. Park, "Mixed-integer linear pro-
gramming model for refinery short-term scheduling of the crude oil unload-
ing with inventory management," Industrial & Engineering Chemistry Research,
35, 5, 1630–1641, 1996.
6 Shah, N., "Mathematical programming techniques for crude oil scheduling,"
Computers & Chemical Engineering, 20, Suppl. B, S1227–S1232, 1996.
7 Jia, Z., M. Ierapetritou and J. D. Kelly, "Refinery short-term scheduling using
continuous-time formulation: crude oil operations," Industrial & Engineering
Chemistry Research, February 2002.
8 Jain, V. and I. E. Grossmann, "Algorithms for hybrid MILP/CP models for a
class of optimization problems," INFORMS Journal on Computing, 13,
258–276, 2001.
9 Graves, S. C., "A review of production scheduling," Operations Research, 29, 4,
646–675, 1981.
Coming next month: Part 2 will describe more of the details involved
in optimizing refinery feedstock scheduling.
HYDROCARBON PROCESSING, JUNE 2003
J. D. Kelly is a chemical engineer and has a master's degree
in advanced process control from McMaster University. He has
worked as an advanced control engineer at both Shell Canada
and Imperial Oil, including implementing real-time optimization programs and tac-
tical planning and scheduling solutions in their refineries. Mr. Kelly has installed
plant-wide data reconciliation packages in several oil refineries around the world
and he has written many academic publications on the subject. He is now a solu-
tions architect for advanced planning and scheduling at Honeywell Industry
Solutions in Toronto, Canada.
J. L. Mann is a chemical engineer with a bachelor of applied
science degree from the University of Toronto. He has worked as
a design engineer and as a simulation engineer at Imperial Oil,
including developing refinery-wide simulation tools to support planning and
scheduling activities. Mr. Mann has worked on a number of plant information sys-
tem projects with a focus on integrating plant data collection systems to plant-
wide yield accounting systems. He now is a business architect for Honeywell
Industry Solutions in Toronto, Canada.
INSTRUMENTATION/PLANT OPTIMIZATION
JULY 2003 HYDROCARBON PROCESSING
Crude oil blend scheduling optimization: an application with
multimillion dollar benefits, Part 2
The ability to schedule the crude oil blendshop more effectively
provides substantial downstream benefits
J. D. KELLY* and J. L. MANN, Honeywell Industry Solutions, Toronto, Ontario, Canada
To provide more specific information on the formulation we
must first talk about the problem variables. These can be
classed into continuous and combinatorial variables. Con-
tinuous variables are the quantity and quality variables, and the
combinatorial variables are the logic or discrete variables. There are
also auxiliary or intermediate variables, such as startup (and shut-
down or switchover) times, flow times and yield variables, that are used to
support solving both the logistics and quality subproblems. Bounds
and constraints associated with these variables follow.
Quantity details (hydraulic capacities). There are essen-
tially three types of hydraulically related quantity bounds: flowrate,
flow and inventory. Each of these has continuous variables asso-
ciated with them in both the logistics and quality optimizer for-
mulations. All inventory variables are related to the flows through
the material balances on each piece of equipment.
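The material balance tying inventories to flows can be written period by period. A minimal discrete-time sketch for one tank (the function name and data are illustrative, not from the authors' formulation):

```python
# Discrete-time inventory (material) balance for a single tank:
#   inv[t] = inv[t-1] + flow_in[t] - flow_out[t]
# with the inventory bounds checked in every time period.

def inventory_profile(opening, flows_in, flows_out, inv_lo, inv_hi):
    inv, profile = opening, []
    for f_in, f_out in zip(flows_in, flows_out):
        inv = inv + f_in - f_out
        if not (inv_lo <= inv <= inv_hi):
            raise ValueError(f"inventory bound violated: {inv}")
        profile.append(inv)
    return profile

# A tank with 20-80 kbbl working capacity over four periods.
profile = inventory_profile(
    opening=50.0,
    flows_in=[10.0, 0.0, 0.0, 30.0],
    flows_out=[0.0, 20.0, 15.0, 0.0],
    inv_lo=20.0, inv_hi=80.0)
```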
Flowrate bounds are capacity bounds associated with a move-
ments process and transfer-type equipment such as pipestills,
headers, line segments, pumps, valves, etc. They specify how much
material can flow within a certain amount time through the piece
of equipment and are defined by an upper and lower bounds.
Flow bounds specify a quantity of material that can be trans-
ferred from one piece of equipment to another. They extend the
flowrate bound to fully describe a supply or demand order. Know-
ing the rate and the quantity determines the duration. Both flow
and flowrate bounds are associated with a connection between a
source and destination piece of equipment and ultimately relate to
the underlying limiting or shared transfer-type piece of equip-
ment that moves the material from the source to the destination.
Inventory bounds are capacity bounds for inventory-type
equipment such as spheres, tanks or drums. They specify how
much material can be stored in a piece of equipment and are
defined by an upper and lower bound.
Logic details (operating rules). Fourteen different kinds
of logic constraints are typical of a crude oil blendshop opera-
tion. This list is not exhaustive but is a very reasonable starting
point. As mentioned, to model these constraints we need to have
logic variables or combinatorial variables. These are also referred
to as 0-1 or binary variables and are associated specifically with a
flow between source and destination equipment. Zero indicates
the flow is inactive and one implies the flow is active and must be
between its lower and upper flow bounds. We also have two other
logic variables to indicate when a flow route has been started up
(the time it is made active) or has been shut down (the time it is
made inactive); these variables are also used to model transitions
or switchovers.
Semicontinuous (SC) constraints represent a flow that can be
zero or between a lower or upper bound. Without SC constraints
the logistics problem would become a linear program and not a
mixed-integer linear program.
Standing gage (SG) constraints enforce the practice for tanks
where there can be flow in or out but not both at the same time
(mutually exclusive). SG constraints are useful to decouple the
production chain from the supply chain upon receipt of a crude oil
delivery for example, and to enable tank level differences to be
used as a cross check for custody transfer meters.
Mixing delay (MD) constraints restrict flow out of a tank
until a certain amount of time has passed after the last flow in. A
tank must have SG constraints set for mixing delay to be used.
The MD constraints are useful, for example, to allow time to separate
ballast and free water after a marine vessel unload.
One flow in (OFI) constraints prevent more than one flow
into a piece of equipment at a time. OFI constraints are useful to
model cases where a pipestill can only be fed from one tank at a
time for example.
One flow out (OFO) constraints prevent more than one flow
out of a piece of equipment at a time. OFO constraints are useful
to model cases where a pipeline can only discharge to one tank
at a time.
Contiguous order fulfillment (COF) constraints define a
flow to be of fixed quantity and fixed rate over specified start and
end times. They are typical of pipeline receipt and delivery orders.
In these cases the flowrate is equal to the quantity divided by the
difference between the end and start times. These order fulfill-
ment types are such that there is a contiguous or consecutive flow
between the order start and end time (i.e., an uninterrupted or
non-preemptive flow).
Noncontiguous order fulfillment (NOF) constraints define
order fulfillment to be the opposite of the COFs. Arrival and
departure dates are specified for supply orders and release and due
dates are specified for lifting orders. The NOFs are defined with
a specified order quantity such that between the arrival and depar-
ture date or release and due date, the cumulative quantity of mate-
rial that has flowed from the pipeline or to the pipestill equals
that specified by the order. This implies that there can be non-
contiguous or nonconsecutive flows from or to a piece of equip-
ment (i.e., an interruptible or preemptive flow). Arrival and depar-
ture dates are useful for handling marine vessel unloading when
arrival due to inclement weather conditions causes higher than
normal uncertainty levels. Release and due dates are useful for
specifying pipestill production mode orders because the planning
solutions will typically say how much crude oil to process within
a particular time horizon with the detailed flow scheduling to be
determined by the scheduling optimization program.
Lower up-time (LUT) constraints are identical to minimum
production run length-type constraints. They are used to specify
a minimum time a particular movement needs to be up or active
before shutting down or becoming inactive.
Upper up-time (UUT) constraints are used to specify a max-
imum contiguous time a movement can be up before it is required
to be shut down.
Equal flow (EF) constraints force the same flow value for a col-
lection of time periods where the movement is contiguously or
consecutively active. Either a lower or upper up-time must be
specified before the equal-flow constraints will be added for that
particular source-destination pair.
Switch-over-when-empty (SWE) constraints indicate a move-
ment cannot switch over or shut down until tank inventory is less
than some specified threshold. This is useful when a charge or
feed tank must be near empty before it can be shut down or before
another tank can be used to charge the pipestill.
Switch-over-when-full (SWF) constraints are very similar to
the SWE constraint where a movement cannot be shut down until
the tank is full. This is useful for receiving or storage tanks when
being fed from a pipeline because it tries to fill a tank before mov-
ing on to another one if the volume or quantity of the delivery
order is greater than the available ullage.
Startup opening (SUO) bounds are applied to a particular
startup variable for a movement and are used to restrict the time
of day when that movement can be started up. For example, it
may be useful to only have a switchover to a different tank of
crude oils feeding a pipestill during the day shift (between 8:00 a.m.
and 4:00 p.m.).
Shutdown opening (SDO) bounds are similar to the SUO
except that they tell the logistics optimizer when a possible move-
ment shutdown can occur.
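Several of these operating rules can be checked directly on a discrete-time movement schedule. The sketch below covers three representatives (SG, OFI and LUT) on 0-1 activity profiles indexed by time period; the data and function names are illustrative, not the authors' model:

```python
# Checks for three representative logic constraints on 0-1 activity
# profiles indexed by time period (illustrative data only).

def standing_gage_ok(flow_in_active, flow_out_active):
    """SG: a tank may have flow in or flow out in a period, never both."""
    return all(not (i and o) for i, o in zip(flow_in_active, flow_out_active))

def one_flow_in_ok(inlet_profiles):
    """OFI: at most one inlet movement active in any period."""
    return all(sum(col) <= 1 for col in zip(*inlet_profiles))

def lower_up_time_ok(active, min_up):
    """LUT: every contiguous run of activity lasts at least min_up periods."""
    run = 0
    for a in active + [0]:  # trailing sentinel closes the final run
        if a:
            run += 1
        elif run:
            if run < min_up:
                return False
            run = 0
    return True

assert standing_gage_ok([1, 0, 0], [0, 0, 1])           # in and out never overlap
assert one_flow_in_ok([[1, 0, 1], [0, 1, 0]])           # one inlet at a time
assert lower_up_time_ok([1, 1, 1, 0, 1, 1], min_up=2)   # runs of 3 and 2
assert not lower_up_time_ok([1, 0, 1, 1], min_up=2)     # first run too short
```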
Quality details (property specifications). The inten-
sive properties of the crude oil mixtures charging the refinery must
be carefully regulated for the pipestill to meet the downstream
quality stipulations or specifications when operated in a particu-
lar production mode. These qualities are associated with the tem-
perature cutpoints or cuts of the different hydrocarbon streams
being separated by the pipestill and must be modeled as continu-
ous variables in the quality optimizer formulation. Quality bal-
ances or equations must be associated with each quality through-
out the entire blendshop where we model tanks as perfectly mixed
vessels. The quality balances force the subproblem to be nonlin-
ear due to the product of quantity (flow and inventory) times
quality. Quality splitter equations model the situation of multiple
simultaneous flows out of equipment to ensure that each outlet
stream has the same quality as all of the other outlet streams. Fol-
lowing is a fairly complete list of the many streams produced
by the pipestill, or atmospheric and vacuum distillation unit, with
properties that could typically be assigned or measured for
the pipestill output streams.
Wet or saturated gas cut/properties include both the vol-
ume and weight yields of the pure components methane, ethane,
propane, iso- and normal-butane, specific gravity, etc.
Light and heavy straight-run naphtha cut/properties include
both the volume and weight yields, paraffins, olefins, naphthenes
and aromatics (PONA), Rvp, octane, specific gravity, sulfur, etc.
Jet fuel and kerosene cut/properties include both the vol-
ume and weight yields, cloud point, freeze point, pour point, spe-
cific gravity, sulfur, etc.
Diesels and middle distillates cut/properties include both the
volume and weight yields, cloud point, flash point, pour point, spe-
cific gravity, sulfur, viscosity, etc.
Heavy distillates cut/properties include both the volume and
weight yields, basic nitrogen, metals (nickel, vanadium, iron),
refractive index, specific gravity, sulfur (total and reactive), vis-
cosity, etc.
Light and heavy vacuum gas oils cut/properties include
both the volume and weight yields, base oils, basic nitrogen, met-
als (nickel, vanadium and iron), refractive index, specific gravity,
sulfur (total and reactive), viscosity, etc.
Vacuum residue or pitch cut/properties include both the
volume and weight yields, asphaltenes, base oils, carbon number,
metals (nickel, vanadium and iron), penetration, specific gravity,
sulfur (total and reactive), viscosity, etc.
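The perfectly-mixed-tank quality balance described above is bilinear (quantity times quality); for a single receipt it reduces to a flow-weighted average. A minimal sketch, with illustrative numbers:

```python
# Perfectly mixed tank: blending a receipt into existing inventory.
# The bilinear balance  inv*q_tank + flow_in*q_in = (inv + flow_in)*q_new
# rearranges to a flow-weighted average for the new tank quality.

def mixed_quality(inv, q_tank, flow_in, q_in):
    """New tank quality (e.g., wt% sulfur) after a perfectly mixed receipt."""
    return (inv * q_tank + flow_in * q_in) / (inv + flow_in)

# 40 kbbl at 0.5 wt% sulfur receives 10 kbbl at 1.5 wt% sulfur.
q_new = mixed_quality(inv=40.0, q_tank=0.5, flow_in=10.0, q_in=1.5)  # 0.7 wt%
```

It is this product of a quantity variable and a quality variable, repeated across every tank and period, that makes the quality subproblem nonlinear.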
Logistics and quality objective function details. Now
that we have enumerated the variables and constraints of the two
subproblems it is important to talk about the driving force for
optimization. This underlying forcing function is the objective
function that is continuously being maximized during the course
of the logistics and quality searches over the entire scheduling
horizon (start-to-end of schedule). Both the logistics and quality
objective functions are separated into three terms.
The first term is profit defined as revenue of crude oil mixtures
minus the feedstock costs of the delivered crude oils and any inven-
tory holding or carrying costs for both types of tanks. The profit
function is identical to both the logistics and quality subprob-
lems although the quality profit term can be extended to include
individual revenue generated from the cut yield flows. The sec-
ond term is required to maximize performance. Performance for the
logistics subproblem is defined so as to minimize the number of
active movements and the number of movement startups and
shutdowns (i.e., transitions or switchovers). Another term in the
performance category is to minimize deviation of any tank inven-
tory from a closing inventory target specified by the user. This is
also used in the quality subproblem but is extended to include
deviations from user-specified quality targets on the crude oil
compositions and cut properties. Ad hoc performance weights
are usually used for each performance type and can be tuned based
on the priority level dictated by the scheduling user.
The third term is very important when solving real-world prob-
lems. Not all input data required to solve for optimized sched-
ules is good or free of gross errors (see Kelly10 for a list of possible
sources of error). Therefore, we must always anticipate that some
infeasibilities may occur before the data have been optimized and
carefully cross-checked for validity. In light of this, all quantity,
logic and quality constraints have artificial or penalty variables
associated with them. Each penalty variable is weighted and min-
imized in the objective function so that the most important busi-
ness practices at a site are respected when they can't all be met. If
the problem data are free of gross errors or flaws (as some people
refer to them) then the penalty variables will be driven to zero by
the optimizer meaning all business requirements are satisfied. The
penalties are also known in the planning domain as infeasibility
breakers or safety valves.
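The penalty-variable idea can be illustrated on a single soft (elastic) bound: a weighted slack absorbs any violation, and a slack driven to zero at the optimum means that business requirement was met. Names and weights below are illustrative:

```python
# Soft constraint: x <= limit + slack, slack >= 0, with the weighted
# slack minimized in the objective as an "infeasibility breaker."

def soft_violation(x, limit):
    """Value of the penalty (artificial) variable for the bound x <= limit."""
    return max(0.0, x - limit)

def penalized_objective(profit, violations, weights):
    """Profit minus weighted penalties; weights rank business practices."""
    return profit - sum(w * v for w, v in zip(weights, violations))

# Two soft bounds: one satisfied (slack zero), one violated but absorbed
# so that the schedule still solves rather than failing outright.
v1 = soft_violation(95.0, limit=100.0)   # 0.0 -> requirement met
v2 = soft_violation(108.0, limit=100.0)  # 8.0 -> flagged to the scheduler
obj = penalized_objective(profit=500.0, violations=[v1, v2], weights=[10.0, 2.0])
```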
Ultimately, the scheduling optimization objective function is
used to balance the three costs of manufacturing: cost of renew-
able and nonrenewable resources (i.e., materials, equipment, labor,
utilities, chemicals, etc.), inventory (i.e., it costs money to store
materials and equipment) and transitions (i.e., startups, shutdowns,
changeovers, switchovers, sequencing, etc.).
Typical planning optimization systems only
include the resource and inventory costs and
do not model transition costs. The major rea-
son is due to the mathematical intractability
of solving simultaneously for quantity, logic
and quality given today's state of optimization
technology. Consequently, transition costs are
excluded from the planning models and only
quantity and quality details are formulated,
except for minor logic details concerning cargo
or batch size increments for feedstock avail-
ability. Because transition costs are relegated
to the scheduling layer, all planning solutions
are overoptimized. This implies that all plan
versus schedule or plan versus actual analysis will have inherent
biases or offsets even if measurement, model, solution and exe-
cution errors are negligible10 and strongly suggests that these
biases be interpreted carefully.
Time modeling. Both planning and scheduling involve time
considerations. There are principally two types of time model-
ing. The first and most used and studied is time discretization
into predefined fixed duration time periods but not necessarily
of equal duration over the scheduling horizon. All activities are
defined to start and end at the time period boundaries and are
piece-wise continuous over the time period duration.
The second time model is the most elegant and is that of con-
tinuous-time modeling whereby activity start and end times are
included explicitly as optimization variables. An example of con-
tinuous-time formulation of the crude oil blend scheduling opti-
mization can be found in Jia et al.7 Continuous-time models also
have the notion of time periods except that these have variable
durations determined by the optimizer.
The recognized disadvantage of discrete-time formulations is
that they require a large number of time periods to model the
smallest-duration activities; continuous-time modeling, however,
enables each piece of equipment to have its own timetable. This
removes the need to artificially synchronize all equipment to be on
the same timetable and thus reduces the number of logic or binary
variables. There are nonetheless advantages of discrete time in
that it scales well when long time horizons are required for what-
if studies, because larger time period durations can be used, and it
can easily handle time-varying quantity bounds and out-of-ser-
vice orders. With continuous-time formulations, time-varying
tank inventory bounds, for example, require extra binary vari-
ables to be generated for the optimizer to assign which time period
the tank inventory capacity change is to take place even though we
know explicitly the event time of the change. Therefore, for the
immediate future, discrete-time formulations seem to have value
over continuous-time formulations given the previous discussion,
yet in the end both discrete- and continuous-time formulations
should be available to the scheduling user.
One final note on time models: the popular distinction now
between production planning models and production schedul-
ing models, with underlying structures of the lot-sizing problem,
is through the notion of big buckets and small buckets to discretize
time. This can be found in Belvaux and Wolsey,11 who also have
LOTSIZELIB, a library of diverse lot-sizing problems. The fun-
damental difference between big and small buckets, where big
buckets are used typically to model planning problems, is that big
buckets are those in which several materials can
be produced on a convergent-flow-path piece
of equipment, such as a blend header, during a
single time period. Small buckets are typically
used to model scheduling problems where only
one material can be produced on a single piece
of equipment at a time (single-use or unary
resource logic constraints). Small time buckets
are used to model startups, switchovers and
shutdowns as is the case in our formulation of
the crude oil blend scheduling optimization
problem.
Segregating crude oils into tanks. A
salient aspect of crude oil handling and blending is that of segre-
gating crude oils into specific tanks. Segregation is used to sep-
arate disparate crude oil types into different tanks to maintain the
flexibility or controllability to blend to specific cut property val-
ues (i.e., specification blending as opposed to recipe blending).
The first requirement of crude oil segregation is to understand
the key cut property constraints.
From a degrees-of-freedom analysis, the number of key con-
straints must be less than or equal to the number of tanks used
to blend the crude oil mixtures (i.e., typically the number of receiv-
ing tanks). In the example where there are two receiv-
ing tanks, at most two cut properties can be controlled at any
given time. Since there is a supply order of 5 Kbbl/hr this reduces
the number of degrees-of-freedom by one and hence, only one
cut property at any time can be controlled. Once the key cut prop-
erties have been identified then the crude oils should be separated
according to the level of each in the crude oil. All in all, effective
segregation can be difficult to figure out but can be automated
following the control and optimization techniques found in Kelly
and Forbes.2
Usually very simple isolation rules are applied based on crude oil
bulk sulfur or density levels. Another relevant reason segregation
is used is to reduce complexity of the logistics subproblem. When
we preassign specific crude oils into tanks the number of choices
where an individual crude oil receipt can be stored is circum-
scribed by the segregation. In our scheduling formulation we han-
dle segregations by pruning the available connections between
pipelines and receiving tanks. For instance, in the example with two
receiving tanks and two segregations, light and heavy crude oils,
only light crude oils 3 and 4 are allowed in TK1 and only heavy
crude oils 1 and 2 are allowed in TK2. Hence, of the possible
eight crude oil-based connections (two tanks times four crude
oils) only four are allowed.
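The connection pruning in this two-tank, four-crude example can be sketched directly (the tank and crude labels follow the text; the data structures are illustrative):

```python
# Pruning crude-oil-to-tank connections by segregation: light crude
# oils 3 and 4 are allowed only in TK1, heavy crude oils 1 and 2 only
# in TK2, reducing the eight possible connections to four.

segregation = {"TK1": "light", "TK2": "heavy"}
crude_class = {1: "heavy", 2: "heavy", 3: "light", 4: "light"}

all_connections = [(c, t) for c in crude_class for t in segregation]
allowed = [(c, t) for c, t in all_connections
           if crude_class[c] == segregation[t]]
```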
Continuous and batch blending. When most people think
of blending in the process industries they envision simultaneous
mixing of the blend constituents or components in some mixer or
blend header. This is known as continuous blending. When we
are solving the logistics subproblem, a fixed recipe or bill-of-mate-
rials is required that relates the blend volume size to the fractions
of each component material feeding the blend header. This is
known as recipe blending. In the quality subproblem, specifica-
tion blending is performed whereby the recipe is determined based
on the property specifications of the blended product. Continu-
ous blending is relatively straightforward to model because at
every time period we impose either the recipe constraints in the
logistics optimizer or the quality constraints in the quality opti-
mizer. However, in the quality optimizer, specification blending
makes the problem nonlinear.
Batch blending can be considered as the opposite to continu-
ous blending similar to batch distillation or separation. Batch
blending mixes the required components sequentially in a desti-
nation tank with the components typically being fed one after
the other. Both recipe and specification blending are achievable
using batch blending similar to the continuous blending. Yet
instead of the blending constraints being set up for each time period, batch blending requires the constraints to be specified over a time window made up of two or more time periods so that the component additions enter the equations cumulatively. In our
example we employ batch blending at the transferline with the
restriction that components can flow into the transferline one at
a time. The time window we use for our example is arbitrarily
chosen at 20 hr.
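The cumulative, windowed form of the recipe constraint can be sketched as follows; the function, flows, and tolerance are illustrative assumptions rather than the article's formulation:

```python
# Illustrative check of a batch-blending recipe over a time window (the
# 20-hr window is from the text; the flows and recipe are assumed numbers).
def window_recipe_ok(flows, recipe, tol=1e-6):
    """flows: {component: [hourly volumes over the window]};
    recipe: {component: target fraction of the cumulative blend}."""
    totals = {c: sum(v) for c, v in flows.items()}
    grand = sum(totals.values())
    return all(abs(totals[c] / grand - recipe[c]) <= tol for c in recipe)

# Light then heavy crude fed sequentially into the transferline, one at a time:
flows = {"light": [5] * 10 + [0] * 10, "heavy": [0] * 10 + [5] * 10}  # 20-hr window
print(window_recipe_ok(flows, {"light": 0.5, "heavy": 0.5}))  # True
```

Note that no single hour satisfies the 50:50 recipe; only the cumulative totals over the window do, which is exactly why the constraints must span multiple time periods.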
It is also important to mention that the components included in the blending equations are not the individual crude oils, but the crude oil segregations or mixtures. For instance, in our example, the two blending components are light and heavy crude oils.
Last of all, if we could solve the overall problem simultane-
ously for quantity, logic and quality then we would not have to con-
cern ourselves with the side issues of segregating crude oils into the
receiving tanks and specifying a nominal recipe for the blend head-
ers. These aspects would be dealt with effectively by the single
optimizer, and it would determine where to put the crude oils upon delivery and how much of each crude oil mixture from each receiving tank should be sent through the blend header. The
only other effect that would preclude us from achieving almost
perfect crude oil blend scheduling optimization would be the
type, sequence and amount of each crude oil supply order and
potentially the production run schedule on the pipestills. Unfortunately, simultaneous quantity, logic and quality solutions are not attainable given the present state of optimization technology; hence, the onus is on the scheduling user to properly configure the system to help overcome the solver limitations and to go
on to generate better-than-spreadsheet or simulator-type schedules.
Solving the problem for logistics and quality. Since
both the logistics and the quality subproblems have been care-
fully formulated as mathematical programs, solving them using
commercially available optimization codes is our next step to
achieve better crude oil blend scheduling optimization. From the
perspective of finding optimized solutions, we can class all solutions
coming out of both the logistics and the quality optimizers as
infeasible, feasible, approximate (locally optimal) and globally opti-
mal, given that both subproblems are known to be nonconvex.
an iteration loop to converge when they are present. It has the
advantage of being able to handle discontinuous and complex
nonlinear functions to model difficult reaction kinetics and fluid
mechanics. The SEA is sometimes referred to as the open form
approach found in process optimizers. It has the disadvantage that
all nonlinear equations must be continuous and once differen-
tiable but has the advantage of being able to handle the reversal-
type flows easily. The SEA requires the topology to be an implicit
part of the model to allow for easy handling of anywhere to any-
where-type of blendshop networks. For our scheduling application
we use the SEA. Specifically, the SEA is well suited to crude oil
blendshop simulations because we blend or mix linearly by either
volume or weight.
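Linear blending by volume, as mentioned above, is just a flow-weighted average; the sketch below uses specific-gravity values chosen purely for illustration (they are not from the article's crude assay data):

```python
# Linear volumetric blending: a flow-weighted average of the property.
def blend_property(flows, props):
    """Return the flow-weighted average of a linearly blending property."""
    total = sum(flows)
    return sum(f * q for f, q in zip(flows, props)) / total

# Equal light/heavy flows with assumed specific gravities of 0.840 and 0.884:
print(round(blend_property([5.0, 5.0], [0.840, 0.884]), 3))  # 0.862
```

The nonlinearity the text refers to arises not from this averaging itself but from the products of unknown flows and unknown qualities when both are decision variables.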
Logistics solving methods. Although there is a paucity of
literature documenting the quantity and logic formulation of con-
tinuous/semicontinuous (CSC)-type processes, there is, however,
a remarkable amount of literature on the techniques being used to
formulate batch/semi-batch (BSB) type pro-
cesses both in the operations research (OR)
literature and in the chemical engineering
journals on process synthesis engineering.
That said, the underlying mathematical pro-
gramming theory used to aid in formulat-
ing the crude oil blendshop problem was
mostly found in the OR literature¹³⁻¹⁵ and relates to the classic problem formulations of the fixed-charge network flow, lot scheduling and facility location problems.
At the core of the logistics optimization is the use of the branch-and-bound (B&B) search heuristic with linear programming (LP) as the underlying sub-optimization method; this
is also commonly referred to as mixed-integer linear programming
(MILP). It is well known and can be found in many textbooks.
B&B is an exact search method in that if given enough time it
would arrive at the global optimum. The B&B begins by solving an
LP with all of the binary variables relaxed to lie between zero and one.
Then the search begins to successively fix binary variables to either
zero or one based on elaborate variable selection criteria and solving
an LP for each newly bounded binary variable. After each LP, which are called the B&B nodes, another selection criterion is required to choose which node will be branched on next. The B&B will terminate, kill or fathom a branch of the search tree for two reasons. The
first happens when a node along the branch is recognized to be
infeasible. The second is called value dominance and happens when the node's objective function value is less than the value of the incumbent integer-feasible solution for maximization problems.
The incumbent integer-feasible solution is the last solution found
that has all binary variables at the extremes of either zero or one.
Consequently there is no sense continuing a search on a branch
that is infeasible and it does not seem beneficial to follow a branch
that is not as good as the current integer-feasible solution found so
far. This technique can have other flavors to the search such as
breadth first and depth first with backtracking and more details can
be found in standard textbooks on integer programming. More-
over, other enhancements to the B&B search include cutting planes,
special ordered sets and variable prioritization which in general can
speed the search to find good integer-feasible solutions.
Unfortunately even with the most efficient formulation, clever-
est B&B search and fastest LP code, finding good integer-feasible
or approximate solutions can be very time consuming. Hence,
we must be somewhat more pragmatic from the perspective of
the quality of the logistics solutions that can be found in reason-
able time. To help speed the search, a myriad of heuristics have
been the focus of much research in both OR and artificial intelli-
gence (AI). These are referred to as primal and meta-heuristics.
Primal heuristics use the results of the LP solutions and successively round and fix binary variables to either zero or one. Examples are the pivot-and-complement,¹³ relax-and-fix,¹⁵ dive-and-fix,¹⁵ smooth-and-dive¹⁶ and chronological decomposition.¹⁷ Metaheuristics use a metaphor usually found in nature to devise a search
strategy that exploits a particular nuance of the natural mecha-
nism. Examples include the genetic algorithm, tabu search, scat-
ter search, simulated annealing, ant colony optimization and
squeaky wheel optimization.
Many other heuristics or approximation algorithms can be
found in the OR and AI literature and are basically separated into
two categories: greedy search and local search. Greedy searches are typically used to quickly find integer-feasible solutions in some greedy fashion with a myopic view of the search space. Greedy searches tend to exploit some detail of the problem to enable some fixing of the binary variables. Local searches are basically a refinement on top of greedy searches to try to find better solutions, essentially using a trial-and-error approach, in the neighborhood of the greedy search solutions. An interesting example of local search applied to lot-sequencing can be found in Clark.¹⁸ All
in all, most relatively successful heuristics for practical-size problems require some form of a B&B search with backtracking and will
typically embed a commercial B&B code in the algorithm.
Quality solving method. Solving for the quality variables of the
problem is carried out using well-established successive linear pro-
gramming (SLP). SLP technology is the cornerstone of all solving
methods found in oil refinery and petrochemical large-scale planning systems. An example SLP algorithm can be found in Palacios-Gomez et al.,¹⁹ which in spirit is used by many of the SLP solvers today. The success of SLP as the method of choice for solving industrial-size planning and scheduling problems arises from its use of the LP. As LP technology improves, SLP technology improves, because the major iterations of the SLP are simply the LP solutions. Although SLPs are well
documented to be more suitable for mildly nonlinear problems with
either none or only a few degrees-of-freedom at the optimum (i.e.,
otherwise known as superbasic variables), the maturity of LP tech-
nology plays a major role in the SLP success over other nonlinear
solvers such as successive quadratic programming or conjugate-gra-
dient methods for example.
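The SLP iteration the text describes, linearize the objective, solve an LP, step, can be sketched on a toy box-constrained problem. The objective, the shrinking step bound, and the vertex-picking shortcut below (a linear objective over a box is optimized at a vertex) are assumptions for illustration, not the article's solver:

```python
# Toy successive linear programming (SLP) loop: linearize the nonlinear
# objective at the current point, minimize the linearization over a
# step-bounded box, accept improving steps and shrink the step otherwise.
def slp(f, grad, x0, lo, hi, step=1.0, shrink=0.5, iters=60):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        new = []
        for xi, gi, l, h in zip(x, g, lo, hi):
            lb, ub = max(l, xi - step), min(h, xi + step)
            new.append(lb if gi > 0 else ub if gi < 0 else xi)  # LP vertex
        if f(new) < f(x):
            x = new          # accept the LP step
        else:
            step *= shrink   # reject and tighten the step bound
    return x

# Assumed toy objective (smooth, for illustrating the mechanics only):
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)]
x = slp(f, grad, [0.0, 0.0], [0.0, 0.0], [3.0, 3.0])
print([round(v, 3) for v in x])  # converges to about [1.0, 2.0]
```

On the nonconvex blending problems described in the text, such a loop would only find a local optimum near its starting point, which is why a good starting position matters so much.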
One of the biggest advantages is the use of presolve.¹⁴ Presolve
is applied before any LP is solved and can dramatically reduce LP
matrix size (i.e., fewer rows and columns) through clever tightening, consistency and probing techniques, and can easily remove vacuous and redundant constraints and variables; presolve is
also used in the MILP solutions. While the other nonlinear solvers
could also take advantage of presolve, these nonlinear solvers often
do not employ third-party commercial LP codes that have many
man-years of development implementing incredibly efficient and
fast presolving techniques. A second advantage is the use of inte-
rior-point and simplex (both dual and primal) LP solving meth-
ods where needed in the SLP algorithm. Since commercial LP
codes offer both interior-point and simplex methods, the SLP
program can be tailored to use the appropriate LP method at each
step. Nonlinear solving codes usually use only one solving technique. For example, for large problems it is appropriate to solve the initial LP using the interior-point method and then use the dual-simplex for any subsequent LP re-solves; this is also true for MILP problems.
The need for the SLP formulation is of course borne out by the product of quantity times quality, or a flow times a cut/yield for instance. When blending is performed linearly either by volume or weight, in the absence of any antagonistic or synergistic effects requiring nonlinear blend laws, the problem is bilinear, trilinear and quadlinear. It is trilinear because of the flow times cut/yield times cut/property terms and quadlinear because of the density property required when performing the weight balances. Unfortunately this makes the problem nonconvex, as mentioned, and
to solve it to global optimality necessitates use of the global optimization techniques found in Adya et al.²⁰ Solving to global optimality requires a spatial B&B search similar to the MILP B&B
search except that the branching variables are continuous and not
binary. In our case they would be the flow and quality variables.
Because global optimization is very slow and no commercial software is available, we claim only to search for locally optimal or approximate quality subproblem solutions.
A side benefit of solving the logistics subproblem first in series, then solving the quality subproblem, is that the SLP solves faster than if we were to solve for the qualities first (as in the planning systems that solve for quantity and quality).
The reason is that the logistics solution provides us with an excellent starting position or local neighborhood for the flows and inventories. This aids the SLP, as it is well known that all nonlinear programs do better when better initial guesses are provided.
Example results. Fig. 3 shows one penalty-free logistics solu-
tion with a 10-day time horizon. The blue horizontal bars are the
supply and demand orders. The yellow bars are flow out of the equip-
ment and the green bars represent flow into the equipment. The
trend lines superimposed on the tank equipment show the inven-
tory profiles that are within the limits of their respective upper and
lower bounds. The major ticks on the x-axis are strategically spaced
at a distance of 20 hr and the minor ticks are positioned at every 5 hr.
The y-axis shows the renewable equipment resources starting from
the pipeline at the top down to the pipestill displayed at the bot-
tom. If either quantity or logic penalties were encountered they
would be shown as red bars between the flow to and from bars for each
equipment. It is clear that because there are no penalties there were
no logic constraints (standing gage, mixing delay, minimum up-
time, etc.) violated and all inventory and flow bounds were simul-
taneously respected without incident, i.e., this schedule satisfies
100% of the business practices and needs over the entire horizon.
As can be seen in the figure we have satisfied all of the six supply
orders for the pipeline (PL1) and segregations are properly main-
tained in that only those crude oils that belong to a segregation can
fill a storage tank (TK1 and TK2). Flows from TK1 and TK2 to the
transferline (TL1) comply with the 3-hr minimum run length as well
as the 9-hr mixing delay specification. To observe mixing delay on
tanks count the number of hours from the end of a green in-flow bar
to the start of the first out-flow yellow bar. The long run lengths
for flows from each of the two feed tanks (TK3 and TK4) charging
the pipestill (PS1) also comply with the 19-hr up-time minimum
constraint and 3-hr mixing delay. All standing gage restrictions were
also obeyed since no green and yellow bars overlap for TK1, TK2,
TK3 and TK4. The demand order of continuously charging 5
Kbbl/hr or 120 Kbpd to PS1 was additionally met.
This logistics solution took approximately 60 seconds to generate on a 1-GHz PC, which involved solving an MILP. No special heuristics except for the default settings in the B&B search were used. Table 5 illustrates the power of presolve. The number of inequalities or rows is reduced by 58% and the number of continuous variables or columns is even more dramatically reduced by 77%. The number of nonzeros in the constraint matrix is correspondingly reduced by 61%. Thus, matrix density has increased or, conversely, sparsity has in fact decreased; the matrix has become less sparse after presolve.
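These percentages are easy to verify; the before-presolve row count below is reconstructed from Table 5's damaged digits and the stated 58% reduction, so treat it as approximate:

```python
# Quick check of the presolve reduction percentages quoted above.
def reduction(before, after):
    """Percent reduction from a before-count to an after-count."""
    return 100.0 * (1.0 - after / before)

print(round(reduction(16210, 6814)))  # 58 (rows, per Table 5)

# Density rises because nonzeros shrink less (61%) than rows x columns do:
shrink_nnz = 1 - 0.61
shrink_area = (1 - 0.58) * (1 - 0.77)
print(shrink_nnz / shrink_area > 1)  # True: the matrix is denser after presolve
```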
The logistics solution was then used as input to the quality opti-
mizer where the same 1-hr time period and 240-hr time horizon were
used to generate the quality time profiles. The quality solver took
approximately two seconds to solve. In this case study, only an LP is
required to solve the quality optimization given that no flows were
FIG. 3. This Gantt chart shows one penalty-free logistics solution with a 10-day time horizon. (x-axis: time, hr; y-axis: equipment.)
FIG. 4. Trend of whole crude oil specific gravity cut property. (x-axis: time, hr; y-axis: whole crude oil specific gravity, 0.850 to 0.880.)
adjustable and hence, no nonlinearities present; the lower and upper
flowrate bounds are equal. Figs. 4 to 6 trend the profiles of whole
crude oil/specific gravity, kerosene/pour point and heavy gas oil/sul-
fur cut/properties respectively for only those flows leaving the charg-
ing tanks and entering the pipestill; we do not show any internal flow
or tank qualities. The black line is the actual trace of the cut/property
and the blue line is the planning proxy target found in Table 3.
For the whole crude oil/specific gravity we observe a step-type
function that is due to the business practice of preparing mixes of
crude oils in the feed tanks and then charging that mix from one
tank at a time, emptying each tank before swinging to the other.
The result is an approximate 19-hr run length given the feed tank
capacity and pipestill charge rate. The approximately 0.011 maximum
excursion from the proxy would be improved by using a 50:50 recipe
on the transferline. The 50:50 recipe is driven simply by the fact
that the light and heavy crude oil segregations for whole crude oil spe-
cific gravities would mix to 0.862 if there were also 50:50 mixes in the
receiving tanks of each appropriate crude oil. Unfortunately, this
recipe would cause undue variation in the other two qualities. Because
there are only two receiving tanks, and three qualities plus the total flow to the pipestill to potentially respect, at most we could only reasonably control two of these variables. Since throughput is rarely sacrificed for overall refinery stability and profitability, only one quality
could potentially be controlled. The other two qualities would dis-
play an offset from target (i.e., only one quality can possess reset or
integral action in the context of control theory).
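The degrees-of-freedom counting above can be sketched directly; the helper function is an assumption for illustration, consistent with the text's statement that two tanks leave only one quality controllable when throughput is held:

```python
# Simple degrees-of-freedom count (a sketch, not a control design): each
# independent tank flow is one handle, and fixing total throughput uses one.
def controllable_qualities(n_tanks, throughput_fixed=True):
    return n_tanks - (1 if throughput_fixed else 0)

print(controllable_qualities(2))  # 1: only one quality without offset
print(controllable_qualities(3))  # 2
print(controllable_qualities(4))  # 3: all three qualities controllable
```

This matches the later discussion of adding a third or fourth receiving tank to control two or all three qualities without offset.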
The kerosene/pour point trend does not show any obvious cycle or periodicity as seen in Fig. 4, and there is a large excursion from target between hr 60 and 80 when crude oil #2, delivered at hr 7 with a pour point of 42, starts to percolate through the blendshop to the pipestill. If it were the most important quality bottleneck,
there would be four avenues to reduce this variability. The most
powerful effect is to change the delivery schedule of crude oils to
better manage the pour point quality. This is not always possible
nor is it an option unless sufficient lead time is available to the crude
oil traders and procurers. The second avenue would be to focus on
a segregation recipe that would better control the pour point to the
planning or operational target, although as mentioned this would
be at the expense of the other two qualities. The third approach
would be to alter the segregation policy. The current policy is based
on the whole crude oil/specific gravity whereby crude oils #1 and
#2 are deemed to be heavy and crude oils #3 and #4 are deemed to
be light. If for example kerosene/pour point is the quality of most
importance then it would seem prudent to segregate crude oils #2 and
#3 together as a low pour segregation and crude oils #1 and #4 as a high
pour segregation. This type of analysis can be found in more detail in Kelly and Forbes.²
The fourth avenue along the lines of changing the segregations is
to add a third or even fourth receiving tank. This would be an expen-
sive alternative but may provide a level of flexibility and controllability
well worth the investment. For instance, if a third tank is added, then instead of theoretically being able to control only one quality without offset, we would now be able to control two qualities. With four tanks we would be able to control all three qualities without offset.
The somewhat intangible benefit of controlling more qualities implies that the downstream processes will have fewer disturbances to battle. A
more tangible benefit quantifiable by the planning optimizer would
be the ability to ride closer and closer to the real refinery constraints
or quality bottlenecks as shown in Fig. 2.
Finally, we show the quality profile for the heavy gas oil/sulfur.
An interesting consequence of this trend is the periodicity or cycle
of the variation (i.e., up-down-up using the blue line as the datum).
It appears to be in the range of 100 hr for this set of cycle data. This
means that from a production standpoint, the heavy gas oil inter-
mediate tankage must have sufficient capacity to store 100 hr
worth of heavy gas oil production. The reasoning behind this is that
given the relatively uncontrollable quality variation, due to the
inherent delivery schedule disturbances and limitations in the
blendshop, we need up and down or positive and negative varia-
tion around the planning target over some time frame to acquire
reasonably on-specification quality in the intermediate tanks. The
best alternative of course is to have constant or steady quality (i.e.,
the blue line) since shipment or blending of the intermediate can
FIG. 5. Trend of crude oil kerosene pour point cut property. (x-axis: time, hr; y-axis: kerosene pour point.)
FIG. 6. Trend of heavy gas oil sulfur cut property. (x-axis: time, hr; y-axis: heavy gas oil sulfur, 0.6 to 2.0.)
TABLE 5. Logistics optimization problem statistics.

                   # Rows    # Columns    # Nonzeros    # 0-1 Variables    # SOS1
Before presolve    16 1 0    5 7          611           1500               540
After presolve     6814      7651         0745          1500               540
be performed at any time during production and there will be
less likelihood of off- or over-specification product.
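The tankage implication above reduces to cycle length times production rate; the cycle length is from the text, while the rate below is an assumed number purely for the sketch:

```python
# Rough intermediate-tankage sizing implied by the quality-variation cycle.
cycle_hr = 100            # observed cycle of the heavy gas oil sulfur trend
rate_kbbl_per_hr = 1.5    # assumed heavy gas oil production rate (illustrative)
capacity = cycle_hr * rate_kbbl_per_hr
print(capacity)  # 150.0 Kbbl of buffer tankage to span one full cycle
```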
The next best thing is to have as short a perturbation cycle as
possible. And for this example, it would seem that qualitatively
we should have larger capacity intermediate tanks for kerosene/pour
point than the heavy gas oil/sulfur due to the irregular nature of the
pour point trend. This ultimately implies that the more a given cut varies in quality, the more tankage is required to buffer it so that it is more consistent for blending or as charge to a downstream process unit. Minimal tankage will cause sharp swings in
quality forcing controls to react to the upsets.
It should be emphasized that scheduling is an important decision-making tool, not just to create pro-forma operational schedules but also to help answer tactical business questions such as "Can we run feasibly by trading cargoes and delaying a crude oil delivery of Arabian light by two days?" or "Can we accept another crude oil delivery of Arabian heavy three days from now to fill out our fluidized catalytic cracking unit?" In the same way that planning system users have more than just one planning model, such as for facilities, budgetary, feedstock selection and operation, so can many different types of scheduling models be employed to answer these questions in a timely and accurate manner. HP
LITERATURE CITED
7. Jia, Z., M. Ierapetritou and J. D. Kelly, "Refinery short-term scheduling using continuous-time formulation: crude oil operations," Industrial & Engineering Chemistry Research, February 2002.
8. Jain, V. and I. E. Grossmann, "Algorithms for hybrid MILP/CP models for a class of optimization problems," INFORMS Journal on Computing, 13, 258-276, 2001.
9. Graves, S. C., "A review of production scheduling," Operations Research, 29, 4, 646-675, 1981.
10. Kelly, J. D., "The necessity of data reconciliation: some practical issues," NPRA Computer Conference, Chicago, Illinois, November 2000.
11. Belvaux, G. and L. A. Wolsey, "Lot-sizing problems: modeling issues and a specialized branch-and-cut system bc-prod," CORE Discussion Paper DP9848, Universite Catholique de Louvain, February 1998.
12. Clark, A. R. and S. J. Clark, "Rolling-horizon lot-sizing when setup times are sequence-dependent," International Journal of Production Research, 38, 10, 2287-2308, 2000.
13. Nemhauser, G. and L. A. Wolsey, Integer and Combinatorial Optimization, John Wiley, New York, 1988.
14. Williams, H. P., Model Building in Mathematical Programming, 3rd Edition, John Wiley & Sons, 1993.
15. Wolsey, L. A., Integer Programming, John Wiley & Sons, New York, 1998.
16. Kelly, J. D., "Smooth-and-dive accelerator: a pre-MILP primal heuristic applied to production scheduling problems," Computers & Chemical Engineering, 27, 827-832, 2003.
17. Kelly, J. D., "Chronological decomposition heuristic for scheduling: a divide & conquer method," AIChE Journal, 48, 2995-2999, 2002.
18. Clark, A. R., "A local search approach to lot sequencing and sizing," IFIP WG5.7 Special Interest Group on Advanced Technologies in Production Planning and Control, Florence, Italy, February 2000.
19. Palacios-Gomez, F., L. Lasdon and M. Engquist, "Nonlinear optimization by successive linear programming," Management Science, 28, 10, 1106-1120, 1982.
20. Adya, N., M. Tawarmalani and N. V. Sahinidis, "A Lagrangian approach to the pooling problem," Ind. Eng. Chem. Res., 38, 5, 1956-1972, 1999.
End Part 2. See Hydrocarbon Processing June 2003 for Part 1.
White paper, May 2003, Page 1. PlantWeb, www.PlantWeb.com

Improving availability with PlantWeb
Equipment problems. Over time, even the best equipment can fail because of wear or damage, causes that can be hard to detect before it's too late. What's surprising is that many failures also occur early in the equipment life cycle, often because of improper installation, calibration, or startup.
Operations problems. Process conditions and events trigger many outages, either directly or by causing equipment failures.¹ These operations-related failure sources include:

- Constraint violations
- Interruptions in feed, fuel, steam, or power
- Coking, fouling, freezing, plugging
- Corrosion or tube leaks
- Process transitions
- Operator errors
Maintenance problems. Basing maintenance programs on the calendar
or run-time rather than actual equipment condition can mean shutting
down the process (or extending a shutdown) for work that may not be
necessary. When there is a problem, finding the cause can be a lengthy
process. And maintenance actions themselves can result in equipment
contamination, misalignment, and other errors that lead to premature
failure and more downtime.
What if you could minimize these sources of downtime in your operation?
Higher availability =
higher profit
Even the best plants have some downtime. What makes them the best
is keeping availability as high as possible.
In fact, when major operational drivers such as productivity, feedstock
costs, fuel or energy costs, emissions compliance, and waste disposal
costs are taken into account, availability is the factor that differs most
between the worst- and best-performing plants. That difference covers a span from as low as 72% availability to as high as 95%.²
Across industries, best- and worst-performing plants have significantly different levels of availability.²

Process Type                 Worst    3rd quartile    2nd quartile    Best
Continuous                   < 78%    78-84%          85-91%          > 91%
Batch                        < 72%    72-80%          81-91%          > 90%
Chemical, Refining, Power    < 85%    85-90%          91-95%          > 95%
Paper                        < 83%    83-86%          87-94%          > 94%
If your plant is capacity-limited, higher availability lets you boost output to meet demand without investing additional capital in production facilities. That's a sure-fire way to increase profit and ROI.
Consider a typical plant that generates $500 million per year in revenue at
85% availability. Each incremental hour of production is worth
approximately $67,000. If variable costs are 60% of total cost, almost
$27,000 of that added revenue is operating profit. In this case, increasing
availability from 85% to 90% (reducing downtime by 438 hours per year)
would boost annual profit by more than $11.7 million.
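The paragraph's arithmetic can be reproduced directly (an 8,760-hour year is assumed):

```python
# Reproducing the white paper's availability arithmetic.
HOURS = 8760                                   # assumed hours per year
revenue = 500e6                                # $/yr at 85% availability
hourly_revenue = revenue / (HOURS * 0.85)      # ~$67,000 per production hour
hourly_profit = hourly_revenue * (1 - 0.60)    # variable costs are 60% of total
extra_hours = HOURS * (0.90 - 0.85)            # 438 hr of downtime removed
gain = extra_hours * hourly_profit             # ~$11.7 million added profit
print(round(hourly_revenue), round(extra_hours))  # 67150 438
```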
If your production is market-limited, on the other hand, higher availability can enable you to use fewer assets to meet existing demand. For example, output levels that previously required five production units might be met with only four, reducing operations and maintenance costs, allowing you to use your most efficient units to meet demand, and freeing the other unit to make other products.
Keeping those units up and running also means fewer efficiency-robbing
outages, reducing costs for fuel or energy, materials, and scrap or rework.
You'll also gain the flexibility to expand production quickly when higher demand levels require it.
Finally, with higher availability, you won't have to maintain as much excess production capacity to allow for downtime. One worldwide refiner has estimated that 10% of its capital is in place to compensate for unscheduled downtime.
But if the benefits are so great, why hasn't every plant already maximized availability?
The information situation:
Too little, too late
The best way to increase availability is to detect and correct potential problems before they cause downtime. The problem is that early warning signs of these problems can be hard to spot, especially if you're limited to the information available through traditional automation architectures.

A traditional control system can't show you much more than the process variable and any associated alarms or trends. You don't know what's happening in the equipment itself. If an instrument's signal falls within the expected range, for example, it's assumed to be working properly.
But such assumptions can be risky. The signal could have drifted. A sensor may be reading the pressure in a plugged impulse line rather than the process. A control valve may not be responding properly. Unless an experienced operator notices that something doesn't look right, the problem may continue until the equipment fails or the process exceeds constraints, causing unexpected downtime.
Stuck with the wrong strategy

Without a clear view of actual equipment condition, plants are largely limited to reactive and preventive maintenance strategies.

Reactive maintenance, also known as "run to failure" or "fix it when it breaks," obviously runs the risk of unplanned downtime when equipment fails. The time and cost to repair (or replace) failed equipment can also be much higher than if problems were detected and fixed earlier.
Calendar- or run-time-based preventive maintenance ("fix it just in case") can reduce the risk of unplanned downtime, but servicing equipment that doesn't need it yet increases the length and frequency of planned shutdowns, as well as the risk of maintenance-induced problems.

A typical plant caught in the reactive/preventive maintenance cycle may have plant availability as low as 70-75%, with annual maintenance costs that can exceed 15% of asset replacement value.³
Contrast these approaches with a predictive maintenance strategy that
constantly monitors equipment condition and uses the information to
predict when a problem is likely to occur. With that insight you can
schedule service when it will have the least impact on availability, such as
during a planned shutdown but before the equipment fails or causes a
process upset.
A best-practices plant uses predictive maintenance for most equipment where condition monitoring is practical, limiting reactive and preventive strategies to equipment that's not process-critical and will cause little or no collateral damage if run to failure. Such a plant can have availability as high as 95% and annual maintenance costs below 2% of asset replacement value.³
Before that can happen, however, you need a way to access and monitor
equipment information so you can detect potential problems in time.
The answer: Predictive intelligence
With its PlantWeb digital plant architecture, Emerson Process Management offers technology and services that enable you to see what's happening in your equipment and process, identify conditions that lead to downtime, deliver the information wherever it's needed, and take action to maximize availability. We call this predictive intelligence.
Providing new insights. Digital technology makes it possible to access
and use new types of information that go far beyond the PV signals
available through traditional automation architectures. With PlantWeb
architecture, both the breadth and depth of this information are
unprecedented.
It starts with intelligent HART and FOUNDATION fieldbus instruments --
including transmitters, analyzers, and digital valve controllers -- that use
on-board microprocessors and diagnostic software to monitor their own
health and performance, as well as the process, and signal when there's a
problem or maintenance is needed.
But PlantWeb doesn't stop there. It also captures information on the
condition of rotating equipment such as motors and pumps -- from shaft
speed and vibration to temperature and lubricant condition -- and uses the
data to identify machine-health problems such as misalignment,
imbalance, gear defects, and bearing faults.
Other tools provide insights on the performance and efficiency of process
equipment like heat exchangers, compressors, turbines, distillation
columns, and boilers.
Integrating information. PlantWeb uses communication standards like
HART, FOUNDATION fieldbus, and OPC, as well as integrated software
applications, to make this new wealth of process and equipment
information available wherever it's needed for analysis and action -- all
within the same architecture.
For example, RBMware ...
The Asset Portal provides an
integrated view of health and status
information from multiple types of
instruments and equipment.
When potential problems arise, targeted online alerts help ensure that the
right people get the right information right away -- but other users aren't
bothered by nuisance alarms. PlantWeb can also send synchronized
alerts to applications such as operations historians and maintenance
systems, making it easier to establish a cause-and-effect relationship
between process events and equipment conditions.
Our DeltaV and Ovation automation systems also use digital
intelligence to provide rock-solid process control while ensuring that
operators and others get the information they need -- reducing the risk of
process- and operator-induced downtime.
Maximizing the advantage. In addition, Emerson offers a full range of
services -- from monitoring, troubleshooting, maintenance, and repair to
technical training and equipment optimization -- to help you take full
advantage of PlantWeb's capabilities and sustain the improvements over
the life of your plant.
PlantWeb architecture helps reduce
both planned and unplanned
downtime, so you can keep your
process up and running at its best.
In short, PlantWeb architecture's predictive intelligence reaches into the
field, monitors and predicts the performance of plant assets, and
integrates the information into the architecture to help you
- Reduce unplanned downtime
- Extend the period between planned downtimes
- Shorten the length of planned downtime
- Speed startup after downtime
Let's take a closer look at each of these four ways PlantWeb improves
availability.
Reducing unplanned downtime
PlantWeb helps detect conditions that can lead to equipment failure or a
process excursion -- before you're faced with an unexpected shutdown.
For instruments using FOUNDATION fieldbus technology, this capability
starts with automatically labeling the device's signal status as good, bad,
or uncertain, so you'll know when the device needs attention, and have
early warning that an invalid measurement may be threatening process
stability. The DeltaV and Ovation systems use this early warning to avoid
controlling off bad data and can automatically make adjustments to keep
the process running smoothly.
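The status-handling behavior described above can be sketched as a filter that refuses to pass bad measurements to the control logic. This is an illustrative sketch only: the `Status` enum and `MeasurementFilter` class are hypothetical names, not the DeltaV, Ovation, or fieldbus API, and real hosts implement much richer status-propagation and mode-shedding rules.

```python
from enum import Enum

class Status(Enum):
    GOOD = "good"
    UNCERTAIN = "uncertain"
    BAD = "bad"

class MeasurementFilter:
    """Hold the last known-good PV so the loop never acts on bad data."""

    def __init__(self):
        self.last_good = None

    def update(self, pv: float, status: Status) -> float:
        if status is Status.GOOD:
            self.last_good = pv
            return pv
        if self.last_good is None:
            raise RuntimeError("no valid measurement received yet")
        # Bad or uncertain: hold the last good value and flag for maintenance.
        return self.last_good

f = MeasurementFilter()
print(f.update(101.3, Status.GOOD))  # 101.3
print(f.update(9999.0, Status.BAD))  # 101.3 -- the bad reading is not passed on
```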
But instrument signal status is just part of the picture. PlantWeb's full set
of online and offline tools enables monitoring, diagnostics, and notification
of problems for a wide range of HART and FOUNDATION fieldbus
instruments and other process equipment.
Bearing failure, for example, is a common problem with rotating
equipment. But our PeakVue software can detect and identify the very
high-frequency noise associated with the earliest stages of bearing wear.
You get maximum warning of future problems, before increasing damage
significantly increases the cost (and possibly time) for repairs.
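Conceptually, detecting early bearing wear means isolating the very high-frequency component of the vibration signal and tracking its peak value per measurement window. The sketch below is a crude stand-in for that idea (a first-difference high-pass followed by a per-window peak); it is not Emerson's PeakVue algorithm, which uses calibrated filtering and sampling.

```python
def hf_peak(samples, window):
    """High-pass the signal with a first difference, then report the peak
    absolute change per window -- a toy analogue of high-frequency
    peak detection for impacting bearing defects."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return [max(diffs[i:i + window]) for i in range(0, len(diffs), window)]

smooth = [20.0] * 9                        # healthy: no high-frequency content
spiky = [20.0] * 4 + [25.0] + [20.0] * 4   # defect: sharp transient impacts
print(hf_peak(smooth, 4))  # [0.0, 0.0]
print(hf_peak(spiky, 4))   # [5.0, 5.0]
```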
In pressure transmitters, impulse-line plugging can block the instrument
from reading actual process pressure. Instead, it reads the pressure in
the plugged line -- leaving you and your control system blind, and at risk
of a process trip if the actual pressure changes beyond what's allowable.
PlantWeb uses special diagnostics in the transmitter to detect plugged
impulse lines and immediately alert you to the problem.
With a plugged line diagnostic based
on statistical process monitoring,
PlantWeb detects conditions that can
lead to equipment failure or a process
upset.
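The statistical-process-monitoring idea behind the plugged-line diagnostic can be illustrated with a rolling noise check: a live impulse line carries process noise, while a plugging line sees that noise collapse as the transmitter starts reading trapped rather than live pressure. The window size and 20% threshold below are illustrative assumptions, not Emerson's published settings.

```python
import statistics

def plugged_line_alarms(pressures, window=20, ratio=0.2):
    """Compare each window's noise (standard deviation) against a
    baseline window; a collapse in noise suggests a plugging line."""
    baseline = statistics.stdev(pressures[:window])
    alarms = []
    for i in range(window, len(pressures) - window + 1, window):
        noise = statistics.stdev(pressures[i:i + window])
        alarms.append(noise < ratio * baseline)
    return alarms
```

Feeding the function a noisy baseline followed by a flat (trapped-pressure) stretch raises the alarm; a consistently noisy signal does not.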
Freezing can cause similar problems. If heat-tracing fails, for example,
liquid can freeze in the impulse lines or even in the cell of a transmitter,
where it can cause bursting. Monitoring sensor temperature and alarming
on low temperatures, a standard capability in many of our transmitters,
can help eliminate this type of failure.
Plugging isn't just an instrument problem. One of the most frequent
causes of failure in control-valve actuators is loss of air. A diagnostic
similar to that used to detect plugged impulse lines in transmitters enables
Emerson digital valve controllers to detect a plugged air supply to the
actuator -- and head off a process upset when the valve can't respond as
it's supposed to.
PlantWeb's monitoring and diagnostics capabilities also enable you to
predict potential problems in larger process equipment.
For example, if a heat exchanger fouls to the point where there is
insufficient flow to run the process, the unit will shut down. Even
temporary fouling can cause a loss of capacity that can lead to process
disturbances and a resulting trip.
Our e-fficiency ...
Extending the period between planned downtimes
Even if equipment problems don't cause unexpected outages, dealing
with them can force you to schedule maintenance shutdowns so
frequently that availability suffers.
One way PlantWeb architecture extends the time between scheduled
shutdowns is by helping you detect and avoid conditions that can shorten
equipment life.
A common cause of premature transmitter failure, for example, is
exposure to excessive temperatures. A 10 degree C increase in steady-
state temperature can reduce the life of electronics by half. But
PlantWeb's temperature-monitoring and alarming capabilities can alert
you to the problem in time to find and remedy the cause.
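The halving rule quoted above is easy to work as a formula. Assuming life halves for each 10 degree C of steady-state temperature rise (the base life used below is a hypothetical rated value at the reference temperature):

```python
def electronics_life(base_life_years: float, temp_rise_c: float) -> float:
    """Rule of thumb from the text: each 10 degree C rise in steady-state
    temperature halves electronics life."""
    return base_life_years * 2.0 ** (-temp_rise_c / 10.0)

print(electronics_life(10.0, 10.0))  # 5.0 -- one 10 C rise halves life
print(electronics_life(10.0, 20.0))  # 2.5 -- two rises quarter it
```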
Excess vibration can shorten the life of rotating equipment. In a plant that
was experiencing premature failures in the motor and gear train of a
pump, PlantWeb's vibration-monitoring tools revealed a resonant coupling
between the motor, the gearbox, the pump, and the mountings, which
caused very high vibration levels at certain turning speeds. With this
insight, the startup procedure was modified to bring the equipment through
the critical speed range very quickly -- substantially eliminating the
premature failures.
Process variability is an often-unrecognized factor in shortening
equipment life, especially for control valves: The more often the valve has
to move to compensate for process variation, the more wear on its trim
and other components. The precise control provided by our instruments,
valves, and automation systems minimizes this problem.
PlantWeb can also help avoid installation- or maintenance-induced
problems that cause equipment to fail prematurely.
For example, improper installation of pumps, motors, and related
equipment can result in shaft misalignment and imbalance that reduces
equipment life by as much as a factor of 10. Emerson tools and services
for laser alignment and equipment balancing help ensure that shafts are
coupled center-to-center, and that vibration levels are low at operating
speeds and loads.
Rotating-equipment life can also be shortened by wear that begins with
improper cleaning or other contamination during maintenance. Our wear-
particle analysis of lubricating oil can detect the type of wear and the exact
location so you can head off premature failures.
RBMware trivector analysis combines
multiple information types to help
pinpoint equipment-life-shortening
conditions such as bearing wear.
Shortening the length of planned downtime
As PlantWeb enables you to shift your emphasis from reactive and
preventive maintenance to predictive maintenance, one of the benefits
will be shorter planned shutdowns. That's because with PlantWeb's
predictive intelligence you'll know in advance which equipment needs
attention and which doesn't, so you can avoid doing unnecessary work
that would prolong the downtime.
For example, control valves are often serviced or rebuilt as part of
preventive-maintenance programs during scheduled shutdowns. But one
study by Emerson across multiple industries showed that almost 70% of
valves pulled for rebuilding didn't actually need it.
Knowing each valve's actual condition
enables you to identify which ones
need extensive work during a
shutdown and which don't.
Chart based on sample of 230
valves scheduled for overhaul.
With PlantWeb valve diagnostics, you can check each valve's
performance to determine whether wear, stiction, or other conditions call
for maintenance at the next scheduled opportunity -- or whether you can
leave that valve alone this time and get the process back online that
much sooner.
Diagnostics can identify not only which equipment needs work, but also
the nature of the problem. Knowing in advance whether a valve's poor
performance is caused by trim wear or by too-tight packing, for example,
shortens troubleshooting time in the field as well as enabling you to plan
work more efficiently and have appropriate parts on hand when scheduled
downtime begins.
AMS software also helps shorten scheduled downtime by streamlining
tasks such as instrument calibration. And its automatic documentation
capabilities reduce the time your technicians spend on data entry and
other paperwork.
Finally, Emerson can provide a broad range of services to help speed
turnarounds as well as ongoing maintenance -- from performing remote or
onsite diagnostics, to carrying out repairs and maintenance, to training
your staff on how to make the most of new technologies and work
practices.
Speeding startup after downtime
After a shutdown, PlantWeb can help bring your process back to full
production as quickly as safety and plant constraints allow. This not only
increases total availability, but also reduces the energy, fuel, material,
and scrap or rework costs of starting up and lining out the process --
which can be twice as high per hour as shutdown costs. The same
benefits apply to grade changes.
The DeltaV and Ovation automation systems deliver these gains by
automating the startup sequence. They smoothly bring the process and
equipment to the appropriate state for each step in the sequence, then
automatically move to the next step without the delays that can result
when operators control the startup sequence manually.
Automatic logic minimizes human error
and helps ensure a smooth startup.
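The sequencing behavior described above can be sketched as a list of steps, each with an action and a completion check, advanced automatically in order. The step names and state checks here are hypothetical; a real DeltaV or Ovation startup sequence is configured in the system's own sequential-control environment, not in Python.

```python
def run_startup(steps, state):
    """Run (name, action, done) steps in order; hold the sequence if a
    step's completion check fails instead of blindly continuing."""
    completed = []
    for name, action, done in steps:
        action(state)
        if not done(state):
            raise RuntimeError(f"startup held at step: {name}")
        completed.append(name)
    return completed

# Hypothetical three-step furnace startup.
state = {"purged": False, "flame": False, "pressure": 0.0}
steps = [
    ("purge", lambda s: s.update(purged=True), lambda s: s["purged"]),
    ("light-off", lambda s: s.update(flame=True), lambda s: s["flame"]),
    ("pressurize", lambda s: s.update(pressure=10.0), lambda s: s["pressure"] >= 10.0),
]
print(run_startup(steps, state))  # ['purge', 'light-off', 'pressurize']
```

Because each step verifies its own completion before the sequence advances, the logic embodies the "move to the next step without delays, but never skip a check" behavior the text attributes to automated startup.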
Automating startup can also eliminate human errors that can cause
equipment damage and downtime. In effect, it's like having your best and
most experienced operator running the startup -- every time.
Real projects, real results
Better process availability is one of the reasons users have chosen
PlantWeb architecture for thousands of automation projects. In plants,
mills, refineries, and other operations around the globe, it's helping keep
processes up and running with less unplanned downtime, shorter and
less-frequent planned downtime, and faster startup after shutdowns and
grade changes.
Here are just a few examples:
- "If we had major breakdowns in the past, we had to shut the whole
plant down. With this new system, we've got a window on what's
actually happening in the plant and we now feel we can get to
problems before they are breakdowns."
-- Brewing company, Australia
- "Without AMS software, maintenance would have shut down the
process for four or five hours to replace a valve that was in perfectly
good working condition. The cost would have been more than just for
the replacement valve and the crew's time. It would have included
several thousand dollars per hour of lost production time."
-- Chemical processor, U.S.A.
- "After installing Ovation we significantly increased plant availability by
decreasing steam temperature variation. This reduced scheduled
plant outages from tube leaks."
-- Utility, U.S.A.
- "[PlantWeb] allows us to come closer every day to our sought-after
100% availability. Because the system is so integrated into our
process, we sometimes forget what an impressive amount of work it is
doing for us."
-- Solvent producer, France
- "We immediately eliminated downtime losses. And we calculate
payback on the system, based on previous downtime, at 1.8 years -- a
rather quick return on our capital expenditure."
-- Paper maker, U.S.A.
For additional case histories and proofs of PlantWeb architecture's
capabilities, visit www.PlantWeb.com and click on Customer Proven.
Taking the next steps
As you can see, PlantWeb architecture clearly helps increase availability.
And the benefits are significant. But how do you get started?
Begin by assessing where you are. How many potential production hours
per year do you currently lose in downtime, both planned and unplanned?
What are your primary sources of downtime? (An Emerson availability
audit can help here.) What is your current mix of reactive, preventive, and
predictive maintenance? To what extent are you using diagnostics and
equipment monitoring? How do your maintenance costs stack up to
industry benchmarks, or to similar operations in your own company?
Next, determine where you want to go. Are you currently market-limited or
capacity-limited? What's the value of an incremental hour of production?
Which units in your operation are likely candidates for improvement? How
much would you gain by increasing their availability to best-in-class
levels? Who in your organization would support or sponsor a project to
make that happen?
Then work with your local Emerson team to identify which PlantWeb
technologies and related services can have the greatest impact on your
operations availability, and how we can put them to work for you.
If you'd like, we can even help you with the assessment and goal-setting
portions of this process, including developing the business case for
increased availability.
References
1. George Birchfield, "Olefin Plant Reliability," AspenTech.
2. Fluor Global Services Benchmark Study, NA, AP, EU, 1996.
3. Dennis Berlanger and Saxon Smith, "MRG Business Case for Reliability," published at http://www.reliabilityweb.com/rcm1.
Emerson Process Management
8301 Cameron Road
Austin, Texas 78754
T 1 (512) 834-7328
F 1 (512) 834-7600
www.EmersonProcess.com
© 2003 Emerson Process Management. All rights reserved.
Other resources
- Improving availability is just one of the ways PlantWeb helps improve
process and plant performance. It can also help increase throughput
and quality, as well as reducing cost for operations and maintenance;
safety, health, and environmental compliance; energy and other
utilities; and waste and rework.
www.PlantWeb.com/Operational_Benefits
- Availability is also a major factor in Overall Equipment Effectiveness,
a structured metric for process performance. Emerson Process
Management's free online learning environment, PlantWeb
University, offers a 5-course introduction to OEE.
www.PlantWebUniversity.com
The contents of this publication are presented for informational purposes only, and while effort has been made to ensure their accuracy, they are
not to be construed as warranties or guarantees, express or implied, regarding the products or services described herein or their use or
applicability. All sales are governed by our terms and conditions, which are available on request. We reserve the right to modify or improve the
designs or specifications of our products at any time without notice.
PlantWeb, RBMware, e-fficiency, Ovation, and DeltaV are marks of Emerson Process Management. All other marks are the property of their
respective owners.
030430
Future Trends in Safety Instrumented Systems
The process industry has always been faced with the difficult task of determining the required
integrity of safeguarding systems. In spite of the application of a wide variety of safeguarding
measures, many accidents in the process industries still happen. Experiences gained from these
accidents have led to the application of a variety of technical and non-technical layers of
protection, such as Safety Instrumented Systems (SIS).
The central role of the safety PLC forces companies to decide on the logic solver integrity class
(e.g., SIL 3), taking into account both the current risk levels to be reduced by the SIS and future,
higher risk levels. This article describes future expectations regarding the requirements and
application of dedicated safety PLCs. It addresses issues such as the (un)acceptability of using
a SIL 2 rated logic solver instead of SIL 3, and the (un)acceptability of using a single system both
for control and process safeguarding functions.
1 Current developments of SIS standards
Safety Instrumented Functions
Standards like IEC 61508, IEC 61511, and ANSI/ISA S84.01 concentrate on the functional safety
of the SIS. All combined instrumentation, devices, and equipment that are required to fulfill an
intended safeguarding function are considered to be part of the SIS. As the collection of safety
instrumentation normally includes more than one safeguarding function (e.g., overpressure
protection, temperature protection, backflow protection), the SIS could be defined as the
collection of all safety-related sensing elements, logic solvers and actuators.
On the other hand, the SIS could be considered as separate for each safeguarding function, and
would comprise only the devices to protect the Equipment Under Control (EUC) against one single
hazardous situation. Consequently, the process installation would be comprised of a number of
safety-instrumented systems. As particular devices such as safety-related PLCs and shut-off
valves normally deal with more than one Safety Instrumented Function (SIF), this article uses the
first definition; the SIS is comprised of all safety-related devices of the process installation.
Figure 1 illustrates the definition of a SIS and the SIFs that will be executed; specifically a SIF
that protects the process temperature and causes a shut-off valve to close in case of an out-of-
control process temperature. Other SIFs that are performed by this example SIS are level
protection and back-flow protection.
[Figure 1 shows a logic solver (PLC) connected to temperature transmitters, a level switch, and a
flow transmitter, driving solenoid-operated shut-off and globe valves and a pump.]
Figure 1 Safety Instrumented System with multiple SIFs
Distribution of the SIL requirements
Based on the hazard and risk assessment, the safety requirements are defined and rated according
to the needed SIL for each function to be realized by the safeguarding instrumentation. Figure 2
shows an actual SIL requirements distribution based on 392 analyzed SIFs from 16 different sites
of various companies, which can be considered as reasonably representative for the process
industry.
Figure 2 SIL requirements distribution based on 392 analyzed SIFs.
It can be seen that 18% of all SIFs are required to meet SIL 3 or higher. Based on an average of
50 SIFs per safety PLC, approximately 9 SIFs per PLC will have to meet SIL 3 or higher. The
probability that such a safety PLC contains no SIL 3 rated SIFs is negligible. Therefore, demand
for SIL 3 rated safety PLCs as logic solvers is substantial and will form the majority of the
market.
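The arithmetic behind this claim is straightforward. Treating each SIF independently, with an 18% chance of requiring SIL 3 or higher (a simplifying assumption), a 50-SIF safety PLC has a vanishingly small chance of containing no SIL 3 function:

```python
def p_no_sil3(p_sil3: float, n_sifs: int) -> float:
    """Probability that none of n independent SIFs on one logic solver
    requires SIL 3, given each does with probability p_sil3."""
    return (1.0 - p_sil3) ** n_sifs

expected_sil3 = 0.18 * 50     # about 9 SIL 3 SIFs per 50-SIF safety PLC
p_none = p_no_sil3(0.18, 50)  # roughly 5e-5: effectively negligible
print(expected_sil3, p_none)
```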
Layers Of Protection
Figure 3 shows the concept of layers of protection and the compositions of the different types of
SIS as defined in part 1 of IEC 61511. A distinction exists between the Basic Process Control
System (BPCS) and the SIS as part of the Prevention and Mitigation layers. The primary objective
of a BPCS is to optimize process conditions to maximize production capacity and quality. SISs
are primarily applied to prevent hazardous events from occurring (Prevention layer) and to
mitigate the consequences of a hazardous event (Mitigation layer). This distinction is motivated
by the fact that a BPCS does not necessarily have to contribute to risk reduction and sometimes
might even pose a potential risk itself.
[Figure 3 depicts concentric layers around Process Design at the core: basic process control
systems, monitoring systems (process alarms), and operator supervision; a Prevention layer
(mechanical protection systems, process alarms, operator supervision, safety instrumented
control systems, safety instrumented prevention systems); a Mitigation layer (mechanical
mitigation systems, safety instrumented control systems, safety instrumented mitigation
systems); plant emergency response; and community emergency response.]
Figure 3. IEC 61511 Independent Layers of Protection -- the "onion model."
The importance of the principle of having independent layers of protection is emphasized by the
requirements specified by the latest standards on SISs. IEC 61508 part 1 clearly requires that the
EUC control system shall be "separate and independent from the E/E/PE safety-related systems,
other technology safety-related systems and external risk reduction facilities."
2 Technical evaluation of SIL requirements on safety PLCs
The role of the safety PLC as central unit
Figure 4 shows a typical application of a safety PLC performing a large number of functions and
with a combination of safety functions with different SILs. Although most functions only require
a SIL 1 or SIL 2, the remaining SIL 3 required functions will result in the application of a SIL 3
certified common central part of the logic solver. For this reason, most end users have specified
the SIL 3 requirement for the safety PLC in their technical specifications.
[Figure 4 shows a SIL 3 safety PLC connected to safety sensors and safety actuators rated
SIL 1, SIL 2, and SIL 3.]
Figure 4 A SIL 3 certified safety PLC as central unit.
Increasing SIL requirements on the safety PLC
Accumulated risks
Because the safety PLC, as central logic solver, normally handles a large number of SIFs, the
risk-reduction measures for many out-of-control process parameters share common elements.
Since state-of-the-art risk analysis techniques do not consider the probability and degree of
overlapping risks in detail, it is not always clear which elements should comply with a higher SIL
and which should not. Experts responsible for the hazard and risk analysis therefore often decide
to increase the safety integrity requirements of the central safety PLC.
For instance, assume a number of SIFs each protecting against individual hazardous
situations/events. Each SIF has its own remaining/residual risk that has been made
acceptable/tolerable by ensuring that the target SIL is achieved. For the complete unit/plant, these
residual risks should be added together to arrive at the total remaining risk associated with the
hazardous events that the SIFs are protecting against. This total remaining risk may still be too
high.
An efficient way to improve the overall remaining risk is to improve the parts that are common to
many SIFs. These are often final elements that are operated by a number of SIFs (e.g., close fuel
gas to furnace), and in almost all SIFs it is the logic solver. Hence a SIL 3 logic solver is
commonly selected, even if there are only a number of SIL 1 and SIL 2 functions. Because there
are usually relatively few SIL 3 functions, the logic solver is normally not required to meet SIL 4
requirements.
Reducing spurious process trips
Increased safety requirements on a system also can have a positive effect on the availability of that
system. To comply with higher safety requirements in combination with hardware fault tolerance,
it is necessary to have a higher safe failure fraction, which in programmable systems is achieved
through self-diagnostics. In combination with redundancy, the results of the diagnostics can also
be used to increase availability.
In addition to the accumulated risks, the probability of undesired spurious process trips due to
safe failures of the PLC system is a common argument for increasing the reliability of the system
by increasing its Diagnostic Coverage (DC). Any physical safety system will always have some
probability of failure. However, a failure observed by the internal system diagnostics does not
necessarily have to result in a process trip: a detected failure can be isolated and repaired within
a predefined acceptable timeframe. The DC factor thus largely determines the added value to
asset management and process uptime. This argument also leads companies to apply a SIL 3
safety PLC instead of a SIL 2 system with lower DC.
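The link between diagnostic coverage and safe failure fraction can be made concrete. With SFF = (λ_S + λ_DD)/(λ_S + λ_DD + λ_DU) and λ_DD = DC × λ_D, raising DC raises SFF, and every detected dangerous failure becomes a repair item rather than a process trip. The failure rates below are hypothetical FIT values for illustration only:

```python
def safe_failure_fraction(lam_safe: float, lam_dangerous: float, dc: float) -> float:
    """SFF = (lambda_S + lambda_DD) / lambda_total, where the dangerous
    detected rate lambda_DD = DC * lambda_D."""
    lam_dd = dc * lam_dangerous
    lam_du = lam_dangerous - lam_dd
    return (lam_safe + lam_dd) / (lam_safe + lam_dd + lam_du)

# Same hardware, better diagnostics: SFF climbs from 80% toward the
# high-90% region that higher SIL architectural constraints demand.
print(round(safe_failure_fraction(50.0, 50.0, 0.60), 3))
print(round(safe_failure_fraction(50.0, 50.0, 0.99), 3))
```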
Considerations on various BPCS and SIS configurations
Increasing automation in the process industry is leading companies to ask for integration of various
functionalities into one system. Advantages include easier-to-use systems, integrated exchange of
information between the basic control system part and safety system part, and a cheaper solution
due to the application of a single system. The next paragraphs describe the implications for three
basic configurations concerning the process control and process safeguarding functionalities and
the ability to achieve SIL requirements.
Configuration 1
The traditional solution applied in the process industry for the configuration of safety and control
systems is a fully separate control system (BPCS) and safety system (SIS), with no shared
devices (Figure 5).
Figure 5. Full separation between the control system and the safety system.
Although questions often arise as to whether it would be appropriate and acceptable to share
device information, use single field instruments, or even use a single control and safety system,
this is rarely done, and the configuration of Figure 5 normally prevails. Not surprisingly, it is this
design that is fully supported by the onion model and required by most SIS-related standards.
Configuration 2
Figure 6 illustrates the implementation of both control and safety functions into one single
controller. Once a SIL 3 requirement applies to the safety functions, the complete control system
has to comply with these SIL 3 requirements, including its maintenance and operating
procedures. The essential weakness is that both functionalities will fail in case of a central
system failure: the control and safeguarding layers are not independent. In that case, IEC 61511
requires a demonstration that the overall resulting hazard rate is still acceptable, or at least
tolerable.
Figure 6. Control and safety functions fully integrated into one system.
Since many SIFs also protect against failure of the control system (including sensors and final
elements), complete independence often has to be applied to achieve an acceptable hazard rate.
Not surprisingly, the onion concept enforces this principle.
The utilization of a single logic system both for safeguarding and control functions will only be
acceptable in very specific situations where the demand rate on the safety functions is independent
from failure of the control logic. Standards on SISs largely exclude the option to apply this
concept. For clarity, one of the current maintenance activities on IEC 61508 is to sharpen
this requirement.
3 Market perception
Growing complexity
The following trends are currently observed in the process industry:
- Increasingly complex industrial processes
- Greater need for production capacity and flexibility
- Increasing numbers of people and organizations
- Higher turnover of employers and employees
- Greater use of information and communication
- High cost of an unwanted spurious process trip
- Significant consequences if a process gets out of control.
These trends mean that the requirements on the applied SIS are not expected to become less
stringent, but will mostly result in a predefined high SIL requirement. Companies that tend to
apply a SIL 2 rated system will have to be fully aware of the consequences and probabilities in
case something goes wrong, and will have to be absolutely certain that the above-mentioned
aspects are fully evaluated before a lower SIL rated safety system is selected.
Increasing safety awareness and requirements on environmental protection
Due to a changing perception of society towards safety of people and protection of the
environment, attention is focusing on protective and preventive measures. One characteristic is the
application of state-of-the-art safety instrumentation. For the railroad industries, SIL 4 is
generally required, whereas for the process industries it is SIL 3. In the machinery industry, the
majority of protective instrumentation is rated at SIL 2.
As society is increasingly unprepared to accept risks, the trend is towards SIL 3 rated safety
PLCs. Where a lower SIL might be considered acceptable, the preference will be to continue to
apply SIL 3 systems, because the priority is to prevent hazardous situations from occurring
rather than to mitigate the events by other risk reduction measures. It is also for this reason that
safety PLCs will play a more important role.
4 Conclusion
Although more reliable process control systems are expected to enter the market, a clear need
for dedicated safety PLCs will remain. The adoption of the onion model emphasizes the
importance of differentiating between process control systems and the dedicated SIS.
State-of-the-art technology will set the trend towards a continued application of best-in-class safety
PLCs. As the safety of people and the protection of the environment become more important,
companies will stay away from accepting less safe, less reliable, or lower-integrity protection
systems.
The majority of today's corporate standards and technical requirement specifications on SISs
demand a SIL 3 certified safety PLC, often combined with requirements for independent safety
certification. The fact that a significant number of SIL 3 functions must be fulfilled by the PLC,
combined with the anticipated probability that more SIL 3 functions might be required in the
future, prompts the industry to adopt this system requirement.
It is therefore concluded that the market demand for dedicated SIL 1 or SIL 2 certified safety
PLCs is expected to be small compared with the market for SIL 3 certified safety systems.
Article written by Dr. Bert Knegtering, Honeywell Safety Management Systems, The Netherlands,
and Jan Wiegerinck, Shell Global Solutions, The Netherlands.
Originally published in the Honeywell IS Journal, May 2003, Issue 11.
Over the past two years, the ExxonMobil Research and
Engineering group tested the self-diagnostic capabilities
of pressure transmitters with FOUNDATION fieldbus (FF)
capability installed on a refinery FCC unit. This project involved
conducting a series of tests on the ability of the devices to diagnose
plugged impulse lines.
A typical purged instrument detail for a pressure transmitter
in service on an FCC unit where catalyst is present is shown in
Fig. 1. Three types of problems associated with the pressure trans-
mitters and purge systems on an FCC unit, and with the FCC
process itself, can be detected with the diagnostic capabilities of
fieldbus pressure transmitters:
1. Loss of a reliable signal due to a plugged pressure tap caused
by catalyst restricting the outlet
2. Plugged restriction orifice or filter, resulting in diminished
purge flow and possible loss in the signal sensitivity (can lead to
problem 1)
3. Circulation problems caused by stick-slip flow condition
in the FCC unit.
In addition to identifying process-related problems, the diag-
nostics capabilities of fieldbus pressure transmitters should be
able to help identify conditions related to plugged impulse lines
before they cause operational upsets.
Economic impact of predictive diagnostics. Advanced
diagnostics technologies should help avoid unexpected process shut-
downs during refinery operation. Blockage of pressure transmitters'
impulse lines is notorious in refinery applications, as well as in many
other chemical and gas applications. Though a well-experienced oper-
ator might have a feel for impulse line blockage during normal oper-
ations, it is usually detected well after the fact. When the impulse lines are
plugged, the control system will not be getting an accurate pressure
reading [pressure sensor will be reading the trapped pressure between
the sensor and where the blockage is in the impulse line(s)].
Impulse line blockage can be very costly. Depending on refin-
ery capacity, a process shutdown due to an impulse line blockage
during FCC unit operation could cost as much as $1 million per
day if the unit is completely shut down. Further, it might take
MAINTENANCE AND RELIABILITY SPECIAL REPORT

Diagnostics capabilities of FOUNDATION fieldbus pressure transmitters

Tests in an FCC unit showed both instrument and process problems could be detected

R. SZANYI and M. RATERMAN, ExxonMobil Research and Engineering, Florham Park,
New Jersey, and E. ERYUREK, Emerson Process Management, Austin, Texas
FIG. 1. Typical purge detail of a pressure transmitter on an FCC unit.

FIG. 2. Typical detail of four aeration points fed by a common header (ring header, standpipe, impulse lines and purge flow; one of four at each elevation).
HYDROCARBON PROCESSING, APRIL 2003
up to seven days to restart the FCC unit. The FCC unit has a
large impact on profits. Early detection of possible upsets, espe-
cially if shutdowns are avoided, can significantly enhance refin-
ery profits.
With the potential economic impact in mind, ExxonMobil
decided to put these new advanced diagnostics technologies to
test and see where and how they might help avoid refinery pro-
duction outages. ExxonMobil believes these results will be ben-
eficial to the entire oil and gas business.
Test instrument installation. The operational FCC unit
selected as the field test site is equipped with 18 levels of aera-
tion taps on the regenerated catalyst standpipe. Fig. 2 displays a
typical detail of four aeration points fed by a common ring header.
Several ring headers are, in turn, connected to a single flow con-
troller that controls total flow to the group. The restriction orifice
then sets aeration flow to each point on the standpipe within a
grouping. There are three flow controllers for the 18 different
aeration levels forming three groups.
The upper 17 aeration levels have been equipped with pressure
transmitters to aid in diagnosing flow instability problems and to
help optimize the aeration distribution. These instruments are
not used in any unit control or emergency shutdown system,
which is the reason they were chosen for this study. The instru-
ments are generally connected to the location occupied by the
pressure indicator in Fig. 2.
Comparing Fig. 1 to Fig. 2, it is apparent that the arrange-
ments are functionally the same. Gas flows associated with aera-
tion requirements are in general much larger than those associated
with instrument purge, so typically a filter is not required.
Problem t heory. Line plugging has long been an issue for
flow and level measurements in many process applications. Pro-
cesses with dense materials such as crude oil or those in colder
climates are particularly susceptible to impulse line plugging.
In a typical process, impulse line length can vary from 1 ft to
more than 10 ft. Although recent close-coupled designs are intended
to eliminate this problem, industry standards or process con-
ditions often still require impulse lines for flow and level measurements.
When pressure transmitter impulse lines are blocked, operators
and the control system can no longer rely on the measurement
since only the trapped pressure level between the sensor and the
point of blockage is being measured and not the actual process
pressure. Fig. 3 depicts differential pressure transmitter blockage.
Although problems 1 and 2, listed in the beginning of this
article, may seem similar, the first involves standpipe pressure
tap blockage, not pressure transmitter impulse line plugging.
The third problem is a process problem, and is essentially a
function of catalyst circulation rate, standpipe and the fluidiza-
tion properties. Under normal conditions, gas is entrained into
the standpipe and travels downward between the catalyst parti-
cles (emulsion phase) as bubbles (Fig. 4). These bubbles are com-
pressed as they travel downward, forming smaller bubbles. In
addition, they will merge to form larger bubbles, which can sub-
sequently break apart. This leads to pressure fluctuations or noise
within the standpipe.
Under certain conditions (low circulation or poor catalyst flu-
idization properties), the catalyst will over-deaerate as the bubbles
travel down the standpipe. The compression effect will then cause
the bubbles to disappear. When this happens, pressure buildup
along the standpipe length is no longer smooth but becomes
erratic. Under severe conditions, the catalyst will bridge across
the standpipe, momentarily stopping and then breaking loose
again. This sudden stopping and starting of the catalyst flow is
generally referred to as stick-slip flow. It produces a very notice-
able chugging noise, with pressure fluctuations that become less
random but more severe (Fig. 5). If left uncorrected, this condi-
tion can result in severe damage to the standpipe system, partic-
ularly at expansion joints.
Normally, noise from the standpipe should be white noise
with no distinguishable pattern as a result of the random size and
population of gas bubbles in the standpipe. When the catalyst
bridges, the noise becomes more regular. Noise at the bridg-
ing condition shows up as large pressure fluctuations to the pres-
sure transmitters generally in use today. Detecting this condition
before it becomes serious has been a costly challenge.
FIG. 3. When pressure transmitter impulse lines are blocked, the measurement becomes questionable.

FIG. 4. Under normal conditions, gas is entrained into the standpipe and travels downward between the catalyst particles as bubbles.

One goal of this field test was to determine if the Statistical
Process Monitoring diagnostic capabilities of FF pressure trans-
mitters could detect noise anomalies in the standpipe early enough
to allow the operators to prevent the bridging condition.
Diagnostics technologies of pressure transmitters
with FOUNDATION fieldbus capability.
Plugged impulse line detection. Plugged impulse line detec-
tion technology is based on an advanced pattern recognition tech-
nology with built-in intelligence to be aware of the environmen-
tal conditions of the pressure and differential pressure transmitters
widely used in the process industries. Basically, the pattern recog-
nition algorithm embedded in the pressure transmitters receives the
sensor updates (update frequency varies among sensor manufac-
turers). The faster the response time, the more information can be
captured about the process noise. This becomes important espe-
cially for differential pressure applications to differentiate a sin-
gle-leg plugged condition from both legs being plugged.
In general, the measurement signal contains fluctuations
superimposed on the average value of the pressure or differential
pressure of the process, called process noise or signature. These
fluctuations are induced by the flow and are a function of the geo-
metric and physical properties of the system. The time domain
signatures (i.e., variance and correlation) of these fluctuations
do not change as long as the overall system behavior stays the
same (i.e., steady-state process). In addition, these signatures
are not affected significantly by small changes in the average
value of the flow variables. This offers an advantage in identify-
ing and isolating line plugging, which is part of the underlying
pattern recognition technology developed to solve the problem
of line plugging.
When the lines between the process and the sensor start to
clog, through fouling and buildup on the impulse tubing inner
surfaces or through loose particles in the main flow getting trapped in
the impulse lines, the time and frequency domain signatures of the
fluctuations start to change from their normal states. The clog-
ging increases or decreases the damping of the pressure
noise in the main flow signal. As the impulse lines get clogged,
measurement noise levels change. Fig. 6 displays the noise con-
ditions of sensor outputs during normal, one line plugged and
both lines plugged conditions.
Some fieldbus pressure transmitters have this diagnostics tech-
nology as part of their Advanced Diagnostics Block (ADB). Fig.
7 displays the ADB block diagram, where various function blocks
such as Transducer Block (TB), Resource Block (RB) and others
are displayed.
Operational details of the plugged impulse line detection tech-
nology can be summarized in two distinct phases once it is
properly configured, which simply involves selecting a few parameters.
First is the learning phase. The algorithm first observes its envi-
ronment, such as the process noise levels and temperature con-
ditions. (These conditions could significantly differ from an FCC
unit application in a refinery to simple drum level measurement.)
At the end of this phase, the algorithm establishes the basic sig-
nature for that pressure transmitter as it is used in that process. It
establishes various parameters that represent process behavior
and keeps them in its memory to be used during the monitor-
ing phase. The learning phase also has a verification phase, so
that repeatability of the process behavior is established.
Second is the monitoring phase, where the algorithm periodi-
cally monitors the process and looks for changes in process sig-
nature. Once a change in process conditions is detected and ver-
ified, the pressure transmitter sets its alert bit to inform the
operator and/or maintenance personnel, since the plugging could
cause a major process upset. Fig. 8 shows a display of the fieldbus
pressure transmitter status.
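The two phases can be sketched in a few lines. The vendor's embedded algorithm is proprietary, so the function names, the choice of standard deviation as the noise statistic and the 50% threshold below are illustrative assumptions, not the actual implementation:

```python
import statistics

def learn_baseline(samples):
    # Learning phase: characterize the normal process-noise signature
    # and keep it as the reference for the monitoring phase.
    return statistics.mean(samples), statistics.stdev(samples)

def monitor(samples, baseline_std, ratio=0.5):
    # Monitoring phase: a blockage damps the process noise reaching
    # the sensor, so a drop in the noise level (standard deviation)
    # well below the learned signature raises the alert bit.
    current_std = statistics.stdev(samples)
    return current_std < ratio * baseline_std

# Illustrative signals: normal noise, then damped noise after plugging.
normal = [10.0 + 0.2 * (-1) ** i for i in range(50)]
plugged = [10.0 + 0.02 * (-1) ** i for i in range(50)]

_, base_std = learn_baseline(normal)
print(monitor(plugged, base_std))  # the damped signal trips the alert
```

A real device would also verify the change over several windows before setting the alert bit, as the article's verification step describes.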
Statistical process monitoring. The second diagnostics
feature of the fieldbus pressure transmitters is a generic process
anomaly detection tool called Statistical Process Monitoring (SPM).
Many process anomalies can be analyzed and correctly diagnosed
by an expert eye, or by an expert system in which the necessary process
expertise, the possible conditions and a rule base are present.
Traditionally, fault detection has been part of the control sys-
tem where analysis is done using data collected by process histo-
rians. There are various reasons for this implementation choice.
Most importantly, the field devices could not handle the tasks
required of fault detection methodologies. This is mainly due to
the limited firmware capability of the older technologies. However,
with the help of advanced silicon technology and digital fieldbus
technologies, today's smart transmitters are capable of providing
more information regarding the process and its conditions in
addition to their traditional process variable information.
FIG. 5. Stick-slip flow produces a noticeable chugging noise with pressure fluctuations that become less random but more severe.

FIG. 6. Sensor output during normal and plugged impulse line conditions (lines OK, one line plugged, both lines plugged).

Process anomalies can be grouped into five categories. These
are common for all sensor types and processes: pressure, temperature,
flow, level and others. Using advanced pattern recognition
and statistical analysis methods, fieldbus transmitters and
smart valves can now detect drift, bias, noise, spike and stuck
behaviors of each process where:
Drift: sensor/process output changes gradually
Bias: sensor/process output shows a level change
Noise: dynamic variation in the sensor/process output is
increased
Spike: sensor/process output is momentarily very high or low
Stuck: dynamic variation in the sensor/process output is
decreased.
Fig. 9 illustrates these anomalies along with normal behavior.
The approach and key features of the developed local anomaly
detection technology that make it applicable to a broad range of
industrial processes are:
No redundancy in the measurement system is assumed
No mathematical model of the process is necessary
No mathematical model of the sensor is required.
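As a rough sketch of how simple window statistics can separate these classes (a simplified stand-in for the SPM pattern recognition; the thresholds and function name are assumptions, and drift and spike need trend and single-point tests that are omitted here):

```python
import statistics

def classify(window, base_mean, base_std):
    # Compare a window of readings against the learned baseline.
    # Simplified to three of the five SPM classes; drift needs a
    # trend test and spike a single-point test, omitted for brevity.
    mean = statistics.mean(window)
    std = statistics.stdev(window)
    if abs(mean - base_mean) > 3 * base_std:
        return "bias"    # output shows a level change
    if std > 2 * base_std:
        return "noise"   # dynamic variation increased
    if std < 0.5 * base_std:
        return "stuck"   # dynamic variation decreased
    return "normal"
```

With a baseline of mean 10.0 and standard deviation 0.2, a window sitting near 11.0 classifies as "bias", a nearly flat window as "stuck", and one swinging by about 1 unit as "noise".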
Field test results.
Test condition 1: Plugged impulse line detection. Unit test-
ing was broken into two days. On day one, the plugged tap and
loss of purge scenarios were tested (problems 1 and 2). On day
two, the circulation problem was tested. Prior to starting the test,
each instrument was calibrated to establish new baseline values for
the diagnostics analysis, and both plugged line diagnostics and
SPM features of the transmitters were initialized to learn the pro-
cess and establish the baseline process patterns.
To test the built-in impulse line blockage diagnostics feature
of the fieldbus pressure transmitter, root valves of the installa-
tion were used to create impulse line blockage.
The fieldbus pressure transmitter successfully detected every
test scenario.
Test condition 2: Loss of purge flow detection. This was
tested by closing the purge source valve. (It was expected that
either the built-in impulse line plugging detection feature or the
statistical data collected at the fieldbus transmitter via SPM would
provide sufficient data to observe the blocking.) Test results indi-
cated that both diagnostics features were successfully indicating
the loss of flow condition.
Test condition 3: Circulation problems within FCC unit.
The internal diagnostic technology of the fieldbus pressure trans-
mitter tested, namely SPM technology, continuously samples the
process signal from the sensor at high frequencies and performs
additional calculations on it. The transmitter calculates the mean
value of the signal and how that changes with time. It also calcu-
lates the standard deviation in the noise from the process signal.

FIG. 7. Advanced diagnostic block (ADB) of the fieldbus transmitter, showing the Resource Block (RB), Transducer Block (TB), function blocks (AI, PID), device and loop diagnostics, the Object Dictionary and Statistical Process Monitoring.

FIG. 8. Display of fieldbus pressure transmitter status.

FIG. 9. Process anomalies can be categorized into five distinct classes: drift, bias, noise, spike and stuck.

FIG. 10. Historian and SPM data collected from the transmitter during a catalyst upset (STD of noise/mean, data gathered over a two-week period).
The standard deviation calculation should allow us to detect a
change in the white noise characteristic long before transition into
stick-slip flow. This will allow operations to take corrective actions
before circulation problems develop.
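A back-of-the-envelope version of that statistic, computed over consecutive windows (the window size, alert factor and synthetic signal below are assumptions for illustration only):

```python
import statistics

def std_over_mean(signal, window=20):
    # The statistic tracked for stick-slip warning: standard deviation
    # of the noise divided by the mean, over consecutive windows.
    out = []
    for i in range(0, len(signal) - window + 1, window):
        w = signal[i:i + window]
        out.append(statistics.stdev(w) / statistics.mean(w))
    return out

# Synthetic standpipe pressure: small white-noise-like wiggle, then the
# larger, more regular fluctuations of incipient stick-slip flow.
quiet = [100 + 0.5 * (-1) ** i for i in range(60)]
chugging = [100 + 5.0 * (-1) ** i for i in range(40)]

ratios = std_over_mean(quiet + chugging)
alerts = [i for i, r in enumerate(ratios) if r > 3 * ratios[0]]
print(alerts)  # → [3, 4], the windows where the noise level jumps
```

In practice the alert threshold would come from the learned baseline rather than the first window, but the principle is the same: the noise-to-mean ratio climbs well before the chugging becomes severe.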
Fig. 10 displays the data collected with the historian as well as
with the fieldbus pressure transmitter's ADB for a period of two weeks
during which a catalyst upset occurred. Fig. 11 highlights the data
collected from the transmitter, where the upset during the oper-
ation was detected 30 minutes in advance. It was expected at the
beginning of the test period that this type of data from the field-
bus pressure transmitter would indicate such process upsets in
advance so that necessary measures could be taken to avoid process
shutdowns. The next stages of the research program will integrate
this type of data with operational procedures to improve the oper-
ators' ability to respond to catalyst upsets. HP
Ron Szanyi is the section head of ExxonMobil Research & Engineering's Instruments & Control Projects Section in the Plant Automation & Computing Division. He has been with ExxonMobil for 22 years. Mr. Szanyi is a member of the Fieldbus Foundation Board of Directors and past chairman of the API Subcommittee on Instruments & Control Systems. He is based in Fairfax, Virginia.

Mike Raterman is head of the Process Technology Section for Imperial Oil Ltd's Engineering Services Canada Group in Toronto. His group is responsible for providing process technical support to all of IOL's refineries in Canada. Prior to his current assignment, Mr. Raterman led the equipment health monitoring development effort of ExxonMobil Research and Engineering in Fairfax, Virginia. He has over 25 years of experience in fluid catalytic cracking with ExxonMobil, Mobil and Gulf Oil, and holds an MS in chemical engineering from the University of Pittsburgh.

Evren Eryurek is the director of PlantWeb technology, responsible for developing and coordinating technologies for PlantWeb across Emerson Process Management divisions. He is a member of the PlantWeb Leadership team and the leader of the PlantWeb Diagnostics Council. Mr. Eryurek has 15 issued patents and over 20 pending patent applications. He is a senior member of ISA and resides in Minneapolis, Minnesota.
FIG. 11. The transmitter diagnostic data detected the catalyst upset 30 min. in advance (STD of noise/mean, data gathered over a two-week period).
THE VALIDATION OF AN ON-LINE NUCLEAR MAGNETIC
RESONANCE SPECTROMETER FOR ANALYSIS OF
NAPHTHAS AND DIESELS
Paul A. Barnard, Senior Research Scientist, Equistar Chemicals, LP, 1515 Miller Cut-Off Rd, LaPorte, TX 77571
Chuck Gerlovich, Principal Engineer, Lyondell Chemical Co., 2502 Sheldon Road, Channelview, TX 77111
Roger Moore, Principal Chemist, Equistar Chemicals, LP, 8280 Sheldon Road, Channelview, TX 77530
KEYWORDS
NMR, Online, Naphtha, Diesel, Condensate, Distillation, PINA
ABSTRACT
This paper discusses the efforts to commission and validate an online Nuclear Magnetic Resonance
Spectrometer [NMR] for the analysis of heavy feedstock to a cracking plant. Reference will be made to
ASTM D3764 Standard Practice for Validation for Process Stream Analyzers (1). Results from
laboratory analyses of standards and plant samples will be presented.
INTRODUCTION
The cracking of heavy feedstock to produce olefins and other downstream derivatives can be optimized
by controlling many plant parameters such as Coil Outlet Temperatures, Hydrocarbon to Steam Ratio,
Flow Rates, Pressures, etc. These parameters can be varied based on kinetic and thermodynamic models
to increase the quantities of the more economically preferred products. Several commercial software
packages are available to the plant to assess all input information and tune the furnace cracking
conditions to achieve these optimal conditions. One important piece of information for the optimizing
software is the exact makeup of the furnace feedstock. Plants that crack gases such as ethane, propane,
and butane can effectively analyze their feedstock and furnace effluent by gas chromatography. The
products of such plants are not complicated and would not benefit from optimizing programs. However,
heavier feedstock such as Natural Gasoline, Naphtha, Condensates, and Diesels can have a widely
variable composition, and as such the furnace yields can vary in their components and concentrations.
This is even more the case if the plant receives many different types of feedstock from the spot market
and local refineries. The cracker facility at Corpus Christi, TX installed an NMR (2) in 2002 for
feedstock analysis to provide detailed information to the Spyro / RT-Opt plant optimization package
(3).
Copyright 2003 by ISA - The Instrumentation, Systems and Automation Society ~ http://www.isa.org
Presented at the 48th Analysis Division Symposium ~ 27 April - 1 May 2003 ~ Calgary, AB, Canada
PROJECT REQUIREMENTS
This project sought bids that would provide, as a minimum, the following information.
Normal Paraffins: C4, C5, C6, C7, C8, C9, C10, C11, C12+, and total
Isoparaffins: C4, C5, C6, C7, C8, C9, C10, C11, C12+, and total
Naphthenes: C5, C6, C7, C8, C9, C10, C11, C12+, and total
Aromatics: C6, C7, C8, C9, C10, C11, C12+, and total
D86 Distillation: Initial Boiling Point [IBP], T10, T50, T90, and End Boiling Point [EBP].
D2887 Simulated Distillation: Initial Boiling Point [IBP], T10, T50, T90, and End Boiling Point [EBP].
% Hydrogen
Specific Gravity
HARDWARE
LABORATORY EQUIPMENT
The lab experiments were carried out on an Agilent (HP) 6890 with split inlet and flame ionization
detector. The column used is a 50m x 0.20mm id crosslinked methyl siloxane 0.5um film thickness (HP
PONA). The method is based on ASTM Method D6733, Standard Test Method for Determination of
Individual Components in Spark Ignition Engine Fuels by 50-Meter Capillary High Resolution Gas
Chromatography. Software from the Institut Francais Du Petrole (IFP) called Carburane was used to
identify components in the GC analysis and to produce the detailed hydrocarbon report.
In addition to the above, an HP 5973 GC/Mass Spectrometer system was used to verify peak
identification. The system consists of an HP 6890 GC with split inlet and HP5973 mass spectrometer
featuring a hyperbolic quadrupole mass filter. Software from SINTEF Applied Chemistry in Oslo,
Norway called SI-PIONA is used to help identify peaks using a combination of two libraries, a library of
mass spectra and a retention library. The GC column used for this application is a 100m x 0.25mm id
crosslinked methyl siloxane with a 0.5um film thickness (Chrompak CP-SIL PONA CB).
Hydrogen content was measured on a Bruker MiniSpec benchtop NMR. D86 was run on standard lab
distillation equipment.
ANALYZER HARDWARE
Two process headers were tapped, and an insulated dual tubing bundle was run about 200 feet to an
existing analyzer house. The sample conditioning cabinet, which is located on the outside wall of the
house, consists of manually activated block and bleed valves, coolers with temperature switches, stream
built into the programmable logic controller [PLC] logic to allow the stream switching to be halted while
a sample was collected, to ensure the timestamp and stream of the sample could be correctly labeled.
The analyzer itself has dual 3-way block valves to provide a constant by-pass while the measurement is
made on a static sample. The analyzer is contained in a free-standing Class 1 Div 2 stainless steel
enclosure with a built-in air conditioner for the on-board electronics and computer. The sample
measurement probe is isolated in a temperature controlled box within the enclosure. A remote PC in the
analyzer control room is linked by PC AnyWhere / Ethernet for viewing all diagnostics and results.
Separate Modbus connections link direct to the Distributed Control System [DCS] for plant use. The
sample return point was close to the analyzer house, just downstream of the cracking furnaces, with
magnetically coupled pumps to provide the required by-pass flow rate. However, the start-up of the
NMR was delayed when the pumps were found to be undersized and constantly decoupling. A review
of the situation led to relocation of the return point to a lower-pressure process entry upstream of the
furnaces (albeit a longer tubing run), thus negating the need for the pumps. This also increased the
economic value of the feedstock in the fast speed loop (60 gallons per hour).
VALIDATION
COMMERCIAL VALIDATION LIMITS
Based on careful examination of the lab paraffins, isoparaffins, naphthenes, and aromatics [PINA]
repeatability, and ASTM methods D86 (4) and D2887 (5), a set of validation limits for the PINA and
distillation parameters was finalized. These limits could theoretically be equated to the analyzer
reproducibility in that the results from the analyzer and the lab for the same sample should not exceed
these limits in more than 1 case in 20 (95 % confidence limit). The limits are shown in Table I.
TABLE I. COMMERCIAL VALIDATION LIMITS FOR PINA AND DISTILLATION PARAMETERS.

Parameter                          Validation Limit
n-hexane                           0.80 wt%
n-nonane                           0.20 wt%
Total n-paraffins                  1.00 wt%
i-hexane                           1.00 wt%
i-nonane                           0.40 wt%
Total i-paraffins                  0.60 wt%
cyclohexane                        0.20 wt%
C9-naphthene                       0.30 wt%
Total naphthenes                   0.50 wt%
Benzene                            0.20 wt%
C9-aromatics                       0.20 wt%
Total Aromatics                    0.30 wt%
D86 [IBP, T10, T50, T90, EBP]      14 °F
D2887 [IBP, T10, T50, T90, EBP]    9 °F
Density                            0.003 g/ml
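As an illustration of how these limits would be applied in practice, the check below flags lab-vs-analyzer differences that exceed their Table I limits. The lab values are taken from the naphtha column of Table III; the analyzer values are made up for the example:

```python
# Validation limits from Table I, wt% (distillation and density omitted).
LIMITS = {"n-hexane": 0.80, "n-nonane": 0.20, "Benzene": 0.20}

def out_of_limit(lab, analyzer):
    # Flag parameters whose lab-vs-analyzer difference exceeds its
    # commercial validation limit (expected in fewer than 1 case in 20).
    return [p for p in lab if abs(lab[p] - analyzer[p]) > LIMITS[p]]

# Lab values from Table III (naphtha); analyzer values are hypothetical.
lab = {"n-hexane": 7.81, "n-nonane": 3.92, "Benzene": 1.30}
nmr = {"n-hexane": 7.30, "n-nonane": 4.30, "Benzene": 1.55}
print(out_of_limit(lab, nmr))  # → ['n-nonane', 'Benzene']
```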
LAB VALIDATION
The modeling process for the online NMR relies exclusively on the lab results of plant samples.
Therefore it is imperative that the lab analyses are as accurate and precise as possible. The first step in
the validation of the Online Analyzer is therefore to accredit the lab. To this end a gravimetric standard
(6) was purchased and analyzed on six nonconsecutive days. This exercise essentially establishes the
precision statement (7) for the applicable test method since the lab test is not routine. The components
and results for the standard are shown in Table II.
TABLE II. COMPONENTS IN LABORATORY GRAVIMETRIC STANDARD.

Component                     Certified [wt%]   Measured Average [wt%]   % RSD [n=6]
n-pentane                     2.354             2.18                     1.06
n-hexane                      2.374             2.33                     0.51
n-heptane                     2.399             2.38                     0.14
n-octane                      2.367             2.38                     0.16
n-nonane                      2.389             2.43                     0.34
n-decane                      2.394             2.44                     0.51
n-undecane                    2.367             2.47                     0.67
n-dodecane                    2.394             2.35                     0.64
isopentane                    1.888             1.74                     1.03
2-methylpentane               1.675             1.62                     0.42
3-methylpentane               1.738             1.70                     0.39
2,2-dimethylbutane            1.572             1.52                     0.44
2,3-dimethylpentane           2.843             2.82                     0.16
2,4-dimethylpentane           1.790             3.68                     0.28
2,2,4-trimethylpentane        2.231             4.62                     0.18
cyclohexane                   2.367             2.32                     0.42
methylcyclohexane             2.367             2.36                     0.22
ethylcyclohexane              2.399             2.40                     0.22
propylcyclohexane             2.367             2.40                     0.28
n-butylcyclohexane            2.347             2.40                     0.45
n-pentylcyclohexane           2.379             2.45                     0.61
decalin total                 1.959             1.99                     0.50
benzene                       2.359             2.31                     1.22
toluene                       2.465             2.42                     0.81
ethylbenzene                  2.420             2.42                     0.43
p-xylene                      3.300             3.28                     0.53
propylbenzene                 2.354             2.37                     0.21
cumene                        1.833             1.84                     0.20
3-ethyltoluene                1.910             1.92                     0.26
1,2,4-trimethylbenzene        1.751             1.79                     0.25
1,3,5-trimethylbenzene        1.439             1.46                     0.26
n-butylbenzene                2.407             2.45                     0.43
isobutylbenzene               2.214             2.26                     0.33
1,2,4,5-tetramethylbenzene    1.245             1.23                     0.55
n-pentylbenzene               2.399             2.45                     0.61
1-pentene                     2.213             1.99                     1.31
1-hexene                      2.399             2.29                     0.66
2,3,3-trimethyl-1-butene      1.073             1.03                     0.20
1-octene                      2.399             2.34                     0.11
2-methyl-1-heptene            1.310             1.29                     0.14
1-nonene                      2.379             2.38                     0.33
1-decene                      2.379             2.37                     0.52
1-undecene                    2.420             2.50                     0.70
dodecene                      2.379             2.32                     0.67
The % relative standard deviations are all extremely low, showing the excellent repeatability of the
chromatographic method. Although this gravimetric standard had many of the components that are
expected to be found in a naphtha or condensate, the proportions of those constituents are not similar.
Therefore a plant sample was also analyzed on six nonconsecutive days, and the variance compared to
the gravimetric standard by the F-Test (8-10) to determine if a significant difference existed between
analyzing standards and plant samples by the analytical method. The results for the plant samples and
the corresponding F-Test values are shown in Table III.
TABLE III. RELATIVE STANDARD DEVIATIONS AND F-TEST RESULTS FOR TWO PLANT SAMPLES.
Critical F = 5.05 for a limited reference set.

                      ---------- Naphtha ----------   --------- Condensate ---------
                      Average   % RSD [n=6]   F       Average   % RSD [n=6]   F
n-c4 1.27 0.70 3.91 0.85
n-c5 10.95 0.60 8.08 9.37 0.55 5.05
n-c6 7.81 0.35 5.19 7.27 0.39 5.88
n-c7 5.05 0.08 1.42 6.03 0.28 24.92
n-c8 4.25 0.31 11.95 4.42 0.35 16.00
n-c9 3.92 0.40 3.63 3.07 0.51 3.57
n-c10 1.85 0.49 1.93 2.13 0.70 1.44
n-c11 0.13 0.55 544.47 1.42 0.81 2.09
n-c12+ 2.36 0.92 2.06
Total n-paraffins 35.24 0.18 7.50 40.08 0.26 20.65
I-c4 0.09 5.96 0.72 1.12
I-c5 5.78 0.64 4.19 6.68 0.59 4.80
I-c6 7.53 0.41 2.55 6.39 0.41 1.84
I-c7 4.78 0.11 6.64 6.21 0.28 1.64
I-c8 5.07 0.23 2.02 6.33 0.33 6.57
I-c9 4.95 0.66 4.25 0.31
I-c10 3.49 0.45 3.74 1.02
I-c11 0.50 0.74 1.67 0.88
I-c12+ 2.35 2.80
Total I-paraffins 32.19 0.08 4.56 38.34 0.14 1.01
cyclopentane 0.66 0.45 0.28 0.62
me-cyclopentane 2.00 0.29 0.67 0.43
cyclohexane 1.71 0.21 7.37 1.36 0.38 3.58
methylcyclohexane 3.35 0.12 1.79 2.59 0.29 2.09
Other c7-Nap 2.13 0.14 1.19 0.29
c8-Nap 5.66 0.62 42.69 2.02 0.92 12.21
c9-Nap 5.56 0.34 7.69 2.45 0.49 3.24
c10-Nap 1.93 0.93 1.06 1.26 1.84 1.59
c11-Nap 0.30 6.39 1.70 0.87 8.38 23.89
c12+Nap 0.84 11.37
Total Naphthenes 23.30 0.24 2.33 13.54 0.97 13.20
Benzene 1.30 0.29 53.71 2.34 0.40 9.09
Toluene 1.25 0.17 87.46 1.00 0.30 42.95
Ethylbenzene 0.33 0.39 68.55 0.13 0.43 367.22
Xylenes 2.28 0.36 4.38 1.01 0.41 17.50
c9-Arom 3.16 0.47 2.01 1.13 1.35 1.94
c10-Arom
c11-Arom 0.52 4.30
c12+Arom 0.95 1.70 1.71 2.99
Total Aromatics 9.27 0.33 3.49 7.84 0.67 1.17
The components of the gravimetric standard have been relabeled to fit the descriptions required for the
online NMR. The critical F value for a limited reference set [five degrees of freedom] is 5.05 and many
of the test parameters pass this test. For those parameters that do not pass the F-Test, if an assignable
cause can be found, then no corrective actions need be taken. This was the case for all parameters that
failed the F-test (e.g. very low concentrations that result in high standard deviations), and the method
was therefore deemed suitable. An examination of the % relative standard deviations [RSD] for all
components shows very low values, except for the higher carbon numbers where integration starts to get
difficult. Based on these test results, the lab was considered more than adequate in its ability to provide
high quality data for input to the modeling process.
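The F results above compare absolute variances, with each standard deviation recovered from the reported mean and % RSD. A sketch that approximately reproduces the n-hexane (n-c6) entry of Table III from the rounded table values (the small discrepancy versus the tabulated 5.19 comes from that rounding):

```python
def f_statistic(mean1, rsd1, mean2, rsd2):
    # F = s1^2 / s2^2, with each absolute standard deviation recovered
    # from the reported mean and percent relative standard deviation.
    s1 = mean1 * rsd1 / 100.0
    s2 = mean2 * rsd2 / 100.0
    return (s1 / s2) ** 2

CRITICAL_F = 5.05  # 95% point for 5 and 5 degrees of freedom (n = 6 each)

# n-hexane: naphtha sample (Table III) vs. gravimetric standard (Table II).
f = f_statistic(7.81, 0.35, 2.33, 0.51)
print(round(f, 2), f > CRITICAL_F)  # close to the tabulated 5.19; exceeds critical F
```

A value above the critical F means the plant-sample variance is significantly larger than the standard's, which then triggers the search for an assignable cause described above.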
ANALYZER VALIDATION
ASTM Method D3764 was cited by the analyzer vendor as the vehicle to be used in the commercial
validation process of the analyzer. D3764 describes the steps to be taken to compare lab results with
analyzer results, and the statistical methods employed to decide if the two results are significantly
different or not. It was found that this method could only be used as a guide for the actual process
finally agreed to by the customer and the vendor. Section 4 of ASTM D3764 refers to two procedures
that can be used in the validation process. The Reference Sample Procedure involves a laboratory
calibrated sample that is introduced into the analyzer and results compared. The Line Sample Procedure
involves withdrawal of samples from the analyzer system, with subsequent comparison of a lab result
with the result of the analyzer at the time of sampling. The constraints of time and distance forced the
latter procedure onto the validation process.
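One way the Line Sample Procedure's lab-versus-analyzer comparison could be coded is as a paired test on the differences. The t-critical value (roughly the two-sided 95% point for ten paired samples) and the data are illustrative assumptions, not values taken from the standard:

```python
import math
import statistics

def no_significant_bias(analyzer, lab, t_critical=2.262):
    """Paired comparison: test whether the mean analyzer-minus-lab
    difference is significantly different from zero. t_critical is an
    assumed two-sided 95% value for ten paired samples."""
    diffs = [a - b for a, b in zip(analyzer, lab)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    t_stat = mean_d / (sd_d / math.sqrt(len(diffs)))
    return abs(t_stat) <= t_critical

# Hypothetical paired results for one property
analyzer_results = [50.1, 49.9, 50.2, 49.8, 50.0, 50.1, 49.9, 50.0, 50.2, 49.8]
lab_results = [50.0] * 10
agreement = no_significant_bias(analyzer_results, lab_results)
```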
RESULTS
Models for PINA and D86 previously developed at existing NMR user sites were installed on the
analyzer after all initial hardware issues were corrected. The modeling was done by PLS (11). D2887
models were developed from new plant data, since none had been established previously. The process of
collecting lab data for incorporation into the training set began by catching samples in stainless steel
cylinders, and synchronizing the timestamp with the NMR measurement. The samples were shipped in
the cylinders from the plant in Corpus Christi, TX to the testing lab in Channelview, TX. Early data
showed that the existing models would require input from the new installation to improve the
predictions. This is illustrated in Table 4.
TABLE 4. FIRST RESULTS FROM OLD AND NEW MODELS.
Model  Sample No.  n-C6   n-C9   Total n-paraffins  i-C6   i-C9   Total i-paraffins
Ver1 651 17.12 -0.01 43.16 15.82 -0.68 32.47
Ver1 652 18.57 0.00 45.40 16.09 -0.87 31.32
Ver1 663 18.73 -0.03 46.11 15.84 -1.04 30.74
Ver1 667 14.85 -0.16 41.46 14.23 -0.81 32.54
Ver1 670 18.81 0.69 45.36 15.52 0.08 32.84
Ver2 656 7.55 1.85 35.79 10.57 2.27 36.36
Ver2 661 7.42 1.57 35.22 9.75 2.82 35.32
Ver2 662 7.60 2.10 34.97 10.50 1.95 37.05
Ver2 669 6.97 1.64 34.01 9.96 2.39 37.48
Model  Sample No.  cyclohexane  C9-Nap  Total Naphthenes  Benzene  C9-Arom  Total Aromatics
Ver1 651 4.35 0.57 14.52 2.64 -0.45 5.16
Ver1 652 4.60 0.44 13.60 2.59 -0.31 4.68
Ver1 663 4.71 0.47 13.05 2.48 -0.20 4.32
Ver1 667 5.70 0.95 16.48 2.64 -0.14 5.26
Ver1 670 3.56 1.10 12.20 2.27 0.00 4.91
Ver2 656 2.16 2.05 17.66 2.82 1.05 9.70
Ver2 661 2.20 2.46 17.19 2.57 1.34 9.89
Ver2 662 1.75 1.84 15.35 2.98 0.90 10.16
Ver2 669 1.68 2.30 14.95 2.66 1.22 9.83
Model Version 1 did not accurately predict the validation parameters, but adding results from the
samples caught at the new facility to the training set improved the predictions markedly.
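Since the models were built by PLS, a toy one-latent-variable NIPALS-style PLS may help make the idea concrete. This is a sketch only; the commercial package cited (11) uses many latent variables, spectral preprocessing, and cross-validation, and the data here are contrived:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pls_one_component(X, y):
    """One-latent-variable PLS relating spectra X (rows = samples) to a
    lab property y: project mean-centered spectra onto the direction of
    maximum covariance with y, then regress y on the scores."""
    n, p = len(X), len(X[0])
    x_mean = [sum(row[j] for row in X) / n for j in range(p)]
    y_mean = sum(y) / n
    Xc = [[row[j] - x_mean[j] for j in range(p)] for row in X]
    yc = [yi - y_mean for yi in y]
    w = [dot([row[j] for row in Xc], yc) for j in range(p)]  # weight vector
    norm = dot(w, w) ** 0.5
    w = [wj / norm for wj in w]
    t = [dot(row, w) for row in Xc]                          # scores
    b = dot(t, yc) / dot(t, t)                               # inner regression
    return w, b, x_mean, y_mean

def pls_predict(x_new, w, b, x_mean, y_mean):
    xc = [x_new[j] - x_mean[j] for j in range(len(x_new))]
    return y_mean + b * dot(xc, w)

# Toy rank-one "spectra" whose property is exactly twice the underlying
# concentration, so a single latent variable recovers it perfectly
X = [[1, 2, 3], [2, 4, 6], [3, 6, 9], [4, 8, 12]]
y = [2.0, 4.0, 6.0, 8.0]
model = pls_one_component(X, y)
```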
The first set of models for D2887 and D86 was installed after about three months of data collection,
comprising approximately 60 samples. The results for T50 and T90 were found to be the most robust.
This is not surprising, since the feedstock mixture of heavy and light materials caused wide variation in
the initial boiling points [IBP] and end boiling point [EBP]. D86 apparatus cannot handle heavy tails
very well, and the GC SimDist D2887 method was set up for diesels. The transition from a light
condensate to a diesel is captured very well by the NMR distillation predictions, and they match the lab
results quite closely as seen in Figure 1. Trend lines from the NMR for density, total i-paraffins, and
total aromatics are also shown in Figure 2.
CONCLUSION
An NMR was successfully installed at the Equistar Corpus Christi plant and is being used to characterize
naphthas, condensates, and diesels. The predictions are being used in conjunction with a Whole Plant
Optimization software package to run the furnaces to produce higher yields of more economically
favorable hydrocarbons. The financial impact of the analyzer has not yet been established but is
expected to be > $1 million / year.
ACKNOWLEDGMENTS
All of the work described in this paper was a team effort within the company, but particular
acknowledgement must be made to the following individuals.
Mr. Tripp Howse, the analyzer technician who fastidiously collects samples and maintains the system;
Mr. Bill Bradbury, the Project Manager; and
Messrs. Mike Chaney, Gary Colwell, Jim Wu, Tom Kelley, and Tom Ferguson, for valuable discussions.
[Chart: SimDist D2887 T50, 10/19/02 to 10/31/02; y-axis 50-800 degrees Fahrenheit; series: NMR T50 Stm 1, Lab D2887 T50]
FIGURE 1. SIMULATED DISTILLATION T50 PREDICTIONS FEEDSTOCK TRANSITION.
[Chart: PINA Gas Oil Header, 10/21/02 to 10/30/02; left y-axis 0-60 wt%, right y-axis 0.3-0.9 density, g/ml; series: Density, i-paraffins, aromatics]
FIGURE 2. PREDICTIONS FOR TOTAL AROMATICS, ISOPARAFFINS, AND DENSITY.
REFERENCES
1. ASTM D3764-92 Standard Practice for Validation of Process Stream Analyzers,
American Society for Testing and Materials, West Conshohocken, PA.
2. The NMR installed is a 60MHz Model NMRB, Style C Magnetic Resonance
Analyzer by The Foxboro Company, Foxboro, MA, a division of Invensys.
Application modeling was provided by Process NMR Associates, LLC, of Danbury,
Conn.
3. SPYRO Version 6 kinetic Scheme 9306 by North American Pyrotec, San Dimas,
CA, a division of KTI. RT-Opt by AspenTech, Houston, TX.
4. ASTM D86-00a Standard Test Method for Distillation of Petroleum Products at
Atmospheric Pressure
5. ASTM D2887-02 Standard Test Method for Boiling Range Distribution of
Petroleum Fractions by Gas Chromatography
6. Gravimetric Standard
7. ASTM D3764 Section 3.1.12. This gravimetric standard can also be described as
the Reference Sample, section 3.1.14.
8. The F-test is used to determine if there is a significant difference between two test
methods. It is essentially a ratio of the variances of the two methods. See ASTM
D3764 section 9.
9. Christian, Gary D., Chapter 4, Data Handling, Analytical Chemistry, 3rd Ed.,
Wiley and Sons, New York, page 72.
10. Potts, Lawrence W., Chapter 2, Errors in Chemical Analysis, Quantitative Analysis:
Theory and Practice, 1st Ed., Harper and Row, New York, page 75.
11. Grams AI Version 7 with PLS/IQ by Galactic Industries, Salem, NH.
AM-03-19
THE SMART REFINERY:
ECONOMICS AND TECHNOLOGY
by
Douglas C. White
Emerson Process Management
Houston. Texas
Presented at the
NPRA
2003 Annual Meeting
March 23 - 25, 2003
San Antonio, Texas
Abstract
Advances in sensors, automation, and information technology have significantly
changed the way refineries operate. High performance computing in physically small
devices and high speed communication technology developments have been the
foundation for many of these advances. Advanced analytical and optimization methods
based on this infrastructure can simultaneously lower costs, increase profitability and
improve customer service across the supply chain. The collective changes are
sometimes characterized as constituting "smart refining." They allow the refinery staff
to better analyze the past, assess the current state, and predict future behavior under
alternative scenarios. In this paper, we survey the recent history of these developments
and look at likely future trends. Economic benefits achieved through implementation of
this technology are explained and a framework for understanding them presented. The
issues that have slowed adoption and implementation are also discussed.
Introduction
What is a smart refinery? We are all aware of the extraordinary developments that
are occurring in the computer and communication area. It seems that almost every day
there is another report of the continuing decrease in the cost and size of computing
elements and the continuing increase in the availability of communication bandwidth.
Advances in software and mathematical analysis have built on these developments to
significantly increase our ability to model and optimize refining activities. Many new
developments in process sensor and measurement devices have also appeared.
These developments have led to new methods and procedures for operating production
facilities. The new procedures utilize more comprehensive and frequent measurements
of the current state of the refinery, increased use of models and other analytical
techniques to compare what the refinery is currently producing against what is expected
and to understand the differences, earlier detection of anomalous conditions, and tools
to plan future operation with increased confidence. While we may be aware of these
developments as individual advances, their cumulative and combinatorial aspects are
perhaps less well recognized. This paper will discuss how the combination of these
technologies has led to an evolutionary change in the way refineries can operate. This
change is to decisions and actions based primarily on the best available prediction of
expected future conditions rather than reactions principally triggered by what has just
happened. This shift in focus is the defining characteristic of "smart refining."
The second related subject of this paper concerns the expected economic benefits
from investments in this area. The link between technology developments and
improved economic results including increased productivity is not always apparent.
Many unsupportable claims on potential benefits are made. Correspondingly, there are
many technology developments that are believed to be beneficial but it is not clear how
to translate this belief into realistic monetary values.
Incentives for Change
Why do we need to consider these new technologies for use in refineries? What
refinery problems are they solving that can't be solved more economically by other
means? In answering this question, three major incentive areas are reviewed below:
financial, safety and environmental issues, and workforce demographics.
Financial
Looking at overall financial performance, the five-year average return on invested
capital for the US refining industry for the period 1996 to 2001 was approximately
9.5% (3), which is at or below the cost of capital for the industry, with 2002 results
generally lower. Individual refining companies have varied widely, with five-year
averages that range from negative to 14% (15). Clearly there are individual differences
in financial performance and competitive pressures force the industry to pursue all
avenues for improvement.
Operational excellence is the goal of most refineries and this excellence has many
components. Among these components are some key objectives that have a direct and
significant impact on the financial performance of the site. These include:
Produce the highest valued product mix possible
Maximize production from existing equipment
Maximize equipment on-stream operating (service) factor
Continually reduce costs and pursue operational efficiencies
Keep inventories as low as possible
Minimize Health, Safety and Environmental incidents
where the last objective implicitly recognizes the reality that HSE issues can often be
governing.
Where are the opportunities for operational improvement?
Energy - Energy costs remain the largest single cost component in refineries
after crude purchases. For the 1996 to 2001 period, they averaged approximately 8%
of the value of crude purchases and about 30% of all operating costs for the overall US
refining industry (3). There are many opportunities for energy savings in the average
refinery that remain unpursued.
Reliability - Lost production due to unscheduled shutdowns or slowdowns of refinery
equipment and process units remains an ongoing problem with average losses in
potential capacity of 3 to 7%.
Maintenance - Maintenance costs are the third largest cost component after crude
and energy, at 10% to 20% of operating costs, but maintenance action is often
provided too early, when not yet required, and sometimes (regrettably) too late.
Inventory - Large inventories of crude, intermediates, and products are
characteristic of many refineries. Excessive inventory increases working capital and
reduces the return on invested capital.
The components of smart refining provide some of the most cost effective
investments available to achieve the operational excellence objectives listed above.
Safety and Environmental Issues
The safety and environmental performance of the refining industry is widely viewed
by the public as unsatisfactory. Analysis of the causes of recent accidents and incidents
indicates that many factors, including design, change control, and operational issues,
contributed to the incidents (1,2). However, a review of the incidents and their potential
amelioration indicates that improved measurements and real-time analysis/detection
might have prevented, or at least substantially reduced the damage from, approximately
25% to 50% of these accidents.
Environmental emissions from refineries continue to be a major problem. Although
the US Chemical Process Industries (CPI) reduced emissions by 56.3% from 1989
to 1999 while increasing production by 33.3% (5), the sector remains the largest single US
manufacturing source of undesirable emissions (6). Industry along the Texas
Gulf Coast, the world's largest single concentration of CPI sites, is under
government mandates to reduce NOx emissions by a full 80% by 2007 (13). Attaining
the latter goal and continuing the reduction will require many changes in refinery design
and operation. Improved measurements, modeling, analysis, and control are critical to
the goal of reducing emissions.
Demographics
The demographics of process refinery operators in North America are changing.
With industry downsizing, there was very limited hiring in the 1980s and 1990s. As a result,
75+% of the operators in the CPI are expected to retire in the next 10 to 15 years (13).
Clearly, the average operator experience level will drop as a result. In addition, the
demands for enhanced analytical skills in the operator's job are increasing. A partial
solution to this problem is again to use refinery measurements, modeling and analytical
techniques to automate routine decision processes or at least provide the information to
make the decision process more efficient.
The general conclusion from the comments above is that there is a significant need
for improved operation in the refining industry and that smart automation technology
can be a significant contributor to the improved operation.
Prediction Versus Reaction
What is meant by decisions based on intelligent prediction rather than reaction?
The concept can perhaps best be understood in the context of the normal decision
process in the refinery as presented in figure 1 below. We measure a condition in the
refinery or detect a change of state, analyze the data to potentially spot an anomaly,
predict the effect of alternative action scenarios, decide which scenario to implement,
and then actually implement the chosen scenario. After this, the cycle repeats. Examples of
decisions made in this framework include what products to produce and when to
produce them, decisions on the resources required for production including feedstocks
and manpower and decisions on when to perform maintenance on a particular item of
equipment.
Figure 1. Refinery Decision Cycle
What are the characteristics of the steps in this process?
Measure
Modern refineries produce a lot of data. It is not unusual for a large refinery site to
have 100,000 distinct measurements. If these measurements are scanned once a
minute, ten gigabytes of data will be produced each week. However, the data is natively of
poor quality: instrument readings drift and noise corrupts the measurements. Even
when the actual measurements are good, the statistical properties are not - the data is
cross-correlated and serially auto-correlated. It is often hard to detect changes or
trends.
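The serial-correlation problem is easy to demonstrate: even a clean linear drift produces a lag-1 autocorrelation near one, so successive readings carry far less independent information than their count suggests. A minimal sketch, using hypothetical temperature data:

```python
def lag1_autocorrelation(x):
    """Lag-1 autocorrelation of a process signal; values near 1 mean
    successive readings are far from independent, so naive statistical
    treatment overstates the information content of the data."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

# Hypothetical drifting temperature readings, one per minute
temps = [300.0 + 0.5 * i for i in range(20)]
r = lag1_autocorrelation(temps)    # close to 1: strongly auto-correlated
```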
Analyze
Analysis, in this context, means obtaining the best possible estimate of the current
performance of the system (refinery) and its history. Generally this means processing
the raw data through some kind of a model to obtain a performance indicator, perhaps
of an individual piece of equipment or of the overall refinery or site. This performance
indicator is then compared against a standard. The standard could be the normal, new
or clean performance of the equipment; it could be the financial budget for the refinery;
or it could be environmental or design limits. The model could be simply our memory of
how things behaved previously or it could be a formal mathematical formulation. Key
issues with analysis are to detect under (or over) performance and precursors of
abnormal events.
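As a concrete, hypothetical instance of the model-plus-standard idea, a heat-exchanger monitor might back out the overall heat-transfer coefficient from measured data and compare it with the clean baseline. The numbers and the review threshold below are illustrative assumptions, not plant data:

```python
def heat_transfer_coefficient(duty_kw, area_m2, lmtd_k):
    """Back out the overall coefficient U (W/m2.K) from measured duty,
    exchanger area, and log-mean temperature difference."""
    return duty_kw * 1000.0 / (area_m2 * lmtd_k)

def performance_indicator(u_now, u_clean):
    """Fraction of clean performance remaining - the indicator that is
    compared against a standard (here, the clean baseline)."""
    return u_now / u_clean

u_clean = heat_transfer_coefficient(duty_kw=5000.0, area_m2=250.0, lmtd_k=40.0)
u_now = heat_transfer_coefficient(duty_kw=3800.0, area_m2=250.0, lmtd_k=42.0)
remaining = performance_indicator(u_now, u_clean)
needs_review = remaining < 0.75    # illustrative cleaning-review trigger
```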
Predict
The next step in the decision process is to project into the future the expected
behavior of the system based on the information available. In some cases, this is done
by simply extrapolating future behavior to be the same as current or to expect future
behavior to follow the same pattern the system has exhibited in the past under similar
conditions. In more complicated situations, we can use an estimate of the current state,
a model of the system, and assumptions about the disturbances or effects that the
system will experience. Repeating from the paragraph above, analysis refers to
obtaining the best possible estimate of the current and past state of the system while
prediction refers to obtaining the best possible projection of future behavior.
Decide
Ultimately it is necessary to make a decision about the action to take in the future
including no new action and no change in condition. Normally this is done by
evaluating a set of feasible alternative decision sequences and then choosing one that
maximizes or minimizes a combined set of objectives within the imposed set of
constraints with this evaluation and choice done within the time available.
Implementation
Implementation is the actual execution of the scenario chosen. It involves all of the
activities required to make some change occur including most particularly inducing
individuals in the refinery to perform or not perform an action. Without implementation,
measurement, analysis and prediction are just an exercise.
The decision steps mentioned above are obviously not new and in fact have been
followed in refineries for many years before computers and networks had any major
impact. Those charged with decisions did the best they could at obtaining information
on the state of the refinery, on estimating its current performance and predicting what
would happen with various decision scenarios. However, the uncertainty levels were
very high and most decisions were not analytically based.
How do we move towards "smart" operation? We can improve the overall decision
process by:
Knowing better what the refinery is doing now - this implies more accurate
measurements with less delay, and more frequent measurements of previously
difficult-to-measure conditions.
Comparing better what the refinery is doing against what it is expected to do, and
understanding the differences - this leads to model-based analysis and
techniques which promote comprehension of the information.
Predicting better the effect of alternative decisions in the future.
Some examples from different operational areas may make this clearer.
Predictive Control Example
The first is from the control field. Consider the evolution from the PID controller to
advanced controllers utilizing multivariable predictive constraint control (MPCC)
algorithms. A standard PID loop is shown below:
Figure 2 Standard PID Loop
The controller senses the current measurement of the controlled variable, compares
it with the desired setpoint to calculate an error, and then takes corrective action based
on the parameter settings of the controller. It reacts to the current measurement.
Contrast this with the action of an MPCC algorithm in Figure 3 following.
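The reactive nature of the loop can be seen in a bare-bones discrete PID step, where the move depends only on the current error and its accumulated history; the gains and sample time below are arbitrary:

```python
def pid_step(setpoint, measurement, state, kp, ki, kd, dt):
    """One execution of a textbook PID controller: compute the error
    against the setpoint, update the integral and derivative terms, and
    return the corrective move - a purely reactive control law."""
    error = setpoint - measurement
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
move = pid_step(setpoint=10.0, measurement=8.0, state=state,
                kp=2.0, ki=0.5, kd=0.0, dt=1.0)
```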
For MPCC, there is a formal mathematical model relating the response of the
controlled variable to changes in the manipulated variable. This then allows the control
algorithm to use the history and current values of manipulated and controlled variable
moves to predict the behavior of the plant in the future and to take action based on this
prediction. The controller predicts if a controlled variable is likely, in the time period of
the prediction horizon, to deviate from its specification or violate a plant limit. Control
action can then be taken to correct the condition before there is ever an actual deviation
or violation detected. The implementation part of the decision process is done
automatically via closed loop control. Moreover, we can combine the models for
multiple controlled and manipulated variables into one controller that explicitly
recognizes the interaction between them as shown in Figure 4 below. The result is
significantly improved control performance. Reductions in standard deviation of 30 to
70% over standard PID control are routinely reported with MPCC implementations, and
payout periods of a few months for investments in this technology are common.
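The predict-then-act structure can be caricatured with a first-order plant model and a brute-force search over admissible moves. A real MPCC solves a constrained optimization over a whole move sequence, so treat this only as a sketch of the idea, with an invented plant:

```python
def predict_trajectory(x0, u, a, b, horizon):
    """Roll the plant model x[k+1] = a*x[k] + b*u forward with a
    constant move u - the 'predict the future' half of MPCC."""
    traj, x = [], x0
    for _ in range(horizon):
        x = a * x + b * u
        traj.append(x)
    return traj

def mpcc_move(x0, setpoint, a, b, horizon, u_min, u_max):
    """Pick the admissible move whose predicted trajectory ends closest
    to the setpoint, respecting the move limits (a crude stand-in for
    the optimization a real MPCC performs)."""
    candidates = [u_min + (u_max - u_min) * k / 50 for k in range(51)]
    def terminal_error(u):
        return abs(predict_trajectory(x0, u, a, b, horizon)[-1] - setpoint)
    return min(candidates, key=terminal_error)

# Hypothetical first-order plant with unit steady-state gain
best_u = mpcc_move(x0=0.0, setpoint=5.0, a=0.8, b=0.2,
                   horizon=20, u_min=0.0, u_max=10.0)
```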
[Figure 2 diagram: the setpoint and the current measured value of a single controlled variable are compared to form an error; the PID algorithm acts to drive the error to zero by moving a single manipulated variable. Control moves are based on the current measurement.]
Figure 3 Predictive Control Modeling
[Diagram: a modeled relationship between manipulated and controlled variables uses information from the past to predict the future, across past, present, and future time.]
Figure 4 Multivariable Predictive Constraint Control
[Diagram: a multivariable predictive constraint controller takes multiple setpoints, constraints, measured disturbances, and controlled-variable measurements, and makes multiple manipulated-variable moves to the plant based on predicted plant behaviour.]
Predictive Maintenance Example
The second example, from reference 18, concerns plant maintenance. There are
several approaches to maintenance in the plant. One is to wait until the equipment
breaks and then react to fix it if it is really important. Many plants still operate in this
mode. The second, known as preventative maintenance, uses average times to failure
for equipment and schedules maintenance before the expected failure time. However,
equipment can vary widely in actual performance. Predictive maintenance attempts to
find techniques to determine more precisely if equipment is underperforming or about to
fail. With the continuing improvement in computing and communication capabilities,
predictive maintenance can be based on actual device performance data, obtained and
analyzed in near real time. The overall objective is to catch potential equipment
problems early which leads to less expensive repairs and less downtime. Conversely,
we want to avoid shutting expensive equipment down unnecessarily. Figure 5,
following, illustrates the concept. Detecting anomalies early and deciding what they
imply with respect to the equipment is the goal. For example, the vibration patterns of
rotating equipment vary with deterioration of the equipment and can be used as
predictors of failure. In operation, data from the process and the equipment is validated
and brought to performance models. These calculate the performance and correct it to
standard conditions. With economic information, the cost of poor performance is also
calculated. This can be used for predictions of unscheduled removal (or replacement)
of part(s), disruption of service, or delays of capacity. Maintenance based on this
approach has been shown to reduce unscheduled maintenance costs by as much as
20 to 30% while simultaneously improving equipment reliability.
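The trending logic behind such predictions can be as simple as fitting a line through recent condition readings and extrapolating to the alarm limit. The vibration data and limit below are hypothetical:

```python
def days_until_limit(readings, limit):
    """Least-squares trend through daily condition readings (e.g.
    bearing vibration, mm/s), extrapolated to the day the alarm limit
    is reached. Returns None if there is no upward trend."""
    n = len(readings)
    xs = list(range(n))
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None
    intercept = y_mean - slope * x_mean
    return (limit - intercept) / slope - (n - 1)   # days from today

vibration = [2.0, 2.1, 2.3, 2.4, 2.6, 2.7]   # mm/s, hypothetical daily readings
days_left = days_until_limit(vibration, limit=4.5)
```

With enough lead time, the repair becomes a scheduled work order instead of an unscheduled shutdown.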
Figure 5 Predictive Maintenance
[Diagram: process data (temperature, pressure, flow, load, operating mode), design information, and maintenance history are acquired and validated; performance is analyzed and standardized against economics (cost/benefits for cleaning); degraded operation and impending failures are predicted; corrective action is taken via prioritized maintenance work orders, asset failure probabilities, and equipment diagnostics. Maintenance decisions are based on future predictions.]
Predictive Product Demand Forecasting Example
The staff at every refinery needs to make a decision on the quantity of each product
to produce in the next production period and this decision is based partially on a
forecast of market demand. It is also recognized that the forecast will always have
uncertainty due to market fluctuations, production interruptions and transportation
issues. The response to this uncertainty is to have substantial product inventories that
ensure actual demands seldom go unmet. In fact, many refineries even today set their
schedules in large measure to produce to inventory: there is a target inventory for
each product and, when the actual amount falls below it, they react and
produce more to fill the tanks back to the desired levels. Other elements of the supply
chain (production, the terminals, and the retail outlets) all hold additional stocks of
feed and product inventory. These inventories tend to be controlled locally and set
based on problem avoidance at the individual site. The result is excessive inventory in
the supply chain that consumes unneeded working capital. Modern product demand
forecasting systems utilize sophisticated modeling of expected demand, based upon
extensive analysis of historical records and correlations with demand triggers, e.g.
expected weather patterns. These are combined with real-time information about the
current total state of inventory across the supply chain, as shown in the figure below, to
predict demand and set production targets (14). The projected risk of not meeting
demand can then be weighed analytically against the cost of inventory. One oil
company reported a substantial increase in profitability largely attributed to
implementation of this technology (20).
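A minimal version of forecast-driven production setting, as opposed to refilling tanks after the fact, nets projected demand plus safety stock against inventory already in the chain. The averaging window, service factor, and figures below are all illustrative assumptions:

```python
import statistics

def production_target(demand_history, on_hand, in_transit, z=1.65):
    """Next-period production = forecast demand + safety stock - stock
    already in the supply chain. z = 1.65 covers roughly 95% of
    outcomes if forecast errors are near-normal (assumed)."""
    recent = demand_history[-4:]
    forecast = statistics.mean(recent)          # simple 4-period average
    safety = z * statistics.stdev(recent)       # buffer for uncertainty
    return max(0.0, forecast + safety - on_hand - in_transit)

weekly_demand = [480, 510, 495, 520, 505, 515]   # hypothetical volumes
target = production_target(weekly_demand, on_hand=100.0, in_transit=50.0)
```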
Figure 6 Predictive Product Demand Forecasting
[Diagram: realtime information from production (refinery) and supply and distribution feeds analytical forecasting and planning; production is based on a future prediction of demand.]
Enabling Technologies
What are the enabling technologies that permit refineries to move from reacting to
predicting? There are certainly dozens and perhaps even hundreds of new
developments that could be discussed. In the sections below, the ones that the author
views as having the most important impact on operations are presented and referenced
to their specific decision cycle position as shown in figure 7 below. Since space limits
how much functionality can be covered in this document, some references are provided
on sources for more information. The emphasis again is on the cumulative and
combined effect of these developments to support the smart refinery operation.
Figure 7 Enabling Technologies
[Diagram: the decision cycle (Measure, Analyze, Predict, Decide, Implement) supported by smart field devices, digital plant networks, comprehensive plant databases, data mining, model-based performance analytics, realtime simulation, optimization, expert systems, and predictive analytics.]
Measure
Smart Field Devices - One of the most dramatic technology developments has been
in the general area of smart field devices. As microprocessors have shrunk, they have
been incorporated directly into basic refinery equipment. In the instrumentation area,
this has included transmitters, valves, and primary measurement devices, including
process analyzers. These devices have become, in essence, small data servers. A
basic transmitter a few years ago would send one 4-20 mA signal back to the control
system as an indication of the measured value. Today, a modern transmitter sends
back multiple readings plus at least six different alarm conditions. A standard electric
motor that previously had no real time measurements now has as many as fifteen
sensors providing temperatures, flux, run times, etc. that are available for recording and
diagnosis. Modern valves now calculate and retain in local data history a current valve
signature of pressure versus stem travel, compare it with the signature when the valve
was installed, and provide diagnostic information or alarming on the difference. An
example is shown below in Figure 8 of a valve that is clearly malfunctioning and is
reporting this malfunctioning in real time. In addition to normal measurements, cheap
sensors allowing thermal photographic and audiometric data monitoring on major
equipment are being routinely used. The data transfer is not just from the devices to
the central database. Configuration and calibration information is entered remotely and
executed without the necessity for local activation.
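The valve-signature comparison reduces, at its core, to measuring how far the current pressure-versus-travel curve has drifted from the as-installed one. A real positioner does this onboard with far richer diagnostics; the figures and alarm limit below are illustrative:

```python
def signature_deviation(baseline, current):
    """Largest absolute gap between the as-installed valve signature
    (actuator pressure vs. stem travel) and the latest one, sampled at
    the same travel points."""
    return max(abs(b - c) for b, c in zip(baseline, current))

# Hypothetical signatures at 0, 25, 50, 75, 100% stem travel (psi)
baseline_psi = [6.0, 8.5, 11.0, 13.5, 16.0]
current_psi = [6.2, 9.6, 12.9, 13.8, 16.1]
deviation = signature_deviation(baseline_psi, current_psi)
needs_attention = deviation > 1.0    # illustrative alarm limit
```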
Figure 8 Typical Smart Device
Analytical procedures that could only be performed in laboratories a few years ago
are now migrating to field devices. Examples include NIR (Near Infra-Red) and NMR
(Nuclear Magnetic Resonance) analyses.
Digital Plant Networks - Supporting the increases in local measurement and
analytical capability has been a change from analog-based communication for field
instrumentation to digital bus structures. This produces a corresponding increase in
communication bandwidth of several orders of magnitude and permits much more
diagnostic information to be carried to the data system. Open standards for these
buses have encouraged interoperability among devices from multiple manufacturers.
Connectivity between the plant instrumentation network, the control network, and the
plant IT network has also evolved into a reliable backbone for plant systems. This
infrastructure is required to support the other applications that analyze and use the
data. The continuing evolution in remote access through developments in the Internet
is well known and will not be repeated here. What perhaps is less well known is the
penetration of wireless communication into the refinery environment. Remote sensors
are being installed without wires on refinery equipment where there is no need for two
way communication and absolute reliability is not as important.
The Smart Refinery: Economics and Technology
13
AM-03-19
Comprehensive Plant Databases - Although there have been plant databases for
many years, the continued evolution in their functionality has maintained their
importance as the basic infrastructure or enabler for other applications. Previously they
were primarily aimed at storage of realtime process data and related calculations for
historical records and trending. Today there is a much larger set of information that
must be maintained for realtime access. This includes equipment purchase, spare
parts and cost information; mechanical, electrical, P&ID, and process drawings; initial and
current configuration information along with an audit trail of the changes; maintenance
records; safety procedures; MSDS sheets; etc. All of the diagnostic information
reported by the smart devices above must be captured. Product analyses, blend
recipes, and other production specifications are also accumulated. Objects stored in
the database are not just numbers and text but also pictures, spectral analyses, links to
other data sources, etc. Once the data is in the database, techniques to permit efficient
retrieval of this information are a key to determining the state of the refinery. When
something goes wrong in the plant, the primary objective is fixing the problem as soon
as possible. It is usually necessary to gather information about the problem area -
drawings, spec sheets, process conditions, maintenance history, etc. Without a
comprehensive database, this data gathering often takes more time than solving the
problem after all the data is assembled. Developing a common and adequate user
interface for these systems is a specific challenge. Generally, the interfaces are icon
based with some views keying off graphic process layouts that permit all information to
be retrieved by moving a pointer to the desired piece of equipment.
Analyze
To reiterate, analysis techniques are intended to determine the best possible
estimate of the current and historical state of the plant. The new developments in the
measurement area plus the general increase in computer capabilities generally mean
much more data is available, more than one can hope to process manually. Part of
the response to this increase in data is an increase in automated analysis which takes
several forms.
Data Mining - The real-time data available from refineries presents special
challenges. As mentioned earlier, it is usually corrupted by noise and non-independent,
i.e. both auto-correlated and cross-correlated. In addition, there is a lot of data - our
ability to gather data has far outstripped our ability to analyze it. This problem is not
unique to the process industries. One perhaps lesser known statistic is that the
capacity of digital data storage worldwide has doubled every nine months for at least a
decade, which is a rate twice that of Moore's law on semiconductor densities (4).
However, if correlations in the data relating to production variables can be found or if
precursors to failure can be identified, the potential benefits are large. Data mining is
derived from traditional types of statistical analysis but is focused on processing large
databases to find undetected patterns and associations. The first level tools include a
number of special linear statistical techniques such as PCA and PLS (9).
These tools should always be the first used for analysis since they have well-developed
statistical properties that other approaches do not have. When these are
not sufficient, a large number of broader tools have been developed to provide
more general pattern recognition, including finding relations between events and
determining how attributes are linked (7). Again, the major issue is the poor
underlying statistical quality of process data, which makes techniques useful in other
fields less useful in analyzing process data.
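As a concrete illustration of these first-level tools, the sketch below applies PCA (via a singular value decomposition in NumPy) to simulated, cross-correlated process data; the tag structure and all numbers are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 samples of three correlated "process measurements":
# two tags driven by one underlying factor, plus an independent tag.
factor = rng.normal(size=500)
data = np.column_stack([
    factor + 0.1 * rng.normal(size=500),        # e.g., a flow tag
    2.0 * factor + 0.1 * rng.normal(size=500),  # a correlated pressure tag
    rng.normal(size=500),                       # an unrelated tag
])

# PCA via SVD of the mean-centered data matrix.
centered = data - data.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values**2 / np.sum(singular_values**2)

# The first principal component captures most of the variance,
# revealing that the three tags really move in fewer dimensions.
print(explained)
```

Because the first two tags share one driving factor, most of the variance collapses onto a single component; that dimensionality reduction is what makes PCA useful on auto- and cross-correlated plant data.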
Associated with data mining is the whole issue of visualization of large databases.
Pattern recognition is significantly improved if the data can be visually displayed in a
form which accentuates patterns and correlations that may exist.
Model-Based Performance Monitoring - To manage something you generally have to
measure it. For plant performance this normally implies using the data in some sort of
model to calculate performance measures, often called KPIs (Key Performance
Indicators). These performance measures are used to compare actual against plan or
actual against original condition. An example is the calculation of specific energy
consumption, i.e. energy consumed per unit of feed or product. To accurately assess
unit operation, this calculated value has to be corrected for the current feed and product
types and distribution, for the current production rate, and for the run time since the last
equipment maintenance. This correction can only be done via a model of process
operation. Data validation and reconciliation procedures must be used to bring the
input data to the standard required by the performance analysis. With the corrected
KPIs, actual operation versus plan can be accurately assessed and deviations noted.
Important questions that can then be answered include:
What is the true maximum capacity of our equipment? Today? If it was clean? If it was new?
What really stopped us from making our production targets last month?
How do we accurately and consistently compare performance across all of our sites?
How do we make sure everybody is looking at the same set of numbers?
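The specific energy consumption example above can be sketched as follows; the multiplicative correction factors for feed type, rate and fouling are hypothetical stand-ins for outputs of a process model, and all numbers are illustrative:

```python
def specific_energy(energy_gj: float, feed_tonnes: float) -> float:
    """Raw KPI: energy consumed per unit of feed (GJ/tonne)."""
    return energy_gj / feed_tonnes

def corrected_specific_energy(energy_gj: float, feed_tonnes: float,
                              feed_type_factor: float,
                              rate_factor: float,
                              fouling_factor: float) -> float:
    """Correct the raw KPI for feed type, production rate, and run time
    since maintenance so actual can be compared fairly against plan.
    The three correction factors are hypothetical model outputs."""
    return specific_energy(energy_gj, feed_tonnes) / (
        feed_type_factor * rate_factor * fouling_factor)

# Example: 9,000 GJ consumed on 3,000 t of feed -> raw KPI of 3.0 GJ/t.
raw = specific_energy(9000.0, 3000.0)
# A heavier-than-planned feed slate and a fouled furnace both inflate
# the raw number; dividing out those effects gives a comparable KPI.
corrected = corrected_specific_energy(9000.0, 3000.0,
                                      feed_type_factor=1.10,
                                      rate_factor=1.00,
                                      fouling_factor=1.05)
print(raw, corrected)
```

Only the corrected number can be compared against plan or across sites; the raw number confounds unit performance with feed and fouling effects.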
Virtual analyzers or soft sensors are a special case of model-based performance
monitoring and involve the use of common process measurements (temperatures,
pressures, flows, etc.) to infer a difficult-to-measure property using an empirical or
semi-empirical model. This is, unfortunately, one of the development areas where the claims
have outpaced reality by a large measure. However, progress has continued and there
are a number of actual installations where real value is obtained (12). Three key
limitations that are not always recognized are:
The estimate is only good within the data region used to train the model.
Unsteady state process conditions with a steady state model will not generally
yield acceptable results since the time constants in the process will normally be
different for different measurements.
Non-causal models can estimate current conditions but cannot be used to
predict future behavior.
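A minimal soft-sensor sketch, assuming a linear empirical model fit by least squares; it also enforces the first limitation above by refusing to extrapolate outside the training region. The tags, coefficients and numbers are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: routine measurements (temperature, pressure) and lab
# values of a hard-to-measure property; a linear relation is assumed.
temps = rng.uniform(150.0, 200.0, size=100)
press = rng.uniform(5.0, 8.0, size=100)
lab = 0.02 * temps + 0.5 * press + rng.normal(0.0, 0.05, size=100)

# Fit an empirical linear model y = a*T + b*P + c by least squares.
X = np.column_stack([temps, press, np.ones_like(temps)])
coef, *_ = np.linalg.lstsq(X, lab, rcond=None)

def soft_sensor(t: float, p: float) -> float:
    """Inferred property, valid only inside the training envelope."""
    if not (temps.min() <= t <= temps.max() and
            press.min() <= p <= press.max()):
        raise ValueError("outside training region: estimate unreliable")
    return coef[0] * t + coef[1] * p + coef[2]

print(soft_sensor(175.0, 6.5))   # interpolation: trustworthy
# soft_sensor(250.0, 6.5) would raise: extrapolation beyond the data
```

A production implementation would also need the dynamic compensation noted above; a steady-state fit like this one is only valid when the unit is near steady state.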
Predict
Predictive analytics
Predictive analytics is the general name for developing the best possible estimate of
the future behavior of the system of interest based upon a model and an estimate of the
current state. It includes a variety of techniques. In the predictive control example
above, it is the model between the manipulated and controlled variables. In the
maintenance example, it is the model relating deterioration in performance to potential
failure. In the supply chain example, it is the demand forecasting model. Note that the
control model is deterministic, i.e. there is a specific set of outputs calculated for each
set of inputs; the supply chain forecast model is statistical, i.e. a range of outputs is
calculated; and the maintenance model is event driven. These are the general types of
prediction models of interest to the process industries. Most
prediction model building approaches are application specific at this time. One overall
key issue in model development is the necessity to use independent, not dependent,
variables as the basis for prediction.
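The deterministic/statistical distinction can be illustrated with a naive demand forecast, assuming a random walk with drift (a deliberately simple model, with invented data): the point forecast is a single number, while the statistical output is a confidence band that widens with the horizon.

```python
import math
import statistics

# Illustrative weekly demand history (thousands of barrels).
history = [100, 103, 101, 106, 104, 108, 107, 111]

# Estimate drift and step-to-step variability from first differences.
steps = [b - a for a, b in zip(history, history[1:])]
drift = statistics.mean(steps)
sigma = statistics.stdev(steps)

# Forecast h steps ahead: the point forecast is deterministic, but the
# approximate 95% band widens with sqrt(h) - uncertainty grows with
# the distance forward from the current time.
for h in (1, 2, 4):
    point = history[-1] + drift * h
    half_width = 1.96 * sigma * math.sqrt(h)
    print(f"h={h}: {point:.1f} +/- {half_width:.1f}")
```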
Decide
As mentioned earlier, a key to good decisions is efficient evaluation of the full range
of potential solutions. Clearly, the improved modeling and computational capabilities
have resulted in a significant improvement in the refinery staff's ability to evaluate
alternatives. For example, if there was a production problem in one of a number of
process units, the normal reaction in the past was to correct the problem by following
the response pattern of previous similar outages. This was done not necessarily
because the staff believed that it was the optimal response, but rather because the time
available to respond and the available information did not support any other response.
Today, it is normally possible to analyze multiple possible responses and choose one
that reflects current actual demands and availabilities.
Optimization - Optimization is the general technique of determining the best set of
actions within the constraints imposed that maximize or minimize the specific result
desired. Most developments in refinery logistics planning, operations scheduling, and
advanced control algorithms are, in reality, developments in applied constrained
optimization. As optimization algorithms have become more computationally efficient
and as computer processing speeds have increased, we are able to model systems in
more detail with more independent variables and still complete the required optimization
calculations fast enough for the answers to be useful. For advanced control the
required execution time may be seconds or even milliseconds. In scheduling,
execution times of a few minutes are acceptable while for planning even an hour may
be satisfactory. Naturally the models and numbers of variables will be different. Linear
programming problems, which use the most computationally efficient algorithms, are
now routinely able to solve problems with as many as seven million constraint equations
(10). Mixed integer optimization algorithms, which have applicability to scheduling and
other problems, have similarly increased capabilities. The recent history of all of these
applications is the use of more complex and hopefully more realistic models that exploit
the rapid advance in computing power to permit solution in a reasonable time period.
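A toy constrained optimization in the spirit of the above: a two-product linear program solved by enumerating the vertices of the feasible region, since an LP optimum always lies at a vertex. The profits and constraints are invented for illustration; real planning LPs use dedicated solvers and millions of constraints.

```python
from itertools import combinations

# Illustrative two-product LP: maximize profit 4*x + 3*y subject to
#   x + y <= 100     (total feed available)
#   2*x + y <= 150   (processing capacity)
#   x >= 0, y >= 0
# Each constraint is written as a*x + b*y <= c.
constraints = [(1, 1, 100), (2, 1, 150), (-1, 0, 0), (0, -1, 0)]

def profit(x, y):
    return 4 * x + 3 * y

def feasible(x, y, tol=1e-9):
    return all(a * x + b * y <= c + tol for a, b, c in constraints)

# An LP optimum lies at a vertex: intersect every pair of constraint
# lines, keep the feasible points, and take the most profitable one.
best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel lines: no intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y) and (best is None or profit(x, y) > profit(*best)):
        best = (x, y)

print(best, profit(*best))
```

Here both the feed and capacity constraints are binding at the optimum, which is the typical situation in practice: the plan is pushed until it runs into limits.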
Real-Time Simulation - The increased use of real-time simulation as a tool for
learning about complex systems such as a refinery is one of the most significant of the
ongoing developments. This is most valuable in situations with very low tolerance for
error or with very infrequent occurrences. Normal examples include training refinery
operators to deal with emergency situations or with refinery start-up and shut-down.
The key improvement obtained is a faster and safer response to these types of
situations. An interesting development is the adoption of 3D virtual refinery
representations for this safety training. However, the use of simulation is not limited to
operator training. In fact, one of the biggest areas of increased use for this technology
is in overall business simulation, particularly in the logistics area.
Expert Systems - Another technology where the hype has significantly outpaced
reality has been in the use of expert system technology to assist in decision making,
most particularly as operator guidance systems. Much has been proposed but few
actual systems have been implemented and even fewer have stayed in use for multiple
years. The modeling of actual decisions has proven to be more difficult in practice than
anticipated. However, of perhaps more importance has been the difficulty in
maintaining the expert systems current as situations in the refinery change. However,
there remains a real need for such systems, particularly in the general area of abnormal
event detection, diagnosis, and prevention. See reference 16 for recent academic work
and reference 7 for some industrial comments.
Economic Benefits
There are many sources of benefits for the technologies discussed above. Smart
field devices and plant digital networks are often justified on the basis of reduced capital
costs versus alternate required investments and/or reduced maintenance
requirements. These can be quantified based on experience with similar installations
and can be substantial. Advanced controls and real-time optimization also have
developed methodologies for benefit analysis (17).
However, many of the developments in smart refining involve more, better and
faster measurements of process and equipment conditions and use of models to
analyze the data. How do we estimate the value of these developments or of a
database? Sometimes these economic benefits are calculated by multiplying a small
potential percentage improvement in production performance times a large number
such as product value and claiming that the result is plausibly the expected benefit.
The causal map between the technology implementation and the improvement in
production performance is not really specified. A close review of the claims shows,
however, that many developments are each claiming to achieve the same
improvement. The concept of diminishing returns seems absent. One source of
confusion in evaluating the benefits is that only the action, the implementation, actually
creates business profit or loss. How then, can we estimate the value of the improved
information permitting a better decision and implementation of a superior strategy?
Assume that we have determined the "optimum" operating policy for the refinery and
this generates an expected economic profit as shown in the figure below. Any estimate
that we have of the current best operating policy has some uncertainty that is
represented by the confidence limits around the operating line. Moreover, as we project
the optimum operating policy into the future, the expected confidence limits increase
and the increase is proportional to the distance into the future we project the optimum
policy. This uncertainty is reflected back into the present and creates uncertainty about
what the current best policy is. In other words, we now have most of the information to
tell us how we should have operated last week but we don't know precisely how to
operate today since it depends on events that will happen in the future.
Figure 9 - Prediction versus analysis/estimation
How can we improve the accuracy of the prediction of the future, which would permit
us to better decide how to operate today? In general, it will be enhanced by having more
accurate models, having a better estimate of the current state, and having more
information about future disturbances. The decision is improved by increasing the set
of feasible sequences considered, by better projection of the implication of the
decisions into the future including risk factors, and by the factors mentioned earlier of
better knowledge of the current state and more frequent evaluations. In simple terms,
the earlier a problem is detected, the easier it is to solve.
Further, many of the technology developments can be categorized by their reduction
in the expected error limits on estimates of current performance and predictions of
future system behavior shown previously in Figure 9. The cumulative effect of these
developments over the past thirty years has been a steady reduction in the uncertainty
of the planning projections, as illustrated in Figure 10 below. In simple terms, we are
able to predict better and hence make better decisions. In mathematical terms, this
corresponds to tightening the confidence limits around the projection into the future.
Figure 10 - Variance Evolution
Example
One of the most important process units in a refinery is the Fluid Catalytic Cracking
Unit. It operates by contacting a fluidized stream of hot granular catalyst with a
vaporized hydrocarbon feed in the reactor which induces a reaction to convert the feed
into a variety of lower-molecular-weight, higher-valued products. The catalyst is
separated from the hydrocarbon and sent to a catalyst regenerator where the heavy
reaction byproducts, "coke," are burned off the catalyst so that it can be reused.
Supporting the process operation is a hydraulic circuit of catalyst as it passes through
the reactor and regenerator. This hydraulic circuit generally operates with a relatively
low pressure gradient with some major valves, called slide valves, controlling the flow.
To ensure that hot hydrocarbons don't enter the regenerator, the pressure drop across
the regenerated catalyst slide valve is monitored. An upset condition, where
hydrocarbons do enter the regenerator, is called a "reversal" and is both dangerous and
expensive to correct. As a result, if a low pressure drop is detected across the valve
indicating that hydrocarbon might be about to flow in the wrong direction, the unit is
automatically shut down. Restarting the unit after a shutdown is expensive and the lost
production from the unplanned shutdown is also an economic loss. Avoiding
unnecessary shutdowns while maintaining safe operation is therefore a challenge. With
the circulating granular catalyst, small particles, catalyst "fines," are produced.
Occasionally these fines can plug the leads to the pressure drop transmitter, simulating
a low pressure drop and causing an unnecessary shutdown.
Figure 11 below shows how a modern smart transmitter with automatic detection of
a plugged impulse line can be used to correct this problem. The standard deviation of
the current measured signal is calculated and compared with the values when it was
first installed. If there is a significant reduction in the standard deviation, it is an
indication of the possibility of plugging. The alert is sent to the operator who can
investigate and avoid an unnecessary shutdown without any loss of safety. One major
refining group estimated that installation of this technology across their group of refinery
FCCUs would save at least $1 million per year in shutdown/startup costs and $3
million per year in lost production operating margin.
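The detection logic described above can be sketched as a comparison of the current signal standard deviation against the as-installed baseline; the threshold ratio, noise levels and data below are illustrative assumptions, not the vendor's actual algorithm:

```python
import random
import statistics

random.seed(42)

def plugged_line_suspected(signal, baseline_std, ratio_threshold=0.5):
    """Flag a possible plugged impulse line: the measured signal's
    standard deviation has dropped well below the as-installed value.
    The 0.5 threshold ratio is an illustrative assumption."""
    return statistics.stdev(signal) < ratio_threshold * baseline_std

# As-installed baseline: normal process noise on the pressure-drop signal.
baseline = [10.0 + random.gauss(0, 0.20) for _ in range(200)]
baseline_std = statistics.stdev(baseline)

# A plugged lead damps the noise: the mean still looks plausible,
# so only the loss of variability gives the problem away.
healthy = [10.0 + random.gauss(0, 0.20) for _ in range(200)]
plugged = [10.0 + random.gauss(0, 0.02) for _ in range(200)]

print(plugged_line_suspected(healthy, baseline_std))  # False
print(plugged_line_suspected(plugged, baseline_std))  # True
```

Note that a simple low-value alarm on the pressure drop itself could not distinguish this failure, since the plugged reading still looks like a valid (low) measurement.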
Figure 11 - Detection of Plugged Lines
[Two traces of transmitter output, PV (V) versus time (min): under normal process
conditions the signal shows visible noise; after the impulse lines plug, the reduced
variability triggers the alarm condition.]
Outstanding Issues
Clearly there have been many new developments in the smart refining arena and
many successful technology adoptions. However, there are numerous practical issues
that have delayed further implementation. While technology is part of the equation, it is
clear that the primary issue concerns individuals and organizations. The author's
experience is that the technology generally works, if not totally then at least partially.
However, many new technology implementations fail on the human issues involved.
Individuals and organizations are highly resistant to change. If you introduce new
technology but don't change the business processes to take advantage of it, obviously
the business benefits will be reduced. How to make individuals feel comfortable with the
new technology and how to fit the new decision models into an organization's existing
decision and power structure are the primary open questions. While these questions
may seem outside the normal range of enquiry for technologists, their answers may
continue to limit the rate of progress.
It is also important to retain a sense of proportion with regard to technology.
Improving refining productivity and efficiency is the goal, not technology development.
Quick approximate answers to the right question are more important than elegant
answers to the wrong one or precise answers to the right question delivered long after
the issue has passed.
Conclusion
Dramatic changes in computer and communication capabilities are occurring and
will continue to have a very large impact on refinery production. The trends in
manufacturing financial incentives, health, safety and environmental issues, and
refinery operating demographics are driving many of the potential uses. Significant
benefits can be obtained by taking advantage of these opportunities. Companies that
are the quickest to take advantage of these opportunities will benefit the most.
In other industries, developments are ongoing and perhaps illustrate the path
forward. The appliance division of a major manufacturer has already announced sale
of refrigerators, washers, and other appliances that receive instructions and report over
the web. It will not be too long until your doorbell rings and the repairman says, "I
received a request from your refrigerator to come and replace the drive belt."
Can process equipment be far behind?
Acknowledgement: This paper is partially based on an earlier one presented by the
author at FOCAPO 2003 (19).
References:
1. Belke, James C. Recurring Causes of Recent Chemical Accidents,
https://www.denix.osd.mil/denix/Public/Intl/MAPP/Dec99/Belke/belke.html; (1999)
2. Duguid, Ian; Take this Safety Database to Heart; Chemical Engineering; July, 2001; pp. 80-84
3. Energy Information Agency, Performance Profiles of Major Energy Producers, 2001, available
www.eia.doe.gov
4. Fayyad, U. and R. Uthurusamy (ed); Evolving Data Mining into Solutions for Insights and following
articles; Communications of the ACM; Vol 45 (8); August, 2002, pp. 28 ff
5. Franz, Neil, TRI Data Shows Emissions Declines for Most Category: Right to Know, Chemical
Week, April 25, 2002.
6. Franz, Neil; Report Tracks Nafta Region Emissions, Chemical Week, June 5, 2002
7. Hairston, Deborah, et al (1999) CPI Refineries Go Data Mining; Chemical Engineering; May, 1999
8. Harold, D. (2001) Merging Mom's Perceptive Power with Technology Creates Startling Results;
Control Engineering, April, 2001
9. Hawkins, Chris; Kooijmans, Rob; Lane, Steven (1999); Opportunities and Operation of a Multivariate
Statistical Process Control System; Presented Interkama; Hanover, Germany; 1999
10. Lustig, I; Progress in Linear and Integer Programming and Emergence of Constraint Programming;
Proceedings FOCAPO 2003; pp. 133 ff
11. Shanel, Agnes. Who will operate your plant? Chemical Engineering, Vol. 106 (2), pp. 30 ff
12. Siemens; Press Release; Express Computer; October 8, 2001
13. Sissell, Kara; Texas Relaxes Nox Mandate, Chemical Week, June 12, 2002
14. Shobrys, D. and D.C. White; "Planning, Scheduling, and Control Systems: Why can't they work
together?" NPRA 2000 Annual Meeting; Paper AM-00-44.
15. The Forbes Platinum List, Forbes, January, 2003 p.120
16. Venkatasubramanian, V.; Abnormal Event Management in Complex Process Plants: Challenges
and Opportunities in Intelligent Supervisory Control; Proceedings FOCAPO 2003, pp 117 ff
17. White, D.C.; Online Optimization: What, Where and Estimating ROI; Hydrocarbon Processing; Vol.
76(6); June, 1997; pp. 43-51
18. White, D.C. Increased Refinery Productivity through Online Performance Monitoring; Hydrocarbon
Processing, June, 2002
19. White, D.C.: "The Smart Plant: Economics and Technology;" Proceedings 2003 FOCAPO; Ft.
Lauderdale, FL; (2003)
20. Wortham, B., Drilling for Every Drop of Value, CIO Magazine, June, 2002
HYDROCARBON PROCESSING / JANUARY 2002
Maintenance & Reliability
J. D. Smart, Emerson Process Management, Singapore
With the enabling technology of FOUNDATION field-
bus, intelligent field devices are able to go far beyond
providing an accurate process variable. Information
generated by smart field instrumentation in hydrocarbon
processing plants can significantly improve production effi-
ciencies, enable open field-based control architecture, drive
cost-saving asset management solutions and enhance enter-
prise-wide information technology systems.
Since development of intelligent field instrumentation more
than 10 years ago, growth and utilization of the capabilities and
data available from these devices have been limited, largely by
widespread proprietary digital communication standards. The
introduction of device communication technologies and standards, such as
FOUNDATION fieldbus, is now enhancing the value of information delivered
from field devices throughout
process industry facilities.
This enabling technology is based on open, continuous communication of
information among intelligent field devices and
application-specific hosts
such as process automation
and asset management sys-
tems. Openness of the archi-
tecture protects the interests
of the end user, but it also
provides manufacturers
access to a larger number of
potential customers without
being locked out by propri-
etary protocols.
This trend toward interoperability, replacing different vendors' products easily
and effectively, encourages field device suppliers to find new ways
to add value to their products. Efficient use of device data is
the basis for a revolution that is expanding the role of intelli-
gent field devices to meet the business needs and marketplace
challenges of the hydrocarbon processing industry.
A new model. The field device revolution is centered on
reducing process variable uncertainty and enhancing device
functionality and diagnostics while providing more integrated
solutions around the desired process measurement. Fig. 1
illustrates the relationship between these four key areas of intel-
ligent field device development.
To fully utilize functionality and diagnostics improve-
ments in a field device, new emphasis must be placed on
reducing process variable uncertainty. There is no sense in hav-
ing an instrument capable of performing complex calculations,
such as dynamically compensated mass flow in a differential
pressure transmitter, if the calculation is based on an inaccu-
rate process variable.
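As an illustration of such a compensated calculation, the sketch below computes orifice mass flow as m = K * sqrt(dP * rho), with gas density from the ideal-gas law at the measured pressure and temperature; the orifice coefficient, molar mass and operating conditions are illustrative assumptions, not any vendor's algorithm:

```python
import math

def gas_density(pressure_pa: float, temp_k: float,
                molar_mass_kg: float = 0.016) -> float:
    """Ideal-gas density rho = P*M/(R*T); methane molar mass assumed."""
    R = 8.314  # universal gas constant, J/(mol*K)
    return pressure_pa * molar_mass_kg / (R * temp_k)

def mass_flow(dp_pa: float, pressure_pa: float, temp_k: float,
              k_orifice: float = 0.005) -> float:
    """Compensated orifice mass flow, m = K * sqrt(dP * rho), in kg/s.
    K lumps discharge coefficient and geometry; its value here is
    illustrative only."""
    rho = gas_density(pressure_pa, temp_k)
    return k_orifice * math.sqrt(dp_pa * rho)

# Same differential pressure, but a hotter (less dense) gas: without
# temperature compensation the flow would be overstated.
print(mass_flow(2500.0, 500_000.0, 300.0))
print(mass_flow(2500.0, 500_000.0, 350.0))
```

This is why multivariable transmitters measure differential pressure, static pressure and temperature together: the flow calculation is only as good as the density behind it.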
Reductions in process variable uncertainty go beyond
general improvements in accuracy. They encompass:
Minimizing all sources of measurement error under
actual field conditions
Improving device stability to ensure desired perfor-
mance is maintained over extended periods and changing field
conditions
Reducing response time to generate a representative
process variable signal.
By minimizing process variable uncertainty in this fash-
ion, manufacturers are able to
use the base sensor mea-
surement as a platform to
develop functionality and
diagnostics capabilities that
further enhance process per-
formance, reliability and
availability.
Added functionality sim-
ply means getting the trans-
mitter to do more. A wide
range of functionality
enhancements can be
achieved from highly accu-
rate and reliable sensor sig-
nals. The Fieldbus Founda-
tion already defines some 30
discrete and continuous func-
tion blocks that can be used for various control activities
including PID control. This does not, however, prevent man-
ufacturers from generating even more advanced functional-
ity. Multivariable technology, for example, increases the
number and type of measurements that can be achieved
with a single field device.
The role of the microprocessor in intelligent field devices
can also be expanded to incorporate complex computations
and data management. More advanced functionality can
include scalable field device designs that allow the end user
to match a device's performance to the requirements of the
application and easily upgrade it to changing requirements
in the future. Recent release of mass flowmeter electronic sets
that allow users to select and upgrade performance level,
number of process variables and desired diagnostics is one of the
first examples of scalability.
[Article title block: Fieldbus improves control and asset management - Substantial
benefits are realized from increased diagnostics and process data. Fig. 1 - The new
model shows the relationship between four key areas of intelligent device
development. Reprinted from the January 2002 issue, pgs. 55-57. Used with
permission.]
With a field device network,
device data are more readily
available for analysis and inter-
pretation to help support cost-
effective predictive and preven-
tive maintenance programs.
Internal diagnostics can encom-
pass more detailed analysis at an
electronic board and component
level to identify intermittent or potential failures before they
impact the device reliability. The diagnostics role can even be
extended to include external components associated with a mea-
surement point such as temperature sensors and impulse lines.
With reduced process variable uncertainty comes the abil-
ity to expand the diagnostics capability of the field device into
the process. Research shows that what was once considered sen-
sor noise is actually an indicator of conditions that exist within
the process. By analyzing specific characteristics and trends in
noise, field devices can identify and signal potential problems
with process variability or other physical assets (pumps, valves,
etc.) in a control loop.
To help ensure that desired performance, functionality and
diagnostics within critical measurement and control loops are
realized under field conditions, device manufacturers are pro-
viding a more integrated approach to applying the technology.
Easy-to-use application and engineering software, integration
of critical measurement point components, and development of
new best practice installation designs and procedures are
offered to ensure measurement integrity. Taking a more inte-
grated approach to the entire measurement point helps simplify
the application engineering process, delivers a more cost-effective
packaging of components and expands the manufacturer's
responsibility to include the entire measurement point. This is
a significant step by vendors toward assuring measurement
point reliability versus just assuming field device reliability.
Stepping into reality. When viewing a model, it is always
interesting to assess it against what's happening in the real
world. Surprisingly enough, intelligent field device develop-
ments based on the model proposed in this article are well
underway. The best instrument manufacturers recognize the need
to reduce process variable uncertainty and already publish total
performance and stability specifications for various field
devices. Resulting improvements in pressure and temperature
transmitters have demonstrated 3% to 4% reductions in process
variability and up to 80% reductions in field device calibrations.
Improvements in control valve technology and addition of dig-
ital valve controllers (DVCs) have resulted in 10% increases in
throughput, with over a twofold improvement in controllabil-
ity performance.
Functionality enhancements are also prevalent within currently
available intelligent field devices. The added functionality in dig-
ital control valve positioners means they can be field calibrated
within five minutes compared to previous methods that required
one to three hours. It is even possible for a pressure regulator to
indicate flow in applications that would normally use flow
recorders. Appearance of more and more multivariable devices
for industrial processes attests to the ability of manufacturers to
add functionality once they have confidence in the process vari-
ability of their products. Multi-
variable technology allows dif-
ferential pressure, absolute pres-
sure, process temperature and
dynamic mass flow compensa-
tion to be consolidated into one
field device. This has contributed
to reductions of as much as 42%
in capital and installed cost while
obtaining a 1% of mass flowrate
accuracy over a wider turndown
ratio. Field-hardened tempera-
ture transmitters accommodate
up to eight temperature inputs
(RTD and/or thermocouple) with a variety of function blocks for
averaging or differential temperature calculations. Field bundling
of several temperature points can reduce the cost per installation
point by as much as 50% to 65%.
As the ability to self-diagnose device health and integrity
improves, available information is too valuable to ignore. Stan-
dard temperature measurement options offering hot backup
redundancy are being expanded into detecting sensor drift and
predicting when a temperature sensor will fail.
Pressure transmitters now detect plugged impulse lines and
inform the operator that an apparently good measurement is, in
fact, not valid. Interestingly, most of these developments do not
require additional sensors or electronics. They simply utilize
existing information or measurements within the field device
itself to improve availability of the device for process control.
Control valve diagnostics and the ability to generate valve sig-
natures for online diagnostics allow many valve problems to be
easily isolated and remedied without the cost associated with
pulling a valve out of service and unnecessarily rebuilding it.
All of these developments in advanced field device diagnostics help hydrocarbon
processing facilities practice more preventive and less reactive maintenance. With
approximately 50% of the work accomplished in most organizations being reasonably
preventable maintenance,¹ potential cost savings from utilizing field device
diagnostics data are tremendous.
According to a study by Dow Chemical Company,² prior to
installing smart field devices, 63% of trips to the field by
maintenance technicians responding to requests from an oper-
ator found nothing wrong with the installed instruments.
Today's communications and remote diagnostics with intelligent
field instruments can eliminate much of this wasted time.
Current advances in device diagnostics have the potential to
reduce maintenance activities by another 32% by minimizing
or eliminating problems associated with drift, plugged impulse
lines and zero shifts in the field device. This proactive approach
to maintaining field device availability to provide a reliable mea-
surement for control also improves process availability while
drastically reducing maintenance costs.
The most exciting aspect of advanced diagnostics is the
ability to look into the process to diagnose control loop and other
physical and/or process anomalies. Field device information can
readily be shared on a fieldbus network. This data sharing
makes it possible to monitor and diagnose the health of a com-
plete loop through statistical process monitoring (SPM).
Field devices can statistically process internal information
or data from other devices in a control loop and use the infor-
mation to establish a set of base conditions. Operator config-
urable alarm points are then set against the base conditions to
alarm potential problems that could have a serious impact on
the process. Employing SPM in this fashion has the potential to improve
mass/energy balances; indicate fouling, leaks or obstructions in the
process stream; or detect variability problems within a control loop or a
number of loops. Software packages are already under development to help
interpret SPM data to assist operations and maintenance personnel in
identifying the root causes of process problems within regulatory flow
and level control loops.
Fig. 2. Recent developments in intelligent field devices relate to a
facility's potential to generate annual incremental revenue.
LK/7M/4-2002 Article copyright 2002 by Gulf Publishing Company. All rights reserved. Printed in USA.
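The baseline-and-alarm scheme described above can be sketched in a few lines of Python; the sample window and the three-sigma alarm limit are illustrative assumptions, not parameters from the article.

```python
from statistics import mean, stdev

def spm_baseline(samples):
    """Establish base conditions (mean and standard deviation)
    from a window of measurements taken during normal operation."""
    return mean(samples), stdev(samples)

def spm_alarm(value, base_mean, base_std, n_sigma=3.0):
    """Flag a measurement that deviates from the base conditions
    by more than the configured number of standard deviations."""
    return abs(value - base_mean) > n_sigma * base_std

# Baseline from a quiet period of flow readings (illustrative data)
base_mean, base_std = spm_baseline([50.1, 49.8, 50.3, 50.0, 49.9, 50.2])
print(spm_alarm(50.2, base_mean, base_std))  # normal reading
print(spm_alarm(54.0, base_mean, base_std))  # possible drift or plugged line
```

In practice the base conditions would be captured inside the field device during a period of known-good operation, as the text describes.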
It would be extremely counterproductive to allow applica-
tion or installation procedures to diminish the added perfor-
mance, enhanced functionality and advanced diagnostics of these
revolutionary field devices. It would also be wasteful not to take
advantage of newer integrated designs that this type of field
device can offer. Several developments are taking place to
deliver a more integrated approach in supplying intelligent
field devices. These developments range from applications
support software, to better methods of integrating existing
measurement point components, to radically redesigning how
field devices are mechanically connected to the process.
Integral manifold designs reduce potential sources for leak-
age by 5%, provide a pressure transmitter that can be installed
out of the box, and deliver installation savings in the range
of $80 to $250 per device. Studies of the impact of impulse tub-
ing on device performance, availability and cost-to-maintain
have led to some creative new practices for installing pressure
transmitters. These preengineered, preassembled and pressure
tested direct-mount packages standardize installation practices
to eliminate plugging problems and measurement errors associated with
impulse tubing. Studies show that such designs
reduce installed costs by 30% to 50%, while reducing impulse
line-related maintenance for 3,000 installed devices over a
three-year period to a total of just six work orders. Multivari-
able technology also promotes new designs that integrate sen-
sors and mechanical components to reduce the number of pipe
penetrations, thereby reducing capital and installation costs
by as much as 40% per device.
What's the benefit? Quantifiable benefits of the smart field
device to overall process operations vary with the type of pro-
cess, production capacity and products being manufactured.
However, it is possible to model contributions of intelligent field
devices in relation to key factors that inhibit process facilities
from generating incremental revenue.
Fig. 2 is a generic depiction of how recent developments in
intelligent field devices relate to a facility's potential to generate
incremental revenue on an annual basis. An annual incremental revenue
contribution of $5,000,000 is offered as a
benchmark value only. Actual value realized is process-depen-
dent and typically goes well beyond the benchmark level for
complex processes.
As previously outlined, intelligent field device performance
and functionality directly contribute to process variability
reductions. As a result, associated increases in capacity and the
production of more on-spec product generate incremental rev-
enue that ordinarily would not be realized using conventional
technology.
One of the first hurdles to countering lost revenue and
exploiting profit opportunities is the ability to actually use the
automated control capabilities of a process automation system.
Various studies indicate that 20% to 40% of control loops are
typically in manual mode and up to 80% demonstrate excessive,
correctable process variability. Enhanced functionality and
performance of intelligent field devices help minimize these
problems, allowing operators to turn on auto control.
Properly tuned control loops are vital for advanced process
control (APC) to function effectively. Reliability and perfor-
mance of field devices are the most significant elements in
implementing and optimizing APC. Refineries that are not
properly maintained and monitored can show significant per-
formance degradation in APC initiatives. Diagnostics and
maintenance data help keep field device performance and
availability at the levels necessary to maintain long-term APC
benefits. Asset management systems (AMS) enhance prof-
itability associated with these revenue opportunities by reduc-
ing instrument maintenance costs by as much as 50%. Use of
AMS software also provides added insurance that the physical
assets within the process will be available more often to gen-
erate the desired incremental revenue.
The impact of intelligent field devices on enterprise man-
agement is not easy to quantify. However, we know the value
of enterprise management programs is significant, and return
on investment (ROI) can be restricted by the process automa-
tion system and field devices. Inability to maintain and optimize
APC initiatives limits benefits achieved with enterprise resource
planning (ERP) directed manufacturing applications. The same
holds true for ERP applications that do not incorporate accu-
rate and real-time information from the process. A 1998 report
by the Gartner Group stated that the ROI from enterprise man-
agement programs can be reduced by half if they fail to provide
accurate and real-time process information.
End-user acceptance. Ready accessibility of reliable field-based
information is starting to ignite the imaginations of business
managers, who foresee integrating field-generated data with
higher-level management systems as the means of controlling
overall costs and enabling the enterprise to compete more vig-
orously in the worldwide marketplace.
A recent study3 stated that users recognize the need to upgrade field
devices to get the most out of their system network investment. The same
report refers to a study conducted by a leading control industry trade
publication that revealed over 60% of respondents in the south central
U.S. are considering implementing field networks in the next two to
three years.
The report cited a recent poll that indicated 71% of the users
were planning to place control locally in lieu of in-system con-
trollers. This significant trend toward integrating field device
networks is taking place to capitalize on the advanced capa-
bilities of intelligent field devices.
Obviously, there is growing acceptance of intelligent field
devices in process facilities. Benefits of asset management
systems are becoming too great to ignore and have resulted in
an increased demand for AMS software in new and existing pro-
cess facilities. Rapid acceptance of fieldbus technology is
another strong indicator. Combine this with increasing demand
for advanced field device functionality and diagnostics, and it
is evident that users are starting to recognize the true value of
intelligent field devices.
LITERATURE CITED
1 Oliverson, R. J., Preventable Maintenance Costs More Than Expected, HSB Reliability Technologies.
2 Sinclair, H., Site Maintenance Process Leader, Texas Operations, The Dow Chemical Company.
3 The ARC Strategies Report, Field Device and Sensor Strategies for the E-World.
Maintenance & Reliability
J. Denver Smart, C.E.T., has over 20 years of experience in process measure-
ment technology involving control instrumentation and continuous process ana-
lyzers. His responsibilities have included sulfur plant optimization studies, man-
aging pilot plant research projects, environmental quality control monitoring and
gas chromatography applications research. Mr. Smart is currently director,
PlantWeb marketing, Asia-Pacific, for Emerson Process Management.
Advanced Control Methods: Part 1 Purpose and Characteristics
By Matti Pulkkinen
(Originally published in the Honeywell In-House magazine, The Journal, #11).
The use of advanced control methods, such as fuzzy logic, neural networks and multivariable
control, was not widely adopted in process control until the late 1990s. The majority of these
methods had been known for several decades, as had a number of related control problems
awaiting resolution. Why did it take so long for the methods to become widely adopted?
There are probably two reasons. One is that the automation systems' capacity would have caused
extreme problems, for example, in the implementation of neural networks with the aid of systems
using 1980s technology. On the other hand, it is a fact that fuzzy logic would not have consumed
any more memory capacity than the PI controls used at that time. Nevertheless, fuzzy logic
applications remained extremely rare throughout the 1980s.
Another, and possibly the crucial motive, was provided by the application of design tools of an
entirely new type. The current applications and solutions that are based on advanced methods are
made and tested graphically, in the same familiar way as ordinary controls have been for more than
a decade. Since programming is no longer required, application design and testing have become
easier, and advanced control methods are now commonplace. In addition, it is obvious that graphic
application design and test environments have also facilitated the understanding of advanced control
methods, their functions and principles.
This series of articles will briefly introduce the best-known advanced control methods, their
principles of function, and most common applications.
The series is initiated by fuzzy logic.
What is fuzzy logic?
It has been said that fuzzy logic imitates the human way of thinking. It does not classify things as
true or false on a black or white basis, and does not draw conclusions from exact numeric values.
The following simple example from daily life describes a person's actions with the aid of fuzzy
logic concepts. The issue is to fill the kitchen sink with water of a suitable temperature. This is, in
fact, a chore that a person can do without much premeditation. But a complexity of rules and
objectives underlies it.
Simplified, the function's objectives are as follows:
To fill the sink with water to a suitable level for dishwashing
To have a suitable water temperature
To be able to drain the sink as necessary, avoiding excess loss of water.
Assuming that there are two separate faucets for hot and cold water, the control rules could be
stated as follows:
If the surface level is low and the water temperature suitable, then use the hot and cold
faucets equally to raise the surface level
If the surface level is low and the water temperature high, then use the cold faucet to raise
the surface level
If the surface level is suitable and the water temperature low, then use the hot faucet to
slightly add water
If the surface level is suitable and the water temperature high, then wait a while for the water
to cool
If the surface level is high and the water temperature low, then remove the drain plug for a
moment and add hot water
etc.
As the above, slightly formalized rules indicate, fuzzy logic is well-suited to describe how to use
common sense in various problem-solving situations. Similarly, it is also possible to apply
optimization-type intelligence to this task, since it can be seen to include multiple objectives.
Structure of fuzzy control
Figure 1 contains a simple "Kitchen Sink" process & instrumentation diagram with a fuzzy
controller. The internal structure of the fuzzy controller covering the two first rules is also included.
Fuzzy logic is actually a quite straightforward extension to normal Boolean logic. Fuzzy logic
theory is based on the extended Boolean values that include all real values between 0 and 1. Fuzzy
logic is very useful when classification of some arbitrary values is needed (e.g. process
measurement values). In the Kitchen Sink example the rules to control temperature and level are
based on verbal definitions of process properties:
Water temperature is defined to be Cold / OK / Hot
Water level is defined to be Low / OK / High
Fuzzification
Before the defined fuzzy rules can be applied to direct process measurements, the fuzzification
operation is needed. The fuzzification function can be interpreted to define membership values
between 0 and 1 for each property (e.g., the degree to which the water level measurement belongs to the groups
of Low, OK or High values). In Figure 2, the fuzzification function interprets the actual numerical
water level measurement value (0 to 100 %) to three fuzzy set values (= membership values in the
fuzzy theory) between 0 and 1. There are many different fuzzification methods in the theory, but in
this example a quite simple one is used to describe the procedure:
Figure 2. Fuzzification of a level measurement: the measured value Me1 = 30 %
on the 0 to 100 % axis is mapped to the fuzzy set values L1 = 0.35 (Low),
M1 = 0.65 (OK) and H1 = 0 (High).
Define the sets Low, OK and High (as center point of triangle)
Read the actual process value (e.g. 30 %)
Read the fuzzy set values from intersections of fuzzy set functions and vertical line set to
process value point of x-axis.
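The steps above can be sketched as follows; the triangular sets centered at 0, 50 and 100 % are an illustrative assumption (the values in Figure 2, L1 = 0.35 and M1 = 0.65, imply slightly different set positions).

```python
def fuzzify_level(x):
    """Map a level measurement (0 to 100 %) to fuzzy set values
    (membership values between 0 and 1) for Low, OK and High.
    Triangular sets centered at 0, 50 and 100 % are assumed."""
    low = max(0.0, 1.0 - x / 50.0)
    ok = x / 50.0 if x <= 50.0 else (100.0 - x) / 50.0
    high = max(0.0, (x - 50.0) / 50.0)
    return {"Low": low, "OK": ok, "High": high}

print(fuzzify_level(30.0))  # {'Low': 0.4, 'OK': 0.6, 'High': 0.0}
```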
The next step of the fuzzy logic controller is the evaluation of the if...then rules; after the fuzzification
step it is known how probable it is that the level is High or the temperature Cold, etc. What is
needed to evaluate the rules is a definition of the fuzzy AND/OR operations (i.e., how those functions
differ from their well-known Boolean AND/OR counterparts). There are many possible functions
available in fuzzy theory that fulfill the rules of the AND/OR operations; the simplest ones are
the (AND) Minimum and (OR) Maximum functions. Also common are (AND) Multiplication (= A x B)
and (OR) "Probabilistic OR" (= A + B - A x B). Figure 3 shows an example of how the different
AND/OR methods interpret two fuzzy set values A and B:
AND AND OR OR
A B MIN MUL MAX PROBOR
---------------------------------------------------------------------------
0 0 0 0 0 0
0 1 0 0 1 1
1 0 0 0 1 1
1 1 1 1 1 1
0.1 0.9 0.1 0.09 0.9 0.91
0.5 0.5 0.5 0.25 0.5 0.75
Figure 3.
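These operators are short enough to state directly in code; a minimal sketch that reproduces the rows of Figure 3:

```python
def fuzzy_and_min(a, b):
    return min(a, b)          # (AND) Minimum

def fuzzy_and_mul(a, b):
    return a * b              # (AND) Multiplication

def fuzzy_or_max(a, b):
    return max(a, b)          # (OR) Maximum

def fuzzy_or_prob(a, b):
    return a + b - a * b      # (OR) Probabilistic OR

# The 0.1 / 0.9 row of Figure 3 (up to floating-point rounding)
a, b = 0.1, 0.9
print(fuzzy_and_min(a, b), fuzzy_and_mul(a, b),
      fuzzy_or_max(a, b), fuzzy_or_prob(a, b))
```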
The evaluation of the rules by performing AND/OR operations actually defines fuzzy set values of
fuzzy "output sets". The probabilities of the Open/Close control actions are now known as fuzzy set
(membership) values between 0 and 1. There might be several rules that end up in the same control
action, and that is why the OR functions are needed to select the most probable fuzzy set value for
each possible control action.
Defuzzification
Defuzzification is the inverse operation to fuzzification; the function is used in this example to
interpret the fuzzy set values of Open/Close control actions to "crisp" real numbers defining the
actual valve positions in the scale of 0 to 100 %. There are also many different defuzzification
methods in fuzzy theory; the Sugeno method described here is one of the simplest. It is
based on scaled singletons whose center of gravity describes the actual "crisp" output value as
Figure 4 shows.
Figure 4. Sugeno defuzzification: the Close and Open singletons on the
0 to 100 % output scale are scaled by their fuzzy set values, and the
center of gravity of the scaled singletons gives the crisp output value.
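A sketch of this defuzzification step, assuming the Close and Open singletons sit at 0 % and 100 % of valve travel (an illustrative choice):

```python
def defuzzify_sugeno(w_close, w_open, close_pos=0.0, open_pos=100.0):
    """Crisp valve position as the center of gravity of the Close
    and Open singletons scaled by their fuzzy set values."""
    if w_close + w_open == 0.0:
        return 0.0  # no rule fired; keep the valve closed
    return (w_close * close_pos + w_open * open_pos) / (w_close + w_open)

print(defuzzify_sugeno(w_close=0.25, w_open=0.75))  # 75.0
```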
The internal structure of the fuzzy logic controller may seem somewhat confusing but fortunately
the fuzzy logic tools hide the internal structure from the designer and only a basic understanding is
needed to use fuzzy logic as control strategy definition language.
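Put together, one evaluation cycle of the kitchen-sink controller's hot-water valve might look like the sketch below; the set shapes, the two rules used and the singleton positions are all illustrative assumptions:

```python
def controller_step(level, temp):
    """One fuzzy controller cycle for the hot-water valve:
    fuzzify, evaluate two rules with min-AND, defuzzify with
    Sugeno singletons at 0 % (closed) and 100 % (open)."""
    # Fuzzification (illustrative set positions)
    level_low = max(0.0, 1.0 - level / 50.0)
    temp_ok = max(0.0, 1.0 - abs(temp - 40.0) / 20.0)
    temp_high = max(0.0, min(1.0, (temp - 40.0) / 20.0))
    # Rule 1: level Low AND temp OK   -> open the hot valve
    # Rule 2: level Low AND temp High -> keep the hot valve closed
    w_open = min(level_low, temp_ok)
    w_close = min(level_low, temp_high)
    if w_open + w_close == 0.0:
        return 0.0
    return (w_open * 100.0 + w_close * 0.0) / (w_open + w_close)

print(controller_step(level=20.0, temp=50.0))  # both rules fire equally -> 50.0
```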
When is it advisable to use fuzzy logic?
Fuzzy logic is particularly well-suited to control processes where the control is resolved through a
combination of several measurements. In addition, suitable fuzzy logic control targets include those
processes that are difficult to describe by means of a process model but are based on manual control
experience. Fuzzy logic is also an excellent tool in cases where the process involves strong
nonlinearities.
For more information and details, please visit the following address:
http://www.honeywell.fi/
Advanced Control Methods: Part 2:
Optimization: Maximization or Minimization?
By Matti Pulkkinen
(Originally published in the Honeywell In-House magazine, The Journal # 11)
The term optimization has entered everyday language as a concept that describes the
general improvement of things rather than actual mathematical, technological or economic
optimization. The actual concept of optimization is both narrower and more accurate, and
refers to the minimization or maximization of a specific objective function. Whichever is
referred to, will naturally depend on the situation and the objective function in question.
Regarding techno-economic optimization problems, typical target functions
to achieve minimization are the ones that describe costs (such as raw
material, operating or maintenance costs) in one way or another. The
target functions to be maximized most commonly describe the production
volume, (financial) productivity or profit.
An optimization problem inherently implies a number of constraints, due to the fact that
an unlimited solution to a specific optimization problem is generally not feasible in the
real world.
An unlimited optimum value may be found, for example, by setting an optimized variable
value to be infinitely high or low, or otherwise outside the sensible range of variable
values (such as negative production).
GENERAL TWO-VARIABLE OPTIMIZATION PROBLEM: constant z-value curves for the
objective function z = f_obj(x1, x2) are drawn together with constraints 1,
2 and 3, which bound the feasible region. Problem: max z = f_obj(x1, x2).
The solution is the point where the largest z-value curve appears within
the feasible region bounded by the constraint curves.
Linear and nonlinear optimization
In addition to the aspects above, optimization problems are generally divided into two
types: linear and nonlinear. In terms of mathematics, linear optimization problems are
easier to solve than nonlinear ones. Linear optimization problems are referred to as LP
problems (Linear Programming). Nonlinear optimization problems are typically
described by means of quadratic equations and are referred to as QP problems (Quadratic
Programming).
As we know, practical processes are typically more or less nonlinear. This means that QP
would be seen as a more natural approach in many cases. However, the aim in control
technology, and process control in general, is to use linear control solutions as much as
possible. In practice, this can be done to a considerable extent by describing the processes
as piecewise linear near the operating point. Converting a global nonlinear optimization
problem into a local linear optimization problem will make the actual mathematical
solution much easier, without compromising the solution accuracy, provided that the
validity range is not ignored in the local optimization solution.
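The local linearization idea can be sketched numerically; the quadratic characteristic and the operating point below are illustrative:

```python
def linearize(f, x0, h=1e-6):
    """Build a local linear model of f around the operating point x0,
    using a central-difference estimate of the slope."""
    slope = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    return lambda x: f(x0) + slope * (x - x0)

# Nonlinear process characteristic (illustrative)
f = lambda x: x ** 2
lin = linearize(f, x0=2.0)

# Near the operating point the linear model tracks the nonlinear one
print(f(2.1), lin(2.1))  # 4.41 vs. approximately 4.4
```

Farther from x0 the two curves diverge, which is exactly the validity-range caveat made above.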
Bringing optimization closer to the process
Putting theory into practice usually takes a long time. In optimization, however, these
steps were taken long ago and optimization has become one of the most frequently
applied advanced techniques in process control. Nevertheless, optimization has largely
remained a tool for so-called upper-level process control (at the mill-wide,
production-line and sub-process levels), regardless of the fact that optimization problems are
encountered on all process levels, including the basic control loop level. In recent times,
the basic process control loops' internal functions have been found to contain an
increasing number of predictive multivariable controllers, as well as other controller
types, that are largely based on the optimization theory.
It is also possible to find other optimization targets at the basic process control levels.
These may have been overlooked, in the shadow of financial optimization on the upper
level and control technological optimization solutions on the basic level. The said targets
include, for example, the optimization of individual process devices, the problems of
which are better expressed in terms of technological than economic objective functions.
Nevertheless, the efficiency and economy of the process is the sum total of all
components. This means that the entire process function hierarchy must be considered, in
addition to optimizing the production lines.
Example of fuel supply optimization
The following example illustrates how the process function being optimized is identified
and formulated into an optimization problem. The problem is to establish an economic
optimum value for a multi-fuel boiler's fuel costs, considering the price of various fuels,
their calorific values and other application constraints (for example, relating to the fuel
supply equipment and the boiler itself). An optimum state will be found by minimizing
the objective function that describes the fuel costs. The function can be described
mathematically as follows (when using fuel types 1, 2 and 3):
min Z = C1X1 + C2X2 + C3X3
where C1, C2, C3 represent the fuel price per consumption unit and X1, X2, X3 the fuel
volumes consumed. The minimum point in the above objective function is easily found
by setting all the fuel consumption values as zero (negative values are not feasible). This
means that the above is not sufficient to describe the entire optimization problem; the
constraints must be included. The first constraints were already defined above, in other
words:
X1 ≥ 0, X2 ≥ 0, X3 ≥ 0
Other constraints include the maximum values L̂1, L̂2, L̂3 for the fuel supply
equipment concerning each fuel type, which may be described as follows (any
lower limit values deviating from zero may also be used as necessary):
X1 ≤ L̂1, X2 ≤ L̂2, X3 ≤ L̂3
The last and most important constraint is that the fuel combination supplied to the boiler
must enable the boiler to produce the required power at all times. The constraint can be
described as follows:
H1X1 + H2X2 + H3X3 = P
where H1, H2, H3 represent the fuels' (measured or estimated) calorific values and power
P is received from the boiler's power controller. The entire formulation resulted in the
following group of equations which, when resolved, will indicate the fuel combination
that has the optimum cost efficiency:
min Z = C1X1 + C2X2 + C3X3
subject to the constraints
X1 ≥ 0, X2 ≥ 0, X3 ≥ 0
X1 ≤ L̂1, X2 ≤ L̂2, X3 ≤ L̂3
H1X1 + H2X2 + H3X3 = P
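Because the objective function and all constraints are linear, this small LP can be solved by loading fuels in order of increasing cost per unit of calorific value until the power demand is met. A sketch with illustrative prices, calorific values and supply limits:

```python
def optimize_fuel_mix(C, H, L, P):
    """Minimize Z = sum(C[i]*X[i]) subject to 0 <= X[i] <= L[i]
    and sum(H[i]*X[i]) = P: load fuels cheapest-per-heat first."""
    X = [0.0] * len(C)
    remaining = P
    for i in sorted(range(len(C)), key=lambda i: C[i] / H[i]):
        X[i] = min(L[i], remaining / H[i])
        remaining -= X[i] * H[i]
        if remaining <= 1e-9:
            break
    if remaining > 1e-9:
        raise ValueError("power demand cannot be met within the fuel limits")
    return X

# Illustrative data: three fuels, boiler power demand P = 900
X = optimize_fuel_mix(C=[2.0, 3.0, 5.0], H=[10.0, 12.0, 16.0],
                      L=[50.0, 40.0, 30.0], P=900.0)
print(X)  # cheapest fuel at its limit, the next one makes up the balance
```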
Real-time optimization
However, it must be taken into account that an optimum solution will only apply for as
long as the optimization problem remains unchanged. The example above shows that the
life cycle of a single solution equals the boiler power controller's operation cycle, which,
in turn, may be 1 s or even shorter, due to the fact that whenever the power requirement
(or the calorific values, prices or constraints) changes, a new optimum value must be
resolved. This also means that an optimization problem of the above type can only be
resolved by using the actual real-time process control system.
For detailed information on exploiting optimization in process control, please visit our
web site:
http://www.honeywell.fi
Integrating Data Reconciliation and Performance Monitoring Online
Page 1 of 15 Invensys-SIMSCI ESSCOR
Integrated Data Reconciliation, Process Modeling and
Performance Monitoring Online
Harpreet Gulati & Scott Brown
Invensys SIMSCI-ESSCOR, Lake Forrest, CA, USA
Email: Harpreet.Gulati@Invensys.com
Abstract
Today's economic climate requires a step-change in the timeliness and quality of
decision-making in plant operations. With decreasing operations staff, process engineers
have less time to analyze data, calibrate sensors or diagnose the root causes of
performance degradation. Yet there is increased emphasis on improving performance and
cutting cost. Despite the availability of a wealth of data from the DCS and plant historian,
there is little online validation or analysis of this data. Currently, plant engineers typically
use Excel or offline simulation tools to analyze performance and for process
troubleshooting. However, due to data inconsistencies, offline performance calculations
are viewed as inaccurate, and engineers have difficulty recognizing and
troubleshooting performance problems.
This paper discusses a novel approach to integrating data reconciliation and model based
performance monitoring online using a single model for both process modeling and data
reconciliation. New techniques that enable accurate data reconciliation and data error
detection in the presence of large data errors or complex non-linear cases are also
presented. Automating the flow of consistent process performance information is another
key component of this technology.
INTEGRATING DATA RECONCILIATION, PROCESS MODELING AND
PERFORMANCE MONITORING ONLINE
Harpreet Gulati & Scott Brown
Invensys SIMSCI-ESSCOR, Lake Forrest, CA, USA
Introduction
Plant Managers have always wanted to monitor the performance of their processes and
operating assets to make profitable operational decisions for both the short and long term.
Currently, this activity is mostly restricted to one-time offline analysis, material balances,
or comparisons to simple standards in custom spreadsheets. The time taken to generate
even these basic analyses has been too long to enable effective action based on the
results. Today's economic climate is pushing plant management to dramatically improve
the quality and timeliness of their operational decisions.
Equipment reliability and efficient operations are critical for long-term profitability and
safety. The dynamic nature of processes means that the performance of gas turbines,
compressors, heat exchangers and other chemical process equipment deteriorates over
time, forcing operating costs to rise. As operating plants get larger, the financial
implications of even relatively short production outages are dramatic. Operating
companies find it difficult to address problems or make improvements without knowing
either the exact performance of existing equipment or knowing the exact location of
process problems.
The result is a gap, a crucial gap, in the quality and timeliness of the information plant
operators need to make plants perform optimally. Most plant managers, engineers, and
operators are making daily operating decisions based on unreconciled snapshots of
operating data. This analysis does not take into account instrument drifts or errors, bad
data, or the inaccuracy of the data. How can plant personnel be certain the information
they are using to operate the plant is fresh and accurate?
Process simulation tools can do an excellent job at predicting the performance of existing
operating assets and understanding the deviation from targeted performance. However,
the process simulation tools are only as accurate as the quality of the input provided. The
data input to process simulation tools must be accurate, consistent and valid for the
results to be accurate and useful. Additionally, the predicted process performance
information is of little use if it is not readily accessible or provided in a timely fashion in
order to assist in operational decision-making. The modeling analysis should be done
online and the predicted process performance information needs to flow to the desktops
of engineers and operators who are the end users for the information.
This paper defines the essential requirements for an online model based performance
monitoring system and describes how it functions. The primary goal of an online
performance monitoring system is to extract useful and consistent information from raw
plant data to enable high quality operational decision-making. Integrating the process of
data collection, data reconciliation, and online simulation analysis can enable modeling
tools to become real-time operational decision support tools.
Todays Requirements for Online Performance Monitoring System
For process simulators to become decision support and operational troubleshooting tools,
they have to move closer to the near-real-time or online environment. A comprehensive
online modeling and performance monitoring system should encompass the following
elements to be considered complete:
• Connectivity with the plant's control system
Manually gathering information off-line simply cannot be done fast enough to
bridge the gap between process performance and operational requirements. The
solution must seamlessly interface with the existing IT infrastructure and allow
direct access to process data and performance parameters.
• Rigorous data reconciliation and validation
The solution must be able to detect and eliminate gross errors in plant data due to
instrument malfunction or drifting sensors. It should rigorously reconcile the
inconsistencies or small errors in the remaining data and provide calculated
estimates for missing data points based on a first principles approach. The result is
a reconciled and consistent set of information that can then be used for
performance analysis and process troubleshooting.
• Model accuracy
First principles models, which apply fundamental chemical equilibrium and
thermodynamic principles to model equipment performance, are far more accurate
than empirical or heuristic methods. The solution should include sophisticated
models for complex reactors such as FCC or ethylene cracking applications as well
as engineering models to predict the behavior of unit operations such as columns,
compressors, and heat exchangers.
• Low setup and maintenance costs
An efficient, user-friendly environment enables a shallow learning curve and lower
maintenance costs for an online solution. If the online modeling system is spread
over several databases or utilities, the effort required to maintain such a system can
be substantial.
• Full automation capability
Automatic operation must provide real-time speed while allowing operator
oversight. Automation of all process modeling tasks including case studies enables
plant personnel to concentrate on plant improvements rather than running software.
A manual operation mode has to be available when desired.
An ideal online performance monitoring system bridges the plant information quality gap
by automating the process of collecting then reconciling plant operating data and
distributing process performance information at specified intervals. While there are many
approaches to operational decision support, an approach based on predictive engineering
models and reconciled process information can substantially enhance the accuracy and
the detail of performance information on the plants equipments and processes.
Data Reconciliation and Gross Error Detection
All plant measurements are subject to a degree of uncertainty, some more than others.
Measurement errors can be described as either random errors, which are unavoidable and
normally small, or gross errors, which are often produced by faulty measuring equipment
and are, in principle, avoidable. These errors lead to imbalances in mass and energy
conservation equations.
The measured data can be corrected only by providing additional information. Many
filtering techniques used for data conditioning, particularly in process control
applications, rely on signal analysis and the elimination of outliers based on thresholds
calculated from historical information. However, the best correction of process
measurements is obtained by rigorous, model-based data reconciliation, which uses
process models as additional information alongside the measured data. Data
reconciliation optimally adjusts measurements from the plant to obey the conservation
laws of mass and energy.
The improvement in the accuracy of measurements allows better yield accounting for the
entire process and provides superior inputs for process optimization.
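As an illustration of this principle, here is a minimal sketch (with invented stream names and numbers, not taken from the paper) of weighted least-squares reconciliation for a single splitter mass balance:

```python
import numpy as np

# Invented example: splitter with feed F and products P1, P2 must satisfy
# the mass balance F - P1 - P2 = 0 (written as A @ x = 0).
A = np.array([[1.0, -1.0, -1.0]])     # constraint matrix
y = np.array([100.0, 61.0, 41.5])     # raw measurements (balance is off by -2.5)
sigma = np.array([2.0, 1.0, 1.0])     # measurement standard deviations

# Weighted least-squares reconciliation: minimize sum(((x - y)/sigma)**2)
# subject to A @ x = 0; for linear constraints the solution is closed-form.
S = np.diag(sigma**2)
x_hat = y - S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ y)

print(x_hat)          # reconciled flows
print(A @ x_hat)      # balance residual, now ~0
```

Note how the adjustment is spread across the streams in proportion to their variances: the least-trusted measurement (the feed, sigma = 2.0) absorbs most of the imbalance.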
Traditional data reconciliation systems are based on the assumption that no gross
errors exist in the measured data. The task of data reconciliation is to find an optimal
solution that satisfies the model conservation equations. However, if gross errors do exist,
as is often the case, the data reconciliation solution may be highly biased. Therefore, the
reconciled values for some measured variables may be worse than their corresponding
raw values. Consequently, it is imperative to identify and eliminate all gross errors
before the final reconciled solution is obtained.
Major Problems with Traditional Data Reconciliation and Gross Error Detection
Approaches
Traditional data reconciliation algorithms assume that the measurement errors follow a
normal distribution and only random or relatively small gross errors exist in the data.
Appendix A shows the details of statistical testing based on the traditional approach for
gross error detection. The statistical tests for gross error detection use the data
reconciliation solution previously obtained with data in gross error. Since this solution
may be highly biased for some measurements, the gross error detection tests may fail to
accurately locate all gross errors. Consequently, both the reconciled values and the
location of gross errors can be incorrect for certain measured variables.
The data reconciliation and gross error detection problem becomes even more difficult to
solve accurately when not enough measurements exist, i.e., when too little measurement
information is available to make all measurements redundant. When a
measurement for a redundant variable is missing (for example, the instrument is
temporarily out of service), a reconciled value for that variable can be estimated based on
the information provided by other measurements and the plant model. Adding more
instrumentation or additional constraints will definitely enhance the redundancy in
measured variables, but neither one of these two actions is easy to implement. One way
to ease this problem is to solve the data reconciliation problem simultaneously with
rigorous engineering models. A rigorous model is able to reconcile all measurements,
including pressures, lab readings and quality measurements, thus adding further
redundancy to the model.
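A one-line illustration of this kind of model-based estimate, with a hypothetical out-of-service flow instrument:

```python
# Hypothetical illustration: the flow instrument on stream P2 is out of
# service, but the balance F = P1 + P2 makes the variable redundant, so a
# model-based estimate can stand in for the missing scan value.
F, P1 = 101.7, 60.6        # reconciled values of the measured streams
P2_estimate = F - P1       # estimate for the unmeasured stream
print(P2_estimate)
```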
Another level of complication, especially for the gross error detection tests, comes with
nonlinear data reconciliation problems. The statistical tests for gross error detection and
the identification algorithms such as serial elimination were designed and tested for the
linear data reconciliation problem. In order to find the optimal parameters for the plant
operation, a large nonlinear parameter estimation problem is solved simultaneously with
data reconciliation. This simultaneous approach provides a consistent solution for both
measurements and model parameters and requires a Nonlinear Programming (NLP)
algorithm. That, in turn, calls for gross error detection methods designed specifically for
nonlinear problems.
New Estimators for Data Reconciliation and Gross Error Detection
Two approaches may be taken for gross error detection for nonlinear problems. The first
one uses model linearization, followed by application of statistical tests on a data
reconciliation solution obtained with the linearized model. This approach is very common
but may fail for very large flowsheets with highly nonlinear equations, such as those
coming from complex reactors.
Another approach is to find a solution method for data reconciliation that is able to
handle gross errors, that is, a method or estimator that is able to provide a data
reconciliation solution with reduced biases in the estimated values. The performance of
such robust estimators appears superior to the linearization approach, for reasons shown
below. With robust estimators, gross error detection is not absolutely necessary, unless
the information on gross errors is used for instrument maintenance. Simple gross error
detection statistical tests, usually reliable for relatively large gross errors in redundant
measurements, are also available.
We have recently implemented and tested two new estimators in addition to the least-
squares normal estimator typically used in the industry. The new estimators are the
Contaminated Normal estimator and the Fair Function Robust estimator. Appendix B
shows the technical details of each of the three estimators. The following case study on a
real industrial problem highlights the important performance difference between each
estimator approach.
Case Studies for a Refinery Crude Unit Process
In order to get a fair comparison of the performance for various estimators, a data
reconciliation problem free of gross errors has been initially set for these studies. A
refinery crude unit separation process with 47,436 equations, 47,248 unmeasured free
variables and 307 measured variables has been chosen for this analysis. The objective
function for the converged solution was very small, with no indication of any gross
errors. The model variables were then used to build simulated measured values for all
measured variables and tuning parameters included in the objective function as follows:
New Scan Value = Model variable value + Random error + Gross error
The random errors have been generated by a random vector generator (based on normal
distribution), which uses a random number generator and the standard deviation of each
measurement. Ten gross errors of various magnitudes (5 ≤ δ ≤ 12.5) have also been added
in various locations (δ represents the ratio of the magnitude of the gross error to the
standard deviation of the measurement random error).
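The scan-value construction above can be sketched as follows; the model values and standard deviations are taken from the first rows of Table 1, and the random-number seed is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)        # arbitrary seed

# scan = model value + random error + gross error, as in the case study;
# model values and standard deviations from the first rows of Table 1.
model = np.array([384.12, 561.533, 1.5917])
sigma = np.array([2.2222, 2.433, 0.0395])
delta = np.array([10.0, 7.5, 10.0])   # gross error magnitude ratios

random_err = rng.normal(0.0, sigma)   # N(0, sigma) per measurement
gross_err = delta * sigma             # gross error = delta * sigma
scan = model + random_err + gross_err
print(scan)
```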
The resulting data reconciliation problems were solved using various estimators. The
comparison results for three estimators, namely, the least-squares normal estimator, the
contaminated normal estimator, and the Fair function robust estimator are presented in
Table 1. Appropriate gross error detection tests or algorithms were used for identification
of gross errors.
GE location  Standard   Scan      Model           Reconciled Value
(Tag Name)   Dev.       Value     Value     δ     ND (1)    CND (2)   Fair (3)
TI0014       2.2222     406.342   384.12    10    388.157   384.239   384.087
TI0064       2.433      579.744   561.533   7.5   566.186   561.433   561.942
FI0020       0.0395     1.9738    1.5917    10    1.9208*   1.5938    1.6192
TI0029       2.7396     620.654   641.133   7.5   638.027   641.219   640.864
FI0019       0.00043    0.0375    0.0322    12.5  0.0365    0.0323    0.0329
FI0087       0.00049    0.0244    0.0291    10    0.0267    0.0291    0.0289
FI0061       0.0126     0.3888    0.5148    10    0.3999*   0.5102    0.4130*
FI0040       0.0184     0.0964    0.0044    5     0.0041    0.0044    0.0044
TI0036       2.143      577.128   595.167   8     594.77    595.249   595.326
TI0068       2.2222     309.282   331.504   10    314.086   335.358   335.169
Additional GEs detected:                          FI0030    None      FI0081

Table 1  Reconciled values with Normal, Contaminated Normal and Fair Function
estimators for the Refinery Crude Unit process
* not detected in gross error
(1) GEDE used for gross error detection/elimination
(2) Contaminated normal distribution; p = 0.001, b = 20
(3) Fair function robust estimator; c = 0.05
Table 1 shows a snapshot of the results for the three estimators in this study. Since,
initially, when the measured values were equal to the model variable values, the
reconciled values were also close to the model variable values, we expect that a good
estimator will provide reconciled values close to the model variable values (the model
variable values are considered the true values). The following are some general
observations drawn from the simulated case studies:
- The reconciled values from the normal distribution estimator are significantly
biased for certain measurements.
- The reconciled values from the contaminated normal and Fair function robust
estimators are more accurate than those from the normal distribution estimator
(their reconciled values are much closer to the model variable values); choosing
the optimal estimator parameters is very important for these estimators.
- Gross error identification by serial elimination (GEDE) based on the normal
distribution fails to detect certain gross errors; because the reconciled values remain
closer to the measured values than to the model values, the corresponding
adjustments are very small.
- The gross error detection tests based on robust statistics estimators are able to
detect most gross errors. Occasionally, they also indicate other nonexistent gross
errors, but the number of incorrect gross errors is usually lower than with the
normal distribution estimator. The user is able to change some parameters
associated with the statistical test in order to reduce the number of measurements
falsely detected in gross error.
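The difference in bias between the least-squares and robust estimators can be illustrated on a deliberately tiny problem: a single quantity measured five times, the last measurement carrying a gross error. The numbers, the standard deviation and the Fair tuning constant below are all invented:

```python
import numpy as np

# Tiny invented problem: five measurements of one quantity, the last with a
# gross error; sigma and the Fair tuning constant c are also invented.
y = np.array([10.1, 9.9, 10.0, 10.2, 15.0])
sigma = 0.1

def fair_objective(x, c=1.0):
    u = np.abs(x - y) / sigma
    return np.sum(c**2 * (u / c - np.log(1.0 + u / c)))

# The least-squares solution is the mean; the Fair solution is found here by
# a simple grid search to keep the sketch dependency-free.
ls_est = y.mean()
grid = np.linspace(9.0, 16.0, 7001)
fair_est = grid[np.argmin([fair_objective(x) for x in grid])]

print(ls_est, fair_est)   # the gross error drags the LS estimate upward
```

The least-squares estimate is pulled well above the cluster of good measurements, while the Fair estimate stays close to it, which is the bias reduction described above.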
How an Online Modeling System Operates
This section describes the working of ARPM, a commercially available online
performance monitoring system. ARPM provides a single integrated graphical user
environment for process simulation and data reconciliation. The software uses a single
model database for both data reconciliation and performance monitoring and incorporates
several data conditioning and gross error detection techniques described in the previous
section.
The user first configures a simulation model that accurately represents current operating
conditions. The model is then connected to real process data via a process historian. A
typical online modeling sequence starts with a fresh download of process data (typically
hourly or daily averages) from the plant historian. ARPM then screens and removes gross
errors from the raw plant data. This involves simple checks for limit violations and
availability, as well as sophisticated statistical gross error detection methods described in
the previous section. The next step is data reconciliation where the system attempts to
reconcile the mass and heat balance discrepancies in the data. Since the simulation model
and data reconciliation model are the same, the data reconciliation step also attempts to
tune the model to best fit the operating data. Several process parameters, such as tray
separation factors, exchanger fouling factors, equipment and catalyst efficiencies etc.,
cannot be directly measured. The model tuning in data reconciliation mode is achieved by
optimally adjusting these unmeasurable, computed process parameters to best fit the
model predictions with process data. The rigorous process model then compares this
reconciled data to a pre-determined base case data of the entire process and flags any
results that indicate potential under-performing equipment.
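A sketch of such a flagging step, with hypothetical tags, base-case values and tolerance:

```python
# Hypothetical flagging step: compare reconciled performance parameters to
# their base-case values and flag anything shifted beyond a tolerance.
BASE_CASE = {"E-101 fouling": 0.0012, "C-201 efficiency": 0.78}
RECONCILED = {"E-101 fouling": 0.0031, "C-201 efficiency": 0.77}

def flag_underperformers(base, current, rel_tol=0.10):
    flags = []
    for tag, base_val in base.items():
        if abs(current[tag] - base_val) > rel_tol * abs(base_val):
            flags.append(tag)
    return flags

print(flag_underperformers(BASE_CASE, RECONCILED))
```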
Figure 1 ARPM directly accesses the process and equipment data from the plant
historian and extracts performance information
Figure 1 shows a rough schematic of the flow of information in an online modeling and
performance monitoring solution. ARPM accesses data directly from a plant historian,
calculates the performance parameters and distributes the appropriate information to
maintenance, process engineering and operations.
An important aspect of an online modeling system is the automation of data collection,
data reconciliation and simulation workflow. The Real Time System (RTS) within
ARPM provides this functionality to define a sequence of tasks that can automatically run
at a scheduled interval. The Real Time System has its own GUI, set up as a separate
subsystem, for configuring an operating sequence. Figure 2 shows a view of the user
interface of the Real Time System.
Figure 2 A view of the Real Time System for defining a sequence of modeling tasks to be
performed sequentially
Within the RTS, a Task performs a specific operation and is shown on the GUI as a
logic box. A large number of standard modeling tasks (such as download plant data, solve
model, output results, etc.) are available within the RTS, and the opportunity also exists for
a user to create a Custom Task (user-defined TCL scripts). A Sequence is a line of
Tasks that are configured to operate sequentially. An on-line sequence can be scheduled
to execute operations automatically at a certain hour. The on-line sequence can be opened
for inspection where the User can monitor the progress of execution but not modify either
the sequence or the Model Application.
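The task-sequence idea can be sketched in a few lines; this is a generic stand-in, not ARPM's actual RTS or its TCL scripting interface, and the task bodies are invented:

```python
# Generic stand-in for an RTS-style sequence: named tasks share a context
# dict and run in order; a real scheduler would add logging and error traps.
def download_plant_data(ctx):
    ctx["raw"] = [100.0, 61.0, 41.5]           # stand-in for a historian read

def screen_gross_errors(ctx):
    ctx["clean"] = [v for v in ctx["raw"] if v > 0.0]

def solve_model(ctx):
    ctx["balance_gap"] = ctx["clean"][0] - sum(ctx["clean"][1:])

def output_results(ctx):
    ctx["report"] = f"balance gap = {ctx['balance_gap']:+.2f}"

SEQUENCE = [download_plant_data, screen_gross_errors, solve_model, output_results]

def run_sequence(tasks):
    ctx = {}
    for task in tasks:
        task(ctx)
    return ctx

result = run_sequence(SEQUENCE)
print(result["report"])
```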
The Real Time System within ARPM can also be configured to run various case studies
as a part of the daily modeling and simulation tasks. These case studies can include
predicting the consequences of performing maintenance in a certain area of the process
units, predicting the economic impact of cleaning a particular heat exchanger, or
determining the best maintenance action to improve bottom-line profitability. By
automating the download and reconciliation of process data, ARPM essentially provides
an evergreen self-tuning model of the process. Process engineers can use these models
anytime to perform troubleshooting and debottlenecking studies or offline advisory
optimization.
Integrated Online Modeling, Data Reconciliation and Performance
Monitoring in Practice
A typical online modeling application starts with a rigorous process model connected to a
database of measurements. Periodically (e.g. daily) the measurements are downloaded
and after preliminary data screening, a Data Reconciliation run reconciles the mass and
heat balance discrepancies in the data. The data reconciliation step also attempts to tune
the model to best fit the process data. This model tuning is achieved by adjusting certain
unmeasurable variables, such as efficiencies and fouling factors. The reconciled values
indicate the current performance of the equipment and are exported to a historian or
spreadsheet for trending and analysis.
Figure 3 Calculated fouling factor for a heat exchanger
Figure 3 shows the calculated fouling factor of a heat exchanger over one year. The sharp
increase in the fouling rate around day 300 warrants investigation. Perhaps this was an
unintended consequence of an operational change or perhaps the anti-foulant rate needs
to be increased. In any case, the problem could have been detected shortly after day 300
by simply viewing the graph.
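A minimal sketch of how such a change could be flagged automatically rather than by eye; the fouling series and the day-300 breakpoint are synthetic:

```python
# Synthetic fouling series: slope 0.001/day for 300 days, then 0.005/day.
fouling = [0.001 * d for d in range(300)] + [0.3 + 0.005 * d for d in range(65)]

def rate(series):
    # average day-to-day increase over a window
    return (series[-1] - series[0]) / (len(series) - 1)

before, after = fouling[250:300], fouling[300:350]
print(rate(before), rate(after))   # the post-change rate is ~5x higher
```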
In many services, compressors are critical pieces of equipment with high operational and
maintenance costs, hence making them good candidates for online performance
monitoring. For example, a sharp drop in efficiency may forecast an unexpected problem
that requires immediate attention. On the other hand, a slower-than-expected decline in
efficiency may justify deferring maintenance, thus avoiding or shortening a costly
shutdown.
Figure 4 shows a portion of a compressor performance monitoring modeling application.
The tuning parameter labeled EfficiencyTune serves to adjust the shape of the efficiency
curve during data reconciliation runs. This model also adjusts the shape of the head
curve.
Figure 4 Compressor performance monitoring model
The accuracy of the reconciled efficiency depends on the size and detail of the model and
on the quantity and quality of the measurements. A rigorous model with many redundant
measurements yields the most meaningful results. Compressor efficiency is a measure of
the ratio of PV work to the work supplied to the compressor, so measurements on the
feeds and products and on the driving equipment are particularly valuable.
If one simplified this model by omitting the motor, the calculated efficiency would be
less accurate due to the absence of the motor-Amps measurement, which is the only
direct measurement of the compressor's power consumption. For turbine-driven
compressors, the measurements around the expander (e.g. T, P, flow) are important
sources of redundancy for the compressor efficiency.
ARPMs compressor model maintains two independent efficiency curves: the baseline
efficiency and the current efficiency. The baseline curve represents the efficiency that the
compressor would achieve following maintenance; it is typically regressed from the
manufacturer's data. The actual efficiency curve is initially the same as the baseline curve
but moves down as the efficiency deteriorates over time. This is shown in Figure 5A.
Figure 5 A) Baseline and Actual efficiency curves. B) Trending calculated efficiency
values.
The trend in Figure 5B shows counterintuitive results; the efficiency actually increases
between 100 and 200 days. This is easily explained by looking at the operating points in
Figure 5A. At day 200, the compressor is operating close to peak efficiency, while at day
100 it is operating in an inefficient condition.
Rather than trending the efficiency, it is better to track the offset from design efficiency at
the current operating conditions (ΔE = ActualEfficiency - BaselineEfficiency). This is
shown in Figure 6.
Figure 6 Trending the offset from baseline efficiency (ΔE)
By maintaining both the baseline and actual efficiency models, the online compressor
simulation model is able to automatically calculate this ΔE variable for trending. If the
user wishes to quantify the benefits of servicing the compressor, he can run a simulation
using the baseline efficiency instead of the actual efficiency.
Conclusions
Process modeling has rapidly gained acceptance worldwide as an essential tool for
process design and troubleshooting. However, the use of process simulation technology
for operational decision-making is still infrequent and tedious. Most simulation models
stop being used either because the model becomes out of date or because the data input to
the model is inaccurate. Consequently, plant managers, engineers and operators still rely
on simple custom spreadsheets to make important operational decisions. By integrating
data reconciliation and process modeling into a single near-real-time environment, and by
automating the flow of performance information, these process models can truly become
reliable operations support and performance monitoring tools and an important piece of
intellectual property for an enterprise.
Appendix A
Gross Error Detection
The most commonly used methods for detecting gross errors are based on statistical
hypothesis testing. In the gross error detection case, the null hypothesis, H0, is that no
gross error is present, and the alternative hypothesis, H1, is that one or more gross errors
are present in the system. All statistical techniques for choosing between these two hypotheses
make use of a test statistic, which is a function of the measurements and constraint model.
The test statistic is compared with a pre-specified threshold value and the null hypothesis is
rejected or accepted, respectively, depending on whether the statistic exceeds the threshold
or not. The threshold value is also known as the test criterion or the critical value of the test.
An example of such a statistical test is the Univariate Measurement Test (Z-statistic),
defined as:

    Z_i = |y_i - x_i| / sqrt( σ_y,i² - σ_x,i² )

where y_i is the measured and x_i the reconciled value of measurement i; the numerator is
the distance from the measured to the reconciled value and the denominator scales it with
the standard deviation of the adjustment. If Z_i > Z_crit, where Z_crit is the critical value
for the statistical test, a gross error is declared on
measurement i. The outcome of hypothesis testing is not perfect. A statistical test may
declare the presence of gross errors, when in fact there is no gross error (false alarm). On
the other hand, the test may declare the measurements to be free of error, when in fact
one or more gross errors exist. Therefore, simple statistical tests may not provide accurate
gross error detection.
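A numeric sketch of this test, using the TI0014 row of Table 1; the reconciled-value standard deviation is an assumed figure, since the paper does not report it:

```python
import math

# Univariate measurement test for one measurement: y measured, x reconciled.
# For a redundant measurement the adjustment variance is sigma_y**2 - sigma_x**2.
def z_statistic(y, x, sigma_y, sigma_x):
    return abs(y - x) / math.sqrt(sigma_y**2 - sigma_x**2)

Z_CRIT = 1.96  # ~95% two-sided critical value for a standard normal

# TI0014 from Table 1; sigma_x = 1.0 is an assumed reconciled-value std. dev.
z = z_statistic(y=406.342, x=388.157, sigma_y=2.2222, sigma_x=1.0)
print(z, z > Z_CRIT)
```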
Appendix B
New Robust Estimators for Data Reconciliation and Gross Error Detection
As mentioned above, one way to improve the accuracy of data reconciliation is to choose
an estimator for the data reconciliation problem that takes the gross errors into account,
or an estimator that is less sensitive to the presence of gross errors (robust estimators).
The least-squares estimator solves a minimization problem of the following form:

    min_x  F = Σ_{i=1..n} ( (x_i - X_i) / σ_i )²    such that f(x) = 0 and g(x) ≥ 0
If gross errors are present in data (e.g., large measurement biases), the objective function
above, which comes from the normal distribution, provides a biased data reconciliation
solution. One way to reduce the degree of bias in the reconciliation estimates is to use a
contaminated normal distribution. This distribution is less sensitive to gross errors of
medium size. The objective function for this approach changes to

    F = -2 Σ_{i=1..n} ln[ (1 - p_i) exp( -0.5 (a_i/σ_i)² ) + (p_i/b_i) exp( -0.5 (a_i/(b_i σ_i))² ) ]

where a_i is the adjustment (or offset), a_i = x_i - X_i, and σ_i is the standard deviation of
measurement i. In the equation above, p_i represents the probability of occurrence of a
gross error and b_i is the ratio of the standard deviation of the gross error distribution to
the standard deviation of the normal random error in measurement i. This approach
enables a simple statistical test (similar to the Univariate Measurement Test) for gross
error detection. The test simply declares a measurement i in gross error if:
    (p_i / b_i) exp( -0.5 (a_i / (b_i σ_i))² )  >  (1 - p_i) exp( -0.5 (a_i / σ_i)² )
This test is equivalent to the following detection rule: declare measurement j in gross
error if:

    |a_j| / σ_j  ≥  sqrt( (2 b_j² / (b_j² - 1)) ln( b_j (1 - p_j) / p_j ) )
The contaminated normal estimator is recommended for data with gross errors of medium
magnitude (1 ≤ |a_j|/σ_j ≤ 5). For small gross errors the contaminated normal estimator
behaves similarly to the normal distribution estimator. Parameter p_i can be chosen as an
approximate overall probability of gross errors in the data, while b_i is a tuning parameter.
A larger value of b_i (e.g., b_i = 20) makes the estimator more robust, and therefore less
sensitive to large gross errors. However, that will make the contaminated normal
estimator less efficient for the small random errors.
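A small numeric sketch of these quantities for a single measurement, using the tuning values quoted for the case study (p = 0.001, b = 20); the threshold follows the detection rule above:

```python
import math

# Contaminated-normal objective term for a single measurement, plus the
# derived detection threshold; p = 0.001 and b = 20 as in the case study.
def cn_term(a, sigma, p=0.001, b=20.0):
    u = a / sigma
    normal = (1.0 - p) * math.exp(-0.5 * u**2)
    contaminated = (p / b) * math.exp(-0.5 * (u / b) ** 2)
    return -2.0 * math.log(normal + contaminated)

def cn_threshold(p=0.001, b=20.0):
    # |a|/sigma above this value is declared a gross error
    return math.sqrt(2.0 * b**2 / (b**2 - 1.0) * math.log(b * (1.0 - p) / p))

# A 10-sigma adjustment is penalized far less than the least-squares 10**2,
# which is what keeps the reconciled solution from being dragged by it.
print(cn_term(10.0, 1.0), cn_threshold())
```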
Another class of estimators is based on robust statistics. Various robust estimators have
been proposed, which are insensitive to large gross errors. One of them is based on the
Fair function, which gives the following objective function:
    F = Σ_{i=1..n} c² [ |a_i| / (c σ_i) - log( 1 + |a_i| / (c σ_i) ) ]

where c is a tuning parameter.

The robust estimator based on the Fair function is recommended for data with gross errors
of medium to large magnitude (|a_j|/σ_j ≥ 3). For small gross errors the Fair function
behaves similarly to the normal distribution estimator. The selection of the tuning
parameter c is also important. A smaller value of c (e.g., c = 0.05) makes the Fair
estimator more robust, i.e., less sensitive to large gross errors. However, that will make
the Fair estimator less efficient for the small random errors.
Note that all statistical tests derived for the contaminated normal and the robust
estimators have a similar form, i.e.,

    |a_j| / σ_j  ≥  k_crit

The threshold k_crit depends on the estimator type, and it can also be manually adjusted
in the data entry window for the statistical test, so that fewer or more gross errors will be
included in the final collection of gross errors.
The ARPM system has embedded options for solving the data reconciliation problem and
performing gross error detection using the normal distribution estimator or the
contaminated normal and robust statistics estimators.
CONTRIBUTIONS NOT INCLUDED
The following submissions were not published because they were out of the scope of the
Handbook, did not apply to the hydrocarbon industry, were not online, or an installation
had not been completed.
Aspen Technology
Air separation
GE Automation Services
Smart analyzers
GE Drives & Controls
Gas pipeline transmission cost optimization
Intelligent Optimization Group
Air separation unit
OSA International Operations Analysis
Enterprise strategic optimization
Olefins strategic optimization
Strategic supply chain demand
Resolution Integration Solutions, Inc.
Crude assay viewer/editor
Document indexing and retrieval
Equipment inspection and testing
Equipment specification data management
Operator log-book
Technip
Advanced automation and remote surveillance (e-OSS) oil refineries and ethylene
plants