
Finite Element Analysis and Optimization

Introduction
Peter Budgell
E-mail Address: Please see main page.

Return to Home Page


FEA Modeling Issues Page
ANSYS® Tips Page

Finite Element Analysis

A few years ago, Boeing released a few television episodes on the development of their
777 aircraft. I found them fascinating. They included a destructive test of a prototype
wing. The wing was loaded in a test rig in a manner that simulated the application of
severe aerodynamic pressure during flight. As I recall it, the wing was specified to meet a
load of 150% of the design load. The design load was the maximum g-load that the
aircraft was permitted to experience. (If I remember correctly, the design load was 4 g's,
although it would never be intentionally operated at that level.) In the test, the wing failed
at 153%. Since there is a severe weight penalty in aircraft design and significant
overdesign would be unwanted, I consider this test result to be a fantastic
accomplishment. The precision of this result must have required superb Finite Element
Analysis, as well as material characterization, dimensional tolerancing, exact
manufacturing methods, and a precise test rig. Needless to say, the deformation of the
wing was "Large Displacement", so nonlinear analysis must have been used. I imagine
that it was an example of FEA at its very best -- a variation on The Wonderful One-Hoss
Shay in which no component is over- or under-designed.

What is Finite Element Analysis?

Finite Element Analysis (FEA) is a computer-based numerical technique for calculating
the strength and behavior of engineering structures. It can be used to calculate deflection,
stress, vibration, buckling behavior and many other phenomena. It can be used to analyze
either small or large-scale deflection under loading or applied displacement. It can
analyze elastic deformation, or "permanently bent out of shape" plastic deformation. The
computer is required because of the astronomical number of calculations needed to
analyze a large structure. The power and low cost of modern computers have made Finite
Element Analysis available to many disciplines and companies.

In the finite element method, a structure is broken down into many small simple blocks or
elements. The behavior of an individual element can be described with a relatively
simple set of equations. Just as the set of elements would be joined together to build the
whole structure, the equations describing the behaviors of the individual elements are
joined into an extremely large set of equations that describe the behavior of the whole
structure. The computer can solve this large set of simultaneous equations. From the
solution, the computer extracts the behavior of the individual elements. From this, it can
get the stress and deflection of all the parts of the structure. The stresses will be compared
to allowed values of stress for the materials to be used, to see if the structure is strong
enough.
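
As a minimal illustration of how the element equations are joined, consider two springs in series: spring k1 joins nodes 1 and 2, and spring k2 joins nodes 2 and 3. Each spring obeys its own simple force-deflection relation, and combining the two relations at the shared node 2 gives one set of simultaneous equations for the whole assembly:

    [  k1    -k1     0   ] [ u1 ]   [ F1 ]
    [ -k1   k1+k2   -k2  ] [ u2 ] = [ F2 ]
    [  0     -k2     k2  ] [ u3 ]   [ F3 ]

With node 1 held fixed (u1 = 0) and a force P applied at node 3, solving gives u2 = P/k1 and u3 = P/k1 + P/k2. A real finite element model does exactly this, only with thousands or millions of equations.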

The term "finite element" distinguishes the technique from the use of infinitesimal
"differential elements" used in calculus, differential equations, and partial differential
equations. The method is also distinguished from finite difference methods, in which the
steps into which space is divided are finite in size but there is little freedom in the shapes
that the discrete steps can take. Finite element analysis is a way to deal with structures
that are more complex than can be handled analytically using partial differential
equations. FEA copes with complex boundaries better than finite difference methods do,
and gives answers to "real world" structural problems. It has been
substantially extended in scope during the roughly 40 years of its use.

How is Finite Element Analysis Useful?

Finite Element Analysis makes it possible to evaluate a detailed and complex structure, in
a computer, during the planning of the structure. The demonstration in the computer of
the adequate strength of the structure and the possibility of improving the design during
planning can justify the cost of this analysis work. FEA has also been known to increase
the rating of structures that were significantly overdesigned and built many decades ago.

In the absence of Finite Element Analysis (or other numerical analysis), development of
structures must be based on hand calculations only. For complex structures, the
simplifying assumptions required to make any calculations possible can lead to a
conservative and heavy design. A considerable factor of ignorance can remain as to
whether the structure will be adequate for all design loads. Significant changes in designs
involve risk. Designs will require prototypes to be built and field tested. The field tests
may involve expensive strain gauging to evaluate strength and deformation.

With Finite Element Analysis, the weight of a design can be minimized, and there can be
a reduction in the number of prototypes built. Field testing will be used to establish
loading on structures, which can be used to do future design improvements via Finite
Element Analysis.

For an interesting note on the very beginnings of Finite Element Analysis, see this web
page on Courant.

What Is Required To Do Finite Element Analysis?

Finite Element Analysis is done principally with commercially purchased software. These
commercial software programs can cost roughly $1,000 to $50,000 or more. Software at
the high end of the price scale features extensive capabilities -- plastic deformation, and
specialized work such as metal forming or crash and impact analysis. Finite element
packages may include pre-processors that can be used to create the geometry of the
structure, or to import it from CAD files generated by other software. The FEA software
includes modules to create the element mesh, to analyze the defined problem, and to
review the results of the analysis. Output can be in printed form, or as plotted results such
as contour maps of stress, deflection plots, and graphs of output parameters.

The choice of a computer is based principally on the kind of structure to be analyzed, the
detail required of the model, the type of analysis (e.g. linear versus nonlinear), the
economics of the value of timely analysis, and the analyst's salary and overhead. An
analysis can take minutes, hours, or days. Extremely complex models will be run on
supercomputers. Usually, "Faster and Bigger is Better!" if you can afford it -- this might
be called "The fundamental theorem of finite element analysis." People who analyze
large structures, or run nonlinear models, will tell you that you can never get a machine
that is as fast as you would like -- hopefully, good business sense will prevail. Many
things can be analyzed in good detail on computers costing from roughly $2,000 to
$20,000. Budget for a large monitor, fast graphics card, large hard drive, large memory,
fast processor(s), and an appropriate printer -- faster printers are to be preferred. Color is
usually desirable, although you can sometimes live without it by doing gray-scale plots
where the palette for contour maps has been changed to a progressive gray-scale. Higher
prices will let you consider computers with more memory, larger hard drives, and one or
more high-speed processors. Depending on the complexity of the structures to be studied
and the volume of manufacturing, the expense for FEA hardware can be small in
comparison with the savings in weight and construction cost that can result from design
improvements, and speed of analysis. The expense can be very small in comparison to the
cost of a failure. Problem is, when there is no failure, how do you "prove" that the
investment was warranted? It helps if senior management trusts engineering staff and is at
least somewhat familiar with the nature of their work.

The background of a finite element analyst includes an understanding of engineering
mechanics (strength of materials & solid mechanics) as well as the fundamentals of the
theory underlying the finite element method. The analyst must appreciate the basics of
numerical methods. An engineering degree is typical, though not an absolute requirement.
Use of a particular finite element program requires familiarity with the interface of the
program in order to create and load the models, and to review the results. To do the work
well requires experience, comprehension of structures and their classical (manual
analytical) analysis, an understanding of a variety of FEA modeling issues (see link
below), and an appreciation of the specialized field in which the design work is taking
place.

Click for Information on FEA Modeling Issues

Types of Analysis on Structures

Structures can be analyzed for small deflection and elastic material properties (linear
analysis), small deflection and plastic material properties (material nonlinearity), large
deflection and elastic material properties (geometric nonlinearity), and for simultaneous
large deflection and plastic material properties.

By plastic material properties, we mean that the structure is deformed beyond yield of the
material, and the structure will not return to its initial shape when the applied loads are
removed. The amount of permanent deformation may be slight and inconsequential, or
substantial and disastrous. In metal forming, deformation is substantial and intentional
(consider the shaping of a fender for an automobile). In some structures, "shakedown" --
residual stress produced by local permanent deformation -- may in some circumstances
reduce fatigue problems in zones that, as a consequence, remain in compressive stress.
An example is the hydrostatic pressure test on a new, post weld heat treated, steel
pressure vessel (opinions on this may vary). In this test the pressure may be taken to 1.5
times the design pressure. Local yielding means that some zones will usually be in
compressive stress during conventional use of the pressure vessel, and may be less prone
to fatigue crack development.

By large deflection, we mean that the shape of the structure has changed enough that the
relationship between applied load and deflection is no longer a simple straight-line
relationship. This means that doubling the loading will not double the deflection. The
material properties can still be elastic.

In addition to analyzing structures for their stress and deflection, other typical analyses
are an evaluation of the natural frequency of vibration, and calculation of buckling loads.
Steady state, transient, and random vibration behavior can be analyzed, too.

Loads on structures can be represented by using the force of gravity on the mass of the
structure, by applying distributed pressure over surfaces of the structure, or by applying
forces directly to positions in the structure. Centrifugal load can be entered by indicating
the axis for the motion, and the rate of rotation. Displacements of the structure can be
specified at positions in the structure. This can include boundary conditions that imply
symmetric structures where only a portion of the structure is modeled. Other boundary
conditions will indicate where the structure is supported against movement, by the
outside world. Temperature distribution that causes thermal expansion and stress can be
applied directly to nodes or to elements with appropriate commands. Uniform
temperatures and reference temperatures can also be applied to full models.
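
As a rough sketch, typical ANSYS (APDL) commands for these load types look like the following; the numbers are placeholders, and the appropriate commands and sign conventions should be confirmed for your element types and program version:

    /SOLU
    ACEL,,9.81            ! gravity: ACEL applies the acceleration of the supporting frame,
                          ! so check the sign convention carefully
    SFA,12,1,PRES,5000    ! pressure on face 1 of area 12
    F,1001,FY,-2500       ! force applied directly to node 1001
    OMEGA,,,30            ! rotation about the global Z axis (rad/s) for centrifugal load
    D,2001,UX,0           ! specified (zero) displacement at node 2001
    TREF,20               ! stress-free reference temperature
    TUNIF,85              ! uniform temperature applied to the whole model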

Typical Modeling Difficulties

Certain modeling problems can be considered "typical". (Note: The Toolbox in ANSYS
has, by default, a save model button. This saves the database under the jobname. The
button can be used before employing any "dangerous" operation, such as modifying the
position of a group of keypoints or a group of nodes. Then a single press of the restore
button will restore the state of the model as it was last saved. Regular use of the save
button can prevent the loss of time-consuming work. There is no automatic save feature.)
A typical user modeling problem is the case of keypoints, lines, areas, volumes, nodes,
and elements that are identical and occupy the same space. This can lead to erroneous
models. Proper use of the merge command can eliminate many instances of these
problems. The merge can fail if, for example, two elements share the same space, but
were defined via alternative sequences of nodes (e.g. elements in the same place, one
numbered by nodes selected clockwise, the other counterclockwise).
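
A minimal sketch of the merge step follows; the tolerance values are placeholders and should suit the dimensions of the model:

    NUMMRG,KP,1E-4       ! merge coincident keypoints; lines/areas/volumes defined by
                         ! the same entities are then merged as well
    NUMMRG,NODE,1E-4     ! merge coincident nodes
    NUMMRG,ELEM          ! merge duplicate elements (fails when node ordering differs, as noted)
    NUMCMP,ALL           ! optionally compress the entity numbering afterwards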

Another problem is failure of keypoints, or lines, or areas to be shared by higher
geometric modeling entities. When this happens, the higher entities are not "fused" or
"welded" together as intended. Consequently, the elements will not share nodes along
what should have been the common boundary. The analyst must always use caution and
double-check everything while developing a model.

A problem most users will encounter is making an erroneous change to a model long after
the database was saved. The remedy is to use a text editor on the log file: extract the
portion of the log file written after the last time the database was saved or resumed
(whichever was most recent), and remove the offending command. That portion of the log
file is then run on the model database as it was when last saved or resumed.
Make sure you are in the correct part of ANSYS (usually /PREP7) when you read in the
instructions with /INPUT. The same method can apply if your computer is subject to a
power failure, or if ANSYS crashes without leaving an "ansabort.db" file. After re-
starting, take a text editor to the log file, and re-run the appropriate instructions on the
model database file as it was when last saved or retrieved. I've done this more times than
I care to remember, though less often with more recent ANSYS revisions and more
experience.
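
A sketch of that recovery sequence, assuming the jobname is "file" and the surviving commands have been saved to a file called patch.inp (both names are placeholders):

    ! In a text editor: copy everything in file.log issued after the last SAVE or RESUME,
    ! delete the offending command, and save the remainder as patch.inp. Then, in ANSYS:
    RESUME,file,db       ! reload the database as it was when last saved
    /PREP7               ! enter the processor the commands expect (usually /PREP7)
    /INPUT,patch,inp     ! replay the surviving commands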

The most common of all errors in Finite Element Modeling is the incorrect application of
loads and boundary conditions. This must be thought about very carefully. Most models
(not all) are prevented from undergoing free body motion in 2-D or 3-D space by
constraining at least a minimal set of degrees of freedom (2 translations plus 1
rotation in 2-D, and 3 translations plus 3 rotations in 3-D). Rotations can be prevented
either by having constraints on translations at enough distinct nodes in space, or by
directly constraining a rotational degree of freedom at a node. A common check on
results is to see whether the sums of the reaction forces at the constrained nodes equal the
sums of the applied forces and gravity loads.
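
In ANSYS, a quick version of that check after solution might look like this:

    /POST1
    SET,LAST             ! read the results set of interest
    PRRSOL               ! print the reaction solution; the listing ends with total values
    ! compare the totals with the sum of applied forces, pressures and gravity loading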

On rare occasions, the Finite Element Analysis model may misbehave as a result of
imperfect programming of the FEA software rather than the user's mistakes. An FEA software
package has to keep track of the relationships between keypoints, lines, areas and
volumes, including the changes that result from Boolean operations. In addition it must
keep track of the relationships between those geometric entities and the nodes and
elements that result from meshing, even when geometric entities are cleared, and the
model is modified. The process for coordinating all the pointers and tables that are
generated and changed is not perfect. There are times when the relationships will be
erroneous. An example is nodes that cannot be deleted because ANSYS claims they
are attached to elements, yet the ESLN command will not select any elements from them.
The same thing can happen with geometric entities. I have seen an area for which the
software thought one of the bounding lines was in a distant part of the model. The area
would not mesh. If the user cannot step back to an earlier "good" saved database, then the
offending parts may have to be deleted, and generated from scratch. The process of
archiving and restoring a model via CDWRITE and IGES can sometimes leave the
misbehaving portions of a model behind, if only the "good" portions of a model are
selected. The good news is that each release of the FEA software eliminates more of these
bugs.

ERRORS! If you have worked with inferential statistics, you will have heard of Type I
and Type II errors. There are also major types of errors in FEA work. They could be
classified something like this:

"Type 0" -- the model contains fundamental flaws: parts missing, not connected, or inappropriate details.
"Type I" -- the model does not properly represent the structure as built, or as recorded in the engineering drawings.
"Type II" -- the loads or boundary conditions do not represent the real world or the customer specification.
"Type III" -- failure to consider that a particular type of analysis was needed.
"Type IV" -- experience of the analyst inappropriate to the task at hand, plus inadequate training and supervision.
"Type V" -- wrong element type and associated option settings.
"Type VI" -- errors in applying design codes.
"Type VII" -- computer too small and slow to use fine meshing, run nonlinear analysis, and review results in sufficient detail.

The list goes on... The "Type III" problems can be "unsettling". Several years and a few
employers ago, I saw troubles when heat treating deformations were not considered soon
enough during design (not my design!). It is not impossible to imagine that failure to
consider buckling, or bending-compression on a beam with nonlinear analysis, or
reduction of bending cross section in a hollow tube subjected to external pressure, or a
prying load, or an unusual load configuration, or a fatigue detail, or overloading or
fracture-inducing loading of a weld, or flow induced vibrations, or missing stiffeners, can
lead to "unhappy events".

Element Connectivity Error, 8-Node Curved Shell Elements

In this image, the red stiffener was intended to be welded to the purple pipe. Note that the
elements of the red stiffener do not match up with those on the pipe. There is no
connection, and the meshing was done independently. This is due to a geometric modeling
error by the user (me). There are superimposed curved lines where the interface is
located. There should have been a shared line for the connection to have worked. I found
this only because of careful examination of the model -- I had already run a stress
analysis.

What to do about these error concerns? Read and think. Share ideas and concerns with
others, and listen to theirs. Review your own work, and the work of your co-workers.
(Recently an experienced co-worker who does not even do FEA work asked me if I had
eliminated the added mass of water in pipes when evaluating shipping loads on a product.
I hadn't. Eliminating the added mass got rid of a high-stress problem. These errors are
very easy to make.) Be friendly. Communicate with other departments. Have a check list
and design reviews. Never use FEA blindly, or believe the results of an analysis without
some critical review. Accept a critical review without taking it personally. Develop a
good understanding of the intent of the design codes that regulate your work. Consult an
expert when it is appropriate. Pay attention to the ethics and standards of your
professional association. Choose your employer wisely. (Some of these things you were
supposed to have learned in Kindergarten, but life isn't always that simple.)

Go Up to Top of Page

Engineering Optimization

What is Optimization?

Optimization of an engineering design is an improvement of a proposed design that
results in the best properties for minimum cost. One of the simplest examples is
determining the shape of a fence that will enclose the most area. If the fence can be any
shape, but only a certain amount of fencing is available, then a circle will enclose the
most area with the given amount of fencing. In order to minimize the amount of steel
used in manufacturing a cylindrical tin can, a certain relationship between the diameter of
the can and the height of the can is found. This will enclose a volume with the least
amount of steel used for the surface area.

In each of these simple optimization examples, there are two criteria. One is a criterion to
be made best: in the fence, it is the enclosed area; in the tin can, it is the amount of steel
in the body. The other criterion is a constraint on the design: in the fence, it is the amount
of fencing material available; in the can, it is the specified volume to be enclosed.
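
The tin can example can be worked out in a few lines of calculus (ignoring practical details such as seams and different end thicknesses):

    Minimize the steel area   A = 2*pi*r^2 + 2*pi*r*h     (two ends plus the wall)
    subject to a fixed volume V = pi*r^2*h.
    Substituting h = V/(pi*r^2):  A(r) = 2*pi*r^2 + 2*V/r
    Setting dA/dr = 4*pi*r - 2*V/r^2 = 0 gives r^3 = V/(2*pi), and hence h = 2*r.

In other words, the most economical closed cylinder is as tall as it is wide -- the height equals the diameter.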

In more elaborate problems encountered in engineering, there will be a property to be
made best (optimized), such as weight or cost of a structure. Then there will be
constraints, such as the load to be handled, and the strength of the steel that is available.

Constraints on the design are of two types. One is the Equality Constraint: a property of
the design must hit a specified value exactly. In our fence example, the length of the
fencing (the perimeter of the enclosure) was a fixed number -- an equality constraint. The
other is the Inequality (or one-sided) Constraint: a property of the design is only required
to stay above or below some value. In a structure, the stress throughout will need to be
kept below the yield strength (localized stress concentration regions excepted), but in
most of the structure it will fall well below that limit -- an inequality constraint.

Once a preliminary design has been developed, variations in some of the dimensions of
the design can be evaluated. The particular dimensions that will be permitted to be
changed are the degrees of freedom, known simply as variables. Some of the resulting
properties of the design will be required not to exceed certain boundary values, or
constraints. There may be constraints on the degrees of freedom, as well as on derived
properties, such as the stress in a structure.

If the initial design was feasible, it did not violate any constraints. Variations on the design
may result in properties that are an improvement. When the degrees of freedom have
been set to values that give the best possible properties for the design, the design is said
to have been optimized. In the case of the fence above, if we started out by trying a
rectangular shape, and eventually arrived at the circle, we would have optimized the
design. This would require that there was no constraint on the permitted shapes, such as
requiring that the fence be rectangular!

What is Required to Do Optimization?

Classical optimization is done manually with algebra, calculus, and the calculus of
variations. Problems with a variety of constraints may be handled symbolically using
Lagrangian multipliers. Many modern design problems are too complex to be handled
with purely algebraic symbolic methods. Computers are used for numerical assessment of
variations in a design.
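
As a one-line example of the Lagrangian-multiplier idea, take the rectangular version of the fence problem: maximize the area A = x*y subject to the perimeter constraint 2*(x + y) = P.

    L(x, y, lambda) = x*y - lambda*(2*(x + y) - P)
    dL/dx = y - 2*lambda = 0  and  dL/dy = x - 2*lambda = 0  give  x = y = P/4.

Among rectangles, the best shape is therefore a square; dropping the "rectangular" restriction and allowing any shape leads to the circle, which is a calculus-of-variations problem.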

Computer codes to optimize design have been developed ever since the inception of
modern digital computer use. Today, codes for optimization can be acquired for free, or
purchased as part of mathematical subroutine libraries. Some coding of the problem to be
solved is required, as are calls to the optimization subroutines to be employed.

A faster method suitable for many optimization problems is to use the optimization
engine bundled into spreadsheet programs, such as Microsoft Excel. (This is an option
that you must intentionally install.) Then the only significant work required is to put the
problem in spreadsheet form. If the problem is difficult to program, Excel spreadsheet
cells can reflect the results of code written in Visual Basic. The spreadsheet program then
does most of the work, and the user interface is easy to construct.

In Finite Element Analysis, optimization is more difficult, because each variation in the
design takes a significant amount of time to evaluate. This can make brute-force iterative
optimization excessively time consuming. The analyst will usually attempt nonlinear
optimization under both equality and inequality constraints, when optimization is used
with FEA. One approach is to run a small set of variations on the design, then fit curves
to the relationship between degrees of freedom, and the properties of the optimization
function and properties to be constrained. Software can be used to search this design
space, and suggest good starting points for the next set of design checks. Finite Element
Programs are beginning to have tools of this type built in. Alternatively, external tools can
be used to evaluate the design that is being tested in a small set of analyses. For further
information, see the Optimization section of the ANSYS Analysis Guides (included in the
Help Files).
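
As I understand the classic ANSYS design-optimization module (the /OPT commands described in the Analysis Guides of the 5.x era), a run could be scripted roughly as follows. The parameter names are placeholders, and the command set and syntax should be checked against your release, since this module has been reworked in newer versions:

    ! analysis.inp rebuilds the model from parameters T1 and T2, solves, and uses
    ! *GET to place the weight in WT and the peak stress in SMAX
    /OPT
    OPANL,analysis,inp       ! the analysis file that the optimizer will loop on
    OPVAR,T1,DV,2,12         ! design variables with lower and upper bounds
    OPVAR,T2,DV,2,12
    OPVAR,SMAX,SV,,150       ! state variable (inequality constraint): stress at or below 150
    OPVAR,WT,OBJ             ! objective function to be minimized
    OPTYPE,SUBP              ! subproblem-approximation (curve-fitting) method
    OPEXE                    ! execute the optimization loop
    OPLIST                   ! list the design sets that were produced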

It should be noted that substantial new optimization tools have been added to the ANSYS
product line in recent releases.

Optimization versus Innovation

Optimization, as I have used the term above, implies changing the setting of independent
variables in a continuous manner, to get best possible structure properties. An approach
like this will look at variations in an existing configuration, but not invent significantly
new configurations for a structure. When trying to improve a structure, or respond to a
defined need to support a set of loads with a newly created structure, there is far more
involved than the rather narrow definition of optimization that I have pursued above. The
analyst should keep in mind that creativity in finding a structural configuration should not
be sidetracked by a narrow approach to optimizing an existing shape. Innovation is often
more important in the early stages of a new job.

Finite Element Analysis
Modeling Issues and Ideas
Peter Budgell

E-mail Address: Please see my main page.

© 1998 & 2004 by Peter C. Budgell -- You are welcome to print and photocopy these pages.
These tips and comments are intended for user education purposes only. They are to be used at your own risk. The contents are
based on my experience with ANSYS 5.3 -- more recent versions may change things. The contents do not attempt to discuss all
the concepts of the finite element method that are required to obtain successful solutions. It is your responsibility to determine if
you have sufficient knowledge and understanding of finite element theory to apply the software appropriately. I have attempted to
give accurate information, but cannot accept liability for any consequences or damages which may result from errors in this
discussion. Accordingly, I disclaim any liability for any damages including, but not limited to, injury to person or property, lost profit,
data recovery charges, attorney's fees, or any other costs or expenses.

Return to Main Page


FEA and Optimization Introduction Page -- A Quick Overview of FEA
ANSYS® Tips Page -- My Collection of Tips on ANSYS Use.

Modeling issues include a host of topics. I will mention some that have been relevant to
my experience. After almost six years of continuous use of the ANSYS program, I
continue to learn new features of the software, discover more ways to represent or
approximate features, and develop new ways to get useful output information from the
models.

Example of Approximation: I wrote a macro to give the surface area (one side) of a previously
selected set of shell elements. A force divided by this area can be applied as pressure over these
shell elements, for smooth force application, if the elements are flat. Writing the macro required a
few lines of code that: determine the number of elements, get the first element identity, create an
array of correct size to hold data, put the areas of the elements into the array, sum the array entries,
report the result, and delete the variables and array. NOTE: The user must be careful to apply the
pressure to the CORRECT FACE of the set of shell elements. Force or pressure on a flat shell may
require Large Displacement (geometrically nonlinear) analysis.
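
A minimal sketch of such a macro follows. For brevity it accumulates the sum directly instead of storing an array, it assumes the shell elements of interest are already selected, and it assumes the *GET element AREA item is available for your element type and version:

    ! sumarea.mac -- sum the one-sided area of the currently selected shell elements
    *GET,ecount,ELEM,0,COUNT       ! number of selected elements
    *GET,e,ELEM,0,NUM,MIN          ! lowest selected element number
    totarea = 0
    *DO,i,1,ecount
      *GET,ea,ELEM,e,AREA          ! area of element e
      totarea = totarea + ea
      e = ELNEXT(e)                ! next higher selected element
    *ENDDO
    *STATUS,totarea                ! display the accumulated total
    ! a force FTOT would then be applied as pressure FTOT/totarea, on the correct face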

CONTENTS:

1. FEA is Approximate
2. Meshing
3. Shell versus Solid versus Beam Elements
4. Reduction of the model to a shell structure
5. Pressure on Shell Elements
6. Reflecting Part of a Model
7. Representation of Bolted Connections
8. Warning about Nodal Coupling
9. Development of geometry in which surfaces cut each other with shared lines
10. Application of boundary conditions
11. Application of loading
12. Pressure loading of a wall containing granular material
13. Deformation of thin flat panels by pressure loading
14. Use of Units
15. Buckling analysis and failure
16. Ramping Loads in ANSYS
17. Plotting results
18. Coping with Design Changes
19. Computer Aided Engineering Environment
20. FEA versus Hand Calculations
21. Choosing an Appropriate Shell Element
22. Using P-Elements
23. Harmonic Response
24. Failure Modes to Consider
25. Stress Limits and Margin of Safety
26. Representation of a group of bolts (or rivets)
27. Adequate Computer Hardware for FEA

MODELING ISSUES that are faced include (but are by no means limited to):

FEA is Approximate. The first issue to understand in Finite Element Analysis is that it is
fundamentally an approximation. The underlying mathematical model may be an
approximation of the real physical system (for example, the Euler-Bernoulli beam
ignoring shear deformation). The finite element itself approximates what happens in its
interior with interpolation formulas. The interior of a 2-D or 3-D finite element has been
mapped to the interior of an element with a perfect shape, so a severely distorted element
cannot deform in a manner that accurately matches the real physical response.
Integration over the body of the element is often approximated by Gaussian Quadrature
(depending on the element, an analytical integral can be either impractical or exceedingly
difficult -- I've done a few with the computer algebra system MACSYMA and the
number of terms can explode unless constants are extracted during the derivation and the
integrand is kept factored; some elements are said to be more accurate with numerical
integration at a limited number of points). The continuity of deformation between
connected elements is interrupted at some level. Badly shaped (by distortion, warping or
extreme aspect ratio) elements can give less accurate results. Elements approximate the
local shape of the real body. Numerical analysis difficulties such as ill-conditioned
matrices may reduce the accuracy of calculated results. A linear analysis is an
approximation of the real behavior. The loading of the model is an approximation of what
happens in the real world. The boundary conditions approximate how the structure is
supported by the outside world. The material properties assumed are approximate. Flaws
are not represented unless the analyst incorporates a model of a flaw. The overall
dimensions of the model approximate real structures that are manufactured within a
tolerance. Many details are idealized, simplified, or ignored. Element results may be
reported at integration points or nodes, not continuously evaluated with the interpolation
functions over the whole element interior. Stress and strain results are based on the
derivatives of the displacement solution, amplifying the errors.

The result of an analysis contains the accumulated errors due to all of the contributing
approximations. Good analysis and interpretation of results requires knowing what is an
acceptable approximation, development of a complete list of what should be evaluated,
appreciation of the need for margin of safety, and comprehension of what remains
unknown after an analysis.

Meshing. Production of a good quality mesh is a major topic. The mesh should be fine
enough for good detail where information is needed, but not too fine, or the analysis will
require considerable time and space in the computer. A mesh should have well-shaped
elements -- only mild distortion and moderate aspect ratios. This can require considerable
user intervention, despite FEA software promotional claims of automatic good meshing.
The user should put considerable effort into the generation of well-shaped meshes. This
will include setting element densities, gradients in element size, concatenation of lines or
areas to permit mapped meshing, playing with automatic meshing controls, and re-
meshing individual areas and volumes until the result looks "just right".

In ANSYS, the command "LSEL,S,NDIV,,0" will select all the lines that have not had
mesh density assigned. This can help find missed lines when setting mesh densities
manually.

On a curved surface, quadrilateral shell elements should not be generated with a warped
form. (The theory manual discusses shell element warping, but I suspect that the
discussion is more relevant to element deformation under load, than to the initial un-
deformed element shape. ANSYS will give warnings if there is more than very slight
warping of the original un-deformed quad shell element shape.) Quad shell elements can
sometimes be fitted to a cylindrical curve so that they are rectangular in shape and not
warped. On other curved surfaces, finely meshed triangular 3-node or four-sided curved
8-node shell elements may be needed. Mid-side node elements can follow complex
curved surfaces, so if they are capable of any nonlinearity that will be needed, they may
be acceptable and preferred. The 8-node Shell93 shell element of ANSYS has mid-side
nodes, follows curved surfaces, and supports nonlinearity.

Remember that most finite elements are stiffer than the real structure. For these elements,
a coarse mesh generally results in a structure that underpredicts deflection, and
overpredicts buckling load and vibration frequency. A coarse mesh is less sensitive to and
"hides" stress concentrations. A fine mesh generally gives an answer closer to the exact
solution. A fine mesh also results in larger models, more data storage, and longer model
solution and display times.

Shell versus Solid versus Beam Elements. Ideally, structures would be represented for
Finite Element Analysis by solid elements, for this would eliminate the problem of
positioning the mid-plane of shell elements, exactly represent the sectional properties of
components, and position welds in their design location. Unfortunately, there would have
to be several solid elements through the thickness of sheets of steel or aluminum to
capture local bending effects with any accuracy, and the other dimensions of the elements
would have to be kept small so that the aspect ratios of the elements were acceptable.
Consequently, the number of elements would be unbelievably large. It is not feasible to
model many thin-wall structures with solid elements.

Shell elements were originally developed to efficiently represent thin sheets or plates of
steel or aluminum, both flat and curved surfaces. They include out-of-plane bending
effects in their fundamental formulation, as well as transferring shear, tension, and
compression in the plane. Developing an interface between a shell portion and a solid
element portion of a model has a difficulty: Most solid elements do not include rotational
degrees of freedom at the nodes, and this results in a rotational "joint" if shell elements
are connected to a solid. Even if a solid element with rotational degrees of freedom is
used, the rotational stiffness at a solid's edge node is not appropriate for connection to
shell elements -- these solid elements were intended to be connected to each other. In
addition, high order solid elements like these are not usually capable of nonlinear
analysis. A modeling trick that is often used is to overlap one shell element with the first
element in a solid, and join the nodes in two locations in order to imply continuity of
rotations, as well as deflections. This is not a perfect fix. Rigid regions with node pairs
(rigid links with CERIG) may be used to enforce connection, although high local stresses
will result. Some finite element software may have tools to address this problem.

Of course, beam elements are even simpler and more efficient, when structures employ
beam-like details. There are occasions in FEA work when structural beams (including I,
wide-flange, channels and angles) will be more fully represented as shells or solids, in
order to examine in detail how they are behaving, or interacting with the structure where
they are connected to other parts. Structural steel tubing and rolled sections can
sometimes be simplified as beam elements. NOTE: Remember that when shapes are
simplified as beam elements, we lose the possibility of predicting flange buckling, web
buckling, and concentrated stresses, so caution must be used. Link elements will not
show bending stress or Euler buckling of a link.

On the XANSYS listserver, I have seen the opinion that the ANSYS PCG solver is not
significantly faster than the frontal solver with shell elements, because of the great
stiffness difference between in-plane deflections of shell elements, and out-of-plane
deflections. In the ANSYS manuals the PCG solver is not recommended where
significant numbers of coupled nodes (CP) and rigid regions (CERIG) have been defined.
Gap and contact elements may introduce the same problem. This has usually been my
experience. However, when modeling a perforated flat plate with shell elements that were
roughly square, all about the same shape and size, and as thick as they were wide, using
about 200,000 degrees of freedom, I achieved good convergence with the PCG solver.
The frontal solver could not fit this problem into my computer because of the size and
large wavefront. Of course, you can speed up the "solution" of the PCG solver by
accepting a larger convergence error. You know you are having PCG convergence trouble
when the convergence error is not decreasing monotonically (when it goes up and down
instead of dropping smoothly). The PCG solver is not recommended for use with
nonlinear solutions. One time when I tried it, I got a negative main diagonal term, which
would have resulted in bisection with the frontal solver and adaptive time stepping, but
crashed ANSYS with the PCG solver. However, for better behaved models, I have achieved
apparently good results with the PCG solver, with shell elements, in nonlinear Large
Displacement runs.

Reduction of the model to a shell structure. Shell elements are appropriate for many
steel structures, since the plates of steel are thin in comparison with their other
dimensions. (This applies to aluminum and other materials, too.) The ideal position for
the shell element is on the mid-plane position of the sheet of steel. Consequently, a
variety of approximations are needed to link parts of the model together, so that the
surfaces act as if they are welded together.

ANSYS supports shell elements for which the element thickness varies within the
element. This could require a REAL value for every element in order to input a different
shell thickness at each node. Input from external programs such as CAD packages
sometimes generates such elements and information. User-written macros are sometimes
employed to generate elements with varying thickness, or to set up REAL values for
existing elements, with the thickness that is assigned being based on node position.
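
As an illustration, a macro of that kind might give each selected SHELL63 element its own REAL set, with the corner thicknesses computed from the Y coordinates of its nodes. The taper law, the starting REAL set number, and the real-constant layout (that of SHELL63) are all assumptions to adapt to the actual model:

    ! taper.mac -- thickness t = TBASE + TSLOPE*Y at each corner node of each selected element
    TBASE  = 0.010
    TSLOPE = 0.002
    rnum   = 1000                   ! starting REAL set number, assumed unused
    *GET,ecount,ELEM,0,COUNT
    *GET,e,ELEM,0,NUM,MIN
    *DO,i,1,ecount
      rnum = rnum + 1
      tki = TBASE + TSLOPE*NY(NELEM(e,1))   ! NELEM(e,n) returns node n of element e
      tkj = TBASE + TSLOPE*NY(NELEM(e,2))
      tkk = TBASE + TSLOPE*NY(NELEM(e,3))
      tkl = TBASE + TSLOPE*NY(NELEM(e,4))
      R,rnum,tki,tkj,tkk,tkl        ! TK(I),TK(J),TK(K),TK(L) for SHELL63
      EMODIF,e,REAL,rnum            ! point the element at its own real constant set
      e = ELNEXT(e)
    *ENDDO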

There is a helpful if less-than-ideal fix for the case when somewhat thick shells overlap
each other, and are welded together. Place the shell mid-surfaces correctly in space, and
mesh them so that nodes where welds are used are positioned directly "above" one
another on the two surfaces. Join those nodes in pairs with rigid regions (CERIG) or with
massless high-stiffness beam elements. The beam elements have the advantage of
working properly in large displacement (geometrically nonlinear) solutions. The problem
with this technique is that it requires proper mesh control if the user wants to automate
generation of the model, and it is tedious to implement manually. In some cases it will be
desired to place gap or surface contact elements (with the gap set closed) between the
nodes or elements in the interior of the pair of shells, requiring more work. The gap
elements keep their original orientation in a large displacement solution, so they will not
be applicable in large displacement analyses (unless you can live with the error), and
surface contact elements will be needed. Surface contact elements on shell elements must
be applied to the correct face of the shell elements.

The following figure shows two areas that are offset with one above the other. Lines have
been created so that the CERIG command can be used to join them as if they were
welded together. Mesh densities have been set so that the rigid region pairs can be
created.

The next figure shows the same two areas after meshing and the creation of the rigid
region pairs with CERIG. The shell elements have been plotted with the shell thickness
shown, so that the positioning of the nodes in the center of the shell elements is visible,
and the touching of the plates is implied. Remember that rigid regions only apply
accurately with Small Displacement analysis.

Automating the creation of these CERIG pairs could be done with a macro that:

1. Has the user identify the set of lines on one surface, and the set on the other
surface.
2. Steps through the nodes on the first set of lines.
3. For each node on the first set of lines, uses a *GET command to select the closest
node from the nodes on the other set of lines.
4. Creates a rigid region from the pair of nodes.
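
A minimal sketch of such a macro is given below. It assumes the two sets of lines have already been put into components named LSET1 and LSET2, and it uses the NODE() get function (rather than a bare *GET) to find the closest node; names and details are placeholders:

    ! cerigpr.mac -- tie matching node pairs on two line sets together with rigid regions
    CMSEL,S,LSET1                   ! lines on the first surface
    NSLL,S,1                        ! nodes on those lines, including end nodes
    CM,NSET1,NODE
    CMSEL,S,LSET2                   ! lines on the second surface
    NSLL,S,1
    CM,NSET2,NODE
    CMSEL,S,NSET1
    *GET,ncount,NODE,0,COUNT
    *GET,n1,NODE,0,NUM,MIN
    *DO,i,1,ncount
      CMSEL,S,NSET2                 ! search only the second surface's nodes
      n2 = NODE(NX(n1),NY(n1),NZ(n1))   ! nearest selected node to node n1
      CERIG,n1,n2,ALL               ! rigid region: n1 master, n2 slave
      CMSEL,S,NSET1
      n1 = NDNEXT(n1)               ! next node on the first surface
    *ENDDO
    ALLSEL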

The macro would work as long as the nodes for the sets of lines are located "above" one
another by appropriate mesh control on the lines. A similar macro could join the node
pairs with the massless high stiffness beam elements mentioned. Alternatives to using a
macro include applying the CEINTF or the EINTF commands with appropriate tolerance
values. The reader is cautioned that this technique tells us little about the stresses in the
weld, or about fatigue, crack growth and fracture. A prying load applied to the above
example could tear the weld apart if the weld was small in comparison to the shell
thickness. The example does not illustrate good design practice for handling certain
loads. The FEA evaluation of loading of welds in shell structure models is a whole
separate topic.

Pressure on Shell Elements. In ANSYS, shell elements have two sides. These are known
as the TOP and the BOTTOM faces. They are also known as FACE 1 (the BOTTOM) and
FACE 2 (the TOP). The nodes I,J,K,L form a path around the element. If the "right hand
rule" is used on this path, with the fingers of the right hand following the path, then the
thumb points out of the TOP surface (FACE 2).

If positive (into the element) pressure is to be applied to FACE 2, a positive pressure
vector points into FACE 2, the TOP. If positive pressure is to be applied to FACE 1, a
positive pressure vector points into FACE 1, the BOTTOM. Areas act similarly.

If a simple primitive solid (for example a cube) is created in ANSYS, it is bounded by
areas. The areas will have FACE 1 on the inside surface, while FACE 2 is on the outside
of the solid. If the volume was deleted, and the areas that bounded the solid were to be
pressurized on the interior of the box that was formed, the pressure should be applied to
FACE 1 on all sides. In other models, where Boolean operations have been performed,
the FACE 1 and FACE 2 orientations get very scrambled.

For the user to apply pressure, careful checking must be used to assure that the correct
faces of shell element have been pressurized. ANSYS can plot elements or areas for
which the positive vector points out of the screen (when coming out of FACE 2), or when
it points into the screen. This lets the user plot only those areas or elements for which the
user sees FACE 2, or for which the user sees only FACE 1. This helps in choosing
whether to apply pressure to FACE 1 or FACE 2 when using picking to select areas or
elements. Alternatively, ANSYS 5.3 (and presumably later) plots shell elements with
different colors for FACE1 and FACE2 under PowerGraphics when the numbering
options are set with "No Numbering" and with "Colors" or "Colors and Numbers".

To add to the challenge, the direction of the pressure arrows (choose arrows to be shown
to indicate pressures under the SYMBOLS choice under PlotCtrls on the Utility Menu)
for areas may differ from the direction of the arrows shown for the elements attached to
those areas, depending on surfaces visible and sides to which the pressure was applied.
The arrow plots for the elements are the ones to believe. Pressures have to be transferred
from geometric entities to elements in order for these plots to take place. You have to
activate plotting of arrows with the /PSF command -- by default surface symbols are
used. ANSYS only plots pressure arrows on shell elements when the arrows point into the
screen, so you have to look at a model from all directions when inspecting a shell model.
Have fun!

Final notes on pressures: ANSYS can include a gradient in the applied pressure to show
the effect of, for example, pressure increasing as a depth of water increases. "Suction"
can also be applied by using a minus sign. Remember that "suction" in physically realistic
models cannot be applied beyond the point at which a liquid boils, or below zero
absolute pressure. ANSYS, however, does not limit the negative pressure values that a
user enters. The hydrostatic pressure of oil floating on water might be modeled by setting
the "zero" position of the water pressure gradient above the position where the water
starts, in order to include the pressure of the oil. A variety of other tricks can be applied.
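
A sketch of a water-pressure gradient on the selected areas, with SI units and a water surface at Y = 4.0 assumed (values are placeholders, and the load must still land on the correct shell face):

    SFGRAD,PRES,0,Y,4.0,-9810    ! pressure changes by -9810 Pa per metre of +Y,
                                 ! measured from the Y = 4.0 "zero" location
    SFA,ALL,1,PRES,0             ! zero pressure at the surface, increasing with depth
    SFGRAD                       ! reset the gradient so later pressure loads are unaffected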

Reflecting Part of a Model. Where symmetry in the design exists, only a partial model
need be built; the rest can be created by reflecting (mirror imaging) the geometry. Where
structure is repeated (e.g. a set of posts) multiple copies can be made.

Reflection in ANSYS can be done across the XY, YZ, or ZX planes of any ACTIVE
Cartesian coordinate system. Since the active coordinate system can be any local system
that the user has defined, any kind of reflection in 3D Cartesian space can be
accomplished.

If the reflection includes geometry, nodes, or elements that lie on the XY, YZ, or ZX
plane about which the reflection takes place, copies of those entities will overlay the
originals on the plane of reflection. Entity-appropriate merge (NUMMRG)
commands will be needed to connect the original and reflected entities. Warning: As
discussed elsewhere, elements lying in the plane of reflection get copied with the node
order reversed, and will NOT merge with the element from which they were generated.
These elements may have to be deleted, depending on your intentions.
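
A sketch of a typical reflection sequence (the tolerances are placeholders):

    CSYS,0               ! make the intended Cartesian system the active one
    ARSYM,X,ALL          ! reflect all selected areas (and their elements) across the Y-Z plane
    NUMMRG,KP,1E-4       ! merge coincident keypoints on the plane of reflection
    NUMMRG,NODE,1E-4     ! merge coincident nodes on the plane of reflection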

Representation of Bolted Connections. This non-trivial item can be tackled at a
simplified level, or with detailed 3-D representation. The simplest approximation is to
represent the bolted (or riveted) connection of overlapping shell structures by locating a
node of each surface at the location of the bolt. The nodes have to be located at the same
X,Y,Z location in space. This means offsetting one or both shells from its nominal
position so that the nodes and shells can touch. One then uses nodal coupling (the CP
command in ANSYS) to tie the X, Y, and Z locations in space. It will generally be
desirable to tie two of the three rotations as well. The only rotation that is free is that
about an axis perpendicular to the planes of elements (about the axis of the bolt). When
any rotations in a 3-D analysis are coupled (a result of the bolt clamping surfaces
together) the rotation coupling is generally valid only in a small displacement
(geometrically linear) analysis. Large displacement (geometrically nonlinear) analysis
introduces an error based on the difference between "sin(theta) and theta" (expressed in
radians). If contact surfaces are added between the shells that are bolted together, the
coupling of rotation is not needed, but the solution becomes a nonlinear iterative process,
taking several times longer. NOTE: Contact surfaces on shell elements have to be defined
carefully, so that the correct surfaces (Face 1 or Face 2) of the shell elements are the ones
in contact -- shell element orientation may need to be doctored to get this to work.
Another bolt representation is to use a rigid region to link pairs of nodes. Rigid regions in
ANSYS assume small displacement (geometrically linear) analysis. The degree of
freedom for rotation about the axis of the bolt must be free at one end of the rigid region
node pair, for bolt representation. This representation has the advantage that the shells
can be positioned properly in space. However, contact surfaces may become desirable,
depending on the dimensions of the clamped parts. The ANSYS rigid region (CERIG)
couples rotations about global axes, so the axis of the bolt would have to be along one of
the global axes for the rotational degree of freedom to be correct. The analyst may do
better to use a very stiff beam element, with incomplete nodal DOF coupling at one beam
end and shell, and the other beam end attached to the other shell; the rotational degree of
freedom about the beam axis is free at the end with the nodal coupling. A beam with
arbitrary orientation may require the nodes at the coupled end to have their coordinate
system rotated to have the rotational degree of freedom oriented properly (I haven't tried
this). The problem of contact surfaces remains. It can be partially addressed by using gap
elements at nearby nodes, for which the nodes of the two shell surfaces must be aligned
"above" one another, so the gap elements are perpendicular to the two shell surfaces.
Note: Gap elements keep their original orientation in a large displacement analysis, and
will not be applicable where there is significant rotation. Contact surfaces (with the gap
closed) may be needed where there will be large displacement. The previous warning
about applying a contact surface to the correct side of a shell element applies.
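
A minimal sketch of the simplest (nodal coupling) representation described above, for one bolt whose axis lies along global Z. The node numbers 1001 and 2001 and the coupling set numbers are placeholders; the two nodes are assumed coincident, with global nodal coordinate systems:

    CP,101,UX,1001,2001      ! tie the translations of the coincident node pair
    CP,102,UY,1001,2001
    CP,103,UZ,1001,2001
    CP,104,ROTX,1001,2001    ! tie two of the three rotations
    CP,105,ROTY,1001,2001
    ! ROTZ, rotation about the bolt axis, is deliberately left free
    ! (CPINTF can generate couplings for many coincident pairs at once)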

Note that nodal coupling acts in the coordinate system of the coupled nodes. The nodal
coordinate systems of the coupled nodes should, in general, be identical. The ability of
nodal coupling to act in the nodal coordinate system means that the user is not restricted
to coupling in global coordinate system directions.

Two of the previous bolt representation methods (nodal coupling and rigid region
CERIG) are missing the possibility of representing bolt preload. Preload can be implied if
a bolted connection is represented with a link or beam element that is capable of "initial
strain". In ANSYS these include: Link1 (2-D Spar), Beam3 (2-D Elastic Beam), Beam4
(3-D Elastic Beam), Link8 (3-D Spar), and Link10 (Tension or Compression Only Spar).
They must be squeezing surfaces together, which means that either nodal contact
elements (gap elements) or surface contact elements must be in use between separated
shell element surfaces, or that surface contact elements must be used on the interface
between touching 3-D solid elements or touching shell element surfaces.

Other ways to represent bolt preload include:

1. Use all 3-D representations of bolts and parts, use contact elements, and apply a
temperature difference to the bolt to cause it to "shrink" an intended amount.
2. Use 3-D elements and contact surface elements, with an initial interference
between bolt and parts, such that the initial interference results in the intended
preload.
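
A sketch of the first approach above (cooling the bolt), assuming the bolt is meshed with its own material number 2, does not share nodes with the clamped parts, and a trial temperature drop has been estimated from alpha*dT*E*A ~ intended preload; all values are placeholders, and iteration will be needed because the clamped parts are not rigid:

    MP,ALPX,2,1.2E-5         ! thermal expansion coefficient for the bolt material
    TREF,0                   ! stress-free reference temperature
    ESEL,S,MAT,,2            ! select only the bolt elements
    NSLE,S                   ! and their nodes
    BF,ALL,TEMP,-75          ! cool the bolt nodes by 75 degrees (trial value)
    ALLSEL
    ! solve, check the resulting bolt force, and adjust the temperature drop as needed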

Temperature setting, interference setting, and setting the "surface normal stiffness" value
of surface contact elements in ANSYS must be carefully done to result in the intended
preload. Setting the surface normal stiffness value appropriately is nontrivial. The
intended preload must exist BEFORE the structure is loaded. An iterative process may
help, but be time-consuming. If the bolts are not overloaded when the structure is loaded,
the bolt preload will be nearly unchanged when the structure is loaded. Whether any gap
or contact element friction coefficient should be included in the model needs to be
considered carefully for it can hide or prevent shear loading on the bolts. For
conservatism and safety, friction coefficients may need to be zero, so that the bolts take
all the load. When postprocessing, loading on bolts should be assessed using established
criteria.

My experience has been that if a full 3-D model of a bolted connection (bolt and
materials represented with 3-D elements, and contact elements on the surfaces) starts out
with the bolt loose and none of the contact elements touching, convergence may be
difficult when the solver begins work. Various analyst "cheats" may help, such as moving
the bolt or parts so that there is some contact, and/or using some very soft spring stiffness
combination elements to keep the model from "flying off into space", when the solver is
working to converge.

Warning about Nodal Coupling. Nodal coupling has its uses: one is a quick-and-dirty
representation of a bolted or riveted connection with shell elements (see above). More
exotic applications can be invented. When nodal coupling is used to represent a bolted
connection of 3D shells, the nodes that are coupled must occupy the same position in
space. Otherwise, body rotation at that part of the structure will result in an artificial
mechanism acting on the structure. If the nodes were tied in the X,Y,Z directions,
structure rotation would not result in the necessary change in the relative X,Y,Z positions
of the two nodes. High local stresses, and an external couple would result if the coupled
nodes were not located at the same position. This is not good!

Development of geometry in which surfaces cut each other with shared lines. The
lines must be shared between different areas if the finite elements are to act as if the
surfaces are welded together, when meshing takes place. Considerable care and checking
is always necessary as a model is built, to see that connectedness is complete. I can still
make errors of this type, for they sneak in even when I am being careful.

Hopefully, a beginning ANSYS user will have had some training in the development of
ANSYS solid geometry within /PREP7. New revisions of ANSYS improve the capability
of /PREP7, with not all improvements being publicized. I lived with ANSYS 5.0 and 5.1,
and much prefer the more recent ANSYS versions. The solid modeling engine does not
like singularities, e.g. you can't have a line that cuts half way through an area, the way
that you can cut half way into a sheet of paper with a pair of scissors. It is necessary to
cut an original area into two areas, in order to get a line that extends into the interior of
the original area. Recent ANSYS versions appear to be more tolerant of cusps and some
other difficulties. Development of complex structure solid-model geometry with /PREP7
calls on analyst creativity, intelligence, and puzzle-solving skills, as well as a good dose
of patience. This tends not to be understood by those who have never done the work.
ANSYS does not assign the attributes (REAL, MAT, TYPE, and ESYS) of a parent
geometric entity (Line, Area, or Volume) to the entities that are formed by a Boolean
operation such as dividing the original entity into parts. I consider this unfortunate, since
it increases the work required of the analyst who is developing the model, and makes it
easy to forget to assign attributes.

Application of boundary conditions. Structural FEA displacement boundary conditions
are the limitations on movement of the structure at places such as anchor locations. The
boundary conditions in a finite element model must limit translation or rotation in a
manner appropriate to the case at hand. Boundary conditions can be used to imply
symmetric behavior in a structure that has symmetry, so that the model size can be
halved, quartered, or similarly reduced, if the loading of the structure is also symmetrical.
Boundary conditions can also be used to imply anti-symmetry, for example, where a
warping displacement is applied to a symmetric structure (envision twisting a shoebox
about the long axis -- a quarter model could be sufficient).

There are occasions when a displacement boundary condition needs to be applied to a
single node so that the structure can rotate around the support point. This single node
support, however, can result in a serious local stress spike. Depending on the model, the
elements where the single node support will be applied might be artificially stiffened.
Alternatively, if there is a surrounding "pad", an even pressure could be applied to the
pad, that generates a force equal to the reaction otherwise found at the constrained node.
Two stress runs could be used: (1) Run without the pressure on the "pad" and find the
reaction at the constrained node. (2) Take the reaction, spread it smoothly over the pad as
a pressure, and run again. The reaction could be spread over nearby nodes at stiffeners,
instead of applied as a pressure, depending on the nature of the model and structure. The
goal here is to approximate reality in an acceptable way, while avoiding the time-
consuming use of contact and other non-linear elements. (Of course, in some cases, it will
be necessary to exactly model a support complete with many non-linear complexities.)
NOTE: If you do this, the reaction forces will no longer equal the previous applied load
plus gravity load on the structure, because of the new load that has been introduced.
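
A hedged sketch of the two-run approach follows; the node number, the area number, and
the pad area are made up, and the signs, loaded face, and remaining constraints must be
checked against your own model.

! For information only. Node 1001, area 25, and pad_area are made up.
! Run 1: solve with the single-node constraint, then capture the reaction.
/POST1
SET,LAST
*GET,ry,NODE,1001,RF,FY       ! vertical reaction at the constrained node
pad_area=4.0                  ! pad area in consistent units (assumed)
ppad=ry/pad_area              ! equivalent even pressure over the pad
! Run 2: keep the constraint, add the pad pressure, and solve again.
! Check the sign and the loaded face so the pressure pushes the same way
! the support reaction did; the node should then carry almost no load.
/SOLU
SFA,25,1,PRES,ppad
SOLVE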

Application of loading in a manner that is of satisfactory accuracy, without
becoming overly complex. It is often sufficient to apply forces directly to a small set of
nodes. However, better representation of loading can be needed to avoid local stress
spikes in some analyses. As discussed above, application of pressure over a region of
elements, producing the desired force, can help avoid a local stress spike. Artificially
stiffening a local region where a point force is applied can help, if this is acceptable.

The load to consider may need to be increased because of the possibility of dynamic
effects, if you are doing only a static analysis. Your industry may have standards for this.
Consider road vehicle design -- you wouldn't want the tires to blow out from the
increased force due to a vehicle roll-over. (If they did, how would you prove that tire
failure did not cause the accident?) This would call for the tires to withstand at least twice
the "normal max rating" without immediate failure. I once saw a non-professional driver
pulling a simple trailer grossly overloaded with crushed stone. It appeared that the wheel
bearings failed before the tires let go (there was a lot of smoke, so it was hard to tell).
Somebody did good tire design! (Some transportation structures have to be limited in size
under the knowledge that users will fill them to the maximum possible volume, without
regard to the density and total weight of the material loaded.)

For structures that do not have a severe weight penalty (e.g. those that do not have to fly),
getting a conservative result is often satisfactory. An analyst will develop a feel for this as
the result of experience in a particular industry. However, where there are high material
costs, or large volumes manufactured, extra modeling detail to reduce unjustified
conservatism may be economically sound.

Pressure loading of a wall containing granular material is particularly challenging.
Earth, sand, grain, coal, or other granular material pressure is a civil engineering topic.
Because of internal friction in the material, the lateral pressure on walls is usually less
than simple hydrostatic pressure would be for a liquid of the same average density. For
some dry materials, the pressure would be roughly 40 to 60 percent of hydrostatic
pressure (look up a proper value) on a vertical wall. The pressure loading varies with the
depth of the material, and varies if the slope of a wall changes (a horizontal surface could
see hydrostatic pressure). On the other hand, in a long column filled with granular
material, the pressure may be constant past a certain depth -- this affects the function of
an hourglass. The ANSYS Finite Element program is capable of applying a pressure with
a gradient, so pressure can ramp up smoothly as the depth increases. The pressure load
must be applied to the correct face of a shell finite element. Considerable FEA checking
is needed to assure that the whole structure model is properly loaded. Extra analyst work
is needed to apply a series of gradient loads that increase smoothly in intensity if
curvature of a wall or container surface causes change of slope. The Rankine formula
describes granular material pressure on a vertical wall. Non-vertical sides might require
the Coulomb formula for a more accurate representation of how a non-vertical slope
affects granular material pressure on a wall (visit a library, and talk to a civil engineer).
Take a look at EJGE/Magazine Feature for more information.
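
As one possible sketch of a pressure that ramps with depth (use at your own risk): the
slope, the free-surface height, and the element component name below are assumptions,
and the sign conventions and shell faces need checking for any real model.

! For information only. Values and the component name are made up.
/SOLU
ytop=120.0                    ! height of the free surface of the material
ybot=0.0                      ! bottom of the loaded wall
slope=-0.05                   ! pressure change per unit length in +Y
SFGRAD,PRES,0,Y,ytop,slope    ! zero at ytop, increasing with depth below it
CMSEL,S,wallel                ! element component for the loaded wall (assumed)
NSLE,S
NSEL,R,LOC,Y,ybot,ytop        ! only nodes at or below the free surface
SF,ALL,PRES,0.0               ! base value; SFGRAD supplies the depth ramp
ALLSEL
SFGRAD                        ! reset the gradient specification when done
! For shells, verify which element face receives the pressure
! (SFE can target a specific face explicitly).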

After creating loads that represent a granular material in a container under a 1.0 g
vertical load, the vertical component of the applied pressure should produce a total force
equal to the weight of the granular material. If it does not, it may be desirable to scale the
granular material pressures until it does. This should be checked when reviewing the
results of the analysis.

A perfect FEA model of containers (bin, hopper, hold, box, trailer, etc.) loaded by
granular material may be impossible. The pressure required to push inward and deform a
surface of a granular material is greater than the load with which the granular material
pushes outward. This is because of the internal friction in the material. A finite element
model of a loaded wall can include pressure on the inside surface that would result from
contained material. However, that pressure will not be adjusted according to whether the
wall moves inward, or expands outward, as the container deforms under various loads.
Since an FEA analysis results in deformation of the walls, exact representation of the
pressure loading will be unachievable. I have not been able to find an expert who would
say that a granular material nonlinear solid element finite element model can be included
inside a shell structure container model in a successful manner, using contact elements on
the interface between the solid elements and shell elements (geotechnical engineers
should know far more about this than I do). Material properties such as Drucker-Prager
are included in ANSYS and some other FEA packages, but I don't know if they are
applicable to this type of structure and granular material modeling. ANSYS manuals
discuss this material option briefly. An engineer often settles for a model and design
thought to be conservative or adequate, given industry experience. The worrying starts
when a design departs significantly from previous practice.

Deformation of thin flat panels by pressure loading causes the panels to curve. When
flat panels are loaded on one of their surfaces, the panels curve, then start to carry applied
loading with membrane forces. The only way in which this can be represented is to
activate large displacement (geometrically nonlinear) analysis. A rule of thumb is that
membrane forces begin to be significant when the out-of-plane deflection exceeds half
the thickness of the panel. Nonlinear analysis requires considerable experience, because
of the difficulty in achieving converged solutions. Failure to use nonlinear analysis where
it is appropriate can result in considerable ignorance of the real structural mechanics
involved. Nonlinear analysis becomes very time consuming because of the iterative
solutions needed. Fast computers are very desirable when doing this kind of work with a
large model. Failure to consider that significant out-of-plane deflection can result during
nonlinear analysis can, in some cases, lead to inadequate designs. In other cases, the
curvature can lead to significant increases in strength of the structure. The designer needs
to be aware of the need to include nonlinear effects in some work.

Use of Units. Vibration and transient analysis require that the mass of the structure be
entered in units consistent with the other units in the model. Some North American
industries normally work in inches-pounds-seconds. This requires that mass be
represented in units of lbf*sec^2/in, that is, pounds force divided by acceleration in
in/sec^2. "Pounds" here means pounds force, the force with which 1.0 g of gravity pulls
on the mass. In practice, divide the weight in pounds force, or the density in lb/in^3, by
386.1 (more accurate than 32.2*12=386.4), the acceleration due to gravity expressed in
inches per second squared (in/sec^2). When mass and mass density have been defined
this way (the density of steel, which depends on the alloy, if given as 0.2836 lb/in^3
would be entered into ANSYS as 0.2836/386.1 = 0.0007345), it is then necessary to enter
1.0 g of gravity as 386.1 in/sec^2 so that ANSYS applies the correct gravity force to the
structure. Loads will be entered in
pounds. Pressures and stresses will be referred to as pounds per square inch. ANSYS
refers to these units as "BIN" (see the /UNITS command for "British system using
inches", noting that the /UNITS command is for annotation of the database, and has no
effect on the analysis or data).

In the metric world, fundamental units are meters-kilograms-seconds. However, in
engineering work, analysts often model in millimeters. Forces are expressed in Newtons
(1 Newton accelerates 1 kilogram at 1 meter per second squared). Pressure is Newtons per
square meter (1 Newton/Meter^2 = 1 Pascal), and a pressure of 1 Newton per square
millimeter is 1 megapascal. When working in millimeters with forces in Newtons,
pressures, stresses, and Young's modulus come out in megapascals, and the consistent
mass unit is the tonne (1 Newton = 1 tonne*mm/sec^2), so the density of steel would be
entered as roughly 7.85e-9 tonne/mm^3. If mass is instead entered in kilograms with
millimeters, forces come out in millinewtons and stresses in kilopascals. Acceleration due
to gravity is 9.807 meters/sec^2, or 9807 mm/sec^2.

ANSYS does not care what units are used, nor does it issue warnings. The analyst must
be consistent in the set of units in one model, to avoid errors. Getting the mass and mass
density into the correct units is particularly important if any form of vibration, transient,
or transient heat transfer work will be done. Tip: Check the values for typical materials in
the ANSYS material library as a guide, even if you do not use these exact materials. A
comparison will indicate if your values are in the right range. The ANSYS materials
library includes material values in various systems of units. Many design codes will, for
example, give densities in lb/in^3, where pounds is actually the weight expressed as
"pounds force". This Imperial value cannot be used directly for vibration and transient
work, and must be converted. (When I try to explain this to non-North American people,
and even recent Canadian graduates, they think the whole Imperial units business is
insane -- I can't blame them.)

The usual question on Imperial units is, "Why can't I enter density for steel as 0.2836 and
1.0 g of gravity as 1.0 ?" The answer is, "This would work for gravity loading on a
structure, but if you ever do vibration or transient analysis on the same model in the
future, your answer will be garbage." My own policy is to always use the "correct" units,
similar to those that the ANSYS material library supplies for the BIN system, in case
vibration or other work is done in future.

If densities have been entered "correctly" in Imperial units (e.g. 0.2836/386.1=0.0007345
for steel), then when ANSYS reports the "mass" of the model during the SOLVE process,
that mass will have to be multiplied by "g" (386.1 in this example) to recover the weight
of the model in "pounds force".
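
A minimal sketch of consistent "BIN" input for steel and gravity follows; the material
number and the property values are typical assumptions for illustration, not a
recommendation.

! For information only. Material number 1 and the values are assumptions.
/PREP7
/UNITS,BIN                ! annotation of the database only; converts nothing
MP,EX,1,29.0e6            ! Young's modulus, psi
MP,PRXY,1,0.29            ! Poisson's ratio
dens=0.2836/386.1         ! weight density lb/in^3 divided by g in in/sec^2
MP,DENS,1,dens            ! mass density, lbf*sec^2/in^4
FINISH
/SOLU
ACEL,0,386.1,0            ! 1.0 g; the structure's weight then acts in -Y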

Buckling analysis, and the assessment of buckling failure, can be pursued in two ways:
linear eigenvalue buckling, and geometrically nonlinear (Large Displacement) buckling
analysis. Eigenvalue buckling (also known as Euler or classical buckling) will be sufficient for some
structures, but much greater detail about stress amplification and margin of safety can be
found with geometrically nonlinear analysis. Note that margin of safety is not a simple
concept in a nonlinear analysis. The margin of safety will be based on the difference
between the intended design load and either the load that reaches failure conditions or the
load that exceeds allowables set by design codes. The relationship between loading and
consequent stress and deflection cannot be extrapolated linearly when a nonlinear
analysis is used, or when it is needed. Design codes may address this concept with
reference to combined compression and bending of beams, but many codes were written
before the availability of nonlinear finite element analysis, so the analyst will need to
comprehend the intent of the design code and interpret it, if this is permissible.

A difficulty here is to establish what level of loading has reached "failure" conditions. If
the structure starts to buckle in a Large Displacement analysis, solution convergence will
become slow, as the load is ramped up. The fact that the FEA solution stops converging at
some level does not guarantee that the failure load has been reached -- it could be just a
numerical analysis difficulty. The Arc-length method is useful here, since it will follow
the load up and back down as the load/deflection curve first rises and then falls. An
advantage to Large Displacement, Plastic material property analysis is that the failure can
be followed in detail (if the model is small, or the computer is very fast). Defining margin
of safety still requires a human decision as to what load "reaches" unacceptable stress and
deflection, before complete collapse happens. Simply basing margin of safety on the
highest load reached in a plastic, Large Deflection, Arc-length analysis would not satisfy
the rules in most design codes, and usually not make good engineering sense.
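
For reference, a minimal eigenvalue (classical) buckling sequence might look like the
sketch below. The reference load, the number of modes, and the extraction method are
assumptions, and method names vary with the ANSYS version.

! For information only. Mode count and method are assumptions.
/SOLU
ANTYPE,STATIC
PSTRES,ON                 ! store the prestress state for the buckling pass
SOLVE                     ! static pass with a convenient reference load
FINISH
/SOLU
ANTYPE,BUCKLE
BUCOPT,LANB,4             ! first 4 load factors (Block Lanczos; varies by version)
MXPAND,4                  ! expand the 4 mode shapes for plotting
SOLVE
! The reported load factors multiply the reference load of the static pass.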

A problem with eigenvalue analysis of some structures is that localized "popping" of
panels or other components happens long before the whole structure begins to fail via
buckling induced deformation. The problem with geometrically nonlinear analysis of the
same structure and loading is that convergence troubles may make analysis exceedingly
difficult and/or time consuming. This is particularly true when applied force is ramped
up. Convergence is more readily achieved with applied displacement in nonlinear studies, but
applied displacement is not the most common way in which loads are analyzed. A
possible advantage of a geometrically nonlinear Large Displacement run is that if
convergence of the model is achieved, it may sometimes be shown that the structure will
handle a load considerably greater than the first several eigenvalue buckling loads,
without exceeding yield, or allowable stress, or undergoing deflection significant enough
to merit concern. A geometrically nonlinear analysis with loads that exceed the
eigenvalue buckling level should have loading ramped up, with substep information
saved in fine detail. The substep results should be examined carefully to see whether
sudden changes in the stress or deflection patterns develop. With a shell model, this
should be done for both mid-plane and surface (use Powergraphics) stresses and for
deflection plots. The ability of the ANSYS program to generate an animation file from
the set of substep results is helpful here. Deflection can be set 1:1 or exaggerated using
the /DSCALE command.
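
A hedged sketch of ramping the load in a Large Displacement run, with every substep
saved for review and animation, is shown below; the substep counts are assumptions.

! For information only. Substep counts are assumptions.
/SOLU
ANTYPE,STATIC
NLGEOM,ON                 ! large displacement (geometric nonlinearity)
KBC,0                     ! ramp the load within the load step
TIME,1.0                  ! "time" tracks the load fraction in a static run
NSUBST,50,500,25          ! initial, maximum, and minimum substeps
AUTOTS,ON                 ! automatic time stepping adjusts the ramp
OUTRES,ALL,ALL            ! save every substep for plots and animation
SOLVE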

Ramping Loads in ANSYS. Loads are ramped up if the appropriate settings are used for
time stepping. The fun starts when the user tries to ramp the loads back down (as when
wanting to find the permanent deformation that results from plastic deformation). If the
loads are deleted, there is nothing to ramp down to, the force drops immediately to zero,
and convergence may be a problem. One solution is to reduce forces and pressures to an
extremely small number. Another problem is that if the loading has been applied to
geometric entities, it cannot be scaled down directly, for ANSYS lacks commands to do
this.

An unsatisfactory but adequate fix is to transfer the loading to the nodes and elements,
then delete the relationship between the geometric entities and the mesh using the
MODMSH,DETA command from /PREP7 (Warning: make sure your model is saved
before doing this -- MODMSH severs the connection between your geometry and your
FEA mesh), then scale
down the loading on the nodes and elements. If you merely scale down the loading on the
nodes and elements, it will be replaced by the loading on the geometric entities when the
SOLVE command is executed.
A more satisfactory way to ramp loads that were originally applied to geometric entities
is to write and read load step files. The full loading on the geometric entities can be
transferred to the elements, then a load step file written. The load step file includes
pressures on elements, not information about loading on geometric entities. Then, the
loading on geometric entities can be deleted. Next, the load step file can be read, bringing
back in the pressures on the elements. Finally, that loading can be scaled down to an
extremely small number. In general, this method keeps the loading that the geometric
entities transferred to the elements and nodes, while discarding the original assignment of
loading to geometry, so it can be quite convenient.
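
One way the sequence just described might look in commands is sketched below. The
load step numbers, the scale factor, and the load types deleted are assumptions -- adjust
for the loads actually present in your model.

! For information only. Step numbers and the scale factor are made up.
/SOLU
SBCTRAN                  ! transfer solid-model loads to nodes and elements
LSWRITE,1                ! load step 1: the full loading, in FE form
SFADELE,ALL,,PRES        ! delete pressures applied to areas
! (delete other geometry loads, e.g. FKDELE, SFLDELE, as applicable)
LSREAD,1                 ! bring the element/nodal loads back in
SFSCALE,PRES,1e-6        ! scale element pressures down to almost zero
FSCALE,1e-6              ! scale nodal forces down to almost zero
LSWRITE,2                ! load step 2: the "ramped down" state
LSSOLVE,1,2              ! solve both load steps in sequence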

Plotting results can show the stresses in the structure with colored contour maps.
Plotting with stresses averaged at nodes (PLNSOL) results in smoother cleaner contours
that are easier to study, and that tend to average out stress fluctuations due to local
variations in element shape. However, such plots have the disadvantage that they average
stresses at shell intersections (at corners, "Tee" intersections, thickness discontinuities,
and material changes, for example). This results in considerable loss of information, and
masking of high stress areas in some models. Either element stress plots with no nodal
averaging must be used when this matters (PLESOL), or element selection must be
limited to continuous panels of material, so that the averaging is not performed where it is
not appropriate. This is a very common error in the reporting of results from shell
models (and solid models with material type changes). I have seen stresses hidden that
would cause fatigue troubles, because of nodal stress averaging with shell FEA models.
In addition, fatigue-causing stresses often need to be shown at shell surfaces, not just at
the mid-plane, so both mid-plane and surface stress plotting will often be required for
complete model evaluation. In a complex model, components may need to be examined
from a number of viewing angles, and with cutting planes, in order to inspect the stresses
everywhere.
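
As an illustration (the component name is hypothetical), the difference can be seen by
plotting both ways, and by restricting averaging to one continuous panel:

! For information only. The component name "panel1" is hypothetical.
/POST1
SET,LAST
PLNSOL,S,EQV          ! nodal-averaged von Mises -- smooth, can hide peaks
PLESOL,S,EQV          ! unaveraged element stresses -- shows the jumps
CMSEL,S,panel1        ! element component for one continuous panel
NSLE,S
PLNSOL,S,EQV          ! averaging now stays within the panel
ALLSEL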

ANSYS has introduced its "Powergraphics" setting that can show VISIBLE SURFACE
shell stresses with discontinuities at intersections and at changes in REAL and MATerial
properties (see the AVRES command). However, a user often wants stress at the shell
mid-plane.
ANSYS keeps track of the surface stresses in its database, and calculates the mid-plane
average when needed. I have written a macro that will move the mid-plane stress for each
node of each shell element, element-by-element, to the top and bottom surfaces, so that
the Powergraphics setting can show mid-plane shell stress with discontinuities and
intersections. The problem with the macro is that it executes VERY slowly -- it was about
two seconds per SHELL 63 element on a Pentium-Pro 180 under Windows NT in a
70,000 DOF model, taking 7 hours to process one load case. Surrounding the macro's
executable lines with /NOPR and /GOPR sped up the process by roughly a factor of 3.
The database is permanently modified by this macro, so the analysis results database
must be stored on disk BEFORE this macro is used. It must be used with caution.

The ANSYS contour map colors can be customized. I set them to shades of gray when I
want to plot to a black-and-white laser printer (directly from ANSYS, not the DISPLAY
program). The contour levels can be set automatically to be evenly spaced (the default),
or can be set by the user. I sometimes set all levels but the "red" contour to be evenly
spread out
up to the material yield, or the allowable stress, and let red color the region above. I wrote
a macro to automate this, using the *GET command to find the max and min stresses, in
order to calculate the custom levels. The macro has to be re-applied every time stresses
are plotted for new elements, or for a different stress plot type. Return to the automatic
contour level mode when done.
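
A hedged sketch of what such a macro might do is shown below; the yield value and
contour count are assumptions, and the /CONT documentation for your version should be
checked.

! For information only. The yield value and contour count are assumptions.
/POST1
NSORT,S,EQV                ! sort nodal von Mises stresses
*GET,smax,SORT,0,MAX
*GET,smin,SORT,0,MIN
syield=36000               ! allowable or yield value (assumed, psi here)
vinc=(syield-smin)/8
/CONT,1,9,smin,vinc        ! 9 levels from smin up to yield; above is "red"
PLNSOL,S,EQV
! /CONT,1,9,AUTO           ! return to automatic contour levels when done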

Shell mid-plane stresses are often preferred for review of structures. There are also good
reasons to review shell surface stresses. They include checks on: direct shell bending,
torque causing torsion stress in open sections, plastic hinge development and the onset of
plastic failure, local stress concentrations, locations for possible fatigue or fracture, non-
linear buckling, stresses from design errors or modeling errors, and prying loads. Torsion
on an open section can cause substantial shell surface stresses at shell intersections such
as corners -- an invitation to fatigue failure, fracture, or possible structure collapse. This
phenomenon will be completely overlooked if only mid-plane stresses are plotted.
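
A small sketch of checking all three locations follows; it is shown with full graphics
because whether PowerGraphics honors the SHELL setting depends on the version (hence
the macro discussed above).

! For information only. Shown with full graphics.
/POST1
SET,LAST
/GRAPHICS,FULL
SHELL,MID             ! shell results taken at the mid-plane
PLNSOL,S,EQV
SHELL,TOP             ! repeat at the top surface
PLNSOL,S,EQV
SHELL,BOT             ! and at the bottom surface
PLNSOL,S,EQV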

In limited testing I did, ANSYS gave me surprisingly good values for surface stress
caused by torque applied to open sections modeled with shell elements. (I created
equivalent solid models with a few solid elements through the wall thickness for the
comparison runs that gave the "real" answer.) Mid-plane stress plots don't hint that
torsional load is causing high shell stress on the surfaces of open sections. I wouldn't
extrapolate my test result to any structure, but it suggests that shell surface stress plots
will help to detect a class of design problems (shortcomings) that mid-plane stress
plots will miss. ANSYS PowerGraphics plotting helps considerably.

Coping with Design Changes. A fun topic! The analyst must be able to modify existing
models. The ability to do this can be enhanced if the model has been planned for later
modification (see parametric design comments below). The commands that move
keypoints can help a little... the keypoint moves will destroy curved lines, and only work
if affected areas are not severely distorted, and topology does not try to change.
KEEPING THE GEOMETRY on which the mesh was based is an important part of being
able to do significant future modifications of models. It is easier to move a set of nodes
than a set of keypoints, so under rare circumstances the elimination of geometry may be
desired (nodes cannot be moved while they are attached to underlying geometry; see the
MODMSH command, but do not use it without knowing exactly what you are doing).
However, any substantial model changes become very difficult when only elements and
nodes are available.

Computer Aided Engineering Environment. I often develop finite element models the
"hard" way: Generate all the geometry from scratch in the pre-processor of ANSYS. For
existing designs, I may get copies of a few dozen drawings, sometimes scaling
dimensions off the drawings (I did say finite element analysis is approximate) when the
dimensions are not explicit on the drawings (I don't like this). I adjust the position of
parts in space to achieve a good mid-plane representation of steel sheets for shell element
development. Adjustments and modeling tricks are used to approximate some
connections of thick parts and of bolted parts. For a complex model it can become very
time consuming to modify a model's fundamental dimensions after model development
has progressed significantly. This makes exploration of cost-saving alternatives difficult
on a tight time schedule (what other kind of time schedule is there?), even though
significant money might be at stake. Significant money is involved with expensive
structures, weight penalties, high-volume production, and with failures.

There exist CAD systems that can link the 3-dimensional CAD model to a complex shell
finite element model (e.g. Pro/Engineer and SDRC IDEAS, probably others as progress is
made). The CAD models can be parametrically defined so that when overall dimensions
are updated, all associated part and assembly prints, the bill of materials, and the finite
element model are updated automatically. This can make
exploration of design alternatives much more sophisticated. Otherwise, the analyst may
be limited to exploring shell thickness alternatives, and development of ANSYS models
parametrically, so that the ANSYS log files can be re-run with different fundamental
dimensions. Such a finite element model "program" requires careful planning and
experience.

FEA versus Hand Calculations. This issue comes up when a new design needs to be
configured. The "first cut" at a design must start with the invention of a configuration that
supports the applied loads, and carries these loads to the support points of a structure. A
variety of loads usually need to be supported, and structural details must be present that
will handle each kind of load in a manner that is acceptable for the type of structure being
considered (e.g. welded steel structures, bolted, pipes and pressure vessels, and others).
The initial layout of the components of the structure, and the initial sizing of the parts has
to begin with manual calculations.

Several concerns arise in the initial configuration, such as:

• Adequate section properties and cross-sectional area to handle applied loading.
• The presence of bracing and stiffeners that prevent structure instability.
• Sufficient wall and beam thickness and stiffening to avoid detrimental buckling of
local regions.
• Avoidance of unacceptable stress concentrations by methods such as stiffeners,
shapes, finishing, or other details.
• Adequate weld and bolt size to handle all applied loads.
• Design for manufacturing.
• Development of geometry that respects dimensional constraints on the overall
structure.
• Minimizing cost: material cost, uncut raw material size, material availability,
fabrication expense, delivery dates and penalties, and risk.
• Use of standard thicknesses, hot rolled sections, bolt sizes, available maximum
dimensions, and affordable material choices.
• Discussions with suppliers regarding not just supply and cost, but possible cost-
saving customization of the scope of supplied material and parts. Remember that
suppliers are familiar with the practices of others, including your competitors --
they can be a valuable resource to a designer (and to a job-hunter, so treat them
well, but don't give away secrets).
Given an initial structural concept, an FEA model can be created. If the model is only of
moderate complexity, the geometry for the FEA model can be created parametrically, so
that the log file can be reused in the future to regenerate the design with different
dimensional values. This will require that there be no changes in the topology of the
structure (e.g. varying the number of stiffeners, or shortening a part until it no longer
meets another part) or else the parametric approach must include means to accommodate
these changes. If the model is complex, it may not be feasible to create the geometry
parametrically, and the finite element model will be created with exact dimensions
entered numerically. During the finite element analyses that follow, the thicknesses of
shells or beams can be varied in order to investigate the possibility of weight savings and
cost reduction. The FEA package can be used to investigate stress, deflection, buckling,
vibration, and nonlinear effects if these matter. Properly interpreted results will show
where the structure is overdesigned or underdesigned, and whether it has significantly
inadequate design details (e.g. a complete lack of stiffeners where they are needed) and
needs modification. Design sensitivity can be assessed with respect to variations in some
dimensions. Optimization may be possible if time and sufficient skill are available.
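
As a toy illustration of the parametric idea (everything here -- names, values, element
choice -- is made up), a log file built this way can be edited and re-run with new
dimensions:

! For information only. Parameter names and values are made up.
panl=48                   ! panel length
panw=24                   ! panel width
thk=0.25                  ! shell thickness
esiz=panw/8               ! target element size
/PREP7
ET,1,63                   ! SHELL63 (element choice is an assumption)
MP,EX,1,29.0e6
MP,PRXY,1,0.29
R,1,thk                   ! real constant set 1: thickness
RECTNG,0,panl,0,panw      ! area built from the parameters
AATT,1,1,1                ! assign MAT, REAL, TYPE to the area
ESIZE,esiz
AMESH,ALL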

Given modern CAD software, a parametric model can be built in the CAD system. An
FEA model can be derived from the CAD model such that updating the CAD model leads
to updating of the FEA model. This makes the modify-and-assess design loop much more
effective and can lead to significant cost savings. Progress with development and
deployment of these CAD systems continues.

Choosing an Appropriate Shell Element. There are several shell element types
available under ANSYS. The usual workhorse shell element is Shell63, a 4-node shell
element. This element supports large displacement, but not plastic material properties. (If
plastic material properties have been entered, they will be ignored by Shell63.) If your
element type 1 was Shell63, you can directly enter (by hand) a command like "ET,1,181"
to convert the elements to Shell181, which has plastic capability. You may want to
modify the KEYOPT values after this command. Note that the effect of stress stiffening
is activated with shell elements like Shell63 by adjusting one of the KEYOPT values for
this element. Other 4-node elements that are capable of plasticity include Shell43,
Shell143, and Shell181.
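
The element type swap mentioned above might look like this; the KEYOPT line is only a
placeholder to remind you to check the element documentation.

! For information only. Verify KEYOPTs against the element manual.
/PREP7
ET,1,181                  ! redefine element type 1 as Shell181
! KEYOPT,1,...            ! adjust KEYOPT values as needed for your analysis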

I have recently found Shell93, an 8-node shell element, to give satisfactory results for a
problem I ran. This element is capable of plasticity (ANSYS manuals note that lower
order elements (4-node in this case) may be preferred for nonlinear and plastic analysis),
in addition to large displacement, so it gives "one size fits all" service. The advantage to
this element is that mesh density does not have to be as great, and it follows curved
surfaces very well, since it is a curved element. (4-node shell elements are flat, and any
significant warping of their shape during meshing will cause the FEA program to
complain, and presumably give degraded results.) Some user work is required with mid-
side node elements, because they do not want to curve too much. Meshing an area fillet
has to be carefully controlled. To change a model from 4-node elements to 8-node
elements with mid-side nodes, the usual approach is to clear the elements and re-mesh,
after possibly modifying the mesh density. Stress stiffening is activated for Shell93 in
the Solution part of ANSYS, not by setting a KEYOPT value as with Shell63.

Using P-Elements. The use of P-elements can reduce the effort required to mesh models.
The user is cautioned that the P-elements do not support large displacement or plasticity.

Harmonic Response. This is what ANSYS calls steady-state frequency response to
constant harmonic input (a sinusoidal, steady-state forcing input). Three solution methods
are available in ANSYS: full, reduced, and mode superposition (modal). A damping ratio can be
input using the DMPRAT command. The output is complex numbers that imply
amplitude and phase. The phase differs from the phase of the input if the input is not at an
eigenfrequency. Only the reduced and modal methods can handle stress stiffening. The
/POST26 Time History postprocessor can plot amplitude for a node versus frequency (see
the PLCPLX key value); the /POST1 postprocessor can use the SET command to load
either the Real or the Imaginary component, but not both. The manuals indicate that the
/POST26 postprocessor can combine and operate on these components. As with all vibration and
transient analyses, the units of mass must be input appropriately.
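
A hedged sketch of a full-method harmonic sweep with a constant damping ratio, and
amplitude plotting in /POST26, is shown below; the frequency range, force, damping, and
node number are assumptions.

! For information only. Frequencies, damping, force, and node are made up.
/SOLU
ANTYPE,HARMIC
HROPT,FULL                ! full method (reduced and mode superposition exist too)
HARFRQ,0,200              ! sweep from 0 to 200 Hz
NSUBST,100                ! 100 frequency points
DMPRAT,0.02               ! 2% constant damping ratio
KBC,1                     ! stepped (constant-amplitude) loading over the sweep
F,501,FY,100.0            ! harmonic force amplitude at node 501 (assumed)
SOLVE
FINISH
/POST26
NSOL,2,501,U,Y            ! UY at node 501 stored as variable 2
PLCPLX,0                  ! 0 = plot amplitude
PLVAR,2                   ! amplitude versus frequency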

Failure Modes to Consider. Textbooks are written on this topic. There are many things
an analyst may overlook. Just a few of the many things to think and worry about include:

1. Static loads lead to stresses exceeding yield (or allowable stress) over a significant
region. Dynamic loads exceed anything considered in load factors for static
loading.
2. Loads on bolts, rivets, spot welds, plug welds, stitch welds, fillet welds, bevel
welds, full-penetration welds, adhesives, nails, tie-rods, links, or other connection
devices are too high. Prying loads are not considered or properly assessed, and are
too high. Moments tear a bolt circle apart because it was represented as a pinned
(one bolt) joint. Compression of tie-rods or links reaches buckling levels (FEA
will not detect this for link elements).
3. Loads on bolt holes are too high. Bolt holes weaken a section.
4. Strains reach fracture levels in brittle materials.
5. Surface strains cause damage to protective coatings.
6. Deformations cause lock-up of parts that should slide or rotate.
7. Buckling of components leads to local damage, or to progressive collapse.
8. Buckling of the full structure is reached.
9. Combined bending and compression leads to excessive stress and failure.
10. Fatigue failure and/or sudden fracture is reached. If the FEA model ignores stress
concentrations, and representation of details where trouble can occur, fatigue or
fracture may never have been properly assessed. If cracks grow without detection,
sudden fracture conditions may be reached. Growing cracks need to be of a
detectable size without causing sudden fracture. (The capacity of a crack to cause
sudden fracture in a structure increases with the size of the crack and with the
stress level, and depends on the properties of the material. Remember that when
materials are welded together there is an implicit crack formed except where
good-quality full-penetration welds are used.) To paraphrase a writer whose name
I unfortunately can't recall, "A tolerable crack size needs to be large enough that it
can be detected by a tired inspector on a Friday afternoon a half hour before
quitting time." To keep a crack of detectable size from causing sudden fracture,
the material choice, allowable stress, allowable load, and inspection frequency
can be affected, in addition to other design details.
11. Vibration frequencies are located where applied loading causes damage through
large amplitude response.
12. Margins of safety are not high enough to deal with material variability, work
quality variation, and unknown or unexpected loading.
13. Buckling and high deflection or stress are not assessed with Large Displacement
analysis, when it was needed.
14. Stresses that exceed yield over regions of "questionable" size are accepted, rather
than checked with a model that includes material plasticity (within design rules
and "good practice").
15. The structure is destroyed by flow induced vibration, flutter damage, or high-
intensity sound or noise.
16. Shipping, handling and erection loads are not considered, or are underestimated.
Some structures need extra stiffening and protection from impact loads, bending
or torsion during shipping and handling.

Please send me your favorites, to add to this list of failure modes, as they
relate to inadequacies and oversights in FEA.

Stress Limits and Margin of Safety. Two possible approaches to margin of safety are:
(1) Amplify the loading, e.g. to twice the maximum static applied load (or far more with
many civil engineering and other structures), and use the lesser of material yield or a
fraction of ultimate tensile stress as the allowable limit, or, (2) Use the maximum static
applied load, and the lesser of a fraction of material yield or a smaller fraction of ultimate
tensile stress. The approach will depend on the industry and the codes followed. Other
factors may apply; e.g., stress allowables may be reduced by temperature and by high-
temperature creep considerations. Other considerations will be
different allowables for thermal stresses, "secondary displacement-driven" stresses, and
checks on vibration characteristics, buckling, fatigue, etc.

I noticed some recent discussion on ASME changes in the fraction of ultimate tensile
stress (UTS) to be applied to some pressure vessel materials (some carbon and low alloy
steels below creep temperatures in Section VIII, Div.1). The UTS fraction settings were
said to put some ASME regulated designs at a competitive disadvantage on the world
market. Steel producers note that the quality and uniformity of their steel is much better
than two or three decades ago. Still, I have seen new steel plate that had laminar cracks
more than a foot in size (roughly half a meter), and a spring that had a crack along the
length of the wire from which it was produced. QC checking and conservative designs
will not go away any time soon.

In discussing nonlinear material properties in these web pages, I am usually referring to
checking for structure failure when loading leads to stresses that exceed material yield
over regions of questionable size. This will usually NOT be strictly according to the rules
laid out in design codes, but is added as a check that the intent of codes and safety needs
are considered under severe or unusual loading, or under loading that is important but not
included in codes. Some design codes have rules for "elastic-plastic" analysis, or for
"fully plastic" analysis, which would have to be studied and applied during design and
analysis.

Representation of a group of bolts (or rivets). A single bolt might be represented in an
FEA model as preventing motion in the X, Y, and Z directions, as well as rotations,
except rotation about the axis of the bolt. Contact elements may be wanted between the
layers that are bolted together, at the expense of much slower solution. Friction with these
contact elements might or might not be considered, depending on whether bolt preload or
initial interference was included, and on whether it was acceptable to let friction carry
any of the "in-plane" load -- it may be important or necessary (per codes or for safety) to
let the bolts carry all the "in-plane" load, setting the contact element friction coefficient to
zero. Because of looseness of fit, tolerancing of bolt diameter, and of hole position,
diameter and alignment, not all bolts will act simultaneously when a real structure is
loaded up. This would be true of structural tension, compression, and shear forces that
produce shear forces in the bolts, and of moment applied to a "bolt circle." It may be
decided in FEA to represent all bolts as being "tight" for the purpose of analysis. Note
that this can introduce a problem: in the FEA model, the structural members undergo strain
when they carry loads. Where members are bolted together, the overall structural strain
will create high local forces as the bolts try to make one bolted member's strain match the
other bolted member's strain. This makes the FEA report very high forces on the
individual bolts, much of which may not be due to load path forces being transferred
through the bolts.
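
For the simplest case mentioned at the start of this topic -- a single bolt anchoring the
structure to something rigid -- the constraints might look like the lines below. The node
number and the bolt axis are made up, and the node is assumed to belong to shell or beam
elements that carry rotational degrees of freedom.

! For information only. Node 101 and the Z bolt axis are assumptions.
D,101,UX,0
D,101,UY,0
D,101,UZ,0
D,101,ROTX,0
D,101,ROTY,0
! ROTZ is left free so the joint can rotate about the bolt axis.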

I can't think of a simple way out of this dilemma. Your firm or industry may have
"standard" ways of dealing with this analysis. It might be decided to average the reported
forces acting on the full group of bolts for tension forces, and to use the standard
analytical approaches to force on a group of bolts, and to a bolt circle with net moment on
the group of bolts. If there is no significant load path force in one direction, some of the
bolts could be modeled as "loose" in this direction. An alternative, possibly conservative,
approach would be to consider a minimum number of bolts and directions of bolt action,
to be acting to resist forces and moments, although this could result in FEA reporting
overloaded bolts and high local stresses if the bolts are on the primary load path.
(Usually, all the bolts should be "tight" in the direction in which they pull the joined
materials together (the bolt axis direction).) The load on this reduced number of bolts
could be considered to be spread over the group of bolts, and analyzed manually. In
general, the user will want to consult codes and standards used in the appropriate
industry, understand the concepts used in bolting, and discuss with people with expertise.
It wouldn't hurt to review standard textbooks. Remember to avoid significant prying
loads on bolts, rivets, welds, and other fasteners.

The presence of a bolt or group of bolts means that the cross-sectional area of the bolted
materials is reduced by the presence of the bolt holes. If the holes are not represented in
the finite element model, the analyst needs to do extra work to examine the stress in the
zone of the bolt holes, using codes, standards, and good judgment to find the allowable
net stress, bearing force, and total force in that zone.

Adequate Computer Hardware for FEA. I once heard of a product failing when highly
loaded. An FEA analyst had limited modeling to a coarse FEA mesh with small-
displacement elastic analysis, and plotted nodal averaged stresses, on an underpowered
older computer. Proper computer equipment, some staff training, a finer mesh, nonlinear
analysis (large displacement and material plasticity), and more thorough post-processing
of results (PowerGraphics plots of shell element midplane and surface stresses) could
have detected a structural weakness. Prevention would have been easier than
modification. Such is life.

In an ideal world, adequate computer hardware would only rarely be an FEA modeling
issue. A company may save thousands of dollars by using inadequate FEA hardware, and
lose significantly more as a result. Computer hardware affects the mesh density possible
in FEA models, the time to develop FEA models, to run solutions, and to save, process,
review, and plot the results. Time saved by using better hardware makes it possible to use
better resolution in a model when it matters, to take analyst "short cuts" that save model
development time but increase computational expense, to check for errors, to check
effects such as large displacement buckling and plastic deformation, to check unusual
loadings, and to vary a design in attempts to reduce weight and costs. Convincing
management of this can be another matter. A few thousand dollars not spent on
computing hardware is a visible "saving". X million dollars in design errors that could be
prevented remain hypothetical until they happen. Y million dollars in cost reductions also
remain only a daydream if not proven in a non-rigged demonstration. In practice, funding
for the computer hardware is often set by people who are either unfamiliar with FEA and
engineering, or who have noticed that the analysis detail sometimes expands to fill the
available computing capacity. (How's that for a euphemism?) When analysts living with
deadlines spend an unacceptable amount of time waiting on computer hardware while
performing FEA work, significant differences of opinion about computer hardware can
develop between analysts and management. Analysts have been known to change
employers over this issue.

Given the price of the ANSYS software, a computer costing only a fraction of the
software cost can do a very substantial amount of analysis work given present (2004)
hardware costs. In the Windows XP world, a few thousand U.S. dollars will purchase a
computer with large RAM (2 GB or 4 GB), large hard drive (60 Gig or more), fast
processor (2.4 or more GHz), cheap laser printer, colored ink-jet printer, 17" or larger
monitor, graphics card with 32 Mbytes or more of RAM, and an adequate backup device
-- a CD or DVD burner is often employed. A budget of several thousand dollars will
allow a PC with a two-CPU motherboard and 4 GB of RAM, an upgrade of the hard drive
to a fast SCSI version, of the monitor to a 21" CRT or a 19" to 21" LCD, and of the
graphics card to a powerful ANSYS-tested OpenGL card. Large budgets take the purchaser into the world
of very fast Windows or UNIX machines with 64-bit operating systems and multiple
processors. (My comments here will gradually become out of date.)
Hard drives have become very cheap. In FEA work, a hard drive should be able to store a
significant amount of work-in-progress and recent completed work, with additional
capacity to handle ANSYS solver temporary files for large models, including substantial
results file storage. I can't say it with authority, but I have the impression that a SCSI hard
drive will transfer information with less interruption of operation of the computer, for
disk-intensive aspects of FEA work (e.g. working from an input file, and using the frontal
solver on very large jobs). I have heard that having two SCSI drives, one for the
operating system including the virtual memory swap file, and one for the model being
run, can improve some FEA operations. I suspect that the money could be better spent on
a larger RAM or dual-processor machine.

RAM is currently very cheap. A large RAM will permit larger models to be run with the
SPARSE and PCG solvers in ANSYS; for this reason some companies have PC machines
with 2 GB or 4 GB of RAM -- this will depend on your work. Models too large for a 32-
bit operating system with any solver will require a move to a 64-bit operating system
and RAM larger than 4 GBytes. A large RAM will help your solutions work quietly in the
background, with little swap file disk thrashing. Your ANSYS vendor can probably
advise on high-end equipment.

FEA work is one of the numerically intensive applications that justifies the extra expense
of a very fast processor. The availability of drivers for your operating system should also
be checked before the purchase of extras.

I have found both 17" and 21" monitors to be sufficient for FEA modeling. Make sure
that the cheapest monitor purchased supports at least refresh rates of 70 Hz or higher at
resolutions of 1024 x 768 pixels or higher. Make sure that the graphics card matches or
exceeds the monitor's resolution and refresh rates. A CRT monitor refresh rate lower than
70 Hz will cause the eye to perceive flicker of the image, and cause eye strain. Informed
people prefer 75 Hz or more. Many PC computers are delivered running their monitors at
a refresh rate of 60 Hz, and have to be properly set up by the end user. (I've known people
who went to the optometrist because the computer screen was bothering their eyes. All
that was wrong was that the refresh rate was at 60 Hz. The optometrist didn't know about
this phenomenon or its fix.) I currently use a 17" LCD monitor at 1280 x 1024, so the
refresh rate is not relevant for static images (60 Hz works with this LCD monitor and this
display device does not flicker), but if using a CRT monitor, I would prefer that it be set
to 1280 x 1024 pixels running at 85 Hz. A new monitor should support a resolution of at
least 1280 x 1024 pixels at 75 Hz or higher, as should any decent modern graphics card.
Today's graphics cards are cheap enough that this resolution should be supported with 24-
bit color. The OpenGL cards that ANSYS suggests should result in much faster model
graphics display. With large models, this should be a helpful investment.

Printers can be relatively inexpensive, although you can run up fairly high bills for
colored ink if you generate large numbers of plots. A laser printer can be a fast
inexpensive way to get black-and-white listings and plots during FEA work and report
writing. I keep both a gray-scaled and a colored ANSYS color map on my toolbar to
move quickly between black-and-white and color. A substantial amount of work can be
done cheaply with gray-scaled plot prints, prior to developing a final
report with color images. A color ink-jet printer is the least expensive
way to get helpful colored plots. If a larger budget is available, consider
an ink-jet that generates 11" x 17" plots, or a colored laser printer for
high-volume high-priced work. In some companies the speed-up in
analysis work will pay for the equipment in short order.

ANSYS® Tips
ANSYS Tips and ANSYS Tricks
Peter Budgell
Burlington, Ontario, Canada

© 1998, 1999 by Peter C. Budgell -- You are welcome to print and photocopy these pages.

These tips and comments are intended for user education purposes only. They are to be used at your own risk. The contents are
based on my experience with ANSYS 5.3 -- more recent versions may change things. The contents do not attempt to discuss all
the concepts of the finite element method that are required to obtain successful solutions. It is your responsibility to determine if
you have sufficient knowledge and understanding of finite element theory to apply the software appropriately. I have attempted to
give accurate information, but cannot accept liability for any consequences or damages which may result from errors in this
discussion. Accordingly, I disclaim any liability for any damages including, but not limited to, injury to person or property, lost profit,
data recovery charges, attorney's fees, or any other costs or expenses.

As one writer put it, "This information is free, and may be well worth the price."

Return to Home Page


FEA and Optimization Introduction Page
FEA Modeling Issues Page

The ANSYS manuals explain many things and give some examples, but they do not give
many tips to the user. Here is a collection of things I have noted or learned. (Use at your
own risk...) Necessity is the mother of invention, and I learned virtually everything here
as a result of need, or as a result of trial and lots of error. I'm also thankful to my local
ANSYS distributor for many helpful conversations. The comments in these pages are
based on my experience with ANSYS 5.0 through ANSYS 5.3. I hope these tips will
shorten your learning curve. An analyst frequently does not have a mentor for guidance,
so considerable effort can be needed to deduce how to accomplish some tasks. ANSYS
users need to spend a generous amount of time reading the manuals and training
materials, and returning to read them again as the user's knowledge of the program
increases. Don't use anything here verbatim... understand why it works, and whether my
comments are in error or inappropriate for your situation, before employing any of these
suggestions.

The teaching of FEA at the academic level is intended to educate the mind, teach how
FEA methods are derived from first principles, and to develop students who can invent
and code new elements, test their behavior, write research or industrial quality software,
and apply it to difficult academic or research problems. Some professors feel strongly
that the purpose of an undergrad course in FEA is further education in how applied math,
engineering, continuum mechanics, energy methods, and analysis of structures come
together, building on the Strength of Materials courses already taken -- I have no
argument with that. A user with a comprehension of what underlies FEA work will know
when to apply and how to evaluate FEA work, have more creativity, learn quickly,
problem solve better, be more innovative, and make fewer serious modeling errors. The
professors do not feel that the course is intended to concentrate on modeling details or
learning the interface to a commercial FEA program. (Students, on the other hand, want
to graduate having used an FEA package to do something significant. Assignments and
projects with ANSYS/ED are a good way to get there.) I've heard the opinion expressed
that with FEA technology maturing, there is less research grant money for FEA work in
universities, and the supply of advanced FEA graduate students may be shrinking.

The teaching of commercial FEA program use is principally focused on training people to use
the interface to and commands of the particular software package, and how to perform
basic analysis types. Some instructors pepper their presentations with tips, but the
attendees may be drowning from information overload. Little is available to lead the user
through the techniques that can be used in modeling complex structures, and around the
traps that exist, except help from good vendor support people, co-workers, or other users,
and substantial reading, thought, trying examples, and testing techniques on the part of
the analyst. I hope that these pages will provide some helpful details.

CONTENTS:

Tip 1: Use Annotations
Tip 2: Making Room for Annotations
Tip 3: Using Parameters in Annotations
Tip 4: Use Small Annotations
Tip 5: Mathematical Functions Available
Tip 6: Start 16-Bit Applications before Starting ANSYS under Windows NT
Tip 7: Running ANSYS at Low Priority under Windows NT 4.0
Tip 8: Operating on (Scaling) Loads
Tip 9: Ramping Loads Down to Zero
Tip 10: Starting ANSYS Graphs at t=0
Tip 11: Pressure on Lines
Tip 12: Ramping Some Loads, Not Others
Tip 13: Force and Pressure on Flat Plates or Flat Shells
Tip 14: Linear and Nonlinear Buckling
Tip 15: Nonlinear Analysis and the Arc-Length Method
Tip 16: Animating Results from a Nonlinear or Other Analysis
Tip 17: Getting the Mass or Weight of a Model
Tip 18: Using Fnc Calls from Macros
Tip 19: Use ENSYM and ENORM to Turn Over Shell Elements
Tip 20: Shell Types to Try
Tip 21: Moving a Model from ANSYS Mechanical to ANSYS Linear/Plus
Tip 22: Deleting Nodes with Nodal Coupling
Tip 23: Convergence with Shell Finite Element Models in Nonlinear Analysis under
ANSYS
Tip 24: Working with Load Step Files in ANSYS
Tip 25: Plotting Shell Stress -- Surface, Mid-Plane Stress, Load Paths, ESYS and RSYS
Tip 26: Nodal Coupling (CP) versus Rigid Region (CERIG)
Tip 27: Vibration Modes with Pre-stress
Tip 28: Creating New Elements by Copying or Reflecting Existing Structure
Tip 29: Adding to a Model Comprised of Elements and Nodes Only
Tip 30: Zero Mass Beam Elements Form Rigid Region
Tip 31: Turn off Symbols When Changing a Model after Solution
Tip 32: Are the "Free-Free" Vibration Modes Relevant?
Tip 33: Selecting a CAD or FEA System -- Cover Yourself
Tip 34: Creating Lines Perpendicular to, or at Angle to Existing Lines
Tip 35: Use the /UI command in Your ANSYS Toolbar to Bring up GUI Dialog Boxes
Tip 36: Reaction Force, Nodal Force, and Load Paths
Tip 37: Inputting Temperatures with BF, BFE, and TUNIF in Structural Analysis
Tip 38: ANSYS Toolbar Use
Tip 39: ANSYS Piping Element Lengths
Tip 40: Graphical Output from ANSYS
Tip 41: Check Nodal Loads at Bolts, Rivets, Spot Welds and Links
Tip 42: Use QUERY to Check Results with Picking
Tip 43: Loads on Geometric Entities Overwrite Loads on Nodes and Elements -- Easy
Error to Make
Tip 44: Use Components for Load Input, and for Results Review
Tip 45: Simple Substructuring Examples -- Bottom Up and Top Down
Tip 46: Plot Applied Temperatures
Tip 47: Skipping Over Statements in an Input File
Tip 48: Static Analysis Followed by Transient Analysis
Tip 49: File Compression for Model Storage
Tip 50: Organizing Large FEA Models
Tip 51: Selecting Nodes in a Stress or Strain Range
Tip 52: Selecting Nodes that are Subjected to Nodal Coupling
Tip 53: /NOPR and /GOPR Speed Up Input Files and Macros
Tip 54: Using Commands IMMED and /UIS and /SHOW,OFF
Tip 55: What's the Bauschinger Effect? Comments on Material Yield
Tip 56: Thought Experiments
Tip 57: Control of Meshing
Tip 58: Four View Plot
Tip 59: Quick Review of Mode Shapes
Tip 60: Using ANSYS Help
Tip 61: The FEA Job Hunt
Tip 62: *VPUT and DESOL
Tip 63: How to Divide One Element Table Column by Another
Tip 64: Element Tables (ETABLE) and Arrays -- An Example
Tip 65: Error Estimation, PowerGraphics, and ERNORM
Tip 66: Concatenate and Mesh Last
Tip 67: ANSYS Output of Data to Files for Use by Other Programs
Tip 68: Writing Array Columns to Output or to Files
Tip 69: Synthesizing Parameter Names and Manipulating Jobnames and Long Strings in
APDL
Tip 70: Solid Elements 95 and 92 -- Efficiency and Interconnection

Tip 1: Use Annotations:

Only a one-line title is possible on the ANSYS screen or plot. Considerably more
information can be included in annotations on the screen. The annotations are kept
through all plots until they are deleted with the command: /ANNOT,DELE or via picking
with the graphical user interface (GUI).

At the top of the Annotation dialog box, there is a list box from which the user can
choose Text, Lines, etc., on down to Controls. These selections bring up different menus.
The Controls selection offers a SNAP setting that makes it much easier to get the text
aligned nicely. (Hint: ANSYS, Inc. should put this SNAP selection up front under Text, or
even on every menu.) Activate the Snap setting, then go back to Text to enter the
annotations.

Tip 2: Making Room for Annotations:

The /PLOPTS command controls what goes into the legend at the right (by default) side
of the ANSYS screen and plot. If you turn off LEG2 (the relatively useless "view"
information), you will get extra room at the bottom of the legend. This area can be used
for annotations if the number of contour levels in stress plots is not too great (the default
is fine).
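
In command form (for an input file or a toolbar button) this is simply:

/PLOPTS,LEG2,0      ! turn off the "view" block to free space in the legend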

Tip 3: Using Parameters in Annotations:

Just as in a title created with the command /TITLE, ANSYS permits the use of a
parameter in an annotation, as discussed in the Commands Manual description of the
/TLABEL command. When typing the annotation using the GUI, include the parameter
in percent signs like this: %pname% where pname is the parameter name. The parameter
can contain either numbers or text. The value of the parameter will be plotted in the
annotation string. The ANSYS function NINT can be used to round a number the nearest
integer, sometimes improving the appearance of the annotation for large numbers in
which the fractional part is irrelevant (e.g. NINT(123.456789) = 123 ). For this, the
parametric expression should be enclosed in percent signs. Annotations are usually
created in the GUI, but can be entered with code like that shown below. Entering a single
annotation line containing Result = %pname% generates log file contents such as:

! The following commands place an annotation on the screen.


! For information only. Use at your own risk.
! In this example, "pname" is a parameter with a numerical value such as 123.456789
/ANUM ,0, 1, 1.2303, -.74699
/TSPEC, 15, .600, 1, 0, 0
/TLAB, 1.010, -.747,Result = %pname%

The last line in the above example contains the string that the user types manually. The
other data set up the string positioning on the screen, and the properties of the characters.
To apply the NINT function to the parameter, manually enter Result = %NINT(pname)%
as the annotation:

! For information only. Use at your own risk.


! Type the annotation in one line, so the log file contains:
/ANUM ,0, 1, 1.2303, -.74699
/TSPEC, 15, .600, 1, 0, 0
/TLAB, 1.010, -.747,Result = %NINT(pname)%

The beauty of doing this is that if the value of the parameter pname should change, then
when the next plot command is executed, the annotation will automatically update to
reflect the new value! Try it: after creating an annotation on the screen that includes a
parameter, change the parameter's value, then do a /REPLOT. A macro could retrieve
information into the parameter, and a /REPLOT will automatically place it on the
screen. This makes it possible to automatically include far more information than can go
into the title, and to do it for a series of automatically generated plots or graphs.

Tip 4: Use Small Annotations:

The default character size setting for an annotation is 1. The size of an annotation can be
decreased using the GUI. A size of 0.6 is quite readable and permits far more information
to be packed into a plot. Note that there is a limit to the number of characters possible on
an annotation line – this is character size independent.

Tip 5: Mathematical Functions Available:

Under the Help listing for the *SET command there appears lists of mathematical
functions available in ANSYS. Another list is in the ANSYS User's Guide on APDL,
Chapter 14 of the Modeling and Meshing Guide. The commands are usable anywhere.
They include:

ABS(X) Absolute value

ACOS(X) ArcCosine

ASIN(X) ArcSin

ATAN(X) ArcTangent

ATAN2(X,Y) ArcTangent of (Y/X) with the sign of each component considered (see a
FORTRAN manual if you don't know what this means.)

COS(X) Cosine

COSH(X) Hyperbolic cosine

EXP(X) Exponential

GDIS(X,Y) Random sample of a Gaussian distribution, where X is the mean, and Y is the
standard deviation. Might be used in a Monte Carlo Simulation to explore the
distribution of outputs based on randomized loadings and material properties.
For an explanation, see a good modern engineering design textbook.

LOG(X) Natural log (to base e)

LOG10(X) Log (to base 10)

MOD(X,Y) Modulus of (X/Y); returns the remainder of X/Y. If Y=0, returns zero (0)

NINT(X) Nearest integer (nice for outputs of stresses to /TITLE or annotations (see Tip 3
above))

RAND(X,Y) Random number, where X is the lower bound, and Y is the upper bound.
(Useful for Monte Carlo Simulation, etc.)

SIGN(X,Y) Absolute value of X with sign of Y. Y=0 results in positive sign.

SIN(X) Sine

SINH(X) Hyperbolic sine

SQRT(X) Square root.

TAN(X) Tangent

TANH(X) Hyperbolic Tangent

Note:

The function form of the *GET commands can also be used to get information
from the model -- see the APDL guide mentioned above for a listing of available
functions. The APDL guide also gives functions to retrieve the values of
parameters, both numerical and character. The *VFUN command has a list of
functions that act on an array entry. The Commands manual lists functions that act
on Element Tables in the section "POST1 Command for Element Table".
Creatively used, the array and ETABLE algebra commands can be surprisingly
powerful.
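
As an informal illustration (for information only; the parameter names and values below are
arbitrary, not from any particular model), the functions listed above can be used in parameter
assignments and in numeric command fields:

! For information only. Use at your own risk.
pi_val = 4*ATAN(1)            ! 3.14159..., a common way to define pi
hyp = SQRT(3**2 + 4**2)       ! 5, the hypotenuse of a 3-4-5 triangle
rload = RAND(900,1100)        ! a random load between 900 and 1100 for a Monte Carlo study
smax = 123.456789             ! pretend this came from a *GET of a stress result
/TITLE, Peak stress = %NINT(smax)%   ! NINT cleans up the number in the title (see Tip 3)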

Tip 6: Start 16-Bit Applications before Starting ANSYS under Windows NT:

It has been my experience that some large commercial 16-bit applications will not start
properly when ANSYS is already running. If you start them before launching ANSYS,
there will be no problem. If you intend to work with those 16-bit applications in the
foreground while the ANSYS SOLVE is running in the background, this will be a useful
tip. I have seen other applications start up very slowly (e.g. Internet Explorer) or wait
until ANSYS was done before proceeding (setup.exe for many Windows install
programs).

Tip 7: Running ANSYS at Low Priority under Windows NT 4.0:


Under Windows NT 4.0 the priority level of individual processes can be user-adjusted. To do
this, bring up the Task Manager (right click on the Windows NT taskbar), and click the tab
for "Processes". Right click on the process titled "ANSYS.EXE", and "Set Priority >" comes
up. Set the priority to "Low" to help make foreground applications run more smoothly while
ANSYS is running SOLVE in the background. This may help more if you have a large
amount of RAM in the computer.

When ANSYS has completed the SOLVE process, return the priority to "Normal" so that
ANSYS is not slowed down when you start doing plots through the GUI.

Tip 8: Operating on (Scaling) Loads:

You can operate on loads on nodes and elements in order to scale them up or down.
Unfortunately, scaling loads on geometric entities (keypoints, lines, areas and volumes)
seems not to be available. If any load on your structure has been applied to a geometric
entity, rather than directly to elements or nodes, that load will be transferred to the
elements and nodes at solution time. The transfer will overwrite any scaling of loads that
you have applied. (Guess how I figured this out!)

So what can you do about this? Method 1: Transfer the loading from geometric entities to
the elements and nodes, then write a load step file. This records loading on elements and
nodes. Delete the loading on geometric entities, then read the load step file that was just
written. Now the loading can be scaled up or down freely. Method 2: For a faster method,
see the "LSCLEAR,SOLID" command, which will not require writing a load step file.
Method 3: Transfer the loading from geometric entities to the elements and nodes, then
delete the relationship between geometry and the FEA mesh with the MODMSH,DETA
command. Method 4: Transfer loading from geometric entities to the elements and nodes,
then un-select the geometric entities, before executing SOLVE. The element and node
loading can be scaled after it has been transferred from geometric entities. An un-selected
geometric entity will not transfer its loading to elements or nodes when SOLVE is
executed. Warnings: Method 3 ruins the relationship between geometry and the mesh.
Save the model under an appropriate file name before executing MODMSH,DETA.
Method 4 is fine, as long as you do not forget and re-select the geometric entities --
ALLSEL will do this.
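
A minimal sketch of Method 2 in command form (for information only; the scale factors are
placeholders, and the commands should be checked against the Commands manual for your
version):

! For information only. Use at your own risk.
/solu
sbctran              ! transfer loads from geometric entities to the elements and nodes
lsclear,solid        ! delete the loads on geometric entities so they cannot overwrite at SOLVE
fscale,1.5           ! scale nodal force loads by a factor of 1.5
sfscale,pres,1.5     ! scale element pressure loads by the same factor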

Scaling displacements (nodal constraint values) is also possible. One thing that has not
worked for me is an attempt to reduce applied displacements to zero by using 0.0 as the
scaling factor. What did work for me was to use "_TINY" as a value, which multiplied
displacements by a factor of roughly 10^(-31) and reduced loads to virtually zero.
Attempts to use 0.0 as the factor resulted in NO change to the applied displacements.

Tip 9: Ramping Loads Down to Zero:

If you are ramping force, pressure, and acceleration loads up and down as part of an
analysis, you may want to return loads to zero. I do this when I want to inspect permanent
deformation that results from plastic yielding. If you delete the applied load, the loading
will drop immediately to zero, even if you have load ramping turned on. The thing to do
is to set the loading to virtually zero or the scaling factor to virtually zero, not delete the
load. It is important to appreciate that to ANSYS, reducing a load to nearly zero is not the
same thing as deleting it (zeroing it), for the purposes of ramping loads. The time substep
sizes to use will depend on your model.
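
A minimal sketch (for information only; the load values, times, and substep settings are
placeholders):

! For information only. Use at your own risk.
/solu
kbc,0                  ! ramped loading between load steps
time,1.0
sf,all,pres,100        ! load step 1: ramp the pressure up on the selected nodes
nsubst,10,50,5
solve
time,2.0
sf,all,pres,1e-10      ! load step 2: ramp the pressure back to virtually zero -- do NOT delete it
nsubst,10,50,5
solve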

Setting displacements to zero or near zero is, of course, very different from deleting
constraints.

Tip 10: Starting ANSYS Graphs at t=0

Graphs start at the first data point, which means that if you do a time-history trace, you
don't get a t=0 data point. If you leave time as 0.0 on the TIME command, you get the
default 1.0 in your output. The only way to get a graph from zero that I have found is to
do a first load step with "t" extremely small, in comparison to other times in the analysis,
e.g. t=0.0000001. The load at this time must be appropriate so that the response ramps up
correctly. (If your intent was to ramp up from zero load, just leave the loads as zero.) The
next load step continues as usual.
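
A minimal sketch of the work-around (for information only; the time and load values are
placeholders):

! For information only. Use at your own risk.
/solu
time,0.0000001         ! first load step at a time that is effectively zero
! apply the loads that should exist at t=0 here (leave them at zero to ramp up from zero)
solve
time,1.0               ! the next load step continues as usual
! apply the loads for the end of this load step here
solve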

Tip 11: Pressure on Lines:

Applying pressure on a line results in loads being applied to the nodes associated with
that line. The loads on the nodes that the FEA program applies will be appropriate given
the formulation of the elements. If you want to apply a total force to the line, you can use
a *GET command to find the length of the line, then divide the force by the length and
use the result as the pressure.
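
For example (for information only; the line number and force value are placeholders), the
total-force-to-pressure conversion might look like:

! For information only. Use at your own risk.
*get,llen,line,12,leng        ! length of line 12
ftotal = 500                  ! total force to be spread along the line
sfl,12,pres,ftotal/llen       ! apply the equivalent pressure to the line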

Note that pressure on a line acts in the plane of the area that is attached to the line. If two
areas are attached at 90 degrees or another angle, two loads are set up, acting in each of
the area plane directions. You can use select logic on the areas to get some interesting
effects as to the direction in which the applied forces act, but only if both areas are
meshed, and the elements are selected. If you un-select one of the areas, pressure on the
line will only be exerted in the direction of the area that is selected. The select logic must
still be in place when you SOLVE, or else your carefully crafted load case can be
overwritten. As above, transferring loads from geometric entities to nodes and elements,
writing them as a load step, deleting all the loads on geometric entities, and reading in the
load step will protect your load case, and make scaling the loads possible. Alternatively,
consider the "LSCLEAR,SOLID" command.

NOTE: Pressures on surfaces follow the deformed shape during a Large Displacement
(geometrically nonlinear) analysis. Forces on nodes maintain their orientation in space,
even under Large Displacement. This difference will govern how loads should be applied
in some models.

Tip 12: Ramping Some Loads, Not Others:

To hold some loads constant and ramp up or down others, run a first load step with all the
loads at their starting values, ramping from zero only if appropriate.

If you want, use an extremely small value on the TIME command, e.g. 0.0000001, and
run this as a first load step. Then set up a second load step, with ramping activated.
Change those loads to be ramped from their starting values to new values. Hold the other
loads constant. The TIME command can be used with a new value, such as 1.0.

An example is the application of a gravity load before other loads are to be ramped up
from zero. In some cases, this could give a more realistic assessment of nonlinear
buckling caused by applied forces other than gravity loading. (You will want to check the
codes that regulate your design work before deciding on this. Codes that I have seen were
generally started before FEA was widely available, and do not address this concept. Find
out what is considered good practice in your industry.) Applying gravity first can give
much better convergence when assessing the effect of thermal expansion moving
structures across friction contact elements, where the normal load on the contact elements
is caused by gravity.
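
A sketch of the gravity-first case (for information only; the values, node number, and axis
direction are placeholders that depend on your units and model):

! For information only. Use at your own risk.
/solu
kbc,0                  ! ramped loading
time,0.0000001         ! load step 1: bring gravity on at an effectively zero time
acel,,,9.81            ! acceleration in +Z simulates gravity acting in -Z (SI units assumed)
solve
time,1.0               ! load step 2: hold gravity constant, ramp the other loads
f,101,fy,-5000         ! example nodal force ramped up from zero
nsubst,10,100,5
solve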

I suspect that this is not possible with the Arc-Length method. I have not experimented
with it, but do not see how controlled ramping of only some loads could be implemented
under Arc-Length control of applied loading -- any opinions?

Tip 13: Force and Pressure on Flat Plates or Flat Shells


There is a rule of thumb, that if the out-of-plane deflection of a flat plate or shell is
greater than half the thickness, then membrane forces start to become significant in
resisting the applied load. In ANSYS, this calls for activating a Large Displacement
solution (a.k.a. geometric nonlinearity). Ignoring this can mean that your design misses out
on inherent strength, OR that it ends up grossly under-designed. Know what you are doing.

Tip 14: Linear and Nonlinear Buckling:

Linear eigenvalue (classical Euler) buckling is a "quick" check on a structure, but the
ANSYS manuals go to considerable pains to point out that in many situations, a Large
Displacement solution (geometric nonlinearity) needs to be run also as a check on the
buckling adequacy of a design. As with linear buckling, nonlinear buckling may need to
be assessed with respect to a number of load cases. In some structures, a diagonal tension
field is developed in a web, and elastic buckling failure does not develop at the first
eigenvalues predicted. In other structures, buckling failure may occur before the first
eigenvalue, and only nonlinear analysis will predict this.
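
For reference, a bare-bones linear eigenvalue buckling sequence looks something like the
sketch below (for information only; the mode count is a placeholder, and the eigensolver
name depends on the ANSYS version -- check the Structural Analysis Guide):

! For information only. Use at your own risk.
/solu
antype,static
pstres,on            ! store the prestress effects needed by the buckling pass
solve                ! loads and constraints are assumed to be already applied
finish
/solu
antype,buckle
bucopt,lanb,4        ! extract the first 4 buckling load factors (older versions use SUBSP)
mxpand,4             ! expand the mode shapes for review
solve
finish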

Linear eigenvalue buckling has to assume that gap and contact elements are either closed
and active, or open and inactive. Nonlinear analysis will follow the effects of these
elements as they go in and out of contact, when the loading is applied.

After any Large Displacement nonlinear elastic buckling analysis (if it doesn't diverge),
see whether the elastic stress limits have been exceeded (this includes the surfaces of
shell elements, and be careful that nodal averaging does not hide anything). If
significantly overstressed, the structure may not be adequate.

Combined bending and axial compression in a beam is a classic place where inadequacy
in strength can be predicted in FEA only by Large Displacement nonlinear analysis (i.e. a
linear analysis says it is OK, but a nonlinear analysis shows it is NOT). For some
structures undergoing elastic Large Displacement analysis without contact and gap
elements, the user may want to consider a Southwell plot.

If elastic stress limits are exceeded in the Large Displacement model, it may be desirable
to do a combined Large Displacement and Plastic Deformation model. If the structure is
overloaded, it may begin to collapse (perhaps only locally), and the Arc-Length method
may be needed for convergence control. A need to strengthen the structure may be
predicted or identified. The material properties to use are application domain and industry
specific -- start by talking to your co-workers, supervisor, and suppliers.
Tip 15: Nonlinear Analysis and the Arc-Length Method:

The basic way to do nonlinear analysis in ANSYS is to use NR iteration and many default
settings. At times, convergence will become a problem; I've encountered this with shell
structures under compressive stresses. The arc-length method can sometimes cope better
with nonlinear solutions, because of its ability to follow force-deflection curves that rise
and fall. Be prepared for long run times if your model is large.

My experience with the arc-length method is that in its default settings for step size
multipliers, it does not give satisfactory results when compressing some shell-based
models. What may work is to set a number of time substeps, such as 10, so that each
substep is 1/10 of the load step. Set the Arc-Length maximum multiplier MAXARC to
1.0 so that no substeps larger than 1/10 of the load step are taken. Set the Arc-Length
minimum multiplier MINARC to 0.1, so that the smallest load substep is 1/100 of the full
load step. I found this to help considerably. You may want to use a larger or smaller
MINARC setting, but my experience to date suggests that one should not get greedy with
MAXARC. Obviously, you may want to play with the number of time substeps.
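
In command form, the settings described above look something like this (for information
only; the numbers are the ones discussed above and will need tuning for your model):

! For information only. Use at your own risk.
/solu
nsubst,10            ! 10 substeps, so each substep is 1/10 of the load step
arclen,on,1.0,0.1    ! arc-length on, MAXARC = 1.0, MINARC = 0.1
ncnv,2,0.5           ! example termination control (see NCNV); 0.5 is a placeholder limit
solve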

The solution may still diverge but it is likely that you will get more information than
without arc-length analysis. You will want to set a termination condition for the analysis
if buckling is expected to result.

I find it desirable to save the results at every time substep when doing this type of
analysis (it helps to have a large hard drive) in order to review the process. When you
review the results of a single load case run under Arc-length control, the TIME value on
the ANSYS plots shows the decimal fraction of the full load being applied to the model.
As you move forward through the plots, if the load/displacement curve for the structure is
falling, the decimal fraction will fall, even though some displacements are visibly getting
larger.

As mentioned above, something I have not tried is to get the Arc-length solution control
to ramp some loads and not others, by having run a preliminary load step. Is this even
possible? If not, then the user may face the prospect of gravity being ramped up and
down, in addition to other applied loads, and the physical realism of the model may be
affected.

Tip 16: Animating Results from a Nonlinear or Other Analysis:

It can be helpful to watch the increasing stress levels that result as a nonlinear analysis
loading is ramped up. To create an animation, first run your analysis with loads ramped
up, and a number of substeps. Have all substeps written to the results file. Do a stress plot
of interest to set the type of stress plot to be animated by the macro that will be run. Make
the ANSYS Graphics window as small as you want the animation window to appear
(most screens will have lower resolution than a CAD workstation), keeping the aspect
ratio correct. Smaller graphics windows result in smaller animation files, if size matters.
Animation files under Windows NT (AVI files) from ANSYS often compress very well
for storage purposes. Use the PlotCtrls menu selection on the Utility Menu, and choose
Animate to get a sub-menu of choices. Choose "Dynamic Results" to create an animation
of your saved load substep results with the time shown in the legend. This seems to work
only for the last load step (read the ANSYS macro). The resulting AVI file can be viewed
with the media player, distributed, put on a web site, and so on. The media player can be
stepped manually for slow viewing. It makes it easier to watch the changing stress pattern
or deformation as nonlinear effects take over the model.

In animating a changing stress or other contour plot, you may wish to specify the contour
levels before generating the animation file. View the load step or substep with the worst
results as part of deciding where to set the contour levels.

I have not found that any of the ANSYS supplied animation macros do the one simplest
thing I want. Usually I want to animate every substep of every load step stored in the
results file. The following simple macro does this for me under Windows NT. There is
virtually no error checking in this macro. Note that this simple macro does not update
element table data at each frame. Consequently it will not work properly for plots of
element table data. If stresses, strains, or other data with amplitude information are to be
plotted, the user may want to fix the contour map levels ahead of time. The user will want
to set the displacement amplitude scaling with /DSCALE in advance--automatic scaling
will not be satisfactory. In general, it may not be satisfactory to have /ZOOM,OFF active,
since the view will change if plots of significant deflection are included in the animation.
Manually setting a view may yield a better animation. Modify this macro as you wish.
This macro must be called from within /POST1. The file that contains the results must
have already been selected, and a prototype plot command executed so that calling
/REPLOT will generate the type of plot the user wants:

! --------------------------------------------------------------------
! MY_ANIM.MAC   A quick-and-dirty animation of all of the substeps
! --------------------------------------------------------------------
! For information only. Use at your own risk.
! User must indicate how many frames are to be animated
! This macro starts with the first substep in the results file
!    by using the SET,FIRST command internally
! User implicitly indicates how many times to use the SET,NEXT command.
! The number of frames needed must exist in the RST file, else errors.
! NOTE: This does NOT work for plots of data in an element table.
!       Plotting element table results would require a macro in which
!       the element table results are updated at each substep.
!
! Virtually NO Error Checking Is Performed ! ! ! ! !
!
! What will be plotted is based on /REPLOT, therefore on the last user plot executed
!    before this macro is called.
! Scaling, etc. are all based on the last user plot. Only the SET value is updated.
!
! Call with:
!
!    my_anim, time_delay_for_frame, number_of_frames_including_first
!
ar11=arg1
*if,arg1,eq,0,then
ar11=0.1
*endif
*if,arg2,ne,0,then
/NOPR
/gsav,xxx,gsav,,temp
/seg,delete
/seg,multi,,ar11
set,first
/replot
*do,_iii,1,arg2-1,1
set,next
/replot
*enddo
/seg,off
anim,1,1,ar11
/gres,xxx,gsav
/gopr
*endif

An alternative to this macro could step through all substeps on the RST file by using a
*GET command of the type *GET,NTOTAL,ACTIVE,0,SOLU,NCMSS to check the
number of substeps as the SET,NEXT command is issued. The parameter NTOTAL will
be re-set to 1 when the animation is complete, and the *IF and *EXIT commands can
check this and break out of a do loop -- see Tip 59 below for the example of
automatically plotting all mode shapes. The user would then not need to specify the
number of substeps to plot, improving the automation, and letting the solver use variable
substep sizes without the user having to check on the number of substeps that resulted.

Tip 17: Getting the Mass or Weight of a Model:

A reader has been helpful by pointing out that the mass (or weight, depending on your
units) of keypoints, lines, areas, or volumes in a model can be retrieved, when attributes
have been assigned to these entities, by using commands available in /PREP7. Using the
graphical user interface, enter Preprocessor > Operate > Calc Geometric Items to see the
choices: "Of Keypoints, Of Lines, Of Areas, Of Volumes, Of Geometry". These items
execute the "sum" commands: "KSUM, LSUM, ASUM, VSUM, GSUM" respectively. If
no attributes have been assigned to the geometric entities, unit densities are assumed in
reporting mass and center of gravity information. After the execution of these commands,
the *GET command can be used to assign to a variable the implied volume of an area
(based on the thickness associated with its attributes) or the volume of a "volume". The
volume of a series of areas or "volumes" can also be retrieved with the *GET command
after a "sum" command is used. The *VGET command can also be used, where
appropriate, in retrieving information made available after one of the "sum" commands is
executed.

For some unstated reason, ANSYS will not directly give the total weight or mass of a
model (retrieved from the mass matrices of the elements), except to print it to output
during the solution of a problem. The user can run a partial solve in order to get this
weight or mass printed reasonably quickly. In Imperial units, it may be desirable to
convert between pounds mass and pounds weight. There is no *GET command that
directly returns the weight of selected elements. However, the volume of an element can
be returned, and the volume of a set of elements can be put into an element table, and
summed.

You can get the weight of many models into a parameter as follows: step through all the
material types, selecting the elements for each material type; get the volume of those
elements and multiply it by the density of that material type; then sum the masses or weights
over all the material types (a sketch of this loop is shown below). This will not include added
mass and mass elements at nodes (check this carefully against the output mass in the solve
module) or other things that I may not have thought of.
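
A minimal sketch of that loop (for information only; it assumes the material numbers run
from 1 to nmat with no gaps, and that the user supplies the densities in the dens_ array --
these are assumptions for illustration, not part of any ANSYS default):

! For information only. Use at your own risk.
nmat = 3                     ! number of material types (placeholder)
*dim,dens_,array,nmat
dens_(1) = 7850              ! densities of materials 1..nmat (placeholder values)
dens_(2) = 2700
dens_(3) = 8900
totmass = 0
*do,imat,1,nmat,1
esel,s,mat,,imat             ! select the elements of this material
etable,evol,volu             ! element volumes into an element table column
ssum                         ! sum the selected element table items
*get,vmat,ssum,0,item,evol   ! retrieve the summed volume
totmass = totmass + vmat*dens_(imat)
*enddo
allsel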

Of course, you can get the weight (assuming you gave densities in the material
definitions) by removing all loads (don't let thermal expansion, nodal rigid region, nodal
coupling, various gap and contact elements, or loads on constrained nodes trip you up --
use the minimal constraints needed to stabilize all bodies in 3-D), applying 1 g vertical,
having constraints on vertical motion, running SOLVE in a linear analysis, and finding
the vertical reaction force. In such a run, a combination of the FSUM (select vertically
restrained nodes only, with all attached elements) and *GET commands in /POST1 might
help you to get the weight into a variable. However, a partial solve will give the answer
more quickly (but not put it into a variable). Depending on your system of units,
remember, you may want to convert between weight and mass.

I base my comment, about the inability of ANSYS to directly return the weight of the
model with *GET, on comments in the manuals on Optimization. The optimization
examples work to reduce model volume, not weight.

Tip 18: Using Fnc Calls from Macros:

Before using macros for the first time, read about the *USE command in the ANSYS
Help manual, in addition to other relevant parts of the ANSYS manuals. The *USE
command help discusses the macro calling parameters and their local scope. Note a slight
difference in calling parameters AR19 and AR20 when the *USE form of a macro call is
used, versus the "unknown command" form.

There are times when calls from macros directly to the Function form of an ANSYS
command will be the only way to get the function called with picking. It may be desirable
to send the user a message that explains why the picking has been requested. The function
must be called with the exact use of upper case and lower case characters. An example:
Fnc_ENSYM will work, whereas fnc_ENSYM will not, because the capital F is missing.

Tip 19: Use ENSYM and ENORM to Turn Over Shell Elements:

ANSYS has two commands, ENSYM and ENORM, for re-orienting shell elements so
that a set of shell elements can all have their "top" surface face the same way. This makes
application of pressure, contact elements, and review of results more feasible. This
orientation should be done before running SOLVE; the results are not re-oriented in the
database when these commands are applied, nor in the results file, so if the elements are
re-oriented after SOLVE, the stress results will no longer apply to the correct shell
surfaces and a meaningless mess will result. These commands work with shell elements
that are attached to areas, as well as with independent shell elements. Note: If you clear
the elements attached to an area, then re-mesh, the new elements will have the same
orientation as the area. (Hint: ANSYS ought to do this re-orientation for Areas, making it
easier to pressurize the interiors of containers defined with shell elements.)

See HELP,ENSYM for information on what this command will do. ENSYM can be used
to "flip over" a shell element so that the opposite side (Top or Bottom) is showing. To do
this would require reversing the node order in the database so that Face 1 (Bottom) and
Face 2 (Top) get switched.

For more powerful capabilities in re-orienting shell elements, see HELP,ENORM. This
command will search outward from a chosen element that the user considers "correct",
re-orienting a connected set of shell elements so that they face the "same way" (this takes
some interpretation), even working around corners. It searches elements from the selected
set of elements, until it hits the edge of the model, or until two or more elements are
attached to one element edge. The user should experiment with this command in order to
understand exactly what it does, and inspect the model thoroughly after ENORM is
applied, to verify that the results are as desired. The correct use of ENORM can make the
application of pressure or contact elements to a complex model substantially easier.
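
A small usage sketch (for information only; the element type number and element number
are placeholders):

! For information only. Use at your own risk.
esel,s,type,,2        ! select the connected set of shell elements to be re-oriented
enorm,1234            ! re-orient them to match element 1234, the one considered "correct"
eplot                 ! then inspect the model thoroughly, as noted above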

It would be very helpful if ANSYS had a special command that would plot shell elements
with the sides colored according to whether they were FACE1 or FACE2 of the element.
This command could be extended to color the (up to) six sides of solid elements,
according to their face number. A similar command for plotting areas would help, too. It
could even be done for beams displayed with /ESHAPE showing the outer envelope. At
present, with ANSYS 5.3 running on Windows NT, I get different colors for Face 1 and
Face 2 of shell elements when PowerGraphics is ON, and "No Numbering" plus "Colors"
or "Colors and Numbers" has been chosen under PlotCtrls,Numbering. I have not seen
this documented. This does not happen for areas, or for solid elements.

Tip 20: Shell Types to Try:

I have used Shell 63 (for Elastic), Shell 43 (for Plastic), Shell 93 (8-Node, for Elastic &
Plastic), Shell 143 (for Plastic), and Shell 181 (for Plastic). The Revision 5.4 for ANSYS
will include a bug fix for a Shell 181 problem. Shell 143 is no longer supported, but is
still embedded (hidden) in Revision 5.3 of ANSYS for compatibility reasons.

I have recently found Shell 93 to be useful in modeling some curved structures, because
of its ability to follow curved surfaces. (Shell 63 elements are flat, and can make a mess
of a general curved surface under free meshing.) Shell 93 gave me good convergence for
both elastic and plastic Large Displacement (nonlinear geometric) analysis. It does not
like to follow too large an angle of curvature with one element, so the number of
elements on an area fillet can be large. Set the angle subtended by Shell 93 elements
during meshing to a value that is small enough to avoid warning messages. Watch out for
aspect ratio warnings. (Lack of warnings is not a complete guarantee of acceptable
element shapes.) If the structure has pressurized flat surfaces, Shell 93 often converges
better when stress stiffening is activated for Large Displacement analysis. Stress
stiffening for Shell 93 is activated at the solution phase of the analysis, whereas Shell 63
is (apparently) only stress-stiffened by setting one of the KEYOPT values. (I have
obtained different Large Displacement convergences with Shell 63 with no stress-
stiffening set, with the KEYOPT stress-stiffening set, with stress-stiffening set in /SOLU,
and with stress-stiffening set in both places.) Like Shell 63, Shell 93 also has the virtue of
being supported by the Linear/Plus version of ANSYS for Large Displacement elastic
analysis, so models can be moved back and forth.

When forcing mapped meshing of curve-sided Shell 93 elements on a plane area by
concatenating perimeter lines, I have occasionally had mid-side nodes created, in the
interior of the area, such that there was too much element curvature distortion in the plane
of the element. One fix is to have the elements created with the sides straight, which is
tolerable if the elements are flat, and if it does not cause trouble on the perimeter of the
plane area being meshed. "Trouble" here means poor representation of curved
boundaries--other elements on these boundaries may need to curve to follow curved
surfaces, or it may be desired to have a curved fit to an outside edge. If flat element sides
cause trouble on the perimeter, then start by meshing areas on the other side of the
perimeter with elements that have curved sides--these elements could even be triangular.
Next, mesh the area of interest with the elements sides set straight, then clear the
surrounding areas, if the surrounding areas are not intended to be meshed, or need better
element shape control. This will leave the plane area of interest meshed with elements
that have straight edges in the interior, and curved edges on the perimeter. This is
illustrated by the following images of an intentionally extreme example. In the first
image, a line plot of element edges shows extreme distortion in the plane. An intended
hole is meshed with triangles. All these elements are Shell 93, having mid-side nodes.

In the second image, meshing with mid-side nodes positioned on straight lines is being
chosen.
In the third image, the consequence of meshing the part with straight-sided elements is
shown. The elements at the hole have a curved side, because the hole is already meshed
with curved-sided elements.

In the fourth image, the elements bordering the hole are shown, after the hole has been
cleared of elements. The element curvature at the hole is visible. The interior of the plane
area is meshed with straight-sided elements. The same problem and a similar fix can be
encountered with mid-side noded SOLID95 brick elements that have 20 nodes. The
surface areas of a volume can be meshed with 8-node SHELL93 elements with curvature,
then the volume meshed with SOLID95 elements with the sides straight, then the shell
elements on the areas removed with the ACLEAR command. This will leave the volume
meshed with SOLID95 elements that are curved on the surface areas, but with straight
sides in the interior. There are rare occasions when this will eliminate element distortion
warning messages.

Tip 21: Moving a Model from ANSYS Mechanical to ANSYS Linear/Plus:

Because versions of ANSYS sell for different prices, a company may own one version to
be used for nonlinear models, and several licenses for linear work, or just for model
creation and results review. Occasionally, a model will be moved "down" from a fuller
version of ANSYS to the Linear/Plus version.

A user can run into difficulty moving a model from ANSYS Mechanical (or ANSYS
Structural, etc.) to the less expensive ANSYS Linear/Plus. The Linear/Plus version limits
the number of nodes allowed. Unfortunately, it implements this control by not allowing
node numbers that exceed a limiting value. This means that compression of node
numbers (and element numbers) may be required in order to get larger models to be
accepted by ANSYS Linear/Plus. Otherwise, the program quits without an opportunity to
compress the numbering (more recent ANSYS versions may be more tolerant, but the
numbering will have to be compressed at some point).

When the node and element numbers are compressed, coordination of loading with the
numbering expressed in load step files is lost. The way around this that I have used is to
read in the original database, read in a load step file, compress the numbering, and write
the load step file. The process, reading in the original database, must be repeated for
every load case (a macro could be written to automate this.) Finally, the original database
is read in, numbering is compressed, and the new database is written.
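
A sketch of that renumbering sequence (for information only; the file names and load step
numbers are placeholders, and the processor switches may need adjustment for your version):

! For information only. Use at your own risk.
! Repeat this block for each load case:
resume,bigmodel,db      ! start from the original database each time
/solu
lsread,1                ! read load step file 1 (loads keyed to the OLD numbering)
finish
/prep7
numcmp,all              ! compress node, element, and other numbering
finish
/solu
lswrite,1               ! re-write load step file 1 in the NEW numbering
finish
! After all load cases have been re-written:
resume,bigmodel,db
/prep7
numcmp,all
save,smallmodel,db      ! write the renumbered database under a new name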

Unsupported element types cannot be used in ANSYS Linear/Plus; neither can too large a
wavefront (can the PCG solver get around this?). The unsupported elements need to be
deleted or changed before moving the model (e.g. change SHELL181 to SHELL63).
Then, if the number of entities does not exceed ANSYS Linear/Plus limitations, the
database can be moved to the other program.

The next problem in moving models to ANSYS Linear/Plus, is that nonlinear material
models must be deleted in ANSYS Mechanical (Structural, etc...) before moving the
database to ANSYS Linear/Plus. This is because the ANSYS Linear/Plus program will
complain that the material nonlinearity is included, but not accept the commands to delete
it (Hint: ANSYS should add this delete function to Linear/Plus.) Of course, I found all
this out the hard way.

On rare occasions, a model from a more recent version of ANSYS may be moved back to
an earlier ANSYS version. If IGES is not satisfactory, a user could use CDWRITE to
write out the element and node model and other model data to a file (the DB option), then
manually clean up the file so that the earlier version of ANSYS could accept it. This
includes modifying commands for element creation, after deducing what format is
needed. Writing the element data with EWRITE then cutting and pasting with the
CDWRITE file may be easier -- I haven't tried it. A user-written program can expedite
cleanup for a large model.

Tip 22: Deleting Nodes that have Nodal Coupling:

When deleting a set of nodes for which some were members of coupled node sets, delete
the coupling equations BEFORE deleting the nodes. Otherwise, unwanted coupling
equations may be active if you create more nodes. The coupling equations are not
automatically deleted when the nodes are deleted--is this a bug? Select the nodes to be
deleted, then delete node coupling equations for which any nodes are selected, then delete
the nodes. (You will have had to first delete the elements.) Clearing solid model entities is
the same as deleting the elements and nodes simultaneously.
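
A minimal sketch of that order of operations (for information only; the node selection
criterion is a placeholder):

! For information only. Use at your own risk.
nsel,s,loc,x,10,20     ! select the nodes to be deleted (selection shown is arbitrary)
esln,s                 ! select the elements attached to those nodes
edele,all              ! delete those elements first
cpdele,all,,,any       ! delete every coupled set that contains ANY selected node
ndele,all              ! now delete the nodes themselves
allsel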

I find it very helpful to turn on the symbols for nodal coupling when checking for proper
use of these details.

Tip 23: Convergence with Shell FEA Models in Nonlinear Analysis under ANSYS:

First, remember that there are three basic kinds of nonlinearity: (1) Large Displacement
(geometrically nonlinear) behavior and (2) plastic material properties are the obvious types;
in addition, (3) nonlinear solutions occur when nonlinear elements such as gap
elements, hook elements, and surface contact elements are used. Because of (3) it is
clearer to refer to a "linear" analysis as "small displacement elastic", since "linear" may
be perceived as meaning that there are no nonlinear elements present. A nonlinear
analysis will take longer, usually considerably longer, than a linear analysis. For a large
finite element model, it helps to have a computer with an extremely fast CPU, large
RAM, large hard drive, and fast hard drive data transfer (high-speed SCSI may help on
PC's) for nonlinear analysis.

In ANSYS, the Shell 63 element will do Large Displacement, but is NOT capable of
material nonlinearity (plasticity). Shell 43, Shell 143, and Shell 181 are capable of both
Large Displacement and material nonlinearity. These four elements are 4-node quad
elements. ANSYS also has an 8-node shell element, Shell 93. The Shell 93 element is
capable of both Large Displacement and material nonlinearity. Shell 93 has the advantage
that it can follow a curved surface. There are also shell elements for composite materials
and for P-element solutions. I will restrict my comments to the basic shell elements: 63,
43, 143, 181, and 93.

The elements should have acceptable aspect ratios, not be ridiculously large or small, not
be pathologically deformed, and not generate warnings about being warped. If warped
quad elements are unavoidable during meshing, it may be desirable to use either small
triangles, or the Shell 93 element. Note that within the ANSYS manuals, high order
elements are not considered to be ideal for nonlinear work. However, I seem to have had
some success with the Shell 93 element (can't say if the results were ideally accurate).
You can evaluate the model quickly by doing a partial solve (Partial Solu in the GUI),
only generating the element matrices, and getting warnings (if any) and other information
in the ANSYS Output window.

If a Large Displacement solution is chosen, some solutions are improved by setting Stress
Stiffening before running the solve process. Stress stiffening for elements 63, 43, 143,
and 181 can (apparently) only be set with one of the KEYOPT values (Keyopt(2)) for the
element (see Options when using Add/Edit/Delete to add element types with the GUI).
Some beam elements are like this, too. It apparently (I find the manuals difficult to
interpret on this) can NOT be set within the Solve module, even though the GUI has a
selection box for Stress Stiffening. However, I seem to have had convergence differences
with Shell 63 with stress-stiffening set and not set in the solve module. For Shell 93,
stress stiffening IS set within the Solve module, by choosing it under Analysis Options in
the GUI (SSTIF). The use of stress stiffening for convergence improvement is
contraindicated by some conditions such as the substantial use of nodal coupling or nodal
constraint equations... see the ANSYS manuals on this. Note that SSTIF is NOT the same
thing as the command PSTRES.

A second thing that helps many nonlinear solutions (both Large Displacement and
plastic) to converge when substeps are being used is to activate the Predictor (PRED) in
the Solve module. (This may be more of a hindrance than a help when gap and other
nonlinear elements will be changing status frequently.)

There are other settings that can be tried when attempts at convergence are not working. I
usually stick to letting the program decide how to use Newton-Raphson iteration and
adaptive descent in the Solve module. Under the Nonlinear settings of the GUI, the user
can modify the Convergence Criteria. I often use only convergence on forces (not
moments) when analyzing shells if I am not inputting any moments directly. I usually
reduce the number of Equilibrium Iterations to 15 when doing shell models, preferring to
use smaller substeps instead. However, in a model with gap or contact elements it may be
desirable to have a much larger number of Equilibrium Iterations. I rarely try Line
Search.

Making a good choice of time substep sizes is critically important in getting models to
converge. If shell models of flat plates subjected to pressure or perpendicular forces are
included in the analysis, the shell will at first act as a flat plate in bending. Once the shell
has curved, by movement as small as half its thickness, the shell will start to carry the
applied load with membrane forces. In a model of this type, starting with very small
substeps (e.g. 1/100 of the full load) may be needed to achieve convergence. I would start
with a very small first substep, but allow the largest substep to be as large a fraction as
1/4 of the applied load. If there are no perpendicular loads but the loading is causing
Large Displacement, or if buckling is to be considered, it is likely that small timesteps
will be needed toward the end of the force application ramp. Where there is no pressure
or perpendicular force on flat shells, I would start with a substep such as 1/10 or 1/4 of
the applied load, but allow a minimum substep as small as 1/100 of the full load. If these
approaches will not work, it is likely that convergence control commands in addition to
time substep size will need consideration.

If the structure is buckling or undergoing plastic failure, or "simply will not converge" it
may help to use the Arc-Length method. As I have noted elsewhere, I don't use the
default Arc-Length settings. I usually start with a number of substeps (NSUBST), and
don't let the Arc-Length solver increase the size of a step beyond my maximum substep
size. I let the Arc-Length solver use a minimum step size that is 1/10 or 1/100 of my
substep size. I let the Arc-Length solver use a maximum step size multiplier of one. The
Arc-Length method can follow a rising and falling force-displacement relationship. I find
PlotCtrls/Animate/Dynamic_Results to be useful in reviewing the behavior during an
Arc-Length analysis, and other nonlinear analyses. I prefer to save the results at every
substep when doing this (Output Ctrls). When using Arc-Length analysis, it is usually
desirable to set a criterion to stop an analysis (NCNV). I usually use maximum
displacement as the criterion for shell work.

Remember to ramp up your loads, permit automatic time stepping, and, in the NSUBST
command, allow the program to use bisection by setting the maximum number of substeps
greater than the minimum number of substeps.
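
Pulled together, a typical starting point for a Large Displacement shell run might look like
the sketch below (for information only; every value here is a starting point to be tuned, not a
recommendation for your particular model):

! For information only. Use at your own risk.
/solu
antype,static
nlgeom,on              ! Large Displacement (geometric nonlinearity)
sstif,on               ! stress stiffening (element support varies -- see the discussion above)
pred,on                ! predictor, often helpful when substeps are used
autots,on              ! automatic time stepping, so the program can bisect
nsubst,10,100,4        ! start at 1/10 of the load; allow substeps between 1/100 and 1/4
kbc,0                  ! ramp the loads within the load step
neqit,15               ! limit equilibrium iterations, preferring smaller substeps instead
cnvtol,f               ! check convergence on forces (see the comments above about moments)
outres,all,all         ! keep results at every substep for review and animation
solve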

If you are having trouble with convergence, save the results at intermediate substeps so
you can review the stress and displacements. If you are doing combined Large
Displacement and plastic deformation, and having trouble with convergence, consider a
study in which you do (1) an elastic small displacement analysis as a check on element
shape, loading, and constraints, (2) a Large Displacement elastic solution, and possibly
(3) a plastic small displacement solution. If these work without significant warning
messages, you should be making some progress. If gap or contact elements are being
used, consider (4) softening the normal and tangential stiffness values in a preliminary
analysis (KN and KS). You can also (5) try relaxing the convergence criteria on force
and/or moment error. If desperate, a coarsely meshed model may improve speed enough
for you to study what helps get an answer. These preliminary studies may help you to
find what settings help you to get convergence or discover modeling problems before you
do more time-consuming accurate analysis. If you are trying a new technique, consider
testing it on a toy-sized problem, before applying it to a large industrial-sized problem
that runs for hours or days, in order to learn the peculiarities and pitfalls of a particular
time-consuming method.

If gap or contact elements are the only nonlinearities in a model, consider substructuring
the linear regions of the model. This can result in a tremendous increase in solution
speed. If only a sub-region of a model will behave in a nonlinear manner, it may reduce
solution time to substructure the region that can be regarded as acting in a linear manner.
This speedup effect may or may not occur with large displacement modeling, when the
substructure itself will be undergoing large displacement -- I have done only limited
testing of this technique. See below for a brief discussion and for simple examples of
substructuring.

Tip 24: Working with Load Step Files in ANSYS:

Load step files can be used to automate the application of a number of different load
cases on a structure. A load step file contains loads on elements and nodes. It does NOT
contain loads on geometric entities. Consequently, a load step file can be generated after
all loads from geometric entities have been transferred to a model. After all loading on
geometric entities has been deleted, the load step file can be read back in, recovering all
applied loads. Alternatively, consider the "LSCLEAR,SOLID" command. These loads
can then be scaled.
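
A minimal sketch of that sequence (for information only; the load step numbers are
placeholders):

! For information only. Use at your own risk.
/solu
sbctran              ! transfer loads from geometric entities to the elements and nodes
lswrite,1            ! load step file 1 now records element and node loads only
lsclear,solid        ! delete the loads on geometric entities
! ...modify the loads and write further load step files (lswrite,2 and so on) as required...
lssolve,1,2,1        ! solve load step files 1 through 2 in sequence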

The user needs to be careful when manipulating load step files. The load step files may
contain the KUSE instruction telling ANSYS to re-use the TRI file if the constraints have
not changed. If the user deletes a load step file, changes the order in which the files are
executed, or manually modifies their contents, an invalid analysis might result.

If the model is re-numbered after load step files are generated, the node and element
numbers in the load step file will no longer be synchronized with the model, and will be
invalid. A way around this is mentioned elsewhere in these notes (See Tip 21).

The reader should take note of the ANSYS user guide's comments on the LSCLEAR
command. This deletes all loads and resets all load step options to their defaults. This can
"clean up" the load step data before using LSREAD to read a load step file for
modification. What this implies is that the load step execution process does NOT execute
an LSCLEAR command when a load step file is read in. If it did, then ANSYS would
have to implement substantial checking to see whether a TRI file was safe to re-use,
under the frontal solver (TRI file re-use saves considerable time). Load step
implementation can cause havoc when the user employs load step files in a manner for
which the method was not designed. It may help to read the contents of the
LSSOLVE.MAC macro in predicting what will happen, and to see what LSSOLVE does
to avoid trouble. The LSSOLVE.MAC macro at ANSYS 5.3 includes some
undocumented commands including DMARK, FMARD, SMARK, BMARK, and a
*GET command that retrieves the error number in the /SOLU process. It also uses an
"LSCLEAR,SOLID" command that removes loads on geometric entities before reading
in load step files. It selects all DOF labels, sets xCUM labels to "replace", and does a few
other things. I do not consider the manuals to pursue this topic adequately -- a user ought
to read the macro.

The ANSYS manual comments on the LSREAD command. The command does NOT
clear ALL current loads on the model when it reads in a new load step file (it does clear
some... read the manual).

When using load step files: if loads on nodes and elements are set with the BF and BFE
commands (for example, applying temperatures for a thermal deformation stress analysis),
and a subsequent load step is to return those temperatures to ambient, it may be necessary to
use the BF and BFE commands to set the nodes and elements to the reference temperature
(by default 0), rather than just deleting the loads using BFDELE and BFEDELE and using
BFUNIF to input the uniform temperature. It may help to use commands such as
"nsel,s,bf,temp,-999,99999" and "esel,s,bfe,temp,-999,99999" to select all of the nodes or
elements to which temperatures have been applied, if you are going to change them. Be very
careful with the BFE command. If you set the value of the temperature at, for example, four
locations on an element with BFE, and in a later load step set the value at only two locations
within that element, the temperature at the other two locations will still be "hanging around"
at the previous value. It is very easy to make this mistake when running a series of load step
files. (Another thing I found out the hard way, in a model where both piping creation
commands and beam elements were used.)

If the user is deleting displacement constraints using DDELE, and then writing an
additional load step file, the old constraint may still be present when the series of load
step files is read in under LSREAD; check for this in your results. Be careful with this. It
may compromise the use of load step files, or require some intervention like writing an
input file that calls load step files in using LSREAD, implementing fix-up commands as
needed -- be careful that a TRI file is not re-used because a load step file contains
"KUSE,1" when your changes to constraints mean that a new TRI file should be
generated. Statements in the LSSOLVE.MAC macro can provide guidance on using
LSREAD effectively. You may need to look inside the load step files with a text editor.
Be warned that changing the contents of load step files with a text editor can be tricky
because of unintended side-effects.

In general the user will have to be careful that the "residue" from the loads and
displacements of one load step does not appear inappropriately in later load steps. This is
true when generating the load step files in the first place, and may apply when reading in
load step files with LSREAD. As noted, LSSOLVE.MAC uses cleanup statements.

The user will have to be careful to change loads between load steps in a manner
consistent with getting smooth ramping of loads and displacements, for those cases when
this is desired, either for transient analysis, or for good nonlinear analysis convergence, or
when intermediate results are desired at in-between loads.

Before reading in load step files to solve with LSREAD, ensure that loads on geometric
entities and elements and nodes have been deleted, unless you are keeping them
intentionally (as noted, loads on geometric entities overwrite loads on elements and
nodes). As noted, LSSOLVE.MAC in ANSYS 5.3 contains the command
"LSCLEAR,SOLID" to remove the solid model loads on the model before proceeding.

If Large Displacement analysis is going to be used in analyses run by load step files, the
NLGEOM flag must be set in the first load step file. There will be no NLGEOM
command generated in subsequent load step files. Because ANSYS does not permit the
kind of analysis to be changed when applying a series of load steps, error messages will
result if the user changes the value of NLGEOM in the middle of a set of load step files.

Tip 25: Plotting Shell Stresses -- Surface, Mid-Plane Stress, Load Paths, ESYS and
RSYS:

In the ANSYS database, shell stresses (and strains) for the basic shell elements (63, 43,
143, 181, and 93) are reported at the top and bottom surfaces of the shell element. The
user has four options in ANSYS 5.3 for plotting shell stresses (and strains). Three of
them are selected with the commands: "SHELL,TOP" , "SHELL,MID" or
"SHELL,BOT". These will cause plotting of shell stresses (and strains) to be based on the
values at the top surface, mid-plane, or bottom surface of each shell element. This is a bit
misleading. The mid-plane stress is based on the average of the stresses at the top and
bottom (this may not be correct, at least for some elements, considering Section 2.3.4 of
the Theory manual, which refers to stress on the mid-plane of a shell element separately
from the top and bottom, and forms the force per linear unit from a weighted average of
top surface, mid-plane, and bottom surface stress -- what's going on here?). What
constitutes the top and bottom of a shell element depends on the element's orientation
when it was defined (see elsewhere in these pages). It is possible to have adjacent
elements, one with a "top" surface pointing upward, and its neighbour with the "top"
surface pointing downward. In complex structures it happens all the time. If nodal
averaged plots are done, for example with "PLNSOL,S,EQV", when either top surface or
bottom surface plotting is chosen, then with such adjacent elements, the plotted top
surface and bottom surface results will get blended, causing a misleading mess to be
displayed. (See Tip 19 for commands that can re-orient shell elements.)

More insight into the flow of stress in a model can be gained by plotting the stress
vectors, using the "PLVECT,S" command. With shells, these vectors will be plotted for
the mid-plane principle stress components. At times you will want to use vector graphics
with no hidden surface removal, to give the best view of these vectors. If there is local
compression, the vectors point inward. These vectors can give insight into load paths in a
structure.

Where there are intersections of planes of shell elements, e.g. corners or "Tee"
intersections, or where elements of differing thickness meet, the averaging of node
stresses can render local stress plot information meaningless at the intersection. This is
true of both surface and mid-plane stress plots. This is one way in which excessive
stresses will be unintentionally missed.

Any time that nodal averaged plotting is done, it is possible for the averaging to "wash
out" local stresses that may be important, yet it is common to do nodal averaged plots
because of their much cleaner appearance (I do them myself). The fourth option in
plotting shell stresses is to switch on the ANSYS Powergraphics feature. This causes
shell results to be displayed, even averaged, for the visible surface. Options activated with
the AVRES and /EFACET commands can refine the way the results are plotted under
Powergraphics (look them up). Powergraphics has the options to discontinue the
averaging of stress contours where there are certain discontinuities in the material or
geometry in the model. I'm going into this detail, because a high stress that is washed out
by nodal averaging could be a stress that causes serious fatigue or other damage, such as
cracking, or a weld being torn apart.

The only shortcoming is that Powergraphics will not work with mid-plane stress. The
user has few options here. Sometimes it is important to select only regions of a model
when doing nodal averaged mid-plane stress plots (using "SHELL,MID", without
Powergraphics) so that the averaging does not wash anything out. A mid-plane stress plot
without Powergraphics can be done for element stresses, using a command like
"PLESOL,S,EQV". This will look messy, but at least it doesn't hide an extreme stress. An
alternative I used is discussed elsewhere in these pages: I wrote a macro to get the mid-
plane averaged stress (all components) at every node of every element (a given node has
different results with reference to each of the elements to which it is attached, so a given
node will be looked up as many times as the number of elements to which it is attached),
and transfer it to the top and bottom surfaces, so that Powergraphics would plot mid-
plane averaged stress neatly, with discontinuities. CAUTION: This ruins the results
database. The macro is extremely slow to run. The method (under Powergraphics) does,
however, give far better looking plots than using the "PLESOL,S,EQV" command to plot
mid-plane element stresses without nodal averaging (without Powergraphics).
LOAD PATHS: The macro I mention above could be modified to multiply the mid-plane
averaged stress components by the local shell element thickness at each node. The
resulting values would yield a contour plot of force per linear inch (or other dimensional
unit) "averaged" at the mid-plane of the shell -- this could help to make load paths visible
in a complex shell structure. "PLVECT,S" plots would then show arrows corresponding to
the load-per-unit-length on the mid-plane, and the principal directions in which it acts,
helping to illustrate the load paths. This macro would also
ruin the database for any other use. Before plotting "load-per-unit-length" data, the user
needs to decide how to orient the results data coordinate systems with RSYS for
information such as Sx or Sy that contains direction information (stress and strain with
EQV does not contain direction information).

Note: The Output Data section on Shell63, Shell43, and Shell93 includes In-plane
element X, Y, and XY forces called TX, TY, and TXY. Consequently, shell "force per unit
length" data can be obtained directly in an Element Table very quickly, though with a
resolution of one value per element. (For Shell63, 43, and 93, use SMISC setting 1, 2, or
3 when generating the element table data.) The Theory Manual uses the term In-plane
forces per unit length while the elements manual refers to just forces as above -- a simple
test I ran shows the data to be force per unit length. The elements manual ought to clarify
this. The Element Table data can be contour plotted, but there are no principal stress style
vector plots of table data. (Clarification: PLVECT can plot vector arrows based on 3
ETABLE columns, but not the double-headed arrows for an ETABLE as in a principal
stress vector plot.) The Elements manual shows that the TX, TY, and TXY values are not available under "Miscellaneous Element Output" at every node, only at the centroid. The Elements manual does not explicitly show that S,EQV or S,INT stress information can be extracted at the mid-plane; their values are extracted with the component name method.
Brief experimentation shows that if the command "SHELL,MID" is followed by
"ETABLE,SEQVMID,S,EQV" that the column called SEQVMID will contain an average
SEQV value for the mid-plane. If "SHELL,TOP" or "SHELL,BOT" is called, the
ETABLE value of SEQV will change if the update command "ETABLE,REFL" is
executed. Warning: When plotting shell element table data with
PLETAB the plot information legend will read TOP, MID, or BOT according to the
current setting of the SHELL command. This bit of information DOES NOT reflect the
SHELL surface setting conditions in effect when the ETABLE data was stored, and could
be misleading. For this reason, the label used for the column should indicate the shell
layer setting in use when the element table data was loaded, as with "SEQVMID" above.
Doing an element table update with ETABLE,REFL will re-fill columns with results data.
A change of the SHELL layer setting can change stress results that are loaded in an
update. Consequently, loading shell element data must be handled very carefully in order
that the layer choice is controlled. Element table data from the CALC module (adding
columns etc.) is NOT updated and has to be explicitly re-calculated.
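
To make the sequence concrete, a sketch of filling and plotting such a table might look like this (the SMISC item numbers 1, 2, and 3 are those noted above for Shell63/43/93; verify them against the element's manual page before relying on them):

etable,tx,smisc,1 ! in-plane force per unit length, element X direction
etable,ty,smisc,2 ! in-plane force per unit length, element Y direction
etable,txy,smisc,3 ! in-plane shear force per unit length
shell,mid ! set the layer BEFORE loading stress items
etable,seqvmid,s,eqv ! mid-plane equivalent stress; the label records the SHELL setting
pletab,tx,noav ! contour the TX column without nodal averaging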

NOTE Also: The direction of the element table load-per-unit-length TX, TY, and TXY is
as taken from the element in Element Coordinates. Unlike SX or SY, the values of TX,
TY, and TXY appear to be insensitive to the RSYS setting. The Element Coordinate
System will vary in orientation from element to element, particularly under free meshing, which affects the usefulness of TX, TY, and TXY data. The element table data can be
processed by the user to yield a new table column containing the "load-per-unit-length
intensity" in the sense of a Mohr's circle, giving rapid if somewhat coarse plots of load
path information along the shell mid-plane. The plots will usually be more informative
without nodal averaging. Section 2.3.4 of the ANSYS Theory manual discusses Forces
and Moments per unit length on shell elements -- the suggestion is that internally, at least
for some shell elements, the mid-plane stress is NOT simply the average of the top and
bottom stresses. The way around the problem of element coordinate systems being
arbitrarily oriented is to define local coordinate systems before meshing areas (or
otherwise generating shell elements) and use ESYS to get all shell elements oriented with
the local coordinate systems. ESYS assigned to elements can be modified after the fact
but before SOLVE, by using the EMODIF command in /PREP7. It may be desirable to
have a local coordinate system aligned with each flat area to be meshed with shell
elements so that all shell element coordinate systems can be aligned in the plane of the
area -- a time consuming process unless a macro is used. Curved surfaces would be
difficult.
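
As a small sketch of the approach (the coordinate system number, rotation, and area number are hypothetical):

local,11,0,0,0,0,30 ! local Cartesian system 11, rotated 30 degrees about global Z
esys,11 ! default element coordinate system for elements meshed from here on
amesh,5 ! mesh a flat area lying in that orientation
! ... after SOLVE, in /POST1:
rsys,solu ! report Sx, Sy, and Sxy in the solution (element) coordinate systems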

The problem of orienting coordinate systems in the plotting of results is illustrated by the
images below. The first shows 3 elements that were created during free meshing. The
elements are plotted using vector graphics, with the element coordinate systems shown.
Each element has its coordinate system oriented differently. The image below it lists the
elements and their node numbers. Look at the sequence of node numbers for the three
elements to see why the element coordinate systems point in such different directions.
The next two images show a plot of TX done from an element table. The element table
was filled by the TX values for the elements (this is the load-per-unit-length in the
element coordinate system X direction). The values differ so much from element to
element because of the difference in the element coordinate systems. The plot
consequently tells us too little. The following element plot of Sx shows the stress in the X
direction. The results are shown in the global coordinate system.
The final images in this section show a group of Shell63 elements that have had their
element coordinate systems aligned with local coordinate systems at the time of the
creation of the elements, by the use of the ESYS command. This will permit element
table results TX, TY, and TXY to be aligned in a known manner. This also permits Sx, Sy,
and Sxy to be aligned with the plane in which the elements were created, if RSYS,SOLU is active when plotting stress results. Knowledge of the alignment of the loads and stresses can make
plots more useful in understanding load paths, reduce the total number of plots required
in model assessment, and help facilitate an evaluation of loading on welds. The first plot
with vector graphics shows the elements with their element coordinate systems. Note that
they are aligned. There are two local coordinate systems at work in this example -- they
are numbered 11 and 12 and their symbols are plotted. Elements have been created
aligned with number 11 in one plane, and aligned with number 12 in the other plane of
elements. A line pressure has been applied in the global -Y direction. The second plot
with raster graphics is of Sx at the shell mid-plane. Because RSYS,SOLU was active
when the Sx plot was generated, there are Sx values shown in all elements. If RSYS,0
were active when the Sx plot was done, the plane of elements that is perpendicular to the
global X axis would show zero stress in the X direction in this example.
There is an alternative to using ESYS and RSYS,SOLU to align element coordinate
systems for the purposes of stress plots like Sx, Sy, and Sxy. During postprocessing in
/POST1, a local coordinate system can be aligned with the plane of shell elements of
interest, and RSYS set to that local coordinate system, before plotting Sx, Sy, or Sxy.
However, this would do nothing for TX, TY, and TXY which depend on the element
coordinate system and are generated in an Element Table.

I leave the topic of whether to plot surface or mid-plane shell stresses to the reader to
determine. Too much is industry or application domain specific. Hint: Check mid-plane
plus both shell surface stresses. Surface stresses and strains can indicate local bending, and can lead to cracking, breaking of protective coatings, and fatigue; they can also imply possible overload or prying of welds and fasteners, and can highlight other troubles.

Tip 26: Nodal Coupling (CP) versus Rigid Region (CERIG):

I have seen analysts mistakenly use nodal coupling where rigid region constraint
equations should have been employed. (The nodes concerned were not at the same
location in space.) Rigid region constraint locks together a selected set of nodes so that
they translate AND rotate in space as if they were locked together by an infinitely stiff
structure. Nodal coupling locks together selected degrees of freedom (translation and/or
rotation) individually, so that the same degree of freedom value will result for the nodes
in the coupled set. Nodal coupling will not combine the rotations and translations that are
necessary to imply rotation as a rigid body in space.
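
A brief illustration of the difference (node numbers are hypothetical):

cerig,10,11,all ! rigid region: node 10 is the master; node 11 translates AND rotates with it
cerig,10,12,all ! node 12 joins the same rigid region
cp,next,uy,20,21,22 ! nodal coupling: only the UY values of nodes 20, 21, and 22 are forced equal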

Note that rigid region constraint may not be appropriate for Large Displacement, when
the displacement rotations are significant (sin(theta) differing from theta, etc.). This is
because ANSYS uses a linear approximation to the rigid body rotation matrix. A rigid
region grouping can be implied by tying nodes together with extremely stiff beam
elements (zero-mass beam elements a few orders of magnitude stiffer than the structure to
which they are attached.) The beam elements should have the advantage that they work
under Large Displacement. The beam elements should not be too stiff, or ill-conditioned
matrices could result. If the beams are of very widely varying lengths, then some may be
too stiff, others too flexible -- remember that flexibility is proportional to length cubed.

I ran a model in which about one thousand beam elements were used to position gap
elements. These beam elements would ideally have been infinitely stiff. I needed
elements, instead of nodal coupling or constraint equations, because of thermal expansion
considerations. The beam elements were widely varying in length. This created solver
trouble, until I wrote a macro that assigned each beam element a unique REAL value,
which set values for each BEAM4's Ixx, Iyy, Izz, and Area as a function of the element's
length. I found it sufficient to set their stiffness a couple of orders of magnitude stiffer
than the contact stiffness of the gap elements.

Turning on the symbols for nodal coupling and for nodal constraint equations is very
helpful in reviewing the correctness of a model.
Tip 27: Vibration Modes with Pre-stress:

Calculation of natural frequencies and modes of vibration CAN be done with pre-
stressing of the structure under ANSYS. There is a "PRESTRESS" flag to set under
modal analysis. This is available in the dialog box for Modal Analysis Options. First, do a
static analysis with the prestress flag set. Exit Solution (click Finish or enter "/fini"). Re-
enter Solution, and do a modal analysis with the prestress flag set again. This does not
seem to work when the stress run is done with Large Displacement activated.
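
A bare-bones command sketch of the sequence follows; the prestress flag corresponds to the PSTRES command, and the extraction method and mode count here are arbitrary choices (use SUBSP rather than LANB on older revisions):

/solu
antype,static
pstres,on ! calculate and store the prestress effects
solve
fini
/solu
antype,modal
pstres,on ! include the prestress effects from the static run
modopt,lanb,10 ! extraction method and number of modes: user's choice
mxpand,10 ! expand the mode shapes
solve
fini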

I leave the question of how a performer plays music with a hand saw and a violin bow as
an "exercise for the reader" :-)

Tip 28: Creating New Elements by Copying or Reflecting Existing Structure:

In order to create new elements by reflecting or copying existing elements, there are a
few things to do. First, select the elements to be copied and get their nodes with NSLE.
Copy or reflect the nodes, noting the nodal number offset that will be used -- write it
down. Copy or reflect the elements, using the nodal offset number that you wrote down.
ANSYS should default to a nodal offset number equal to your highest numbered node. If
you make it smaller, you run the risk of changing the location of nodes that already exist,
resulting in a lovely mess. If you are running something like ANSYS/ED you may want
to compress your node numbers first, because if a resulting node number exceeds the
ANSYS/ED limit, the program will terminate immediately (the more recent ANSYS
revisions may give a non-fatal warning message and quit some time later if you don't
clean up). You could compress the node numbers, and then make the offset number equal
one plus the difference between the maximum node number of the whole model and the
lowest node number of those nodes to be copied or reflected. You can find these node
numbers with *GET commands. (Remember that compressing node or element numbers
will destroy synchronization with Load Step files.)
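
A sketch of a reflection about the global Y-Z plane, with the offset handled as described above (check the NSYM and ESYM command descriptions before leaning on this):

! the elements to be reflected are currently selected
nsle,s ! get their nodes
*get,noff,node,,num,maxd ! highest node number defined anywhere in the model; if your revision lacks MAXD, select all nodes and use NUM,MAX instead
nsym,x,noff ! reflect the selected nodes (X to -X), numbering them with the offset
esym,,noff ! reflect the selected elements using the same node number offset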

The same nodal offset number will need to be used if nodal coupling is to be copied as
well. In order to copy nodal coupling, use "Generate Coupled DOF Sets with same DOF"
for which you will need the same nodal offset number. Do a replot to see the newly
created nodal coupling. Caution: Be sure that if nodes were deleted earlier, any nodal coupling sets that previously included those deleted nodes were also deleted. If you
forget, you may get a pretty mess.

Remember that if there are nodes on the plane of reflection, new nodes will overlay them.
Merge commands may be wanted for the nodes on the reflection plane. Now the tricky
part: elements lying in the reflection plane (shell elements will do this) get generated with
the node order reversed, because of the mirror imaging. They Will Not Merge with the
element from which they were reflected. They may have to be deleted, depending on
what you are trying to accomplish. Alternatively, do not select elements that lie in the
plane of reflection when reflecting the structure. You still need to reflect the nodes on the
plane of reflection, in order to reflect the elements that will join them to the remainder of
the reflected structure, so the nodal merge will still be needed.
Tip 29: Adding to a Model Comprised of Elements and Nodes Only:

It may happen that a model that consists of nodes and elements only has to have a section
replaced, or requires the addition of more structure. The way to attach new geometry onto
existing nodes and elements is to: (1) Place keypoints on the nodes onto which new
geometry is to be built (i.e. grafted). (2) Join these keypoints with lines. (3) Set mesh
density along these lines to only one element. (4) Build new geometry outward from
these keypoints and lines. This gets messy if you are building solids. (5) Mesh the new
geometry. (6) Select the nodes (new and old) along the interface between the old nodes
and the nodes of the new geometry. (7) Merge ONLY these nodes along the interface
using the NUMMRG,NODE command. Alternatively (much more work unless a macro is
written or the CPINTF command is used correctly), fully couple the PAIRS of nodes with
the CP command. In the event of elements with mid-side nodes, lines will have to be
created curved so that a single line spans three keypoints placed on the three nodes along
the edge of an element. It is probably advisable to connect elements with mid-side nodes
to other elements with mid-side nodes.
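
A minimal sketch of the graft (keypoint and node numbers are hypothetical):

knode,101,2041 ! place a keypoint on existing node 2041
knode,102,2057 ! and another on existing node 2057
l,101,102 ! join the keypoints with a line
lesize,all,,,1 ! one element division along the selected line(s)
! ... build and mesh the new geometry outward from these keypoints and lines ...
! select ONLY the old and new nodes along the interface, then:
nummrg,node ! merge the coincident interface nodes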

This attaches the new geometry and mesh to the old elements and nodes. Be sure to
double check that the merging has been done correctly and according to your intentions --
I have found this to be a surprisingly error-prone operation.

Tip 30: Zero Mass Beam Elements Form Rigid Region:

An analyst could use very stiff beam elements (a few orders of magnitude stiffer than the
surrounding structure) in order to imply a rigid region grouping of nodes, which works
under Large Displacement (a CERIG group does not work with large displacement). This
is an old FEA trick -- it is not perfect. A separate material should be created for these
beams, and be given zero mass (set the material density to zero) so that no gravitational
or other inertial load acts on the material. A thermal expansion coefficient should be input
if appropriate -- it would usually be identical to the coefficient value for the structure that
it approximates.

I wrote a macro to create a rigid region using beam elements. It is called after the set of
nodes to be connected is selected. The lowest numbered of the set of nodes is attached to
each of the other nodes in the set by a beam element. The beam element to use has to be
set up in advance, and the appropriate MAT, REAL, and TYPE set by the user. A macro
like this is very fast to run. Caution: Such a macro would become complex if it checked
for duplicate nodes at the first node location (ANSYS can't use zero length beams), and
checked for widely varying beam lengths. This is not a guaranteed method.
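
For illustration only, the core loop of such a macro might look like this, assuming the beam TYPE, REAL, and MAT are already set and the nodes to be tied are already selected (no checks for coincident nodes or extreme lengths are included):

*get,nhub,node,,num,min ! lowest numbered selected node, the "hub"
*get,ncnt,node,,count ! number of selected nodes
nnxt=nhub
*do,i,1,ncnt-1
nnxt=ndnext(nnxt) ! next higher selected node number
e,nhub,nnxt ! stiff, zero-mass beam from the hub to this node
*enddo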

Tip 31: Turn off Symbols When Changing a Model after Solution:

If you have run SOLVE, the results database will be full of data. If you then change a
model, and create anything that plots a symbol, all symbols become active, and plots
become extremely slow. Turn off symbols with /PBC,ALL,,0 to speed things up. I put this
command in the Toolbox for convenience. I have found that plotting can become slow
with very large models when loads have been applied, and even when applied and
deleted. Presumably ANSYS is checking to see if any symbols should be shown. The
plotting speeded up considerably when symbols were turned off with "/PBC,ALL,,0"
even though there were, in fact, no symbols to be plotted.

Tip 32: Are the "Free-Free" Vibration Modes Relevant?:

Simple supports on a structure may be appropriate for static analysis and gravity
loading, since the structure will "sink" until the simple support reacts enough to
withstand the applied load. If a modal vibration is excited, small amplitude
vibrations may result in very little response from the support, and vibration
similar to a structure that is free in space may result (this is obviously very
problem dependent). If so, it may be desirable to run a modal vibration analysis
with no constraints. More than six modes must be requested, since the first 6
represent the free translation and rotation, and give Zero eigenvalues. A better
approach would be to characterize the flexibility of the constraint points. With
some structures, you may get a few surprises, as torsional and other vibration
modes appear.
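
A sketch of a free-free run (the extraction method and mode count are arbitrary):

/solu
antype,modal
modopt,lanb,12 ! ask for more than 6 modes; the first 6 are rigid-body modes with near-zero frequency
mxpand,12
solve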

Tip 33: Selecting a CAD or FEA System -- Cover Yourself

It is common to evaluate a few CAD or FEA packages when trying to make the right
choice for a purchase. Watch out for this stunt (I've seen it done, and been threatened with it once; I laughed at her): A losing vendor writes a letter to your boss, or even to the
head of your company, claiming that the engineers are incompetent (stupid, uninformed,
can't spell, and so on) and making a huge mistake. If the boss is not an engineer and
cannot understand the issues, this could get awkward. (Certain Dilbert cartoons come to
mind.) Warn your boss(es) in advance that a few vendors pull this move and that you and
your group will evaluate the products in a thorough manner. Write down some criteria
and your assessments. Also, be careful that you cannot be accused of leaking information
unfairly from one vendor to another -- date your correspondence carefully, and work
through your purchasing department if that is appropriate at your firm. Some sales-types
are very greedy for their commissions, and petulant when they lose. (Names will not be
mentioned, to protect the guilty. If you've been around the block a couple of times,
perhaps you can make a few guesses.)

(The ANSYS vendor I've dealt with has been very professional.)

Tip 34: Creating Lines Perpendicular to, or at Angle to Existing Lines

When creating structures in the /PREP7 portion of ANSYS, I find that the commands that
create lines that are perpendicular to existing lines, or at an angle to existing lines, are
extremely useful. Look at the commands LANG, LTAN, L2ANG, as well as the others.
These commands break lines where new lines intersect, even though the original lines are
already attached to areas. Since I often model shells that are to act as if they are welded
together, I need the lines to be shared where areas contact each other. These commands
give the connectivity I need.

The command that meets another line at an angle may do better if it is entered manually,
with the first guess of the contact point set at 0.0, 0.5, or 1.0, depending on your
intention. This often succeeds when the interface command fails.
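
For example, something along these lines (the line and keypoint numbers are hypothetical, and I have left the optional trailing arguments, including the location guess, to the dialog box or the command reference):

lang,12,45,90 ! new line from keypoint 45 meeting line 12 at 90 degrees; line 12 is broken at the intersection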

Tip 35: Use the /UI command in Your ANSYS Toolbar to Bring up GUI Dialog
Boxes

Take a look at the /UI command in ANSYS. You can use it in your Toolbar to activate
certain GUI dialog boxes with one-click simplicity, instead of finding your way through
the menu system. I sometimes get odd results from the Hard Copy command when I do
this -- I have no idea why.

Tip 36: Reaction Force, Nodal Force, and Load Paths

I worked on a model subject to aerodynamic pressure and gravity load. We needed to know the load that the structure would apply to its foundations. Printing the Reaction Force would give this value; however, the +/- sign is in the direction of the force that the constrained node (or nodes) applies TO the structure. If nodes are selected with the three
commands NSEL,S,D,UX $ NSEL,A,D,UY $ NSEL,A,D,UZ the Nodal Force at the
constrained nodes can be printed. This is the force with which the nodes press on the
supports. (NOTE: You may need to include nodes where there are constraints on rotation,
depending on what you are modeling.)

WARNING: A number of things can go wrong with this approach.

1. If you ask for nodal forces without limiting the node selection to nodes where
there are constraints, you will get nodal forces wherever forces and pressures have
been applied to your structure. (For the curious, printing nodal forces when only
pressure has been applied to shell and high order elements will illustrate that FEA
software inputs a complex set of forces and moments because of how the
elements are derived from first principles. What is being printed is the force with
which the nodes react to the forces input from outside -- if a moment is input, a
nodal "force" moment is output in reaction.)
2. If an input force has been applied to a constrained node, the nodal force and the
reaction force magnitudes will differ. When I tested this, the reaction force that
ANSYS listed was modified by the presence of a force applied directly to a
constrained node, whereas the nodal force (that is based only on element
deformation) was not affected.
3. IMPORTANT: All the elements to which the selected node is attached must be
selected in order to get the total force with which the node pushes on the outside
world (use ESLN after selecting the nodes). The generation of Nodal Force (and
Reaction Force, if I remember correctly) is determined from the deformation and
stiffness of attached elements. If elements attached to a node of interest are not
selected, then the contribution of those elements to the force at the node is not
included and will be missing.
4. Caution: If you have used a rigid region with the node of interest, the lack of
element deformation means that you will NOT get the Nodal Force or Reaction
correctly -- you may need to work from the set of nodes where the rigid region
attaches to the flexible part of the structure. I'm not sure what kind of effects
nodal coupling would have.

There are various other uses to which you can put Nodal Force. You can plot the Nodal
Force vectors along with your model (see the /PBC command), after SOLVE, giving
visual cues during your review. You can use NODAL FORCE to find out about the load
being carried in certain Load Paths:

• Determine where to position a "cut" in the model. Locate it where you want to
determine the force carried across the cut. The "cut" should follow a path along
the edge of a set of adjoining elements. Select all the elements on ONE side of the
"cut".
• Select the nodes on the "cut" side of those elements.
• Printing the Nodal Force (forces only) will tell you about the forces that your
selected elements apply to those nodes. The sum that is printed tells you the total
force carried across the "cut" in the X, Y, Z directions, based on the selected
elements.
• Caution: Getting the moment across the cut is not so easy, because moment is
determined with respect to an axis. You would have to do extra work to pursue
moment across a cut, determining your "neutral" axis, and using other commands.
See, for example, ANSYS manuals information on the SPOINT command.

Note my earlier comment Tip 25 on making load paths visible in shell models. For
further information, read the ANSYS manuals on the FSUM and NFORCE commands.
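
A command sketch of the "cut" procedure (the selection criteria and cut location are model specific and hypothetical here):

esel,s,cent,x,0,25 ! elements whose centroids lie on ONE side of the cut
nsle,s ! nodes of those elements
nsel,r,loc,x,25 ! keep only the nodes lying on the cut plane
fsum ! sum of the forces the selected elements carry across the cut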

Tip 37: Inputting Temperatures with BF, BFE, and TUNIF in Structural Analysis

As discussed by ANSYS in Chapter 2.6 of the Elements Manual, Body Loads (temperatures for structural analysis that cause thermal strains and affect temperature
dependent material properties) may be input in a nodal format or an element format.
"Either the nodal or the element loading format may be used for an element, with the
element format taking precedence. Body Loads are designated in the "Input Summary" of
each element." This means that if both BFE and BF are applied to an element and its
nodes, and the inputs differ, the BFE setting will govern. If temperature is input on a
nodal basis, the temperature input at a node will influence all the attached elements. If
temperature is input on an element basis, the temperature(s) input will influence only the
element to which it was applied. The commands TUNIF and BFUNIF can be used to set
all nodes to one default temperature that differs from the reference temperature. Then, BF
or BFE commands can be used on specific regions of the model to put in other
temperatures.
If you use piping commands to create pipe elements, and have applied temperatures,
ANSYS will apply the temperatures on an element basis (to check this, generate a Load
Step file and inspect its contents, or use the BFLIST and BFELIST commands). For the
user applying temperatures directly, it can be a little simpler to apply temperatures on a
nodal basis with BF, since the nodes can be selected by location. Inputting temperatures
on an element basis with BFE permits control of things such as temperature differences
between the inside and outside of pipe elements, or between the top and bottom of beam
elements. The element listing in the Elements Manual should be consulted before
applying temperatures with BFE. As I discussed elsewhere, if you change
temperatures that were previously set with BFE, the temperatures have to be
changed at all the locations within each element to which temperatures were
applied. Otherwise, the old temperatures will still be there. You may want to clean
up with a BFEDELE or other cleanup command before starting. The BFDELE and
BFEDELE commands only act on selected nodes and elements -- if you want to
remove all temperature application, select the full model first. Be wary of what
happens when you use Load Step files.
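
A small sketch of the nodal route (the temperatures, locations, and reference temperature are hypothetical):

tref,70 ! reference (strain-free) temperature
tunif,70 ! default temperature for all nodes
nsel,s,loc,x,10,20 ! pick a region of the model by location
bf,all,temp,300 ! nodal temperature on the selected nodes
allsel
bflist,all ! confirm the nodal body loads
bfelist,all ! confirm any element body loads, which take precedence as noted above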

Tip 38: ANSYS Toolbar Use

The ANSYS Toolbar can be very helpful in giving "one click" access to frequently used
commands. Toolbar buttons can also call macros, or the function form of commands, for
example Fnc_Pl_Symbols to bring up the dialog box for setting symbols. If you want to
get fancy, a toolbar button could be used to activate an alternative toolbar.

In the toolbar shown here, a variety of buttons have been enabled. Some of the captions
are a little cryptic; this is because the captions are limited to only 8 characters. The
command that gets executed cannot include the $ sign. Consequently, only one command
can be executed, however, a macro can be called in order to perform a complex set of
instructions. The toolbar editing is brought up from the menu item "MenuCtrls". In the
example toolbar shown here, the buttons are not in a highly logical order. In order to
modify the button sequence, save the toolbar (I suggest the unimaginative file name
"toolbar") and re-order the lines in that file with a text editor. Keep the eventual sizing of
your toolbar in mind. The example here is sized for six rows deep, and seven columns
wide. Use the "Save Menu Layout" menu selection to save the layout of all of your
ANSYS windows including the toolbar shape. (This setting is destroyed if you modify
the "GUI configuration" under your ANSYS Interactive startup dialog box.) When you
are happy with the layout of your toolbar, you can append the toolbar file's contents to the
end of the "Start.ans" file located in the ANSYS "DOCU" subdirectory.
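
For reference, toolbar buttons are stored as abbreviations, so a couple can be added by command, as in this sketch (the captions and the macro name are hypothetical):

*abbr,SAVEDB,save ! one-click save of the database
*abbr,NOSYMB,nosymb ! call a user macro nosymb.mac, which could issue /PBC,ALL,,0 and /REPLOT
abbsav,all,toolbar,abbr ! write the current abbreviation set to toolbar.abbr for re-ordering in a text editor
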
Tip 39: ANSYS Piping Elements

The use of ANSYS piping elements, Pipe16 and Pipe18, can simplify the work required
to create models of piping systems that will satisfy certain code requirements. Piping
commands can be used in /PREP7 to directly create models of piping. In using piping
creation commands, a user works out the intersection points of the runs of piping as if
there are sharp angle bends. Each run of pipe is entered as dx, dy, dz, creating Pipe16
elements, and then a radius of curvature at the previous intersection can be applied,
creating Pipe18 elements. The Pipe18 elements are taken out of the two Pipe16 elements
that met at the last corner intersection. If these two Pipe16 elements are too small to
encompass the Pipe18 bend elements, difficulties will result. If the user is defining U-
bends, it is easy to have zero-length Pipe16 elements generated. My approach to this is to
inspect the model for zero-length Pipe16 elements, and delete them, after I make sure that
all pipe nodes are merged. I use a macro to inspect the model and do the deletions.
Checks are included in the macro, because Pipe18 elements always return a zero length. I
have also seen users do a U-bend with a small extra space so that a very small Pipe16
element will remain between the two 90 degree bends that make up the 180 degree U-
bend, avoiding a zero-length element problem.

To list or plot useful stress information from the piping model usually requires putting
selected results data into element data tables, and the use of appropriate PLLS
commands. Fortunately, ANSYS includes many output possibilities for the two piping
element types, so typical piping code requirements can be met. See the element manual
for these elements for information on the available output data.

For obvious reasons, I leave proper use of design codes within piping analysis as an
"exercise for the reader." :-)

Piping creation in ANSYS includes the possibility of added mass due to fluid in the pipes,
and from insulation added to the piping. The insulation addition is simple -- the user can
input thickness and density. This lets the added mass presence of heat exchanger fins be
easily faked by inputting the product of fin_thickness x fins_per_inch x
fin_material_density as the "insulation" density, and fin height as the "insulation"
thickness. (Substitute the appropriate dimension for fins_per_inch, etc. for your system of
units.)

The deflection behavior of pipe elements is based on ANSYS beam elements. If accurate
vibration behavior is to be modeled, at least several pipe elements will be needed
between supports. If accurate gravity-induced deflections and stress are wanted, better
results will come from the use of a consistent mass matrix if the mesh density (number of elements) between supports is low.

Developing an understanding of the function of the ANSYS element creation commands (BRANCH, RUN, BEND, and so on) will require creating some elements with material
and dimensional information, then reviewing what element TYPE and REAL data has
been created in the model database. Model review is enhanced by plotting the elements
with the /ESHAPE option active.

Where piping is connected with sliding supports to the outside world, the use of gap
elements may be needed if sliding friction is to be included in the model. ANSYS does
not differentiate between static friction and sliding friction coefficients, so a reasonable
and conservative value for coefficient of friction (as well as contact stiffness of the gap
element) will have to be determined by the analyst. If there are thermal expansions in the
piping, stresses predicted by the model will usually be reduced if the gaps in the support
structure are included in the model (depending on the nature of the structure) rather than
having "tight" fits at the sliding connections.

Tip 40: Graphical Output from ANSYS

If you start up ANSYS under Windows NT with "win32" selected for graphics, the stress
plots will be shaded. If you select "win32c" for the graphics, the stress plots will not be
shaded, and will usually look better when plotted to paper, especially when plotted from
ANSYS with HardCopy to ink jet printers. They can be selected with the commands
/SHOW,WIN32 and /SHOW,WIN32C when using the GUI.

Plotting to the screen window with Z-buffering as the hidden surface control can give
very satisfactory and often quicker results. Hard copies of these Z-buffer plots, however,
will look "pixelated", being limited to a coarse resolution. Better looking hard copies to
paper will usually result if the screen is set to "Precise Hidden" or even to Centroidal
hidden surface control. This is usually true of plots sent to a file, for subsequent
processing with the ANSYS DISPLAY program.

Plots can be redirected to files by using the /SHOW command. This permits the
DISPLAY program to do various things with the results, including the generation of
animations. Under Windows NT, an animation can be generated as an AVI file.

I occasionally find it helpful to generate an animation file based on a single stress plot of
a load step, in which I spin the model about the screen X or Y axis. You can use the
/ANGLE command and the /REPLOT command to accomplish this. A simple macro does
/REPLOT calls with the model set at a series of angles from 0 to 360 degrees. You can
even execute this command on one line using the "$" symbol to separate the commands.
The command "*DO,III,0,355,5$/ANGLE,ALL,III,YS,0$/REPLOT$*ENDDO" will
achieve this for you. The scaling of the display should NOT be set with /ZOOM,OFF or
else the image will "move in and out" in order to fill the screen as the view is rotated --
set the zoom level manually with picking; you may want to move out so that the model
fits in all views. You may need to experiment. Node plots without symbols are a quick
way to assess the behavior while testing. If the plots have been re-directed to a file when
this command is executed, the plots in the file can be animated by the ANSYS DISPLAY
program.
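
Expanded into a small macro and redirected to a file, the same loop might read as follows (the file name is hypothetical; /SHOW,WIN32 restores on-screen plotting, as mentioned earlier in this tip):

/show,spin,grph ! redirect plots to the neutral graphics file spin.grph
*do,III,0,355,5
/angle,all,III,ys,0 ! absolute rotation about the screen Y axis
/replot
*enddo
/show,win32 ! back to the screen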
At the ANSYS 5.3 level, and presumably above, you can do a /SHOW,VRML plot to get
a 3-D VRML file produced of a 3-D model plot. This could be a stress contour plot of a
3-D model. With the right options activated for a good VRML viewer plugged into a Web
browser, the stresses on the 3-D model can be reviewed at any viewing angle with the
positioning controls of a VRML viewer. This ought to be particularly interesting on a
computer with a fast 3-D graphics accelerator.

There are utilities that can convert a Postscript output file from the ANSYS DISPLAY
program into a bitmap image file. A free conversion program is Ghostscript, once you
figure out how to use it. The user should get a front end for the Ghostscript program, for
ease of use.

Under Windows NT, the Alt/PrintScreen key combination will copy a window to the
Clipboard. This can be used to capture an ANSYS graphics window for pasting into a
word processor document, or into an image processing program for conversion to a GIF
or other bitmap file. GIF files can be used in WEB pages to show the results of ANSYS
work. I recommend GIF over JPEG files for images from ANSYS, because GIF files
precisely reproduce 256, 16, and 2 color images (you have to reduce the colors to 256 or
fewer levels in the image processing program, or accept the default color reduction used
when the GIF file is generated.) Before capturing the Graphics window of ANSYS, set its
size to your satisfaction. Bitmap image size changes in an image processing program are
not satisfactory with this type of graphical output. If you want to get really fancy,
generate a GIF file that contains an animation of an ANSYS model. (Animated GIF files
can be generated from individual images with software that you can find on the Web or
purchase.)

Re-sizing of the ANSYS graphics window under Windows NT is painful if a model has
been plotted, because ANSYS wants to keep re-plotting the image as the window edge or
corner is dragged. This problem goes away if you set the Windows NT Display Properties
to NOT show window contents while dragging. I keep my PC permanently set this way
for this reason.

Tip 41: Check Nodal Loads at Bolts, Rivets, Spot Welds and Links

Wherever connection by bolts, rivets, or spot welds has been represented by various
simplifications or representations in an ANSYS model, the load on those connections
should be checked, and compared with allowables. Spot weld review may require
assessment of moments (especially about an axis perpendicular to the sheets that are spot
welded together), as well as assessment of forces. One way to do this is to select the
appropriate node(s) at the connection, select elements on "one side" of the node(s), and
check nodal loads. The connection devices should not be overloaded. The hole in which a
bolt or rivet is placed must not be overloaded or too near an outside edge of a sheet or
plate, either. Additionally, building codes usually forbid or substantially limit "prying"
loads on bolted and riveted connections. If the FEA model has good detail, including gap
or contact elements, a high prying load can be demonstrated in some models (never
assume your FEA model will automatically show you all trouble spots).
Similarly, links or "spars" that are loaded should be checked for stress, and be checked
for buckling. Since a link will be represented by one element that is pin connected at the
ends, and only cross section area is entered, ANSYS will not generate buckling
information about the link, not even in a Large Displacement analysis. The user must do
some work to compare compressive load with critical buckling load (use a good margin
of safety). A user could write a macro to step through all link elements, identifying the
compressive stress and force, and calculating buckling information. The ANSYS Link10
element supports a tension-only and a compression-only capability. Where it is not
known in advance whether all links will remain in tension, and the links are slender, this
element could be used to imply that no link can support compression for what may be a
"worst case" evaluation of some models. An example would be the stays that support the
mast on a sailboat, with pre-tensioning implied with initial strain. (If the stays are woven
rope or steel cable, getting a representative cross-section for the link elements will require
some extra work.)

Spot weld representation in large structures is usually an inexact science in FEA modeling. Spot welds will be found, for example, in many automobile body structures.
Plug welds are a stronger alternative, applicable to thicker steel sheets and plates. The
crudest and quickest representation of spot welds is to merge coincident nodes from the
two joined layers where nodes have been intentionally created coincident at the spot
weld. Alternatively, the nodes can be fully coupled with the CP command if they are
coincident. They can be joined as a rigid region with CERIG if the nodes are close but
not touching as when shell elements are kept at the mid-plane position of two sheets that
are spot welded together. (Remember that CERIG is valid only in small displacement
analysis -- coupling with zero-mass stiff beam elements could be substituted if large
displacements were needed.) The shell nodes can be joined with a beam element that has
properties that reflect the diameter of the spot weld. The roughest approximation will
merge or couple just one node pair. If nodal coupling is used, rotations should be coupled
as well as translations, for spot weld representation. NOTE: With shell elements, read the
"drilling mode" comments in the ANSYS Elements Manual -- it may be necessary to set a
KEYOPT value to transmit rotation and torque about an axis perpendicular to the shell
elements when a spot weld is crudely represented by single node pair coupling, merging,
CERIG, or beam elements. Contact elements between the joined shells or materials may
warrant consideration. Exactly what to do for spot weld representation is very problem,
industry, and material dependent. These very approximate techniques tell us little or
nothing about stress, fatigue, or fracture possibilities near or at the weld. More elaborate
modeling (more nodes and elements, and special element types) of each spot weld could
give more information about local stresses, when local stresses matter. Studying the
"crack" that is hidden between the sheet metal layers in a spot weld is an "advanced
topic" -- discuss this with an expert or consultant. I doubt that you would find many spot
welds used with aluminum, not only because of the difficulty of welding aluminum, but
also because of the fatigue considerations -- consider how commonly aircraft use rivets
and modern adhesives.

There is a document on spot weld fatigue and FEA on the MSC/Nastran website. (There
is a variety of other good reading at the site, too.) Take a look at the paper in PDF format
by Heyes and Fermer, which, although it is MSC/Nastran related, is interesting and
includes the following references:

• Rupp, A., Störzel, K. and Grubisic, V. "Computer Aided Dimensioning of Spot-Welded Automotive Structures". SAE
Technical Paper 950711, 1995.
• Smith, R. A. and Cooper, J. F. "Theoretical predictions of the fatigue life of shear spot welds." Fatigue of Welded
Structures, Ed. S. J. Maddox, pp. 287 - 293, The Welding Institute, 1988.
• British Standards Institution. Code of Practice for Fatigue Design and Assessment of Steel Structures. BS 7608, 1993.
• Radaj, D. "Local Fatigue Strength Characteristic Values for Spot Welded Joints." Engineering Fracture Mechanics, Vol.
37, No. 1, pp. 245 - 250, 1990.
• Sheppard, S. D. and Strange, M. E. "Fatigue Life Estimation in Resistance Spot Welds: Initiation and Early Growth
Phase." Fatigue and Fracture of Engineering Materials and Structures, Vol. 15, No. 6, pp. 531 - 549, 1992.
• Sheppard, S. D. "Estimation of Fatigue Propagation Life in Resistance Spot Welds." ASTM STP 1211, Advances in
Fatigue Life Prediction Techniques, M. R. Mitchell and R. W. Landgraf, Eds., pp. 169 - 185, ASTM Philadelphia, 1993.
• Heyes, P., Dakin, J. and StJohn, C. "The Assessment and Use of Linear Static FE Stress Analyses for Durability
Calculations." SAE Technical Paper 951101, 1995.

I have never worked in aerospace, but I recently had a look inside an old helicopter that
was on public display. In addition to rivets, what was either a caulking or an adhesive
appeared to have been used between some ribs and the outer shell. This may prevent
corrosion in the gap, and help reduce vibration and fretting or galling. If it is purely a soft
caulking, it might be ignored in FEA, but if it functions as an adhesive, the load on the
rivets is probably reduced. Presumably the manufacturer has standards for this type of
design.

Tip 42: Use QUERY to Check Results with Picking

In /POST1 the "Query Results" capability applied to nodes makes it easy to check on
results (stresses, strains, deflections, etc.) by picking nodes (see the accompanying image). For the shell
element illustrated, the result will be reported for the Top, Middle, or Bottom, according
to how the SHELL command was issued (the usual rules as to what constitutes the Top
and Bottom of a shell element apply). It will do this even if PowerGraphics is active for
the plot on the screen. Note that the nodal stresses are based on averages if more than one
element that is connected to a node is selected. You can inspect the consequence of
element selection on nodal stress easily with this feature. The element query returns only
data on energy and error estimation.

Tip 43: Loads on Geometric Entities Overwrite Loads on Nodes and Elements --
Easy Error to Make

My "dumb move of the week" was to retrieve an old model of a beam with redundant
supports, change the load on a node, and re-run the model. I then updated the element
table results, and used PLLS to plot the result, as shown below. This is a plot of top
surface bending stress, with gravity loading included. Both applied point loads and
reactions are shown as colored arrows. Stress colors have been gray scaled for printing to
a black and white laser printer. A co-worker inspecting the plot noticed that the results were the same as those from the last time the model was run, under different loading, a month before. What was wrong?
The model database file had been saved with the original loading and results. The
original model had the load applied to the keypoints. I changed the load on a node. When
I ran SOLVE, the load on the keypoint OVERWROTE the load on the node, and I got the
old result. When I listed the applied forces with FLIST before running SOLVE, I saw my
modified loads. When I listed the applied forces with FLIST after running SOLVE, I got
the OLD loads on the nodes. The same principle applies to loads on lines, areas, and
volumes. Presumably, it happens with applied displacements, also. Since loads on
geometric entities cannot be scaled, there may be little reason to keep the loads on
geometric entities after these loads have been transferred to nodes and elements,
EXCEPT when meshing may be changed in the future. The use of components is an
alternative way to select parts of the model for loading.
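
A small defensive sketch before re-running such a model (the node number and load value are hypothetical):

fklist,all ! list any forces still attached to keypoints
fkdele,all,all ! delete them so they cannot overwrite nodal loads at SOLVE
f,1234,fy,-500 ! apply the revised load directly to the node
flist,all ! confirm the nodal loads that will actually be used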

Suggestion: The user should add a warning annotation stating that loading is on
geometric entities, before archiving a model. Should ANSYS add a warning message
about SOLVE transferring loads from geometric entities, which requires user
acknowledgment?

A potentially dangerous mistake -- watch for it!

Tip 44: Use Components for Load Input, and for Results Review

A user-written input file could be used to apply loads to components that the user has
defined. An even more convenient use for components is for reviewing stresses due to a
load. The components can be called up and stresses plotted without the need to do manual
selection over and over for each load case. I wrote a macro that automatically steps
through all components, plotting the stresses for each component from a couple of
viewpoints, for each load case. When the plots were diverted to a plot file, the file could
be used in ANSYS DISPLAY to plot stresses for all components for all load cases.
Statements in the macro would put the component name and weight (based on volume
only) in an annotation; the title already contained the load case name.
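
The skeleton of that kind of review is simple (the component name is hypothetical):

cm,ribs,elem ! earlier, while building the model: name the currently selected elements
! ... later, in /POST1, for each load case:
cmsel,s,ribs ! recall the component
nsle,s ! and its nodes
plnsol,s,eqv ! stress plot limited to this component
allsel ! restore the full selection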

Tip 45: Simple Substructuring Examples-- Bottom Up and Top Down

ANSYS/ED is capable of only a small number of Master Degrees of Freedom (50 the last
time I looked), so any use of substructuring in ANSYS/ED will have to be done with a
very small number of nodes for master degree of freedom use. A 2-D element such as
PLANE42 may be best for many substructure experiments with ANSYS/ED. In Large
Displacement substructuring, rotational degrees of freedom are needed at the nodes, and
ANSYS/ED will only handle very small numbers of nodes -- 2-D beams may be best for
learning experiments with Large Displacement. The problem with using beam elements
for learning is that review of stresses is more complex; element tables must be used to
hold and display beam stress information. For an alternative, consider SHELL63
elements with very few MDOF nodes (8 nodes x 6 DOF/node = 48 DOF), in Large
Displacement substructuring studies.

Substructuring has become more rare in FEA work, because of the capacity of modern
computers for large models. There are still times when it is desirable, such as when gap
elements or contact elements are employed in large models, or when extremely large
models are in use. The user will have to employ some insight to select substructures in a
way that minimizes the resulting number of degrees of freedom and wavefront size.
Substructuring is a relatively tricky procedure, particularly with multiple substeps or
multiple substructures. For serious use, the ANSYS manuals on substructuring should be
purchased and studied in detail.

The reader is reminded that the elements inside a substructure are treated as linear. Any
nonlinear elements grouped inside the substructure will be treated as if they were in their
initial condition, without material nonlinearity. The two simple examples below do not
address use of multiple substructures, multiple load cases, g-loading, vibrations, and
other complications. Nonlinearity (Large Displacement) is mentioned only briefly. If you
do not turn to expert help for substructure work, I recommend substantial testing of any
techniques on small models before doing any real work.

Warning: Read the ANSYS Elements Manual section on MATRIX50 the superelement.
Note its warning that if gravity is applied during the "gen" pass when the superelement is
created, and gravity is applied during the "use" pass, it will be applied TWICE to the
superelement substructure DOUBLING the gravity load on the superelement region of
the model. For this reason, gravity load would have to be introduced "carefully".
Unfortunately, a detailed description of this careful application is not included in the base
ANSYS manuals. In the "Top Down" example below, I set "ACEL" for the model to
ZERO in all three global coordinate directions during the "gen" part that generates the
superelement. If the user has applied gravity to the model file that is read in, it will be
applied during the "use" part of the analysis, and so only applied once to the
superelement. This may affect the accuracy of the solution -- I have not yet done
comparison runs to test this. The example does not address centrifugal loading or other
complications. Unfortunately, linear acceleration loading (e.g. gravity loading) is more
accurately represented when applied as a load vector. This presents a problem when there
are elements with mass that are not included in substructures. I have not yet determined
whether gravity could be applied to a superelement in the GEN pass, without having a
superelement mass matrix generated -- if it could be done, then the "accurate" application
of gravity loading to the superelement could be accomplished without counting gravity
load twice. Using rotated superelements introduces another set of problems with the
direction in which loads are applied -- read the manuals.

Note that non-zero applied DOF displacements are not to be applied by a load vector, so
MDOF should be applied to nodes where non-zero DOF values are to be applied during
the analysis. Loads and constraints created in the GEN pass (i.e. in a load vector) cannot
be changed in the "USE" pass, except by uniform scaling. Brief testing I did suggests that
load ramping DOES work for load vectors -- the user should check this independently.
The exception to uniform scaling is with respect to angular motion -- read the ANSYS
tutorial and user's guide manuals on substructuring.

The substructuring examples given in Chapter 4 of the ANSYS Advanced Analysis Techniques manual leave out the routine steps -- too many, in my opinion. The
user should purchase an ANSYS manual and tutorial manual on substructuring before
doing serious work. (ANSYS 5.5 has added some helpful comments to its Advanced
User's Guide on Substructuring.) The command EXPSOL has to be added in the
expansion pass of the bottom-up example in order to get any results in the expansion
results file. The SFE command is needed only if loads were applied to the superelement
-- if SFE is used, it has to point to the element number of the superelement that was read
in with the SE command as well as the appropriate load step number. A *GET command
could find the element number of the superelement right after the SE command. The
commands manual does not explain this adequately in ANSYS 5.3. The following
examples are fairly brief. In the bottom up example, the coupling command CPINTF is
used to join the superelement with the non-superelement portion of the model. The
example shows the stresses in the superelement after the expansion pass completes. The
results of the use pass are saved in the file "use.db" for later review by the user.

Bottom Up Substructuring Example:

! Substructuring demonstration ************************************
! For information only. Use at your own risk.
fini ! finish whatever was active previously
/clear ! clear the database
/title,Substructure Technique Test

/filname,gen ! filename for the generation pass

/prep7
et,1,shell63 ! element type 1 set to SHELL63
r,1,.05 ! shell is 0.05 thick
mp,ex,1,30000000 ! set value of E
blc4,-.5,0.5,1.0,-1.0 ! create a rectangular area
lesize,all, , ,3,1,1 ! 3 elements per line -- user can change this
amesh,1 ! mesh the rectangle
fini
/solu
antype,subst ! substructure analysis
seopt,gen ! generation pass
lsel,s,line,,2 ! line at right side
nsll,s,1 ! select all nodes on line
m,all,all ! make these nodes Master Degrees of Freedom
lsel,s,line,,4 ! line at left side
nsll,s,1 ! select all nodes on line
d,all,all ! constrain nodes against all motion
allsel
save ! save this part of model as gen.db for the expansion pass
! the save need not follow "solve"
solve ! generates the gen.sub file
fini

/clear,nostart
/title,Shell elements are attached to a superelement
/filname,use ! filename for the use pass
/prep7
et,1,50 ! element type 1 set to superelement MATRIX50
type,1 ! set type 1
se,gen ! read in the superelement matrix from generation pass
! after reading superelement, create remainder of model:
et,2,shell63 ! element type 2 set to SHELL63
r,2,.05 ! shell is 0.05 thick
mp,ex,2,30000000 ! set value of E
blc4,.5,.5,1.0,-1.0 ! create a new rectangular area
lesize,all, , ,3,1,1 ! 3 elements per line -- user can change this
aatt,2,2,2 ! assign mat=2, real=2, type=2 to the unmeshed area
amesh,1 ! mesh the area -- note superelement node numbers are not used
cpintf,all ! automatically couple coincident nodes at interface
eplo
fini
/solu
ksel,s,kp,,3 ! keypoint at upper right corner
nslk ! select node at this keypoint
f,all,fy,-1 ! put a load on node at upper right corner
nsel,all ! select all nodes
! SFE,1,1,SELV, ,1 ! no load applied in generation pass, this statement not needed
solve ! results go in the file use.rst
save ! save use.db to review results in the non-superelements
fini
/post1
/pbc,f,,1 ! show applied force symbols
/pbc,cp,,1 ! show nodal coupling symbols
plnsol,s,eqv ! plot the stresses in the non-superelements
fini

/clear,nostart
/filname,gen ! filename for the expansion pass
resume ! brings up gen.db saved above
/solu
expass,on ! activate expansion pass
seexp,gen,use ! options for the substructure expansion pass
expsol,1,1 ! THIS IS NEEDED ! (read about NUMEXP also) **************
! OUTRES,ALL,ALL ! not required for one load step solution
solve
fini
/POST1
/title,Stress in the Substructure
/pbc,mast,,1 ! show master degrees of freedom symbols
/pbc,u,,1 ! show displacement constraints
/pbc,rot,,1 ! show rotation constraints
plnsol,s,eqv ! look at the stress in the superelement

In the above example, the user can change the mesh density. The numbers and positions
of nodes along the common interface between the superelement and the normal portion of
the model have to be the same for CPINTF to successfully connect the two parts of the
model.

The model is created with the "bottom-up" approach. In the "use" part of this example,
the superelement is read in with SE before the remainder of the model is created. If the
remainder of the model was created before the superelement was read in, then the user
would have to add statements to control the node numbering, so that none of the master
nodes coming in with the superelement would replicate the node numbers of the existing
elements. If the superelement has master nodes that have the same node numbers as the
existing model, the model nodes will be redefined, and a mess will result. Check the
manual, and look at the SETRAN command to act on the superelement, or at the
NUMOFF command to act on the existing model, to prevent node replication problems.
The PARSAV and PARRES commands can be used to put model parameter information
into a coded file, and retrieve it after the /CLEAR command has been issued. The
maximum node number can be put into a parameter by *GET and put into a file with
PARSAV during the generation pass. It can be retrieved during the use pass by PARRES,
and used to guide the offset of node numbers in either the already generated superelement
with SETRAN, or in the remainder of the model with NUMOFF.

Top Down Substructuring Example

The following example is NOT a substitute for a detailed understanding of ANSYS substructuring. It is for demonstration purposes only. Get the ANSYS Substructuring
Tutorial guide and the Substructuring Guide for serious work.

The top down substructuring technique makes it possible to take an existing model, and
have a portion of it changed into a substructure. This can boost efficiency in a number of
ways, such as dealing with contact surfaces and gap elements, and handling very large
models that have already been generated. In the example presented below, a model
database is read in from a user-prepared file named "model.db". This model in
"model.db" must have had a portion of the elements grouped into a component called
"super" using the command CM,SUPER,ELEM. This component will be rendered into a
substructure. The intended substructure should, in general, consist of linear elements. The
model must have had constraints and loads applied. The SFE command used in this
example expects loads to exist inside the superelement, but should work without them.
Some nodes can have been declared by the user to be master degrees of freedom. In order
to create master degrees of freedom through the GUI, the analysis type has to be
Substructure. In order to use the example below, the analysis type will have to be
changed back to the type desired after creating extra master degrees of freedom -- usually
to static analysis. Note that for dynamic analysis, master degrees of freedom are needed
throughout the substructure -- they are not created by the example below. In the example
presented, master degrees of freedom are automatically generated for the nodes on the
interface between the component "super" and the remainder of the model. (There is no
check for redundancy with user-declared master degrees of freedom.) The full model is
used -- the analysis is not limited to the selected set of elements in the file "model.db"
when it is loaded. The example will automatically perform the substructure generation
and the subsequent analysis, and will plot results to plot files. I have added a plot of the
results for the full model, with results files for both the substructure and the non-
substructure being read. There is no error checking in the example. This example has had
limited testing--let me know about errors.

When dealing with gap elements and/or contact surfaces, the usual procedure would be to
select all the linear elements in the model (not the gap or the contact elements), and in
this example, call them the component "super" for substructuring. Because the
substructure matrix is usually much smaller than the full model matrix, the iterations
required for convergence with gap and contact elements will usually run far faster than
iterations involving the full model, once the substructure matrix is generated. This can
make otherwise infeasible modeling possible.

In dealing with extremely large models, where the objective is simply to deal with the
size, not nonlinear elements like gap elements, there may be little advantage in turning the
entire large model into a substructure -- it could take as long to generate the superelement
as to solve the model for one load case. It would be more common to turn portions of the
model into one or more substructures. The connecting regions between the substructures
would be chosen to involve as small a number of nodes as possible, to minimize
substructure matrix size, and model wavefront size.

Although a MATRIX50 substructure superelement can undergo Large Displacement, it
will act internally as a linear elastic structure. Methods to use MATRIX50 in nonlinear
applications should be thoroughly tested by the user before application, including plots of
displacement and stress to look for compatibility in results among the substructure
regions and the remainder of the model, and checks that reaction forces equal the total
applied forces. I have encountered difficulties combining Large Displacement with
Substructuring -- see the image below.
To use the following example: (1) Create a model. (2) Select a portion of the elements to
become the substructure, and give this selection set of elements the component name
"super" with the command "CM,SUPER,ELEM". (3) Apply the loads and constraints, and
define the analysis type; the analysis type must be acceptable for substructure use. (4)
Save the model with the database name "model.db". (5) Call the routine below with the
/INPUT command. Graphical results for the last load substep in the results files will be
plotted to disk files. If run interactively, the user will have to click the "OK" button a few
times, and a plot to the screen should result when done. Expect warning messages related
to partial element selection, and to reading from results files.
NOTE: This example sets gravity load to ZERO in the "gen" portion of the analysis;
otherwise, gravity would be DOUBLED on the superelement if the user's model includes
gravity -- see the Elements Manual for MATRIX50. (If gravity is applied, be sure a
density was applied to the materials in the model. If there is no gravity, the example can
have the "seopt" command changed to NOT generate the mass matrix for the
superelement.)

The loading on MDOF nodes would also be DOUBLED if it was used in the
superelement load vector, and used in the "USE" pass. For this reason, corrections to this
routine have been added (Nov.2, Nov.4 1998). A complication for substructuring: Only a
master node from a coupled node set or a constraint equation node group can be used as
an MDOF for substructuring. This complication is NOT addressed in the present
example. The example is for stress analysis. It does not address centrifugal loading. To
address other types of analysis, start with a look at the ANSYS Advanced User's Guide,
and look at the table of loads applicable in a substructure analysis.

Automating a substructure analysis is somewhat tricky -- this file will NOT be applicable
to all types of analyses. I have been testing it with simple stress examples. The
"POSTPROCESS" pass seems to work in loading stresses from the two different sources
for viewing, "USE.RST" and "GEN.RST", although I haven't seen this documented. I
tried the SUBSET command, but was getting warning messages about the nodal force
and other results not necessarily being correct. I haven't thoroughly investigated this.
Have a close look at how the substeps are output with OUTRES and selected for
expansion:

! "Top-Down" Substructuring Example -- In Development


!
! - For information only. Use at your own risk.
! - There is no error checking in this example.
! - Warning messages will be generated.
! - ANSYS/ED supports very few master degrees of freedom.
!
! WARNING: Gravity would be applied TWICE to the superelement if
ACEL
! were not zeroed in the "gen" pass. For models that do
not
! include inertial loads, change the "seopt" command to
generate
! STIFFNESS only. See the Elements manual for MATRIX50.
! This example NOT designed for other inertial loads.
!
! WARNING: In the "use" pass, nodal loads on superelement MDOF
nodes
! are deleted so loads on MDOF nodes are not counted
TWICE.
! FDELE and DDELE are used.
!
! The model to be processed is in the file "model.db". The user
must
! have identified the region to be substructured as the
component "super"
! with the command "CM,SUPER,ELEM" and saved the model as
"model.db".
! Anything nonlinear in the component "super" will be treated as
linear.
! Analysis type is defined by the file "model.db" -- must be
acceptable type.
! The "USE" pass has OUTRES set to write ALL substeps to the RST
file.
! The "EXPAND" pass has a *DO loop that expands solutions at ALL
substeps.
! The model must have had its loading and constraints applied.
! This example is for one load case only. Some Master Degrees of
Freedom
! can have been applied by the user -- needed for dynamic
analysis.
! Master Degrees of Freedom Nodes will be generated between the
substructure
! and the remainder of the model. No check for redundancy is
performed.
! Unless this file is run BATCH, the user will have to click the
"OK" button
! whenever the CLEAR command is executed, and if error
messages appear.
!

fini                   ! finish whatever was active previously
/clear                 ! clear the database

/COM,############ GEN ############
/COM,############ GEN ############
/COM,############ GEN ############

/show,part1,grp        ! file for storing plots

resume,model,db        ! read the model to be processed
                       ! - all loads and constraints must already be applied
                       ! - the SFE command is employed in the "use" pass to
                       !   apply loads to the substructure
                       ! - only one substructure generated in this example
/filname,gen           ! filename for the generation pass
/prep7
allsel
*get,nmx,node,,num,max ! get the highest node number
*get,nmn,node,,num,min ! get the lowest node number
cmsel,s,super          ! select the elements identified as the component "super"
nsle                   ! select nodes of these elements
esel,invert            ! select the elements that are not part of "super"
nsle,r                 ! reselect nodes connecting "super" to remainder of model
m,all,all              ! make these nodes Master Degrees of Freedom (MDOF)
cmsel,s,super          ! select the "super" elements again
nsle                   ! select their associated nodes
nsel,r,m,,nmn,nmx      ! reselect all of these nodes that are MDOF (don't want nodes
                       ! outside the "super" that the user called MDOF)
fdele,all,all          ! delete loads on these MDOF nodes for "gen"
ddele,all,all          ! delete displacement loads on these MDOF nodes for "gen"
nsle                   ! select nodes of the component "super"
/pbc,mast,,1
/pbc,f,,1
/pbc,m,,1
/pbc,u,,1
/pbc,rot,,1
/title,Elements of the to-be-substructure
eplo                   ! plot the elements of the to-be-substructure
fini
/solu
antype,subst           ! substructure analysis
seopt,gen,2            ! generation pass -- generate STIFFNESS and MASS matrices
                       ! - if no inertial load, change setting to STIFFNESS only
save                   ! save this part of model as "gen.db" for expansion pass
                       ! - SAVE need not follow the command "solve"
                       ! - component "super" and its nodes currently selected
acel,0,0,0             ! set gravity to Zero AFTER "save" but BEFORE "solve"
solve                  ! generates the "gen.sub" file
fini

/COM,############ USE ############
/COM,############ USE ############
/COM,############ USE ############

/clear,nostart
/show,part2,grp
resume,model,db        ! bring the model in again; restores "acel" if any.
                       ! "model.db" has to define the analysis type -- it should not
                       ! be a substructure generation
/filname,use           ! filename for the use pass
/prep7
allsel
*get,nmn,node,,num,min
*get,nmx,node,,num,max
cmsel,s,super          ! select the portion intended for the substructure
esel,invert            ! select the remainder of the model
nsle                   ! select nodes of the remainder of the model
nsel,a,m,,nmn,nmx      ! add MDOF nodes for visibility (not needed for solve)
*get,ntp,etyp,,num,max ! get max element type number in the model in parameter ntp
et,ntp+1,50            ! new element type ntp+1 set to superelement MATRIX50
type,ntp+1             ! set type ntp+1 before reading ("creating") the superelement with SE
se,gen                 ! read in the superelement matrix from generation pass
                       ! - master D.O.F. nodes already are at the interface
                       ! - no need to couple coincident interface nodes this example
                       ! - new element number assigned should be above maximum
*get,snm,elem,,num,max ! get the element number of the superelement just loaded
                       ! - needed for SFE loading the superelement below
                       ! - extra work needed if more than one superelement
/pbc,all,,0
/pbc,f,,1
/pbc,m,,1
/pbc,mast,,1
/title,Remainder of model attached to substructure
eplo                   ! plot the elements in the non-substructure plus "outline" view
                       ! of the substructure
fini
/solu                  ! "model.db" analysis type for substructure is needed
SFE,snm,1,SELV, ,1     ! load applied in generation pass was in "model.db"
                       ! - apply load to superelement number "snm" found above
                       ! - extra work needed if more than one superelement
outres,all,all         ! save results for all the substeps of the load step
                       ! - change here and "EXPAND" below if desired to change
solve                  ! results go in the file "use.rst"
save                   ! save "use.db" to optionally review non-substructure results
fini                   ! "use.db" and "use.rst" now contain non-substructure results
/post1
set,last               ! plot results at the end of the load step
/title,Stress in the non-substructure elements
plnsol,s,eqv           ! show nodal stress in the non-substructure
*get,lastlstp,active,,set,lstp    ! get the last load step number
*get,lastsbst,active,,set,sbst    ! get the last substep number
parsav,scalar,parameterstore,parm ! store them in file for retrieval below
fini

/COM,############ EXPAND ############
/COM,############ EXPAND ############
/COM,############ EXPAND ############

/clear,nostart
/show,part3,grp
/filname,gen           ! filename for the expansion pass
resume                 ! brings up "gen.db" saved above, "super" is selected
parres,new,parameterstore,parm ! retrieve data on last load step/substep
                       ! parres must follow the resume statement
/solu
expass,on              ! activate expansion pass
seexp,gen,use          ! options for the substructure expansion pass
*do,iii,1,lastsbst
expsol,lastlstp,iii,,yes ! expand result at last load step/substep
                       ! - (read about NUMEXP also)
outres,all,all         ! all data written
solve
*enddo
fini                   ! "gen.rst" now contains substructure results, last step
/POST1
/title,Stress in the Substructure
plnsol,s,eqv           ! show nodal stress in the substructure
save,stresses_in_super,db
fini

/COM,############ POSTPROCESS ############
/COM,############ POSTPROCESS ############
/COM,############ POSTPROCESS ############
!
! WARNING: The following is my own invention; use at your own risk.
!          Warning messages will be generated by ANSYS.
resume,model,db
/show,part4,grp
/post1
cmsel,s,super
nsle
file,gen,rst
set,last
esel,invert            ! Select the elements NOT in substructure component "super"
nsle                   ! Select the nodes of these elements
file,use,rst           ! Point to file "use.rst" that contains the rest of the results
set,last               ! Read in load step data for selected elements, last substep
esel,all               ! Select all elements
nsle                   ! Select the nodes of the elements
/pbc,all,,0
/pbc,f,,1
/pbc,m,,1
/pbc,mast,,1
/title,Stress in the Full Structure
plnsol,s,eqv           ! Show nodal stress for the full model.
                       ! Because of averaging, PLNSOL stresses on the interface of the
                       ! substructure and non-substructure regions cannot exactly
                       ! match values for these locations plotted separately, above.
                       ! Element stress and displacement should exactly match in
                       ! a small displacement linear analysis.
save,stress_allelem,db ! Save the model with all stresses on elements
/show,term             ! Back to screen -- only works if used interactively
plnsol,s,eqv           ! Show the stress results for all elements if interactive ANSYS

The "top down" example saves the results of the "use" pass and the "expansion" pass in
database files. These can be loaded to inspect results in the non-substructure and in the
substructure parts of the model, respectively. If the file is run interactively, the user will
have to click the "OK" button each time the /CLEAR command executes, and for a
variety of warning messages that can appear. It may be preferred to run the file under
Batch control, and to later review the results in the plot files, and in the resulting database
files. Remember to check for error and warning messages. Because of the complexity of
substructure analysis, the user should run checks on balance of forces, and do other
typical checking of results.

Large Displacement Nonlinearity and Substructure: The ANSYS 5.5 Advanced User's
Guide, Chapter 5 gives more help on large rotation (large displacement, geometrically
nonlinear) substructured analysis than at the 5.3 level. Note the comment that constraints
should be applied in the "use" pass, not in the "gen" pass, for large rotation analysis.

If the file "model.db", used in the above example, has had Large Displacement activated
with "NLGEOM,ON" then a nonlinear solution will be sought. Convergence criteria,
ramping of loading, substeps, and other nonlinear controls may be desired. Because the
substructure will act linearly internally, convergence may not be as easy as the user would
wish. When the run does converge, the results will not be an exact match for the result
without substructuring. The output plots should be examined to see if they read "Substep
999999", indicating failure to converge. If you test the above example with Large
Displacement, use a Large Displacement model that converges easily without a
substructure approach. An attempt has been made in the above example to cope with a
model that develops the Large Displacement solution in a load step containing a set of
substeps. This is the reason for statements that record the last loadstep and substep
numbers. However, the example does NOT reserve application of all DOF constraints for
the "USE" pass, as recommended in the ANSYS 5.5 guide, so it will NOT be appropriate
for models with constraints applied to non-MDOF nodes in the substructure region. The
user can get around this by manually assigning MDOF to all the nodes to which
constraints are applied in the component "super", in "model.db".

The master degrees of freedom for the superelement must have rotational degrees of
freedom for Large Displacement work. The user can try assigning MASS21 elements to
the master degree of freedom nodes if the elements in the model do not have rotational
degrees of freedom. The MASS21 elements can have a REAL value that contains zero
values for the masses and mass moments of inertia. This will introduce the requisite
rotational degrees of freedom.
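
A minimal sketch of that trick (untested; the element type and real set numbers are arbitrary, and the selection of the master degree of freedom nodes is left to the user):

et,9,mass21                ! point element with UX,UY,UZ,ROTX,ROTY,ROTZ degrees of freedom
r,9,0,0,0,0,0,0            ! zero masses and zero mass moments of inertia
type,9 $ real,9 $ mat,1    ! attributes for the elements about to be created
! (select the master degree of freedom nodes here, e.g. with NSEL or CMSEL)
*get,nct,node,,count       ! number of selected nodes
nnn=0
*do,iii,1,nct
nnn=ndnext(nnn)            ! next higher selected node number
e,nnn                      ! one MASS21 element on each master DOF node
*enddo
allsel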

When using elements like SHELL63, which have rotational degrees of freedom, I have
encountered a rather odd result: The Large Displacement solution for the elements in the
superelement (stored in "gen.rst" in the example) is for the displacement of the
substructure nodes with respect to a coordinate system embedded in the superelement,
not with respect to the global axes. This is not the case for small displacement solutions,
which appear displaced correctly. Since the superelement can undergo large rotation, the
displacement that is reported and plotted for the nodes inside the superelement will be far
smaller than the displacement reported and plotted for the remainder of the model, in a
Large Displacement solution. This is because the coordinate system embedded in the
superelement moves with the superelement. In limited testing, the SEQV stress plots
appear to be OK, if the load step that is to be expanded is identified properly. I have not
investigated what happens to stress components in the Global and Element Coordinate
Systems. Rotational transformation of the stress and strain tensors could be very complex.
See the image below for a result combining SHELL63 elements, substructuring, and
Large Displacement.

A possible visual displacement fix (for the displacement plot problem of 6 DOF elements
in Large Displacement substructures) is to transform the displaced position coordinates of
the non-MDOF nodes in the superelement on the basis of the rotations and translations of
the origin of the superelement in Global Coordinates. The origin of the superelement will
be the MDOF node that reports no displacements or rotations inside the superelement (in
superelement coordinates); it appears to be the MDOF node with the lowest node number.
Applying a transformation properly will require deducing or looking up the order of the
sequence of rotations that ANSYS uses in Large Displacement work, or that ANSYS uses
to report node rotations. A reading of the Theory Manual suggests that ANSYS internally
uses quaternions for large displacement rotations in space. This would be for the usual
reason that quaternions do not have a singularity in any orientation, in contrast to Euler
angles. It appears that the rotations reported at a node represent 3-D components of a
single rotation vector, rather than Euler or other angles, so the transformation will need to
be based on rotation about a vector that starts at a known point in space (the origin of the
superelement), plus translations. I will be working on this as my next project for this web
page. The reported rotation may be complicated by rotated nodal coordinate systems
(NROTAT) or superelements that the user has employed... this will require checking.
Reader feedback would be appreciated. If I get anywhere with this, I will limit myself to
displacements only. Transforming stress tensors would be a bit much!

The above plot was generated using the above sample /INPUT file on a Large
Displacement model of a cantilever beam created with SHELL63 elements. A similar
displacement discontinuity results with BEAM4 elements in a similar application, as
shown in the images below:

Tip 46: Plot Applied Temperatures

In a thermal stress analysis, temperatures will be applied as a "load". Temperatures can be
applied to nodes with the BF command, to elements with the BFE command, or implied
using other commands. (Check the BFE command and the element type in the ANSYS
documentation for details on using BFE.) A colored element plot of applied temperatures
can be generated by using the commands /PBF,TEMP,,1 and EPLO, which shows body
force loads as contours on displays, per the ANSYS Commands manual.

When using beam, link, and pipe elements, if the element thickness is shown with the
/ESHAPE command before executing EPLO, temperatures can be made visible with
contour coloring for these line elements. It may be desired to exaggerate their displayed
thickness with /ESHAPE in order to make the temperature information more visible.
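
For example, a minimal (untested) sketch, in which the /ESHAPE factor is arbitrary:

/pbf,temp,,1               ! contour applied body-force temperatures on element displays
/eshape,2                  ! display line-element cross-sections, exaggerated for visibility
eplot                      ! the element plot is now colored by applied temperature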

Tip 47: Skipping Over Statements in an ANSYS Input File

ANSYS commands can be developed in a file that is executed with the /INPUT
command. This can permit very flexible and sophisticated use of the program. Here is a
well known programming trick that can be used to temporarily skip over part of an
ANSYS input file. Set a parameter ("SKIP" in this example) to a value that tells an *IF
statement to jump over a section of code that you want to skip. This is much quicker than
commenting out a block of code, or cutting and pasting as an input file is developed and
modified. *IF statements that use this parameter could be located in a number of
positions in the input file -- this permits changing the value of one parameter at the
beginning of the input file to cause skipping of input code in a variety of locations.

! Input code to ANSYS...
! ...
SKIP=1                 ! Set to 1 to skip, 0 to run the code inside the *IF...*ENDIF commands
*IF,SKIP,EQ,0,THEN
! ANSYS commands that are optionally executed...
*ENDIF
! ... more code follows

Since there is no compilation of the input file, the "skip" technique uses little time in
choosing to execute or bypass the blocked-off commands (ANSYS still has to read the
skipped code in order to match up the *IF and *ENDIF commands).

Tip 48: Static Analysis Followed by Transient Analysis

Transient analysis by ANSYS can model transient vibrations, or the dynamics of a
flexible mechanism in motion, in addition to more complex effects. Initial conditions can
be applied, followed by transient analysis. One type of initial condition is a zero velocity
initial position with stored energy. The stored energy can be potential energy of position,
elastic energy, or both. Another initial condition is an initial velocity. A model can have
both initial velocity and stored energy. A static analysis may be desired to develop the
stored elastic energy, before starting a transient analysis. Remember that for transient
analysis, the mass of the model must be input in the appropriate mass units, not as
weight.

The following ANSYS input file illustrates the execution of a linear elastic static analysis
that sets an initial condition, followed by a transient analysis. The model is of a
cantilevered beam that has a force applied to the free end in a static analysis. The
transient vibration that results when the force on the free end is removed is obtained. No
gravity is used. No damping has been applied, and ANSYS defaults for the numerical
integration are implicit. This is a linear elastic solution, so the numerical integration
should be stable, given the ANSYS algorithm used. The time substep size for the
transient analysis should be smaller than 1/20 of the period of the first few modes of
vibration. (The user could tweak the ANSYS numerical integration parameters so that
very high frequency response modes are numerically damped. Stability in Large
Displacement nonlinear transient analysis is probably not guaranteed, although damping
and small time substep size should help.) The use of a consistent mass matrix (default)
should in general yield more accurate results than a reduced mass matrix if the mesh
density is coarse; however, a reduced mass matrix may shorten the solution time in large
models. The movement of the tip of the beam is plotted -- it is not a perfect
sinusoid because the initial deflected shape of the beam is not an exact match to a mode
of vibration.

! Transient vibration, cantilever beam, "plucking" the tip.
! For illustration purposes only. Use at your own risk.
fini
/clear                 ! Start fresh
/title,Transient Vibration of Cantilever Beam
/PREP7
ET,1,BEAM3             ! 2-D model of beam
R,1,1,1,1              ! beam cross-section properties
MP,EX, 1, 30000000     ! Young's modulus, BIN units
MP,DENS,1, 7.34E-04    ! beam mass density, BIN units
K, , 0.0               ! keypoints
K, , 10.               ! 10" long
L, 1, 2                ! line
LESIZE,ALL,,,8,1,1     ! 8 element divisions
LMESH, 1               ! mesh with beam elements
FINISH

/SOLU
ANTYPE,4               ! Select transient analysis
F,2,FY,-50000          ! apply down force on RHS node (unrealistically high)
d,1,ux                 ! constrain first node at LHS, in X direction
d,1,uy                 ! in Y direction
d,1,rotz               ! and constrain rotation

time,0.0005            ! small time increment, static
OUTRES,ALL,ALL         ! save all substep results
timint,off,all         ! no time integration -- treat as Steady State
nsubst,2               ! two substeps to imply zero initial velocity for transient
kbc,1                  ! step change load
solve                  ! find the static deformed shape

TIME,.002              ! time at end of transient (pre-determined to show oscillation)
NSUBST,100             ! time steps small enough to show vibration
KBC,1                  ! step change load
fdele,all,all          ! delete force -- show vibration after force is released
OUTRES,ALL,ALL         ! save all substep results
timint,on,all          ! activate transient analysis
solve                  ! find the transient vibration of the beam
fini

/post1
/dscale,1              ! automatic scaling, to easily view final result
pldisp,1               ! show final deformed shape
FINISH

/POST26
NSOL,2,2,U,Y,UY        ! results variable for plotting
/title,Transient Vibration of Cantilever Beam: Motion of Tip
PLVAR,2                ! graph oscillation of the tip of the beam
FINISH

NOTE: The use of the TIMINT command controls activation of the static and transient
portions of the solution. The static solution is obtained at two time substeps so that an
initial velocity of zero is implied. An animation of the transient solution can be generated
for the full beam in /POST1, showing the transient vibration in action. For an animation,
the user will have to set a satisfactory displacement scaling value with the command
/DSCALE, not use automatic scaling. In the animation of the Large Displacement
motions of a mechanism, a /DSCALE setting of 1.0 will generally be wanted, so that
angles of rotation look correct. A zoom setting other than /ZOOM,OFF will usually yield
a better animation.

Tip 49: File Compression for Model Storage


If no restart is to be executed on an ANSYS model, it will often be sufficient to save only
the model database file (*.DB) and the results file (*.RST) when archiving an ANSYS
model. If the model was generated from command input files, these will require storage.
If only one load step was written to the results file, the results file may not require
archival if the results are also contained in the database file. Load case, graphics output,
and other files may be wanted for archival. The database, graphics, and results files can
be extremely large. They often compress well using data compression programs such as
the UNIX compress and gzip utilities (gzip is more powerful than compress). On
Windows computers, gzip is also available for NT (it handles long file names), in addition
to the shareware ZIP utilities, though you will need to search the Internet for the Windows
NT version of gzip and its instructions (test before use). In FEA work, I find the advantage of the gzip utility
to be that the compressed file name is simply the original file name with .gz appended,
and the uncompressed file is removed. The data storage requirement may be reduced by
roughly 25% to 80%, both on the hard drive, and on tape or removable disk. The data
compression is significantly more effective, though much slower, than with the disk
compression scheme that can be used by Windows NT 4.0, which also does not keep the
files compressed when they are sent over a network, or otherwise moved around. Hint:
Make sure that those who will decompress the files in future will know how to do it!

Tip 50: Organizing Large FEA Models

Examining the results of an FEA model, selecting and modifying portions of the model,
and keeping a record of what MAT (material) and REAL (shell thickness, beam size, etc.)
values were used for various parts of a model become very difficult with large FEA
models. A very large structure represented with hundreds or thousands of individual beam
elements or areas meshed with shell elements, will require that the identities, materials
and REAL settings for large numbers of parts be organized and recorded.

There is no one way to do this. Individual parts, or groups of parts, can be defined to be
components that are accessed by names of up to 8 characters. These parts can be
geometric entities, elements, or nodes. Macros can step through all the components using
*GET commands. Collections of components can be grouped into component assemblies.
An individual assembly could be created for those components that are to be selected
under certain circumstances, for analysis or for results review. The use of components
makes it possible to refer to either a part or a subassembly by one name, and easy to
select it. The creation of a component can save a set of entities that were selected with a
certain sequence of select logic, and be used in the enhancement of the ANSYS select
logic process. The database component commands are: CM, CMDELE, CMEDIT,
CMGRP, CMLIST, and CMSEL. A macro can be written that will step through all
components, plotting them, including the component name and information on it in the
plot title, or an annotation.
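
As a small illustration (the component names and REAL numbers are arbitrary; for information only):

esel,s,real,,12                 ! select the elements of one part by its REAL number
cm,BRACKET,elem                 ! store that selection as the element component "BRACKET"
esel,s,real,,13
cm,SIDEPLT,elem
cmgrp,CHASSIS,BRACKET,SIDEPLT   ! group the components into the assembly "CHASSIS"
cmsel,s,CHASSIS                 ! later, select everything in the assembly by one name
eplot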

Either REAL values for elements and entities, or MAT values, can be used to identify
parts in a model with numbers. (The element type will have to support a REAL value if a
REAL is to be created for that element type. However, a REAL value can be forced on an
element even if the element type does not admit assignment of a REAL. This can be done
when creating an element, applied to the geometric entity that is to be meshed with the
element, or forced after the fact with EMODIF. When EMODIF is used, be cautioned that
in a re-meshing the REAL assigned to the geometric entity will be used. When the
element type does not accept a REAL setting, the R setting can simply be left blank. In
that case, the commands NUMMRG,ALL and NUMCMP,ALL can make a mess and
should not be used in this all-inclusive form -- stick to specific forms such as
NUMMRG,KP.) In a model made of shell elements or beam elements, for example, each
plate or beam could be described with its own REAL value, even though there may be
many plates or beams of a given thickness or size within the model. Where a group of
parts will always be chosen with the same REAL value, they could share one REAL
setting. This makes changing the shell thickness or the beam characteristics very simple,
and provides easy part selection with commands like ESEL, ASEL, or LSEL, as
appropriate, according to their REAL value, or a range of REAL values. An array (see the
*DIM command) could correlate REAL values with other information, such as part
names (with an 8 character limit). The same approach can be taken with the setting of
MAT values for describing the material properties. Using material numbers for part
identification, however, could get cumbersome, because there are such a large number of
individual material property settings, and they may be temperature dependent in some
models, or include material nonlinearity.
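
A sketch of that kind of bookkeeping (untested; the array name, part names, and REAL numbers are placeholders):

*dim,pname,char,50         ! character array: an 8-character part name indexed by REAL number
pname(12)='BRACKET'
pname(13)='SIDEPLT'
esel,s,real,,12            ! select all elements of one part by its REAL value
/title,Part %pname(12)% -- REAL set 12
eplot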

I find it helpful NOT to set any of the geometric entities or elements in a large model to a
MAT or REAL value of one. One is a default value that is sometimes assigned when no
value has been assigned by the user. Geometric entities may have a value of zero when
nothing has been assigned. I can then select things that have a MAT or REAL of zero or
one to check on whether I have forgotten to assign a value to any part of the model. Plots
with coloring assigned according to REAL or MAT will help in checking a model.

When REAL or MAT values have been used to differentiate between different parts of a
model, the user must be careful not to use a NUMCMP,ALL or NUMMRG,ALL
command on entity numbering, because it will compress or merge out REAL and MAT
values. This will destroy the identification scheme. The NUMCMP command will have to
be called with the specific quantities to be compressed individually identified, such as
NODE, as in the manual.

When the parts have been identified by different REAL or MAT numbers, a coloring
scheme based on REAL or MAT can be used during element or geometric entity plots, to
improve identification of the parts of a model, and the appearance of the FEA plot.
Caution: ANSYS does not use a "4 color map theorem" when plotting (can't do this in 3-
D anyway) so parts of differing REAL or MAT may be adjacent and have the same color.

Arrays could be used to assign numbers to component names, and to keep track of what
REAL values were used by the elements within components. Arrays could assign 8-
character names to the parts described by different REAL values. Arrays could be used to
set several values of a number of shell thicknesses or beam sizes to be examined in a
series of analyses that are to be run automatically. As discussed above, this parameter
information can be included in annotations during model and results plotting, making
model review easier and less error-prone.

Tip 51: Selecting Nodes in a Stress or Strain Range

The selection of nodes in a certain stress range can be effected with, for example, the
command NSEL,S,S,EQV,40000,9999999 in order to get nodes with EQV (Von Mises
equivalent) stresses from 40000 to 9999999. This and similar commands can be used to
get at only the portions of a full model that are significantly stressed.

The effectiveness of this command can be compromised somewhat by nodal stress
averaging, shell stress surface selection (TOP, MID, or BOT), and other complications.
The command would typically be followed by the two commands ESLN and NSLE to be
able to plot the associated elements and their stresses.
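
Put together, a minimal sequence (the stress threshold is arbitrary) would read:

nsel,s,s,eqv,40000,9999999   ! nodes with Von Mises stress of 40000 or more
esln,s,0                     ! elements having any of those nodes
nsle                         ! all nodes of those elements
plnsol,s,eqv                 ! plot stress for just the highly stressed region
allsel                       ! restore the full selection afterward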

If the above part identification scheme using REAL values has been employed, the stress
level selection command could be followed by a macro that selects all parts that match
the REAL types of the selected elements. This would make it possible to see all highly
stressed parts. This approach is helpful with complex models with parts visually hidden
by other parts.

Tip 52: Selecting Nodes that are Subjected to Nodal Coupling

Nodes that are coupled can be selected with commands such as NSEL,S,CP,,1,999999 in
order to show only the coupled nodes, and to have the option of using picking to delete
nodal coupling, or for other purposes. Once coupled nodes have been selected, work to
evaluate the forces resulting from the coupling can begin. Similarly, nodes can be
selected according to their presence in constraint equations (CE), their applied
displacement (D), forces applied, and other criteria -- see the NSEL command for further
information.

Tip 53: /NOPR and /GOPR Speed Up Input Files and Macros

When a long input file or macro is read while running ANSYS interactively, text
information is written to the output screen and optionally to an output file. If a significant
number of *GET and similar operations are being executed, a large quantity of text
information will be written to output. If the input files and macros are known to be fully
debugged, they may execute faster if they start with /NOPR and end with /GOPR in order
to switch off text output while they are running. If their execution is causing geometry,
nodes, or elements to be generated, a speedup may result from temporarily switching off
the generation of graphics with IMMED,0 and /SHOW,OFF. They can be re-activated
with IMMED,1 and /SHOW,TERM. You may want to consider the /UIS command also.
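
A typical wrapper around a debugged macro (for information only) might be:

/nopr                      ! suppress the echo of commands and *GET output
immed,0                    ! no immediate display of generated entities
/show,off                  ! suppress graphics while the macro runs
! ... the debugged model-generation or post-processing commands ...
/show,term                 ! restore graphics display to the terminal
immed,1                    ! restore immediate display
/gopr                      ! restore printed output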

Tip 54: Using Commands IMMED and /UIS and /SHOW,OFF to Suppress Plotting

I sometimes develop a model interactively, setting up some dimensions as parameters,
then manually modify and add to the log file that is generated. The resulting log file
becomes an input file that I can use for parametric generation of a model. When I run this
input log file, I don't want all of my various plot commands to be executed, only those for
finished model display and results review. This can be implemented with the IMMED,0
(for interactive execution), /UIS,REPLOT,0 and /SHOW,OFF commands. They can be
re-activated with IMMED,1 and /UIS,REPLOT,1 and /SHOW,TERM. If the graphics
output is intended to be sent to a graphics file, the command /SHOW,FILE can be used
for re-activation of writing to a file previously designated by a /SHOW,filename
command. If writing to a file, the immediate mode plotting is off by default. Be warned
that if you change to another output graphics filename with the /SHOW command, then
come back to the first filename, the first file will be overwritten.

When re-running a log file using /INPUT, the messages that required clicking "OK" will
be generated and execution will pause. This means that the /INPUT command will not re-
run all log files unattended. I have not found the /UIS command to completely stop this,
such as when the /CLEAR command is issued. Running batch is sometimes desirable.

Tip 55: What's the Bauschinger Effect? Comments on Material Yield

I first wanted to do elastic/plastic analysis in ANSYS to get a feel for the onset of failure
in an automotive part. It was of value to show that one proposed crossection shape was
significantly better than another. This required me to use plastic material properties for
steel, in nonlinear large deflection analysis in ANSYS. Unfortunately, I had taken neither
an academic course in metal forming, nor attended an ANSYS course in nonlinear
analysis. Digging into the ANSYS manuals, the first thing one has to decide on is
whether to use Kinematic Hardening or Isotropic Hardening for the material model.
Fortunately, high precision was not needed for what I was doing, so the exact stress/strain
curve and the choice of material yield rules were not a big concern. Still, I wanted to
know what I was doing, within reason. The manual mentions the relationship between
kinematic hardening and the Bauschinger effect. After some poking around, I finally
found a basic description of the Bauschinger effect in Timoshenko's Strength of Materials
Part II: Advanced Theory and Problems Third Edition, Krieger, Florida, 1976.

Essentially, a tension test causing slight yielding permanently deforms (causes slip in)
unfavorably oriented crystals before other crystals in a specimen. Consequently, upon
unloading, the permanently deformed crystals are in some compression. After re-loading
with tension, the onset of yield is raised because the deformed crystals do not reach their
new slip stress until the load is higher than the first time. If the material is compressed
after tension loading, the deformed crystals reach their compression slip stress before the
rest of the crystals, with the result that compression yielding starts sooner than in a fresh
unstrained specimen. Quoting Timoshenko, "Thus the tensile test cycle raises the elastic
limit in tension, but lowers the elastic limit in compression." This is the Bauschinger
effect.

One thing that may affect the choice of a yield model in ANSYS will be what is
supported by an element type. Shell 63 does not support nonlinear material properties at
all. Shell 181 supports isotropic hardening but not kinematic hardening. Shell 43
apparently supports both, but it is suggested that Shell 181 is more capable.

It should be remembered that ANSYS requires a true strain curve in material
characterization, not the engineering strain curve, when multilinear curves are entered. In
quick-and-dirty checks on the possibility of failure of a structure, I sometimes consider it
sufficient just to use a bilinear model, with the yield portion of the curve fairly flat. This
wouldn't do for models of metal forming in manufacturing, but can sometimes be used to
assess whether structure failure is a concern when some portions of an elastic model are
exceeding yield. It may be desirable to load the structure beyond the design load in order
to observe where significant failure starts, in order to get a feel for margin of safety. This
may require arc-length analysis. (My use of the word "quick" in "quick-and-dirty" is
overly optimistic.)
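
For such a quick check, a bilinear material definition can be as short as the following sketch (the values are only placeholders; substitute TB,BKIN for kinematic hardening where the element and the application call for it):

mp,ex,1,30e6               ! elastic modulus
tb,biso,1                  ! bilinear isotropic hardening table for material 1
tbdata,1,36000,30000       ! yield stress and a nearly flat tangent modulus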

Some design codes have rules for elastic-plastic or for fully plastic analysis that would
have to be used, if such an analysis was needed to justify or qualify a design legally or to
fulfill a contract.

Tip 56: Thought Experiments

Nothing so focuses the mind on the design details of a product as hearing that it failed in
testing or in service. You don't have to be Einstein to perform the following thought
experiment: Suppose that you heard that some aspect of a design had failed in service.
The failure could be yielding, buckling, crack growth, fracture, vibrating to death,
unacceptable deformation, wear or binding, or whatever is appropriate. Brainstorm as to
whether it could happen, what could have caused it, and how analysis could highlight
what is or could be wrong. Do this thought experiment for as many characteristics of the
product as you can. You may substantially extend the number of things that you consider
in the design, and in the FEA work. It may save someone's neck, either figuratively or
literally.

Possibilities and "What If's":

• What could cause yielding -- are fasteners or welds overloaded? Were their loads even
checked -- and for all load cases, or the bounding load cases? Are the bounding load
cases complete? Are stresses above yield over a significant region? Are surface stresses
of shell elements doing something unusual? Were nonlinearities considered? Were all
possible combinations of loading considered? Is there a high load situation that was not
considered? Were all components evaluated in FEA? Is there an unusual boundary
condition arrangement that has not been considered? Was the FEA mesh too coarse?
Were relevant details that cause stress concentrations left out of the model? Can what
was discounted as a local stress concentration lead to progressive collapse or crack
growth?
• Can buckling arise? Have both linear and nonlinear approaches to buckling possibilities
been considered? Has a portion of the model been represented in such a simplified way
that a buckling possibility is not detected? Has nonlinear buckling been considered at loads
greater than the design loads, so that some sense of the margin of safety is obtained?
Can restraint of thermal expansion cause stress and buckling?
• Crack Growth -- what details exist that could possibly be sites for crack growth? Do
surface stresses give any warnings? Where could details be included to reduce crack
growth possibilities? Are regions that have geometry that could lead to crack growth
highly stressed and/or cyclically stressed? Is direct tension on welds causing Type I
fracture loading? Is shear, bending or torsional loading (applied forces and moments,
and/or applied displacements and rotations) on structural details causing Type II or Type
III fracture loading on welds? Is the loading significant? Is fatigue an issue? Is fracture
analysis warranted? Is there a reliable shortcut guide to what is tolerable? Is such a guide
even possible? Could crack growth be so rapid that it happens between inspections and
causes sudden fracture? Are cracks detectable at a size that does not immediately cause
fracture? Should inspection intervals be more frequent when the product is new?
• Vibration -- what loading could stimulate vibration? What frequencies could drive
vibration? Is there adequate structural damping or are there other mechanisms to
suppress trouble? Where are the natural frequencies of vibration? Do steady state
responses or random vibration responses need to be evaluated? Is flow induced vibration
a possibility? Will sound and noise cause destructive vibrations? Have all possible
boundary condition arrangements been included in assessing vibration?
• Will large deformations go outside of what is acceptable? Is the structure stiffness high
enough for the product use? Will deformation cause loss of function, contact with the
surroundings, binding, interference, collision, or excessive wear of moving parts?
• Is the design something that can be manufactured with the quality and uniformity required
to avoid structural weakness?

The analyst should extend the above items to everything that needs to be considered, or
that could go wrong.

Tip 57: Control of Meshing

Since I am using ANSYS 5.3, I can't comment on the latest in ANSYS automatic meshing
capabilities, but a couple of suggestions about the basics may be helpful. You can select
the lines that have not yet had mesh density applied, with the command
"LSEL,S,NDIV,,0" as a check that all lines have had mesh density applied, or for
convenience. The same type of command can be used to find the lines with other mesh
densities.
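
For example (the division count is arbitrary; for information only):

lsel,s,ndiv,,0             ! lines that still have no element divisions assigned
lplot                      ! inspect where they are
lesize,all,,,4             ! give the still-unset (currently selected) lines a default of 4 divisions
lsel,all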

Basic ANSYS training should have taught you that line and area concatenation can help
you get mapped meshing, which gives relatively neat regular meshes such as all four-
sided area elements, or all six-sided solid elements. This can make a big difference in
some models.

Tip 58: Four View Plot

When assessing modes of vibration or deflection of a 3-D structure, I have found it
convenient (though slower) to generate ANSYS plots showing my model in four views
on one sheet of paper or screen plot. The traditional views: Front Elevation (front), Plan
(top), Side Elevation (right), and Isometric (iso), can be positioned in four windows that
are located in the lower left quarter, upper left quarter, lower right quarter, and upper right
quarter of the plot, respectively. (Other standard view layouts can be substituted). A
displacement plot of a mode shape with PLDISP or PLDISP,1 with these four views
active will leave fewer ambiguities about what is happening with mode shapes than a
single-view plot. The only shortcoming is that the images are small -- I prefer to use 11"
x 17" paper in landscape mode for these plots.

An anecdote I heard from a guy I knew: A U.S. ship entered a foreign shipyard needing
a new propeller. The ship's engineer supplied a drawing, and a propeller was cast and
installed. The ship was launched and powered up. When set to go forward, the ship went
backward -- the shipyard used the European standard view interpretation of an American
drawing, and the propeller was mirror imaged!

The following code can be put into a macro to generate a four-view screen. Customize it
as you wish -- I include commands to turn off PowerGraphics and to use Centroidal sort.
This permits clean plots on paper with large models. NOTE: Users may want to set the
/DSCALE value to the same level in all four windows with "/DSCALE,ALL,value".

! For information only. Use at your own risk.
! Put these lines in a macro
! Set screen to show four standard views:
! User may want to set /DSCALE to the same value in all windows
/WIN,1,LTOP            ! Window 1 left top
/WIN,2,RTOP            ! Window 2 right top
/WIN,3,LBOT            ! Window 3 left bottom
/WIN,4,RBOT            ! Window 4 right bottom
/WIN,5,OFF             ! Turn off Window 5
/VIEW,1,0,1,0          ! Window 1 top (plan) view
/VUP,1,Y               ! Reference orientation
/VIEW,2,1,1,1          ! Window 2 ISO (isometric projection) view
/VUP,2,Y               ! Reference orientation
/VIEW,3,0,0,1          ! Window 3 front (front elevation) view
/VUP,3,Y               ! Reference orientation
/VIEW,4,1,0,0          ! Window 4 right (side elevation) view
/VUP,4,Y               ! Reference orientation
/AUTO,ALL              ! Fit all windows
/PLOPTS,INFO,1         ! Include information column
/PLOPTS,LEG2,0         ! Don't include view information
/TYPE,ALL,2            ! Centroid sort, better print than Z-buffer
/CPLANE,0              ! Cutting plane
/graphics,full         ! NOT PowerGraphics (fewer facets?)
! User has to issue the plot command

The next code can be used in a macro to return to a front view in one window. Again, the
user may want to customize some of the lines:

! For information only. Use at your own risk.
! Set screen to show one front view in Window 1
/WIN,1,SQUA            ! Full square Window 1
/WIN,1,ON              ! Turn on Window 1
/WIN,2,OFF             ! Turn off Window 2
/WIN,3,OFF             ! Turn off Window 3
/WIN,4,OFF             ! Turn off Window 4
/WIN,5,OFF             ! Turn off Window 5
/PLOPTS,INFO,1         ! Info on for right column
/PLOPTS,LEG2,0         ! Don't show the view information
/VIEW,1,0,0,1          ! Front (front elevation) view
/VUP,1,Y               ! Reference orientation
/TYPE,ALL,2            ! Centroidal sort, better print than Z-buffer
/CPLANE,0              ! Cutting plane
/graphics,full         ! Not PowerGraphics (fewer facets?)
! User has to issue the plot command

After running one of the above view-generating macros, the user has to issue a plot
command to see the result.

Tip 59: Quick Review of Mode Shapes

To start printing plots of mode shapes directly from ANSYS mode shape results, having
the hardcopy window pop up automatically, type in an input line such as:

SET,1,1$PLDISP$/UI,COPY

The dollar sign separates the commands that are grouped on one input line. Click the
hardcopy OK button to kick off the hardcopy. You may want to set the print to landscape
mode, first. Then, to print plots of the rest of the mode shapes, type:

SET,NEXT$PLDISP$/UI,COPY

Simply keep repeating the second line (to avoid re-typing, double-click it in the ANSYS
Input window), and clicking the hardcopy OK button, to get the rest of the modes printed.
Note that the SET,NEXT command will loop back to the first mode shape after the
number of modes stored in the RST file has been exhausted.

You may see the somewhat odd message:

This comes up because of the /UI,COPY command, and has something to do with the
/ZOOM command. Consequently, this method may not work satisfactorily if your zoom
is not off. In viewing mode shapes, it will be typical to have zooming off.
The following code fragment read from an input or macro file can automatically plot a set
of mode shapes. *GET commands are used to detect information on what substep and
frequency are read. The user does not need to know how many modes were generated, so
automated plotting to a file is simpler. The displacement plots will contain substep and
frequency information. This code should cope with degenerate eigenvalues or rigid body
displacements. Execute this from within /POST1 after a mode case analysis was run, or
after the database and RST file for a mode case analysis are loaded. There is no error
check, so this must be used properly. The user will want to test and customize these
commands:

! For information only. Use at your own risk.
set,1,1                ! set to the first mode
pldisp                 ! plot the first mode
*do,iii,1,9999999      ! use a very large number
set,next               ! set to the next mode
*get,ntotal,active,0,solu,ncmss ! cumulative substeps -- cycles to 1 if all modes in RST done
*get,thefreq,mode,iii,freq      ! use this line if desired to get frequency into a parameter
*if,ntotal,eq,1,then
*exit                  ! exit do loop if done
*endif
pldisp                 ! plot the displaced shape
*enddo

Tip 60: Using ANSYS Help

When using ANSYS interactively, help on any command can be accessed immediately by
typing HELP,commandname into the input window. If the HELP application has been
launched independently, the quick way to get help on a particular command is to use the
"Navigate" and "Help On..." menu choice. Type the command name into the "Help On"
text box that pops up, then click the Apply button.

This will send the help program to the Commands manual for the command name typed.
If you click the "Apply" button, the "Help On" dialog box remains visible, and can be
used to type in other command names.
The online ANSYS help system makes the need for trips to hard-copy documentation
much less frequent. The Help application can be launched while ANSYS is running in the
background, so ANSYS documentation can be studied while a large model is solving.

Tip 61: The FEA Job Hunt

Though not strictly an ANSYS issue, I want to put in my own two cents' worth on this topic,
which is of interest to all of us. Some time back when I was job hunting during a
recession, I was given a pre-screening interview over the phone. The interviewer owned a
consulting firm. It rapidly became apparent that he had been lied to, many times and by
many people, about their FEA experience. I had been using a company's proprietary FEA
code and was not experienced with the major commercial programs, so was immediately
suspect. Another time, I heard of an applicant using another guy's FEA model images to
present in a job interview as "evidence" of his own experience. I interviewed a guy who
claimed experience with "a locally available product I wouldn't know." It became
apparent that he had been coached and knew only the buzz words. When job hunting it is
challenge enough to compete with other experienced people -- misrepresentation we don't
need.
The FEA job applicant should be able to present evidence of academic and/or post-
academic training. The applicant should have a portfolio of previous work. The applicant
should be instructed to bring these to the interview. The portfolio information can be a
little awkward when the products are proprietary. If it would be illegal to present any
images of work done, then I suggest a keen applicant independently develop a set of
small models using ANSYS/ED that illustrate the FEA techniques with which the
applicant is familiar. The applicant should be able to describe the modeling
considerations, techniques, compromises, pitfalls, and post-processing possibilities of the
examples.

Of course, with good references and personal networking, the above situations are less of
a concern. Still, as with computer programming, the productivity of individuals can vary
surprisingly. Consequently, the applicant should be able to describe what makes FEA
productivity possible, some of the modeling shortcuts possible, and give an energetic,
articulate and confident presentation of self.

Questions of the "how would you model this" and "how would you handle this" type are
just as relevant as they are in other professional interviews.

I have had both excellent and very poor interviewers assessing me. With some, I've had to
politely direct the interview just to be able to point out my range of skills. On the basis of
my own small set of experiences I would say that first impressions are very telling -- if
you get a bad feeling about a place during the interview, it may be for a good reason. If
you see a place as being a welcoming workplace with a healthy environment, and the
people you meet behave well and are socially skilled, the odds are that as long as you
function as a valuable employee, life there will be OK. I've had an interviewer keep me
waiting for an hour and a half past the appointment time. I got the job, and found that
things were frequently out of control, and the guy was impossible to see. I had an
unpleasant "stress interview", and sure enough, the place was cheap, had an unhealthy
work life, and poorly structured leadership. Another interviewer struck me as highly
manipulative, trying to goad me into making negative comments about employers and
ethnic groups (of all things--talk about playing with fire--a personnel manager who
apparently fancied himself a psychologist), and giving me inaccurate data on hiring
intentions. A friend of mine got the job and detested it, saying the place was poisoned
with political games. I've had a thoroughly positive interview, got the job, and found I
was with great people. Ignore their hype: What you see is (most likely) what you get.

To end this on a few positive notes: Employers want to hire someone who will be a
success for them. Your boss will be very happy if you make everyone's life easier through
your contributions. Prepare yourself to give a picture of your range of skills, energy,
confidence, communication skill, ability to work with others, range of past experience,
and ability to time-manage a set of responsibilities. Some employers begrudge every
penny, but others are pleased to compensate you attractively if you produce well. Size up
the potential employer carefully, for your time is valuable. Best of luck.

Tip 62: *VPUT and DESOL


I have no idea why the commands manual entry for *VPUT describes the parameter
ParR as "The name of the resulting vector array parameter." The parameter ParR is the
source of data, NOT what is changed by *VPUT. Note that *VPUT can write to node
results, and to an element ETABLE. There is a difference between writing to "nodal
degree of freedom results", and what the manual calls "element nodal results" with
*VPUT.

The *VPUT command can write information to the node results which can then be
plotted as if it was the nodal results, using the PLNSOL command. The command manual
tells us that the effect is permanent for degree of freedom results (changing the database),
but temporary for all others (derived results, not changing the underlying database).
Writing stress data with *VPUT does not affect element plots made with PLESOL,S,option. If
you use *VPUT to write to "element nodal stress results" and immediately do a PLNSOL plot,
you will see the effect; but if you then do a PLDISP plot (which shows unaffected degree of
freedom data) and follow it with another PLNSOL stress plot, that second PLNSOL plot shows
the original data, unaffected by *VPUT. The temporarily modified PLNSOL stress data does not
cooperate with PowerGraphics to give a plot with contour discontinuities. Before using
the temporary *VPUT effect in plotting nodal stress (or other derived) results, test the
method carefully for errors, and for any errors in what I have just said! Warning: In a
shell model, the plot of temporary *VPUT derived data may make the plot legend
indicator for the TOP, MID or BOT surface of the shell elements meaningless -- annotate
the plot to inform the reviewer.
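The following is a minimal sketch of the idea, not a tested procedure; the array name "uzmod" and the node range are hypothetical, and the *VPUT field order should be checked against the commands manual for your revision.

! Sketch only -- verify against the *VPUT entry in the commands manual.
! Assumes an array "uzmod" already holds modified UZ values for nodes 1 through n.
/post1
set,last
*vput,uzmod(1),node,1,u,z    ! overwrite UZ degree of freedom results (permanent in database)
plnsol,u,z                   ! the nodal plot now shows the modified UZ values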

The DESOL command, by contrast, writes derived information to the nodes of elements in
the database on a permanent basis. The command is powerful, and potentially dangerous.
Annotate plots and change titles to inform the reviewer. It can be painfully slow to apply
DESOL to every node of every element, element by element, in a large model.
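As a hedged illustration of the command format only (the element and node numbers below are hypothetical):

! Sketch only: write a value of SX at node 7 of element 101 into the database.
desol,101,7,s,x,123.4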

Tip 63: How to Divide One Element Table Column by Another

To divide one column of an element table (ETABLE) by another column, use the SEXP
command. Make sure the denominator is nonzero! Per the commands manual, SEXP
"forms an element table item by exponentiating and multiplying." The result of SEXP is
formed from (ABS(Lab1)**EXP1)*(ABS(Lab2)**EXP2). Because of the absolute value
operations that protect ANSYS from complex numbers being generated, you will get the
absolute value of the answer you want. To divide with SEXP, use a positive exponent
EXP1=+1 for the numerator Lab1, and a negative exponent EXP2=-1 for the denominator
Lab2.

If you must have the ETABLE column answer with its positive or negative sign, one way
to get it would be to use a blend of SEXP and SMULT. Given ETABLE columns A and B,
you want a column of A/B values with their signs. Use SMULT to form C=A*B which
has the same sign as A/B. Use SEXP to form D=1/(ABS(B)**2) which is positive. Use
SMULT to form E=C*D. Now the column E=A/B with its correct sign.
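Here is a minimal sketch of both approaches; it assumes ETABLE columns named A and B already exist, and it contains no error checking.

! Unsigned quotient: |A|/|B|
sexp,absdiv,a,b,1,-1
! Signed quotient A/B via the SEXP and SMULT blend described above
smult,c,a,b            ! C = A*B, which has the same sign as A/B
sexp,d,b,b,-1,-1       ! D = 1/(|B|*|B|), always positive
smult,e,c,d            ! E = C*D = A/B with its correct sign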

Tip 64: Element Tables (ETABLE) and Array Data Exchange -- An Example
Note that *VGET and *VPUT can communicate with an ETABLE and with an Array. In
this way, information that can only be obtained in either an ETABLE or an Array can be
moved back and forth for manipulation, evaluation, and display. You have to be inside
/POST1 to use ETABLE. SOLVE must have been executed for ETABLE data to be available; in
brief testing, I found that a partial solution was not sufficient to make any data (for
example, element volume) available for an element table. In order to use
*VPUT on an ETABLE, the array and the ETABLE column must already exist. Create
*VPUT on an ETABLE, the array and ETABLE column will have to already exist. Create
the ETABLE column by making a copy of an existing ETABLE column (you can use the
SADD command, for example), or by creating a new dummy column from model
information. Give the new column an appropriate name for the data that will come from
the array. If selection of only a subset of your elements is in place, or if there are gaps in
the element numbering, it may be preferable to use a *VMASK during the *VGET and
*VPUT calls to avoid warning messages. The *VMASK array contents can be based on a
test of element selection. Array size can be based on MIN and MAX values for selected
element numbering, if you use an offset during data movement. Minimizing array size is
a good reason for compressing node and element numbers when developing a large
model, before load cases, solutions and array dimensions are prepared.

An example with shell elements: put shell element volume in an ETABLE, shell element
area in an array, move the area array into an ETABLE column, divide volume by area to
get shell element thickness for each element in a new ETABLE column, and do a colored
ETABLE contour plot of your model's shell element thicknesses (do not average the
values in the ETABLE plot). If the shell element is of varying thickness, this process
generates an average for the element. Print a colored picture of this plot. Augment this
with an element plot using /ESHAPE,1 and coloring based on REAL constant values.
This should help with "pesky visitors" who are always wondering how thick certain parts
of a complex shell model are. It may help you catch some modeling errors. Here is a
macro to generate the ETABLE thickness column and plot the model colored by element
thickness. This macro is written to process Shell63 elements in the selected set of
elements. It is a basic macro with no testing to prevent error conditions, or "cleanup" after
execution. It illustrates movement of an array column into an element table column, use
of masks, offset of the transferred data to minimize array and element table size, and
division of one ETABLE column by another using the SEXP command. The macro has to
be re-executed if the model is changed -- an ETABLE update would not be sufficient. The
denominator that contains element area in the divide operation should automatically be
nonzero because all shell elements have areas.

! ETABLE and Array usage and interaction example.
! For illustration only. Use at your own risk.
! This example is used on SHELL63 elements.
! An array is created called "aaa", element selection may be reduced,
! and element tables are used. A plot results.
! Run from within /POST1
! Put element volume in an ETABLE, and element area in an array.
! Move the area array data into the ETABLE
! Divide element volume by element area to get an element thickness column.
! Element thickness value should be the average for variable thickness elements.
! Plot the ETABLE thickness data for a view of the Shell 63 elements
! that are contour colored according to the element thickness.
!
esel,r,ename,,63                  ! re-select only the SHELL63 elements
aaa=                              ! kill the array to be used
*get,xmax,elem,,num,max           ! what is the highest element number selected?
                                  ! element number compression will be desirable
*get,xmin,elem,,num,min           ! minimum element number
*dim,aaa,array,xmax-xmin+1        ! array to hold areas has to be this big
*vget,aaa(1),elem,xmin,esel       ! fill array with info on whether element is selected
                                  ! -1=not selected, 0=undefined, 1=selected
                                  ! offset with xmin (see the manual)
*vmask,aaa(1)                     ! use element selection info as a mask
*vget,aaa(1),elem,xmin,geom       ! fill the array with geom info on the elements
                                  ! for shell elements this is AREA. Offset with xmin
etable,volu,volu                  ! create element table column with element volume
sadd,geom,volu, ,0,1              ! create dummy column to contain other data
*vmask,aaa(1)                     ! use array as a mask (geom data is positive or zero)
*vput,aaa(1),elem,xmin,etab,geom  ! put data into ETABLE "geom" column. Offset with xmin
sexp,thick,volu,geom,1,-1         ! divide volume by area, get avg. shell element thick.
/title,ETABLE Plot of Shell 63 Element Thickness Values
pletab,thick,noav

You can then select a few elements of interest, and list the element table for the column
containing the thickness, to get a numerical thickness value for those elements. You could
use *VGET to put the REAL values for the shell elements into an array, and transfer that
data into an ETABLE column. Then when you list ETABLE information for a selected
element, you could see the REAL and the thickness value side-by-side. Contour colors
could be explicitly assigned to the different thicknesses found in the ETABLE column, if
the number of thicknesses was not too great. You could place a button on the toolbar that
calls a macro that (1) checks that the user is in /POST1, (2) checks that there is results
data in the database, (3) asks you to pick elements with the mouse, then (4) generates and
prints the ETABLE values for the REAL, thickness, and stress values of those elements,
and (5) finally cleans up and restores the original element selection.
Another use: Put shell element mid-plane Sx, Sy, and Sxy into columns, and multiply
them by the element thickness column. The resulting data would be similar to TX, TY,
and TXY data that can be obtained directly, but now would be in the direction defined by
the active coordinate system. A macro could (1) have the user pick nodes or keypoints to
define a local coordinate system, (2) make it active, (3) develop this ETABLE data and
generate plots colored by load-per-unit-length in the known directions, then (4) clean up.
This could be done for SINT or SEQV if desired.
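A hedged sketch of the idea follows; the local coordinate system number is hypothetical, and the "thick" column is assumed to exist from the macro above.

! Sketch only: approximate load per unit length in the local x direction.
rsys,11                 ! hypothetical local coordinate system 11 made the results system
shell,mid               ! use mid-plane shell results
etable,sxmid,s,x        ! mid-plane Sx for each selected element
smult,ntx,sxmid,thick   ! Sx times thickness = running load in local x
pletab,ntx,noav         ! unaveraged ETABLE contour plot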

The possibilities are endless.

Tip 65: Error Estimation, PowerGraphics, and ERNORM

Error Estimation will not be available when you enter /POST1 if PowerGraphics is
active. If you turn off PowerGraphics, then the "Options for Output" in the GUI will offer
the ERNORM setting that activates error estimation. If PowerGraphics is not active, then
the "Options for Output" in the GUI will not offer the AVRES setting that controls
discontinuity of contours at changes of material and REAL value for elements. The first
time I encountered this was when I couldn't get an error report, and couldn't imagine why,
until I had poked around for a while. I had entered /POST1 with PowerGraphics active.
ERNORM is ON by default when you enter /POST1, but only if PowerGraphics is OFF
with /GRAPHICS,FULL.
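A minimal sketch of the workaround, assuming a structural solution is already in the results file:

! Sketch only: turn PowerGraphics off so that error estimation is available.
/graphics,full      ! full graphics mode (PowerGraphics off)
/post1
set,last
ernorm,on           ! make sure error estimation is active
prerr               ! print the percentage error in energy norm
plesol,serr         ! contour the element structural error energy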

Tip 66: Concatenate and Mesh Last

One of the things I have seen go wrong in model development is: I have concatenated
lines and/or areas, then performed a boolean operation on them. Model problems resulted
and I had to start from scratch (I re-ran most of the log file); I suggest that boolean
operations happen first -- concatenate and mesh last. (This was with ANSYS 5.3; solid
modeling problems are reduced with each version.)

I have found that when three or more areas are concatenated, the lines that are implicitly
concatenated have to be concatenated manually before successful mapped meshing will
proceed. When only two areas are concatenated, the lines concatenate automatically.

When concatenated areas are used to map mesh a volume, it may happen that an adjacent
volume defined by an area that is part of the concatenated set will not mesh until the
"pseudo-area" that results from the concatenation is deleted. I've had occasions when this
did and did not happen. The concatenated lines may need cleanup as well. It is possible
that if the volumes that do not require concatenated areas are meshed first, the remaining
volumes can have their area and line concatenations created afterward and then be meshed
without error messages. This consideration may compromise easy mesh refinement and
adaptive meshing. It may be necessary to go to tetrahedral elements for easy meshing,
unless ANSYS revisions have fixed up mapped meshing concatenation.
Trivia: I first encountered the word "concatenate" when using an IBM 370 a long time
back. A neighbour told me that "concatenate" is based on the Latin word for "chain".

Tip 67: ANSYS Output of Data to Files for Use by Other Programs

Numerical data contained in parameters can be output into ASCII files using the
*CFOPEN, the *VWRITE, and the *CFCLOS commands. The *VWRITE command
only works when called from an input file that includes a format statement similar to
FORTRAN. The following simple macro makes the *VWRITE command easy to use:

! Put this code into a macro file called "writer.mac"
! call with: writer,data
! write data in arg1 to a file previously opened with *CFOPEN
! later on, close the file with *CFCLOS
*vwrite,arg1
(E16.8)

The following ANSYS /INPUT data test will demonstrate the use of the above macro. In
this test, the macro has been called "writer.mac" and it is in the current directory. Either
numbers or parameters that evaluate to numbers can be used in the following commands:

! Example of data output to file from ANSYS
*cfopen,myoutput,dat
writer,123
writer,234
/clear
writer,345
writer,456
*cfclos

Executing the above commands from within ANSYS demonstrates that the output file
remains open in spite of the /CLEAR command. The following is the content of the file
"myoutput.dat":

.12300000E+03
.23400000E+03
.34500000E+03
.45600000E+03

The filename used in the demo ended in ".dat" so that the data would be accessible to the
MathCad program from MathSoft, Inc. The above procedure makes it possible to get
ANSYS information out into another program without errors in manual transcription. If
you can get the ANSYS information into a parameter, it can be moved to an external file.
Data can be read back into ANSYS with the *VREAD command. Similar methods can be
used to move arrays full of data. Note that the *VGET and *VPUT commands can move
data between element tables and arrays, and the arrays can be used to put data into
external text files, so significant automated data movement is possible. This approach can
help to reduce data errors in reports.
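For example, here is a hedged sketch of reading the four values above back into an array; like *VWRITE, the *VREAD command must be executed from an input file or macro so that the format line can be read, and the field order should be checked against the *VREAD entry in the commands manual.

! Sketch only: read myoutput.dat back into a 4-term array called "backin".
*dim,backin,array,4
*vread,backin(1),myoutput,dat,,ijk,4
(E16.8)
*status,backin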
Here is an example of temporarily switching the ANSYS /OUTPUT information from the
default to a file. Note that certain list information will not go to the file when the GUI is
in use (read the manual).

! The following code switches /OUTPUT to a file,
! writes two comments, writes PRSECT information on
! linearized stresses along a previously defined path,
! then returns /OUTPUT to the default.
/output,lin_path,out
/COM,Linearized Path Results from PRSECT
/COM,Compare Results with Code Allowables
PRSECT, ,0
/output

This gives a permanent record that is independent of plotted results.

Tip 68: Writing Array Columns to Output or to Files

The *VWRITE command can be used to output an array column, in addition to scalar
parameters. The array position from which the printing will start must be indicated when
executing the *VWRITE command. As mentioned above, the *VWRITE command cannot be
executed interactively from the GUI; it has to be executed from an input file or macro. The
*VWRITE command prints the data from the starting position on down to the end of the
column. The output that results can be re-directed with the /OUTPUT or with the
*CFOPEN and *CFCLOS commands. The following two macros can be used to make
calling the *VWRITE command easy. The array must exist, having been created with a
*DIM command. The first macro works on a 1-dimensional array parameter. Note the
instruction on how to call the macro, with the array parameter name surrounded by single
quotes in order to delay the evaluation.

! This macro will print a 1-dimensional array
! according to the starting position indicated.
! If this macro is called WRITEAR1.MAC and an
! array called COL1DATA is to be printed from
! position COL1DATA(1) to the end of the array
! then call this macro with the statement:
! WRITEAR1,'COL1DATA',1
! setting the name of the array in single quotes.
! The user may wish to change the FORMAT statement.

*vwrite,arg1(arg2)
(E16.8)

The second works on a 2-dimensional array parameter. The macro call will include the
row and column position from which to start. The *VWRITE statement will cause
printing of a column of the 2-dimensional array. When calling the macro, the array
parameter name is, as above, enclosed with single quotes to delay evaluation.

! This macro will print a column of a 2-dimensional array
! according to the starting position indicated.
! If this macro is called WRITEAR2.MAC and an
! array called MYDATA2D is to be printed from
! position MYDATA2D(1,2) to the end of column 2 then call
! this macro with the statement:
! WRITEAR2,'MYDATA2D',1,2
! setting the name of the array in single quotes.
! The user may wish to change the FORMAT statement.

*vwrite,arg1(arg2,arg3)
(E16.8)

Given that all ANSYS arrays are implicitly 3-dimensional, the second macro above could
be used to print out a 1-dimensional array if the column index (its last calling parameter) is set to one. A
similar macro can be written to print a "column" of a 3-dimensional array. If a term in the
array is MYARRAY(III,JJJ,KKK) then the *VWRITE command will cycle through the
values of the III index when printing out data. The macro for a 3-dimensional array could
be written so that it tests ARG2, ARG3, and ARG4 to see if they are zero. If they are
zero, then they presumably were not entered, and the correct form of a *VWRITE
command could be used to print a scalar, 1-D array, 2-D array, or 3-D array, as
appropriate. Such a macro is illustrated below. Its use would be very error prone without
error checking code. A scalar need not have its name enclosed in single quotes in calling
this macro, but an array would have to be enclosed in single quotes as in the above
examples. A user may want to customize this macro to change the FORMAT statements,
or to remove the /NOPR and /GOPR commands.

! Macro to write a scalar or an array column, as appropriate.
! Indicate the starting position for *VWRITE if an array is used.
! Enclose an array parameter name in single quotes.
! #################
! Examples, if this macro is called WRITER.MAC:
!    writer,aaa           ! if aaa is a scalar
!    writer,'bbb',1,3,2   ! if bbb is a 3-D array parameter
! Note the /NOPR and /GOPR commands. They will overwrite user settings.

/nopr                            ! reduce the amount printed to /OUTPUT
*if,arg2,ne,0,then               ! if nonzero an array is used
*if,arg3,eq,0,then               ! if not a 2-D array
arg3=1
*endif
*if,arg4,eq,0,then               ! if not a 3-D array
arg4=1
*endif
*vwrite,arg1(arg2,arg3,arg4)
(E16.8)
*else                            ! if a scalar
*vwrite,arg1
(E16.8)
*endif
/gopr                            ! switch on /OUTPUT

Test these macros thoroughly before use. Note that they contain no error handling code.
Warning: It is particularly difficult to remember to surround the name of the array
parameter with single quotes.

Tip 69: Synthesizing Parameter Names and Manipulating Jobnames and Long
Strings in APDL

Although I haven't found a documentation reference for the following tricks, they work in
ANSYS 5.3, and presumably will in future versions. The ability to synthesize parameter names,
and to do other text manipulation, could lead to some very creative activities in
"programming" ANSYS methods and in macro writing. Parameter names themselves can
be synthesized by chaining text strings together. Remember that there is a limit on the
number of parameters in a model -- arrays must be used to get around this, if it is a
problem. Try these statements in ANSYS -- manually enter these lines one at a time in the
ANSYS Input window, and check what turns up when the *STATUS command is issued
(scroll down to the bottom of the STATUS text window that pops up):

aaa='qwer'
bbb='tyui'
%aaa%%bbb%=12345
*status
%aaa%1234=5
*status
*do,iii,1,9$abc%iii%=iii$*enddo
*status

For what it's worth, you can even store ANSYS commands in parameters, and execute
them. I encountered some trouble executing commands with commas embedded inside
the character parameters, but the following worked. Give them a try:

aaa='nplo'
%aaa%
a='*get'
b='xx'
c='active'
d='0'
e='time'
f='wall'
%a%,%b%,%c%,%d%,%e%,%f%
*status,%b%
thetime=%b%
/title,%a%,%b%,%c%,%d%,%e%,%f% yields %thetime%
/repl

Parameters that hold text data are limited to 8 characters. Several text variables can be
chained together (concatenated) using the "percent" sign. The user should experiment to
see how blanks are truncated. The following example illustrates chaining. The strings are
concatenated in the /TITLE command, and the /REPL command shows the result in the
plot title.
xxx='The 1st'
yyy=' & 2nd'
/TITLE,%xxx%%yyy%
/REPL

Situations when this trick might be used could be to save and employ long title strings,
annotation strings, or job names for the /FILNAME command. The method can use
several parameters, or several terms of an array parameter. A jobname can be up to 32
characters long, so up to four parameters, or 4 terms in an array parameter, would be
needed. Note that the *GET command can read in a jobname, with the *GET statement
pointing to the character in the jobname where the parameter will begin to read the string.
An example of array use is:

*DIM,aaa,char,4
aaa(1)='First te'
aaa(2)='rm & 2nd'
/TITLE,%aaa(1)%%aaa(2)%
/repl

A database could be saved with statements such as these, in which parameters "aaa" and
"num" both hold text. The contents of "num" could correspond to a load step number, so
that the file name identifies the load step:

aaa='model'
num='456'
SAVE,%aaa%%num%,db
SAVE,%aaa%123,db
SAVE,myjob%num%,db

Another use is to place a parameter inside percent signs, grouped with text, with the
whole expression enclosed in single quotes. Consider this example, which was used to
delete load step files in a complex application:

! Parameter "compname" contains the Jobname of a load step file


*do,jjj,st1,st2,st3
*if,jjj,lt,10,then
/delete,compname,'s0%jjj%'
*else
/delete,compname,'s%jjj%'
*endif
*enddo

Both the RESUME command and the /CLEAR command can destroy the counter used in
a *DO loop. I have used the PARSAV command before RESUME, and the PARRES
command immediately after, in order to recover the counter and other looping parameters
when placing RESUME inside a *DO loop. The user should test this technique before
using -- the parameters saved and restored may undesirably overwrite parameters in the
file that is resumed. Look at this /CLEAR example; it illustrates what can be done:

! test of looping
fini ! exit whatever is active
*do,iii,1,3
parsav,all,xxx,parm
/clear ! be prepared to hit OK button
parres,new,xxx,parm
*enddo

These methods can make it possible to program considerable automation into ANSYS,
using the ability to assemble parameter names, parameter contents, commands, and file
names from letters and numbers.

Tip 70: Solid Elements 95 and 92 -- Efficiency and Interconnection

These two solid elements have mid-side nodes, so they follow curved surfaces nicely.
Fewer elements are needed than with Solid45 8-node bricks, for equivalent accuracy.
Note the comment in the ANSYS manual that the simpler flat-sided elements may be preferred
for material nonlinearity; nonlinearity is, however, supported by Solid92 and Solid95. The
Solid45 tetrahedral element is "not recommended" in the ANSYS manuals because of its
low accuracy in predicting stresses -- the higher order Solid92 and 95 tetrahedral option
elements do not have these warnings, and may be preferred in some cases.

Solid element 95 is a 20-node brick element. It also supports a prism, pyramid, and
tetrahedral element shape option. At revision 5.3 of ANSYS, my testing suggests that the
prism and pyramid forms are not generated by the automatic volume mesher. At the 5.5
revision, pyramids may be generated by the mesher at the interface between the brick
form and the tetrahedral form. My brief testing suggests that volume meshing can
successfully interface the brick and tetrahedral forms. The tetrahedra formed at the
interface with brick elements do not have mid-side nodes where they would not exist on
the matching brick elements, so there should not be a big mismatch problem at the
interface. These elements that have a mid-side node missing are considered to be
degenerate forms in the ANSYS manuals, and are not recommended in regions with high
stress gradients, or where exact stress values are important.

The Solid92 element is a 10-node tetrahedron -- it should produce the same results as the
tetrahedral (degenerate) form of the Solid95 element. However, it executes significantly
faster than the Solid95 tetrahedral form, presumably because the software is not dealing
with the redundant extra 10 nodes. In large models, this speedup will be of value to users,
so Solid92 elements should be considered where Solid95 tetrahedral forms would
otherwise be employed in large structural models.
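A hedged sketch of choosing between the two (the volume, type, and element numbers are hypothetical):

! Sketch only: Solid95 bricks where a mapped mesh is possible,
! Solid92 tetrahedra for a free-meshed volume.
et,1,solid95
et,2,solid92
type,1
mshape,0,3d         ! hexahedral (brick) element shape
mshkey,1            ! mapped meshing
vmesh,1             ! brick mesh of a regular, mappable volume
type,2
mshape,1,3d         ! tetrahedral element shape
mshkey,0            ! free meshing
vmesh,2             ! tetrahedral mesh of an irregular volume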
