Master Thesis
Author:
Diego Canales

Supervisors:
Prof. Antonio Huerta
Prof. Francisco Chinesta

October 2014
Many engineers and scientists are like kids: if we have a bigger hammer, we want to try bigger nails.

Adrien Leygue
Universitat Politècnica de Catalunya
Abstract
Master of Science in Numerical Methods in Engineering
by Diego Canales
Canales, D., Cueto, E., Feulvarch, E., & Chinesta, F. (2014, July). First Steps towards
Parametric Modeling of FSW Processes by Using Advanced Separated Representations:
Numerical Techniques. In Key Engineering Materials (Vol. 611, pp. 513-520).
Acknowledgements
This work would never have been carried out if Paco Chinesta had not said, one year ago, that what is started must be ended. For this, for his ideas, and for giving me the opportunity to work in his wonderful team, I would like to express my deepest and most sincere gratitude.
Thanks to the people from Zaragoza, Elías Cueto, Icíar Alfaro and David González, for their help with the Natural Element Method and for their warm welcome in their research group during a very profitable scientific stay.
I would also like to thank Antonio Huerta, for accepting the direction of this project and for writing the book from which I have learned the most in the last two years: Finite Element Methods for Flow Problems.
Thanks to Adrien Leygue and Felipe Bordeu, my everyday teachers. Their tools,
ideas and advice are the most fundamental part of my daily progress.
Thanks to Elisa, the woman of my life and the mother of my children, for her support, and for allowing me to work on weekends and holidays...
Contents
Abstract iv
Acknowledgements v
Contents vi
3 Robust Interpolants 31
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2 Introduction to the Natural Element Method . . . . . . . . . . . . . 33
3.2.1 Interpolation of Natural Neighbors . . . . . . . . . . . . . . 34
3.2.1.1 NEM: an academic example . . . . . . . . . . . . . 39
3.3 Stabilized Conforming Nodal Integration . . . . . . . . . . . . . . . 40
A Matlab Codes 67
A.1 PGD code for compression . . . . . . . . . . . . . . . . . . . . . . . 67
A.2 Analytical solution of the co-extrusion problem . . . . . . . . . . . 69
Bibliography 73
For Elisa, Luca . . . and Paula
Chapter 1
1.1 Introduction
Six initiatives have been financed by the European Research Council based on their potential for scientific breakthroughs with high social and industrial impact [1]. All of them are related to the field of information and communication technology (ICT). These are:
Guardian Angels for a Smarter Life project. This project aims to develop
tiny devices which act as personal assistants, delivering features that go
beyond human capabilities in complex situations.
The Human Brain Project. Its goal is to understand how the human brain
works.
Despite being different, there is a common ingredient to all of them: the emphasis on the necessity of using advanced simulation-driven science and engineering. These projects share some key aspects related to efficient computational sciences, and all of them face important limitations of today's computer capabilities and, notably, of today's simulation techniques.
Up to now, the solution of complex models, preferably fast and accurate, has been addressed by using high performance computing (HPC) and hyper-powerful computing platforms. But, at the same time, there is a need for a new generation of simulation techniques beyond HPC, because today many problems in science and engineering remain intractable. To illustrate this issue, let's consider the following challenging scenarios:
Thus, it is clear that fast computations are needed in science and engineering. However, this necessity is not new: from the Mesopotamian abaci to the nomograms of the 19th century, human beings throughout history have developed several tools for giving fast responses to a variety of questions.
Recently, Model Order Reduction (MOR) techniques opened the door to this possibility. The first developments were based on the Proper Orthogonal Decomposition (POD) [3]. The POD allows extracting the most significant characteristics of the solution in order to construct a reduced approximation basis, which can then be applied to solve models slightly different from the ones that served to define the basis. There is an extensive literature on this topic; the interested reader can refer to [4] and the numerous references therein.
The calculation of the reduced basis is not unique. There are many alternatives, such as the Goal-Oriented Model Constrained Optimization approach [5] or the Modal Identification Method [6]. Another family of model reduction techniques relies on reduced bases constructed by combining a greedy algorithm and an a priori error indicator [7].
The classic POD seeks the function φ(x) that maximizes

    λ = [ Σ_{m=1}^{M} ( Σ_{i=1}^{P} φ(x_i) u^m(x_i) )^2 ] / [ Σ_{i=1}^{P} (φ(x_i))^2 ],   (1.1)

whose maximization leads to the eigenvalue problem

    c φ = λ φ.   (1.2)

Here, the vector φ has as its i-th component φ(x_i), and c is known as the two-point correlation matrix

    c_ij = Σ_{m=1}^{M} u^m(x_i) u^m(x_j),   i.e.   c = Σ_{m=1}^{M} u^m (u^m)^T.   (1.3)
Defining the matrix Q, whose columns are the eigenvectors φ_i, and the diagonal matrix Λ containing the corresponding eigenvalues λ_i, we have

    c = Q Λ Q^T.   (1.5)
The aim is to obtain a reduced-order model; thus, we solve equation 1.2 and keep the N most important eigenvectors φ_i, that is, those associated with the dominating (largest) eigenvalues. The reduced model is worthwhile if N is found to be much lower than M. This is very often the case in practice, but not always.
If, for instance, a time-stepping scheme is used to compute the discrete solution u^{m+1} at time t^{m+1}, a linear algebraic system like

    G^m u^{m+1} = H^m   (1.7)

must be solved at each time step. Approximating the solution in the reduced basis B = (φ_1 ... φ_N),

    u^{m+1} ≈ Σ_{i=1}^{N} ζ_i^{m+1} φ_i = B ζ^{m+1},   (1.8)

and premultiplying the resulting system by B^T, one obtains

    B^T G^m B ζ^{m+1} = B^T H^m.   (1.10)
The coefficients ζ^{m+1} defining the solution of the reduced-order model are thus obtained by solving an algebraic system of size N instead of M. When N << M, as is the case in numerous applications, the solution of 1.10 is very convenient because of its reduced size. This POD-based approach works well if the sought solution is not far from those used for constructing the reduced basis. Another drawback is the need to solve the full problem several times. Those difficulties are responsible for the birth of a new generation of techniques, such as the Reduced Basis method or the Proper Generalized Decomposition. The latter will be described in detail in the next chapter, being a fundamental tool in this work.
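The POD pipeline of equations 1.1-1.10 can be sketched in a few lines of NumPy. The snapshot set below is a hypothetical 1D diffusion field invented for illustration (the data, sizes and variable names are assumptions, not taken from this work); the structure, however, follows the equations above: correlation matrix, dominant eigenvectors, reduced solve.

```python
import numpy as np

# Hypothetical snapshots u^m at P nodes, taken at M "time steps" of a
# full simulation: two decaying spatial modes, invented for illustration.
P, M = 100, 50
x = np.linspace(0.0, 1.0, P)
snapshots = np.column_stack(
    [np.sin(np.pi * x) * np.exp(-0.05 * m)
     + 0.1 * np.sin(3 * np.pi * x) * np.exp(-0.45 * m)
     for m in range(M)]
)  # shape (P, M)

# Two-point correlation matrix c = sum_m u^m (u^m)^T, eq. (1.3)
c = snapshots @ snapshots.T

# Eigenpairs of c, eq. (1.2); keep the N dominant eigenvectors as basis B
lam, Q = np.linalg.eigh(c)          # eigh returns ascending eigenvalues
order = np.argsort(lam)[::-1]
N = 2
B = Q[:, order[:N]]                  # reduced basis, P x N

# Reduced solve of a generic linear system G u = H, eq. (1.10):
# solve B^T G B zeta = B^T H, then recover u ~ B zeta
G = np.eye(P) + 0.01 * np.eye(P, k=1) + 0.01 * np.eye(P, k=-1)
H = snapshots[:, -1]
zeta = np.linalg.solve(B.T @ G @ B, B.T @ H)
u_reduced = B @ zeta
u_full = np.linalg.solve(G, H)
err = np.linalg.norm(u_reduced - u_full) / np.linalg.norm(u_full)
```

Since the snapshot set here is (by construction) spanned by two spatial modes, a basis of size N = 2 already reproduces the full solve accurately; a 2x2 system replaces a 100x100 one.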
In this section we show, through some examples, that the challenging scenarios previously mentioned are very often present in the simulation of material forming processes. In all of these cases the traditional simulation approaches fail (even classic MOR techniques), and other strategies should be followed. Here the
Chapter 1. MOR and new approaches to numerical simulations 7
PGD has proved to be a very promising technique in order to provide fast and
efficient solutions in these situations.
The PGD is covered in detail in chapter two, but we advance here its key features in order to understand how this technique has opened new insights in the simulation of material forming processes.
The PGD is an a priori model order reduction technique. This means that no previous information about the solution of the full problem is needed.

The PGD complexity scales linearly with the discretization of the dimensions. Thus, the curse of dimensionality does not occur.

The PGD allows introducing any parameter of the problem as an extra coordinate. Thus, the solution depends explicitly on these new parameters and becomes a meta-solution or computational vademecum [8].
It is well known that the finer descriptions of matter (ranging from quantum chemistry to statistical mechanics) lead to models defined in high-dimensional spaces. For example, in the field of composite manufacturing, fine rheological descriptions are needed to study some advanced manufacturing processes such as Resin Transfer Moulding (RTM). Here a very viscous fluid, the resin, with embedded fibers, flows into a mould. The velocity and pressure of the complex fluid, and the distribution and orientation of the fibers, play an important role in the final result and the cost of the process. Unfortunately, the necessary fine physical descriptions (kinetic theory models) cannot be solved with traditional methods, because the number of dimensions of the problem (typically one coordinate per particle) is prohibitive. The PGD emerges as an efficient alternative thanks to its robustness against the curse of dimensionality [9].
welding time, could be almost impossible with traditional approaches. When the space of parameters is huge and the computational cost associated with one test is important, the trial-and-error approach is simply unaffordable. The PGD has made a major contribution to this field, creating the concept of the computational vademecum: a meta-solution in which the parameters of the process appear explicitly. It is then possible to differentiate the solution, in an explicit way, with respect to the parameters of the process; thus, not only the optimization but also the sensitivity analysis of the problem with respect to any parameter of interest is obtained in a direct and simple way. Moreover, these computational vademecums, because they are computed off-line, can be embedded in deployed devices, opening the field of DDDAS and online control in real time with light computing platforms. This idea has been used in the active control of tool paths in machining processes [10].
Finally, some of the models in material forming are defined in degenerated do-
mains like plate-type geometries. For instance, in automated tape placement
(ATP) manufacturing process of composites, the physics through the thickness
of the layers (the degenerated dimension) is very rich. A deep understanding of
this physics is fundamental in order to design and optimize this process. Tradi-
tional mesh-based methods need very fine discretizations leading to a very high
computational cost. The PGD allows computing the 3D solution of this problem
with the computational cost of a 2D problem [11].
This introduction has provided an updated view of the new challenges in numerical simulation in science and engineering. It has been shown how the current simulation techniques may be insufficient to meet them in full. The MOR techniques are an appealing alternative, which has been illustrated through the classic POD. We have shown, with some examples, that the simulation of manufacturing and material forming processes leads to many of the challenges mentioned above. This partially motivated the emergence of a new generation of ROM methods such as the PGD. After this introductory chapter, this document is organized as follows.
Through them, a deeper comprehension of the method and its implementation was
gained.
Chapter four contains the major contributions of this work. A new, efficient updated-Lagrangian general strategy, with possible application to the simulation of material forming processes, is proposed. This technique incorporates most of the ingredients presented in the previous chapters (PGD and SCNI). A second original contribution consists in the development and validation of a new way of imposing Dirichlet boundary conditions in a PGD framework, very suitable when the proposed spatial decomposition is used.
In the last chapter the conclusions of this work and the main scientific and engi-
neering perspectives are collected.
Chapter 2
2.1 Introduction
In the previous chapter, a general idea about ROM techniques and their impor-
tance for fast computation in engineering has been presented. We have illustrated,
using classic POD, how these techniques are able to extract the relevant informa-
tion from a set of particular solutions (from experiences or simulations) and to
construct an approximate solution involving much less information. In this way,
the reduced solution is constructed a posteriori through a set of solutions of the
complete problem.
However, one wonders whether it is possible to construct these reduced approximations a priori, that is, without having to solve the full problem several times or relying on previously available experimental data. This possibility becomes necessary when facing very complex problems, or when the number of technological and material parameters makes it unfeasible to generate a set of particular solutions representative of the complete solution space (see the interesting and didactic TED talk [12] for more details). Very often this is the case when material forming processes are simulated in order to optimize them: the process is too complex and the number of combinations of the parameters to be tested simply explodes.
Chapter 2. Proper Generalized Decomposition 12
Let's consider a problem (typically modelled using PDEs) defined in a D-dimensional space with coordinates (x_1, ..., x_D). These dimensions are not only spatial but may include material parameters, boundary conditions or even the time variable, as will be explained in detail later. The PGD yields an approximate solution u_N in the separated form

    u_N(x_1, ..., x_D) = Σ_{i=1}^{N} F_i^1(x_1) · ... · F_i^D(x_D),   (2.1)

where the involved functions F_i^j(x_j) and the number of terms N of the sum are unknown a priori. Henceforward, to alleviate the notation, u_N will be denoted just u.
It is important to stress that this idea is not new: the well-known method of separation of variables is present in any introductory course on PDEs. The difference is that, in the PGD, not only the weights of the functions of the basis are unknown, but the functional basis itself is unknown a priori. This is the reason why the PGD reduced basis is able, in general, to capture the physics of the problem better than other ROM techniques, especially when the available initial information is not complete enough. The price to pay is the nonlinearity of the formulation but, fortunately, the PGD is in practice equipped with simple and robust algorithms to deal with this issue.
Assuming that the first n modes have already been computed, the enrichment step seeks

    u^{n+1}(x_1, ..., x_D) = Σ_{i=1}^{n} F_i^1(x_1) · ... · F_i^D(x_D) + F_{n+1}^1(x_1) · ... · F_{n+1}^D(x_D).   (2.2)
This computation is carried out using the weak form of the PDE, and the expres-
sion 2.2 constitutes the trial function. The test function is constructed using the
variations of the enrichment mode,
    u* = F*_{n+1}^1(x_1) · F_{n+1}^2(x_2) · ... · F_{n+1}^D(x_D)
       + F_{n+1}^1(x_1) · F*_{n+1}^2(x_2) · ... · F_{n+1}^D(x_D)
       + ...
       + F_{n+1}^1(x_1) · F_{n+1}^2(x_2) · ... · F*_{n+1}^D(x_D).   (2.3)
After introducing the test and trial functions, the problem to solve becomes nonlinear, even if the original problem was linear. This means that an iterative solver is needed at each enrichment step. The alternating directions fixed-point algorithm is one of these solvers, and it has proved to be simple and robust in many practical applications. We will present it later in this chapter.
It is fundamental to reflect on the implications of the use of PGD on the complexity of the problem. If M nodes are used to discretize each coordinate space Ω_j, the total number of PGD unknowns is N × M × D. With standard mesh-based discretization methods, the number of degrees of freedom is M^D. That means that the complexity of PGD scales linearly with the number of dimensions of the problem instead of exponentially, as in traditional methods. The exponential increase of degrees of freedom constitutes the so-called curse of dimensionality in the literature. PGD provides an elegant way to circumvent this curse, which partially explains its recent popularity. Moreover, it has been observed that the number of terms necessary to obtain an approximate solution with a certain degree of accuracy does not depend on the number of dimensions of the problem, but rather on the separable character of the exact solution.
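The linear-versus-exponential scaling is easy to quantify. The short sketch below uses illustrative numbers only (M = 100 nodes per coordinate, N = 10 modes, assumptions chosen for the sake of the comparison):

```python
# Degrees of freedom: mesh-based M^D versus separated N * M * D
M, N = 100, 10
for D in (2, 3, 10, 20):
    mesh_dofs = M ** D          # standard mesh-based discretization
    pgd_dofs = N * M * D        # separated (PGD) representation
    print(f"D = {D:2d}: mesh {mesh_dofs:.1e}  vs  PGD {pgd_dofs}")
```

For D = 20 the mesh-based count is 10^40 unknowns, far beyond any computer, while the separated representation needs only 20,000.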
In many of the applications referred to in the literature and in the present work, the number of terms of the PGD approximation is of the order of ten. A good indicator of the separability of the PGD solution is the separability, when available, of a particular solution of the problem. From an engineering point of view, a solution has good separability when a good approximation can be found with a reasonable number of terms.
Chapter 2. Proper Generalized Decomposition 14
Once the general idea of the PGD has been presented, it is very convenient to point out the main differences from the POD-based ROM methods introduced in the first chapter.
On the other hand, the PGD does not rely on the availability of prior solutions of the problem. The reduced approximation basis is unknown a priori and is constructed sequentially, based on the variational formulation of the problem. The PGD can also be seen as an efficient solver; in certain applications, such as those included in this document, this feature constitutes the main reason for using the PGD.
Finally, it should be noted that the PGD is not linked to any particular discretiza-
tion method. Obviously, its implementation with FEM is the most popular one
(because FEM is widely used) but it can be implemented using Finite Differences,
Meshless methods or any other.
Let's illustrate the method with the transient heat equation

    ∂u/∂t − k Δu = f   (2.4)
with homogeneous initial and boundary conditions. In this example we will also explain one of the most important features of the PGD: the possibility of introducing general parameters of the problem as extra coordinates. It means that, for example, the material properties or the applied forces are no longer parameters but coordinates of the problem. Thus, the solution can be particularized later for any value of each parameter (coordinate) inside its domain of definition. One solves a more complex problem off-line, which is then evaluated on-line (with negligible cost) for any particular case.
The price to pay is an increase of the problem dimensionality and the resulting nonlinear formulation. The first drawback is overcome by the fact that the PGD formulation scales linearly with the number of dimensions, so it is not a major issue. The second one is circumvented using an appropriate iterative solver: in practice, the alternating directions fixed-point algorithm.
In this example we introduce the time and the conductivity of the material as extra coordinates. This implies that the desired solution will have the following form:

    u(x, t, k) ≈ Σ_{i=1}^{N} X_i(x) · T_i(t) · K_i(k).   (2.6)
and one desires to compute a new functional product X_n(x) · T_n(t) · K_n(k), which we write as R(x) · S(t) · W(k) for notational simplicity. Thus, the solution at this step n reads:

    u^n = u^{n−1} + R(x) · S(t) · W(k).   (2.8)
In order to compute the new enrichment functions we consider the following test function:

    u* = R*(x) · S(t) · W(k) + R(x) · S*(t) · W(k) + R(x) · S(t) · W*(k).   (2.9)
Hence, the trial and test functions are given by equations 2.8 and 2.9, respectively. Introducing them into the variational form, the following nonlinear problem is obtained:

    ∫_{Ω×I_t×I_k} u* · ( R W dS/dt − k ΔR S W ) dx dt dk
        = ∫_{Ω×I_t×I_k} u* · f dx dt dk
        − Σ_{i=1}^{n−1} ∫_{Ω×I_t×I_k} u* · ( X_i K_i dT_i/dt − k ΔX_i T_i K_i ) dx dt dk,   (2.10)
where the coordinate dependencies have been removed to alleviate the notation. Additionally, we will consider that the source function can be expressed in a separated form:

    f ≈ Σ_{j=1}^{m} f_j^x(x) · f_j^t(t) · f_j^k(k).   (2.11)
As we have indicated before, this problem is solved with the alternating directions fixed-point algorithm, which works as follows:

I With the assumed S(t) and W(k), R(x) is computed solving:

    α^x β^x ∫_Ω R* R dx − γ^x δ^x ∫_Ω R* ΔR dx
        = Σ_{j=1}^{m} ( ∫_{I_t} S f_j^t dt ) ( ∫_{I_k} W f_j^k dk ) ∫_Ω R* f_j^x dx
        − Σ_{i=1}^{n−1} [ α_i^x β_i^x ∫_Ω R* X_i dx − γ_i^x δ_i^x ∫_Ω R* ΔX_i dx ],   (2.12)

where the integrals have been split and the following scalar values are known at this point:

    α^x = ∫_{I_k} W^2 dk          α_i^x = ∫_{I_k} W K_i dk
    β^x = ∫_{I_t} S (dS/dt) dt    β_i^x = ∫_{I_t} S (dT_i/dt) dt
    γ^x = ∫_{I_k} k W^2 dk        γ_i^x = ∫_{I_k} k W K_i dk
    δ^x = ∫_{I_t} S^2 dt          δ_i^x = ∫_{I_t} S T_i dt
II With the assumed W(k) and the previously computed R(x), S(t) is obtained solving:

    α^t β^t ∫_{I_t} S* (dS/dt) dt − γ^t δ^t ∫_{I_t} S* S dt
        = Σ_{j=1}^{m} ( ∫_Ω R f_j^x dx ) ( ∫_{I_k} W f_j^k dk ) ∫_{I_t} S* f_j^t dt
        − Σ_{i=1}^{n−1} [ α_i^t β_i^t ∫_{I_t} S* (dT_i/dt) dt − γ_i^t δ_i^t ∫_{I_t} S* T_i dt ],   (2.13)

where the coefficients α^t, β^t, γ^t, δ^t and their i-indexed counterparts are defined analogously, now involving integrals over Ω and I_k.
III Finally, using R(x) and S(t) from the previous steps, W(k) is computed solving:

    α^k β^k ∫_{I_k} W* W dk − γ^k δ^k ∫_{I_k} k W* W dk
        = Σ_{j=1}^{m} ( ∫_Ω R f_j^x dx ) ( ∫_{I_t} S f_j^t dt ) ∫_{I_k} W* f_j^k dk
        − Σ_{i=1}^{n−1} [ α_i^k β_i^k ∫_{I_k} W* K_i dk − γ_i^k δ_i^k ∫_{I_k} k W* K_i dk ],   (2.14)

where

    α^k = ∫_Ω R^2 dx              α_i^k = ∫_Ω R X_i dx
    β^k = ∫_{I_t} S (dS/dt) dt    β_i^k = ∫_{I_t} S (dT_i/dt) dt
    γ^k = ∫_Ω R ΔR dx             γ_i^k = ∫_Ω R ΔX_i dx
    δ^k = ∫_{I_t} S^2 dt          δ_i^k = ∫_{I_t} S T_i dt
These three steps are repeated in a loop until the convergence of the new functional product is achieved. It is important to remark that equation 2.12 is a regular second-order PDE which can be solved with any method; in practice, integration by parts is applied and then linear interpolations can be used. Equation 2.13 is a first-order ODE if the strong form is recovered, and can be solved with any temporal integrator. Equation 2.14 does not involve any derivatives: it is a linear system of equations.
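The three alternating steps are easiest to grasp on a stripped-down version of the problem. The sketch below drops the time coordinate and solves the steady parametric equation −k u''(x) = f(x) with u(x, k) ≈ Σ X_i(x) K_i(k); finite differences in x, a grid in k and trapezoidal quadrature are illustrative assumptions of this sketch, not choices taken from the thesis. Step I becomes a small linear system for the spatial function; step II becomes a pointwise formula for the parametric function (no derivatives in k, as for equation 2.14):

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal quadrature
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Steady parametric diffusion: -k u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with the conductivity k treated as an extra coordinate of the solution.
n = 49
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # discrete -d2/dx2
f = np.ones(n)
k = np.linspace(1.0, 10.0, 101)

X_modes, K_modes = [], []
for mode in range(5):                       # enrichment (exterior) loop
    W = np.ones_like(k)                     # initial guess for the k-function
    rhs = trap(W, k) * f - sum(trap(k * W * Ki, k) * (A @ Xi)
                               for Xi, Ki in zip(X_modes, K_modes))
    if np.linalg.norm(rhs) < 1e-8 * np.linalg.norm(f):
        break                               # a new mode would not contribute: stop
    for _ in range(25):                     # alternating fixed point (interior) loop
        # step I: with W(k) fixed, solve a boundary-value problem for R(x)
        rhs = trap(W, k) * f - sum(trap(k * W * Ki, k) * (A @ Xi)
                                   for Xi, Ki in zip(X_modes, K_modes))
        R = np.linalg.solve(trap(k * W**2, k) * A, rhs)
        # step II: with R(x) fixed, W(k) follows pointwise in k
        num = R @ f - k * sum((R @ A @ Xi) * Ki for Xi, Ki in zip(X_modes, K_modes))
        W = num / (k * (R @ A @ R))
    X_modes.append(R)
    K_modes.append(W)

# particularize the separated solution at one parameter value and compare
idx = 40
u_pgd = sum(Xi * Ki[idx] for Xi, Ki in zip(X_modes, K_modes))
u_ref = np.linalg.solve(k[idx] * A, f)
err = np.linalg.norm(u_pgd - u_ref) / np.linalg.norm(u_ref)
```

Because the exact solution of this toy problem is u(x, k) = w(x)/k with −w'' = f, it is perfectly separable and the enrichment loop stops after a single mode; the stopping test on the residual source plays the role of the residual check discussed later in this chapter.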
Before finishing this section, we will give a brief explanation about the imposition of non-homogeneous boundary conditions. The Neumann boundary condition emerges explicitly in the variational formulation as a new integral operator of a known field. The challenge is to express this field as a product of functions of the different coordinates of the problem. In practice, this is carried out using the Singular Value Decomposition (SVD), the Higher-Order SVD (HOSVD) or a PGD approximation (equivalent to the SVD in two-dimensional problems). An example of this PGD approximation is shown in section 2.2.1. On the other hand, to impose non-homogeneous Dirichlet boundary conditions, the easiest way is to artificially construct D initial PGD modes that satisfy the boundary conditions,

    u(x, t, k) ≈ Σ_{i=1}^{D} X_i(x) · T_i(t) · K_i(k) + Σ_{i=D+1}^{N} X_i(x) · T_i(t) · K_i(k),   (2.15)

and to solve the associated homogeneous problem to compute the rest of the modes. Another method to impose essential boundary conditions, developed during this work, will be presented in chapter four.
The importance of the concept of separability of a field has been noted previously. An approximate PGD solution will contain few terms if the field is strongly separable, or a great number of them if it is not. The SVD and the HOSVD provide a good test to check this feature of the solution. Moreover, in the weak PGD formulation all the involved fields should be expressed in a separated form; otherwise, the complete separation of the integral operators could not be performed. For these reasons the SVD plays a very important role in the practical implementation of the PGD.
    M = U Σ V^T,   (2.16)

which can be truncated after r terms as M ≈ Σ_{i=1}^{r} σ_i U_i V_i^T, where U_i and V_i denote the columns of the matrices U and V. We say that the matrix M is separable if it is possible to find a small number of terms r, with r << N, giving a good approximation of M. If this is the case, the amount of information diminishes significantly.
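This separability test is immediate in practice. The sketch below (the sampled field and the tolerance are illustrative assumptions) counts the significant singular values of a matrix of samples M_ij and reconstructs the field from the truncated expansion:

```python
import numpy as np

# Sample a field on an N x N grid and test its separability via the SVD
N = 200
x = np.linspace(0.0, 1.0, N)
y = np.linspace(0.0, 1.0, N)
X, Y = np.meshgrid(x, y, indexing="ij")
M = np.sin(np.pi * X) * np.cos(np.pi * Y) + 0.5 * X**2 * np.exp(-Y)  # two separated terms

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# number of significant singular values = number of separated terms needed
r = int(np.sum(s > 1e-10 * s[0]))

# truncated reconstruction M ~ sum_{i<=r} s_i U_i V_i^T
M_r = (U[:, :r] * s[:r]) @ Vt[:r]
rel_err = np.linalg.norm(M - M_r) / np.linalg.norm(M)
```

Since the sampled field was built as a sum of two products of one-dimensional functions, only two singular values are significant: 200 x 200 = 40,000 samples are compressed into 2 x 2 x 200 = 800 values with negligible error.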
Given its importance in this work, the PGD constructor is presented briefly in the next lines. Additionally, we will show an interesting result: SVD and PGD are equivalent in two-dimensional cases. It can also be proved that the decomposition is optimal in that case [14].

In order to emphasize the connection between PGD and SVD, we look for the one-term PGD approximation X(x) · Y(y) of a given function f(x, y).
The fixed-point algorithm, as explained before, consists in solving iteratively the following two equations,

    ∫_{Ω_x×Ω_y} X* Y (X Y − f(x, y)) dx dy = 0,   (2.21)

    ∫_{Ω_x×Ω_y} X Y* (X Y − f(x, y)) dx dy = 0,   (2.22)

which yield

    X = ( ∫_{Ω_y} Y f dy ) / ( ∫_{Ω_y} Y^2 dy )   (2.23)

and

    Y = ( ∫_{Ω_x} X f dx ) / ( ∫_{Ω_x} X^2 dx ).   (2.24)
After discretization,

    X = M^T Y / (Y^T Y),   (2.25)

and

    Y = M X / (X^T X),   (2.26)

where it is easy to see that, at convergence, X and Y are eigenvectors of M^T M and M M^T, respectively. These coincide with the right- and left-singular vectors in the SVD decomposition of M.
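This equivalence can be checked numerically. The sketch below (the sampled two-term function is an illustrative assumption) iterates equations 2.25-2.26 and compares the resulting rank-one product with the dominant SVD triplet σ_1 U_1 V_1^T:

```python
import numpy as np

# Matrix of samples of a two-term separable function:
# rows sample the y coordinate, columns sample the x coordinate
x = np.linspace(0.0, 1.0, 40)
y = np.linspace(0.0, 1.0, 60)
M = (3.0 * np.outer(np.sin(np.pi * y), np.sin(np.pi * x))
     + 1.0 * np.outer(np.cos(np.pi * y), np.cos(np.pi * x)))     # 60 x 40

# Alternating fixed point of eqs. (2.25)-(2.26)
X = np.ones(40)
Y = np.ones(60)
for _ in range(200):
    X = M.T @ Y / (Y @ Y)
    Y = M @ X / (X @ X)
pgd_rank1 = np.outer(Y, X)

# Dominant SVD triplet sigma_1 U_1 V_1^T
U, s, Vt = np.linalg.svd(M)
svd_rank1 = s[0] * np.outer(U[:, 0], Vt[0])

gap = np.linalg.norm(pgd_rank1 - svd_rank1) / np.linalg.norm(svd_rank1)
```

The iteration is nothing but a power method on M^T M and M M^T, so the one-term PGD product converges to the best rank-one approximation given by the SVD, regardless of how X and Y are individually scaled.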
It is not the purpose of this work to carry out a deep and complete review of the PGD since its creation. The necessary references are included when needed for a better understanding of the different concepts. Moreover, the PGD is an almost mature technique, and very interesting reviews have been published in recent years, such as [13] or [8]. Furthermore, some interesting books about the PGD can be found. For a comprehensive introduction and description of the PGD, the interested reader can consult [15]. To get a perspective on some of its practical applications, [16] is a good reference. Finally, in the recent book [4], a deeper presentation of the PGD in the context of Reduced Basis Methods can be found.
Thus, we present here some of the recent trends in PGD and the latest developments that appeared during the elaboration of this master thesis.

One of the most prolific PGD applications, because of the quality of the results and its diverse applicability, is the so-called in-plane-out-of-plane decomposition. The first applications in linear elasticity were published by Brice et al. in [11]. More recently, it has been used in non-Newtonian squeeze flows in porous media [18]. Nowadays, it is also being used to study the nucleation and propagation of defects in composite materials [19].
Traditionally, in engineering, this problem has been solved using simplified models defined in domains with fewer dimensions. This is the case of the classical theories of strength of materials, where three-dimensional solids are approximated by 1D or 2D models (beams, plates and shells) [20]. These approximations involve some kinematic and mechanical hypotheses on the evolution of the solution through the degenerated dimension. Then, the solution is only valid at those points which satisfy the Saint-Venant principle.
The PGD takes advantage of the separation of variables to decrease the dimensionality of the operators to compute. As we have seen previously, it is possible to seek either a fully separated representation,

    u(x) ≈ Σ_{i=1}^{N} X_i(x) · Y_i(y) · Z_i(z),   (2.27)

or an in-plane-out-of-plane representation,

    u(x) ≈ Σ_{i=1}^{N} X_i(x, y) · Z_i(z).   (2.28)
In the first case, a complete separation is carried out. This means that we also
need a fully-separated geometry, which is complicated to find in practice. However,
the plate-type decomposition or the in-plane-out-of-plane decomposition is much
more versatile: many interesting geometries can be generated using the extrusion
of a generic 2D section. This technique is used throughout this work and it will
be explained in detail.
Let's illustrate the technique with a simple problem: a linear elastic problem in a plate-shaped domain Ω = Ω_xy × I_z. Assuming the following separated form of the displacement field,

    u(x, y, z) = [ u(x, y, z), v(x, y, z), w(x, y, z) ]^T
               ≈ Σ_{i=1}^{N} [ u_xy^i(x, y) · u_z^i(z),  v_xy^i(x, y) · v_z^i(z),  w_xy^i(x, y) · w_z^i(z) ]^T,   (2.29)

where u_xy^i(x, y), v_xy^i(x, y) and w_xy^i(x, y) are functions of the in-plane coordinates, whereas u_z^i(z), v_z^i(z) and w_z^i(z) are functions involving the thickness coordinate.
where K is the Hooke tensor, f_d represents the body forces, and F_d the forces applied on the boundary Γ_N. The strains have the following separated expression:

    ε(u(x, y, z)) ≈ Σ_{i=1}^{N} [ u_xy,x^i · u_z^i,
                                  v_xy,y^i · v_z^i,
                                  w_xy^i · w_z,z^i,
                                  u_xy,y^i · u_z^i + v_xy,x^i · v_z^i,
                                  u_xy^i · u_z,z^i + w_xy,x^i · w_z^i,
                                  v_xy^i · v_z,z^i + w_xy,y^i · w_z^i ]^T.   (2.31)
Assuming that the first n modes have been computed, we want to enrich the solution by adding another functional product:

    u^{n+1}(x, y, z) = u^n(x, y, z) + [ R_u(x, y) · S_u(z),  R_v(x, y) · S_v(z),  R_w(x, y) · S_w(z) ]^T.   (2.32)
The last expression conforms the trial function. As we have seen in the previous section, the test function has the following form:

    u*(x, y, z) = [ R_u*(x, y) · S_u(z) + R_u(x, y) · S_u*(z),
                    R_v*(x, y) · S_v(z) + R_v(x, y) · S_v*(z),
                    R_w*(x, y) · S_w(z) + R_w(x, y) · S_w*(z) ]^T.   (2.33)
The weak form, after introducing the trial and test functions, reads:

    ∫_{Ω_xy×I_z} ε(u*(x, y, z)) : K : ε(u^{n+1}(x, y, z)) dx dy dz
        = ∫_{Ω_xy×I_z} u*(x, y, z) · f_d dx dy dz + ∫_{Γ_N} u*(x, y, z) · F_d dΓ.   (2.34)
It is easy to check that, due to the product of unknown functions, the problem has become nonlinear. To solve it, the alternating directions fixed-point algorithm previously described is used.
Given an initial value S^(0)(z) of S(z), arbitrarily chosen, all z-dependent functions are known. Equation 2.34 therefore reduces to a 2D problem where the three components of R(x, y) are the unknown fields. Its solution yields R^(1)(x, y), a first approximation of R(x, y). Then, using the just-computed R^(1)(x, y) in 2.34, we similarly obtain a 1D problem which allows computing the three components of S^(1)(z), the next approximation of S(z). This fixed-point loop keeps running until reaching convergence, i.e.:

    ∫_{Ω_xy×I_z} Σ_{i=1}^{3} ( R_i^(j)(x, y) · S_i^(j)(z) − R_i^(j−1)(x, y) · S_i^(j−1)(z) )^2 dx dy dz ≤ ε.   (2.35)
One continues adding new PGD modes until a certain degree of approximation is reached. In general, this is achieved by imposing a tolerance on the residual of the PDE after a new mode is added. In practice, the residual is not computed after each new incorporation, because it is computationally expensive; depending on the application, the residual is computed every 5, 10 or more PGD modes.
The exterior loop, or enrichment loop, which adds new PGD modes until the residual of the PDE is small enough.

The interior loop, in which the nonlinear problem for the corresponding mode is solved iteratively. The nonlinear solver is the fixed-point algorithm already presented, and it stops when the new mode has converged.
Of course there are some sophisticated variations such as the residual minimization
for nonsymmetrical problems. This is beyond the scope of this work, the interested
reader can consult [15] for more details.
    −Δu = 2(1 − y^4)(1 − z^6) + 12y^2 (1 − x^2)(1 − z^6) + 30z^4 (1 − x^2)(1 − y^4)   in Ω = [−1, 1]^3,   (2.36)

with homogeneous Dirichlet boundary conditions.

The source term was artificially constructed in order to compare the results with the analytical solution u = (1 − x^2)(1 − y^4)(1 − z^6). It admits the in-plane-out-of-plane separated form

    f = Σ_{i=1}^{3} f_i^xy · f_i^z,   (2.38)

where f_1^xy = 2(1 − y^4), f_1^z = 1 − z^6; f_2^xy = 12y^2 (1 − x^2), f_2^z = 1 − z^6; and f_3^xy = (1 − x^2)(1 − y^4), f_3^z = 30z^4.
The operators of the weak form, once the trial and test functions are introduced in the variational formulation, are obtained after separating the integration domains. These operators can be assembled and integrated as in any FEM solver.

Figure 2.2 shows a section of the PGD solution; figure 2.3 presents the distribution of the relative error. With only two PGD modes, the maximum relative error is of the order of 5%.
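The manufactured solution of equation 2.36 can be sanity-checked numerically. The short sketch below (sample points chosen arbitrarily) evaluates the Laplacian of the candidate solution by central second differences and verifies that Δu + f vanishes, and that u is zero on the boundary faces:

```python
def u(x, y, z):
    # candidate analytical solution (vanishes on the boundary of [-1, 1]^3)
    return (1 - x**2) * (1 - y**4) * (1 - z**6)

def f(x, y, z):
    # right-hand side of -Lap(u) = f as written in (2.36)
    return (2 * (1 - y**4) * (1 - z**6)
            + 12 * y**2 * (1 - x**2) * (1 - z**6)
            + 30 * z**4 * (1 - x**2) * (1 - y**4))

# central second differences: Lap(u) + f should be ~0 at interior points
h = 1e-4
max_err = 0.0
for (a, b, c) in [(-0.3, 0.5, 0.1), (0.2, -0.7, 0.6), (0.0, 0.0, 0.9)]:
    lap = ((u(a + h, b, c) - 2 * u(a, b, c) + u(a - h, b, c))
           + (u(a, b + h, c) - 2 * u(a, b, c) + u(a, b - h, c))
           + (u(a, b, c + h) - 2 * u(a, b, c) + u(a, b, c - h))) / h**2
    max_err = max(max_err, abs(lap + f(a, b, c)))

boundary_ok = u(1, 0.3, 0.2) == 0 and u(0.3, -1, 0.2) == 0 and u(0.3, 0.2, 1) == 0
```

The residual is of the order of the finite-difference truncation error, confirming that the separated source in (2.38) is consistent with the exact solution used for the error plots.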
A slip condition is imposed on the top wall. On the cylindrical obstacle a tangential velocity is imposed, whose magnitude varies linearly from the top wall to the bottom. The fluid enters the channel with a constant unidirectional velocity profile.
Because of their importance in the present work, the Stokes equations and the
penalized formulation are presented starting from the Navier-Stokes equations.
More details can be found in [21].
where b are the volumetric forces. In the case of highly viscous flows, the convective terms in the Navier-Stokes equations can be neglected. If we additionally assume a stationary process, we obtain:

    −∇·σ = b    in Ω,   (2.44)
    ∇·v = 0     in Ω,   (2.45)
    v = v_D     on Γ_D,   (2.46)
    σ·n = t     on Γ_N.   (2.47)

A constitutive equation is needed to close the problem, σ = σ(p, v), for instance the linear Stokes law:

    σ = −p I + 2η ∇^S v,   (2.48)

where ∇^S denotes the symmetric gradient and η the dynamic viscosity.
This is a vectorial problem, where the primary variable (the unknown) is the velocity field. For a Newtonian fluid, this is completely analogous to a linear elastic problem where the unknown is the displacement field. The abstract form of the variational problem is given in 2.49, where w and q are the test functions corresponding to velocity and pressure, respectively, defined in their appropriate functional spaces. The operators of the abstract form 2.49 have the following definitions:

    a(w, v) = ∫_Ω w_(i,j) C_ijkl v_(k,l) dΩ,   (2.50)

    b(v, q) = ∫_Ω q ∇·v dΩ.   (2.51)

In the penalized formulation, the incompressibility constraint is relaxed as

    ∇·v(λ) = −p(λ)/λ,   (2.52)

and, after the discretization of the operators, the problem to solve is:

    (K + λ K_λ) u(λ) = f,   (2.54)
and the trial and test functions are constructed as explained before.
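The structure of the penalized system 2.54 can be illustrated on a small generic constrained problem: minimize (1/2) u^T K u − f^T u subject to B u = 0, where B is a stand-in for the discrete divergence operator. The matrices below are arbitrary illustrative data, not a Stokes discretization; the point is only that the penalty solution approaches the exactly constrained (Lagrange-multiplier) one as λ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)            # SPD "stiffness" matrix
B = rng.standard_normal((m, n))        # discrete constraint (stand-in for div v = 0)
f = rng.standard_normal(n)

# Reference: exactly constrained solution via Lagrange multipliers,
# [K B^T; B 0] [u; p] = [f; 0]
S = np.block([[K, B.T], [B, np.zeros((m, m))]])
u_exact = np.linalg.solve(S, np.concatenate([f, np.zeros(m)]))[:n]

# Penalized formulation (K + lambda * K_lambda) u = f, with K_lambda = B^T B
errs = []
for lam in (1e2, 1e4, 1e6):
    u_pen = np.linalg.solve(K + lam * B.T @ B, f)
    errs.append(np.linalg.norm(u_pen - u_exact) / np.linalg.norm(u_exact))
```

The error decreases roughly like 1/λ, which is why the penalty parameter must be taken large, at the price of a worse-conditioned system.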
Chapter 3
Robust Interpolants
3.1 Introduction
The classic finite element methods have been widely used in industry and research since their first developments in the 1960s and 1970s. One of their general features is that they rely on the use of polynomial interpolants with compact support. These interpolants are diverse, having different orders of consistency and different properties. In the last decades, a plethora of modified FE-based methods has appeared, introducing certain modifications of these interpolants (or shape functions), such as the interesting X-FEM method introduced by Belytschko et al. in 1999 [23].
In general, in FEM, the interpolant functions are nonzero only in a small
region around the discretization node (compact support). This region is
determined by the connectivity established between the nodes, which results
from the mesh used in the discretization, and is called an element. It is well
known that the geometrical aspects of the elements have a huge influence on
the accuracy of the method [24].
Remeshing strategies. If at some point the mesh is no longer good enough, the
old mesh is substituted with a new one. This can be seen as a brute-force
method, but in fact it is an interesting research field, because there are very
elegant and sophisticated techniques to remesh in a smart way, such as adaptive
remeshing [25]. Classical updated-Lagrangian approaches often use this
technique.
Moving-mesh strategies. These strategies try to avoid very distorted meshes
(or diminish the remeshing frequency) by establishing a discretization that
evolves in a convenient way. The formulation of the problem changes due to the
inclusion of convective terms that depend on the movement of the mesh. In
general, this technique is called the Arbitrary Lagrangian-Eulerian (ALE)
approach.
We have not included in this classification the family of fixed-mesh methods
because, by construction, they do not face any mesh issue. However, in
general, these methods have other drawbacks that make them not very suitable
for certain applications, such as the simulation of material forming processes.
Among them, the following can be highlighted:
The history of the material particles has to be reconstructed after the
simulation. This process can be computationally expensive. It is especially an
issue when the physics of the problem depends on the history of the material
points, as in FSW.
Therefore, this last family of methods is not examined in the present work.
It should be noted that the use of a certain general framework (Eulerian,
Lagrangian, ALE) does not imply the use of a specific interpolation method.
However, in general, meshless and robust methods are preferred when large
distortions occur, typically in Lagrangian methods. Although they can also be
used in an Eulerian framework, the extra cost of computing the interpolant
functions would probably not pay off there.
Among the different meshless methods, the NEM has proved to be very suitable
for material forming processes [26]. Additionally, the robustness of FE-SCNI,
together with its simple implementation, makes it a very appealing method, and
for this reason it is also covered in this chapter.
Meshless methods are alternative techniques to the finite element method for
solving partial differential equations. While the finite element method derives
an approximation based on the elements, using their connectivity, meshless
methods allow us to derive an approximation at any point from the information
provided by the surrounding nodes. In these approaches the concept of an element
is thus no longer used: the connectivity between the nodes is no longer defined
by the mesh, but only by the concept of a domain of influence.
These methods were developed with the aim of avoiding the numerical problems
involved in mesh construction and its degradation. This is the case of manufactur-
ing processes such as extrusion, injection, or FSW. These methods also facilitate
the refining of the solution in certain areas of the domain, simply by adding new
nodes without the geometrical constraints known within the framework of finite
elements and the problems related to precise projection of the fields between the
original and the refined mesh.
Although structures with a geometrical character are necessary, these do not in-
terfere, in general, with the quality of the solution and thus can be built indepen-
dently. The term grid instead of mesh to refer to the cloud of nodes is preferred
by many authors to emphasize this feature. Among the many meshless methods
available nowadays, the Natural Element Method possesses some noteworthy ad-
vantages over other meshless methods. The NEM proposes an interpolation based
on the concepts of the Voronoi diagram and its natural neighbors.
The choice of the support of the shape functions is automatic and optimal, in
the sense that node vicinity is taken into account as much as possible to
define the interpolation. With regard to the imposition of boundary conditions
on convex domains, it is direct and proceeds as in finite elements: the
influence of internal nodes is cancelled on the boundary. In non-convex
domains, different techniques can be used, such as alpha shapes [27] or
constrained Voronoi diagrams [28].
It follows that the NEM combines the advantages of meshless and finite element
approaches. Moreover, these methods seem promising for complex simulations
because the evolution of internal variables can be computed along the nodal
trajectories without requiring field projections.
The Voronoi diagram was originally introduced by Descartes in 1644 and studied
and extended by the mathematicians Dirichlet and Voronoi in 1850 and 1907
respectively. It has been applied in many scientific disciplines.
Given a set of nodes in R^n, the Voronoi diagram is a partition of the space
into regions T_I, each associated with a node, such that any point inside one
of these regions is nearer to the node defining the cell than to any other
node. Formally it is given by

T_I = { x ∈ R^n : d(x, x_I) < d(x, x_J), ∀ J ≠ I }.
By connecting the nodes sharing a common side of their Voronoi cells, we obtain
the Delaunay triangulation (figure 3.1), the dual of the Voronoi diagram. The
vertices of the Voronoi cells are the circumcenters of the Delaunay triangles,
i.e., the centers of the circles circumscribed about these triangles.
The natural neighbors of a node are the nodes whose Voronoi cells share an edge
with that of the considered node or, equivalently, the nodes connected to it by
an edge of the Delaunay triangulation. In 2D, this triangulation maximizes the
minimum interior angle of the triangles.
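These definitions are easy to explore numerically. The sketch below (a hypothetical five-node cloud; it assumes SciPy is available) builds the Delaunay triangulation and recovers the natural neighbours of the central node as the nodes sharing a Delaunay edge with it:

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Hypothetical cloud: the four corners of the unit square plus its centre.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
tri = Delaunay(pts)
vor = Voronoi(pts)  # its vertices are the circumcenters of the Delaunay triangles

# Natural neighbours of node 4 (the centre): nodes sharing a Delaunay edge with it.
neigh = set()
for simplex in tri.simplices:
    if 4 in simplex:
        neigh.update(int(k) for k in simplex)
neigh.discard(4)
print(sorted(neigh))  # [0, 1, 2, 3]: all four corners are natural neighbours
```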
Consider a cloud of nodes and the associated Voronoi diagram; we say that a
node i is a neighbor of another node j if their Voronoi cells have a common
side. Consider now the introduction of a point x (see figure 3.2). Due to its
inclusion, the Voronoi diagram will be altered, affecting the Voronoi cells of
the natural neighbors of x. The value of the Sibson interpolant at a point x is
defined as the ratio of the area of the cell T_I that is transferred to T_x
when adding x to the initial cloud of points, to the total area of T_x. To
illustrate this, according to figure 3.2, the value of the natural-neighbor
interpolant at x associated with node 1 is the following ratio of areas:

φ_1(x) = A_abfe / A_abcd.    (3.3)
φ_1(x) = κ_1(x) / κ(x),    (3.4)

which expresses the ratio between the Lebesgue measures of the second-order
Voronoi cell and the first-order Voronoi cell at the point x; in other words,
it is the value of Sibson's interpolant at the point x associated with node 1.
In general, the point x will be an integration point.
The resulting shape functions (figure 3.3) have some interesting properties:

They are C¹ almost everywhere, except at the nodes, where they are only C⁰.

The interpolation of a field u is built as

u^h(x) = Σ_{i=1}^{n} φ_i(x) u_i,    (3.5)
where the u_i are the nodal values and n is the number of natural neighbors of
the point x. Contrary to what happens in classical FEM, where the number of
nodes of the element determines n in the former expression, in NEM this number
can change at each integration point. It can thus be seen that this interpolant
behaves more naturally when faced with a distorted mesh. In figure 3.4 we
illustrate this idea.
For a given point x with a mesh-based method, the connectivity could lead to
a situation in which the more distant nodes have more influence than the closer
ones. With the natural interpolant this problem is always avoided.
One of the drawbacks of any meshless method is the imposition of Dirichlet
boundary conditions. The problem comes from the impossibility of making the
shape functions associated with the interior nodes vanish on any type of
boundary. In NEM, this problem is associated with non-convex boundaries,
because what the method sees, in its basic implementation, is the convex hull
of the given set of points. That means that, for example in figure 3.5, the
node c can have influence on the node e, which is an undesirable situation.
This problem does not occur at nodes on convex boundaries, where the Dirichlet
boundary conditions can be imposed as usual.
Two different strategies can be found in the literature to circumvent this
problem:

The introduction of the so-called α-shapes, which are associated with the level
of detail with which we wish to represent the domain described by the point
cloud. The mathematical formalism of this technique can be found in [REF],
but the main idea is simple. For a given point on the boundary, the area
where its neighbors are searched is restricted to an R^n-ball of radius α
centered at the point. Thus, the influence of undesirable nodes is avoided
when a convenient value of α is selected. The proper selection of this value
depends on the application, and it can be quite difficult in complex geometries.
In figure 3.6 we present a clarifying example of a set of nodes (extracted
from a tomography of a human jaw bone) treated with different values of α. In
the lower limit, with α near zero, we see the set of points with no connections
(influence) between them. In the upper limit, with α tending to infinity, we
obtain the convex hull of the nodal set. An intermediate value should be found
in order to have the appropriate influence between the nodes, allowing the
essential boundary conditions to be imposed properly.

The use of constrained Voronoi diagrams [28], in which the connectivity is
restricted by the prescribed boundary of the domain.
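The α-ball restriction itself is straightforward. The toy sketch below (made-up coordinates, not the jaw-bone example) keeps only the candidate neighbours of a point that fall inside a ball of radius α:

```python
import numpy as np

# Made-up cloud: two close nodes and one distant node across a non-convex gap.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 0.0]])
x = np.array([0.0, 0.05])

def neighbours_within(x, pts, alpha):
    """Indices of the nodes inside the alpha-ball centred at x."""
    d = np.linalg.norm(pts - x, axis=1)
    return np.flatnonzero(d < alpha).tolist()

print(neighbours_within(x, pts, alpha=0.5))  # [0, 1]: the distant node is excluded
```

A small α excludes spurious neighbours across concavities; a very large α recovers the convex-hull behaviour, exactly as in the jaw-bone illustration.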
The example consists of an elastic block with a uniform load on top and the
vertical displacements restricted on the bottom line.
In figure 3.7 the resulting displacements are shown, which are in accordance
with the expected results.
It is important to remark that this is a very simple linear elastic example,
whose purpose is to learn how to implement the method. However, the NEM is very
convenient for nonlinear solid problems with large displacements and large
strains, because no remeshing is necessary during the iterative application of
the load.
Meshless methods have another drawback not mentioned before: domain integra-
tion using Gauss quadrature introduces significant numerical errors due to the
following:
On the other hand, direct nodal integration, using the nodes as integration points,
leads to numerical instabilities.
Chen et al. [30] proposed the SCNI method to solve this issue, and it was
applied to the NEM by Gonzalez et al. [31]. Moreover, it is very interesting
that SCNI can be incorporated into traditional FEM to produce a very robust
method to deal with distorted meshes.
In this work this FE-SCNI method has been used in the 3D examples of chapter
four, and for that reason the technique is explained in detail.
The SCNI is based on the assumed strain method, in which a modified (smoothed)
gradient is introduced at each integration point (node):

∇̃u(x_i) = (1/A_i) ∫_{Ω_i} ∇u(x) dΩ,    (3.7)
where x_i are the coordinates of node n_i. The cell Ω_i is one element of a
partition of the domain; typically a Voronoi tessellation is used. We
illustrate the technique with an elastic problem.
ε̃(x_i) = B̃_i d,    (3.10)

where d is the vector of nodal displacements and the components of B̃_i are
defined by:

∂φ_j(x_i)/∂x_1 = (1/A_i) ∫_{Γ_i} φ_j(x) n_1(x) dΓ,
∂φ_j(x_i)/∂x_2 = (1/A_i) ∫_{Γ_i} φ_j(x) n_2(x) dΓ,    (3.11)
and the global stiffness matrix is obtained by assembling the contribution of
each node n_i:

K = Σ_i A_i B̃_i^T D B̃_i.    (3.12)
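A minimal sketch of the nodal assembly (3.12) is given below; the smoothed matrices B̃_i and cell areas A_i are random placeholders (computing them for real requires the boundary integrals of (3.11)), and the plane-stress material values are illustrative:

```python
import numpy as np

# Plane-stress elastic matrix D (illustrative material values).
E_mod, nu = 1.0, 0.3
D = E_mod / (1 - nu**2) * np.array([[1, nu, 0],
                                    [nu, 1, 0],
                                    [0, 0, (1 - nu) / 2]])

n_nodes = 4
ndof = 2 * n_nodes
rng = np.random.default_rng(0)
B = [rng.standard_normal((3, ndof)) for _ in range(n_nodes)]  # placeholder smoothed B_i
A = np.full(n_nodes, 0.25)                                    # placeholder cell areas

# Eq. (3.12): one contribution per node instead of one per Gauss point.
K = sum(A_i * B_i.T @ D @ B_i for A_i, B_i in zip(A, B))
print(K.shape, bool(np.allclose(K, K.T)))
```

Note that the loop runs over nodes, not over Gauss points: this is what makes the integration robust with respect to mesh distortion.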
An efficient updated-Lagrangian
technique for material forming
processes
4.1 Introduction
In the first chapter we saw that material forming simulation is a very
challenging task, even for the high-performance computing available nowadays.
On the one hand, processes such as co-extrusion, friction stir welding (FSW) or
resin transfer moulding involve some inherent difficulties:
On the other hand, today's industry demands a new paradigm in numerical
simulation in order to improve its competitiveness. This new paradigm requires:
The creation of virtual test platforms [32] able to reduce the design cycle of
products and processes. These platforms should provide the user with rapid
responses to complex coupled problems.
To circumvent the first group of problems, those intrinsic to this kind of
numerical simulation, different solutions can be found in the literature. All
of them can be categorized using the following general classification of
simulation frameworks. This classification applies to any numerical method,
but it has a special relevance in the simulation of material forming processes.
In general, three frameworks can be established:
Eulerian. The discretization nodes are fixed in space, which means that the
mesh does not evolve in time. In practice, with this approach, no distortion
problems occur. It is present in any commercial code available and it is, in
principle, the easiest approach to any computational simulation.
Unfortunately, the material derivative with a fixed reference (also called the
total derivative) contains a convective term. When this convective term
dominates the problem, it leads to numerical instabilities and the problem has
to be stabilized (not trivial in 3D problems). Additionally, the path and
thermomechanical history of the material particles have to be reconstructed a
posteriori, which introduces numerical diffusion. Moreover, this reconstruction
has to be frequent, since the physics of the problem depends on the material
distribution. As if this were not enough, the treatment of free surfaces and
evolving boundary conditions is very complicated, and possibly inaccurate, in
an Eulerian framework. Thus, it is clear that the usefulness of an Eulerian
approach is quite limited for metal forming simulations.
Lagrangian. The discretization nodes are fixed to the material particles. The
mesh evolves in time following the material, and thus the thermomechanical
history is obtained directly. Free surfaces are easily tracked and boundary
conditions are imposed in a simple way.
However, this approach leads to distorted meshes when large deformations
occur, and therefore frequent remeshing is needed. Remeshing is very expensive
in practice, being in some applications the bottleneck of the simulation. In
addition, repeated projections of the fields between the old and new meshes
are required, introducing numerical diffusion.
In figure 4.1, the pros and cons of these different frameworks are summarized.
It is important to note that in the previous classification we have equated the
mesh and the nodal discretization. This is not strictly true, since any
meshless method can be applied within the different frameworks presented.
However, it is difficult to find in practice an application where the use of a
meshless method in an Eulerian or even in an ALE framework could be justified
over traditional FEM.
It is still an open question how to satisfy the second group of requirements, those
demanded by the most advanced industries such as the aeronautical and the au-
tomotive ones. The common denominator of all of them is the necessity of fast
computing. Therefore, MOR methods are an appealing alternative as we have seen
in the introductory chapter. Among these methods, in this work, the application
of the in-plane-out-of-plane PGD-based decomposition is explored as an efficient
technique to solve problems in updated-Lagrangian frameworks. Thus, PGD in
this particular application should be seen more as an efficient solver than as a
MOR method.
In section 2.3 we saw that the in-plane-out-of-plane PGD-based decomposition
can solve 3D problems through a succession of 2D problems. The examples
provided were solved in an Eulerian framework, and in fact this is the only
approach that can be found in the PGD literature.
However, in this chapter we have concluded that an Eulerian mesh is not the
best option to simulate material forming processes. Therefore, the question is
obvious: can the in-plane-out-of-plane PGD-based decomposition be extended to
updated-Lagrangian frameworks?
Proving that the answer to the former question is affirmative is the main
purpose of this work, and of this chapter in particular. The key idea is quite
simple: use the orthogonal projections of the 3D nodes onto a plane and onto an
axis to construct the discrete functional spaces for the PGD modes. In that
way, the values of any unknown field are obtained directly at the nodes and no
information is transferred between different meshes. The tracking of the
particles, quite important in material forming simulations, is inherent to the
Lagrangian nature of the method.
It is easy to see that the nodal projections onto the plane can lead to a very
distorted 2D mesh, even when the optimal triangulation (the Delaunay
triangulation) is used. To circumvent this issue, the strategy introduces
another key ingredient: the SCNI, presented in chapter three, which provides
robustness to the numerical integration of the 2D operators.
σ = 2η d − p I    (4.1)

The balance of momentum and mass equations, neglecting inertia and assuming
the incompressibility of the flow, read:

∇·σ = 0,    ∇·v = 0    (4.2)

r = σ : d    (4.4)
for large mesh distortions, i.e., it allows considering the projected nodes without
the necessity of repositioning them.
v(x, y, z) = ( u(x, y, z), v(x, y, z), w(x, y, z) )^T
≈ Σ_{i=1}^{N} ( u^i_xy(x, y)·u^i_z(z), v^i_xy(x, y)·v^i_z(z), w^i_xy(x, y)·w^i_z(z) )^T.    (4.5)
Once the three-dimensional velocity is obtained, the position of the nodes is up-
dated according to:
x^{i+1} = x^i + Δt v^i    (4.6)
to solve again the thermal problem and restart the solution loop.
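The update (4.6) is just an explicit convection of the particle cloud. A minimal sketch follows, with a rigid-rotation velocity field standing in for the flow solve (both the function names and the field are illustrative only):

```python
import numpy as np

def update_positions(x, solve_velocity, dt, n_steps):
    """Updated-Lagrangian loop: recompute the velocity on the current
    particle configuration, then convect the nodes with eq. (4.6)."""
    for _ in range(n_steps):
        v = solve_velocity(x)      # placeholder for the flow solve
        x = x + dt * v             # x^{i+1} = x^i + dt * v^i
    return x

# Toy velocity: rigid rotation about the origin with unit angular speed.
rotation = lambda x: np.column_stack([-x[:, 1], x[:, 0]])
x0 = np.array([[1.0, 0.0]])
x1 = update_positions(x0, rotation, dt=1e-3, n_steps=1000)
print(x1)  # roughly one radian of rotation along the unit circle
```

The explicit update slowly drifts off the circle, which is why small time steps (or a more accurate integrator) are needed in practice.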
θ(x, y, z) = Σ_{k=1}^{M} θ^k_xy(x, y) θ^k_z(z)    (4.7)
The whole strategy is summarized in figure 4.2. In what follows we consider the
flow model in more detail.
∇p = ∇·(η ∇v),
∇·v = 0,    (4.8)

where η is the fluid viscosity.
∇·v + p/λ = 0    (4.9)

or, more explicitly,

−(1/λ) p = ∂u/∂x + ∂v/∂y + ∂w/∂z = ∇·v    (4.10)
∇·(η ∇v) + λ ∇(∇·v) = 0    (4.11)

or

η ( ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² ) + λ ∂/∂x ( ∂u/∂x + ∂v/∂y + ∂w/∂z ) = 0,
η ( ∂²v/∂x² + ∂²v/∂y² + ∂²v/∂z² ) + λ ∂/∂y ( ∂u/∂x + ∂v/∂y + ∂w/∂z ) = 0,
η ( ∂²w/∂x² + ∂²w/∂y² + ∂²w/∂z² ) + λ ∂/∂z ( ∂u/∂x + ∂v/∂y + ∂w/∂z ) = 0.    (4.13)
∇p = ∇·T,
∇·v = 0,    (4.14)
where the extra-stress tensor for power-law fluids writes:

T = 2K D_eq^{n−1} D    (4.15)

with K and n two rheological parameters and D the strain rate tensor:
D = [ ∂u/∂x            ½(∂u/∂y + ∂v/∂x)   ½(∂u/∂z + ∂w/∂x)
      ½(∂u/∂y + ∂v/∂x)  ∂v/∂y             ½(∂v/∂z + ∂w/∂y)
      ½(∂u/∂z + ∂w/∂x)  ½(∂v/∂z + ∂w/∂y)   ∂w/∂z ].    (4.16)
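The power-law closure is easy to evaluate once D is known. A small sketch follows, with a made-up velocity gradient and illustrative rheological parameters; the equivalent rate is taken here as D_eq = sqrt(2 D:D), one common convention:

```python
import numpy as np

K_rheo, n_exp = 100.0, 0.3          # illustrative rheological parameters K, n
L = np.array([[0.0, 2.0, 0.0],      # made-up velocity gradient grad(v)
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
D = 0.5 * (L + L.T)                 # strain-rate tensor, eq. (4.16)

Deq = np.sqrt(2.0 * np.tensordot(D, D))       # assumed equivalent-rate definition
T = 2.0 * K_rheo * Deq ** (n_exp - 1.0) * D   # extra stress, eq. (4.15)
print(float(Deq), bool(np.allclose(T, T.T)))
```

Because n < 1, the effective viscosity K D_eq^{n−1} drops where the shear rate is high, which is precisely why the rotational motion stays confined near the pin in FSW.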
To validate the proposed strategy, the first question that arises is how an SVD
decomposition can reconstruct a field, knowing its values in certain particles and
using the idea of the nodal projections.
which in certain random nodes {x1 , x2 , ..., xr } takes the values {f1 , f2 , ..., fr }.
These values are the only available information to construct a separated approxi-
mation
f(x, y) ≈ Σ_{i=1}^{N} F_i(x) G_i(y)    (4.19)

where F_i(x) = Σ_{j=1}^{r} N_j(x) F_i(x_j) and G_i(y) = Σ_{j=1}^{r} N_j(y) G_i(y_j).
In the former interpolations, x_j and y_j are the orthogonal projections of the
particle x_j onto the Cartesian axes (its coordinates); the N_j and the pairs
{F_i, G_i} are, respectively, the shape functions and the nodal values
(unknowns).
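The behaviour of such a separated reconstruction can be checked on a field whose structure is known. The sketch below samples a separable function on a grid (the function itself is illustrative) and recovers it from a single SVD mode:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = np.linspace(0.0, 1.0, 40)
F = np.outer(np.sin(np.pi * x), np.exp(y))   # f(x, y) = sin(pi x) exp(y), separable

U, s, Vt = np.linalg.svd(F, full_matrices=False)
F1 = s[0] * np.outer(U[:, 0], Vt[0, :])      # rank-1 separated approximation
err = np.linalg.norm(F - F1) / np.linalg.norm(F)
print(err < 1e-12)   # one mode reproduces a separable field to machine precision
```

For non-separable fields the error decays with the number of retained modes, which is exactly the behaviour reported in figures 4.3 and 4.4.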
The next step is to study how to solve a 2D PDE whose source term is known only
at certain nodal positions, using the separation of variables proposed by the
PGD. Again, the 1D PGD modes are constructed with the projections of the
particles onto the axes. In practice, as we saw in chapter two, this reduces
the complexity of the problem dramatically.
[Figure 4.3: Decay of the SVD eigenvalues as a function of the number of SVD
terms, for 50, 300 and 3000 particles.]
[Figure 4.4: Absolute error in the material particles as a function of the
number of SVD modes.]
−Δu = f    in Ω = [0, 2] × [0, 1]    (4.20a)
u = 0    on x = 0    (4.20b)
u = 0    on y = 0    (4.20c)
u = y(y − 1)    on x = 2    (4.20d)
∂u/∂y = g(x)    on y = 1    (4.20e)
[Figure 4.5: Absolute error of the PGD solution as a function of the number of
particles.]
where

g(x) = 0 if x ≤ 0.5,    g(x) = 1 if x > 0.5.
The criterion to stop the enrichment process was a tolerance of 10⁻⁴ for the
residual. This leads to between 30 and 50 PGD modes, depending on the number of
particles introduced. When more particles are introduced, the solution is
richer and more modes are necessary to capture the higher frequencies.
In figure 4.5 the absolute error (in the infinity norm) of the PGD solution is
presented. A FEM solution on a fine mesh was taken as the exact solution.
Logically, the more particles are introduced in the domain, the more
information we get and the smaller the error. The method is approximately of
order one.
The exact solution and the reconstructed PGD solution at the particles are
presented in figure 4.6. A qualitative idea of the distribution of the error is
shown in figure 4.7, where the exact solution is plotted in the plane.
At this point we have seen how the PGD can solve PDEs in separated coordinates
with the source known only at certain points (particles). However, our proposed
strategy is an updated-Lagrangian technique, so it is in problems that evolve
in time where it makes sense.
For that purpose, in this section we solve the transient rotating pulse example
from the book [21]. The problem reads:

∂u/∂t + a·∇u − ∇·(ν∇u) = s    in Ω = [0, 1] × [0, 1]    (4.22a)
u = 0    on ∂Ω    (4.22b)
u = 0    at t = 0    (4.22c)
The FEM solution requires stabilization regardless of the time integration
scheme selected. Here we have used, as a reference solution, a SUPG
discretization in space and an R2,2 time scheme. The details of this
formulation can be found in [21]. It is important to note that, even with SUPG,
the solution exhibits oscillations near the borders. In an updated-Lagrangian
approach the convective term does not appear, stabilization is not necessary
and this problem does not occur. In an updated-Lagrangian approach, the problem
reads
∂u/∂t − ν∇²u = s.    (4.24)

Discretizing in time with an implicit scheme gives

( 1/Δt − ν∇² ) u^{n+1} = s^{n+1} + (1/Δt) u^n.    (4.25)
To solve this problem using the PGD separation of variables, the test and trial
functions are constructed as explained in chapter 2. Moreover, at time step
n + 1 two known fields have to be expressed in separated form, u^n and s^{n+1}.
The separation of these fields was carried out using an SVD.
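The time-marching scheme (4.25) can be exercised on a 1D analogue. The sketch below applies the implicit step to u_t − ν u_xx = s with homogeneous Dirichlet conditions (grid size, ν and the source are arbitrary choices):

```python
import numpy as np

nx, nu_visc, dt = 50, 0.1, 0.01
h = 1.0 / (nx + 1)
# Standard second-order finite-difference Laplacian on the interior nodes.
lap = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
       + np.diag(np.ones(nx - 1), -1)) / h**2

u = np.zeros(nx)                     # u = 0 at t = 0
s = np.ones(nx)                      # constant source
A = np.eye(nx) / dt - nu_visc * lap  # (1/dt - nu * Lap), cf. eq. (4.25)
for _ in range(100):
    u = np.linalg.solve(A, s + u / dt)
print(float(u.max()))  # grows monotonically toward the steady-state profile
```

Without a convective term no stabilization is needed, which mirrors the situation of the updated-Lagrangian PGD solver.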
In figure 4.8, the FEM solution, represented as a surface, and the PGD solution
at the particles at the selected time instant are shown. It is important to
note that, with the updated-Lagrangian PGD strategy, no stabilization is
needed. Moreover, thanks to the separation of variables, the problem is solved
at the same cost as a few 1D problems.
In this section we start the 3D tests of the proposed strategy. Let us assume a
simple model of co-extrusion: two immiscible fluids with very different
viscosities entering a square pipe. At the inlet, the interface between the
fluids is at the middle of the section, as shown in figure 4.9. But this is not
an equilibrium situation: the less viscous fluid advances faster and, due to
the conservation of mass, the interface moves so as to diminish the effective
flow section.
[Figure 4.10: Equilibrium position of the interface as a function of the
viscosity ratio μ₁/μ₂.]
In figure 4.10, the equilibrium position of the interface for two fluids with a
given viscosity ratio is presented. This plot has been obtained from the
analytical solution of two Stokes flows between two parallel plates. On the
upper and lower plates the no-slip condition was imposed, and at the interface
the velocities and tangential forces were matched.
This is not exactly our 3D problem, but we can expect, qualitatively, a similar
behavior in the symmetry plane. In the plot it can be seen that, for a
viscosity ratio of 0.01, the interface position drops by around 25% of the
total height of the section. The Matlab code used to solve this problem can be
found in Appendix A.
In the numerical simulation we solve the Stokes flows using the
in-plane-out-of-plane PGD-based decomposition in an updated-Lagrangian
approach. At each time step, the material nodes are moved with the computed
velocity and, in the new particle configuration, the problem is solved again
until an equilibrium situation is reached. The viscosity field was separated
using an SVD.
The results, even though only a few nodes were used (around 800), are in very
good agreement with the qualitatively expected behavior. In figure 4.11, it can
be seen that the flow section of the blue fluid has diminished by approximately
the predicted amount.
One of the issues of the separation of variables in the PGD is the imposition
of essential boundary conditions. Usually, in PGD, some first modes are
artificially constructed to satisfy the Dirichlet conditions, and the rest of
the modes are computed by solving the associated homogeneous problem (the
associated degrees of freedom are removed from the system). But, with an
in-plane-out-of-plane decomposition, this technique is not possible if the
geometry of the boundary is not a complete extrusion of a 2D shape along the z
axis.
This problem has previously been solved by adding penalization terms in the
weak form. In this work, a new strategy with no penalty terms is presented.
where both U₀ and the characteristic function φ can be expressed in a separated
way, for instance using an SVD. The function U₀ satisfies the essential
boundary conditions, and φ is a characteristic function which vanishes on the
essential boundaries.
Thus, the PGD solution is sequentially constructed, always respecting the
imposed values at those points where the characteristic function is zero, while
enriching the U₀ function elsewhere.
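The mechanism is easy to see in one dimension. In the sketch below (illustrative names U0, phi, and an arbitrary enrichment w), any enrichment leaves the imposed boundary values untouched, because phi vanishes there:

```python
import numpy as np

# Impose u(0) = 1 and u(1) = 2 through u = U0 + phi * w.
x = np.linspace(0.0, 1.0, 11)
U0 = 1.0 + x               # satisfies both essential boundary conditions
phi = x * (1.0 - x)        # characteristic function: zero at x = 0 and x = 1
w = np.sin(3.0 * x)        # arbitrary enrichment (e.g. an accumulated PGD sum)
u = U0 + phi * w
print(u[0], u[-1])         # 1.0 2.0 : the boundary values are preserved
```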
Δu = 0    in Ω    (4.27)
u = 0    on Γ₁    (4.28)
u = 2    on Γ₂    (4.29)
where Γ₁ is the external boundary and Γ₂ the internal one. In this case, the
PGD decomposition is an axis-axis separation. In figure 4.12, a reference FEM
solution and the PGD solution are shown. In the PGD solution the hole is
filled, because we do not create any hole explicitly, but in the domain of
interest the solution is the expected one. Again, this 2D problem is solved at
the computational cost of solving a few 1D problems.
In figure 4.13 the maximum relative error of the PGD solution as a function of
the number of computed terms is presented. In this case, with 40 modes, an
error below 1% is obtained. The error, as can be observed in figure 4.14, is
concentrated at the internal corners, where the characteristic function has
singularities.
[Figure 4.13: Maximum relative error as a function of the number of PGD modes.]
The same idea has been successfully tested in a 3D problem. In figure 4.15 we
solve the following 3D Laplace equation:

Δu = 0    in Ω    (4.30)
u = 1    on x = 1    (4.31)
u = 2    on Γ_int    (4.32)

where Γ_int is the surface of an internal cylinder which does not reach the
bottom plane.
The last example of this work tries to incorporate all the numerical
ingredients presented. We solve a 3D problem using the updated-Lagrangian
PGD-based technique, imposing the Dirichlet conditions as described in the
previous section. To achieve this, we have set up a simulation inspired by the
FSW process (figure 4.18).
FSW is a solid-state welding technique which, since its invention in 1991, has
been of great interest to industry [35]. The FSW welding process is
conceptually simple. A non-consumable rotating tool with a specially designed
pin and shoulder is inserted into the abutting edges of the sheets or plates to
be joined and traversed along the joint line. The tool heats the workpiece, and
its stirring movement produces the joint.
The obtained results prove that the technique is feasible: with a reasonable
number of nodes, the global behavior of the fluid is captured with real 3D
effects. However, these results are not representative of the physics of the
FSW process, because the non-Newtonian behavior of the fluid has not been
incorporated into the model. When a Newtonian fluid is used, the whole domain
is perturbed by the rotation of the cylinder, whereas with the correct behavior
law, i.e. a power law, the rotational motion would be confined to the zone
around the pin.
In figures 4.19 and 4.20, the initial position of the particles and their
position after 20 time steps are shown.
5.1 Conclusions
The work presented here can be understood in two very different, but at the
same time complementary, ways. On the one hand, it is the closure of the Master
of Science in Numerical Methods in Engineering, and therefore it aims to apply
many of the concepts studied and many of the skills developed during these two
years. It is clear that, without the previous work done during the Masters
program, we could not have reached, in about one year, the knowledge necessary
to implement the PGD and the Natural Element Method, or to understand the
variational principles in fluids. Moreover, the topics of this work enrich the
contents of the master itself, since these methods are not part of the
curriculum but complement it.
On the other hand, this work represents the start of a research career, the
beginning of a doctoral thesis, and thus it has helped to create and develop
many of the skills needed to achieve it: managing the scientific literature, a
critical-thinking approach to scientific work and a taste for the rigorous
study of state-of-the-art topics, among other competences.
Chapter 5. Conclusions and Perspectives 66
Recently, the PGD has been demonstrated to be a very efficient MOR method and a
particularly efficient solver for the simulation of material forming processes.
The use of robust interpolants is very convenient for the simulation of
material forming processes. However, in 3D problems they are computationally
expensive.
5.2 Perspectives
Exploring the possibility of including this and other MOR strategies in com-
mercial codes for advanced simulations.
Appendix A
Matlab Codes
13 %%
14 max pgd=100;
15 max fp iter=30;
16
17 %%
18 F=zeros(im nx,max pgd);
19 G=zeros(im ny,max pgd);
20
23 S=rand(im ny,1);
24 for fp=1:max fp iter
67
Appendix Matlab Codes 68

% computation of R, assuming S known
% ... (lines omitted in the original listing)
gx=S.*G(:,j);
gamma_x(j)=int_trapecios(gx);
suma_x=suma_x + gamma_x(j)*F(:,j);

end

% computation of S, using the previous R
% ... (lines omitted in the original listing)
gy=R.*F(:,j);
gamma_y(j)=int_trapecios(gy);
suma_y=suma_y + gamma_y(j)*G(:,j);

end

end

F(:,i)=R;
G(:,i)=S;
end

U=G(:,1)*F(:,1)';
for i=2:max_pgd % constructing the solution
U=U+G(:,i)*F(:,i)';
end
figure
imshow(U)
title('PGD picture')
shg
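Since the listing above is reproduced only partially, the algorithm it implements, a progressive enrichment of a separated (rank-one) representation by an alternating fixed point, can be sketched in a self-contained way as follows. This is an illustrative Python transcription, not the thesis code; all names (`pgd_decompose`, `n_modes`, `n_fp`) are hypothetical.

```python
def dot(u, v):
    """Discrete inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def pgd_decompose(U, n_modes=5, n_fp=50):
    """Approximate a 2D field U[i][j] by sum_k F_k(x_i) * G_k(y_j).

    Each mode is obtained by an alternating fixed point: freeze the
    y-function S, solve for the x-function R, then freeze R and solve
    for S, and repeat.  Assumes the modes found are nonzero.
    """
    nx, ny = len(U), len(U[0])
    F, G = [], []                       # separated modes found so far
    for _ in range(n_modes):
        S = [1.0] * ny                  # initial guess for the new mode
        for _ in range(n_fp):           # alternating fixed-point loop
            # R update (S frozen): project the residual U - sum F_k G_k^T onto S
            R = [(dot(U[i], S)
                  - sum(F[k][i] * dot(G[k], S) for k in range(len(F))))
                 / dot(S, S) for i in range(nx)]
            # S update (R frozen): project the residual onto R
            S = [(sum(U[i][j] * R[i] for i in range(nx))
                  - sum(G[k][j] * dot(F[k], R) for k in range(len(F))))
                 / dot(R, R) for j in range(ny)]
        F.append(R)
        G.append(S)
    # reconstruct the separated approximation
    return [[sum(F[k][i] * G[k][j] for k in range(len(F)))
             for j in range(ny)] for i in range(nx)]
```

For a field that is exactly a sum of two separable terms, two enrichment modes already reproduce it to machine precision, which mirrors the behaviour of the MATLAB listing on separable images.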
close all
clear all
clc

% ... (lines omitted in the original listing)
A1 = dp/(2*mu1);
A2 = dp/(2*mu2);

M = sym(zeros(3,3));

M(1,1) = 1;
M(2,1) = 0;
M(2,2) = 0;
M(2,3) = 1;
M(3,1) = (h^3)/3;
M(3,2) = (h^2)/2;
M(3,3) = h;

rhs = sym(zeros(3,1));
rhs(1) = A1;
rhs(3) = Q1;

x = simple(inv(M)*rhs);

a1 = x(1);
b1 = x(2);
c1 = x(3);

M = sym(zeros(3,3));

M(1,1) = 1;
M(2,1) = H^2;
M(2,2) = H;
M(2,3) = 1;
M(3,1) = (H^3-h^3)/3;
M(3,2) = (H^2-h^2)/2;
M(3,3) = (H-h);

rhs = sym(zeros(3,1));
rhs(1) = A2;
rhs(3) = Q2;

x = simple(inv(M)*rhs);

a2 = x(1);
b2 = x(2);
c2 = x(3);

a1 = A1;

% ... (lines omitted in the original listing)
% stress
tau1 = simple(mu1*subs(diff(a1*y^2+b1*y+c1,y),y,h));
tau2 = simple(mu2*subs(diff(a2*y^2+b2*y+c2,y),y,h));

e2 = simple(tau1-tau2);

E = simple(subs(e1,dp,solve(e2,dp)));

MU2 = logspace(6,0,50);
for i = 1:numel(MU2)
HH = solve(subs(E,[H,mu1,mu2,Q1,Q2],[1,MU2(i),1,0.5,0.5]));
h(i)=HH(HH<1 & HH>0);
end

semilogx(MU2,h,'o')
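The symbolic script above builds two 3x3 linear systems for the coefficients of the parabolic velocity profile u(y) = a*y^2 + b*y + c in each layer: the momentum balance fixes a, a boundary condition fixes one row, and an imposed flow rate gives the third equation. As a sanity check, the first (lower-layer) system can also be solved numerically. The sketch below is illustrative only, with made-up parameter values; `solve3` is a hypothetical helper, not part of the thesis code.

```python
def solve3(M, rhs):
    """Gaussian elimination with partial pivoting for a 3x3 system M x = rhs."""
    A = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]            # pivot row swap
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    x = [0.0] * 3
    for r in (2, 1, 0):                            # back substitution
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

# Lower layer 0 <= y <= h, u1(y) = a1*y^2 + b1*y + c1, with
#   row 1: a1 = dp/(2*mu1)                      (momentum balance)
#   row 2: c1 = 0                               (no-slip at y = 0)
#   row 3: a1*h^3/3 + b1*h^2/2 + c1*h = Q1      (imposed flow rate)
h, dp, mu1, Q1 = 0.4, -1.0, 1.0, 0.5             # illustrative values
M = [[1.0,        0.0,        0.0],
     [0.0,        0.0,        1.0],
     [h**3 / 3.0, h**2 / 2.0, h  ]]
rhs = [dp / (2.0 * mu1), 0.0, Q1]
a1, b1, c1 = solve3(M, rhs)
# sanity check: the coefficients must reproduce the imposed flow rate
Q_check = a1 * h**3 / 3 + b1 * h**2 / 2 + c1 * h
```

The second (upper-layer) system of the listing has the same structure with the matrix rows evaluated between y = h and y = H, so the same numeric solver applies.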
Bibliography
[1] http://www.epractice.eu/en/news/5304734.
[2] Chinesta, F., and Ammar, A., 2008. On the Frontier of the Simulable World:
When Models Involve Excessive Degrees of Freedom. European Journal of
Computational Mechanics, 17(5-6-7), pp. 583–595.
[3] Narasimha, R., 2011. Kosambi and proper orthogonal decomposition. Res-
onance, 16(6), pp. 574–581.
[4] Chinesta, F., 2014. Separated Representations and PGD-Based Model Reduc-
tion, Vol. 554 of CISM International Centre for Mechanical Sciences. Springer
Vienna, Vienna.
[5] Bui-Thanh, T., Willcox, K., Ghattas, O., and van Bloemen Waanders, B.,
2007. Goal-oriented, model-constrained optimization for reduction of large-
scale systems. Journal of Computational Physics, 224(2), pp. 880–896.
[6] Girault, M., Videcoq, E., and Petit, D., 2010. Estimation of time-varying
heat sources through inversion of a low order model built with the modal
identification method from in-situ temperature measurements. International
Journal of Heat and Mass Transfer, 53(1-3), pp. 206–219.
[7] Buffa, A., Maday, Y., Patera, A. T., Prud'homme, C., and Turinici, G., 2012.
A priori convergence of the greedy algorithm for the parametrized reduced
basis method. ESAIM: Mathematical Modelling and Numerical Analysis,
46(03), pp. 595–603.
[8] Chinesta, F., Leygue, A., Bordeu, F., Aguado, J. V., Cueto, E., Gonzalez, D.,
Alfaro, I., Ammar, A., and Huerta, A., 2013. PGD-Based Computational
Vademecum for Efficient Design, Optimization and Control. Archives of
Computational Methods in Engineering, 20(1), Jan., pp. 31–59.
[9] Ammar, A., Mokdad, B., Chinesta, F., and Keunings, R., 2006. A new family
of solvers for some classes of multidimensional partial differential equations
encountered in kinetic theory modelling of complex fluids. Journal of Non-
Newtonian Fluid Mechanics, 139(3), Dec., pp. 153–176.
[10] Gonzalez, D., Masson, F., Poulhaon, F., Leygue, A., Cueto, E., and Chinesta,
F., 2012. Proper Generalized Decomposition based dynamic data driven
inverse identification. Mathematics and Computers in Simulation, 82(9),
May, pp. 1677–1695.
[11] Bognet, B., Bordeu, F., Chinesta, F., Leygue, A., and Poitou, A., 2012. Ad-
vanced simulation of models defined in plate geometries: 3D solutions with
2D computational complexity. Computer Methods in Applied Mechanics and
Engineering, 201-204, Jan., pp. 1–12.
[13] Chinesta, F., Ladeveze, P., and Cueto, E., 2011. A Short Review on Model
Order Reduction Based on Proper Generalized Decomposition. Archives of
Computational Methods in Engineering, 18(4), Oct., pp. 395–404.
[14] Kolda, T. G., and Bader, B. W., 2009. Tensor decompositions and applica-
tions. SIAM Review, 51(3), pp. 455–500.
[15] Chinesta, F., Keunings, R., and Leygue, A., 2013. The Proper Generalized
Decomposition for Advanced Numerical Simulations: A Primer. Springer.
[16] Chinesta, F., and Cueto, E., 2014. PGD-based modeling of materials, struc-
tures and processes. ESAFORM Bookseries on Material Forming. Springer,
Cham.
[17] Chinesta, F., Gonzalez, D., and Cueto, E., 2014. Real-time direct integration
of reduced solid dynamics equations. International Journal for Numerical
Methods in Engineering.
[18] Ghnatios, C., Chinesta, F., and Binetruy, C., 2013. 3D Modeling of Squeeze
Flows Occurring in Composite Laminates. pp. 127.
[19] Giner, E., Bognet, B., Rodenas, J. J., Leygue, A., Fuenmayor, F. J., and
Chinesta, F., 2013. The Proper Generalized Decomposition (PGD) as a nu-
merical procedure to solve 3D cracked plates in linear elastic fracture mechan-
ics. International Journal of Solids and Structures, 50(10), May, pp. 1710–1720.
[21] Donea, J., and Huerta, A., 2003. Finite element methods for flow problems.
John Wiley & Sons.
[22] Belytschko, T., Liu, W. K., Moran, B., and Elkhodary, K., 2013. Nonlinear
finite elements for continua and structures. John Wiley & Sons.
[23] Dolbow, J., and Belytschko, T., 1999. A finite element method for crack
growth without remeshing. International Journal for Numerical Methods in
Engineering, 46(1), pp. 131–150.
[24] Logan, D., 2011. A first course in the finite element method. Cengage Learn-
ing.
[25] Boussetta, R., Coupez, T., and Fourment, L., 2006. Adaptive remeshing
based on a posteriori error estimation for forging simulation. Computer
Methods in Applied Mechanics and Engineering, 195(48), pp. 6626–6645.
[26] Chinesta, F., Cescotto, S., Cueto, E., and Lorong, P., 2013. Natural Element
Method for the Simulation of Structures and Processes. John Wiley & Sons,
Inc., Hoboken, NJ, USA, Feb.
[27] Cueto, E., Doblare, M., and Gracia, L., 2000. Imposing essential boundary
conditions in the natural element method by means of density-scaled α-shapes.
[28] Cueto, E., and Chinesta, F., 2013. Meshless methods for the simulation of
material forming. International Journal of Material Forming, Aug.
[29] Alfaro, I., Yvonnet, J., Chinesta, F., and Cueto, E., 2007. A study on the
performance of natural neighbour-based Galerkin methods. pp. 1436–1465.
[30] Chen, J.-S., Wu, C.-T., Yoon, S., and You, Y., 2001. A stabilized conforming
nodal integration for Galerkin mesh-free methods. International Journal for
Numerical Methods in Engineering, 50(2), pp. 435–466.
[31] Gonzalez, D., Cueto, E., Martinez, M., and Doblare, M., 2004. Numerical
integration in natural neighbour Galerkin methods. International Journal
for Numerical Methods in Engineering, 60(12), pp. 2077–2104.
[32] Schmitz, G. J., and Prahl, U., 2009. Toward a virtual platform for materials
processing. JOM, 61(5), pp. 19–23.
[33] Feulvarch, E., Roux, J.-C., and Bergheau, J.-M., 2013. A simple and ro-
bust moving mesh technique for the finite element simulation of Friction Stir
Welding. Journal of Computational and Applied Mathematics, 246, July,
pp. 269–277.
[34] Guerdoux, S., and Fourment, L., 2009. A 3D numerical simulation of dif-
ferent phases of friction stir welding. Modelling and Simulation in Materials
Science and Engineering, 17(7), Oct., p. 075001.
[35] Mishra, R., and Ma, Z., 2005. Friction stir welding and processing. Mate-
rials Science and Engineering: R: Reports, 50(1-2), Aug., pp. 1–78.