
    Application of Optimization Heuristics in Tuning

    Decentralized PID Controllers

Bruno Leandro Galvao Costa, Joao Paulo Lima Silva de Almeida, Bruno Augusto Angelico
UTFPR - Universidade Tecnologica Federal do Parana, Campus Cornelio Procopio

    Av. Alberto Carazzai, 1640. Cornelio Procopio - PR, Brasil. CEP 86300-000

    [email protected], [email protected], [email protected]

Abstract: This paper addresses the tuning of PID controllers in multivariable systems using two optimization heuristics, namely the GA (Genetic Algorithm) and the PSO (Particle Swarm Optimization). Two multivariable processes with two inputs and two outputs (TITO), under a decentralized control strategy, are analyzed. The first process is the Quadruple-Tank and the second is the Wood-Berry Distillation Column. The PID tuning is modeled as an optimization problem whose cost function seeks to improve the dynamic response of the system while forcing decoupling of the loops. Results show that both GA and PSO were able to find good PID parameters, providing a reasonable dynamic behavior and a good loop decoupling.

    Keywords Decentralized PID Controllers, GA, Multi-

    variable Systems, PSO, TITO.

    I. INTRODUCTION

    Nowadays, multivariable processes involving the control of

    many input and output variables are often found in industrial

    plants. Over the years, many studies have been developed

    in order to find new control strategies for providing a good

    dynamic behavior to multivariable systems.

Research has shown that, in many industrial plants, a widely used strategy is a decentralized approach to the classical PID (Proportional-Integral-Derivative) algorithm, known as

    multi-loop PID control [1]. In this technique, the entire system

    is decomposed into individual control loops, making the

    system easy to implement. However, the problem identified

    in this strategy is that this decomposition affects the dynamics

    of the multivariable process, since there is interaction between

    control loops [2].

Another problem encountered in practical situations is related to the tuning of the controllers employed in the system. Such

    a tuning is, in many cases, made by extensive trial and error

    methods, which are very time consuming [3].

Therefore, the main challenge consists in finding optimal parameters for the decentralized controllers, so that the interaction between loops is minimized and the desired response presents a good dynamic behavior with no steady-state error.

One strategy that has been considered in the literature is the application of optimization heuristics to obtain the parameters for the controllers. This is due to the ability of such methods to solve optimization problems (mostly nonlinear) with many constraints, resulting in optimized values [4].

    In this paper, the GA and the PSO will be considered as

    search techniques for multi-loop PID tuning in two processes,

    in order to minimize a cost function which enhances the

    dynamic response of the system, and causes a good decoupling

    between control loops.

    II. HEURISTIC OPTIMIZATION

Optimization is a concept widely discussed in scientific research communities, since it is applied to obtain the best performance in a variety of problems. Optimization aims to improve the performance of a system in a specific situation, modeled as a cost function with its constraints.

Heuristic methods fit within this context: they provide good estimates when searching for optimal solutions. The concept of a heuristic is to seek solutions through intuitive analysis, ensuring a substantial reduction in the complexity of the search without requiring deep (exact) knowledge of the problem [4].

    There are several heuristics developed in the literature, but

    for this paper the focus will be given to GA and PSO.

    A. Genetic Algorithm (GA)

    The Genetic Algorithm is an optimization technique based

    on principles of genetics and natural selection [5]. The GA

is based on the concept of evolution of individual structures, using selection and recombination operators to produce new samples in a search space. The algorithm thus evolves according to the fitness of each individual with respect to the problem (environment) at hand, so that an individual with higher fitness and better adaptation to the environment is more likely to survive and produce fitter offspring [6].

    There are two models of representation for the GA: binary

    and continuous. The continuous GA (explored in this work)

    works with a set of continuous variables, represented by

floating-point numbers. An advantage of this model is the speed of the cost function evaluations, since the variables need not be decoded, and it requires less storage [7].

The flowchart with the main steps of the algorithm is described in Figure 1.

The algorithm implemented in this work is based on [7]. First, a set of P chromosomes is generated randomly (within predetermined lower and upper bounds). Each chromosome is an N-dimensional candidate solution vector Xi, with i = 1, 2, . . . , P, where N is the number of


[Figure 1 flowchart: settings (cost function, algorithm parameters) → generating an initial population → cost for each chromosome → selection of the best chromosomes → cross-over between chromosomes → mutation → updated genes and costs → convergence analysis (NO: iterate again; YES: END).]

Fig. 1: Flowchart of Genetic Algorithm.

    variables of the problem. Then, each chromosome is evaluated

    in the cost function.

Next, the T best chromosomes are selected for pairing, forming the mating pool. Pairs are drawn from these T chromosomes by the roulette-wheel method. Each of the T/2 couples generates two descendants that inherit part of the parents' genetic material, resulting in a total of T offspring. In the sequel, the P - T weaker chromosomes are replaced by the descendants of the parents. A specific pair of chromosomes is given as:

Xdad = [Xd1, Xd2, . . . , XdN]; (1)

Xmom = [Xm1, Xm2, . . . , XmN]. (2)

The procedure for the crossover begins with the random selection of a crossover point (a random position), as shown in (3):

j = ⌈u · N⌉, (3)

where u is a random variable uniformly distributed (u.d.) in the interval [0,1], so that j ∈ {1, 2, . . . , N}; a new crossover point is drawn for each of the T/2 pairs. Then, each pair of chromosomes (dad and mom) is combined to form

    new values that appear in the offspring. For a specific pair

    represented in (1) and (2), the following values are defined:

XO1 = Xmom(j) − β[Xmom(j) − Xdad(j)]; (4)

XO2 = Xdad(j) + β[Xmom(j) − Xdad(j)]. (5)

The variable β is a value u.d. in [0,1]. Thus, the offspring are generated according to [7]:

Xoffs1 = [Xd1, Xd2, . . . , XO1, . . . , XmN]; (6)

Xoffs2 = [Xm1, Xm2, . . . , XO2, . . . , XdN]. (7)

Some objective functions may have many local minima and, sometimes, the algorithm may end up converging to a local minimum. Because of this, it is necessary to perform the mutation operation, which in this paper is a Gaussian mutation. The parameter Pm represents the mutation rate, resulting in Nm uniformly distributed mutations, according to:

Nm = ⌈Pm · (P − 1) · N⌉. (8)

If Xi is chosen then, after the mutation, it is replaced according to (9):

Xi ← Xi + N(0, σm²). (9)

In equation (9), N(0, σm²) represents a Gaussian random variable with zero mean and variance σm².

After this, the updated chromosome values are restricted so that they do not exceed a certain operating range, xmax and xmin, defined by the user.

Thus, the chromosomes are evaluated again in the fitness function, and the algorithm continues its execution until the total number of iterations (also defined by the user) is reached.
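The loop described above can be condensed into a short sketch. The paper's implementation was written in MATLAB; the following Python fragment is only a minimal illustration of the described continuous GA, in which the rank-weighted roulette wheel, the function name `continuous_ga`, and the default parameter values are assumptions made for the example, not the authors' code.

```python
import numpy as np

def continuous_ga(cost, n_vars, bounds, pop_size=20, n_keep=10,
                  p_mut=0.8, sigma_m=0.15, n_iter=200, seed=0):
    """Minimal continuous GA: selection, blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, n_vars))
    for _ in range(n_iter):
        costs = np.array([cost(x) for x in pop])
        pop = pop[np.argsort(costs)]              # best chromosomes first
        # roulette-wheel pairing among the n_keep best (rank weighting)
        ranks = np.arange(n_keep, 0, -1.0)
        probs = ranks / ranks.sum()
        children = []
        for _ in range(n_keep // 2):
            d, m = rng.choice(n_keep, size=2, replace=False, p=probs)
            dad, mom = pop[d], pop[m]
            j = rng.integers(n_vars)              # crossover point
            beta = rng.uniform()                  # blending factor
            o1 = mom[j] - beta * (mom[j] - dad[j])
            o2 = dad[j] + beta * (mom[j] - dad[j])
            children.append(np.concatenate([dad[:j], [o1], mom[j + 1:]]))
            children.append(np.concatenate([mom[:j], [o2], dad[j + 1:]]))
        pop[pop_size - n_keep:] = children        # replace the weakest
        # Gaussian mutation everywhere except the elite row 0
        n_mut = int(round(p_mut * (pop_size - 1) * n_vars))
        rows = rng.integers(1, pop_size, size=n_mut)
        cols = rng.integers(n_vars, size=n_mut)
        pop[rows, cols] += rng.normal(0.0, sigma_m, size=n_mut)
        pop = np.clip(pop, lo, hi)                # keep genes inside bounds
    costs = np.array([cost(x) for x in pop])
    return pop[np.argmin(costs)], float(costs.min())
```

On a convex test function such as the sphere, this sketch drives the best cost toward zero within a few hundred generations.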

    B. Particle Swarm Optimization (PSO)

    The PSO algorithm was developed based on the social

    behavior of animals searching for resources. This technique

    relies on simple concepts, having direct relation to the ex-

    isting methodologies of artificial life (swarm in general) and

    evolutionary computation [8].

    Instead of using genetic operators as in the GA, this

    algorithm evolves through cooperation and competition among

    its own elements, generating a timing behavior, which is a

    direct function of the efforts of each group member [9]. One

    of the fundamental hypothesis for the development of PSO

    was the attempt to simulate graphically the choreography

    of swarms seeking for food, based on experience gained by

    individuals in the group. This creates a decisive advantage

    in developing this model, which overcomes the disadvantages

    related to competition between the animals [8].

    The algorithm is modeled with primitive mathematical oper-

    ations, and this feature makes this model relatively inexpensive

    in terms of computational complexity. PSO is important in

    solving optimization problems due to its ability to handle

    difficult cost functions with multiple local minima and is

    designed to be robust and fast in nonlinear problems [9].

    The flowchart in Figure 2 shows the structure for imple-

    menting the PSO algorithm.

The algorithm implementation [9] generates individuals randomly, with uniform distribution, within a predefined initial interval (as in the GA).

Each group member is called a particle, as can be seen in (10), and is nothing more than a potential solution to the problem. The i-th particle of a given group is represented as follows:

Xi = [Xi,1, Xi,2, . . . , Xi,N], i = 1, 2, . . . , P, (10)

where N is the number of variables and P is the number of individuals (population size). These individuals, with their respective values, are evaluated in the objective function, and the best individual so far is chosen.


[Figure 2 flowchart: settings (cost function, algorithm parameters) → generating an initial population → obtaining initial cost → calculating the velocity → calculating the position → obtaining new costs → comparing with the previous → positions and updated costs → convergence analysis (NO: iterate again; YES: END).]

Fig. 2: Flowchart of PSO algorithm.

Each generated element has a position and a velocity. At any given iteration (it) they are described by equations (11) and (12):

Xi(it) = [Xi,1(it), Xi,2(it), . . . , Xi,N(it)]; (11)

Vi(it) = [Vi,1(it), Vi,2(it), . . . , Vi,N(it)]. (12)

As the algorithm evolves, the values presented by the particles in equations (11) and (12) are updated according to equations (13) and (14):

Vi(it + 1) = ω·Vi(it) + φ1·r1·(Xbest,i − Xi(it)) + φ2·r2·(Xbest,g − Xi(it)); (13)

Xi(it + 1) = Xi(it) + Vi(it + 1), (14)

where: Xbest,i and Xbest,g represent, respectively, the best position of an individual particle and the best position among all the particles; φ1 and φ2 are the individual acceleration coefficient and the overall acceleration coefficient, respectively; r1 and r2 are the cognitive parameter (individual choice of each element) and the social parameter (representing the thinking of the whole group), respectively, both diagonal matrices with entries u.d. in the interval [0,1]; ω is the inertia weight, which plays the role of controlling the operation of the swarm, being responsible for the diversification of the search and for preventing the algorithm from getting stuck in a local minimum.

In order to prevent velocity extrapolation, the maximum speed Vmax and minimum speed Vmin are added to the model, such that:

If Vi(it + 1) > Vmax, then Vi(it + 1) = Vmax; (15)

If Vi(it + 1) < Vmin, then Vi(it + 1) = Vmin. (16)

As in the GA, a restriction on the particle positions, xmax and xmin, is also imposed, in order to prevent solutions from drifting too far from the region of interest.
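The update equations (13)-(16) can be sketched directly. As with the GA, the paper's implementation was in MATLAB; this Python fragment is an illustrative assumption (function name `pso`, fully vectorized random factors, default coefficients), not the authors' code.

```python
import numpy as np

def pso(cost, n_vars, bounds, pop_size=20, omega=0.75, phi1=0.55,
        phi2=0.65, v_clip=0.8, n_iter=150, seed=0):
    """Minimal PSO implementing the velocity/position updates (13)-(16)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(pop_size, n_vars))   # initial positions
    v = np.zeros_like(x)                               # initial velocities
    pbest = x.copy()                                   # per-particle best
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()        # global best
    for _ in range(n_iter):
        r1 = rng.uniform(size=x.shape)                 # cognitive random factor
        r2 = rng.uniform(size=x.shape)                 # social random factor
        v = omega * v + phi1 * r1 * (pbest - x) + phi2 * r2 * (gbest - x)
        v = np.clip(v, -v_clip, v_clip)                # eqs (15)-(16)
        x = np.clip(x + v, lo, hi)                     # position restriction
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, float(pbest_cost.min())
```

Note how each particle is pulled both toward its own best position and toward the swarm's best, with the inertia term ω·Vi(it) preserving part of the previous motion.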

    III. MULTIVARIABLE SYSTEMS AND DECENTRALIZED

    CONTROL

    An open loop multivariable system with n inputs and n

    outputs is usually represented by the following expression:

    y(s) = G(s)u(s), (17)

where G is an n × n process transfer function matrix, y is an n-dimensional output vector, and u is an n-dimensional input vector.

Considering a two-input two-output (TITO) system, expression (17) can be written as:

    [y1(s)]   [G11(s)  G12(s)] [u1(s)]
    [y2(s)] = [G21(s)  G22(s)] [u2(s)].  (18)
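To make the coupling in (18) concrete, one can evaluate a TITO transfer matrix at a single frequency s = jω and apply it to an input vector. The first-order entries below are made-up placeholder dynamics for illustration, not a model taken from the paper.

```python
import numpy as np

def G(s):
    """Hypothetical TITO transfer matrix evaluated at a complex frequency s."""
    return np.array([[1.0 / (1 + 10 * s), 0.5 / (1 + 20 * s)],
                     [0.4 / (1 + 15 * s), 1.2 / (1 + 8 * s)]])

s = 1j * 0.05                  # s = j*omega, with omega = 0.05 rad/s
u = np.array([1.0, 0.0])       # excitation on input 1 only, nothing on input 2
y = G(s) @ u                   # phasors of y1 and y2, per equation (18)
# |y[1]| > 0 even though u2 = 0: the off-diagonal term G21 couples
# input 1 into output 2, which is exactly what decoupling must fight.
```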

Decentralized controllers (multi-loop controllers) have been extensively considered in industrial process control, simplifying the system structure and improving the performance [2]. The strategy consists of a process composed of n independent outputs which are controlled by n individual controllers [10], as seen in Figure 3.

However, this approach has limitations, because it may present stability problems. Moreover, the tuning of the controllers is not a trivial task, because many interactions occur between control loops, affecting the behavior of each loop individually [2].

[Figure 3 block diagram: references r1(t), r2(t), . . . , rn(t) enter n feedback loops; each controller (Controller 1 to Controller n) drives the multivariable process, whose outputs y1(t), y2(t), . . . , yn(t) are fed back to the summing junctions.]

Fig. 3: Multivariable Decentralized Control [3].

    The control strategy for this approach is decomposed into

    two stages: first the subsystem decoupling and then the

    subsystem control [11]. Figure 4 shows the structure of a

    multivariable TITO system considered in this work.

    In a TITO system, the decentralized control strategy can be

    represented as [2]:

    C(s) = [C1(s)    0   ]
           [  0    C2(s)],  (19)

where each one of the controllers follows equation (20):

    Cj(s) = Kpj + Kij/s + Kdj·s, (20)


[Figure 4 block diagram: r1(t) and r2(t) enter two feedback loops; PID 1 produces u1(t), which feeds G11 and G21, and PID 2 produces u2(t), which feeds G12 and G22; the outputs of G11 and G12 are summed to give y1(t), and the outputs of G21 and G22 are summed to give y2(t).]

Fig. 4: Decentralized control of a TITO system considered in this article [2], [3].

with j = 1, 2, and Kpj, Kij and Kdj being the proportional, integral and derivative gains, respectively.
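In simulation, each diagonal controller Cj(s) = Kpj + Kij/s + Kdj·s must be discretized. A minimal sketch, assuming forward-Euler integration and a backward-difference derivative (one of several valid discretizations; the class name and sample time below are illustrative choices, not the paper's):

```python
class DiscretePID:
    """Discrete approximation of C(s) = Kp + Ki/s + Kd*s.
    Integral: forward Euler; derivative: backward difference."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """One control update for the current loop error r(t) - y(t)."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

A decentralized TITO controller as in (19) is then simply two independent instances of this class, one per loop.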

    IV. CONSIDERED PROCESSES

    In this work, two cases are considered: the Quadruple-Tank

    [12] and Wood-Berry Distillation Column [13].

    A. The Quadruple-Tank

The Quadruple-Tank is a nonlinear system which consists of four interconnected tanks, as represented in Figure 5 [12].

[Figure 5 schematic: two upper tanks (Tank 3 and Tank 4) above two lower tanks (Tank 1 and Tank 2); Pump 1 and Pump 2, driven by the voltages v1 and v2, feed the tanks, and y1 and y2 are the measured levels of the lower tanks.]

Fig. 5: The Quadruple-Tank [12].

The aim of this system is to control the fluid level in the two lower tanks using two pumps. The input variables v1 and v2 represent, respectively, the voltage levels applied to pumps 1 and 2, while the output variables y1 and y2 are the level values in tanks 1 and 2, respectively, as measured by the level sensors.

The behavior of this system can be represented and studied considering two operating points [12], but only one of them is addressed in this work, represented by the transfer matrix in equation (21).

    G(s) = [ 1.5/(1 + 63s)              2.5/((1 + 39s)(1 + 63s)) ]
           [ 2.5/((1 + 56s)(1 + 91s))   1.6/(1 + 91s)            ].  (21)
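The diagonal entry G11(s) = 1.5/(1 + 63s) of (21) is first order, so its unit step response settles at the DC gain 1.5 with a time constant of 63 s. A quick check, sketched here with SciPy rather than the MATLAB environment used in the paper:

```python
import numpy as np
from scipy import signal

# G11(s) = 1.5 / (63 s + 1): gain 1.5, time constant 63 s
G11 = signal.TransferFunction([1.5], [63.0, 1.0])
t = np.linspace(0.0, 2000.0, 2001)       # same horizon as tmax in Table I
t, y = signal.step(G11, T=t)
# After roughly five time constants (~315 s) the response is
# within about 1% of the final value 1.5.
```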

For this problem a PI controller is designed based on [10], so the j-th candidate vector for the problem is given as:

Xj = [Kp1j, Ki1j, Kp2j, Ki2j], (22)

where Kp1j and Ki1j refer to C1(s), and Kp2j and Ki2j refer to C2(s).

    B. The Wood-Berry Distillation Column

The work presented by Wood and Berry is a validation of a control strategy for multivariable systems, aimed at reducing the interaction between the control loops of the overhead and bottoms compositions of a pilot-plant distillation column [13], shown in Figure 6.

One alternative for reducing this interaction is the use of compensating controllers. For that, it is necessary to find a matrix representation that satisfactorily describes the process; such a matrix is given by equation (23). The outputs of this process are the overhead and bottoms compositions, and the inputs are the reflux and steam flow rates.

[Figure 6: piping and instrumentation diagram of the pilot distillation column, with feed, reflux, steam, cooling water, top product and bottom product streams; the instruments shown are A (gas chromatograph), CR (composition recorder), FR (flow recorder), FRC (flow recorder controller), LLIC (liquid level indicator controller), PIC (pressure indicator controller) and TRC (temperature recorder controller), supervised by a digital computer.]

Fig. 6: The Wood-Berry Distillation Column [13].

    G(s) = [ 12.8e^(−s)/(1 + 16.7s)    −18.9e^(−3s)/(1 + 21s)   ]
           [ 6.6e^(−7s)/(1 + 10.9s)    −19.4e^(−3s)/(1 + 14.4s) ].  (23)


For this problem a PID controller is designed [3]. The j-th candidate vector for the PID controller applied to the problem is then given as:

Xj = [Kp1j, Ki1j, Kd1j, Kp2j, Ki2j, Kd2j], (24)

where Kp1j, Ki1j and Kd1j refer to C1(s), and Kp2j, Ki2j and Kd2j refer to C2(s).

    V. OPTIMIZATION PROBLEM

The objective is to design n (n = 2) PI/PID-based controllers, associated with the n loops, such that the outputs i of the processes present the desired dynamic responses with respect to their reference inputs i. In addition, it is desired that the inputs j do not contribute significantly to the outputs i (j ≠ i).

The cost function to be optimized is based on the weighted integral of the system performance, including the constraints on the controller outputs uij presented in [10], with slight modifications, as described by equations (25) and (26). These equations delimit the desired region of the dynamic response.

    Jij = ∫0^tmax { [max(fij^(low)(t) − yi(t), 0)]² + [max(yi(t) − fij^(up)(t), 0)]² } dt; (25)

    J = Σi=1..n Σj=1..n [ w′ij ∫0^tmax |uij(t)| dt + wij·Jij ]. (26)

In these equations, fij^(low)(t) and fij^(up)(t) are continuous functions defining the lower and upper boundaries of the shaded regions (shown in Figure 7); w′ij and wij are the weighting factors of the objective function element for the i-th output under the j-th set-point, with w′ij ≥ 0, wij ≥ 0 [10].

In order to evaluate the performance of the controllers, a unit step input signal is applied to the closed loop system. The desired response regions are shown in Figure 7. The main idea behind the cost function minimization is to keep the response i to the input i inside the region in Figure 7(a), and the response i to the input j, for i ≠ j, inside the region in Figure 7(b).
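The band-violation integral (25) is straightforward to evaluate numerically. A sketch, assuming a sampled response y(t) and boundary functions passed as callables (the name `region_cost` and the trapezoidal quadrature are choices of this example, not of the paper):

```python
import numpy as np

def region_cost(t, y, f_low, f_up):
    """Equation (25): integral of the squared excursion of y(t)
    outside the band [f_low(t), f_up(t)], via the trapezoidal rule."""
    below = np.maximum(f_low(t) - y, 0.0) ** 2   # penalty under the band
    above = np.maximum(y - f_up(t), 0.0) ** 2    # penalty over the band
    viol = below + above
    return float(np.sum(0.5 * (viol[:-1] + viol[1:]) * np.diff(t)))

# Toy check: a flat response inside a constant band costs nothing,
# while a response above the band is penalized.
t = np.linspace(0.0, 10.0, 101)
inside = region_cost(t, np.full_like(t, 1.0),
                     lambda t: 0.8 * np.ones_like(t),
                     lambda t: 1.2 * np.ones_like(t))
outside = region_cost(t, np.full_like(t, 2.0),
                      lambda t: 0.8 * np.ones_like(t),
                      lambda t: 1.2 * np.ones_like(t))
```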

[Figure 7: two shaded response-region plots delimited by fij^(up)(t) and fij^(low)(t); panel (a), the response region for the outputs with i = j, is parameterized by the levels X1, X2, X3, X4, Xss and the times t1, t2, tmax; panel (b), the response region for the outputs with i ≠ j, is parameterized by the levels X5, X6, X7, X8 and the times t3, tmax.]

Fig. 7: Regions for the desired system responses yij, i = 1, 2 and j = 1, 2, (a) i = j and (b) i ≠ j [10].

In these two figures, the parameters X1, X2, . . . , X8, Xss, t1, t2, t3 and tmax are constants specified by the user.

VI. SIMULATION PARAMETERS AND RESULTS

The algorithms have been implemented in MATLAB®, without using any toolbox. The adopted parameters for the simulations can be viewed in Tables I and II.

TABLE I
Time parameters for the cost function

             System (21)   System (23)
t1 (s)           300            20
t2 (s)           500            40
t3 (s)           400            30
tmax (s)        2000           100

TABLE II
Response parameters for the cost function

Xss   X1    X2     X3     X4    X5    X6     X7      X8
1.0   1.2   1.05   0.95   0.8   0.2   0.05   -0.05   -0.2

The weighting factors of the objective function, for the two problems, were: w11 = 1.0, w12 = 0.25, w21 = 0.25, w22 = 1.0; w′11 = w′12 = w′21 = w′22 = 0.1 [10].

TABLE III
Lower and upper bounds for the controller gains.

        Kp      Ki      Kd
max     1.5     0.15    0
min    -1.5    -0.15   -0.2

    The lower and upper bounds initially set for the controller

    gains are shown in Table III.

TABLE IV
Simulation parameters for the GA

Parameters                  System (21)   System (23)
Total number of genes N          4             6
Number of iterations           300           300
Time interval (s)           [0, 2000]      [0, 100]
Time resolution (s)            0.1           0.1
P                               20            20
Pm                             0.8           0.8
σm                            0.15          0.25
T                              P/2           P/2
xmax                             5             5
xmin                            -5            -5

Tables IV and V present the GA and PSO parameters for systems (21) and (23). For all the considered problems, the best result over 300 iterations/generations is taken as the PI/PID tuning parameters.


    A. The Quadruple Tank

    The parameters of PI controllers obtained by each method,

    as well as initial and final values of the cost function, can be

    seen in Table VI.

Figures 8 and 9 present the responses obtained with the best controller gains found by the GA for system (21).

Fig. 8: Unit step response, y11 and y21, for the system (21) using GA for tuning the PI controller.

Fig. 9: Unit step response, y12 and y22, for the system (21) using GA for tuning the PI controller.

Figures 10 and 11 show the responses obtained with the best controller gains found by PSO for system (21).

    B. The Wood-Berry Distillation Column

    The parameters of PID controllers obtained by each method,

    as well as initial and final values of the cost function, can be

    seen in Table VII.

Fig. 10: Unit step response, y11 and y21, for the system (21) using PSO for tuning the PI controller.

Fig. 11: Unit step response, y12 and y22, for the system (21) using PSO for tuning the PI controller.

TABLE V
Simulation parameters for the PSO

Parameters                      System (21)   System (23)
Total number of variables N          4             6
Number of iterations               300           300
Time interval (s)               [0, 2000]      [0, 100]
Time resolution (s)                0.1           0.1
P                                   20            20
ω                                 0.75          0.75
φ1                                0.55          0.55
φ2                                0.65          0.65
Vmax                               0.8           0.8
Vmin                              -0.8          -0.8
xmax                                 5             5
xmin                                -5            -5


    TABLE VI

    Parameters for PI controllers.

    Initial Values GA PSO

    Kp1 1.0 1.038538 -0.182045

    Ki1 1.0 0.192073 -0.001532

    Kp2 1.0 -0.094342 2.734893

    Ki2 1.0 -0.001184 0.149574

    J 177087.1015 730.725403 589.484503

Figures 12 and 13 show the responses obtained with the best gains found by the GA for system (23).

Fig. 12: Unit step response, y11 and y21, for the system (23) using GA for tuning the PID controller.

Fig. 13: Unit step response, y12 and y22, for the system (23) using GA for tuning the PID controller.

Figures 14 and 15 show the responses obtained with the best controller gains found by PSO for system (23).

Fig. 14: Unit step response, y11 and y21, for the system (23) using PSO for tuning the PID controller.

Fig. 15: Unit step response, y12 and y22, for the system (23) using PSO for tuning the PID controller.

    VII. CONCLUSIONS

The optimization heuristics presented in this work (GA and PSO) were able to find proper PI/PID parameters, so that the multivariable control systems achieved satisfactory performance in terms of dynamic response, steady-state errors and decoupling. It can also be concluded that the adopted cost function is suitable for measuring the performance of the considered systems.


    TABLE VII

    Parameters of PID controllers.

    Initial Values GA PSO

    Kp1 0.1 0.384753 0.371040

    Ki1 0.1 0.054480 0.021396

    Kd1 0.001 0.000438 -0.163217

    Kp2 0.1 -0.074940 -0.094906

    Ki2 0.1 -0.013637 -0.009450

    Kd2 0.001 -0.185986 -0.136004

    J 365689.1015 5.188941 4.885578

    Considering the final values of the objective functions,

    and taking into account the particular PSO and GA input

    parameters adopted, PSO provided the best cost minimization

    in all considered cases, resulting in dynamic responses and

    decoupling slightly better than GA.

As a suggestion for future work, a convergence and computational complexity analysis could be carried out in order to state which algorithm presents the best performance in terms of cost function minimization, convergence time and number of operations needed to achieve convergence.

    ACKNOWLEDGEMENT

    This work was supported by Fundacao Araucaria.

    REFERENCES

[1] K. Astrom, K. Johansson, and Q.-G. Wang, "Design of decoupled PI controllers for two-by-two systems," IEE Proceedings - Control Theory and Applications, vol. 149, no. 1, pp. 74-81, Jan. 2002.
[2] M. A. Johnson and M. H. Moradi, PID Control: New Identification and Design Methods. Springer, 2005.
[3] M. C. S. Swiech, E. Oroski, and L. V. R. d. Arruda, "Sintonia de controladores PID em colunas de destilacao atraves de algoritmos geneticos," in 3o Congresso Brasileiro de P&D em Petroleo e Gas, Salvador, Oct. 2005, pp. 1-6.
[4] R. Valerdi, "Heuristics for systems engineering cost estimation," IEEE Systems Journal, vol. 5, no. 1, pp. 91-98, Mar. 2011.
[5] J. H. Holland, Adaptation in Natural and Artificial Systems. Ann Arbor: University of Michigan Press, 1975.
[6] A. P. Grosko, J. R. Gorski, and J. d. S. Dias, "Algoritmo genetico: revisao historica e exemplificacao," in X Congresso Brasileiro de Informatica em Saude, Florianopolis, Oct. 2006, pp. 1-6.
[7] R. L. Haupt and S. E. Haupt, Practical Genetic Algorithms, 2nd ed. John Wiley & Sons, 2004.
[8] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, vol. 4, Nov./Dec. 1995, pp. 1942-1948.
[9] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Evolutionary Computation Proceedings, 1998 IEEE World Congress on Computational Intelligence, May 1998, pp. 69-73.
[10] A. Mehrabian and C. Lucas, "Automatic tuning of decentralized controllers by swarm intelligence," in Intelligent Systems, 3rd International IEEE Conference on, Sept. 2006, pp. 350-353.
[11] P. Perez and A. Sala, Multivariable Control Systems: An Engineering Approach. Springer, 2004.
[12] K. Johansson, "The quadruple-tank process: a multivariable laboratory process with an adjustable zero," IEEE Transactions on Control Systems Technology, vol. 8, no. 3, pp. 456-465, May 2000.
[13] R. K. Wood and M. W. Berry, "Terminal composition of a binary distillation column," Chemical Engineering Science, vol. 28, pp. 1707-1717, 1973.