
    0.9920% and 7.2613%, respectively. Similarly, for the fuel consumption, RMSE, R2 and

    MAPE were 0.2860%, 0.9299% and 7.5448%, respectively. With these results, we believe that

    the ANN can be used for the prediction of engine performance as an appropriate method for

spark-ignition (SI) engines. © 2004 Elsevier Ltd. All rights reserved.

    Keywords: Artificial neural-network; Spark-ignition engine; Variable valve-timing; Engine performance

    1. Introduction

    Valve control is one of the most important parameters for optimizing efficiency

    and emissions, permitting combustion engines to conform to future emission

    targets and standards. Control of the intake valve provides optimal filling of the

    cylinder at all engine speeds. This natural supercharging, and the improved en-

    gine-torque and power that accompany it, makes it possible to downsize engine-

    capacity and thus reduce fuel consumption at all operating conditions. For many

    years, in order to increase the performance of internal-combustion engines, many

    studies have been conducted. One of the most important of these studies is the

    Nomenclature

    ANN artificial neural-network

    aBDC after bottom-dead-center

    aOT after original-timing

    aTDC after top-dead-center

    bTDC before top-dead-center

    bBDC before bottom-dead-center

    bOT before original-timing

CA crankshaft angle (°)

LM Levenberg–Marquardt

MAPE mean absolute percentage error

    o output value

    OT original timing

    p pattern

    R2 absolute fraction of variance

    RMSE root-mean-squared error

    SCG scaled conjugate gradient

    SFC specific fuel-consumption (g/kWh)

    SI spark-ignition

t target value

VVT variable valve-timing


one which tries to optimize the timing and duration of opening of the intake and

exhaust valves for all intervals of engine load and speed in SI engines [1–3].

    Traditionally, valve timing has been designed to optimize the operation at high

engine-speeds and wide-open throttle conditions [2,4]. Variable valve-timing (VVT) relates to both the opening time and the duration of the valve's open-interval.

    Controlling valve-timing can improve the torque curve, the brake-power curve, or

    the indicator-power curve of a given engine. Variable valve-timing can also be used

    to reduce the fuel consumption and, to a small extent, the engine emissions [5]. This

    is achieved by controlling the maximum temperature in the cylinder and the amount

    of residuals remaining at the commencement of the compression stroke (i.e., exhaust-

    gas recirculation control) [6]. Numerous VVT systems have been proposed and some

of these have been demonstrated in engines [7–13].

The ANN technique can be used as an alternative method for modeling highly-

complex and ill-defined problems, and for engineering analysis and prediction. ANNs do not require a precise formulation of the physical relationship of the concerned

    problem. In other words, they only need solution examples concerning the prob-

    lem. ANNs have been used for energy systems, such as internal-combustion en-

gine performance [14], thermodynamic analysis of an ejector–absorption cycle

    [15], mapping and estimation of solar potential in Turkey [16], prediction of ax-

    ial-piston pump performance [17] and energy consumption prediction of passive-

    solar buildings [18].

    In this study, ANNs are used to determine the effects of intake valve-timing on

engine performance and fuel economy. Experimental studies were completed to obtain

    training and test data. Intake valve-timing and engine speed have been used as the

    input layer; engine torque and fuel consumption have been used as the output layer.

A total of 77 patterns was obtained from the experiments. Inputs for the net-

    work were the intake valve-timing and engine speed, while the outputs were torque

    and fuel consumption. The results of the system indicate a relatively good agreement

    between the predicted values and the experimental ones. The experimental study to

    determine power, torque and fuel consumption in a spark-ignition engine is complex,

    time consuming and costly. It also requires specific tools. To overcome these difficul-

    ties, an ANN can be used for the prediction of performance and fuel consumption in

    a SI engine.

    2. Artificial neural-networks

    Artificial intelligence consists of two major branches, namely the study of ANNs

    and expert systems. During the last ten years, there has been a substantial increase in

    the interest in ANNs. A neuron is the fundamental processing element of a neural

    network. An artificial neuron is a model, whose components have direct analogs

    to components of an actual neuron. ANNs have been used successfully in solving

complex problems in various fields of engineering, economics, neurology, mathematics, medicine, meteorology and many others. Some of the most important ones are

    in pattern, sound and speech recognition, in the identification of explosives in


passenger suitcases and in the identification of military targets [19–21]. A neural net-

    work operates like a black box model and does not require detailed information

    about the system. On the other hand, it learns the relationship between the input

parameters and the controlled and uncontrolled variables by studying previously recorded data, in much the same way that a non-linear regression would. Another

    advantage of using ANNs is their ability to handle large and complex systems with

    many interrelated parameters. They simply ignore existing data that are of minimal

    significance and concentrate instead on the more important inputs [22].

    The output of a specific neuron is a function of the weighted input, the bias of

    the neuron and the transfer function. Fig. 1 shows the basic artificial neuron of

    the hidden layer. The input layer, some hidden layers and an output layer are

    usually basic features of the network. In its simple form, each single neuron is

    connected to other neurons of a previous layer through adaptable synaptic

weights. Knowledge is usually stored as a set of connection weights. The output of any neuron is given by:

$s_i = \sum_{j=1}^{n} x_j w_{ij} + b_j \qquad (1)$

where

$y_j = f(s_i) \qquad (2)$

    The transfer function f can be selected from a set of readily-available functions.

The selected multi-layer ANN structure is shown in Fig. 2. It consists of two input neurons, one hidden layer, and two output neurons. The output of the output layer

    is a result of the combined effect of all the neurons in the network. Each input is mul-

    tiplied by a connection weight. In the simplest case, the products and biases are sim-

    ply summed, then transformed through a transfer function to generate a result, and

    finally an output obtained. Networks with biases can represent relationships between

    inputs and outputs more easily than networks without biases.

    Fig. 1. Presentation of a basic artificial neuron.
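As a minimal illustration of Eqs. (1) and (2), the following Python sketch computes the output of one layer of log-sigmoid neurons. The weights, biases and the scaled example input are arbitrary placeholders, since the trained values are not reported here.

```python
import numpy as np

def logsig(s):
    """Log-sigmoid transfer function f(s) = 1 / (1 + exp(-s)), cf. Eq. (6)."""
    return 1.0 / (1.0 + np.exp(-s))

def layer_output(x, W, b):
    """Weighted sum of the inputs plus bias (Eq. (1)), passed through the
    transfer function (Eq. (2))."""
    s = W @ x + b      # s_i = sum_j x_j * w_ij + b_i, one entry per neuron
    return logsig(s)   # y_i = f(s_i)

# Arbitrary placeholder values: a 15-neuron hidden layer fed by two inputs
# (engine speed and intake valve-timing), here already scaled to [0, 1].
rng = np.random.default_rng(0)
W_hidden = rng.normal(scale=0.5, size=(15, 2))
b_hidden = rng.normal(scale=0.5, size=15)
x = np.array([0.55, 0.25])   # scaled [speed, valve-timing] pattern
print(layer_output(x, W_hidden, b_hidden))
```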


    Back-propagation algorithms and their variants are the most popular learning

    algorithms. The back-propagation algorithm in an ANN is one of the most effective

    learning algorithms. The training of all patterns of a training data group is named an

epoch [22,23].

Gradient descent and gradient descent with momentum are generally slower than the other algorithms for practical problems because they require small learning-rates for stable learning. Moreover, their success depends on the user-dependent parameters, the learning-rate and the momentum constant. Algorithms such as scaled conjugate gradient (SCG), BFGS quasi-Newton and Levenberg–Marquardt (LM) are faster and use standard numerical

    optimization-techniques. An ANN with a back propagation algorithm learns by

    changing the weights, and these changes are stored as perception information, or

    knowledge.
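The sketch below shows one weight update of plain gradient-descent back-propagation for a single-hidden-layer network with log-sigmoid hidden neurons and linear outputs. It is only meant to make the learning rule concrete; it does not reproduce the LM or SCG variants used in this study.

```python
import numpy as np

def logsig(s):
    return 1.0 / (1.0 + np.exp(-s))

def backprop_step(x, t, W1, b1, W2, b2, lr=0.01):
    """One plain gradient-descent update minimising 0.5 * ||t - o||^2.
    W1, b1: hidden layer (log-sigmoid); W2, b2: linear output layer."""
    # Forward pass
    h = logsig(W1 @ x + b1)                   # hidden activations
    o = W2 @ h + b2                           # linear outputs
    # Backward pass: error signals for each layer
    delta_out = o - t                         # dE/do for squared error
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)   # through the logsig derivative
    # Gradient-descent updates of weights and biases
    W2 -= lr * np.outer(delta_out, h)
    b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hid, x)
    b1 -= lr * delta_hid
    return 0.5 * float(np.sum((t - o) ** 2))  # error before the update
```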

    The error is described by the root-mean-squared error (RMSE) and defined as

    follows:

$\mathrm{RMSE} = \left( \frac{1}{p} \sum_{j} \left( t_j - o_j \right)^2 \right)^{1/2} \qquad (3)$

    In addition, the absolute fraction of variance (R2) and mean absolute percentage

    error (MAPE) are defined, respectively, as follows:

$R^2 = 1 - \left( \frac{\sum_{j} \left( t_j - o_j \right)^2}{\sum_{j} \left( o_j \right)^2} \right) \qquad (4)$

    and

    Fig. 2. ANN architecture used for 15 neurons in a single hidden-layer.
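A small Python sketch of these error measures follows. Eqs. (3) and (4) are implemented as written above; the MAPE is computed in its standard percentage-of-target form, which is an assumption since its defining equation falls on a page not reproduced above.

```python
import numpy as np

def rmse(t, o):
    """Root-mean-squared error, Eq. (3)."""
    t, o = np.asarray(t, float), np.asarray(o, float)
    return float(np.sqrt(np.mean((t - o) ** 2)))

def r2_fraction(t, o):
    """Absolute fraction of variance, Eq. (4): 1 - sum((t - o)^2) / sum(o^2).
    Note this is not the usual coefficient of determination taken against
    the mean of the targets."""
    t, o = np.asarray(t, float), np.asarray(o, float)
    return float(1.0 - np.sum((t - o) ** 2) / np.sum(o ** 2))

def mape(t, o):
    """Mean absolute percentage error, assumed in the standard
    percentage-of-target form."""
    t, o = np.asarray(t, float), np.asarray(o, float)
    return float(100.0 * np.mean(np.abs((t - o) / t)))

# Example with hypothetical target/output values
targets = [10.07, 11.05, 10.13]
outputs = [9.90, 11.30, 10.00]
print(rmse(targets, outputs), r2_fraction(targets, outputs), mape(targets, outputs))
```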


3600 rpm at 200 rpm intervals, after the engine reached the working temperature of 80 °C. During the experiments, the average ambient temperature and atmospheric pressure were 22 °C and 752 mm-Hg, respectively.

    4. Application of the artificial neural-network

    ANNs have been used in a broad range of applications, including pattern classi-

fication, identification, prediction, optimization and control processes. ANNs learn

by using some examples, namely patterns. In other words, to train and test a neural

    network, input data and corresponding target values are necessary. The examples in

this study are numerical values taken from the experimental results, and 77

    patterns were obtained from the experiments. Here, ANNs were used for the mod-

eling of fuel consumption and torque in a spark-ignition engine. Inputs for the network are engine speed and intake valve-timing; the outputs are torque and fuel

    consumption.

    The experimental results were used to train and test: 62 experimental results, from

    the total of 77, were employed as data sets to train the network, while 15 results were

    used as test data. The architecture of the ANN becomes 2-15-2, 2 corresponding to

    the input values, 15 for the number of hidden layer neurons and 2 for the outputs.

    The back-propagation learning algorithm has been used in a feed-forward, single

    hidden layer. Variants of the algorithm used in the study are the LM and scaled con-

    jugate gradient (SCG). The selected neural-network architecture consists of one hid-

den layer of log-sigmoid neurons followed by an output layer of linear neurons.

    Linear neurons are those which have a linear transfer-function. The transfer function

is purelin. Back-propagation networks use the log-sigmoid (logsig) or the tan-sigmoid (tansig) transfer-function. A logistic sigmoid (logsig) transfer-function has been

    used i.e.,

    Table 2

    Samples for input and output

Engine speed (rpm) Intake-valve timing (°CA) Torque (Nm) Fuel consumption (kg/h)

    1600 30 bOT 10.07 0.6456

    2200 30 bOT 11.05 0.9562

    2600 20 bOT 10.13 1.0432

    3000 20 bOT 9.22 1.1165

    1800 10 bOT 10.15 0.7353

    2400 10 bOT 11.2 1.050

    2600 0 OT 11.25 1.123

    3600 0 OT 9.43 1.4667

    1600 10 aOT 9.53 0.661

    2800 10 aOT 10.97 1.195

    1800 20 aOT 9.43 0.7582

2600 20 aOT 10.5 1.1096

2200 30 aOT 9.22 0.9370

    3400 30 aOT 10.13 1.3971


$f(x) = \frac{1}{1 + e^{-x}} \qquad (6)$

    where x is the weighted sum of the input.

A computer program was implemented in MATLAB 6.3. In the training, the number of neurons in the single hidden-layer was increased from 15 to 19. When

    the network training was successfully finished, the network was tested with test data.

    Some statistical methods, R2, RMSE and MAPE values, have been used for compar-

    ison. Selected sample data sets used for training and testing the network are shown in

    Table 2.
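The models in this study were built in MATLAB. As an illustrative stand-in, the sketch below sets up a comparable 2-15-2 feed-forward network in Python with scikit-learn. The file names are hypothetical placeholders for the 77 experimental patterns, and the 'lbfgs' solver is used only because scikit-learn provides neither the LM nor the SCG training algorithm.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical files holding the 77 patterns:
# inputs  = [engine speed (rpm), intake valve-timing (CA relative to OT)]
# outputs = [torque (Nm), fuel consumption (kg/h)]
X = np.loadtxt("engine_inputs.csv", delimiter=",")    # shape (77, 2), assumed file
y = np.loadtxt("engine_outputs.csv", delimiter=",")   # shape (77, 2), assumed file

# 62 patterns for training, 15 for testing, as in the paper
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=15, random_state=0)

# 2-15-2 network: 15 log-sigmoid hidden neurons, linear (purelin-like) outputs.
# Inputs are standardised first; 'lbfgs' stands in for the LM/SCG trainers.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(15,), activation="logistic",
                 solver="lbfgs", max_iter=30000, random_state=0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
test_rmse = np.sqrt(np.mean((y_test - pred) ** 2, axis=0))
print("Test RMSE [torque, fuel consumption]:", test_rmse)
```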

    5. Results and discussion

Numerical results obtained from the experiments and the related parameters have

    been used to train the network. The intake valve timing, engine speed, fuel consump-

    tion and torque for a single-cylinder four-stroke spark-ignition engine have been

    used to train the network. Initially, fifteen hidden neurons in a single hidden-layer

    have been used for all the algorithms. Then, the number of neurons has been in-

    creased. The results revealed that the optimum number of hidden neurons is different

    for different algorithms. Of all the variants that we studied, the fastest learning is ob-

tained with the LM algorithm. SCG is also fast, but it produces slightly larger errors

than the LM.

Statistical values, namely the RMSE, R2 and MAPE of the torque and fuel consumption, are given in Tables 3 and 4 for the different training algorithms and numbers of hidden neurons, respectively. Comparisons of some experimental and ANN-predicted

    data are also given in Table 3 for the torque. The LM algorithm, with 15 neurons,

    has produced the best results. It shows that R2 is 0.9920%, MAPE is 7.2613% and

    the RMSE value is 0.9017% in the test. As for the training, R2 is 0.9935% and MAPE

    is 6.6644%, while the RMSE value is 0.8071%. For fuel consumption, it shows that

    R2 is 0.9299%, MAPE is 7.5448% and the RMSE value is 0.2860% in the test. As for

Table 3

Error values of the ANN approach for torque used in training and testing

Algorithm  Hidden number  RMSE-test  R2-test  MAPE-test  RMSE-training  R2-training  MAPE-training

LM   15  0.9017  0.9920   7.2613  0.8071  0.9935   6.6644
LM   16  1.0771  0.9884   9.3971  1.0730  0.9885   9.2242
LM   17  0.9242  0.9916   8.0124  0.8769  0.9923   7.2603
LM   18  0.9776  0.9906   8.3436  0.8804  0.9922   7.1448
LM   19  2.0223  0.9634  11.7415  0.7821  0.9939   5.8978
SCG  15  0.8932  0.9888   8.8819  0.7288  0.9910   7.8691
SCG  16  1.1255  0.9815  11.1495  1.0770  0.9799  11.7259
SCG  17  1.1282  0.9815  10.2462  0.9954  0.9828  10.7015
SCG  18  1.2834  0.9761  14.1361  1.2530  0.9729  14.3240
SCG  19  0.9214  0.9884   9.1869  0.9302  0.9859  10.4135


    the training, R2 is 0.9988% and MAPE is 2.9744%, while the RMSE value is

    0.0372%.

    Fig. 4 compares the calculated experimental and predicted engine performance

    values for the test data. As shown in the figure, the values predicted by the ANN

    approximately match the experimental values. Fig. 5 shows the effect of the number

    Table 4

    Error values of the ANN approach for fuel consumption used in training and testing

Algorithm  Hidden number  RMSE-test  R2-test  MAPE-test  RMSE-training  R2-training  MAPE-training

LM   15  0.2860  0.9299   7.5448  0.0372  0.9988  2.9749
LM   16  0.3109  0.9166   7.8440  0.0273  0.9994  2.1699
LM   17  0.3022  0.9228   7.1863  0.0247  0.9995  1.8757
LM   18  0.3040  0.9195   7.5993  0.0355  0.9989  2.7299
LM   19  0.3321  0.9024  11.7153  0.0232  0.9995  1.7823
SCG  15  0.2535  0.9225   8.1570  0.0388  0.9978  3.4550
SCG  16  0.2668  0.9121   9.5868  0.0372  0.9980  3.1932
SCG  17  0.2868  0.8901  15.7186  0.1051  0.9825  9.5916
SCG  18  0.2623  0.9152  10.6323  0.0514  0.9961  4.5956
SCG  19  0.2617  0.9126   9.8629  0.0499  0.9962  3.9420

Fig. 4. Experimental and ANN-predicted results for engine performance: SFC (g/kWh), fuel consumption (kg/h), engine torque (Nm) and power (kW), each plotted against the test patterns.


    of neurons in the hidden layer on the root mean square errors for torque and fuel

consumption. The number of training epochs for each neural network is 30,000. It is shown that

    the training error is minimized when 15 neurons are used for both the LM and SCG

    algorithms for torque and fuel consumption. Thus, these ANN models, with mini-

    mum errors, are adopted for further studies.
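To mirror the comparison in Fig. 5, the hidden-layer size can simply be swept and the test RMSE recorded for each setting. The sketch below continues from the previous one (same data arrays and imports) and again uses 'lbfgs' only as a stand-in for the LM and SCG trainers.

```python
# Sweep the hidden-layer size from 15 to 19 neurons, as in the paper,
# and record the test RMSE for torque and fuel consumption.
for n_hidden in range(15, 20):
    sweep_model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="logistic",
                     solver="lbfgs", max_iter=30000, random_state=0))
    sweep_model.fit(X_train, y_train)
    errors = np.sqrt(np.mean((y_test - sweep_model.predict(X_test)) ** 2, axis=0))
    print(f"{n_hidden} hidden neurons -> test RMSE [torque, fuel]: {errors}")
```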

    6. Conclusion

The aim of this paper was to use neural networks for the estimation of the performance and fuel consumption of a spark-ignition engine using different intake

    valve timings and engine speeds. The overall results show that the networks can

    be used as an alternative way for predicting the performances of these systems.

    This paper introduced the ANN technique for modeling the variable-intake valve-

    timing in a spark-ignition engine. It used 62 results as data sets to train the network,

    while 15 results were used as test data from the total of 77 experimental results. LM

    and SCG algorithms have been studied and the best results were obtained from the

    LM algorithm with 15 neurons and the mean absolute percentage error was limited

to 7.2–8.8% for both the LM and SCG algorithms. So, these ANN-predicted results

can be considered to be within acceptable limits. It is observed that the predicted results for the torque are better than those for the fuel consumption. The results show

    good agreement between the predicted and experimental values.

    References

    [1] Maekawa K, Ohsawa N. Development of a valve-timing control system. SAE Paper, 890680, 1989.

    [2] Dresner T, Barkan P. A review and classification of variable valve-timing. SAE Paper, 890674, 1989.

    [3] Asmus TG. Perspectives on application of variable valve-timing. SAE Paper, 910445, 1991.

    [4] Sher E, Bar-Kohany T. Optimization of variable valve-timing for maximizing performance of an

unthrottled SI engine: a theoretical study. Energy 2002;27(8):757–75.

    [5] Nagumo S, Hara S. Study of fuel-economy improvement through control of intake valve-closing

    timing: cause of combustion deterioration and improvement. JSAE Rev 1995.

Fig. 5. Effects of the number of neurons in the hidden layer on the root-mean-square error (RMSE vs. neuron number for the LM and SCG algorithms, for torque and fuel consumption).


    [6] Kohany T, Sher E. Using the 2nd Law of thermodynamics to optimize variable valve-timing for

    maximizing torque in a throttled SI engine. SAE paper, 1999-01-0328, 1999.

    [7] Freudenstein F, Maki ER, Tsai L. The synthesis and analysis of variable valve-timing mechanisms for

    internal-combustion engines. SAE Paper, 880387, 1988.

    [8] Leone TG, Christenson EJ, Stein RA. Comparison of variable camshaft timing strategies at part load.

    SAE Paper, 960584, 1996.

    [9] Hosaka T, Hamazaki M. Development of the variable valve-timing and lift (VTEC) engine for the

    Honda NSX. SAE Paper, 910008, 1991.

[10] Akbaş A, Çınar C, Sekmen Y. Buji ile ateşlemeli motorlarda değişken supap zamanlamasının performansa etkileri üzerine deneysel bir araştırma. Mühendislik Bilimleri, Cilt 7, Sayı 1, Sayfa 35–38, 2001 (in Turkish).

    [11] Nakayasu T, Yamada H, Suda T, Iwase N, Takahashi K. Intake and exhaust systems equipped with a

    variable valve-control device for enhancing of engine power. SAE Paper, 2001-01-0247, 2001.

    [12] Hara S, Kenji K, Matsumoto Y. Application of a valve lift and timing-control system to an

    automotive Engine. SAE Paper, 890681, 1989.

[13] Bozza F, Gimelli A, Senatore A, Caraceni A. A theoretical comparison of various VVA systems for performance and emission improvements of SI-engines. SAE paper, 20001.

[14] Arcaklioglu E, Celikten I. A diesel engine's performance and exhaust emissions. Appl Energy 2004;80(1):11–22.

[15] Sozen A, Arcaklioglu E, Ozalp M. A new approach to thermodynamic analysis of ejector–absorption cycle: artificial neural-networks. Appl Therm Eng 2003;23(8):937–52.

    [16] Sozen A, Arcaklioglu E, Ozalp M. Estimation of solar potential in Turkey by artificial neural-

networks using meteorological and geographical data. Energy Conver Manage 2004;45:3033–52.

    [17] Karkoub MA, Gad EO, Rabie MG. Predicting axial piston-pump performance using neural

networks. Mech Mach Theory 1999;34(8):1211–26.

    [18] Kalogirou SA, Bojic M. Artificial neural-networks for the prediction of the energy consumption of a

passive solar-building. Energy 2000;25:479–91.

[19] Kalogirou SA, Panteliou S, Dentsoras A. Modeling of solar domestic water-heating systems using artificial neural-networks. Solar Energy 1999;65:335–42.

    [20] Kalogirou SA, Neocleous CS, Schizas CN. Artificial neural-networks for modeling the starting-up of

a solar steam-generator. Appl Energy 1988;60:89–100.

    [21] Chouai A, Laugier S, Richon D. Modeling of thermodynamic properties using neural networks:

application to refrigerants. Fluid Phase Equilibria 2002;199(1–2):53–62.

    [22] Kalogirou SA. Application of artificial neural-networks in energy systems: a review. Energy Conver

Manage 1999;40:1073–87.

[23] Haykin S. Neural networks: a comprehensive foundation. New York: Macmillan; 1994.

[24] Sozen A, Arcaklioglu E. Prediction of solar potential in Turkey. Appl Energy 2004;80(1):35–45.
