Genetic Programming for Adaptive Signal Processing

by

Esparcia-Alcazar, A. I.


Info: 1998
Keywords: genetic algorithms, genetic programming
Abstract:
a new way of handling numerical parameters in GP, node gains. A node gain is a numerical parameter associated with a node that multiplies its output value. This concept was introduced by Sharman and Esparcia-Alcázar (1993) and is fully developed here.

The motivation for a parameterised GP is addressed, together with an overview of how it has been addressed by other authors. The drawbacks of these methods are highlighted: there is no established way of determining the number of parameters to use and their placement; further, unused parameters can be unnecessarily adapted while, on the other hand, useful ones might be eliminated. The way in which node gains overcome these problems is explained. An extra advantage is the possibility of expressing complex systems in a compact way, which is labelled "compacting effect" of node gains.

The costs of node gains are also pointed out: increase in the degrees of freedom and increased complexity. This, in theory, results in an increase of computational expense, due to the handling of more complex nodes and to the fact that an extra multiplication is needed per node. These costs, however, are expected to be of, at most, the same order of magnitude as those of the alternatives.

Experimental analysis shows that random node gains may not be able to achieve all the potential benefits expected. It is conjectured that optimisation of the values is needed in order to attain the full benefits of node gains, which brings along the next contribution.

a mathematical model is given for an adaptive GP. As concluded from the previous point, adaptation of the values of the node gains is needed in order to take full advantage of them. A Simulated Annealing (SA) algorithm is introduced as the adaptation algorithm. This is put in the context of an analogy: the adaptation of the gains by SA is equivalent to the learning process of an individual during its lifetime.

This analogy gives rise to the introduction of two learning schemes, labelled Lamarckian and Darwinian, which refer to the possibility of inheriting the learned gains.

The Darwinian and Lamarckian learning schemes for GP are compared to the standard GP technique and also to GP with random node gains. Statistical analysis, done for both fixed and time-varying environments, shows the superiority of both learning methods over the non-learning ones, although it is not possible at this stage to determine which of the two provides a better performance.

a number of interesting results in the channel equalisation problem. These are compared to those obtained by other techniques and it is concluded that the proposed method obtains better or similar performance when extreme values (maximum fitness or minimum error) are considered.
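Illustration (editor's sketch, not the implementation described in the thesis; all class and function names below are hypothetical): the code evaluates a GP expression tree in which every node's output is scaled by its own gain, and tunes those gains of a fixed tree with a small simulated-annealing loop. Under a Lamarckian scheme the tuned gains would be written back into the individual before reproduction; under a Darwinian scheme only the fitness obtained after tuning would be kept. The toy usage at the end fits the gains to a simple target purely to show the mechanics.

import math
import random

# Sketch only: a GP expression-tree node whose output is scaled by a per-node gain.
class GainNode:
    def __init__(self, func, children=(), gain=1.0):
        self.func = func                  # callable: func(inputs, *child_values) -> float
        self.children = list(children)
        self.gain = gain                  # the node gain multiplies this node's output

    def eval(self, inputs):
        vals = [child.eval(inputs) for child in self.children]
        return self.gain * self.func(inputs, *vals)

    def walk(self):
        yield self
        for child in self.children:
            yield from child.walk()

def anneal_gains(tree, fitness, steps=2000, temp=1.0, cooling=0.995):
    """Tune the gains of a fixed tree by simulated annealing; `fitness` is maximised."""
    nodes = list(tree.walk())
    current = best = fitness(tree)
    best_gains = [n.gain for n in nodes]
    for _ in range(steps):
        node = random.choice(nodes)
        old_gain = node.gain
        node.gain += random.gauss(0.0, 0.1)              # perturb one gain
        candidate = fitness(tree)
        if candidate >= current or random.random() < math.exp((candidate - current) / max(temp, 1e-12)):
            current = candidate                          # accept the move
            if candidate > best:
                best, best_gains = candidate, [n.gain for n in nodes]
        else:
            node.gain = old_gain                         # reject: restore the old gain
        temp *= cooling
    for n, g in zip(nodes, best_gains):                  # leave the best gains in the tree
        n.gain = g
    return best

# Toy usage: fit the gains of  g0*(g1*x + g2*sin(g3*x))  to  2x + 0.5*sin(3x).
x_in_sin = GainNode(lambda inp: inp["x"])
tree = GainNode(lambda inp, a, b: a + b,
                [GainNode(lambda inp: inp["x"]),
                 GainNode(lambda inp, v: math.sin(v), [x_in_sin])])
def fitness(t):
    xs = [i / 10.0 for i in range(-20, 21)]
    return -sum((t.eval({"x": x}) - (2.0 * x + 0.5 * math.sin(3.0 * x))) ** 2 for x in xs)
print(anneal_gains(tree, fitness))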

BibTex:
@PhdThesis{Esparcia-Alcazar:1998:thesis,
  author =       "Anna I. Esparcia-Alcazar",
  title =        "Genetic Programming for Adaptive Signal Processing",
  school =       "Electronics and Electrical Engineering, University of
                 Glasgow",
  year =         "1998",
  month =        jul,
  keywords =     "genetic algorithms, genetic programming",
  URL =          "http://www.elec.gla.ac.uk/~anna/abstract.htm",
  size =         "pages",
  abstract =     "This thesis is devoted to presenting the application
                 of the Genetic Programming (GP) paradigm to a class of
                 Digital Signal Processing (DSP) problems. Its main
                 contributions are

                 a new methodology for representing Discrete-Time
                 Dynamic Systems (DDS) as expression trees. The
                 objective is the state space specification of DDSs: the
                 behaviour of a system for a time instant t_0 is
                 completely accounted for given the inputs to the system
                 and also a set of quantities which specify the state of
                 the system. This means that the proposed method must
                 incorporate a form of memory that will handle this
                 information.

                 For this purpose a number of node types and associated
                 data structures are defined. These will allow for the
                 implementation of local and time recursion and also
                 other specific functions, such as the sigmoid commonly
                 encountered in neural networks. An example is given by
                 representing a recurrent NN as an expression tree.

                 a new approach to the channel equalisation problem. A
                 survey of existing methods for channel equalisation
                 reveals that the main shortcoming of these techniques
                 is that they rely on the assumption of a particular
                 structure or model for the system addressed. This
                 implies that knowledge about the system is available;
                 otherwise the solution obtained will have a poor
                 performance because it was not well matched to the
                 problem.

                  This gives a twofold motivation for applying GP to channel
                  equalisation, which is done in this work for the first
                  time: firstly, to provide a unified technique for a
                 wide class of problems, including those which are
                 poorly understood; and secondly, to find alternative
                 solutions to those problems which have been
                 successfully addressed by existing techniques.

                 In particular, in the equalisation of nonlinear
                 channels, which have been mainly addressed with Neural
                 Networks and various adaptation algorithms, the
                 proposed GP approach presents itself as an interesting
                  alternative.

                  a new way of handling numerical parameters in GP, node
                  gains. A node gain is a numerical parameter associated
                  with a node that multiplies its output value. This
                 concept was introduced by Sharman and Esparcia-Alcázar
                 (1993) and is fully developed here.

                 The motivation for a parameterised GP is addressed,
                 together with an overview of how it has been addressed
                 by other authors. The drawbacks of these methods are
                 highlighted: there is no established way of determining
                 the number of parameters to use and their placement;
                 further, unused parameters can be unnecessarily adapted
                 while, on the other hand, useful ones might be
                 eliminated. The way in which node gains overcome these
                 problems is explained. An extra advantage is the
                 possibility of expressing complex systems in a compact
                 way, which is labelled {"}compacting effect{"} of node
                 gains.

                 The costs of node gains are also pointed out: increase
                 in the degrees of freedom and increased complexity.
                 This, in theory, results in an increase of
                 computational expense, due to the handling of more
                 complex nodes and to the fact that an extra
                 multiplication is needed per node. These costs,
                 however, are expected to be of, at most, the same order
                 of magnitude as those of the alternatives.

                 Experimental analysis shows that random node gains may
                 not be able to achieve all the potential benefits
                 expected. It is conjectured that optimisation of the
                 values is needed in order to attain the full benefits
                 of node gains, which brings along the next
                 contribution.

                 a mathematical model is given for an adaptive GP. As
                 concluded from the previous point, adaptation of the
                 values of the node gains is needed in order to take
                 full advantage of them. A Simulated Annealing (SA)
                 algorithm is introduced as the adaptation algorithm.
                 This is put in the context of an analogy: the
                 adaptation of the gains by SA is equivalent to the
                 learning process of an individual during its
                 lifetime.

                  This analogy gives rise to the introduction of two
                 learning schemes, labelled Lamarckian and Darwinian,
                 which refer to the possibility of inheriting the
                 learned gains.

                 The Darwinian and Lamarckian learning schemes for GP
                 are compared to the standard GP technique and also to
                 GP with random node gains. Statistical analysis, done
                 for both fixed and time-varying environments, shows the
                 superiority of both learning methods over the
                 non-learning ones, although it is not possible at this
                 stage to determine which of the two provides a better
                 performance.

                 a number of interesting results in the channel
                 equalisation problem. These are compared to those
                 obtained by other techniques and it is concluded that
                 the proposed method obtains better or similar
                 performance when extreme values (maximum fitness or
                 minimum error) are considered.",
}
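
The abstract above also describes representing discrete-time dynamic systems (DDS) as expression trees by giving some node types access to stored past values, so that the tree's behaviour at a time instant t_0 depends on the inputs plus a state. The sketch below is a rough, hypothetical illustration of that idea (editor's example; not the node set defined in the thesis): a unit-delay terminal returns the tree's previous output, which already suffices to express a first-order recursive filter, and a sigmoid node hints at how a recurrent neural network could be drawn as a tree.

import math

# Rough sketch: an expression-tree evaluator with a one-step memory,
# letting a tree reference its own previous output (hypothetical node types).
class DelayedOutput:
    """Terminal returning the tree's output at the previous time step, y[n-1]."""
    def __init__(self):
        self.prev = 0.0                  # initial state y[-1]
    def eval(self, x_n):
        return self.prev

class Input:
    def eval(self, x_n):
        return x_n                       # current input sample x[n]

class Const:
    def __init__(self, value):
        self.value = value
    def eval(self, x_n):
        return self.value

class Add:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, x_n):
        return self.a.eval(x_n) + self.b.eval(x_n)

class Mul:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, x_n):
        return self.a.eval(x_n) * self.b.eval(x_n)

class Sigmoid:
    """Squashing node, of the kind used when a recurrent NN is drawn as a tree."""
    def __init__(self, a):
        self.a = a
    def eval(self, x_n):
        return 1.0 / (1.0 + math.exp(-self.a.eval(x_n)))

def run(tree, delay, samples):
    """Evaluate the tree over a sample sequence, updating the delay state each step."""
    out = []
    for x_n in samples:
        y_n = tree.eval(x_n)
        delay.prev = y_n                 # store y[n] so the next step can read y[n-1]
        out.append(y_n)
    return out

# y[n] = 0.5*x[n] + 0.3*y[n-1], a first-order recursive filter
d = DelayedOutput()
tree = Add(Mul(Const(0.5), Input()), Mul(Const(0.3), d))
print(run(tree, d, [1.0, 0.0, 0.0, 0.0]))   # impulse response: 0.5, 0.15, 0.045, 0.0135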