Baglietto M., Cervellera C., Parisini T., Sanguineti M., Zoppoli R. Approximating networks, dynamic programming and stochastic approximation. In: ACC 2000 - American Control Conference (Chicago (IL), USA, 28-30 June 2000). Proceedings, vol. 5, pp. 3304-3308. IEEE, 2000.

Abstract (English)
The approximate solution of a general N-stage stochastic optimal control problem is considered. It is known that discretizing the state components uniformly when applying dynamic programming may cause the procedure to incur the "curse of dimensionality". Approximating networks, i.e., linear combinations of parametrized basis functions endowed with density properties in some normed linear spaces, are then defined and used in two approximate methods (examples of such networks are neural networks with one hidden layer and linear output activation functions, radial basis functions, etc.). The first method consists of approximating the optimal cost-to-go functions in dynamic programming (a technique known in the literature as "neuro-dynamic programming"); the second reduces the original functional optimization problem to a nonlinear programming one, which is solved by means of stochastic approximation. Approximating networks of suitable types benefit from the property that the number of parameters to be optimized and the number of samples needed to approximate some classes of regular functions grow only linearly (or moderately) with the number of dimensions of the functions' arguments and with the number of samples used to train the networks. We deem that such properties may enable us to solve N-stage stochastic optimal control problems while often avoiding the curse of dimensionality. The two methods are tested and compared in an example involving a 10-dimensional state vector.
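To make the abstract's central object concrete, the following is a minimal illustrative sketch (not the paper's implementation) of an approximating network of the kind described: a one-hidden-layer network with sigmoidal basis functions and linear output weights, fitted to a sample cost-to-go surrogate on a 10-dimensional state space by a Robbins-Monro-style stochastic approximation scheme (plain SGD with decreasing step size). The target function J(x) = ||x||^2 is a hypothetical stand-in for an optimal cost-to-go, chosen only for self-containment.

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden = 10, 32          # state dimension, number of basis functions

W = rng.normal(scale=0.3, size=(hidden, d))   # input weights
b = np.zeros(hidden)                          # biases
c = rng.normal(scale=0.1, size=hidden)        # linear output weights

def net(x):
    """Linear combination of parametrized sigmoidal basis functions."""
    return np.tanh(W @ x + b) @ c

def sgd_step(x, target, lr):
    """One stochastic-approximation update on the squared fit error."""
    global W, b, c
    h = np.tanh(W @ x + b)
    err = h @ c - target
    c -= lr * err * h                       # gradient w.r.t. output weights
    grad_h = err * c * (1.0 - h**2)         # backprop through tanh
    b -= lr * grad_h
    W -= lr * np.outer(grad_h, x)

def mse(n=200):
    """Monte Carlo estimate of the approximation error."""
    xs = rng.normal(size=(n, d))
    return np.mean([(net(x) - x @ x) ** 2 for x in xs])

before = mse()
lr0 = 1e-3
for k in range(20000):
    x = rng.normal(size=d)                  # sampled state
    sgd_step(x, x @ x, lr0 / (1 + k / 5000))  # decreasing step size
after = mse()
print(f"MSE before: {before:.2f}  after: {after:.2f}")
```

The decreasing step-size sequence mirrors the classical stochastic-approximation conditions; only the network parameters (a number growing linearly with the state dimension) are optimized, rather than a uniform grid over the 10-dimensional state space.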

DOI: 10.1109/ACC.2000.879176

Subject: approximation theory; dynamic programming; neural nets; optimal control; stochastic systems

