Abstract

This paper considers approximation methods for solving optimization problems and discusses how the associated computational burden can be reduced. The approximation range is selected in accordance with the optimal value of the quality criterion: upper and lower bounds are calculated, the control sequences that minimize these bounds are generated, and an approximate solution is then found. An approximate solution to the dual control problem is obtained from the problem of optimal control of a linear Gaussian system with a quadratic quality criterion and unknown parameters. For the dual control problem, the minimization procedure of the dynamic programming method determines the optimal loss function V_{N-k}(x(k) | Z^k) for an (N-k)-step process with initial state x(k). This optimal loss function must satisfy the Bellman equation. Solving the equation yields a quadratic form that does not possess the property of reproducibility: no method is known for obtaining recurrent expressions for the matrices U and T of this quadratic form. To take the required active control into account, U and T must therefore be calculated at the rate of the real process.
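
For reference, a standard form of the Bellman recursion for this loss function is sketched below; the stage-cost matrices Q, R and the terminal weight Q_N are assumed notation for the quadratic quality criterion and are not taken from the paper itself:

% Sketch of a standard Bellman recursion for the (N-k)-step loss function
% of the dual control problem; Q, R, Q_N are assumed stage-cost weights,
% Z^k denotes the observations available at step k.
V_{N-k}\bigl(x(k) \mid Z^{k}\bigr)
  = \min_{u(k)} \, \mathrm{E}\Bigl\{ x^{T}(k)\,Q\,x(k) + u^{T}(k)\,R\,u(k)
      + V_{N-k-1}\bigl(x(k+1) \mid Z^{k+1}\bigr) \Bigm| Z^{k} \Bigr\},
\qquad
V_{0}\bigl(x(N) \mid Z^{N}\bigr) = \mathrm{E}\bigl\{ x^{T}(N)\,Q_{N}\,x(N) \mid Z^{N} \bigr\}.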

First Page

123

Last Page

127


Included in

Engineering Commons
