Reinforcement Learning and Dynamic Programming Using Function Approximators

Categories: Nonfiction, Computers, Advanced Computing, Theory, Science & Nature, Technology, Electricity, Electronics
Author: Lucian Busoniu, Robert Babuska, Bart De Schutter, Damien Ernst
ISBN: 9781351833820
Publisher: CRC Press
Publication: July 28, 2017
Imprint: CRC Press
Language: English

From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems.

However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, have changed our understanding of what is possible. These developments have led to reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence.

Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications.
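To give a concrete flavor of the value-iteration class of methods described above, the sketch below implements fitted Q-iteration, a batch approximate value-iteration algorithm, with a linear function approximator. The one-dimensional toy dynamics, the feature map, and all parameter values are illustrative assumptions chosen for this sketch; they are not the book's code (the code used in the book's studies is available from the authors' website mentioned below).

```python
# Minimal sketch of fitted Q-iteration (approximate value iteration) on a
# batch of transitions. Dynamics, features, and parameters are illustrative
# assumptions, not taken from the book.
import numpy as np

rng = np.random.default_rng(0)

GAMMA = 0.95           # discount factor
ACTIONS = [-1.0, 1.0]  # discrete action set
N_ITERS = 50           # number of Q-iterations
N_SAMPLES = 2000       # size of the transition batch

def step(s, a):
    """Toy 1-D dynamics: the action nudges the state; reward penalizes |s|."""
    s_next = np.clip(s + 0.1 * a + 0.01 * rng.standard_normal(), -1.0, 1.0)
    return s_next, -s ** 2

def features(s, a):
    """Polynomial features of (state, action) for a linear Q-approximator."""
    return np.array([1.0, s, s ** 2, a, s * a])

# Collect a batch of random transitions (s, a, r, s').
batch = []
for _ in range(N_SAMPLES):
    s = rng.uniform(-1.0, 1.0)
    a = rng.choice(ACTIONS)
    s_next, r = step(s, a)
    batch.append((s, a, r, s_next))

theta = np.zeros(5)  # weights of the linear Q-function

def q_value(theta, s, a):
    return features(s, a) @ theta

# Fitted Q-iteration: regress Q_{k+1} onto Bellman targets built from Q_k.
Phi = np.array([features(s, a) for s, a, _, _ in batch])
for _ in range(N_ITERS):
    targets = np.array([
        r + GAMMA * max(q_value(theta, s_next, a2) for a2 in ACTIONS)
        for _, _, r, s_next in batch
    ])
    theta, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

# Greedy policy extracted from the learned Q-function.
def policy(s):
    return max(ACTIONS, key=lambda a: q_value(theta, s, a))

print("greedy action at s=0.5:", policy(0.5))
```

The same batch-regression template underlies policy iteration with approximation as well (fit a Q-function for the current policy, then improve the policy greedily), whereas policy-search methods optimize the policy parameters directly.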

The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work.

Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments.

More books from CRC Press

Circuits and Electronics
Handbook of Discrete and Computational Geometry
Clinical Governance in General Dental Practice
Building Materials, Health and Indoor Air Quality
ATM Technology for Broadband Telecommunications Networks
Intermittent and Nonstationary Drying Technologies
Human-Robot Interaction in Social Robotics
Carpentry and Joinery 1
Circuit Analysis and Feedback Amplifier Theory
Leading and Managing Innovation
Numerical Methods for Chemical Engineers Using Excel, VBA, and MATLAB
Discrete Mathematics and Applications
Friction Stir Welding
Bassett's Environmental Health Procedures
Lubricant Additives