Dynamic Scheduling Method for Job-Shop Manufacturing Systems by Deep Reinforcement Learning with Proximal Policy Optimization

Abstract

With the rapid development of Industry 4.0, modern manufacturing systems have been undergoing a profound digital transformation. New technologies help improve production efficiency and product quality. However, in increasingly complex production systems, operational decision making faces greater challenges in sustaining manufacturing that satisfies rapidly changing customer and market demands. Rule-based heuristic approaches are currently widely used for scheduling management in production systems, but they depend heavily on expert domain knowledge. As a result, decision-making efficiency cannot be guaranteed, nor can such approaches meet the dynamic scheduling requirements of the job-shop manufacturing environment. In this study, we propose using deep reinforcement learning (DRL) methods to tackle the dynamic scheduling problem in a job-shop manufacturing system with unexpected machine failures. The proximal policy optimization (PPO) algorithm is used within the DRL framework to accelerate the learning process and improve performance. The proposed method was validated in a real-world dynamic production environment, and it performs better than state-of-the-art methods.
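As a rough illustration of the PPO mechanism the abstract refers to, the following minimal Python sketch computes PPO's clipped surrogate objective. This is an assumed, self-contained illustration of the general algorithm, not the authors' implementation (their simulation framework is linked under Related URLs as SimRLFab); the function names and sample values are hypothetical.

```python
# Minimal sketch of PPO's clipped surrogate objective (illustrative only,
# not the authors' code). For each sampled action, PPO maximises
#     L = min(r * A, clip(r, 1 - eps, 1 + eps) * A)
# where r is the probability ratio pi_new(a|s) / pi_old(a|s) and A is the
# estimated advantage; clipping limits how far one update moves the policy.

def clip(x: float, lo: float, hi: float) -> float:
    """Constrain x to the interval [lo, hi]."""
    return max(lo, min(hi, x))

def ppo_clipped_objective(ratios, advantages, eps=0.2):
    """Average clipped surrogate objective over a batch of transitions."""
    terms = []
    for r, a in zip(ratios, advantages):
        unclipped = r * a
        clipped = clip(r, 1.0 - eps, 1.0 + eps) * a
        terms.append(min(unclipped, clipped))  # pessimistic (lower) bound
    return sum(terms) / len(terms)

if __name__ == "__main__":
    # A ratio of 1.5 with positive advantage is clipped at 1 + eps = 1.2,
    # so this single-sample objective evaluates to 1.2 rather than 1.5.
    print(ppo_clipped_objective([1.5], [1.0]))
```

In the paper's setting, the policy would map a job-shop state (e.g., machine and queue status) to a dispatching action, with the clipping keeping each policy update conservative and hence the learning process stable.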

Publication DOI: https://doi.org/10.3390/su14095177
Divisions: College of Engineering & Physical Sciences > School of Engineering and Technology > Mechanical, Biomedical & Design
College of Engineering & Physical Sciences
College of Engineering & Physical Sciences > Aston Institute of Urban Technology and the Environment (ASTUTE)
Aston University (General)
Additional Information: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Funding: This research was funded by the RECLAIM project "Remanufacturing and Refurbishment Large Industrial Equipment" and received funding from the European Commission Horizon 2020 research and innovation programme under grant agreement No 869884.
Uncontrolled Keywords: Artificial neural networks, Deep reinforcement learning, Dynamic scheduling, Industry 4.0, Manufacturing sustainability, Geography, Planning and Development, Renewable Energy, Sustainability and the Environment, Environmental Science (miscellaneous), Energy Engineering and Power Technology, Management, Monitoring, Policy and Law
Publication ISSN: 2071-1050
Last Modified: 12 Nov 2024 17:02
Date Deposited: 27 Apr 2022 09:57
Related URLs: https://www.mdp ... -1050/14/9/5177 (Publisher URL)
http://github.c ... Kuhnle/SimRLFab (Related URL)
http://www.scop ... tnerID=8YFLogxK (Scopus URL)
PURE Output Type: Article
Published Date: 2022-04-25
Accepted Date: 2022-04-22
Authors: Zhang, Ming (ORCID Profile 0000-0001-5202-5574)
Lu, Yang
Hu, Youxi
Amaitik, Nasser (ORCID Profile 0000-0002-0962-4341)
Xu, Yuchun (ORCID Profile 0000-0001-6388-813X)

Version: Published Version

License: Creative Commons Attribution

