Deep Q-network model for dynamic job shop scheduling problem based on discrete event simulation

Winter Simulation Conference (2020)

Abstract
In the last few decades, dynamic job scheduling problems (DJSPs) have received growing attention from researchers and practitioners. However, the potential of reinforcement learning (RL) methods has not been exploited adequately for solving DJSPs. In this work, a deep Q-network (DQN) model is applied to train an agent to learn how to schedule jobs dynamically by minimizing the delay time of jobs. The DQN model is trained based on a discrete event simulation experiment. The model is tested by comparing the trained DQN model against two popular dispatching rules, shortest processing time and earliest due date. The obtained results indicate that the DQN model performs better than these dispatching rules.
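The two baseline dispatching rules named in the abstract, shortest processing time (SPT) and earliest due date (EDD), can be sketched on a minimal single-machine job queue. This is an illustrative sketch only, with hypothetical job data; the paper's actual experiment uses a job shop simulated via discrete event simulation, and the objective shown here (total tardiness past due dates) is one common way to measure job delay.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    processing_time: float
    due_date: float

def total_tardiness(sequence):
    """Process jobs in order on one machine and sum each
    job's delay past its due date (0 if it finishes on time)."""
    clock, tardiness = 0.0, 0.0
    for job in sequence:
        clock += job.processing_time
        tardiness += max(0.0, clock - job.due_date)
    return tardiness

# Hypothetical job set for illustration.
jobs = [Job("J1", 4, 4), Job("J2", 1, 10), Job("J3", 3, 5)]

spt = sorted(jobs, key=lambda j: j.processing_time)  # shortest processing time first
edd = sorted(jobs, key=lambda j: j.due_date)         # earliest due date first

print("SPT tardiness:", total_tardiness(spt))  # → 4.0
print("EDD tardiness:", total_tardiness(edd))  # → 2.0
```

A DQN agent in this setting would learn a state-dependent choice among such rules (or among waiting jobs) rather than applying one rule uniformly, which is why it can outperform either fixed rule.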
Key words
deep Q-network model, dynamic job shop scheduling problem, dynamic job scheduling problems, DJSPs, reinforcement learning methods, delay time, discrete event simulation experiment, trained DQN model, shortest processing time