The performance of decomposition-based algorithms is sensitive to the shape of the Pareto front, since their preset reference vectors cannot always adapt to varied problem characteristics when no a priori knowledge is available. To address this issue, this paper proposes an adaptive reference vector reinforcement learning approach for decomposition-based algorithms, applied to industrial copper burdening optimization. The proposed approach comprises two main operations, i.e., a reinforcement learning operation and a reference point sampling operation. Because the states of the reference vectors frequently interact with the landscape environment, the reinforcement learning operation treats reference vector adaptation as a reinforcement learning task, in which each reference vector learns from environmental feedback and selects optimal actions to gradually fit the problem characteristics. Accordingly, the reference point sampling operation uses estimation-of-distribution learning models to sample new reference points. Finally, the resulting algorithm is applied to the proposed industrial copper burdening problem. For this problem, an adaptive penalty function and a soft-constraint-based relaxation approach are used to handle complex constraints. Experimental results on both benchmark problems and real-world instances verify the competitiveness and effectiveness of the proposed algorithm.
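To illustrate the general idea described above (not the authors' actual method), the following minimal Python sketch shows how reference vector adaptation could be framed as a reinforcement learning task and how an estimation-of-distribution step could resample reference points. All names, actions, the reward definition, and the Gaussian model are illustrative assumptions made for this sketch only.

```python
# Hypothetical sketch: each reference vector of a decomposition-based algorithm
# acts as an agent that picks an action (contract / keep / dilate) under an
# epsilon-greedy Q-learning rule, and a Gaussian estimation-of-distribution
# model resamples new reference points around the vectors judged useful.
import numpy as np

rng = np.random.default_rng(0)
M, K = 3, 10                          # number of objectives, number of reference vectors
ACTIONS = np.array([0.9, 1.0, 1.1])   # assumed actions: contract / keep / dilate
EPS, ALPHA, GAMMA = 0.1, 0.5, 0.9     # epsilon-greedy rate, learning rate, discount

V = rng.dirichlet(np.ones(M), size=K)  # reference vectors on the unit simplex
Q = np.zeros((K, len(ACTIONS)))        # one Q-row per reference vector (agent)

def reward(v_old, v_new, front):
    """Illustrative reward: improvement of the best achievement-scalarizing
    value along the vector after it has been moved."""
    g = lambda v: np.min(np.max(front / np.maximum(v, 1e-12), axis=1))
    return g(v_old) - g(v_new)

def rl_adapt(front):
    """One reinforcement-learning adaptation step over all reference vectors."""
    for k in range(K):
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[k]))
        v_new = V[k] ** ACTIONS[a]
        v_new /= v_new.sum()                         # stay on the simplex
        r = reward(V[k], v_new, front)
        Q[k, a] += ALPHA * (r + GAMMA * Q[k].max() - Q[k, a])
        if r > 0:
            V[k] = v_new                             # keep moves that fit the front

def eda_resample(n_new=3):
    """Estimation-of-distribution step: fit a Gaussian to the currently useful
    vectors and sample new reference points from it."""
    mask = Q.max(axis=1) > 0
    useful = V[mask] if mask.any() else V
    mu, sigma = useful.mean(axis=0), useful.std(axis=0) + 1e-3
    samples = np.abs(rng.normal(mu, sigma, size=(n_new, M)))
    return samples / samples.sum(axis=1, keepdims=True)

# Toy "environment": a random objective set standing in for the current population.
front = rng.random((50, M))
for _ in range(20):
    rl_adapt(front)
V = np.vstack([V, eda_resample()])[:K]   # replace part of the reference set
```

In this sketch the environmental feedback is reduced to a scalarized improvement signal; in a full algorithm the reward would be derived from the population produced under each reference vector, and the constraint handling (adaptive penalty, soft-constraint relaxation) would act on the fitness evaluation rather than on the reference vectors themselves.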