Computationally Optimized Nash-MTL: An Efficient Multi-Task Learning Method Based on Intelligent Gradient Sampling
DOI: https://doi.org/10.61173/2f0wr760

Keywords: Multi-task learning, Gradient optimization, Nash bargaining solution, Computational optimization

Abstract
To address the exponential computational complexity of Nash-MTL in multi-task learning, this paper proposes a computationally optimized Nash-MTL framework built on three core components. First, a phased gradient update mechanism combines cyclic sampling with a dynamic random sampling strategy, minimizing redundant gradient computation while preserving optimization performance. Second, a dynamic importance scheduling model assesses task priority from the loss change rate and gradient magnitude, allocating computational resources intelligently; it is supplemented by a safety recovery strategy. Third, a stability guarantee mechanism performs periodic global updates and anomaly-triggered rollback operations. Experiments on the QM9, NYUv2, and Cityscapes datasets confirm the framework's effectiveness: it maintains task performance (within a 5% deviation) while reducing computation time by 55.4%. These advances substantially improve the feasibility of deploying complex multi-task learning systems in resource-constrained edge computing environments.
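To make the dynamic importance scheduling idea concrete, the sketch below scores tasks by combining a loss change rate with a gradient-magnitude signal and then selects the top-priority tasks for full gradient computation. This is an illustrative weighting under assumed conventions (the function names, the `alpha` mixing parameter, and the window-based change rate are not from the paper), not the authors' exact rule:

```python
import numpy as np

def task_priority(loss_history, grad_norms, alpha=0.5):
    """Score each task by mixing its recent loss change rate with its current
    gradient norm; higher scores mark tasks whose gradients should be
    recomputed more often. Illustrative weighting, not the paper's exact rule.

    loss_history: (T, K) array holding the last T loss values for K tasks.
    grad_norms:   (K,) array of current per-task gradient norms.
    alpha:        weight on the loss-change signal (assumed hyperparameter).
    """
    # Relative loss change over the stored window: tasks whose loss is still
    # moving quickly get higher priority.
    change_rate = np.abs(loss_history[-1] - loss_history[0]) / (
        np.abs(loss_history[0]) + 1e-8
    )
    # Normalize both signals to [0, 1] so they are comparable before mixing.
    cr = change_rate / (change_rate.max() + 1e-8)
    gn = grad_norms / (grad_norms.max() + 1e-8)
    return alpha * cr + (1 - alpha) * gn

def select_tasks(scores, k):
    """Pick the k highest-priority tasks for a full gradient update;
    the remaining tasks would reuse cached gradients this step."""
    return np.argsort(scores)[::-1][:k]
```

In a training loop, only the selected subset would go through the expensive Nash bargaining gradient computation each step, with the periodic global update and rollback mechanism guarding against drift on the deprioritized tasks.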