Academic Seminar
Performance-enhanced aggregated representation learning
Speaker: 李文慧 (Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Abstract: Representation learning is a key technique in modern machine learning that enables models to identify meaningful patterns in complex data. However, different methods tend to extract distinct aspects of the data, and relying on a single approach may overlook insights relevant to downstream tasks. This paper proposes a performance-enhanced aggregated representation learning method, which combines multiple representation learning approaches to improve downstream-task performance. The framework is general and flexible, accommodating a wide range of loss functions commonly used in machine learning models. To ensure computational efficiency, we use surrogate loss functions to facilitate practical weight estimation. Theoretically, we prove that our method asymptotically achieves optimal performance in downstream tasks, in the sense that the risk of our predictor is asymptotically equivalent to the theoretical minimum. We further show that our method asymptotically assigns nonzero weights only to correctly specified models. We evaluate our method on diverse tasks against state-of-the-art machine learning models, and the experimental results demonstrate that it consistently outperforms the baselines, showing its effectiveness and broad applicability in real-world machine learning scenarios.
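The core idea of the abstract — fit predictors on several candidate representations, then estimate aggregation weights by minimizing a (surrogate) squared loss of the weighted combination — can be sketched as follows. This is an illustrative toy, not the paper's method: the feature maps standing in for representation learners, the least-squares surrogate, and the absence of simplex constraints on the weights are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the response depends on both a linear and a quadratic
# component, so no single candidate representation is fully adequate.
n = 200
X = rng.normal(size=(n, 4))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

# Hypothetical candidate "representations" (stand-ins for PCA,
# autoencoder, kernel embeddings, etc. in the actual framework).
reps = [X[:, :1], X[:, :2] ** 2, X]

# Step 1: fit a least-squares predictor on each representation.
preds = []
for Z in reps:
    Z1 = np.hstack([np.ones((n, 1)), Z])  # add intercept
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    preds.append(Z1 @ beta)
P = np.column_stack(preds)  # n x K matrix of candidate predictions

# Step 2: weight estimation via a squared surrogate loss on the
# weighted combination (the paper's criterion and constraints differ).
w, *_ = np.linalg.lstsq(P, y, rcond=None)
agg = P @ w

risks = [np.mean((y - p) ** 2) for p in preds]
agg_risk = np.mean((y - agg) ** 2)
# In-sample, the aggregated risk cannot exceed the best single
# candidate's, since least squares optimizes over all weightings.
print(agg_risk <= min(risks))
```

In-sample the aggregate is at least as good as the best single candidate by construction; the paper's theoretical contribution is the stronger statement that, asymptotically, the aggregated predictor's risk matches the theoretical minimum on downstream tasks.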
Speaker bio: 李文慧 is an assistant research fellow at the Center for Forecasting Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences. She received her B.S. from Wuhan University in 2019 and her Ph.D. from the University of Science and Technology of China in 2024. Her research lies in econometrics and statistics, focusing on model averaging, high-dimensional statistical inference, factor models, and machine learning methods. Her work has appeared in the INFORMS Journal on Computing (one of the UTD24 journals), among other venues.
Time: 1:30–2:30 p.m., Wednesday, April 23, 2025
Venue: Room 101, Teaching Building 4
Contact: 胡晓楠