Adaptive Feature Alignment with Theoretical Guarantee for Domain Adaptation

Piyush Rai
Anmoldeep Singh
Nitish K. Verma
Rohit Khandelwal

Abstract

Domain adaptation aims to learn effective predictive models for a target domain that differs from the labeled source domain, where the distribution mismatch between the two domains poses a significant challenge. We propose Adaptive Feature Alignment with Theoretical Guarantee (AFA-TG), a novel method leveraging a quadratic domain discrepancy (QDD) metric that measures differences in the means and covariances of latent features. We establish a theoretical upper bound on the target generalization error that shrinks as QDD is minimized. Empirical evaluation on a synthetic toy dataset demonstrates that AFA-TG outperforms raw-feature and MMD-based baselines under domain shift.
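The full paper is not reproduced on this page, but the abstract's description of QDD is concrete enough to sketch as code. Below is a minimal PyTorch sketch, assuming QDD is the unweighted sum of the squared Euclidean gap between the source and target feature means and the squared Frobenius gap between their covariances; the function name qdd, the equal weighting of the two terms, and the trade-off weight lam are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def qdd(feats_s: torch.Tensor, feats_t: torch.Tensor) -> torch.Tensor:
    """Quadratic domain discrepancy between source and target feature batches.

    Assumed form: squared Euclidean distance between batch means plus squared
    Frobenius distance between batch covariances. Inputs are (batch, dim)
    tensors with batch >= 2 (torch.cov needs more than one observation).
    """
    mean_gap = feats_s.mean(dim=0) - feats_t.mean(dim=0)
    # torch.cov expects (dim, batch), hence the transposes.
    cov_gap = torch.cov(feats_s.T) - torch.cov(feats_t.T)
    return mean_gap.pow(2).sum() + cov_gap.pow(2).sum()

# Toy usage: a mean-shifted target batch yields a positive discrepancy that a
# feature extractor could be trained to reduce alongside the task loss,
# e.g. total_loss = task_loss + lam * qdd(f_s, f_t).
f_s = torch.randn(32, 16)
f_t = torch.randn(32, 16) + 0.5
print(qdd(f_s, f_t).item())
```

Because both terms are differentiable in the features, such a penalty can be backpropagated through the feature extractor during training, in the spirit of the CORAL- and MMD-style alignment losses cited below.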

Article Details

How to Cite
Rai, P., Singh, A., Verma, N. K., & Khandelwal, R. (2025). Adaptive Feature Alignment with Theoretical Guarantee for Domain Adaptation. Special Interest Group on Artificial Intelligence Research, 1(1). Retrieved from https://sigair.org/index.php/journal/article/view/14
Section
Articles

References

Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79:151–175, 2010.

Nicolas Courty, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(9):1853–1865, 2016.

Damien François, Vincent Wertz, and Michel Verleysen. Choosing the metric: a simple model approach. In Meta-Learning in Computational Intelligence, pages 97–115. Springer, 2011.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016.

Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning, pages 97–105. PMLR, 2015.

Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In International Conference on Machine Learning, pages 2208–2217. PMLR, 2017.

Santisudha Panigrahi, Anuja Nanda, and Tripti Swarnkar. A survey on transfer learning. In Intelligent and Cloud Computing: Proceedings of ICICC 2019, Volume 1, pages 781–789. Springer, 2020.

Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53–69, 2015.

Ievgen Redko, Amaury Habrard, and Marc Sebban. Theoretical analysis of domain adaptation with optimal transport. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2017, Skopje, Macedonia, September 18–22, 2017, Proceedings, Part II, pages 737–753. Springer, 2017.

Baochen Sun and Kate Saenko. Deep CORAL: Correlation alignment for deep domain adaptation. In Computer Vision – ECCV 2016 Workshops: Amsterdam, The Netherlands, October 8–10 and 15–16, 2016, Proceedings, Part III, pages 443–450. Springer, 2016.