Interactive Demo
We show a text-video pair from the MSR-VTT 1K test split, the top-5 results of the initial retriever (X-Pool), and X-CoT's re-ranked top-5 with explanations. The ground-truth video is highlighted in green if it appears in the top-5.
a person is connecting something to system
X‑CoT Explanations
Abstract
Prevalent text‑to‑video retrieval systems mainly adopt embedding models for feature extraction and compute cosine similarities for ranking. However, this design presents two limitations. First, low‑quality text‑video data pairs could compromise retrieval, yet are hard to identify and examine. Second, cosine similarity alone provides no explanation for the ranking results, limiting interpretability. We ask: can we interpret the ranking results, so as to assess the retrieval models and examine the text‑video data? This work proposes X‑CoT, an explainable retrieval framework built upon LLM CoT reasoning in place of embedding model‑based similarity ranking. We first expand the existing benchmarks with additional video annotations to support semantic understanding and reduce data bias. We also devise a retrieval CoT consisting of pairwise comparison steps, yielding detailed reasoning and a complete ranking. X‑CoT empirically improves retrieval performance and produces detailed rationales. It also facilitates analysis of model behavior and data quality.
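To make the pairwise-comparison idea concrete, here is a minimal sketch of re-ranking an initial top-k list with a pairwise comparator. This is not the authors' implementation: the `llm_prefers_a` function below is a hypothetical stand-in (simple word overlap with the query) for an LLM chain-of-thought judgment of which candidate video better matches the text.

```python
from functools import cmp_to_key

def llm_prefers_a(query, cap_a, cap_b):
    """Stand-in for an LLM pairwise CoT judgment.
    Here: prefer the candidate caption sharing more words with the query.
    Returns a negative value if candidate a should rank higher."""
    overlap_a = len(set(query.split()) & set(cap_a.split()))
    overlap_b = len(set(query.split()) & set(cap_b.split()))
    return (overlap_b > overlap_a) - (overlap_b < overlap_a)

def rerank(query, candidates):
    """Re-rank the initial retriever's top-k (video_id, caption) pairs
    using pairwise comparisons instead of cosine similarity."""
    return sorted(
        candidates,
        key=cmp_to_key(lambda a, b: llm_prefers_a(query, a[1], b[1])),
    )

query = "a person is connecting something to system"
top_k = [
    ("v3", "a dog runs on grass"),
    ("v1", "a person is connecting a cable to a computer system"),
    ("v2", "a man cooking food"),
]
ranked = rerank(query, top_k)  # "v1" moves to the top
```

In the actual framework, each comparison would also emit a natural-language rationale, which is what makes the final ranking explainable.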
Citation
If you find this work valuable for your research, please cite the following paper:
@inproceedings{pulakurthi2025x,
title={X-CoT: Explainable Text-to-Video Retrieval via LLM-based Chain-of-Thought Reasoning},
author={Pulakurthi, Prasanna Reddy and Wang, Jiamian and Rabbani, Majid and Dianat, Sohail and Rao, Raghuveer and Tao, Zhiqiang},
booktitle={Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
pages={31172--31183},
year={2025}
}