A curated collection of cutting-edge research on agentic reasoning, memory mechanisms, and large language models in recommender systems
A curated collection of seminal papers that have shaped the field of agentic recommender systems.
The evolution of recommender systems has shifted preference storage from rating matrices and dense embeddings to semantic memory in the agentic era...
@article{chen2026memrec,
title = {MemRec: Collaborative Memory-Augmented Agentic Recommender System},
author = {Chen, Weixin and Zhao, Yuhan and Huang, Jingyuan and Ye, Zihe and
Ju, Clark Mingxuan and Zhao, Tong and Shah, Neil and Chen, Li and
Zhang, Yongfeng},
year = {2026},
journal = {arXiv preprint arXiv:2601.08816},
url = {https://arxiv.org/abs/2601.08816}
}
This work proposes iAgent, an LLM-based agent that acts as a protective shield between users and recommender systems, enabling more transparent and controllable recommendation processes.
@inproceedings{xu2025iagent,
title = {i{A}gent: {LLM} Agent as a Shield between User and Recommender Systems},
author = {Xu, Wujiang and Shi, Yunxiao and Liang, Zujie and Ning, Xuying and
Mei, Kai and Wang, Kun and Zhu, Xi and Xu, Min and Zhang, Yongfeng},
booktitle = {Findings of the Association for Computational Linguistics: {ACL} 2025},
pages = {18056--18084},
year = {2025}
}
MACRec introduces a multi-agent collaboration framework where multiple LLM agents work together to improve recommendation quality through collaborative reasoning and information sharing.
@inproceedings{wang2024macrec,
title = {{MACRec}: A Multi-Agent Collaboration Framework for Recommendation},
author = {Wang, Zhefan and Yu, Yuanqing and Zheng, Wendi and Ma, Weizhi and
Zhang, Min},
booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research
and Development in Information Retrieval},
pages = {2760--2764},
year = {2024}
}
This paper explores the potential of generative agents powered by large language models in recommendation scenarios, discussing their capabilities, challenges, and future directions.
@inproceedings{zhang2024generative,
title = {On Generative Agents in Recommendation},
author = {Zhang, An and Chen, Yuxin and Sheng, Leheng and Wang, Xiang and
Chua, Tat-Seng},
booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research
and Development in Information Retrieval},
pages = {1807--1817},
year = {2024}
}
This work presents a framework for simulating realistic user behaviors using LLM-based agents, enabling better testing and evaluation of recommender systems without real user data.
@article{wang2025user,
title = {User Behavior Simulation with Large Language Model-based Agents},
author = {Wang, Lei and Zhang, Jingsen and Yang, Hao and Chen, Zhi-Yuan and
Tang, Jiakai and Zhang, Zeyu and Chen, Xu and Lin, Yankai and Sun, Hao
and Song, Ruihua and others},
journal = {ACM Transactions on Information Systems},
volume = {43},
number = {2},
pages = {1--37},
year = {2025}
}
RAH proposes a human-centric framework that leverages LLMs as assistants to facilitate better interaction between users and recommender systems, emphasizing user control and interpretability.
@article{shu2023rah,
title = {{RAH}! {RecSys}-Assistant-Human: A Human-Centered Recommendation Framework
with Large Language Models},
author = {Shu, Yubo and Gu, Hansu and Zhang, Peng and Zhang, Haonan and Lu, Tun
and Li, Dongsheng and Gu, Ning},
journal = {IEEE Transactions on Computational Social Systems},
volume = {11},
number = {5},
pages = {6759--6770},
year = {2024}
}
RecMind introduces a comprehensive agent framework powered by large language models that can understand user intent, reason about preferences, and generate personalized recommendations through multi-step reasoning.
@inproceedings{wang2024recmind,
title = {{RecMind}: Large Language Model Powered Agent for Recommendation},
author = {Wang, Yancheng and Jiang, Ziyan and Chen, Zheng and Yang, Fan and
Zhou, Yingxue and Cho, Eunah and Fan, Xing and Lu, Yanbin and
Huang, Xiaojiang and Yang, Yingzhen},
booktitle = {Findings of the Association for Computational Linguistics: NAACL 2024},
pages = {4351--4364},
year = {2024}
}
InteRecAgent presents an AI agent that integrates large language models to enable interactive and conversational recommendations, supporting natural language queries and explanations for better user engagement.
@article{huang2025recommender,
title = {Recommender {AI} Agent: Integrating Large Language Models for Interactive
Recommendations},
author = {Huang, Xu and Lian, Jianxun and Lei, Yuxuan and Yao, Jing and
Lian, Defu and Xie, Xing},
journal = {ACM Transactions on Information Systems},
volume = {43},
number = {4},
pages = {1--33},
year = {2025}
}
Pre-extracted multimodal features for MovieLens 1M, enabling research on visual, textual, and acoustic perception in recommender systems. The features are derived from movie trailers and metadata and extracted with the pretrained encoders listed below.
| Component | Encoder / Source | Shape | Size |
|---|---|---|---|
| Visual | ResNet50 (ImageNet pre-trained) | 3,706 × 1,000 | ~30 MB |
| Textual | Sentence-Transformers | 3,706 × 384 | ~12 MB |
| Acoustic | VGGish (YouTube-8M pre-trained) | 3,706 × 128 | ~4 MB |
| Interactions | User-item ratings with metadata | 1,000,209 records | ~39 MB |
Note: All features are stored in NumPy .npy format (float32).
The interaction file follows RecBole format with fields:
userID, itemID, rating, timestamp, x_label (sensitive attribute),
u_gender, u_age, u_occupation.
Total dataset size: ~85 MB.
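As a quick sanity check, the components could be loaded as in the sketch below. The file names (`visual_feat.npy`, `textual_feat.npy`, `acoustic_feat.npy`, `ml-1m.inter`) are hypothetical placeholders, not the actual release names; the expected shapes and field names come from the table and note above.

```python
import numpy as np
import pandas as pd

# Hypothetical file names; substitute the actual component names from the release.
visual = np.load("visual_feat.npy")      # expected: (3706, 1000) float32, ResNet50 outputs
textual = np.load("textual_feat.npy")    # expected: (3706, 384)  float32, Sentence-Transformers
acoustic = np.load("acoustic_feat.npy")  # expected: (3706, 128)  float32, VGGish

assert visual.dtype == np.float32 and visual.shape == (3706, 1000)

# RecBole .inter files are tab-separated with a "name:type" header row
# (e.g. "userID:token"); strip the type suffix to recover the plain field names.
inter = pd.read_csv("ml-1m.inter", sep="\t")
inter.columns = [c.split(":")[0] for c in inter.columns]

print(len(inter))  # 1,000,209 rating records
print(inter[["userID", "itemID", "rating", "timestamp", "x_label"]].head())
```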
Citation:
@article{chen2025causality,
title = {Causality-Inspired Fair Representation Learning for Multimodal Recommendation},
author = {Chen, Weixin and Chen, Li and Ni, Yongxin and Zhao, Yuhan},
journal = {ACM Transactions on Information Systems},
year = {2025},
volume = {43},
number = {6},
articleno = {153},
numpages = {29},
doi = {10.1145/3706602}
}