
Publication

Panel on Neural Relational Data: Tabular Foundation Models, LLMs... or both?

Paolo Papotti; Carsten Binnig
In: Proceedings of the VLDB Endowment (PVLDB), Vol. 18, No. 12, Pages 5513-5515, VLDB Endowment, 2025.

Abstract

Recent breakthroughs in artificial intelligence have produced Large Language Models (LLMs) and a new wave of Tabular Foundation Models (TFMs). Both promise to redefine how we query, integrate, and reason over relational data, yet they embody opposing philosophies: LLMs pursue broad generality through massive text-centric pre-training, whereas TFMs embed inductive biases that mirror table structure and relational semantics. This panel assembles researchers and practitioners from academia and industry to debate which path will most effectively power the next generation of data management systems: specialized TFMs, ever-stronger general-purpose LLMs, or a hybrid of the two. Panelists will confront questions of generality, accuracy, scalability, robustness, cost, and usability across core data management tasks such as Text-to-SQL translation, schema understanding, and entity resolution. The discussion aims to surface critical research challenges and guide the community's investment of effort and resources over the coming years.

Further Links