Online platforms increasingly employ AI-generated review summaries (AIGS) to reduce information overload, but the implications of AI selection for individual reviews remain unclear. Using hotel review data from TripAdvisor, this study examines which reviews are chosen by AI for summarization and how this selection relates to review helpfulness. Logistic regression results show that reviews with a higher proportion of hotel-related words are more likely to be selected by AI, whereas reviews with greater lexical diversity or length are less likely to be chosen. Negative binomial and Tobit models reveal a significant negative relationship between AI selection and helpful votes, suggesting that content favored by AI may differ from what readers find most helpful. These findings highlight a potential mismatch between algorithmic curation and human evaluation, offering practical implications for both reviewers and platforms in designing AI–human co-curation systems.
Table of Contents
Abstract
Introduction
Related Literature
    AI-Generated Review Summaries (AIGS)
    Review Helpfulness
Empirical Analysis
    Empirical Setting
    Data
    Variable Definition
    Method
    Results
Discussion and Conclusion
References