We Got Bots – Exploring the Differences Between Likely-Human and Likely-Bot Responses in Online Research

Hendricks Lui, Hin Tat (2024) We Got Bots – Exploring the Differences Between Likely-Human and Likely-Bot Responses in Online Research. Honours thesis, University of Southern Queensland. (Unpublished)


Abstract

As more researchers collect data through online surveys, they are increasingly finding bot-generated and low-quality survey responses (i.e., likely-bot responses) in their data. However, little research has examined how such responses may affect research findings, and the differences between authentic survey responses (i.e., likely-human responses) and likely-bot responses remain largely unknown. This study explored the differences between the two groups, using a known bot-corrupted dataset to examine how likely-bot responses may compromise research integrity. Responses (N = 350) were sorted as either likely-human (n = 85) or likely-bot (n = 240) using detection strategies consistent with past literature, incorporating manual flagging and statistical outlier analyses. Independent-samples t-tests, comparisons of response consistency, comparisons of correlations between the groups, and a moderation analysis were conducted, with trait sadism and trolling perpetration as the measured variables. Analyses revealed significant differences between the groups in both mean responses and response consistency, with the likely-bot group responding more consistently. The correlation between trait sadism and trolling perpetration was also significantly stronger in the likely-bot group. The moderation analysis provided further insight into the influence of likely-bot responses on the predictive relationship between trait sadism and trolling perpetration, with the relationship strengthened in the likely-bot group. These findings raised concern that likely-bot responses may have become sophisticated enough to produce expected responses and evade detection, compromising research integrity. Future research is encouraged to use these findings to develop a universal data-cleaning protocol and to further explore whether likely-bot responses disproportionately target incentivised online surveys.
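The group comparisons described in the abstract are standard statistical procedures, and a minimal sketch in Python (SciPy and statsmodels) is given below: an independent-samples t-test, a Fisher r-to-z comparison of the sadism-trolling correlation across groups, and a moderation model with group status as the moderator. The column names (sadism, trolling, group) and the simulated data are hypothetical stand-ins for illustration, not the thesis's actual data or analysis code.

# Minimal sketch (not the author's analysis code) of the group comparisons
# described in the abstract. Column names (sadism, trolling, group) and the
# simulated data below are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(42)

# Simulated data mirroring the reported group sizes; the likely-bot group is
# given lower variance (more consistent) and a stronger sadism-trolling link,
# matching the pattern of results the abstract reports.
n_human, n_bot = 85, 240
sad_h = rng.normal(2.0, 0.8, n_human)
sad_b = rng.normal(3.0, 0.4, n_bot)
troll_h = 1.0 + 0.3 * sad_h + rng.normal(0.0, 0.7, n_human)
troll_b = 0.5 + 0.8 * sad_b + rng.normal(0.0, 0.3, n_bot)

df = pd.DataFrame({
    "sadism": np.concatenate([sad_h, sad_b]),
    "trolling": np.concatenate([troll_h, troll_b]),
    "group": ["human"] * n_human + ["bot"] * n_bot,
})

# 1. Independent-samples t-test on mean trait-sadism scores
#    (Welch's variant, since the groups differ in size and variance).
t, p = stats.ttest_ind(sad_h, sad_b, equal_var=False)
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

# 2. Fisher r-to-z test comparing the sadism-trolling correlation across groups.
r_h, _ = stats.pearsonr(sad_h, troll_h)
r_b, _ = stats.pearsonr(sad_b, troll_b)
se = np.sqrt(1.0 / (n_human - 3) + 1.0 / (n_bot - 3))
z = (np.arctanh(r_b) - np.arctanh(r_h)) / se
print(f"r_human = {r_h:.2f}, r_bot = {r_b:.2f}, "
      f"z = {z:.2f}, p = {2 * stats.norm.sf(abs(z)):.4f}")

# 3. Moderation analysis: does group status moderate the sadism -> trolling
#    relationship? A significant interaction term indicates moderation.
model = smf.ols("trolling ~ sadism * C(group)", data=df).fit()
print(model.summary().tables[1])

Under the pattern of results the abstract reports, the sadism-by-group interaction coefficient would be significant, reflecting the strengthened sadism-trolling relationship in the likely-bot group.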


Item Type: Thesis (Non-Research) (Honours)
Item Status: Live Archive
Additional Information: Current UniSQ staff and students can request access to this thesis. Please email research.repository@unisq.edu.au with the subject line "SEAR thesis request" and provide: the name of the thesis requested, your name, and your UniSQ email address.
Faculty/School / Institute/Centre: Current – Faculty of Health, Engineering and Sciences - School of Psychology and Wellbeing (1 Jan 2022 -)
Supervisors: Marrington, Jessica
Qualification: Bachelor of Science (Honours) (Psychology)
Date Deposited: 22 Jan 2026 05:20
Last Modified: 22 Jan 2026 05:20
Uncontrolled Keywords: likely-bot responses, likely-bot response detection strategies, research integrity, non-random systematic variance
Fields of Research (2008): 17 Psychology and Cognitive Sciences > 1701 Psychology > 170106 Health, Clinical and Counselling Psychology
Fields of Research (2020): 52 PSYCHOLOGY > 5204 Cognitive and computational psychology > 520406 Sensory processes, perception and performance
URI: https://sear.unisq.edu.au/id/eprint/53099
