Exploring the Impact of Intervention Methods on Developers’ Security Behavior in a Manipulated ChatGPT Study
2025
Authors
Alena Naiakshina, Asli Yardim, Raphael Serafini
Research Hub
Research Hub D: Usability
Abstract
Increased AI use in software development raises concerns about the security of AI-generated code. We investigated the impact of security prompts, warnings about insecure AI suggestions, and the use of password storage guidelines (OWASP, NIST) on the security behavior of software developers presented with insecure AI assistance. In an online lab setting, we conducted a study with 76 freelance developers who completed a password storage task in one of four conditions. Three conditions included a manipulated ChatGPT-like AI assistant that suggested an insecure MD5 implementation. We found a high level of trust in AI-generated code, even when insecure suggestions were presented. While security prompts, AI warnings, and guidelines improved security awareness, 32% of participants notified about insecure AI recommendations still accepted the weak implementation suggestions, mistakenly considering them secure and often expressing confidence in their choice. Based on our results, we discuss security implications and provide recommendations for future research.
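
For context, the minimal sketch below contrasts the kind of fast, unsalted MD5 password hashing the manipulated assistant suggested with a salted, iterated scheme along the lines of OWASP/NIST password storage guidance. It is illustrative only, not code from the paper; the function names and the PBKDF2 iteration count are assumptions of ours.

```python
import hashlib
import os

# Insecure pattern resembling the MD5 suggestion described in the study:
# a fast, unsalted digest, unsuitable for password storage.
def hash_password_insecure(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# A salted, iterated alternative in line with OWASP/NIST password storage
# guidance, using only the Python standard library (PBKDF2-HMAC-SHA256).
# The iteration count is an illustrative assumption, not a value from the paper.
def hash_password_pbkdf2(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```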