Advertisement

Immediate injection flaws in GitLab Duo highlights dangers in AI assistants



Thank you for reading this post, don't forget to subscribe!

GitLab’s coding assistant Duo can parse malicious AI prompts hidden in feedback, supply code, merge request descriptions and commit messages from public repositories, researchers discovered. This system allowed them to trick the chatbot into making malicious code strategies to customers, share malicious hyperlinks and inject rogue HTML code in responses that stealthily leaked code from non-public initiatives.

“GitLab patched the HTML injection, which is nice, however the greater lesson is obvious: AI instruments are a part of your appʼs assault floor now,” researchers from software safety agency Legit Safety mentioned in a report. “In the event that they learn from the web page, that enter must be handled like every other user-supplied knowledge — untrusted, messy, and probably harmful.”

Immediate injection is an assault method towards massive language fashions (LLMs) to govern their output to customers. And whereas it’s not a brand new assault, will probably be more and more related as enterprises develop AI brokers that parse user-generated knowledge and independently take actions based mostly on that content material.