It’s an ai roleplay app of some sort. The user (pink text) instructed it to say hello world in html and the ai did it. Showing the app vulnerable to prompt injections since it didn’t do any kind of validation before sending the request to chatgpt/similar and then returning the response.
It’s an ai roleplay app of some sort. The user (pink text) instructed it to say hello world in html and the ai did it. Showing the app vulnerable to prompt injections since it didn’t do any kind of validation before sending the request to chatgpt/similar and then returning the response.