Technical Architects, Stay Ahead of the Top 6 GenAI Security Risks
What i took off from this is how people can us AI tools to generate incorrect responses to a user. There are two types of security risks involving injections. The first one is called Direct Prompt Injection, where hackers send harmful behavior to chatbots and use them to extract information even when there are programs in place to prevent answering malicious questions. The second type is called Indirect Prompt Injection, which is not highly exploitable compared to the first one. To execute this, hackers need to hack into the web/SaaS and then try to send users misleading information.