Which of the following is a defense against AI threats?

Get ready for the ISACA AI Fundamentals Test with flashcards and multiple-choice questions. Each question features hints and detailed explanations. Prepare to ace your exam with confidence!

Multiple Choice

Which of the following is a defense against AI threats?

- Data poisoning
- Evasion
- Secure enclaves
- Model theft

Correct answer: Secure enclaves

Explanation:

The idea being tested is protecting AI assets through trusted execution environments. Secure enclaves create a protected area where AI models and their data are processed in isolation from the rest of the system. The code and data inside an enclave are kept encrypted in memory and can be accessed only by the trusted enclave itself, with access controls and attestation that let a relying party verify the code running inside has not been tampered with. This means sensitive inputs, outputs, and the model remain confidential even if the operating system or other software on the host is compromised.
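To make the attestation idea concrete, here is a deliberately simplified sketch (not a real SGX, SEV, or TrustZone API; all names are hypothetical): a key holder compares a measurement, here a hash of the enclave's code, against a known-good value and releases the model-decryption key only when they match.

```python
import hashlib
from typing import Optional

# Known-good measurement of the trusted enclave code, recorded in advance.
# In real hardware this would be a CPU-computed measurement, not a plain hash.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-code-v1").hexdigest()


def measure(enclave_code: bytes) -> str:
    """Stand-in for a hardware measurement: hash the enclave's code."""
    return hashlib.sha256(enclave_code).hexdigest()


def release_model_key(enclave_code: bytes, model_key: bytes) -> Optional[bytes]:
    """Release the model key only if the enclave attests successfully."""
    if measure(enclave_code) == EXPECTED_MEASUREMENT:
        return model_key  # measurement matches: key handed to the enclave
    return None           # code was altered: key is withheld


# Unmodified enclave code passes attestation and receives the key.
trusted = release_model_key(b"trusted-enclave-code-v1", b"model-key")

# Tampered code produces a different measurement, so the key is withheld.
tampered = release_model_key(b"trusted-enclave-code-v1-MODIFIED", b"model-key")
```

The point of the sketch is the gating logic: secrets never reach code whose measurement does not match the expected value, which is the property real attestation protocols provide with hardware-rooted signatures rather than a bare hash comparison.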

Because the model and its data stay inside this secure boundary, it is much harder for attackers to exfiltrate the model or alter its behavior, which addresses threats like model theft and data leakage during inference or training. It is not a flawless solution; side-channel attacks and performance trade-offs remain real challenges. Still, as a defense mechanism, secure enclaves provide a strong, hardware-backed way to guard AI assets against several common threats.

Data poisoning, evasion, and model theft describe attack techniques, not defenses, so they don’t fit as protective measures.
