Which technique adds noise to data to protect privacy?

Get ready for the ISACA AI Fundamentals Test with flashcards and multiple-choice questions. Each question features hints and detailed explanations. Prepare to ace your exam with confidence!

Multiple Choice

Which technique adds noise to data to protect privacy?

- Differential privacy
- Federated learning
- Privacy challenges in AI
- Deepfakes

Correct answer: Differential privacy

Explanation:
Differential privacy adds random noise to data outputs, or to the data itself, in a controlled way so that removing or adding a single individual’s information doesn’t noticeably change the results. This deliberate perturbation protects individuals by making it difficult to infer whether any one person’s data was included, even when attackers have access to the outputs. The noise is calibrated by a privacy parameter (often called epsilon), balancing privacy with data usefulness. In practice, you might see noise added to query answers or to model training updates (as in DP-SGD), yielding useful statistics or models while limiting exposure of any one person’s details.

The other options aren’t about adding calibrated noise to protect privacy: federated learning focuses on keeping data on devices and aggregating updates, privacy challenges in AI is a broad topic of risks rather than a specific protective technique, and deepfakes relate to synthetic media rather than privacy-protection mechanisms.

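To make the mechanism concrete, here is a minimal sketch of the Laplace mechanism for a noisy count query. It is illustrative only: the function names are invented for this example, and a count query is used because its sensitivity (how much one person can change the result) is exactly 1, so the noise scale is simply 1/epsilon.

```python
import random


def laplace_noise(scale: float) -> float:
    """Draw from Laplace(0, scale) as a randomly signed exponential."""
    magnitude = random.expovariate(1.0 / scale)
    return magnitude if random.random() < 0.5 else -magnitude


def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count of records matching `predicate`.

    Sensitivity of a count is 1 (adding or removing one person changes
    the count by at most 1), so the Laplace scale is sensitivity/epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0
    return true_count + laplace_noise(sensitivity / epsilon)


# Example: how many ages in the dataset are 40 or older?
ages = [23, 45, 31, 62, 57, 29, 41]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller epsilon means a larger noise scale and stronger privacy, but a less accurate answer; larger epsilon trades privacy for utility, which is exactly the balance the explanation above describes.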
