AI Product Security
This fall I completed the LinkedIn Learning course AI Product Security: Building Strong Data and Governance by Meghan Maneval. The most important lesson was why AI needs strong governance, which is rooted in the emergence of new risks such as privacy leaks and the manipulation of AI data. I also learned how a strong data governance framework should be built: it should define data ownership, set rules for data access, and establish policies for data collection. The course also covered compliance and ethical AI, which involves aligning AI systems with the regulations in place and maintaining transparency, consistent with the “security by design” principle. This LinkedIn Learning course also relates to the Managing Enterprise Security course I have taken here at Temple, which emphasizes control frameworks for how data and systems are monitored, specifically the NIST framework; those same controls would also apply to AI behavior involving data. This course is helpful for my career as well, since Data Analysts and IT Auditors are often expected to know how to properly handle data and understand the risks of AI, which calls for applying the controls and protocols NIST outlines.

