If you have plans for your security teams to work with generative AI systems, what training and resources are on your radar?
1. A clear idea of what counts as sensitive data/information.
2. Which items are absolute no-share items (a minimal screening sketch follows this list).
3. How to handle the dilemma when useful context overlaps with sensitive data.
4. The impact of such mistakes on the organization.
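To make the "absolute no-share" point concrete, here is a minimal Python sketch that screens a prompt against no-share patterns before it reaches a generative AI service. The pattern names, regexes, and the send_to_genai function are illustrative assumptions, not a vetted policy or a real API.

```python
import re

# Hypothetical "absolute no-share" patterns -- the real list should come from
# your own data-classification policy, not from this sketch.
NO_SHARE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any no-share patterns found in the prompt."""
    return [name for name, pattern in NO_SHARE_PATTERNS.items()
            if pattern.search(prompt)]

def send_to_genai(prompt: str) -> None:
    # Block (or route to a human reviewer) instead of sending the prompt.
    hits = screen_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked, matched no-share categories: {hits}")
    print("Prompt is clear to send to the generative AI service.")

if __name__ == "__main__":
    try:
        send_to_genai("Summarise this: contact alice@example.com about the invoice")
    except ValueError as err:
        print(err)
```

A real deployment would pair checks like this with training so people understand why a prompt was blocked, which ties back to points 3 and 4 above.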
Companies should develop a roadmap for building employee skills, as well as establish policies and practices for generative AI oversight and risk management.
1- Develop a knowledge base covering new rules and regulations for AI tools.
2- Define an enterprise-wide strategy for AI trust, risk, and security management.
3- Train staff on AI systems so they can assess their safety and security.
4- Develop a learning culture: sign up for training services and review the latest books, blogs, and research papers.
Some technical areas to understand before engaging any AI/security consultants:
1. Data Governance - managing sensitive data and defining data boundaries.
2. Data Management - setting boundaries for data access, e.g. within the organization's ecosystem, wrangled from the public internet, or pulled via API from subscribers.
3. Fundamental concepts of AI in security areas, e.g. rules vs. ML, DLP, and anomaly detection (see the sketch after this list).
4. Data security risk and impact analysis.
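To illustrate the "rules vs. ML" distinction in point 3, here is a minimal sketch contrasting a fixed analyst-written threshold with a simple ML anomaly detector (scikit-learn's IsolationForest) on hypothetical failed-login counts; the data and threshold are invented for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature: failed-login counts per user per hour.
failed_logins = np.array([[2], [1], [3], [2], [0], [1], [2], [40], [1], [3]])

# Rule-based detection: a fixed threshold written by an analyst.
RULE_THRESHOLD = 10
rule_flags = failed_logins.flatten() > RULE_THRESHOLD

# ML-based detection: an Isolation Forest learns what "normal" looks like
# from the data itself and flags statistical outliers (-1 = anomaly).
model = IsolationForest(contamination=0.1, random_state=0).fit(failed_logins)
ml_flags = model.predict(failed_logins) == -1

for count, by_rule, by_ml in zip(failed_logins.flatten(), rule_flags, ml_flags):
    print(f"failed logins={count:>3}  rule flag={by_rule!s:<5}  ML flag={by_ml!s:<5}")
```

The rule is transparent and easy to audit but misses anything the analyst did not anticipate; the ML model adapts to the data but needs monitoring for drift and false positives, which is exactly the kind of trade-off security teams should understand before engaging consultants.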