Do you think it’s important for leaders to experiment with AI capabilities for security even if implementation is currently cost-prohibitive for their organization? Could small-scale experimentation be worth the effort in the long term?
We definitely see value in experimenting with AI. We encourage our team members to test and explore various tools, and we have established a robust training program. As a Microsoft shop, we have deployed Copilot for everyone and also offer special Copilot licenses. Anyone interested in those licenses must complete about three hours of training through LinkedIn Learning to ensure they understand that AI will not replace them or do their job for them—they remain responsible for any output generated by these tools.
We believe everyone should be testing and experimenting with AI, whether in a controlled sandbox environment or on their own personal devices. While we do not allow ChatGPT within the organization, many of our cyber defense team members use it on their own devices to explore its capabilities.