What are your thoughts on using AI capabilities for data security, especially when it comes to data discovery or classification? Do you think there’s real potential for AI to create significant improvements in this area (and is this something you plan to pursue)?

Director of IT, 10 months ago

Booz Allen is using AI to do data classification, including security classification markings, based on document content and federal security classification guides. Here is a quote from our LabelLogic solution slick sheet:
"QUALITY OF LIFE - TIME SAVING
When a paragraph is processed for classification, prompt again for the topic of the paragraph as well based on SCG. Capture all topics in a list and provide a final prompt with numbers of each topic to determine aggregate classification of the document."

If this interests you, I can email the slick sheet to you.

CISO, 10 months ago

Absolutely. In fact, I bet my career on it. I just transitioned from Netskope over to Sierra SSE. I had been very focused on cloud security, but I quickly had to shift focus to data protection and classification. This is where GenAI, or AI in general, can really make a difference. AI can look at documents, at unstructured and structured data, and it can really help. AI can classify the data, correct the classification, and auto-label it. That lets you and me, as security officers, put the protections where we need them.
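A minimal sketch of that classify-then-correct loop, with a keyword check standing in for the model call; the level names and functions here are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass

LEVELS = ["public", "internal", "confidential", "restricted"]

@dataclass
class Document:
    name: str
    text: str
    label: str | None = None  # existing (possibly missing or wrong) label

def ai_classify(text: str) -> str:
    """Toy stand-in for a model call; returns a predicted sensitivity level."""
    return "confidential" if "salary" in text.lower() else "internal"

def autolabel(doc: Document) -> Document:
    """Classify, then correct the label only when it understates sensitivity."""
    predicted = ai_classify(doc.text)
    if doc.label is None or LEVELS.index(doc.label) < LEVELS.index(predicted):
        doc.label = predicted
    return doc

print(autolabel(Document("comp.xlsx", "2024 salary bands", label="public")).label)
# confidential
```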

We can use similar AI tools to get rid of old data. We have all this data out there, and we don't even know what's in it, but I'm spinning it every day and backing it up. I want to get rid of it. We all know that, from a risk perspective, holding onto that old data is bad. So I think AI is making a huge difference, and any tool or platform you're looking at now should really be embracing it.
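A rough illustration of the cleanup idea, assuming a blanket age threshold; the path and retention window below are hypothetical, and real tooling would combine AI classification labels with retention policy rather than file age alone:

```python
import time
from pathlib import Path

STALE_AFTER_DAYS = 3 * 365  # hypothetical retention threshold

def stale_candidates(root: str) -> list[Path]:
    """Flag files untouched for years as candidates for defensible deletion."""
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]

# Review queue, not automatic deletion; "/data/archive" is a placeholder path.
for path in stale_candidates("/data/archive"):
    print(path)
```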

Sr. Director of Enterprise Security in Software, 10 months ago

We're going to be really dependent on AI to do things better than our tools could before. Previously, our data classification tools were like, "This looks like a credit card number. This looks like a birthday," right? It was pretty basic pattern matching. Now, I think the challenge is: who do you trust with your data? I'm trusting a data classification tool to analyze all of my data and then build its own learning model off of my data, to learn what my data should look like. That gives people cause for concern and makes them fairly nervous, because of what the AI is learning from their data. I lose sleep at night thinking about the data spread across all of these tools.
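For context, the "basic stuff" those older tools did is roughly pattern matching plus a checksum, along these lines (the regex is a simplified assumption; real DLP patterns are more careful):

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # simplified card-number shape

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: the classic test behind 'this looks like a card number'."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Flag substrings that both look like a card number and pass Luhn."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("order ref 4111 1111 1111 1111 shipped"))
# ['4111111111111111'] (a standard Luhn-valid test number)
```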

All these different AI tools we have are using our data to learn, and often we don't get the opportunity to opt in or opt out. These tools just get turned on, and suddenly there's an AI component in a tool you've been using every day. Your users are quick to adopt it, and then you have to stop and wonder, "OK, what did I agree to, and what has now been pushed out to my tool? What data do I have, and where is it going?"

That's the double-edged sword. I'm very much looking forward to AI capabilities helping us with things like data classification, but I'm also concerned about what these tools are doing with all the metadata they've gathered in my environment.

CISO/CPO & Adjunct Law Professor in Finance (non-banking), 10 months ago

I think that AI is here to stay. One upside is that a lot of attention is being placed on AI and cybersecurity or data security, because it is something people think AI can fix. The uninitiated, people who are not in the field, might think you can just hand over your information security practice to AI. But those of us in the field understand that you don't just hand over your information security practice to AI.

The challenge in terms of data is what's going to happen with it. My view is that lawyers will be involved at some point. You can either involve them upfront to know what you can do and what you legally cannot do, or involve them afterward when things get out of hand. This is tricky for a lot of cybersecurity and data security because legal and compliance are connected, but they don't generally move at the same speed as the business. People want a business answer, like "Can I do this? Yes or no?" And any lawyer worth their salt will say, "Let me look at it." People want the salesperson's promise of getting it up and running today, rather than having it evaluated and understood by legal first.

In many organizations, I fear the legal ramifications are not being evaluated as thoroughly as they should be, due to the pressure not to be left behind as a business.

Reply, 10 months ago

Having an AI responsible use document is really important. It gets legal involved, it gets your SMEs involved, so you're making decisions as a corporation about how you're going to use AI. What are the ethical responsibilities we have? How do we manage that? How do we monitor it?
