Your Sensitivity Classification Is Probably Useless
We walk into client environments and find spreadsheets labeled “confidential” sitting in shared drives. We see PII scattered across test databases. We watch teams encrypt data at rest but beam it unencrypted across networks. And when we ask why, the answer is always the same: “We classified it, so it’s protected.”
It’s not. Classification without corresponding controls is theater. You can slap a “Restricted” label on something and accomplish nothing if your team doesn’t know what to do with it.
Handling sensitive data isn’t about finding the perfect taxonomy. It’s about linking every classification level to concrete, enforced actions—and making sure those actions actually stick in your environment.
Start With Your Own Definition (Not Someone Else’s)
NIST SP 800-122, ISO 27001, GDPR, HIPAA—they all have frameworks. They’re useful. But here’s what we tell clients: build your classification scheme around your operational reality first, then map it to frameworks.
Why? Because “sensitive data” means different things depending on your industry, your threat model, and your regulatory obligations. A healthcare provider’s sensitivity classification looks nothing like a SaaS startup’s. And frankly, a generic framework won’t save you when an auditor is drilling down on why a contractor accessed your customer database.
A Real Classification That Works
Here’s a structure we’ve seen stick in mid-market environments:
- Public: Marketing materials, published docs. Zero harm if disclosed.
- Internal: Employee directories, internal processes. Disclosed = reputation risk, maybe business risk. No regulatory bite.
- Confidential: Unpublished product specs, financial data, customer lists. Disclosure = material business impact or regulatory violation.
- Restricted: PII, payment card data, health records, trade secrets. Disclosure = breach notification, fines, lawsuits. Encrypt it, log access, minimize who touches it.
The magic isn’t the labels—it’s what you do next. Each level gets specific handling rules. No ambiguity.
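To make that binding concrete, here’s a minimal sketch in Python of mapping each label to its handling rules so software can enforce them instead of people remembering them. The control names and rules are illustrative, not a standard:

```python
# Hypothetical mapping of classification levels to required controls.
# The rule names and values here are illustrative, not a standard.
HANDLING_RULES = {
    "public":       {"encrypt_at_rest": False, "mfa": False, "audit_log": False},
    "internal":     {"encrypt_at_rest": False, "mfa": True,  "audit_log": False},
    "confidential": {"encrypt_at_rest": True,  "mfa": True,  "audit_log": True},
    "restricted":   {"encrypt_at_rest": True,  "mfa": True,  "audit_log": True},
}

def required_controls(level: str) -> dict:
    """Fail closed: an unknown or mistyped label gets Restricted handling."""
    return HANDLING_RULES.get(level.lower(), HANDLING_RULES["restricted"])
```

The fail-closed default is the point: an unlabeled or misspelled label should get the strictest handling, not the loosest.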
Advisory note: Don’t create five levels. Teams can’t remember five. Four is the ceiling before you get decision paralysis and people just guess.
Where Most Programs Fail (And Why We’ve Seen It Every Time)
Classification without enforcement is the #1 failure mode we encounter. You’ll audit a client, they’ll show you a beautiful data inventory with sensitivity tags on everything—and then you’ll find a “Restricted” dataset that 200 people can access, with no audit logging and passwords written on a sticky note.
This happens because classification happens once (usually during a big compliance project), and then people move on. Nobody revisits it. Nobody ties it to access controls, encryption policies, or monitoring. It becomes a checkbox, not a control.
The Real Issue: You Need Three Connected Systems
Discovery & Classification — Know where your sensitive data lives. Use automated tools (DLP, DSPM, data cataloging solutions) to find PII and other regulated data. Don’t rely on manual tagging. Manual tagging fails. We’ve watched spreadsheets with customer SSNs sit unclassified for years because nobody ran a scan.
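As a toy illustration of what automated discovery does, a scan might look like the sketch below. Real DLP/DSPM tools use checksums, context, and ML detection, not two regexes; the patterns here are illustrative only:

```python
import re

# Toy patterns for two common PII types. Real discovery tools use far
# richer detection (validation checksums, context, ML); these are
# illustrative only.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_text(text: str) -> dict:
    """Return counts of each PII type found in a text blob."""
    return {name: len(pat.findall(text)) for name, pat in PII_PATTERNS.items()}
```

Point this kind of scan at database dumps and file shares on a schedule, not once; data drifts into unclassified places constantly.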
Access Control — Once you’ve classified something as Restricted, enforce who can access it. Use role-based access control (RBAC), principle of least privilege, and conditional access policies. If your classification system exists but anyone with a network login can still read Restricted data, you’ve wasted time.
Audit & Monitoring — Log who accesses sensitive data, when, and what they did with it. Yes, this creates logs. Yes, they’re big. Use SIEM aggregation, alerting, or managed detection services if your SOC is thin. But without visibility, you won’t know if someone exfiltrated data until the breach is public.
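Here’s a minimal sketch of the kind of rule a SIEM would run over those access logs, flagging bulk reads of a Restricted table. The threshold of 1,000 rows per user per window is an assumed value; a real detection rule would baseline per user:

```python
from collections import Counter

# Assumed threshold: rows per user per monitoring window. A real SIEM
# rule would baseline this per user rather than hardcode it.
BULK_THRESHOLD = 1000

def flag_bulk_access(events):
    """events: iterable of (user, rows_read) tuples from an access log.
    Returns users whose total reads exceed the threshold, sorted."""
    totals = Counter()
    for user, rows in events:
        totals[user] += rows
    return sorted(u for u, n in totals.items() if n > BULK_THRESHOLD)
```

A rule this simple would have caught the 50,000-record copy in the story below months before an offboarding did.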
From the field: We audited a financial services client that classified customer account data as Restricted but had zero logging on the database. A junior analyst copied 50,000 customer records to his laptop “for analysis.” They didn’t find out until six months later, when he left and the new team asked why the data was on his machine. Classification didn’t help them. Logging would have.
The Technical Reality: Encryption, Encryption, Encryption
If data is Restricted, it should be encrypted. Not “encrypted where it makes sense.” Encrypted—period. In transit, at rest, ideally end-to-end.
We know this sounds obvious. It’s not. We still find teams sending customer PII over unencrypted HTTP, storing passwords in plaintext databases, and emailing API keys. You’d think this was solved by 2024. It’s not.
Practical Encryption Baseline for Restricted Data
Data at Rest: AES-256 encryption on databases, file stores, and backups. Use key management services (AWS KMS, Azure Key Vault, HashiCorp Vault) so keys aren’t sitting next to data. If you’re managing keys manually, you’re doing it wrong.
Data in Transit: TLS 1.2 minimum (preferably 1.3). HTTPS only. VPNs for remote access to databases. SFTP instead of FTP. No exceptions.
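In application code, that in-transit floor can be enforced directly rather than hoped for. A sketch using Python’s standard ssl module:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a client TLS context that refuses anything below TLS 1.2.

    create_default_context() already enables certificate and hostname
    verification; we only tighten the protocol floor.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1
    return ctx
```

Pass a context like this to your HTTP client or socket wrapper so a misconfigured server fails loudly instead of silently downgrading.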
Data in Use (In Memory): If you’re processing sensitive data, use secure enclaves or encrypted memory where feasible. For most teams this isn’t yet practical, but know that your secrets shouldn’t sit unencrypted in application memory longer than necessary.
Here’s a snippet for encrypting a PostgreSQL column with the pgcrypto extension. Note that this is application-visible column encryption, not transparent data encryption (PostgreSQL has no built-in TDE):

```sql
-- Enable the pgcrypto extension first
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Add a column to hold the ciphertext
ALTER TABLE customers
ADD COLUMN email_encrypted BYTEA;

-- Encrypt existing emails. In production, fetch the key from a
-- secrets manager at runtime; never hardcode it in SQL.
UPDATE customers
SET email_encrypted = pgp_sym_encrypt(email, 'your-secret-key')
WHERE email_encrypted IS NULL;
```
This isn’t cutting-edge. It’s table stakes. If your Restricted data isn’t encrypted in transit and at rest, your compliance posture is built on sand.
Access Control: Make It Sticky
Sensitivity classification should automatically drive access policies. If something is classified as Restricted, your identity and access management (IAM) system should enforce minimum required access.
Use attribute-based access control (ABAC) where possible. Instead of manually managing access lists, tag your data with sensitivity levels and configure policies that say: “Restricted data only accessible to users with role=DBA AND department=Finance AND location=office.”
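The policy quoted above reduces to a simple attribute match. A minimal sketch of the evaluation, with attribute names that are illustrative and not tied to any IAM product:

```python
# Minimal ABAC check mirroring the example policy in the text.
# Attribute names are illustrative, not any vendor's schema.
POLICY = {  # attributes required to read Restricted data
    "role": "DBA",
    "department": "Finance",
    "location": "office",
}

def can_read_restricted(user_attrs: dict) -> bool:
    """Grant only if every required attribute matches exactly.
    Missing attributes fail the check (deny by default)."""
    return all(user_attrs.get(k) == v for k, v in POLICY.items())
```

The deny-by-default shape is what makes this sticky: reclassifying data changes one policy entry, not a thousand access-list rows.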
Advisory note: If you’re still managing access via spreadsheets and manual requests, stop. You’re not controlling anything—you’re creating a liability. Implement a modern PAM (Privileged Access Management) or IAM solution. Okta, Entra ID, Ping Identity—the specific tool matters less than having something that ties classification to enforcement.
Practical Checklist: Is Your Sensitive Data Access Locked Down?
| Control | Status | Action |
|---|---|---|
| All Restricted data has an explicit access policy (not inherited) | ☐ | Audit IAM today. Find overpermissioned accounts. |
| Access requires MFA for Restricted systems | ☐ | Enable MFA on databases, data warehouses, and file stores. |
| Temporary/contractor access expires automatically | ☐ | Set TTLs on all third-party access. No permanent access. |
| Access to Restricted data is logged and monitored | ☐ | Implement audit logging. Feed logs (e.g., CloudTrail) into your SIEM. |
| Quarterly access reviews are completed (and evidenced) | ☐ | Schedule reviews. Remove inactive users. Document it. |
Handling Sensitive Data Across Teams (The Hard Part)
Classification and encryption work until someone needs to actually use the data. Then friction appears—developers need production data for testing, analysts need customer records for reports, contractors need access for integrations. And suddenly you’re punching holes in your controls.
We’ve seen this destroy security programs. The controls exist, but people work around them because the legitimate use case is too painful.
The Practical Path Forward
Data masking/anonymization for non-production: Developers don’t need real customer SSNs to test payment flows. Anonymize or mask sensitive columns in dev/test environments. Tools like Delphix or Tonic automate this.
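Two common masking techniques, sketched in Python. The salt value and display format are illustrative assumptions, not a standard:

```python
import hashlib

# Assumed per-environment secret; in practice this lives in a secrets
# manager and differs per environment.
SALT = "dev-env-salt"

def mask_ssn(ssn: str) -> str:
    """Display masking: keep only the last four digits."""
    return "***-**-" + ssn[-4:]

def pseudonymize(value: str) -> str:
    """Deterministic pseudonym: same input yields the same token,
    so joins across masked tables still work, but the original
    value is not recoverable."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]
```

Deterministic pseudonyms are the usual compromise: test data stays referentially intact while the real values never leave production.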
Just-in-time (JIT) access: Analysts need access for 2 hours to pull a report. Use JIT provisioning to grant temporary access, then revoke it. Okta, AWS IAM Identity Center, or similar tools handle this—no standing access, less liability.
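The core of JIT access is just a grant with an expiry. A minimal in-memory sketch; a real implementation lives in your IAM system, not application code:

```python
import time
from typing import Optional

# In-memory stand-in for the IAM system's grant store: user -> expiry.
GRANTS = {}

def grant(user: str, ttl_seconds: int, now: Optional[float] = None) -> None:
    """Issue a temporary grant that expires after ttl_seconds."""
    GRANTS[user] = (now or time.time()) + ttl_seconds

def has_access(user: str, now: Optional[float] = None) -> bool:
    """Access exists only while an unexpired grant does."""
    expiry = GRANTS.get(user)
    return expiry is not None and (now or time.time()) < expiry
```

Note there is no revoke step to forget: expiry is the default, which is exactly what standing access lacks.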
Data residency and segregation: If you’re handling sensitive data across regions or for different customers, segregate it. Separate databases, separate encryption keys, separate audit trails. One breach doesn’t compromise everything.
If your team is asking for ways to bypass your controls, that’s not a security failure—that’s a design failure. Fix the process, not the policy.
The Week-One Action List
Don’t redesign your entire data program this week. Do this:
- Audit what you’re calling “sensitive.” Pull your current classification scheme. Is it tied to actual controls, or is it just labels? Be honest.
- Find your Restricted data. Run a DLP or DSPM tool against your major databases and file stores. Identify PII, financial data, and trade secrets you know about but haven’t classified.
- Check encryption status. Are your Restricted systems encrypted at rest and in transit? If not, encrypt them this week. This is non-negotiable.
- Review access logs. Pull last month’s access to your most sensitive systems. Are there users who shouldn’t have access? Revoke it.
If you want a deeper assessment of your current data handling posture—classification gaps, access control weaknesses, or encryption blindspots—run our data security assessment at cyentrix.com/assessments/. We’ll identify the gaps you’re missing and prioritize fixes based on risk, not just compliance checkboxes.
Handling sensitive data isn’t elegant. It’s messy, operational work. But it’s the difference between a control that exists and a control that actually prevents breaches. Start this week.