I’ve been talking to a lot of HR firms recently, and I’m hearing some similar experiences with AI that are worth discussing. Here are a few examples:
- A team member uses ChatGPT to draft a termination letter for a client.
- An analyst pastes compensation survey data into an AI tool to speed up a benchmarking report.
- A recruiter runs candidate notes through a summarizer to prep for a hiring manager debrief.
None of these people did anything malicious, but in each case, confidential client data — names, salaries, performance histories, headcount plans — left the building. There was no log, no policy-violation alert, and no way to walk it back.
This is Shadow AI. And in a business like HR, where trust is the product, it’s one of the most underacknowledged operational risks in the market right now.
Why HR Firms Are Especially Exposed
Most industries have data sensitivity. HR firms have data intimacy. You handle terminations, investigations, compensation restructuring, and succession plans. Your clients tell you things they haven’t told their boards yet. The confidentiality standard is the foundation of every client relationship you have.
When employees use consumer AI tools, they’re operating outside any enterprise data agreement. They don’t know what gets retained, what gets used for model training, or what their actual exposure is. And in most firms, neither does leadership.
The instinct is to block the tools, but banning AI right now is like banning social media in 2009. The behavior just goes further underground, and you lose the visibility you had.
The other instinct is to ignore it and hope for the best. That’s not a strategy. That’s a liability waiting to be named in a client audit.
There’s a Third Option
The firms navigating this well aren’t banning AI, and they aren’t ignoring it. They’re getting intentional about it.
That means first understanding where your actual exposure is: which roles, which workflows, which tools are already in use. Then it means building guardrails that match your risk profile, calibrated to the specific ways confidential data moves through your practice. What’s the difference between an appropriate AI use case and an inappropriate one in your context? That line needs to be drawn explicitly, or your people will draw it themselves.
And it means implementing strategically: identifying where AI can genuinely accelerate your work without compromising client trust, and building those workflows deliberately so adoption is visible, governable, and actually beneficial.
This is what I call AI with Intent — a structured approach to moving from reactive damage control to proactive capability building.
The Conversation Your Clients Will Eventually Have
Here’s what I’ve observed in professional services firms navigating this: the question is no longer whether AI is entering your workflows. It’s already there. The question is whether you’re the one who decided how.
HR firms that get ahead of this will have a meaningful differentiator. Imagine being able to tell a prospective client: “We’ve mapped every AI touchpoint in our practice. Here’s our data governance framework. Here’s how we ensure your information stays safe.” That’s not just a risk management story — it’s a trust story. And in HR services, trust is a competitive advantage.
If you’re wondering where you might be exposed, I can conduct a quick AI Exposure Audit to pinpoint your biggest risks. From there, we can develop a plan to mitigate them through proper, governed AI adoption the entire firm can get behind.
Schedule a meeting to get started. Book Time Here.