Talent Acquisition in the Age of AI: What Works, What’s Risky, and What to Avoid
Matt Tague is a ModelExpand senior consultant, talent strategist, and customer success leader with a track record of building and scaling hiring infrastructure at companies like Lyft, Microsoft, and LinkedIn. Most recently, he led Customer Success at Gem. He holds four U.S. patents for recruiting technologies and has pioneered data-driven talent acquisition strategies. In this post, Matt shares his take on the new structure of recruiting.
Artificial intelligence is transforming talent acquisition, offering exciting opportunities to enhance efficiency, insight, and candidate experience. Yet many companies still lack a clearly defined policy for how, where, and whether to use AI within their TA function. Meanwhile, leadership teams, especially in tech, are urging AI-first experimentation, putting TA professionals in the challenging position of balancing innovation with risk management.
Why TA Needs Its Own AI Policy
While enterprise-wide AI policies are a good starting point, Talent is fundamentally different due to the sensitivity, privacy, and legal exposure inherent in people decisions. To help protect your candidates, your people, and your organization, this guide outlines what to do and what to avoid when implementing AI in your Talent team.
Do’s: Build a Safe, Smart, and Responsible AI Practice
1. Create a Dedicated AI Policy for People Teams
Generic enterprise AI policies often miss the nuances of people data. Develop a TA-specific framework that clearly defines how AI can and cannot be used across your TA processes, including:
Candidate outreach and selection
Scheduling
Interview transcription and analysis
Guidance for candidates using AI
Hiring decisions and offer letter creation
2. Understand and Limit Your Data Exposure
Ask: Is this AI tool storing or training on confidential employee data? Define what counts as confidential and restrict its handling to enterprise-grade tools with proper safeguards. Assume that all ChatGPT chats (yes, even the deleted ones; yes, even text you pasted and never sent) are stored indefinitely and are discoverable.
Example: A recruiting team member pastes an email draft containing offer details into Claude (Anthropic) and asks the AI to make the email more compelling. That data could now be retained indefinitely and potentially used to train Claude’s models.
3. Map and Score AI Risk for Talent Data
Document all Talent data types and decision points, and assign each a risk score. The deeper the data sits in your talent funnel, the higher the risk score (a minimal scoring sketch follows the list below):
High risk: Offer data, candidate PII (email, phone numbers, diversity data) → Never use with AI.
Medium/low risk: Interview notes, scheduling, etc. → Use cautiously with proper controls.
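For teams that want to make this scoring concrete, here is a minimal sketch in Python. The data types, scores, and the allowed_with_ai helper are illustrative assumptions, not a definitive taxonomy:

# Illustrative risk map for Talent data types; categories and scores are examples.
# The deeper the data sits in the funnel, the higher the score.
RISK_SCORES = {
    "offer_data": "high",         # compensation and offer details: never use with AI
    "candidate_pii": "high",      # emails, phone numbers, diversity data: never use with AI
    "interview_notes": "medium",  # only with approved, enterprise-grade tools
    "scheduling": "low",          # calendar availability and logistics
}

def allowed_with_ai(data_type: str) -> bool:
    """Return True only for data scored below 'high'; unknown types default to high risk."""
    return RISK_SCORES.get(data_type, "high") != "high"

for data_type in RISK_SCORES:
    status = "OK with approved tools" if allowed_with_ai(data_type) else "never use with AI"
    print(f"{data_type}: {status}")

A simple map like this matters less for the code and more for forcing the team to write down, in one place, which data is off-limits.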
4. Build and Maintain an AI Tool Inventory
Track all tools that intersect with your hiring process:
Enterprise AI Tools: Copilot, Gemini, Zoom
HR-Specific AI: Workday’s AI, Greenhouse SmartAssist, Gem, AI sourcing tools
Rogue/Shadow AI: Tools used unofficially by individuals
Everything is discoverable. Unapproved tools carry major compliance risks.
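To keep that inventory living rather than aspirational, a lightweight record per tool is enough. The following Python sketch is illustrative; the fields and example entries are assumptions, not a prescribed schema:

# Minimal AI tool inventory sketch; fields and entries are illustrative.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str          # tool or feature name
    category: str      # "enterprise", "hr_specific", or "shadow"
    data_touched: str  # what hiring data the tool can see
    approved: bool     # has it passed security and legal review?
    owner: str         # who is accountable for the tool internally

inventory = [
    AIToolRecord("Copilot", "enterprise", "emails, documents", True, "IT"),
    AIToolRecord("Gem", "hr_specific", "candidate profiles, outreach", True, "Talent Ops"),
    AIToolRecord("Personal ChatGPT account", "shadow", "unknown", False, "unassigned"),
]

# Flag anything unapproved or unowned for follow-up.
for tool in inventory:
    if not tool.approved or tool.owner == "unassigned":
        print(f"Review needed: {tool.name} ({tool.category})")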
5. Educate Teams that AI ≠ Thinking
Train your recruiting team not to use AI as a shortcut for thinking.
Example: A recruiter pastes the transcript of their recruiter screen into ChatGPT and asks, “Write a hiring manager summary for this candidate based on this conversation.” In one step, the recruiter has outsourced their own judgment and exposed candidate data to an external tool.
6. Use AI Agents Wisely in HR
Before assigning AI agents to HR tasks, ask: “Would I ever outsource this work externally?” If the answer is no, don’t use an agent for it. If yes, consider the minimal scope required for the agent to operate (just like you would if you outsourced a particular task to a different country).
7. Use AI for Exploration, Not Decisions
AI can surface patterns (e.g., “this candidate looks like a past successful hire”), but it must never make final decisions: whether a candidate proceeds to the next interview, whether one candidate is more qualified than another, and certainly whether an offer should be extended.
8. Practice Data Minimization
Only input the minimum necessary data. Avoid uploading any identifiable information unless it’s essential, compliant, and securely managed.
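One practical habit is to strip obvious identifiers before any text goes near an AI tool. Here is a minimal Python sketch; the regular expressions are illustrative and nowhere near exhaustive, so real PII detection should rely on a vetted, enterprise-grade solution:

# Minimal data-minimization sketch: redact emails and phone numbers before sharing text.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    return PHONE.sub("[PHONE REDACTED]", text)

note = "Candidate Jane Doe, jane@example.com, +1 (555) 010-1234, strong on systems design."
print(redact(note))  # identifiers are removed before the note is shared anywhere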
9. Develop a Candidate AI Use Policy and Train the Team
Just as TA teams are using AI, so are candidates. Think through whether and how you want candidates to use AI, and communicate that clearly throughout the interview process.
Don’ts: Avoid High-Risk & Irresponsible AI Uses
1. Don’t Use AI for DEI Decision-Making
Never let AI analyze or explain disparities by group identity (e.g., “Why aren’t women getting moved to onsite as often?”). These are complex systemic issues requiring human interpretation.
2. Don’t Rely on AI for Individual Assessments
Recruiters and hiring managers may be tempted to paste their interview notes into an AI tool and ask it to make a decision on a candidate. This is dangerous, especially without context or oversight.
3. Don’t Assume Vendor Tools Are Compliant, and Don’t Trust Vendors by Default
Just because a tool is popular or widely used doesn’t mean it complies with your data policies or regional regulations. Always verify and ask deeper questions.
Example: You buy an in-demand recruiting tool that touts an AI feature, but do you know which company or model powers that feature? How do they store your data? Are they, for example, using DeepSeek to process your hiring data?
4. Don’t Allow Unmonitored Rogue AI Use
Shadow AI tools can lead to major data breaches. Make it clear what is and isn’t allowed, and monitor usage proactively.
5. Don’t Use AI Outputs Without Human Review
AI should support, not replace, human judgment. Always layer critical thinking, ethical reasoning, and context onto AI-generated insights before using them internally or externally.
Conclusion: The Future of AI in Talent Is Human-Led
AI will continue to reshape how Talent Acquisition operates. Its success depends on human leadership, ethical judgment, and thoughtful governance. The most effective teams treat AI as a partner, not a replacement: using it to enhance efficiency and insight while safeguarding privacy, fairness, and trust.
Building clear policies, training teams to think critically, and maintaining transparency with candidates aren’t just compliance measures; they’re the foundations of responsible innovation. As AI becomes more embedded in recruiting, the organizations that lead with intention will attract the best talent and earn lasting credibility in the market.
About ModelExpand
ModelExpand is a talent advisory firm that helps companies build a high-performing internal recruiting engine. We partner with your team to design the people, processes, and systems that drive consistent, faster, higher-quality hiring at scale. Contact us to learn more.