AI phishing no longer relies on volume and bad grammar. Modern attacks use scraped LinkedIn profiles, company websites, and social media to craft targeted emails that reference your real manager, your real projects, and your real writing style. The old “looks suspicious” instinct no longer applies.
Analysis Briefing
- Topic: AI-powered spear phishing at scale
- Analyst: Mike D (@MrComputerScience)
- Context: A technical briefing developed with Claude
- Source: Pithy Cyborg
- Key Question: Why can’t you spot AI phishing the way you spotted the old kind?
How AI Builds a Targeting Profile Before the First Email
Before a single message is sent, automated tools scrape LinkedIn for your job title and your manager’s name. They pull your company website for project names, client references, and internal terminology. They aggregate your public social media for writing patterns and recent life events.
The result is a profile. The phishing email that arrives references your actual project, uses familiar names, and matches the tone of internal communications. It does not look like spam. It looks like email.
This is spear phishing made scalable. What previously required a skilled social engineer spending hours on a single target now runs at volume on commodity infrastructure.
| Feature | Legacy Spear Phishing (Manual) | 2026 AI Spear Phishing (Automated) |
| --- | --- | --- |
| Reconnaissance | Hours to days of manual OSINT | Seconds (automated API scraping) |
| Personalization | Generic templates or high effort | Hyper-personalized, tone-matched |
| Language | Often broken or non-native | Native-level fluency (50+ languages) |
| Scalability | 1–5 targets per day (estimated) | 10,000+ unique targets per hour (estimated) |
| Click rate | ~10–12% (estimated) | ~50%+ (estimated; roughly 4.5× improvement) |
Why Voice and Video Make This Worse
Text is only the beginning. AI voice cloning produces a convincing replica of your manager’s voice from three to five seconds of public audio. Attackers follow up phishing emails with phone calls that confirm the request using a recognizable voice.
The combination of a plausible email and a confirming phone call from a familiar voice defeats most human verification instincts.
Deepfake video is emerging for high-value targets. Documented cases exist of video calls with AI-generated executives used to authorize wire transfers.
Why Your Security Training Is Now Calibrated to the Wrong Attack
Security training teaches people to look for bad grammar, suspicious links, and urgent requests from unknown senders. AI-generated phishing has good grammar, legitimate-looking domains, and comes from apparently known senders about apparently real topics.
The instinct calibrated against the old attack is the wrong instinct for the new one. The new attack is specifically engineered not to feel off.
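One signal that survives this shift: lookalike sender domains. An AI-written email can be flawless, but the attacker still usually sends from a domain that is close to, not identical to, the real one. A minimal sketch of flagging such domains by edit distance (the trusted-domain list and the distance threshold here are illustrative placeholders, not a product recommendation):

```python
# Flag sender domains that are suspiciously close to, but not equal to,
# a trusted domain (e.g. "acme-corp.com" vs "acmecorp.com").
# TRUSTED is a hypothetical allowlist for illustration.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"acmecorp.com"}  # placeholder trusted-domain list

def is_lookalike(sender_domain: str, max_dist: int = 2) -> bool:
    d = sender_domain.lower()
    if d in TRUSTED:
        return False  # exact match: the real domain, not a lookalike
    return any(edit_distance(d, t) <= max_dist for t in TRUSTED)

print(is_lookalike("acmecorp.com"))   # False: the genuine domain
print(is_lookalike("acme-corp.com"))  # True: one character away
print(is_lookalike("example.org"))    # False: unrelated domain
```

This is a blunt heuristic, and real mail filters combine it with many other signals, but it illustrates the point: the check is mechanical, not a gut feeling, which is exactly what you want when the gut feeling has been defeated.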
What a Real AI Phishing Attack Actually Looks Like
In 2024, a finance employee at a multinational firm received an email from what appeared to be the CFO requesting an urgent wire transfer. The email referenced a real upcoming acquisition, used internal terminology from the company’s own communications, and was followed by a video call with an AI-generated version of the CFO confirming the request. The employee transferred $25 million before the fraud was detected.
This is not an edge case reserved for large enterprises. The same targeting infrastructure is available to low-level attackers running automated campaigns against small businesses and individuals. The tools are cheap. The research is automated. The only thing limiting scale is the attacker’s ambition.
What This Means For You
- Verify through a separate, “out-of-band” channel. If a request arrives by email, call the person back on a known number you already have saved. Never use the contact info provided in the suspicious message.
- Implement hardware security keys (like YubiKeys) or device-bound passkeys. These are phishing-resistant by design because the “key” cannot be tricked into giving itself to a fake website, even a perfect one.
- Adopt a family “Check Question” or codeword. Talk to your family about voice cloning. Agree on a boring, non-searchable secret word to use during “emergency” calls.
- Pause on any urgent financial request. In 2026, urgency is the primary “signature” of an AI attack. If someone says “do it now,” that is your cue to slow down.
Enjoyed this deep dive? Join my inner circle:
- Pithy Cyborg → AI news made simple without hype.
