345 Follow-ups, Zero Replies: Debugging Automated Outreach

Paulo Rodrigues · 7 min read

We run an automated cold outreach pipeline for ImparLabs. It discovers companies with outdated websites, analyzes their tech stack, and sends personalized emails offering our services. The entire system runs on self-hosted infrastructure — n8n for orchestration, PostgreSQL for data, Groq for AI email generation.

After three weeks of operation, the numbers looked like this:

Metric                       Value
Emails sent (first touch)    1,088
Replies from first touch     10
Follow-up emails sent        345
Replies from follow-ups      0

Ten replies from initial emails. Zero from 345 follow-ups. Not a single one.

Follow-ups are supposed to be the workhorse of any outreach campaign. Most prospects need three to four touches before they respond. A 0% reply rate meant something was fundamentally broken.

Finding the Root Cause

I started by reading the actual emails the system was sending. The first-touch emails mentioned specific findings — "your site uses jQuery 2.2.4" or "we noticed your SSL certificate is missing." The follow-ups said things like "just following up on my previous email" and "we can help improve your website."

Generic. Forgettable. Indistinguishable from spam.

The code told the story. The initial email endpoint used execute_sql_json() with a query that JOINed the tech_stacks table — the table where we store everything we learn about a prospect's website. jQuery version, HTTPS status, load time, CMS version, copyright year.

The follow-up endpoint used execute_sql() (a different, older function) with a query that had no JOIN to tech_stacks at all. The AI was generating "personalized" follow-ups with zero data about the prospect's actual website.

Four Problems, One File

Once I started pulling the thread, four issues emerged:

1. No Tech Data in Follow-ups

The SQL query fetched the campaign record and company name, but nothing from tech_stacks. The AI prompt received no information about the prospect's website issues. It had to make up generic value propositions.

Fix: Added LEFT JOIN tech_stacks ts ON c.id = ts.company_id and selected all relevant columns — has_https, is_mobile_friendly, jquery_version, cms, cms_version, copyright_year, load_time_ms, tech_outdated.
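The shape of that fix can be sketched with an in-memory database. The schema and sample values below are invented for illustration; only the LEFT JOIN mirrors the real query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.executescript("""
    CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tech_stacks (
        company_id INTEGER, has_https INTEGER,
        jquery_version TEXT, cms TEXT, load_time_ms INTEGER
    );
    INSERT INTO companies VALUES (1, 'Cafe Lisboa'), (2, 'Padaria Norte');
    -- Company 1 has been enriched; company 2 has no tech_stacks row yet.
    INSERT INTO tech_stacks VALUES (1, 0, '2.2.4', 'WordPress', 4200);
""")

rows = conn.execute("""
    SELECT c.id, c.name, ts.has_https, ts.jquery_version,
           ts.cms, ts.load_time_ms
    FROM companies c
    LEFT JOIN tech_stacks ts ON c.id = ts.company_id
    ORDER BY c.id
""").fetchall()

for r in rows:
    print(dict(r))
```

Company 1 comes back with real findings for the prompt to use; company 2, not yet analyzed, comes back with NULL in every tech column — which matters later, in the NULL trap.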

2. Unsafe Data Parsing

The old execute_sql() function returns pipe-delimited text — each row is a string like 123|email@example.com|Company Name|subject. If a company name contains a pipe character (like "Cafe & Bar | Lisboa"), the parsing breaks silently. The fields shift, and the email gets sent to the wrong address or with corrupted data.

Fix: Switched to execute_sql_json() which returns proper dictionaries. No more string splitting, no more field corruption.
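The failure mode is easy to reproduce. This sketch uses a hypothetical `parse_pipe_row()` stand-in, not the pipeline's actual code, but the split-and-index pattern is the same:

```python
# The old approach: split the row on "|" and index into the result.
def parse_pipe_row(row: str) -> dict:
    fields = row.split("|")
    return {"campaign_id": fields[0], "email": fields[1],
            "company": fields[2], "subject": fields[3]}

ok = parse_pipe_row("123|ola@example.com|Padaria Norte|Proposta")
bad = parse_pipe_row("124|geral@example.com|Cafe & Bar | Lisboa|Proposta")

print(ok["subject"])   # Proposta
print(bad["subject"])  # " Lisboa" -- the pipe inside the company name
                       # shifted every later field; no exception is raised
```

No error, no log entry — just a follow-up with the wrong subject line, which is exactly the kind of bug that hides for three weeks.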

3. Flat Timing

All follow-ups waited exactly 7 days regardless of sequence position. The first follow-up waited 7 days. The second waited 7 days. The third waited 7 days. No urgency progression, no pattern variation.

Fix: Implemented variable timing with a SQL CASE expression:

  • Step 1 → 2: wait 3 days
  • Step 2 → 3: wait 5 days
  • Step 3 → 4: wait 7 days

This creates natural urgency — the first follow-up arrives while the initial email is still fresh, and the gaps widen as the sequence progresses.
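The 3/5/7 schedule can be sketched in Python alongside the SQL CASE form the fix uses (the `sequence_step` column name is an assumption):

```python
# Completed step -> days to wait before the next touch.
DELAY_DAYS = {1: 3, 2: 5, 3: 7}

def next_delay(step: int) -> int:
    return DELAY_DAYS.get(step, 7)  # fall back to the widest gap

# The equivalent CASE expression, usable directly in the scheduling query.
DELAY_CASE_SQL = """
CASE sequence_step
    WHEN 1 THEN 3
    WHEN 2 THEN 5
    ELSE 7
END
"""

print([next_delay(s) for s in (1, 2, 3)])  # [3, 5, 7]
```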

4. Generic AI Prompts

Even with data available, the prompts didn't instruct the AI to use it differently per follow-up step. All four follow-up angles got the same generic instruction.

Fix: Created a helper function (_summarize_tech_findings()) that converts raw tech data into a Portuguese summary string. Each follow-up angle now gets specific context:

  • Follow-up 2 (tech value): References specific website findings — "jQuery v2.2.4 has known vulnerabilities"
  • Follow-up 3 (social proof): Uses industry and location for relevance — "companies in your sector in Lisbon are already..."
  • Follow-up 4 (final hook): One sharp technical finding as a closing argument
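A minimal sketch of what such a summarizer can look like. The real `_summarize_tech_findings()` emits Portuguese; the English wording and the thresholds below are invented for illustration:

```python
def summarize_tech_findings(tech: dict) -> str:
    """Turn raw tech-stack columns into a short findings summary."""
    findings = []
    if tech.get("has_https") is False:      # strict check; None means unknown
        findings.append("the site is served without HTTPS")
    jq = tech.get("jquery_version")
    if jq and jq.startswith(("1.", "2.")):  # legacy 1.x/2.x lines
        findings.append(f"jQuery v{jq} has known vulnerabilities")
    ms = tech.get("load_time_ms")
    if ms and ms > 3000:                    # illustrative threshold
        findings.append(f"pages take {ms / 1000:.1f}s to load")
    return "; ".join(findings)

print(summarize_tech_findings(
    {"has_https": False, "jquery_version": "2.2.4", "load_time_ms": 4200}
))
```

The output string goes straight into the prompt for the relevant follow-up angle, so the AI argues from confirmed facts instead of inventing generic ones.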

The NULL Trap

There was a fifth issue hiding in the fix itself. When a company has no entry in tech_stacks (because enrichment hasn't run yet), the LEFT JOIN returns NULL for all tech columns. The initial version of _summarize_tech_findings() checked if not tech.get("has_https") — which treats NULL the same as False.

This meant companies we hadn't analyzed yet would get emails saying "your site doesn't have HTTPS" when we simply didn't know. The fix was strict identity comparison: if tech.get("has_https") is False — only flag what we've explicitly confirmed.
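The difference between the two checks, run over the three states a LEFT JOIN column can be in:

```python
# Confirmed yes, confirmed no, never analyzed (NULL from the LEFT JOIN).
states = [True, False, None]

flagged_by_naive  = [v for v in states if not v]        # if not tech.get(...)
flagged_by_strict = [v for v in states if v is False]   # if ... is False

print(flagged_by_naive)   # [False, None] -- unanalyzed sites flagged too
print(flagged_by_strict)  # [False]       -- only confirmed findings
```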

Testing What Matters

This pipeline had zero automated tests. Every change was verified by manually calling the API and reading the output. For a system sending hundreds of emails to real businesses, that's not acceptable.

We added 111 tests covering the core business logic — SQL escaping, email validation, company name cleaning, and the new tech findings summarizer. All pure functions, no database connections, no network calls. The full suite runs in 0.19 seconds.

The key insight: if your function needs a database connection to test, the function is doing too much. Extract the logic, test the logic, trust the infrastructure.
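The pattern in practice: a pure helper plus bare asserts, no database, no network. The `escape_sql_string()` below is a hypothetical stand-in for the pipeline's helpers, shown only to illustrate the shape of these tests:

```python
def escape_sql_string(value: str) -> str:
    """Double single quotes so the value is safe inside a SQL string literal."""
    return value.replace("'", "''")

def test_escape_sql_string():
    assert escape_sql_string("O'Reilly") == "O''Reilly"
    assert escape_sql_string("plain") == "plain"
    assert escape_sql_string("") == ""

test_escape_sql_string()
print("ok")
```

(Where the database driver supports it, parameterized queries are still preferable to manual escaping; these helpers cover the paths where the query is assembled as text.)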

The Lesson

The same AI model, with the same prompt structure, produces either spam or value. The difference is entirely in what data you feed it.

Our follow-up emails went from "we can help improve your website" to "your site still uses jQuery 2.2.4 — a simple migration resolves known vulnerabilities and improves load time." Same AI, same prompt template, dramatically different output.

Automation without context is noise. If your "personalized" outreach is just a template with a name swapped in, you're burning leads. The infrastructure to collect data about prospects is only useful if you actually pass it to the system that writes the emails.

We deployed the fix and we're monitoring results. The follow-up reply rate can only go up from 0%.


This pipeline runs entirely on self-hosted infrastructure — n8n, PostgreSQL, Groq AI — under EU jurisdiction with GDPR compliance. No prospect data leaves our servers. If you're building automated outreach for the European market, we can help you do it right.

Frequently Asked Questions

Why do automated follow-up emails get zero replies?

The most common cause is fake personalization — the email appears customized but contains no specific data about the recipient. In our case, the SQL query didn't JOIN the tech analysis table, so the AI generated generic text despite having detailed website data available.

How do you make AI-generated follow-up emails more effective?

Feed the AI specific, verifiable data about each prospect. Instead of 'we can improve your website,' reference concrete findings like 'your site uses jQuery 2.2.4 which has known vulnerabilities.' Variable timing (3/5/7 days) also creates natural urgency compared to flat intervals.

What is variable follow-up timing and why does it matter?

Variable timing means changing the delay between follow-up emails based on the sequence step — for example, 3 days for the first follow-up, 5 for the second, 7 for the third. This creates a sense of increasing urgency and prevents the pattern from feeling automated.
