Google Ads Account Audit: What a Complete Diagnostic Actually Produces
A complete Google Ads account audit checks settings, keywords, and search terms in one pass. Here is what the output looks like.
Most Google Ads account audits are not audits. They are someone looking at campaign level CPA and ROAS, comparing them to a target, and making surface level suggestions. “Consider adding negative keywords.” “Test new ad copy.” “Review your bid strategy.”
A real Google Ads account audit checks 300 or more data points across four layers: campaign settings, keyword performance, search term categorization, and ad group benchmarks. It applies thresholds derived from auditing over 300 accounts. And it produces a prioritized action list ranked by dollar impact, not a data dump.
Here is what that actually looks like when applied to a real account.
What separates a Google Ads account audit from a metrics review
A metrics review tells you whether the account is above or below target. A diagnostic audit tells you why, and quantifies the cost of every problem it finds.
The difference is the layer beneath the numbers. An account showing $85 CPA against a $100 target looks fine in a metrics review. A diagnostic audit finds that 3 of 6 campaigns are producing conversions at $45 CPA while the other 3 are producing nothing, and the blended average masks $6,000 per month in waste.
The audit workflow I built encodes this diagnostic thinking into a repeatable process. One command pulls four datasets simultaneously, calculates account level benchmarks, derives thresholds from those benchmarks, and produces findings classified by severity and ranked by dollar impact. The 15 point audit checklist documents the full diagnostic sequence behind these thresholds.
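In structural terms, the output is just a ranked list of findings. The sketch below shows that shape only; it is illustrative, not the workflow's actual code, and the field names are assumptions.

```python
from dataclasses import dataclass

# Illustrative structure only, not the workflow's actual code or field names.
@dataclass
class Finding:
    layer: str            # "settings", "keyword", "search_term", or "ad_group"
    description: str      # what was found, and where
    monthly_waste: float  # estimated wasted spend per month, in dollars
    severity: str         # "critical", "medium", or "low"

def build_action_list(findings: list[Finding]) -> list[Finding]:
    """Rank findings by estimated dollar impact, highest first."""
    return sorted(findings, key=lambda f: f.monthly_waste, reverse=True)
```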
The four data layers a complete audit checks
Campaign settings layer. Six settings that silently leak budget from the day a campaign is created. Display Network enabled on Search campaigns. Search partners enabled. Location targeting type. Auto applied recommendations. Bidding strategy alignment with conversion volume. Target CPA or ROAS vs actual performance. These appear in roughly 60% of the accounts I audit. Each one is checked against a specific threshold. For the full dollar impact analysis of each setting, see the wasted spend diagnostic framework.
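A minimal sketch of those six checks, assuming illustrative field names rather than Google Ads API objects and placeholder numeric thresholds:

```python
def check_campaign_settings(campaign: dict, account: dict) -> list[str]:
    """Flag the six silent budget leaks for one campaign.

    Field names are illustrative stand-ins, and the numeric thresholds
    (30 conversions, 1.5x target overshoot) are placeholders, not the
    workflow's actual values.
    """
    flags = []
    if campaign["type"] == "SEARCH" and campaign.get("display_network"):
        flags.append("Display Network enabled on a Search campaign")
    if campaign.get("search_partners"):
        flags.append("Search partners enabled")
    if campaign.get("location_option") == "PRESENCE_OR_INTEREST":
        flags.append("Location targeting includes interest, not just presence")
    if account.get("auto_apply_recommendations"):
        flags.append("Auto-applied recommendations turned on")
    if campaign.get("bid_strategy") in ("TARGET_CPA", "TARGET_ROAS") \
            and campaign.get("conversions_30d", 0) < 30:
        flags.append("Smart bidding with too little conversion volume to learn from")
    target, actual = campaign.get("target_cpa"), campaign.get("actual_cpa")
    if target and actual and actual > 1.5 * target:
        flags.append("Actual CPA far above the target the algorithm is chasing")
    return flags
```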
Keyword layer. This is where most automated tools produce misleading recommendations. They flag keywords for action without checking whether those keywords are actually active. A keyword in a paused ad group is already effectively paused. Recommending that someone pause an already inactive keyword erodes trust in the entire analysis. The workflow applies parent status filtering: it only flags keywords where the full chain is active (campaign enabled, ad group enabled, keyword enabled). For active keywords, it identifies zero conversion keywords above the spend threshold, keywords with CPA more than double the account average, and low quality score keywords actively draining budget.
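Here is roughly what parent status filtering plus the three active-keyword checks look like. The keyword fields are illustrative, and the thresholds mirror the examples in this article rather than the workflow's full ruleset:

```python
def audit_keywords(keywords: list[dict], account_avg_cpa: float,
                   target_cpa: float) -> list[tuple[str, str]]:
    """Parent status filtering, then the three active-keyword checks."""
    flagged = []
    for kw in keywords:
        # Only evaluate keywords whose full chain is active. A keyword under a
        # paused campaign or ad group is already effectively paused.
        if not (kw["campaign_status"] == "ENABLED"
                and kw["ad_group_status"] == "ENABLED"
                and kw["status"] == "ENABLED"):
            continue
        spend, conv = kw["spend_30d"], kw["conversions_30d"]
        if conv == 0 and spend > 3 * target_cpa:
            flagged.append((kw["text"], "zero conversions above the spend threshold"))
        elif conv and spend / conv > 2 * account_avg_cpa:
            flagged.append((kw["text"], "CPA more than double the account average"))
        elif kw.get("quality_score", 10) <= 3 and spend > 0:
            flagged.append((kw["text"], "low quality score actively draining budget"))
    return flagged
```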
Search term layer. Every non converting search term gets categorized into four groups: competitor, irrelevant, informational, and relevant but not converting. Each category has a different implication and a different recommended action. When the workflow finds a converting search term not yet added as a keyword, it cross references against every active keyword in the account before recommending additions. For the full categorization process, see how to read a search terms report.
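A simplified version of that categorization, with toy pattern lists drawn from the examples in this article and an exact-text cross-reference standing in for real match-type logic:

```python
# Toy pattern lists; a real audit maintains much longer, account-specific lists.
COMPETITOR_PATTERNS = ("monday.com", "asana", "basecamp")
IRRELEVANT_PATTERNS = ("jobs", "salary", "certification", "internship")
INFORMATIONAL_PATTERNS = ("what is", "how to", "best free")

def categorize_search_term(term: str) -> str:
    """Bucket a non-converting search term into one of the four categories."""
    t = term.lower()
    if any(p in t for p in COMPETITOR_PATTERNS):
        return "competitor"
    if any(p in t for p in IRRELEVANT_PATTERNS):
        return "irrelevant"
    if any(p in t for p in INFORMATIONAL_PATTERNS):
        return "informational"
    return "relevant_not_converting"

def new_keyword_candidates(converting_terms: list[str],
                           active_keywords: set[str]) -> list[str]:
    """Recommend only converting terms not already covered by an active keyword.

    Exact lowercase comparison stands in for real match-type logic here.
    """
    return [t for t in converting_terms if t.lower() not in active_keywords]
```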
Ad group layer. Performance benchmarked against account averages. Ad groups with CPA more than 2x the account average get flagged. Ad groups with strong CTR but zero conversions get marked as potential funnel problems rather than targeting problems. This distinction changes the fix entirely.
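In sketch form, with illustrative fields and a placeholder 3% CTR floor standing in for "strong CTR":

```python
def audit_ad_groups(ad_groups: list[dict],
                    account_avg_cpa: float) -> list[tuple[str, str]]:
    """Benchmark each ad group against the account average."""
    flags = []
    for ag in ad_groups:
        spend, conv, ctr = ag["spend_30d"], ag["conversions_30d"], ag["ctr"]
        if conv and spend / conv > 2 * account_avg_cpa:
            flags.append((ag["name"], "CPA more than 2x the account average"))
        elif conv == 0 and ctr >= 0.03 and spend > 0:
            # Relevant traffic that clicks but never converts points at the
            # landing page or funnel, not at targeting.
            flags.append((ag["name"], "strong CTR, zero conversions: likely funnel problem"))
    return flags
```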
Example audit output: a SaaS account spending $18,000 per month
This is what the workflow produced on a B2B SaaS account targeting “project management software” and related terms.
Campaign settings findings. Two of four Search campaigns had the Display Network enabled. Combined Display spend: $1,400 per month. Purchases from Display placements: zero. The clicks came from app inventory and content sites where nobody was evaluating software. Location targeting was set to “Presence or Interest” on three campaigns, leaking $900 per month to clicks from countries the company does not serve. One campaign was running Target CPA at $80 while actual CPA was $240. The algorithm was oscillating between overbidding on low quality traffic and underbidding on high intent queries, producing erratic results on a daily basis.
Keyword findings. Seven active keywords had consumed more than 3x the target CPA ($80) without a single conversion. Combined excess spend: $2,800 per month. Four keywords carried quality scores of 1 to 3, paying a 300% click premium. Combined spend on those four: $1,100 per month with one conversion. Effective CPA: $1,100.
One zero conversion keyword told a more nuanced story. “Cloud ERP solutions” had spent $420 with no conversions. But the search terms it triggered included “ERP implementation for mid market manufacturing” (3 clicks, 2 conversions, $45 total) and “cloud ERP for professional services” (5 clicks, 1 conversion, $62). The keyword looked like waste. Its search terms were converting at $36 CPA, well below the $80 target. The recommendation: add those converting terms as exact match keywords, then pause the broad match parent.
Search term findings. 680 unique search terms in 30 days. 420 had spend with zero conversions. Categorized: 85 competitor terms (“Monday.com pricing,” “Asana enterprise plan,” “Basecamp alternatives”) totaling $980. 190 irrelevant terms (“project management jobs,” “PMP certification,” “project management salary”) totaling $2,100. 95 informational terms (“what is project management software,” “best free project management tool”) totaling $780. 50 relevant but not converting terms (“project management software for construction”) totaling $640.
Themed negative keyword lists generated: a “careers and education” list (6 entries blocking 130 or more future matches), a “competitor” list (12 entries), and a “certification” list (4 entries).
Total identified waste: $8,300 per month. 46% of total budget.
How findings get classified by severity
Not all waste is equally urgent. The workflow classifies every finding using a formula: identified monthly waste divided by total monthly spend, multiplied by 100.
Critical: waste exceeds 10% of total monthly spend. In the SaaS account, the combined search term waste from competitor and irrelevant categories ($3,080, or 17.1% of spend) was critical. This means more than one dollar in six was going to terms where no conversion was possible.
Medium: waste between 5% and 10%. The Display Network leak ($1,400, or 7.8% of spend) was medium. Significant, but the fix is a single settings change.
Low: waste below 5% but above the minimum threshold. The location targeting leak ($900, or 5% of spend) sat at the boundary. Informational search terms fell into low severity individually but aggregated into medium when viewed as a category.
This classification prevents the most common audit mistake: spending time on a $200 per month inefficiency while a $3,000 per month leak runs unchecked. The prioritized action list puts the highest dollar impact findings first. Fix the critical items before touching anything else.
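The formula itself fits in a few lines. The 10% and 5% bands are the ones described above; the minimum dollar threshold below which a finding is dropped is a placeholder, not the workflow's value:

```python
def classify_severity(monthly_waste: float, monthly_spend: float,
                      min_waste: float = 100.0) -> str | None:
    """Severity = identified monthly waste as a percentage of total monthly spend."""
    if monthly_waste < min_waste:
        return None  # too small to surface as a finding
    pct = monthly_waste / monthly_spend * 100
    if pct > 10:
        return "critical"
    if pct > 5:
        return "medium"
    return "low"
```

On the SaaS account's numbers: classify_severity(3080, 18000) comes back critical at 17.1%, classify_severity(1400, 18000) comes back medium at 7.8%, and classify_severity(900, 18000) lands exactly on the 5% line and falls to low, which is why the location targeting leak sat at the boundary.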
The distinction that changes the fix: targeting problem vs funnel problem
One of the SaaS account’s ad groups showed a pattern that a surface level audit would misdiagnose. CTR was 3.8%. Quality score averaged 7. But conversion rate was zero over 60 days.
A targeting problem would show low CTR and irrelevant search terms. That is not what the data showed. The traffic was relevant. People were clicking. Nobody was converting. The workflow flagged this as a funnel problem: the landing page had been redesigned three weeks earlier and the demo request form was broken on mobile. Mobile traffic represented 58% of clicks in that ad group.
Compare that to another ad group where CTR was 1.1% and the top search terms were “project management internship” and “project management course online.” Low CTR. Irrelevant terms. That is a targeting problem. The fix is match types and negative keywords, not landing page changes.
Missing this distinction leads teams to pause keywords that are generating qualified traffic (fixing the wrong thing) or to rewrite landing pages when the real problem is that the wrong people are clicking (also fixing the wrong thing). The audit workflow flags each finding with its likely root cause so the response matches the problem.
What this means for teams running multiple accounts
When a single Google Ads account audit produces this depth of analysis consistently, two things change.
First, every account gets the same standard of review regardless of who runs it. A junior team member running the workflow produces the same diagnostic depth as the senior person whose judgment was encoded. The quality floor rises across the entire portfolio. This is the core idea behind encoding expertise so it persists independent of any individual.
Second, the senior practitioners get their time back. Instead of spending three to four hours on mechanical analysis per account, they spend 20 minutes reviewing the output and making the strategic calls that require human context. The AI handles what can be systematized. The human handles what cannot. For more on where this creates genuine leverage and where it falls short, see what works and what does not in AI for Google Ads management.
The accounts that perform best over time are not the ones that got audited once. They are the ones where the audit runs with the same rigor every month.
The free audit I run is powered by this workflow. Every finding is prioritized by estimated monthly impact. For teams that want to build workflows like this around their own expertise, AI agents are how I help make that happen.