
The Facebook Ads Audit I Run on Every New Client Account

Every account has different campaigns. Almost every account has the same problems. Here is the exact audit — seven areas, the questions I ask in each, and what I fix before I touch a single bid or budget.

Mujeeb Rehman

Digital Marketing Consultant & AI Strategist · MSc Digital Marketing (Distinction)

When I take on a new paid social client, the first thing I do is not touch a single campaign. I audit first. Every time without exception — because the decisions made in the first week of an engagement set the frame for everything that follows. If I start optimising before I understand what is actually broken, I am solving the wrong problem faster.

Over years of auditing Meta Ads accounts across e-commerce, lead generation, and service businesses, the same problems appear in almost every account. Not the same campaigns — the same structural issues, the same tracking errors, the same targeting habits, the same creative mistakes. The problems have different names in each account. The patterns are identical.

This is the exact audit I run. Seven areas, in this order, every time.

Before I Touch Anything

Before opening Ads Manager, I ask the client three questions. The answers determine how I weight everything I find in the audit.

What does success look like in numbers? Not "more sales" — specific numbers. What CAC, what ROAS, what leads per month. If they do not know, the audit is secondary to establishing this first. You cannot optimise toward a target you have not defined.

What is your break-even ROAS? Most clients have never calculated this. I calculate it with them before the audit — because every finding in the audit needs to be measured against whether it is moving them toward or away from profitability, not just toward better-looking metrics.
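The calculation itself is simple arithmetic. Here is a minimal sketch of how I walk clients through it — the product price and cost figures are hypothetical examples, not client data:

```python
def break_even_roas(price: float, variable_costs: float) -> float:
    """Break-even ROAS: the revenue each unit of ad spend must return
    just to cover costs.

    variable_costs = cost of goods + shipping + payment fees per order.
    Gross margin fraction = (price - variable_costs) / price, and
    break-even ROAS is the reciprocal of that margin.
    """
    margin = (price - variable_costs) / price
    return 1 / margin

# Example: a £50 product with £20 of variable costs is a 60% margin.
# Break-even ROAS = 1 / 0.6 ≈ 1.67 — every £1 of spend must return
# at least £1.67 in revenue before the account makes any profit.
print(round(break_even_roas(50, 20), 2))  # → 1.67
```

Anything reported above this floor is profit; anything below it is a loss, no matter how healthy the platform dashboard looks.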

What have you already tried and stopped? The campaign graveyard — the paused campaigns and dead ad sets — is often as informative as what is running. It tells me what the client has been through, what they have given up on, and sometimes what they abandoned prematurely.

You cannot optimise your way out of a structural problem. Fix the structure first. Then optimise.

1. Account Structure

Campaign hierarchy, naming conventions, objective alignment

Account structure is the foundation. If it is wrong, everything built on top of it is compromised — and no amount of creative or targeting optimisation will fix a structurally broken account.

I look for three things: whether campaigns are organised by objective (prospecting vs retargeting vs retention), whether the naming convention is clear enough to navigate quickly, and whether there is a coherent logic to why each campaign exists.

What I check

Are prospecting and retargeting campaigns separated — or running in the same campaign with mixed audiences?
Is there a clear naming convention? Can I understand what a campaign is targeting and optimising for from its name alone?
How many active ad sets are there? Is the budget spread too thin to exit the learning phase on any of them?
Are there duplicate audiences across ad sets competing with each other for the same inventory?
Are paused campaigns still accumulating — a graveyard of tests that were never reviewed and consolidated?

Most common red flag

More than 10 active ad sets with a total monthly budget under $5,000. This guarantees that no single ad set reaches the 50 events needed to exit learning — and the algorithm never gets enough data to optimise properly. Consolidate aggressively.
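The arithmetic behind that red flag is worth making explicit. A rough sketch, using the ~50-events-in-7-days learning threshold described above and a hypothetical target CPA:

```python
def min_daily_budget(target_cpa: float, events_needed: int = 50,
                     window_days: int = 7) -> float:
    """Rough daily budget one ad set needs to exit the learning phase:
    enough spend to buy ~50 optimisation events within 7 days."""
    return events_needed * target_cpa / window_days

def max_ad_sets(monthly_budget: float, target_cpa: float) -> int:
    """How many ad sets the account can realistically fund through
    learning at once, given the monthly budget."""
    daily_budget = monthly_budget / 30
    return int(daily_budget // min_daily_budget(target_cpa))

# Example: $5,000/month at a $20 target CPA needs ~$143/day per ad set,
# which supports exactly one ad set in learning at a time.
# Spreading that budget across 10+ ad sets fragments it hopelessly.
print(max_ad_sets(5000, 20))  # → 1
```

The exact threshold varies by account, but the direction of the conclusion rarely does: consolidate until the budget per ad set clears the learning-phase bar.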

2. Pixel &amp; Tracking

Conversion events, attribution windows, signal quality

This is the area where I find the most consequential problems — and the most overlooked ones. A broken pixel is the worst possible foundation for a paid campaign, but many accounts have been running on broken tracking for months without anyone noticing because the campaign metrics still look reasonable.

I verify every conversion event manually using Meta's Test Events tool. I do not trust the Events Manager dashboard alone — it can show events as active when the implementation has errors that cause undercounting or miscounting.

What I check

Is the Meta Pixel installed correctly — via direct code, GTM, or a native integration (Shopify, WordPress)?
Are the right conversion events set up? Purchase, lead, add to cart, initiate checkout — whichever are relevant to the business goal?
Are conversion events firing correctly and at the right stage in the funnel? Use Test Events to verify, not just Events Manager.
Is the Conversions API set up alongside the pixel? Without server-side tracking, iOS attribution losses can be severe.
What attribution window is being used — and does it match how the business makes decisions? 7-day click is standard; check if view-through attribution is inflating results.

Most common red flag

Purchase events firing on the product page rather than the order confirmation page — attributing revenue every time someone views a product, not when they buy. I have seen accounts scaling budget on the basis of purchase data that was entirely fictional.

Fix first

Tracking errors must be fixed before any other optimisation. Every decision made on incorrect data is a decision made in the wrong direction — and scaling budget makes the problem more expensive, not less.

3. Campaign Objectives

Objective-goal alignment, funnel coverage

The objective you choose tells Meta's algorithm what to optimise for. Choose the wrong objective and you are paying for the wrong outcome — and the algorithm will deliver exactly what you asked for, which may not be what you need.

What I check

Are campaigns using the correct objective for their stage in the funnel? Traffic campaigns for cold audiences, Sales campaigns where purchase is the goal.
Is anyone running Traffic or Engagement objectives expecting conversions? This is common — and it optimises for clicks or engagement, not for buyers.
Is there any awareness or consideration activity, or is every campaign bottom-of-funnel? Cold audiences need warming before converting efficiently.
Are Advantage+ Shopping Campaigns being used appropriately — or is the account over-relying on them at the expense of controlled prospecting?

Most common red flag

Running a Traffic objective to drive sales — because it is cheaper per click. It is cheaper because Meta is optimising for people who click, not people who buy. The audience that clicks cheaply and the audience that converts are rarely the same people.

4. Audience Targeting

Audience size, overlap, exclusions, lookalike quality

Audience targeting is where most accounts have the most opinions and the least rigour. Interest stacks, saved audiences, lookalikes — many accounts have accumulated targeting configurations over years without any systematic review of whether they are still working.

What I check

Are existing customers excluded from prospecting campaigns? Paying to convert someone who already bought is wasted spend — and common.
Is there audience overlap between ad sets? Use Audience Overlap in Audience Manager to check if ad sets are competing for the same people.
Are lookalike audiences based on high-quality seed data — purchasers, high-LTV customers — rather than all website visitors or page fans?
What is the retargeting audience size? Too small (under 1,000) and frequency will be dangerously high. Too large and it loses intent signal.
Are interest-based audiences still being tested, or has the account given up on them? Broad targeting has become more effective post-iOS but deserves structured testing, not abandonment or blind faith.

Most common red flag

No customer exclusions on prospecting campaigns, combined with a retargeting audience that includes everyone who ever visited the site — including existing customers seeing ads for products they already own. Both waste budget and damage brand perception.

5. Ad Creative

Freshness, format variety, message match, creative fatigue

Creative is the highest-leverage variable in any Meta Ads account. More than targeting, more than bidding, more than structure — the creative determines whether the ad stops the scroll or disappears into the feed. I look at creative with two questions: is it fresh enough to avoid fatigue, and is it good enough to earn attention?

What I check

When was the creative last refreshed? Anything running longer than 4–6 weeks on a cold audience needs checking for fatigue signals.
Is frequency rising while CTR is falling? This is the clearest early signal of creative fatigue — often appearing 2–3 weeks before CPA visibly deteriorates.
Is there format variety — static, video, carousel, UGC? Accounts running only one format are leaving reach and efficiency on the table.
Does the ad creative match the landing page? Message discontinuity between ad and landing page is one of the top causes of high click, low conversion performance.
Are there at least 3 creative variants per ad set? This gives the algorithm room to find the best performer without constantly resetting learning.

Most common red flag

One creative per ad set, running for 3+ months, with frequency above 4 and a CTR that has dropped by more than 40% from launch — but nobody noticed because the CPA is still within target. The CPA will follow the CTR down, and when it does, the drop is usually sharp.
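That red flag is mechanical enough to check in a weekly export. A hypothetical sketch, using the thresholds described above (frequency over 4, CTR down more than 40% from launch) — the function name and inputs are my own illustration, not a Meta API:

```python
def is_fatigued(frequency: float, launch_ctr: float, current_ctr: float,
                freq_threshold: float = 4.0,
                ctr_drop_threshold: float = 0.40) -> bool:
    """Flag an ad as fatigued when frequency exceeds the threshold AND
    CTR has fallen by more than the threshold fraction since launch."""
    ctr_drop = (launch_ctr - current_ctr) / launch_ctr
    return frequency > freq_threshold and ctr_drop > ctr_drop_threshold

# Example: launched at 2.1% CTR, now at 1.1% (a ~48% drop),
# with frequency at 4.6 — fatigued, refresh before CPA follows.
print(is_fatigued(4.6, 0.021, 0.011))  # → True
```

Run it against every active ad each week and the fatigue conversation stops being a matter of opinion.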

What works consistently

UGC-style video, direct-to-camera testimonials, and before/after formats consistently outperform polished brand creative in most categories. If the account has none of these, this is the first creative test I recommend.

6. Budget &amp; Bidding

Spend distribution, bid strategy, learning phase status

Budget and bidding decisions have a compounding effect on everything else in the account. Too little budget per ad set and nothing exits learning. Wrong bid strategy and you are either winning the wrong auctions or paying too much for the right ones.

What I check

Is any ad set in Learning Limited or Learning status? If so — why? Usually budget fragmentation or too many recent edits.
Is CBO (Campaign Budget Optimisation) or ABO (Ad Set Budget Optimisation) being used — and is the choice appropriate for the account's scale and objectives?
Are there any ad sets receiving less than $5/day? At this level, most ad sets cannot gather meaningful data and should be consolidated or paused.
Is the bid strategy matched to the account's maturity? Lowest cost for newer accounts, cost cap or bid cap only when there is enough conversion history to support it.
Is spend distributed sensibly across the funnel — or is 95% going to bottom-of-funnel with nothing feeding the top?

Most common red flag

Using cost cap bidding on an account with fewer than 30 conversions per week — which causes the algorithm to under-deliver because it cannot find enough inventory that meets the cost threshold. The result is low spend, low data, and the impression that "the ads aren't working" when the problem is the bid strategy.
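The fix reduces to a volume check. A hypothetical encoding of the decision rule, using the 30-conversions-per-week threshold described above:

```python
def recommended_bid_strategy(weekly_conversions: int,
                             threshold: int = 30) -> str:
    """Cost caps need enough conversion history to find inventory that
    meets the threshold; below it, let the algorithm spend freely."""
    return "cost cap" if weekly_conversions >= threshold else "lowest cost"

# An account converting 12 times a week should not be cost-capping yet.
print(recommended_bid_strategy(12))  # → lowest cost
print(recommended_bid_strategy(45))  # → cost cap
```

The threshold is a working heuristic rather than a Meta-published constant, but the principle holds: constrain the bid only once there is enough data to constrain it with.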

7. Reporting &amp; KPIs

Metrics in use, benchmarks, decision framework

The final area is how the account is being measured and reported on — because the metrics you track determine the decisions you make. Many accounts are being managed against the wrong KPIs, which means even good decisions produce the wrong optimisations.

What I check

Is ROAS being reported as the primary success metric — and has the break-even ROAS been calculated? ROAS without a break-even floor is decorative, not decisional.
Are blended metrics (total revenue ÷ total spend) being tracked alongside platform-reported metrics? The gap between the two reveals attribution inflation.
Is there a regular review cadence — weekly at minimum — with a consistent set of metrics reviewed each time?
Are creative, audience, and placement breakdowns being reviewed, or just top-line campaign metrics?
Is there a documented decision rule for when to scale, pause, or change a campaign — or is every decision made on gut feel?

The reporting setup I implement

A custom column view in Ads Manager showing: spend, impressions, CPM, CTR, CPC, conversions, CPA, ROAS, and frequency. Reviewed weekly alongside Shopify or CRM data to reconcile platform-reported figures against actual revenue.
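The reconciliation itself is simple division. A minimal sketch, using hypothetical weekly figures — platform ROAS from Ads Manager, revenue and spend from the store backend:

```python
def blended_roas(total_revenue: float, total_spend: float) -> float:
    """Blended ROAS: all revenue ÷ all ad spend, regardless of what
    the platform claims credit for."""
    return total_revenue / total_spend

def attribution_inflation(platform_roas: float, blended: float) -> float:
    """Fraction by which platform-reported ROAS exceeds blended ROAS."""
    return platform_roas / blended - 1

# Example: Meta reports 4.0x ROAS, but £18,000 of total revenue on
# £6,000 of spend is a blended 3.0x — the platform figure is
# inflated by ~33%, likely view-through and over-attribution.
blended = blended_roas(18_000, 6_000)
print(blended)                                         # → 3.0
print(round(attribution_inflation(4.0, blended), 2))   # → 0.33
```

A persistent gap between the two numbers is not a tracking bug to panic over — it is the attribution-inflation figure every scaling decision should be discounted by.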

What I Fix First — The Priority Order

After the audit, there is always a list of issues. The order in which you fix them matters as much as fixing them. Not all problems have equal impact, and some fixes are prerequisites for others to work.

1. Fix tracking errors. Nothing else is meaningful until the data is reliable. (Critical)
2. Consolidate ad sets. Reduce to the minimum needed to test key variables and exit the learning phase. (Critical)
3. Add customer exclusions to all prospecting campaigns. Immediate spend efficiency gain. (High)
4. Fix campaign objectives where misaligned — Traffic → Sales, Engagement → Leads. (High)
5. Refresh fatigued creative. Test UGC or direct-response formats if not present. (High)
6. Establish a reporting framework with break-even ROAS visible alongside platform ROAS. (Medium)
7. Review bid strategy once structure and tracking are clean. Not before. (Medium)

The rule I follow

I do not change bids or budgets in the first two weeks of an engagement. Structure and tracking first — always. Optimising spend allocation before the foundation is solid is the most expensive mistake in paid social management. Fix what is broken. Then scale what is working.


Frequently Asked Questions

How do you audit a Facebook Ads account?

A Facebook Ads audit should cover seven areas in order: account structure (campaign hierarchy, naming, objective alignment), pixel and tracking (conversion event setup, attribution windows, signal quality), campaign objectives (whether objectives match business goals), audience targeting (size, overlap, exclusions, lookalike quality), ad creative (freshness, format variety, message match), budget and bidding (spend distribution, bid strategy, learning phase status), and reporting (which metrics drive decisions and whether they are the right ones).

What are the most common Facebook Ads mistakes?

The most common Facebook Ads mistakes are: too many ad sets fragmenting the learning phase; purchase events firing on the wrong page giving false conversion data; using Traffic or Engagement objectives when the goal is sales; not excluding existing customers from prospecting; letting creative run past the fatigue point without monitoring frequency and CTR; and using cost cap bidding before the account has enough conversion volume to support it.

What is the Facebook Ads learning phase and why does it matter?

The learning phase is the period during which Meta's algorithm gathers data to optimise delivery for a new ad set. An ad set needs approximately 50 optimisation events within 7 days to exit learning. Too many ad sets with fragmented budgets — or frequent edits to running campaigns — keep the account in permanent Learning Limited status, where the algorithm cannot optimise properly and performance is consistently suboptimal.

How often should you refresh Facebook ad creative?

Refresh creative every 3–4 weeks for most cold audiences, or earlier if frequency rises above 3 and CTR is declining. Creative fatigue shows up in CTR and CPM before it appears in CPA — so monitor leading indicators weekly rather than waiting for cost metrics to deteriorate. Running 3–5 active creative variants per ad set helps manage fatigue without repeatedly resetting the learning phase.

What is a good Facebook Ads account structure?

A good account structure separates campaigns by objective (prospecting, retargeting, retention), separates ad sets by audience type, and consolidates budget to allow the algorithm to find the best performers. The most common structural mistake is too many ad sets with too little budget each — preventing any single ad set from gathering enough data to exit the learning phase and optimise properly.

Mujeeb Rehman

Digital Marketing Consultant & AI Strategist · MSc Digital Marketing, Distinction — Robert Gordon University

7+ years running paid social campaigns across e-commerce and lead generation. I audit and rebuild Meta Ads accounts for businesses that are spending but not growing. Available for paid social audits, strategy, and full management engagements.

Want me to run this audit on your account?

Facebook Ads Audit — £300

Full account review across all 7 areas, a prioritised fix list, and a 30-minute call to walk through findings. Delivered within 5 working days.

Book an Audit →