3 Competitor Research Signals That Still Work After Meta Andromeda
The old playbook is dead. These three signals survive because they measure outcomes, not account structure.
Your Old Spy Playbook Stopped Working
If you've been doing competitor research on Meta the same way since 2024, most of what you're reading is noise.
Andromeda changed how Meta delivers ads. The algorithm used to match ads to audiences based on targeting inputs — demographics, interests, lookalikes. Advertisers picked the audience. The system delivered.
That's over.
Andromeda is a retrieval engine. It scores every piece of creative against every user in real time. Your targeting inputs are suggestions, not instructions. Advantage+ campaigns don't even let you set audiences — Meta picks who sees what.
This broke three classic competitor research signals overnight:
Ad set structure. You used to count how many ad sets a brand ran to understand their testing framework. Now Advantage+ consolidates everything into one campaign. Structure tells you nothing.
Audience tells. Detailed targeting exclusions, custom audience layers, lookalike percentages — all visible in the Ad Library before 2025. Now most brands run broad or Advantage+ with zero audience inputs exposed.
Testing-phase counts. The old method: count how many new ads launched this week, assume the brand is testing. But Andromeda rotates creative dynamically. A brand can test 30 variants inside one campaign and you'll never see the internal rotation.
Most competitor research guides written before 2026 still teach you to look for these signals. They're outdated. The data they reference doesn't exist in the Meta Ads ecosystem anymore.
If you're still counting ad sets or reading audience targeting, you're making decisions based on signals that vanished.
The good news: three signals still work. They survived Andromeda because they measure outcomes — what the algorithm actually does — instead of account structure. Andromeda can change how it delivers ads every quarter. It can't change the fact that winning creatives get scaled, successful angles get repeated, and real budgets get spent.
Signal 1: Duplicate Count Spikes Tell You What's Scaling Right Now
This is the most underrated metric in ad research.
When Meta's algorithm finds a winning creative, it duplicates the ad across placements, audience segments, and campaign structures. A single creative can appear 15–40 times in the Ad Library once it's scaling hard.
That duplication is Andromeda's fingerprint. The algorithm doesn't duplicate losers. High duplicate count means the system is actively distributing that creative to more people because it's converting.
Before Andromeda, you'd look at how many ad sets a brand ran to gauge scale. Now you look at how many times the algorithm duplicated a single creative. Same question, different signal — and this one is more reliable because the advertiser can't fake it.
Here's how to use it. Open Brandsearch Discovery, filter to Meta, and sort by Most duplicates. You see the ads that Meta's algorithm has decided to scale.
I combine this with a running days filter. Set 25+ days and Most duplicates sort. That gives you creatives that are both proven and actively scaling.
A brand with one ad at 30 duplicates tells you more than a brand with 200 active ads and no duplicates. The first has a winner the algorithm loves. The second is spraying creatives and hoping something sticks.
Three patterns to watch (a spike-detection sketch follows the list):

Spikes. If you check a competitor weekly and their top creative jumps from 8 duplicates to 25, that ad just entered a scaling phase. The algorithm found an audience pocket and is pushing hard.

Sustained high counts. A creative sitting at 10+ duplicates for 30 days is a proven winner. The brand didn't create those duplicates — the algorithm did, because the creative keeps converting.

Cluster patterns. If 3 of a brand's top 5 duplicated ads use the same hook format — pain-point question, UGC testimonial, before/after — that tells you the angle that's working, not just the individual creative.
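Spotting week-over-week jumps by hand gets tedious past a few brands. Here's a minimal sketch in Python, assuming you save weekly duplicate counts to CSV; the snapshot filenames and column names are hypothetical, not a real Brandsearch export format:

```python
import csv
from pathlib import Path

# Hypothetical weekly snapshots, e.g. duplicates_2026-01-05.csv
# with columns: brand, ad_id, duplicate_count
SNAPSHOT_DIR = Path("snapshots")

def load_snapshot(path: Path) -> dict[str, int]:
    """Map ad_id -> duplicate count for one weekly export."""
    with path.open() as f:
        return {row["ad_id"]: int(row["duplicate_count"]) for row in csv.DictReader(f)}

def find_spikes(prev: dict[str, int], curr: dict[str, int],
                ratio: float = 2.0, floor: int = 8) -> list[tuple[str, int, int]]:
    """Flag ads whose duplicate count at least doubled off a meaningful base."""
    spikes = []
    for ad_id, count in curr.items():
        before = prev.get(ad_id, 0)
        if before >= floor and count >= before * ratio:
            spikes.append((ad_id, before, count))
    return sorted(spikes, key=lambda s: s[2] - s[1], reverse=True)

# Compare the two most recent weekly exports (ISO-dated names sort correctly)
previous, latest = sorted(SNAPSHOT_DIR.glob("duplicates_*.csv"))[-2:]
for ad_id, before, now in find_spikes(load_snapshot(previous), load_snapshot(latest)):
    print(f"{ad_id}: {before} -> {now} duplicates (entering a scaling phase)")
```

Run it after each weekly check and the scaling-phase ads surface themselves.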
I check duplicate counts weekly. It takes 5 minutes and tells you more about what's actually working than scrolling the free Meta Ad Library for an hour.
The free Ad Library shows you what's running. Duplicate count tells you what's winning. That's the difference between a list of ads and actual competitive intelligence.
Signal 2: Creative Test Clusters Show Which Angles Won
Knowing what a competitor is running right now is useful. Knowing which angles they tested and killed is more useful.
The Creative Tests tab in Brand Analysis groups a brand's ads into clusters based on visual and copy similarity. Each cluster is a batch of creatives the brand tested around the same angle — same product shot style, same hook structure, same offer framing.
What matters is which clusters survived.
Open Brandsearch Brand Analysis for a competitor. Go to the Creative Tests tab. You'll see clusters ranked by run-length and reach. The top clusters are the angles that won. The bottom clusters — short run times, low reach — are the angles that got killed.
Pull up a competitor like gymshark.com. Look at their top 3 clusters. Note the patterns (a quick ranking sketch follows the list):
Hook type. Do their winners open with a problem, a result, or a demonstration? If 3 out of 4 top clusters use problem-first hooks, that's a signal about what the audience responds to.
Visual format. Are the winning clusters mostly UGC testimonials, studio product shots, or before/after comparisons? The format that survives longest is the format the algorithm rewards for that niche.
Offer angle. Bundle vs. single product, urgency vs. value, social proof vs. feature callout. The winning clusters tell you which positioning the market has validated.
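To make this comparison systematic, you can rank exported clusters by run length and reach. A minimal sketch, assuming a hypothetical CSV export with cluster_id, hook_type, first_seen, last_seen, and reach columns (not a documented Brandsearch format):

```python
import csv
from datetime import date

def run_days(row: dict) -> int:
    """Days between a cluster's first and last seen dates (ISO format)."""
    start = date.fromisoformat(row["first_seen"])
    end = date.fromisoformat(row["last_seen"])
    return (end - start).days

with open("gymshark_clusters.csv") as f:
    clusters = list(csv.DictReader(f))

# Winners: long runs and high reach. Killed angles: short runs, low reach.
clusters.sort(key=lambda c: (run_days(c), int(c["reach"])), reverse=True)

print("angle (hook type)      run days   reach")
for c in clusters:
    print(f'{c["hook_type"]:<22} {run_days(c):>8}   {c["reach"]}')
```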
This is intelligence that targeting data never gave you. You're not guessing what angles to test — you're seeing which ones already passed the market's test.
I track 5-8 competitors and check their Creative Tests tab every two weeks. A real pattern: three fitness supplement brands shifted their top clusters from "ingredient education" angles to "before/after transformation" UGC within the same 30-day window. That's not coincidence. That's the algorithm telling you what converts in that market right now.
When three competitors converge on the same hook style, that's market-level signal you can act on.
One more thing. The Creative Tests tab also shows you how fast a brand iterates. A brand that launches 4 new clusters per month and kills 3 of them is running an aggressive testing system. A brand that's been running the same 2 clusters for 90 days has found what works and is riding it.
Both patterns are useful. The fast iterator tells you which angles the market is rejecting right now. The slow rider tells you which angle has sustained staying power. Track both types of competitors.
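If you want to tag competitors by tempo as you track them, a rough classifier is enough. The cutoffs here are the ones from the paragraphs above, nothing official:

```python
def iteration_profile(launched_per_month: float, killed_per_month: float) -> str:
    """Rough tempo label using the cutoffs described above."""
    if launched_per_month >= 4 and killed_per_month >= 0.75 * launched_per_month:
        return "fast iterator: watch their kills for angles the market rejects"
    if launched_per_month <= 1:
        return "slow rider: their surviving angle has staying power"
    return "steady tester: somewhere in between"

print(iteration_profile(4, 3))    # launches 4 clusters a month, kills 3
print(iteration_profile(0.7, 0))  # same 2 clusters running for 90 days
```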
Signal 3: EU Adspend Shows Real Budget, Not Estimates
Most "ad spend" numbers you see online are guesses. They're modeled from impression ranges and CPM assumptions. I've seen tools estimate a brand's spend at $2,000/day when the real number was $400.
EU Adspend is different. It's actual disclosed data.
Meta is legally required to disclose real advertising spend in European Union markets. That disclosure includes daily spend by country, total campaign spend, and reach numbers — all verified, not modeled.
This matters because budget is the truest signal of conviction. A brand can launch 200 ads as a test. But a brand spending EUR 3,200/day across France, Germany, and the Netherlands for 6 weeks straight is not testing. They found something that works and they're putting real money behind it.
Here's how to use it:
Verify scale. A brand with 200 active ads might spend only EUR 400/day in EU. That's a testing budget spread thin, not a scaling operation.
Track budget shifts. If a brand's EU daily spend jumps from EUR 800 to EUR 2,500 in two weeks, they found something. Cross-reference with duplicate count spikes to find the creative driving the increase; a jump-detection sketch follows this list.
Compare competitors. Two brands in the same niche running similar campaigns — one spends EUR 5,000/day and the other EUR 600/day. The spend tells you who's winning.
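Budget jumps are easy to catch once you log daily spend. A minimal sketch that compares the last five days against the five before them; the spend series and the 2x threshold are illustrative, not pulled from any real brand:

```python
import statistics

# Hypothetical daily EU spend for one brand (EUR), oldest -> newest,
# logged weekly from the EU Adspend view
daily_spend = [780, 810, 795, 820, 900, 1400, 2100, 2450, 2500, 2480]

def spend_shift(series: list[float], window: int = 5, ratio: float = 2.0) -> bool:
    """True if the recent-window average is >= ratio x the prior-window average."""
    if len(series) < 2 * window:
        return False
    prior = statistics.mean(series[-2 * window:-window])
    recent = statistics.mean(series[-window:])
    return recent >= ratio * prior

if spend_shift(daily_spend):
    print("Budget jump detected: cross-reference with duplicate-count spikes")
```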
You can sort by Total Adspend (EU) or Avg. Daily Adspend (EU) in Brandsearch Discovery to see real budget commitment across any niche. Combined with running-day filters, it gives you a shortlist of brands spending real money on ads that keep running.
The country breakdown adds another layer. A brand spending 60% of their EU budget in Germany and 5% in France has tested both markets and found one that converts. That's market-entry intelligence you'd normally pay a consultant for.
I use EU Adspend as the final confirmation layer. Signal 1 (duplicate count) tells me a creative is scaling. Signal 2 (creative test clusters) tells me which angle won. EU Adspend tells me the brand is putting EUR 2,000+/day behind it. All three pointing at the same creative? That's a validated strategy worth studying.
Without the budget data, you don't know if a high-duplicate creative is backed by real spend or just a glitch in a $50/day campaign. EU Adspend removes the guesswork.
The 3-Signal Weekly Workflow (20 Minutes)
Each signal is useful alone. Together they build a competitor research system that works regardless of what Meta changes next.
- Duplicate scan (5 min). Open Brandsearch Discovery, Meta platform, sort by most duplicates. Filter to your niche and 25+ running days. Note which creatives are spiking. Save interesting ones to a Swipe File.
- Cluster check (10 min). Open Brandsearch Brand Analysis for your top 3-5 competitors. Check the Creative Tests tab. Which angle clusters are growing? Which died? Write down the winning hook formats.
- Budget verify (5 min). Check EU Adspend for any brand from steps 1-2. Are they spending EUR 1,000+/day, or just running lots of low-budget tests? Drop any brand from your watchlist that spends under EUR 500/day — they're still testing.
That's 20 minutes. You walk away with a short list of validated competitor strategies, the exact creative angles that won, and real budget numbers backing each one.
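If you log each weekly pass, the final filter can be automated. A minimal sketch, assuming you record one reading per brand per week; the field names mirror the three steps above and the thresholds are the ones from step 3, otherwise everything here is hypothetical:

```python
from dataclasses import dataclass

# One logged reading per brand per week
@dataclass
class WeeklyReading:
    brand: str
    top_duplicates: int      # Signal 1: highest duplicate count this week
    winning_clusters: int    # Signal 2: clusters with long run times
    eu_daily_spend: float    # Signal 3: avg daily EU spend (EUR)

def keep_on_watchlist(r: WeeklyReading) -> bool:
    """All three signals must point the same way: a scaling creative,
    a proven angle, and real budget (the EUR 500/day cutoff from step 3)."""
    return r.top_duplicates >= 10 and r.winning_clusters >= 1 and r.eu_daily_spend >= 500

readings = [
    WeeklyReading("brand-a.com", top_duplicates=25, winning_clusters=2, eu_daily_spend=2400),
    WeeklyReading("brand-b.com", top_duplicates=3, winning_clusters=0, eu_daily_spend=380),
]
print([r.brand for r in readings if keep_on_watchlist(r)])  # ['brand-a.com']
```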
After a month you'll have four weeks of readings across all three signals, building into a competitor intelligence file that reflects what's actually happening in your market.
Compare that to the old method of screenshotting ad sets and guessing at budgets. This is faster and more accurate.
The key: do it every week. One-off research sessions give you a snapshot. Weekly signals give you a trend. After 4 weeks you'll see which competitors are consistently scaling, which angles keep winning across brands, and where real money flows in your niche.
The Bottom Line
Andromeda didn't kill competitor research. It killed the lazy version of it.
The signals that relied on account structure — ad sets, audience inputs, testing-phase counts — are gone because the algorithm took over those decisions. You can't spy on choices the advertiser no longer makes.
Three signals survive because they measure outcomes:
- Brandsearch Discovery (duplicate count sort) — find what the algorithm is actively scaling
- Brandsearch Brand Analysis (Creative Tests tab) — see which angles won and which got killed
- Brandsearch Discovery (EU Adspend sort) — verify real budget commitment
Twenty minutes a week. Three signals. Real intelligence that works in 2026.
The algorithm changed how ads get delivered. It didn't change the fact that winning creatives get duplicated, successful angles get repeated, and real money leaves real bank accounts.
Stop researching account structure. Start researching outcomes.