Unlocking Meta’s Product-Level Ad Data

Ecommerce and Meta often go hand in hand. You can give Meta a 20,000-item catalog and a budget, and with its AI-powered Advantage+ campaigns, it’ll try to pair the right person with the right product, whether that’s a new customer or someone who has viewed those products before.

But what’s actually happening inside that ad? And is there a way to optimize this “black box” Dynamic Product Ad (DPA) format?

Advertisers can see ad-level performance, but have no platform-native insights on which specific products are being shown, clicked, or ignored within a broad DPA.

Is The Algorithm Making The Right Decisions?

That’s exactly the question we wanted to answer.

There are three common traps brands fall into:

1. Over-segmentation: Brands that want more insight break apart their catalog into niche product sets with tons of DPAs.

  • Pros: You can give each ad a bespoke name, which tells you exactly what’s being served. Nice!
  • Cons: This reduces data density and can kill ROI. There’s also a tendency to try to predict which audiences will respond to which products, an approach that’s no longer effective for most brands since Meta’s Andromeda updates.

2. Convoluted reporting: Brands try to infer which products Meta is prioritizing by pairing Google Analytics 4 session data (sessions by product) with Meta ads data (the campaigns/ads that sent those users).

  • Pros: Enables some analysis without falling into the “over-segmentation” pitfall.
  • Cons: Time-consuming to set up, and incomplete. This method doesn’t tell us anything about product-specific engagement within Meta; we would only be guessing at click-through rate, spend, and impressions.

3. “Set it and forget it”: Brands give up all control and let Meta take the wheel.

  • Pros: Avoids over-segmentation issues.
  • Cons: There’s a big risk in trusting the algorithm. You might be pushing products that get high impressions but low sales, effectively burning your budget and losing efficiency.

Trying to make decisions from just Meta Ads Manager UI data is a risk. Many marketers are still not confident in AI-powered campaigns.

At my agency, we created technology to solve this challenge, but fear not, I can walk you through the exact steps so you can do the same for your brand.

Our pilot client for the new technology was a major bathroom retailer investing heavily in DPAs within conversion campaigns.

Let’s go through the three phases in our journey to overcoming this ecommerce challenge.

Phase One: Surfacing Engagement Data

The first stage was visibility: understanding what was actually happening inside these “black box” DPA formats.

As I said above, Meta doesn’t directly report which specific product led to a specific purchase within a DPA in the Ads Manager interface. It’s simply not an available breakdown in the same way that age, placement, etc. are offered.

But the good news is that a treasure trove of insight is buried in the Meta APIs:

  1. Meta Marketing API (specifically the Insights API) is the main API we use to get all ad performance data. It’s how we’re pulling the key metrics like spend, impressions, and clicks for each ad_id and product_id.
  2. Meta Commerce Platform API (or Catalog API). This API provides the list of all product_ids and their associated details (like name, price, category, etc.).
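
To make this concrete, here’s a minimal sketch of pulling both feeds straight from the Graph API with Python’s requests library. The API version, account ID, catalog ID, and token are placeholders, pagination is omitted, and you should confirm the product_id breakdown is available for your catalog ads before relying on it.

```python
import requests

GRAPH = "https://graph.facebook.com/v19.0"   # use your current API version
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"           # placeholder
AD_ACCOUNT_ID = "act_1234567890"             # placeholder ad account
CATALOG_ID = "1234567890"                    # placeholder catalog

# 1. Insights API: ad performance broken down by product_id.
insights = requests.get(
    f"{GRAPH}/{AD_ACCOUNT_ID}/insights",
    params={
        "level": "ad",
        "fields": "ad_id,impressions,clicks,spend",
        "breakdowns": "product_id",          # product-level breakdown for catalog/DPA ads
        "date_preset": "last_30d",
        "access_token": ACCESS_TOKEN,
    },
    timeout=60,
).json().get("data", [])

# 2. Catalog API: product details to join against later.
products = requests.get(
    f"{GRAPH}/{CATALOG_ID}/products",
    params={
        "fields": "id,retailer_id,name,price,availability",
        "access_token": ACCESS_TOKEN,
    },
    timeout=60,
).json().get("data", [])

print(f"Pulled {len(insights)} insight rows and {len(products)} catalog items")
```

In practice, both requests are paginated and scheduled daily, which is exactly the plumbing the ETL connectors mentioned below handle for you.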

Here are the steps:

  1. You first need to pipe API data into a data warehouse (we used BigQuery). Make sure you’re pulling the following fields from the Insights API: impressions, clicks, spend, ad_id, and product_id. If you aren’t a developer, you can use ETL connectors (like Supermetrics, Funnel.io) to get this data into BigQuery or Google Sheets, or use Python scripts if you have a data team.
  2. Once you have these two data streams, join them in a single table using a specific join key. We used Product ID; this is the common thread that must exist in both the ad data and the catalog data to make the connection work (see the join sketch after this list).
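
If both streams land in BigQuery, the join itself is one query. Here’s a hedged sketch using the google-cloud-bigquery client; the project, dataset, and table names are placeholders for whatever your connector creates.

```python
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")  # placeholder project

# Placeholder table names -- substitute whatever your ETL connector writes.
query = """
SELECT
  c.product_id,
  c.name,
  c.price,
  SUM(i.impressions) AS impressions,
  SUM(i.clicks)      AS clicks,
  SUM(i.spend)       AS spend,
  SAFE_DIVIDE(SUM(i.clicks), SUM(i.impressions)) AS ctr
FROM `your-gcp-project.meta.insights_by_product` AS i
JOIN `your-gcp-project.meta.catalog_products`    AS c
  ON i.product_id = c.product_id   -- Product ID is the shared join key
GROUP BY c.product_id, c.name, c.price
ORDER BY spend DESC
"""
product_performance = client.query(query).to_dataframe()
```

The resulting table is what feeds the reporting layer described next.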

Once you’ve done this, you can view your ad performance data (clicks, impressions), but now with a breakdown by product.

This new, combined dataset was then visualized in a Looker Studio report template. Again, other reporting options are available.

To make sense of the data, we needed an easily navigable report rather than pages of raw data. We built the following visualizations:

Product Scatter Chart, Impression Dynamic Product Explorer (DPEx) (Image from author, December 2025)

Product Scatter Chart: This separates each product into four distinct categories (see the sketch after this list):

  • “Star Performers”: High impressions and high clicks.
  • “Promising Products”: Low impressions but a high click-through rate.
  • “Window Shoppers”: High impressions but very low clicks.
  • “Low Priority”: Low clicks and impressions.
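
The segmentation itself is straightforward to reproduce. Below is a hedged pandas sketch that buckets products using median impressions and median click-through rate as the quadrant cut-offs; the thresholds we actually used were tuned to the client’s catalog, so treat these as illustrative defaults.

```python
import pandas as pd

def segment_products(df: pd.DataFrame) -> pd.DataFrame:
    """Assign each product to one of the four scatter-chart segments.

    Expects columns: product_id, impressions, clicks.
    """
    df = df.copy()
    df["ctr"] = df["clicks"] / df["impressions"].clip(lower=1)

    # Illustrative cut-offs: split the catalog at the medians.
    imp_cut = df["impressions"].median()
    ctr_cut = df["ctr"].median()

    def label(row: pd.Series) -> str:
        if row["impressions"] >= imp_cut and row["ctr"] >= ctr_cut:
            return "Star Performer"
        if row["impressions"] < imp_cut and row["ctr"] >= ctr_cut:
            return "Promising Product"
        if row["impressions"] >= imp_cut and row["ctr"] < ctr_cut:
            return "Window Shopper"
        return "Low Priority"

    df["segment"] = df.apply(label, axis=1)
    return df
```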
Top 10 Product Types Chart (Image from author, December 2025)
Bottom 10 Product Types Chart (Image from author, December 2025)

Top/Bottom Products Bar Charts: See at a glance the top 10 and bottom 10 products by engagement.

Product Details Table: View detailed metrics for each product.

This could all be filtered by product name, product type, availability, and any other attributes we wanted (color, price, etc.).

We produced our first-ever client report for product-level ad engagement, and even with just engagement data, we learned a lot:

Creative: We used the data to improve creative briefs.

  • In our client data report, it was interesting to see how much Meta was pushing non-white products (orange sinks, green baths), despite the fact that 95% of their product sales are traditional white variations.
  • We hadn’t prioritized these products initially for the client, but have now created lots more video and creator content featuring these highly clickable variations.

Product Segmentation: We built powerful, data-driven product sets based on real engagement metrics.

  • For example, we tested showing only our most engaging “Star Performer” products in feed-powered collection ads in our upper-funnel campaigns, where the algorithm usually has fewer signals to optimize towards.

Efficiency: This automated a complex analysis that was previously unwieldy and time-consuming.

Crucially, for the first time, we had enough evidence to challenge Meta’s “best practice” of using the widest possible product set.

Pitfalls & Key Considerations

This was a great first step, but we knew there were some key areas that just tapping into Meta’s APIs wouldn’t solve:

  • Engagement Vs. Conversions: The major drawback is that product-level breakdowns are only available for click and impression data, not revenue or conversions. The “Window Shoppers” category, for example, identifies products that get low clicks, but we couldn’t (in this phase) definitively say they don’t lead to sales.
  • Context Is Key: This data is a powerful new diagnostic tool. It tells us what Meta is showing and what users are clicking, which is a huge step forward. The why (e.g., “is this high-impression, low-click item just a high-value product?”) still requires our team’s analysis.

Phase Two: Evolving Meta Engagement Data With GA4 Revenue Data

We knew the Meta-only data above explores just one part of the journey. To evolve, we needed to join it with GA4 data to find out what customers are actually buying after interacting with our feed-powered dynamic product ads.

The Technical Bridge: How We Joined the Data

While Phase One relied on ETL connectors to pull Meta’s API data, Phase Two required a different stream for GA4. We tapped into the native GA4 BigQuery export, specifically for purchase events. This provides the raw, event-level data (revenue and units sold) for every transaction.

The join isn’t a single step; it relies on two keys to connect the datasets:

  • The Ad ID Bridge: To link a GA4 session back to a specific Meta ad, we captured the ad_id via dynamic UTM parameters. By setting your URL parameters to utm_content={{ad.id}}, you create a magic bridge between the click and the session.
  • The Item ID Match: Once the session is linked, we use the Item ID. This must be perfectly aligned so that your Meta product_id and GA4 item_id are identical; otherwise, the model breaks.
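
Here’s a hedged sketch of the GA4 side of that join, assuming the standard GA4 BigQuery export schema (an events_* wildcard table with an items array, and collected_traffic_source.manual_content carrying the utm_content value). The project, dataset, and date range are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="your-gcp-project")  # placeholder project

query = """
SELECT
  collected_traffic_source.manual_content AS ad_id,   -- the Ad ID bridge (utm_content={{ad.id}})
  item.item_id                            AS item_id, -- must match Meta's product_id exactly
  SUM(item.item_revenue)                  AS revenue,
  SUM(item.quantity)                      AS units_sold
FROM `your-gcp-project.analytics_123456789.events_*`,
     UNNEST(items) AS item
WHERE event_name = 'purchase'
  AND _TABLE_SUFFIX BETWEEN '20250101' AND '20250131'   -- placeholder date range
GROUP BY ad_id, item_id
"""
ga4_product_sales = client.query(query).to_dataframe()
```

This purchase-level table can then be joined back onto the Phase One dataset on ad_id and item_id to see what actually sold after each dynamic product ad click.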

Pitfalls & Key Considerations

Joining Meta and GA4 data sounds easy enough, but there were some key blockers to overcome.

Clean Data. The whole model breaks if your Meta ID doesn’t cleanly match your GA4 IDs. You must ensure your product catalogs and your GA4 tagging are perfectly aligned before you start.
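
A simple way to catch misalignment early is to diff the two ID sets before you build anything on top of the join. A minimal sketch, assuming the two DataFrames from the earlier sketches (or any equivalents):

```python
import pandas as pd

def check_id_alignment(meta_df: pd.DataFrame, ga4_df: pd.DataFrame) -> None:
    """Flag mismatches between Meta product_ids and GA4 item_ids before joining."""
    meta_ids = set(meta_df["product_id"].astype(str))
    ga4_ids = set(ga4_df["item_id"].astype(str))

    print(f"{len(meta_ids & ga4_ids)} IDs match across both sources")
    print(f"{len(meta_ids - ga4_ids)} Meta product_ids never appear as GA4 item_ids")
    print(f"{len(ga4_ids - meta_ids)} GA4 item_ids missing from the Meta catalog")
```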

However, our second issue is harder to overcome: attribution issues. The GA4 data will almost always show lower conversion numbers than Meta’s UI.

This is because, in our experience, Meta often “over-credits.” It benefits from longer attribution windows, including view-through conversions, and it gives itself full credit for each conversion it measures (rather than spreading out across multiple channels).

GA4 often “under-credits” channels like Meta. It uses data-driven attribution to try and give credit to multiple touchpoints. However, it is unable to completely follow user journeys, especially those that don’t include clicks to the site. This means GA4 doesn’t know to credit a social ad, even if that ad was the deciding factor in the purchase journey.

Although we’d love to be able to get a 1:1 match from each product purchase back to a specific product interacted with on Meta, neither GA4 nor Meta can achieve this insight easily. However, there’s still value in the relative insights and trends.

Here’s an example:

  • Meta’s UI: Reported our “Luxury Bath – Green” product was our top performer last month, with high volumes of clicks and impressions in our dynamic ads.
  • The Problem: When we joined our GA4 data, we saw no sales for that specific bath last month, at all, from any channel!
  • The Assumption: If we only used ad engagement data, we’d assume this product is wasting spend by generating low-quality traffic.

But, by looking at all items purchased in those GA4 sessions that originated from the “Luxury Bath – Green” product, we discover that many users who clicked the bath went on to convert, just for the white variation instead.

The Insight: The “Luxury Bath” ad wasn’t a failure; it was a highly effective halo product for our client, drawing in aspirational customers who then converted on other products.

The Action: We can confidently commission creator content focusing on the green bath to draw in new users, even though we know they’re likely to buy a different color when it comes to purchase.

Phase Three: Performance-Enhanced Feeds

Once we had this data at our fingertips, the temptation was to use it purely for insights and reporting.

The next level was even better: using this data to create automated supplementary feeds.

It was time to bring back those four product performance segments from our scatter charts.

Using our feed management tools, we pushed the product performance segments into our Meta product feed as new custom labels. This meant we could dynamically build new product sets based on product performance; for example, one rule created a product set where Custom Label 0 equals “Star Performer.”
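
As a rough illustration, here’s how that supplementary feed might be generated from the segmented data. The two-column layout (id plus custom_label_0) reflects Meta’s standard catalog fields, but your feed management tool may expect its own column names, so treat this as a sketch.

```python
import pandas as pd

def build_supplementary_feed(segmented: pd.DataFrame,
                             path: str = "supplementary_feed.csv") -> None:
    """Write a supplementary feed mapping each product to its performance segment.

    `segmented` is the output of the Phase One classification sketch
    (product_id + segment columns); column names here are illustrative.
    """
    feed = pd.DataFrame({
        "id": segmented["product_id"],            # must match the primary feed's id column
        "custom_label_0": segmented["segment"],   # e.g. "Star Performer", "Window Shopper"
    })
    feed.to_csv(path, index=False)
```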

We could then conduct the following product set tests:

  • “Window Shoppers”: (High impressions, low clicks/sales). Feed these into an exclusion set to understand if efficiency improves when we remove them from the feed.
  • “Promising Products”: (High CTR, high CVR, low impressions). Feed these into a scaling set with more budget to understand if demand is hidden.
  • “Star Performers”: (High impressions, high clicks). Feed these into a retargeting set to recapture engaged users with our signature ranges.
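
Once the labels are flowing through the catalog, these sets don’t have to be built by hand in Commerce Manager. Here’s a hedged sketch of creating one programmatically via the Marketing API’s product_sets edge; the filter syntax reflects my understanding of the API and should be verified against the current documentation.

```python
import json
import requests

GRAPH = "https://graph.facebook.com/v19.0"   # use your current API version
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"           # placeholder
CATALOG_ID = "1234567890"                    # placeholder catalog

# Create a product set containing only products labelled "Star Performer"
# in custom_label_0 -- the rule described above, expressed as an API filter.
response = requests.post(
    f"{GRAPH}/{CATALOG_ID}/product_sets",
    data={
        "name": "Star Performers",
        "filter": json.dumps({"custom_label_0": {"eq": "Star Performer"}}),
        "access_token": ACCESS_TOKEN,
    },
    timeout=60,
)
print(response.json())  # returns the new product set id on success
```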

Pitfalls & Key Considerations

The tests above are simply examples of hypotheses, and your mileage may vary. We strongly recommend structured experimentation to understand the impact on overall performance.

Is Your Brand Ready To Break Out Of The ‘Black Box’?

You can partially break out of Meta’s “black box,” and this can be a strategic move for ecommerce brands.

The journey moves from surfacing basic engagement data (Phase One) to joining it with sales data for true, profit-driven insights (Phase Two), and ultimately, to automating your strategy with performance-enhanced feeds (Phase Three).

This is how you move from trusting the algorithm to challenging it with evidence. If you’re a decision-maker wondering where to start, here are the three questions to ask:

  1. “Can you show me which specific products in our catalog are being prioritized by Meta?”
  2. “Are our Meta product_ids and GA4 item_ids identical?”
  3. “Are we capturing the ad.id in our UTM parameters on every single ad?”

If the answers to these questions are “I don’t know,” you’re probably still operating inside the black box. Breaking it open is possible. It just requires the right data, the right technical expertise, and the will to finally see what’s truly driving performance.

Featured Image: Roman Samborskyi/Shutterstock