PartsLink to ACES Conversion:
How an Exception Queue Prevents Wrong Fitments
If you’re converting fitment data to ACES and your plan is “we’ll map everything,” you’re about to learn an expensive lesson.
Not because your team isn’t smart.
Because the real world isn’t deterministic.
There will always be a slice of rows where:
the note has partial clues (2.0L but multiple engines exist)
the model name is almost right (marketing name vs VCDB name)
the variables collide (LT means “Left” in one column and “Lariat” in another)
the same vehicle has multiple valid configurations and the source data didn’t specify which one
Most teams handle that slice the wrong way:
They guess.
And the scariest part is that you usually don’t notice the guess. It “publishes.” It “matches.” It even sells.
Until you scale.
Then the returns show up as “doesn’t fit,” the marketplace starts suppressing, and everyone is back in spreadsheet hell trying to reverse-engineer what happened.
The fix is simple:
Build an exception queue.
Not as a nice-to-have. As the core safety system of your conversion pipeline.
Why an exception queue beats “manual cleanup”
Manual cleanup is a dead end because it’s unstructured.
You fix the same pattern 50 times because you didn’t capture the rule.
You don’t know who touched what.
You don’t know what’s still risky.
And you can’t measure progress.
An exception queue is different.
It’s a controlled gate that says:
“If the system cannot resolve this row deterministically, it does not get to become ACES.”
That’s it.
No silent wrong fitments.
No invented EngineBaseIDs.
No “close enough.”
Just: Pass / Fail / Review - with a reason code.
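That gate can be a few lines of code. Here’s a minimal sketch, assuming hypothetical validator functions that each return a reason code (or nothing) for a row - the names and row shape are illustrative, not a real API:

```python
# Minimal sketch of the Pass / Fail / Review gate.
# Validators run in order; the first deterministic failure wins
# and the row goes to the queue with that reason code.

def gate(row, validators):
    """Return ("PASS", None) or ("REVIEW", reason_code) for one source row."""
    for check in validators:
        reason = check(row)  # a reason code string, or None if the check passes
        if reason is not None:
            return ("REVIEW", reason)
    return ("PASS", None)

# Example validator: a row with no Year can never resolve to a BaseVehicle.
def require_year(row):
    return None if row.get("Year") else "INVALID_YMM"

status, reason = gate({"Make": "Ford"}, [require_year])
# status == "REVIEW", reason == "INVALID_YMM"
```

The point of the shape: no validator ever “fixes” the row. It either passes or names the reason it can’t.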
The mindset shift (this is the part that matters)
You’re not trying to automate 100%.
You’re trying to automate the safe 97% and isolate the risky 3% so it doesn’t poison the rest of the catalog.
That 3% is where most returns live.
What goes into the exception queue
Your exception queue should catch anything that fails a deterministic rule.
Here are the most common ones I see in real conversions:
1) VCDB validation fails (Year/Make/Model is not real)
Make/model spelling mismatch
Trim stuck in the model field
“Classic” / “New Body Style” nonsense
Marketing model name that VCDB doesn’t recognize
Outcome: route to queue with reason code: INVALID_YMM
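A sketch of that check, assuming you’ve loaded your VCDB subscription into an in-memory set of (year, make, model) tuples - the loading and normalization shown here are illustrative:

```python
# Sketch: validate Year/Make/Model against a local VCDB extract.
# vcdb_ymm is assumed to be a set of normalized (year, make, model) tuples.

def normalize(s):
    """Uppercase and collapse whitespace so 'ford ' matches 'FORD'."""
    return " ".join(str(s).strip().upper().split())

def check_ymm(row, vcdb_ymm):
    key = (int(row["Year"]), normalize(row["Make"]), normalize(row["Model"]))
    return None if key in vcdb_ymm else "INVALID_YMM"

vcdb_ymm = {(2019, "FORD", "F-150")}
check_ymm({"Year": 2019, "Make": "ford", "Model": "F-150"}, vcdb_ymm)  # returns None (passes)
check_ymm({"Year": 2019, "Make": "Ford", "Model": "F150"}, vcdb_ymm)   # returns "INVALID_YMM"
```

Note that “F150” fails on purpose. The fix isn’t fuzzier matching at the gate - it’s an explicit alias rule added after a human reviews the exception.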
2) Engine/config is ambiguous
Your note says “2.0L” and that’s it.
VCDB says that BaseVehicle has multiple 2.0L engines, or multiple configs remain valid even after filtering.
Outcome: queue with AMBIGUOUS_ENGINE_CONFIG
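Here’s what “still ambiguous after filtering” looks like in code - a sketch where the candidate list stands in for whatever VCDB engine configs your lookup returns for the BaseVehicle (field names are illustrative):

```python
# Sketch: resolve an engine note like "2.0L" against candidate VCDB
# engine configs for a BaseVehicle. One survivor = deterministic match.
# More than one = queue. Never pick a winner by guessing.

def resolve_engine(note, candidates):
    """candidates: dicts with at least 'Liter' and 'EngineConfigID' keys."""
    matches = [c for c in candidates if c["Liter"] in note]
    if len(matches) == 1:
        return ("PASS", matches[0]["EngineConfigID"])
    return ("REVIEW", "AMBIGUOUS_ENGINE_CONFIG")

candidates = [
    {"EngineConfigID": 111, "Liter": "2.0", "Aspiration": "Turbocharged"},
    {"EngineConfigID": 222, "Liter": "2.0", "Aspiration": "Naturally Aspirated"},
]
resolve_engine("2.0L", candidates)  # two 2.0L engines survive -> ("REVIEW", "AMBIGUOUS_ENGINE_CONFIG")
```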
3) Conflicting location tokens
“Left” and “Right” both appear due to bad parsing or a messy source string.
Or “Front” + “Rear” show up in the same row.
Outcome: queue with CONFLICTING_POSITION
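This one is cheap to detect once your parser emits tokens. A sketch, with illustrative token groups you’d extend to match your parser’s vocabulary:

```python
# Sketch: flag rows where mutually exclusive position tokens co-occur.
# Each set below is a group where at most one token may appear per row.

EXCLUSIVE_GROUPS = [{"LEFT", "RIGHT"}, {"FRONT", "REAR"}, {"UPPER", "LOWER"}]

def check_positions(tokens):
    found = {t.upper() for t in tokens}
    for group in EXCLUSIVE_GROUPS:
        if len(found & group) > 1:
            return "CONFLICTING_POSITION"
    return None

check_positions(["Left", "Front"])          # returns None (coherent)
check_positions(["Left", "Right", "Rear"])  # returns "CONFLICTING_POSITION"
```

Note: a part that legitimately fits both sides usually shows up as a qualifier like “Left or Right” in the source, which your parser should emit as its own token - not as two conflicting ones.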
4) Missing critical qualifiers
If a part family absolutely requires an attribute stack (bed length, brake package, connector type, etc.) and the row doesn’t include it - you should not publish it.
This is how you stop the “technically compatible” but practically wrong applications.
Outcome: queue with MISSING_REQUIRED_QUALIFIER
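The rule is just a per-family required-attribute table. A sketch - the family names and required keys here are examples, not a standard list:

```python
# Sketch: per-part-family required attribute stacks.
# A row missing any required key for its family cannot publish.

REQUIRED = {
    "Truck Bed Liner": {"BedLength"},
    "Brake Pad Set": {"BrakePackage"},
    "Trailer Wiring Harness": {"ConnectorType"},
}

def check_required(part_family, attrs):
    missing = REQUIRED.get(part_family, set()) - set(attrs)
    return "MISSING_REQUIRED_QUALIFIER" if missing else None

check_required("Truck Bed Liner", {"BedLength": "6.5 ft"})  # returns None
check_required("Truck Bed Liner", {})                        # returns "MISSING_REQUIRED_QUALIFIER"
```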
5) Part terminology unmapped
If your Pname cannot map cleanly to a valid PartTerminologyName, that’s not a “close enough” moment.
That’s a “define the dictionary” moment.
Outcome: queue with UNMAPPED_TERMINOLOGY
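Sketched, the dictionary is an explicit lookup with no fuzzy fallback - that’s deliberate. The sample entries are made up for illustration:

```python
# Sketch: map a raw Pname to a PartTerminologyName via an explicit dictionary.
# No fuzzy matching on purpose: an unmapped name goes to the queue,
# and the resolution becomes a new dictionary entry.

TERMINOLOGY = {
    "BRK PAD KIT": "Brake Pad Set",
    "WTR PUMP": "Water Pump",
}

def map_terminology(pname):
    mapped = TERMINOLOGY.get(pname.strip().upper())
    if mapped is None:
        return ("REVIEW", "UNMAPPED_TERMINOLOGY")
    return ("PASS", mapped)

map_terminology("brk pad kit")  # ("PASS", "Brake Pad Set")
map_terminology("Pad Kit")      # ("REVIEW", "UNMAPPED_TERMINOLOGY")
```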
The queue isn’t a trash can - it’s a work system
If your exception queue is just “a list of bad rows,” it becomes a graveyard.
A real queue has:
Reason code (why it failed)
Candidate options (what VCDB configs are possible)
Recommended next action (rule needed vs human decision)
Owner (who resolves it)
Status (new / in progress / resolved / rule created)
Rule link (if a rule was added so it won’t happen again)
This is the loop:
Exception → Resolution → Rule → Re-run → Queue shrinks
That’s how you scale without expanding headcount forever.
What the exception queue table should look like (simple schema)
You don’t need fancy tools to start. A database table or even a structured spreadsheet works.
Minimum columns:
SourceRowID
Source (vendor/file)
Year / Make / Model
PartTerminologyName (or raw Pname)
VARIABLES / Note text
Parsed tokens (what your parser extracted)
Candidate VCDB IDs (list)
ReasonCode
NextAction
Owner
Status
CreatedDate / ResolvedDate
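If you want those columns in code rather than a spreadsheet, here’s one way to express them as a record you can back with any store - field names mirror the list above, and the types are illustrative:

```python
# Sketch: the minimum exception-queue columns as a dataclass.
# Back it with SQLite, Postgres, or even a CSV exporter - the schema
# is the point, not the storage.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExceptionRow:
    source_row_id: str
    source: str                      # vendor / file
    year: int
    make: str
    model: str
    part_terminology_name: str       # or raw Pname if unmapped
    note_text: str                   # VARIABLES / note text
    parsed_tokens: list
    candidate_vcdb_ids: list
    reason_code: str
    next_action: str = "triage"      # rule needed vs human decision
    owner: Optional[str] = None
    status: str = "new"              # new / in progress / resolved / rule created
    rule_link: Optional[str] = None
    created_date: Optional[str] = None
    resolved_date: Optional[str] = None
```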
If you do just that, you’ve already beaten 90% of conversion projects.
The KPI nobody tracks (but should)
If you want to run this like an operator, track:
Queue size by reason code (what’s actually breaking you)
Aging (how long rows sit unresolved)
Rule yield (how many exceptions became rules)
Repeat rate (are the same failures coming back every refresh?)
This is how you prove progress without guessing.
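All four KPIs fall out of the queue table with one pass. A sketch, assuming exception rows shaped like dicts with reason_code, created/resolved dates, status, and a hypothetical seen_before flag set by your de-dup logic:

```python
# Sketch: compute the four queue KPIs from a list of exception rows.
# Row shape is illustrative; adapt the keys to your schema.

from collections import Counter
from datetime import date

def kpis(rows, today):
    by_reason = Counter(r["reason_code"] for r in rows)
    open_rows = [r for r in rows if r["resolved_date"] is None]
    aging_days = [(today - r["created_date"]).days for r in open_rows]
    rule_yield = sum(1 for r in rows if r["status"] == "rule created") / max(len(rows), 1)
    repeat_rate = sum(1 for r in rows if r.get("seen_before")) / max(len(rows), 1)
    return {
        "queue_by_reason": dict(by_reason),   # what's actually breaking you
        "max_aging_days": max(aging_days, default=0),
        "rule_yield": rule_yield,             # exceptions that became rules
        "repeat_rate": repeat_rate,           # same failures back each refresh
    }
```

Run it after every refresh. If repeat_rate isn’t falling, your resolutions aren’t becoming rules.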
What happens when you don’t do this
Without an exception queue, bad rows don’t fail - they leak.
They become:
wrong ACES output
wrong marketplace fitment display
wrong orders
higher returns
suppressed listings
and a permanent loss of buyer trust
The worst part is you end up “fixing fitment” forever because you never built the gate.
Final thought
ACES conversion isn’t hard because it’s complicated.
It’s hard because teams try to force uncertainty into certainty.
The exception queue is the mechanism that keeps your catalog honest.
Automate what is deterministic.
Queue what is ambiguous.
Turn patterns into rules.
Repeat.
That’s how you convert fitment at scale without guessing.
If you’re converting a dataset to ACES and you want to avoid silent wrong fitments, I can help you stand up the validation gates + exception queue workflow (including reason codes, rule order, and VCDB-filter logic).
Send me:
20-50 real rows (including VARIABLES + notes), and
an example of your ideal ACES output
…and I’ll tell you exactly what can be automated cleanly, what needs VCDB filtering, and where the risk lives before you scale it.