
Dotlot

Workflow Automation

Dotlot cut catalogue ops costs by 50% through full automation

Cut catalogue ops costs in half. The team is now expanding into two new markets.

50%

Ops cost reduction

<0.2%

Error rate

94%

Time-to-market reduction

8,000+

SKUs automated

The challenge

Dotlot operated across Shopify, Amazon, two wholesale portals, and an internal inventory system. Every SKU update (pricing, availability, descriptions, imagery) required manual entry in each system. With 8,000 active SKUs and a catalogue team of four, the process consumed the equivalent of two full-time roles and still produced errors weekly.

The manual process was slow and structurally fragile. Each sales channel had its own formatting requirements, character limits, image specifications, and category taxonomies. A product title that worked on Shopify needed to be reformatted for Amazon's search algorithm. Wholesale portal A required EAN codes in a specific field that wholesale portal B did not even support. The internal inventory system tracked stock levels and warehouse locations in a schema that bore no resemblance to any of the customer-facing channels.

The four-person catalogue team spent their days copying data between systems. A typical product update (say, a price change on 50 SKUs for a seasonal promotion) required a team member to update the master Google Sheet, then log into Shopify and edit each product individually, then update the Amazon listing through Seller Central, then email the updated pricing matrix to both wholesale partners in their respective template formats, and finally update the internal inventory system. A batch of 50 SKU updates could take a full working day, and the entire process had to be repeated for every type of change: new product launches, description updates, image swaps, stock adjustments, and end-of-life removals.

Errors were inevitable. Price mismatches between channels were the most common and most damaging: a product listed at one price on Amazon and a different price on Shopify would trigger customer complaints and, on Amazon, potential policy violations. Image mismatches, where an older product photo persisted on one channel after being updated on the others, eroded brand consistency. The team caught most errors within a day or two, but some persisted for weeks. The founder estimated that pricing errors alone cost the business several thousand pounds per quarter in refunds, customer service time, and marketplace penalties.

What we built

We built a centralised catalogue automation pipeline on n8n, treating Dotlot's Google Sheets master as the single source of truth. The project started with a one-week audit of all five channels (Shopify, Amazon, wholesale portal A, wholesale portal B, and the internal inventory system), documenting the exact data schema, API capabilities, rate limits, and formatting requirements for each. We mapped every field from the master sheet to its corresponding field in each channel, identifying the transformations needed at every step.

The core pipeline works as follows. When a row in the Google Sheets master is created or modified, an n8n webhook triggers the sync workflow. The system reads the changed row, identifies which fields were updated, and routes the update to the appropriate channel-specific sub-workflows. Each sub-workflow handles the transformations required for its target: truncating titles to Amazon's character limits, reformatting bullet points into Shopify's HTML description field, mapping internal category codes to each wholesale portal's taxonomy, and converting pricing to the correct currency and VAT treatment per channel.
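To make the routing step concrete, here is a minimal sketch of how a channel-specific transform layer can work. The field names, channel keys, VAT handling, and the 200-character Amazon title limit are illustrative assumptions, not Dotlot's actual schema or the exact n8n node configuration.

```javascript
// Illustrative per-channel transforms: each takes a master-sheet row and
// produces a payload shaped for that channel's API. All names are assumed.
const CHANNEL_TRANSFORMS = {
  amazon: (row) => ({
    title: row.title.slice(0, 200),                        // truncate to assumed limit
    price: (row.netPrice * (1 + row.vatRate)).toFixed(2),  // VAT-inclusive price
  }),
  shopify: (row) => ({
    title: row.title,
    // Bullet points reformatted into an HTML description field.
    body_html: "<ul>" + row.bullets.map((b) => `<li>${b}</li>`).join("") + "</ul>",
    price: (row.netPrice * (1 + row.vatRate)).toFixed(2),
  }),
};

// Route one changed row to every channel transform, as the sub-workflows do.
function buildChannelPayloads(row) {
  return Object.fromEntries(
    Object.entries(CHANNEL_TRANSFORMS).map(([ch, fn]) => [ch, fn(row)])
  );
}
```

In the real pipeline, each transform lives in its own n8n sub-workflow so rate limits and retries can be handled per channel.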

Image processing was one of the more complex modules. Dotlot's product images are uploaded to a central Google Drive folder in high resolution. The pipeline automatically generates channel-specific variants: white-background hero images cropped to Amazon's required dimensions, lifestyle images resized for Shopify's gallery format, and compressed thumbnails for the wholesale portals. Images are processed using Sharp via a Node.js function node in n8n, uploaded to each channel's CDN or media library via API, and linked to the correct product record.
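The variant-generation step can be sketched as a table of per-channel resize specs that Sharp then executes. The dimensions and channel names below are placeholder assumptions, not the channels' real requirements; the Sharp call at the end is indicative of the shape of the real processing code.

```javascript
// Assumed per-channel output specs; each entry becomes one Sharp resize job.
const IMAGE_SPECS = [
  { channel: "amazon",    width: 2000, height: 2000, fit: "contain", background: "#ffffff" },
  { channel: "shopify",   width: 1600, height: 2000, fit: "cover" },
  { channel: "wholesale", width: 400,  height: 400,  fit: "inside" },
];

// Derive one output filename plus resize spec per channel for a source image.
function planVariants(sourceName) {
  const base = sourceName.replace(/\.[^.]+$/, "");
  return IMAGE_SPECS.map((spec) => ({
    ...spec,
    output: `${base}_${spec.channel}_${spec.width}x${spec.height}.jpg`,
  }));
}

// Inside the n8n function node, each plan entry is executed roughly as:
//   sharp(sourcePath).resize({ width, height, fit, background }).jpeg().toFile(output)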

Variant generation handles the combinatorial complexity of Dotlot's product line. Many products come in multiple sizes, colours, and bundle configurations. The pipeline reads variant definitions from the master sheet and generates the correct child SKUs, option sets, and inventory records for each channel. Shopify variants, Amazon parent-child relationships, and wholesale portal size matrices are all generated from the same source row.
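A sketch of the variant expansion, assuming the master row carries size and colour lists (the SKU naming scheme and field names are hypothetical): one source row fans out into every child combination, which the channel sub-workflows then shape into Shopify variants, Amazon parent-child listings, or portal size matrices.

```javascript
// Expand one master row into all child SKU combinations.
// row.baseSku, row.sizes, and row.colours are assumed field names.
function expandVariants(row) {
  const variants = [];
  for (const size of row.sizes) {
    for (const colour of row.colours) {
      variants.push({
        sku: `${row.baseSku}-${colour.toUpperCase()}-${size}`,
        options: { size, colour },
        parent: row.baseSku, // source of the Amazon parent-child relationship
      });
    }
  }
  return variants;
}
```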

A validation layer sits between the transformation stage and the publish stage. Before any data reaches a live storefront, the validator checks for required fields, price consistency across channels, image availability, stock-level sanity (no negative values, no stock listed for discontinued items), and EAN/barcode format correctness. Errors are flagged in a dedicated Slack channel with the specific row, field, and issue, giving the team a chance to correct the source data before it propagates. Only clean records pass through to the channel APIs.
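A minimal sketch of that validator, with an illustrative rule set (the field names, EAN-13 check, and rule wording are assumptions): each failing check produces one structured message identifying the row, field, and issue, which is what gets posted to the Slack channel.

```javascript
// Assumed pre-publish rules; each maps a field to a predicate and an issue label.
const RULES = [
  { field: "title", check: (r) => Boolean(r.title), issue: "missing title" },
  { field: "stock", check: (r) => r.stock >= 0, issue: "negative stock" },
  { field: "stock", check: (r) => !(r.discontinued && r.stock > 0),
    issue: "stock listed for discontinued item" },
  { field: "ean",   check: (r) => /^\d{13}$/.test(r.ean), issue: "invalid EAN-13" },
];

// Returns [] for a clean record; otherwise one entry per violation.
// Only records that return [] proceed to the channel APIs.
function validateRecord(row) {
  return RULES.filter((rule) => !rule.check(row))
              .map((rule) => ({ row: row.id, field: rule.field, issue: rule.issue }));
}
```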

Results

The four-person catalogue team was fully reassigned to merchandising and growth within six weeks of deployment. The transition was gradual: during the first two weeks, the team ran the automated pipeline in parallel with their manual process to validate accuracy. By week three, confidence was high enough to cut over entirely. Two team members moved into merchandising strategy, focusing on product selection, pricing optimisation, and supplier negotiations. The other two shifted to growth initiatives, including launching Dotlot's first marketplace presence on a new European channel, which the automation pipeline supported with minimal additional configuration.

Catalogue ops costs fell by over 50% year-on-year. The savings came from labour reallocation (the two FTE-equivalents previously consumed by manual data entry), elimination of error-related costs (refunds, marketplace penalties, customer service hours), and faster time-to-market, which let the team capitalise on trending products and seasonal opportunities they previously could not launch quickly enough to capture.

Error rates on live product data dropped to under 0.2%. Before the automation pipeline, the team estimated that 3 to 5% of live product records contained at least one discrepancy across channels at any given time. The validation layer catches virtually all data issues before they reach any storefront. The few errors that do make it through are typically edge cases involving newly onboarded wholesale partners with undocumented field requirements, and these are caught within hours by the monitoring system.

Time-to-market for new SKUs fell from 3 days to under 4 hours. Previously, launching a new product meant creating the master record, manually entering it into each of the five channels, uploading and formatting images for each, and verifying that everything looked correct across all storefronts, a process that took the better part of three working days. Now, the team creates a single row in the master sheet, uploads images to the Drive folder, and the pipeline handles everything else. A new SKU is live across all five channels within four hours, including image processing and validation. This speed has become a competitive advantage: Dotlot can now react to trending products and restock fast-sellers significantly faster than before.

Four people doing nothing but copying data between systems. That's just overhead. Now those same people are running our expansion into two new markets.

Sophie, Dotlot

Want results like these?

Tell us about your business. We’ll give you an honest answer on whether AI can help, and exactly how.

