Sync Daily Work Order Status

  • Procedure Name: uspSyncDailyWorkOrderStatus
  • Database: SooperWizer
  • Schema: Reports
  • Business Purpose: The procedure uspSyncDailyWorkOrderStatus is responsible for aggregating and synchronizing per-piece garment production event data into a denormalized daily reporting table [Reports].[DailyWorkOrderStatus]. It is designed to be executed on a scheduled basis (e.g., daily) and performs a full upsert — updating existing records and inserting new ones — ensuring the reporting table always reflects the latest production status for every combination of Work Order, Style, Purchase Order, Color, Size, Line, and Status Date.

  • Functional Description:

  1. Drops and rebuilds two temporary staging tables (#TempBundleStatus, #DailyStatusAgg) to ensure a clean run.
  2. Evaluates piece-level bitmask flags from [Reports].[Piece] to extract realized production event dates across 11 manufacturing milestones (Card Initialized, Cutting Out, Induction, End-Line Quality Pass/Rejection, Sewing Last Operation, Send Down, Finishing First Operation, Finishing Quality Pass/Rejection, Finishing Last Operation); each piece's CreatedDate is carried through unconditionally as its Synced date.
  3. UNPIVOTs piece event dates into a tall event log and aggregates counts per BundleID and StatusDate into #TempBundleStatus.
  4. Enriches bundle-level data by joining to Work Order, Style, Customer, Purchase Order, Color, Size, Line, Branch, and Factory dimensions — then rolls up to a fully denormalized daily summary into #DailyStatusAgg.
  5. UPDATEs existing records in [Reports].[DailyWorkOrderStatus] where the composite key (WorkOrderCode, StyleName, BuyMonth, PurchaseOrderCode, ColorCode, SizeName, LineID, StatusDate) matches.
  6. INSERTs new records into [Reports].[DailyWorkOrderStatus] for any composite key combination not already present.
  7. Returns a success/failure message with a status code.
  • Return Values / Effects:
    • Populates [Reports].[DailyWorkOrderStatus] with up-to-date daily production quantities across all manufacturing pipeline stages.
    • Returns a result set with two columns: Code (0 for success, SQL error number on failure) and Message (descriptive status text).
    • On failure, the transaction is fully rolled back and the error number and message are returned in the result set.

This procedure takes no input parameters.

  • Base Tables:
    • [Reports].[DailyWorkOrderStatus] (UPDATE, INSERT)
    • [Reports].[Piece], [Cutting].[Bundle], [Orders].[WorkOrderStyle], [Orders].[WorkOrder], [Orders].[Style], [Essentials].[Customer], [Orders].[PurchaseOrder], [Essentials].[Color], [Essentials].[Size], [Essentials].[Line], [Essentials].[Branch], [Essentials].[Factory] (SELECT)
  • Temporary Objects:
    • #TempBundleStatus - Bundle-level aggregation of piece event counts per StatusDate. Contains BundleID, StatusDate, and one quantity column per event type (SyncedQty, CardInitializedQty, CuttingOutQty, InductionQty, EndLineQualityPassedQty, EndLineQualityRejectedQty, SewingLastOperationQty, SendDownQty, FinishingFirstOperationQty, FinishingQualityPassQty, FinishingQualityRejectionQty, FinishingLastOperationQty).
    • #DailyStatusAgg - Fully enriched and denormalized aggregation joined with all dimensional tables. Contains the complete set of columns that map 1-to-1 with [Reports].[DailyWorkOrderStatus], grouped by WorkOrderCode, StyleName, BuyMonth, PurchaseOrderCode, ColorCode, SizeName, LineID, and StatusDate.
  • Referenced Objects:
CREATE OR ALTER PROCEDURE [Reports].[uspSyncDailyWorkOrderStatus]
AS
BEGIN
SET NOCOUNT ON;
SET XACT_ABORT ON;
BEGIN TRY
BEGIN TRANSACTION;
DROP TABLE IF EXISTS #TempBundleStatus;
DROP TABLE IF EXISTS #DailyStatusAgg;
WITH PieceEvents AS (
SELECT
BundleID,
PieceID,
CreatedDate AS PieceSyncedDate,
CASE WHEN PieceStatus & 2 = 2 THEN CardInitializedDate ELSE NULL END AS CardInitializedDate,
CASE WHEN PieceStatus & 4 = 4 THEN CuttingOutToSewingDate ELSE NULL END AS CuttingOutDate,
CASE WHEN PieceStatus & 4096 = 4096 THEN InductionDate ELSE NULL END AS InductionDate,
CASE WHEN PieceStatus & 8192 = 8192 THEN EndLineQualityPassDate ELSE NULL END AS EndLineQualityPassDate,
CASE WHEN PieceStatus & 16384 = 16384 THEN EndLineQualityRejectionDate ELSE NULL END AS EndLineQualityRejectionDate,
CASE WHEN PieceStatus & 32768 = 32768 THEN SewingLastOperationDate ELSE NULL END AS SewingLastOperationDate,
CASE WHEN PieceStatus & 65536 = 65536 THEN SendDownDate ELSE NULL END AS SendDownDate,
CASE WHEN PieceStatus & 4194304 = 4194304 THEN FinishingFirstOperationDate ELSE NULL END AS FinishingFirstOperationDate,
CASE WHEN PieceStatus & 8388608 = 8388608 THEN FinishingQualityPassDate ELSE NULL END AS FinishingQualityPassDate,
CASE WHEN PieceStatus & 16777216 = 16777216 THEN FinishingQualityRejectionDate ELSE NULL END AS FinishingQualityRejectionDate,
CASE WHEN PieceStatus & 33554432 = 33554432 THEN FinishingLastOperationDate ELSE NULL END AS FinishingLastOperationDate
FROM
[Reports].[Piece]
),
-- Explode each piece into event rows (UNPIVOT style)
EventLog AS (
SELECT BundleID, EventDate, EventType
FROM (
SELECT
BundleID, PieceSyncedDate, CardInitializedDate, CuttingOutDate, InductionDate, EndLineQualityPassDate,
EndLineQualityRejectionDate, SewingLastOperationDate, SendDownDate, FinishingFirstOperationDate,
FinishingQualityPassDate, FinishingQualityRejectionDate, FinishingLastOperationDate
FROM PieceEvents
) p
UNPIVOT (
EventDate FOR EventType IN (
PieceSyncedDate, CardInitializedDate, CuttingOutDate, InductionDate, EndLineQualityPassDate,
EndLineQualityRejectionDate, SewingLastOperationDate, SendDownDate, FinishingFirstOperationDate,
FinishingQualityPassDate, FinishingQualityRejectionDate, FinishingLastOperationDate
)
) AS unpvt
WHERE EventDate IS NOT NULL
)
-- Final Aggregation
SELECT
BundleID, EventDate AS StatusDate,
COUNT(CASE WHEN EventType = 'PieceSyncedDate' THEN 1 END) AS SyncedQty,
COUNT(CASE WHEN EventType = 'CardInitializedDate' THEN 1 END) AS CardInitializedQty,
COUNT(CASE WHEN EventType = 'CuttingOutDate' THEN 1 END) AS CuttingOutQty,
COUNT(CASE WHEN EventType = 'InductionDate' THEN 1 END) AS InductionQty,
COUNT(CASE WHEN EventType = 'EndLineQualityPassDate' THEN 1 END) AS EndLineQualityPassedQty,
COUNT(CASE WHEN EventType = 'EndLineQualityRejectionDate' THEN 1 END) AS EndLineQualityRejectedQty,
COUNT(CASE WHEN EventType = 'SewingLastOperationDate' THEN 1 END) AS SewingLastOperationQty,
COUNT(CASE WHEN EventType = 'SendDownDate' THEN 1 END) AS SendDownQty,
COUNT(CASE WHEN EventType = 'FinishingFirstOperationDate' THEN 1 END) AS FinishingFirstOperationQty,
COUNT(CASE WHEN EventType = 'FinishingQualityPassDate' THEN 1 END) AS FinishingQualityPassQty,
COUNT(CASE WHEN EventType = 'FinishingQualityRejectionDate' THEN 1 END) AS FinishingQualityRejectionQty,
COUNT(CASE WHEN EventType = 'FinishingLastOperationDate' THEN 1 END) AS FinishingLastOperationQty
INTO #TempBundleStatus
FROM EventLog
GROUP BY BundleID, EventDate;
SELECT
wo.WorkOrderCode, cs.CustomerCode, cs.CustomerName, wo.BuyMonth, st.StyleName, po.PurchaseOrderCode, c.ColorCode, c.ColorName,
s.SizeName, f.FactoryCode, f.FactoryName, br.BranchCode, br.BranchName, l.LineID, l.LineCode, l.LineName, StatusDate,
SUM(SyncedQty) AS SyncedQuantity, SUM(CardInitializedQty) AS CardInitializedQuantity, SUM(CuttingOutQty) AS CuttingOutQuantity,
SUM(InductionQty) AS InductionQuantity, SUM(EndLineQualityPassedQty) AS EndLineQualityPassedQuantity,
SUM(EndLineQualityRejectedQty) AS EndLineQualityRejectedQuantity, SUM(SewingLastOperationQty) AS SewingLastOperationQuantity,
SUM(SendDownQty) AS SendDownQuantity, SUM(FinishingFirstOperationQty) AS FinishingFirstOperationQuantity,
SUM(FinishingQualityPassQty) AS FinishingQualityPassQuantity, SUM(FinishingQualityRejectionQty) AS FinishingQualityRejectionQuantity,
SUM(FinishingLastOperationQty) AS FinishingLastOperationQuantity
INTO #DailyStatusAgg
FROM #TempBundleStatus tbs
JOIN [Cutting].[Bundle] b ON b.BundleID = tbs.BundleID
JOIN [Orders].[WorkOrderStyle] ws ON ws.WorkOrderStyleID = b.WorkOrderStyleID
JOIN [Orders].[WorkOrder] wo ON wo.WorkOrderID = ws.WorkOrderID
JOIN [Orders].[Style] st ON st.StyleID = ws.StyleID
LEFT JOIN [Essentials].[Customer] cs ON cs.CustomerID = wo.CustomerID
JOIN [Orders].[PurchaseOrder] po ON po.PurchaseOrderID = b.PurchaseOrderID
JOIN [Essentials].[Color] c ON c.ColorID = b.ColorID
JOIN [Essentials].[Size] s ON s.SizeID = b.SizeID
JOIN [Essentials].[Line] l ON l.LineID = b.LineID
JOIN [Essentials].[Branch] br ON br.BranchID = l.BranchID
JOIN [Essentials].[Factory] f ON f.FactoryID = br.FactoryID
GROUP BY
wo.WorkOrderCode, cs.CustomerCode, cs.CustomerName, wo.BuyMonth, st.StyleName, po.PurchaseOrderCode, c.ColorCode, c.ColorName,
s.SizeName, f.FactoryCode, f.FactoryName, br.BranchCode, br.BranchName, l.LineID, l.LineCode, l.LineName, StatusDate;
UPDATE D
SET
D.SyncedQuantity = A.SyncedQuantity,
D.CardInitializedQuantity = A.CardInitializedQuantity,
D.CuttingOutQuantity = A.CuttingOutQuantity,
D.InductionQuantity = A.InductionQuantity,
D.EndLineQualityPassedQuantity = A.EndLineQualityPassedQuantity,
D.EndLineQualityRejectedQuantity = A.EndLineQualityRejectedQuantity,
D.SewingLastOperationQuantity = A.SewingLastOperationQuantity,
D.SendDownQuantity = A.SendDownQuantity,
D.FinishingFirstOperationQuantity = A.FinishingFirstOperationQuantity,
D.FinishingQualityPassQuantity = A.FinishingQualityPassQuantity,
D.FinishingQualityRejectionQuantity = A.FinishingQualityRejectionQuantity,
D.FinishingLastOperationQuantity = A.FinishingLastOperationQuantity,
D.UpdatedAt = GETUTCDATE()
FROM [Reports].[DailyWorkOrderStatus] D
JOIN #DailyStatusAgg A
ON D.WorkOrderCode = A.WorkOrderCode
AND D.StyleName = A.StyleName
AND D.BuyMonth = A.BuyMonth
AND D.PurchaseOrderCode = A.PurchaseOrderCode
AND D.ColorCode = A.ColorCode
AND D.SizeName = A.SizeName
AND D.LineID = A.LineID
AND D.StatusDate = A.StatusDate;
INSERT INTO [Reports].[DailyWorkOrderStatus] (
WorkOrderCode, CustomerCode, CustomerName, BuyMonth, StyleName, PurchaseOrderCode, ColorCode, ColorName, SizeName,
FactoryCode, FactoryName, BranchCode, BranchName, LineID, LineCode, LineName, StatusDate, SyncedQuantity, CardInitializedQuantity,
CuttingOutQuantity, InductionQuantity, EndLineQualityPassedQuantity, EndLineQualityRejectedQuantity, SewingLastOperationQuantity,
SendDownQuantity, FinishingFirstOperationQuantity, FinishingQualityPassQuantity, FinishingQualityRejectionQuantity,
FinishingLastOperationQuantity
)
SELECT
A.WorkOrderCode, A.CustomerCode, A.CustomerName, A.BuyMonth, A.StyleName, A.PurchaseOrderCode, A.ColorCode, A.ColorName, A.SizeName,
A.FactoryCode, A.FactoryName, A.BranchCode, A.BranchName, A.LineID, A.LineCode, A.LineName, A.StatusDate, A.SyncedQuantity,
A.CardInitializedQuantity, A.CuttingOutQuantity, A.InductionQuantity, A.EndLineQualityPassedQuantity, A.EndLineQualityRejectedQuantity,
A.SewingLastOperationQuantity, A.SendDownQuantity, A.FinishingFirstOperationQuantity, A.FinishingQualityPassQuantity,
A.FinishingQualityRejectionQuantity, A.FinishingLastOperationQuantity
FROM #DailyStatusAgg A
WHERE NOT EXISTS (
SELECT 1
FROM [Reports].[DailyWorkOrderStatus] D
WHERE D.WorkOrderCode = A.WorkOrderCode
AND D.StyleName = A.StyleName
AND D.BuyMonth = A.BuyMonth
AND D.PurchaseOrderCode = A.PurchaseOrderCode
AND D.ColorCode = A.ColorCode
AND D.SizeName = A.SizeName
AND D.LineID = A.LineID
AND D.StatusDate = A.StatusDate
);
COMMIT TRANSACTION;
SELECT 0 AS Code, 'Daily WorkOrder Status Synced Successfully !.' AS Message;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
SELECT ERROR_NUMBER() AS Code, 'Failed to sync Daily WorkOrder Status Data. Error: ' + ERROR_MESSAGE() AS Message;
END CATCH
END
GO
  1. Cleanup Temporary Tables: Before any processing begins, both temporary tables (#TempBundleStatus and #DailyStatusAgg) are dropped if they already exist.

  2. CTE - PieceEvents (Bitmask Evaluation): A Common Table Expression (CTE) named PieceEvents reads all rows from [Reports].[Piece] and evaluates the PieceStatus bitmask integer column. Each bit corresponds to a specific production milestone. If a given bit is set, the corresponding date field is returned; otherwise NULL is returned. This ensures that only dates for stages the piece has actually reached are carried forward.
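The bitmask evaluation can be sketched outside T-SQL. The Python snippet below is a conceptual illustration only: the flag values are taken from the procedure's CASE expressions, while the helper function and sample row are hypothetical.

```python
# Bit flags mirroring the procedure's PieceStatus CASE expressions.
PIECE_FLAGS = {
    "CardInitializedDate": 2,
    "CuttingOutDate": 4,
    "InductionDate": 4096,
    "EndLineQualityPassDate": 8192,
    "EndLineQualityRejectionDate": 16384,
    "SewingLastOperationDate": 32768,
    "SendDownDate": 65536,
    "FinishingFirstOperationDate": 4194304,
    "FinishingQualityPassDate": 8388608,
    "FinishingQualityRejectionDate": 16777216,
    "FinishingLastOperationDate": 33554432,
}

def realized_dates(piece_status: int, dates: dict) -> dict:
    """Keep a date only when its milestone bit is set, mirroring
    CASE WHEN PieceStatus & flag = flag THEN date ELSE NULL END."""
    return {
        col: (dates.get(col) if piece_status & flag == flag else None)
        for col, flag in PIECE_FLAGS.items()
    }

# A piece that is card-initialized (2) and inducted (4096) but nothing else:
row = realized_dates(2 | 4096, {
    "CardInitializedDate": "2024-05-01",
    "InductionDate": "2024-05-03",
    "SendDownDate": "2024-05-09",  # bit not set, so the date is suppressed
})
```

Only dates whose corresponding bit is set survive; stale dates left over from other columns never reach the event log.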

  3. CTE - EventLog (UNPIVOT): The EventLog CTE unpivots the PieceEvents output, converting the wide columnar date format into a tall row-per-event structure with three columns: BundleID, EventDate, and EventType. Rows where EventDate IS NULL are excluded, so only realized events are present.
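The wide-to-tall reshaping can be sketched as follows. This is a conceptual stand-in for the UNPIVOT, not the procedure itself; the function name and sample bundle are made up.

```python
# One wide row of event-date columns becomes one (BundleID, EventDate,
# EventType) row per non-NULL date, matching the EventLog CTE's
# WHERE EventDate IS NOT NULL filter.
def unpivot_events(bundle_id, event_dates: dict) -> list:
    return [
        {"BundleID": bundle_id, "EventDate": d, "EventType": col}
        for col, d in event_dates.items()
        if d is not None
    ]

rows = unpivot_events(101, {
    "PieceSyncedDate": "2024-05-01",
    "CardInitializedDate": "2024-05-01",
    "CuttingOutDate": None,  # milestone not reached, so no event row
})
```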

  4. Aggregate into #TempBundleStatus: The EventLog CTE result is grouped by BundleID and EventDate (cast as StatusDate). A conditional COUNT is applied for each EventType to produce one quantity column per event. The result is inserted into the temporary table #TempBundleStatus.
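The conditional-count pattern behaves like a per-group tally. A minimal Python sketch (illustrative data, not the procedure's):

```python
from collections import Counter, defaultdict

# Group events by (BundleID, EventDate) and tally each EventType into its
# own counter, mirroring COUNT(CASE WHEN EventType = '...' THEN 1 END)
# per quantity column in #TempBundleStatus.
def aggregate(events):
    grouped = defaultdict(Counter)
    for e in events:
        grouped[(e["BundleID"], e["EventDate"])][e["EventType"]] += 1
    return grouped

g = aggregate([
    {"BundleID": 101, "EventDate": "2024-05-01", "EventType": "CardInitializedDate"},
    {"BundleID": 101, "EventDate": "2024-05-01", "EventType": "CardInitializedDate"},
    {"BundleID": 101, "EventDate": "2024-05-01", "EventType": "InductionDate"},
])
```

As with COUNT(CASE WHEN …), an event type with no matching rows tallies as 0 rather than NULL.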

  5. Enrich and Aggregate into #DailyStatusAgg: #TempBundleStatus is joined to all dimensional tables (Bundle, WorkOrder, Style, Customer, PurchaseOrder, Color, Size, Line, Branch, Factory) to produce a fully denormalized dataset. The quantities are summed and grouped by the natural business key: WorkOrderCode, StyleName, BuyMonth, PurchaseOrderCode, ColorCode, SizeName, LineID, and StatusDate. The result is inserted into #DailyStatusAgg. Note: The join to [Essentials].[Customer] is a LEFT JOIN, allowing work orders without a matched customer record to still appear in the output with NULL customer fields.
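The effect of the LEFT JOIN on customer data can be sketched in miniature. The lookup table and work orders below are invented for illustration:

```python
# A work order with no matching customer row keeps its quantities and gets
# NULL (None) customer attributes, instead of being dropped as an INNER
# JOIN would do.
customers = {7: {"CustomerCode": "C7", "CustomerName": "Acme"}}

def enrich(work_order: dict) -> dict:
    cust = customers.get(work_order["CustomerID"]) or {
        "CustomerCode": None,
        "CustomerName": None,
    }
    return {**work_order, **cust}

matched = enrich({"WorkOrderCode": "WO-1", "CustomerID": 7})
orphan = enrich({"WorkOrderCode": "WO-2", "CustomerID": 99})  # no customer row
```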

  6. UPDATE Existing Records: All records in [Reports].[DailyWorkOrderStatus] that match an existing row in #DailyStatusAgg (matched on WorkOrderCode, StyleName, BuyMonth, PurchaseOrderCode, ColorCode, SizeName, LineID, and StatusDate) are updated with the latest quantity values. The UpdatedAt column is set to GETUTCDATE().

  7. INSERT New Records: Any row in #DailyStatusAgg that does not have a matching record in [Reports].[DailyWorkOrderStatus] (using the same composite key as the UPDATE) is inserted as a new row. The CreatedAt column is not explicitly set here and defaults to the table’s column default.
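The two-pass upsert in steps 6 and 7 can be modeled as a keyed dictionary update. This is a behavioral sketch only; the sample row and quantities are hypothetical.

```python
# Composite key used by both the UPDATE join and the NOT EXISTS check.
KEY = ("WorkOrderCode", "StyleName", "BuyMonth", "PurchaseOrderCode",
       "ColorCode", "SizeName", "LineID", "StatusDate")

def upsert(target: dict, staged_rows) -> tuple:
    """Update rows whose composite key already exists; insert the rest."""
    updated = inserted = 0
    for row in staged_rows:
        key = tuple(row[k] for k in KEY)
        if key in target:          # UPDATE pass: key matched
            target[key].update(row)
            updated += 1
        else:                      # INSERT pass: NOT EXISTS
            target[key] = dict(row)
            inserted += 1
    return updated, inserted

base = {"WorkOrderCode": "WO-1", "StyleName": "Tee", "BuyMonth": "2024-05",
        "PurchaseOrderCode": "PO-9", "ColorCode": "BLK", "SizeName": "M",
        "LineID": 3, "StatusDate": "2024-05-01", "SyncedQuantity": 40}
target = {}
first = upsert(target, [base])                              # new key: insert
second = upsert(target, [{**base, "SyncedQuantity": 55}])   # same key: update
```

Re-running with the same key updates in place rather than duplicating, which is why a stable composite key (see the LineID note later in this document) is essential.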

  8. Commit or Rollback: If all steps complete without error, the transaction is committed and the success result set is returned. If any error is raised, the CATCH block rolls back the transaction (if one is active) and returns the error number and message.

  • Indexes Used:

    • Clustered/default indexes on DailyWorkOrderStatusID.
  • Potential Bottlenecks:

    • Large COUNT aggregations across all rows in [Reports].[Piece] using bitmask evaluation on every run — no date filter limits the scan.
    • Multi-step CTE chain (PieceEvents → EventLog → aggregation) followed by a wide UNPIVOT across 12 event columns adds processing overhead on high-volume piece data.
    • Secondary aggregation joining #TempBundleStatus to 10 dimensional tables with multiple GROUP BY columns before loading #DailyStatusAgg.
    • Upsert pattern (separate UPDATE + INSERT with NOT EXISTS) performs two passes over [Reports].[DailyWorkOrderStatus] on every execution.
  • Transaction Handling:

    • The entire operation (temp table cleanup, CTE aggregation, staging build, UPDATE, INSERT) is wrapped in a single explicit transaction. SET XACT_ABORT ON ensures any runtime error automatically triggers a rollback, and the CATCH block explicitly rolls back if @@TRANCOUNT > 0, guaranteeing no partial writes are ever persisted to [Reports].[DailyWorkOrderStatus].
  • Optimization:

    • Two-stage pre-aggregation (bundle-level in #TempBundleStatus, then dimension-enriched rollup in #DailyStatusAgg) reduces repeated scans of the base [Reports].[Piece] table.
    • Conditional COUNT(CASE WHEN …) inside a single GROUP BY pass avoids multiple separate queries per event type.
    • Separating UPDATE from INSERT (rather than using a MERGE) avoids MERGE’s known concurrency and plan-reuse issues in SQL Server.
    • The LEFT JOIN on [Essentials].[Customer] prevents work orders without a customer record from being silently dropped, avoiding data loss in the target table.
  • SQL Server Agent (scheduled execution)
EXEC [Reports].[uspSyncDailyWorkOrderStatus];
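A caller (for example, a job-step wrapper) can inspect the returned (Code, Message) row to decide whether the sync succeeded. The helper below is a hypothetical sketch of that check, not part of the procedure:

```python
# Code 0 means success; any other value is the SQL error number returned
# by the procedure's CATCH block.
def check_sync_result(code: int, message: str) -> None:
    if code != 0:
        raise RuntimeError(f"Sync failed with SQL error {code}: {message}")

# Success row passes silently; a nonzero code raises.
check_sync_result(0, "Daily WorkOrder Status Synced Successfully !.")
```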
  • Success Scenario:

    • Data Aggregation: Piece-level bitmask flags successfully evaluated and aggregated into daily production quantities per Work Order, Style, Purchase Order, Color, Size, Line, and Status Date — fully loaded into [Reports].[DailyWorkOrderStatus].
    • Correct Totals: Counts correctly calculated for all 12 pipeline stages — Synced, Card Initialized, Cutting Out, Induction, End-Line Quality Pass, End-Line Quality Rejection, Sewing Last Operation, Send Down, Finishing First Operation, Finishing Quality Pass, Finishing Quality Rejection, and Finishing Last Operation.
    • Upsert Accuracy: Existing records are updated with the latest quantities and UpdatedAt timestamp; new date/line/order combinations are inserted without duplicating existing rows.
    • Dimension Linkage: All dimensional attributes (Factory, Branch, Line, Customer, Color, Size, Style, Purchase Order) correctly resolved and denormalized into the target table via JOIN.
    • Return Value: Result set returns ‘Daily WorkOrder Status Synced Successfully !.’ with Code = 0.
  • No Data Scenario:

    • Procedure completes successfully with no changes if [Reports].[Piece] contains no rows or all bitmask evaluations yield NULL dates. The UPDATE affects zero rows and the INSERT finds no new keys to add — the target table [Reports].[DailyWorkOrderStatus] remains unchanged.
  • Error Scenarios:

    • Procedure fails and the transaction is fully rolled back if any SQL error occurs, including missing or inaccessible source tables, data type mismatches, constraint violations, or system-level failures. No partial updates or inserts are persisted to [Reports].[DailyWorkOrderStatus]. The result set returns the SQL error number as Code and a composed message containing the full ERROR_MESSAGE() detail.
  • Numeric Columns: Quantity columns are derived from conditional COUNT(CASE WHEN …) expressions, which naturally return 0 for unmatched events rather than NULL — no explicit ISNULL wrapping is applied, but counts are always non-negative integers.
  • Customer Columns (CustomerCode, CustomerName): May be NULL if no matching customer record exists in [Essentials].[Customer] — the JOIN to the customer table is a LEFT JOIN, so work orders without a linked customer are still retained in the output.
  • Dimensional Columns (FactoryCode, FactoryName, BranchCode, BranchName, LineCode, LineName): Expected to be non-NULL as all corresponding joins are INNER JOINs — missing dimensional records will silently exclude bundles from the output rather than producing NULLs.
  • StatusDate: Derived from actual piece event dates after UNPIVOT — will never be NULL as the EventLog CTE explicitly filters WHERE EventDate IS NOT NULL.
  • WorkOrderCode, PurchaseOrderCode, StyleName, ColorCode, SizeName: Must exist and resolve correctly across all dimensional joins for a bundle to appear in the output. Any broken join silently excludes that bundle from [Reports].[DailyWorkOrderStatus].
  • BuyMonth, StatusDate: Must be valid dates in source tables; no explicit format validation is enforced within the procedure.
  • PieceStatus: Must be a valid integer bitmask in [Reports].[Piece] — corrupt or unexpected bitmask values will cause incorrect event date extraction without raising an error.
  • LineID: Used as part of the composite upsert key — must be consistent and stable across runs; changes to LineID in [Essentials].[Line] will generate duplicate rows rather than updating existing ones.
  • Quantity Fields: SyncedQuantity, CardInitializedQuantity, CuttingOutQuantity, InductionQuantity, EndLineQualityPassedQuantity, EndLineQualityRejectedQuantity, SewingLastOperationQuantity, SendDownQuantity, FinishingFirstOperationQuantity, FinishingQualityPassQuantity, FinishingQualityRejectionQuantity, FinishingLastOperationQuantity must be numeric and non-negative; guaranteed by COUNT aggregation.
  • Execution Time: Monitor end-to-end execution time, particularly as row volume in [Reports].[Piece] grows — the full-table bitmask scan with no date filter is the primary driver of runtime.
  • Rows Processed: Track row counts in #TempBundleStatus (bundle-event level) and #DailyStatusAgg (denormalized summary level) to detect unexpected drops or spikes in data volume between stages.
  • Aggregation Efficiency: Monitor CTE performance across the PieceEvents bitmask evaluation, EventLog UNPIVOT, and the two-stage GROUP BY aggregation — these are the most compute-intensive steps.
  • Upsert Balance: Track the ratio of rows updated vs rows inserted on each run. A consistently high INSERT count may indicate the composite key is drifting or source data is accumulating historical gaps.
  • Data Quality Checks: Validate that piece counts in #TempBundleStatus align with expected bundle volumes from [Cutting].[Bundle]; verify that no bundles are silently dropped due to missing dimensional records (Color, Size, Line, Factory).
  • Error Handling: Monitor for SQL errors related to missing or renamed source tables, bitmask evaluation failures on unexpected PieceStatus values, and transaction rollback events logged via the CATCH block’s returned error message and code.
  • Maintenance Requirements: Regular index maintenance on [Reports].[Piece] (BundleID, PieceStatus), [Cutting].[Bundle] (BundleID, WorkOrderStyleID), and the composite key columns of [Reports].[DailyWorkOrderStatus]; statistics updates on all source and target tables to ensure optimal query plans for the multi-join aggregation and upsert operations.