Continuous Improvement Specialist & Aspiring Data Analyst
A collection of real-world systems, tools, and initiatives built at the intersection of manufacturing operations and data. Each project started with a problem on the plant floor.
Our facility lacked real-time visibility into machine performance across over 100 pieces of equipment, including presses, extruders, formers, cutters, and thermoformers. Operators had no live KPI feedback, and data collection was inconsistent and manual.
I facilitated the full MES rollout at our facility, coordinating the project across department managers, the plant electrical engineer, and corporate IoT teams.
Operators now have live visibility into their performance through strategically placed KPI dashboards. Data that previously required manual collection is now captured automatically through the MES, laying the groundwork for deeper cross-system analysis.
Click any card to expand the full case study.
Replaced manual shift data collection with an automated MES API-connected workbook covering 62 shift sheets.
ERP-driven scrap approximation at the run level, combined with a structured speed loss capture overhaul.
Anchor-date methodology and historical OEE analysis to measure whether machine centerlines improved performance.
Fleet-wide fault frequency report flagging both excessively faulting machines and potential sensor reliability failures.
Merged MES, QMS, and ERP data via ODBC and API into automated Excel dashboards used daily by plant leadership.
Before this report existed, supervisors had to physically walk to every machine on the floor, manually record production numbers by hand, and enter that data into an outdated workbook with no automation — twice a day, every day. The old workbook had no connection to live systems and provided no department-level visibility.
I built an analysis table inside the MES that the system references and exports as a CSV. Power Query connects to the MES via its web API, pulling production data from all machines simultaneously — either live mid-shift or at shift end depending on which query is used.
The workbook contains 62 sheets — one per shift, two per day — covering a full month. All API calls are built into a single parameterized M function so each sheet passes its own date and shift parameters into the same underlying logic. Metadata tables layer in department and machine context that the raw API output alone couldn't provide.
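The single-function pattern can be sketched in Python. The endpoint URL, parameter names, and shift codes below are hypothetical stand-ins for the real MES API — the point is only that all 62 sheets share one piece of logic and differ solely in the arguments they pass:

```python
from datetime import date

# Hypothetical MES endpoint -- a stand-in, not the real API URL.
MES_PRODUCTION_ENDPOINT = "https://mes.example.local/api/production"

def build_shift_query(shift_date: date, shift: str) -> dict:
    """Build the request for one shift sheet.

    Mirrors the single parameterized M function: every sheet calls
    the same logic, passing only its own date and shift code, so the
    API plumbing is defined exactly once.
    """
    if shift not in ("DAY", "NIGHT"):  # assumed shift codes
        raise ValueError(f"unknown shift code: {shift}")
    return {
        "url": MES_PRODUCTION_ENDPOINT,
        "params": {"date": shift_date.isoformat(), "shift": shift},
    }

# Each sheet supplies only its own parameters to the shared function.
q = build_shift_query(date(2024, 3, 14), "NIGHT")
```

Centralizing the call this way means a change to the API contract is made once, not 62 times — the same maintainability argument that motivates the single M function.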
Supervisors no longer walk the floor to collect numbers. Production data for every line on both sides of the forming department is pulled automatically and available on refresh. The report also serves as a shift turnover tool — supervisors log notes, flag issues, and hand off context shift to shift. OEE and production totals are surfaced at line level and rolled up for each side of the department.
The press department had no reliable way to track material consumption or approximate scrap at the run level. Without roll count and starting linear footage, there was no way to close the loop on what was actually used versus what was produced. Speed loss — one of the biggest hidden contributors to poor OEE — was also going completely uncaptured, making it impossible to identify whether slowdowns were tied to a specific job, deck, ink color, operator, or shift pattern.
The report connects to the ERP system via Power Query to pull run-level data for every press. Combining linear feet produced with roll count and starting linear footage yields a scrap approximation per run — a metric that previously required manual estimation or didn't exist at all.
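The exact formula isn't spelled out above; one plausible reading of the run-level arithmetic, with illustrative field names, looks like this:

```python
def approximate_scrap(roll_count: int, start_footage_per_roll: float,
                      linear_feet_produced: float) -> float:
    """Approximate scrap for one press run (illustrative formula).

    Material in = rolls consumed x starting linear footage per roll;
    whatever is not accounted for by good linear feet produced is
    treated as approximate scrap.
    """
    material_in = roll_count * start_footage_per_roll
    return material_in - linear_feet_produced

# Example run: 3 rolls of 5,000 ft each, 13,750 ft produced.
scrap = approximate_scrap(3, 5000.0, 13750.0)   # 1,250 ft unaccounted for
rate = scrap / (3 * 5000.0)                     # scrap as share of material in
```

This closes the loop the department was missing: material in versus material out, per run, with no manual estimation.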
For speed loss, I overhauled the department's operator production books to add structured capture fields. Operators log speed reductions as part of their normal workflow, with some data points feeding automatically from connected systems and others entered manually. Power Query then joins this data to the ERP run data for trend analysis.
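The join between operator-logged speed loss and ERP run context can be illustrated with a minimal sketch. Record shapes and field names here are assumptions, not the real schemas:

```python
from collections import defaultdict

# Illustrative ERP run records and operator speed-loss entries.
erp_runs = [
    {"run_id": "R-101", "job": "J-7", "press": "P2", "operator": "A", "shift": "DAY"},
    {"run_id": "R-102", "job": "J-8", "press": "P2", "operator": "B", "shift": "NIGHT"},
]
speed_loss_log = [
    {"run_id": "R-101", "reduction_fpm": 40, "reason": "ink buildup"},
    {"run_id": "R-101", "reduction_fpm": 25, "reason": "web tension"},
]

# Join logged speed loss onto run context, then aggregate by any
# dimension (job, deck, color, press, operator, shift) for trends.
runs_by_id = {r["run_id"]: r for r in erp_runs}
loss_by_operator = defaultdict(int)
for entry in speed_loss_log:
    run = runs_by_id[entry["run_id"]]
    loss_by_operator[run["operator"]] += entry["reduction_fpm"]
```

Once the two datasets share a run identifier, the same aggregation works across every dimension the department wants to trend.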
Run-level visibility into linear feet produced, rolls consumed, and approximate scrap — all pulled automatically from ERP. The overhauled production books gave the department its first structured speed loss dataset, enabling trend analysis across job, deck, color, machine, operator, and shift.
Centerline events — resetting a machine to a known baseline mechanical state — are a significant investment of time and labor. Without a structured way to measure before and after OEE, there was no reliable way to know whether a centerline actually moved the needle. Decisions about which machines needed intervention were largely subjective, and the impact of completed centerlines went unmeasured.
The system starts with a consistent process for identifying chronically struggling machines based on sustained OEE trends. Once a centerline is completed, an anchor date is recorded — the dividing line between before and after periods used in analysis.
Large sets of historical OEE data are pulled from the MES for every machine in the fleet. Power Query automates all transformation and computation using heavy grouping, indexing, and cross-dataset compilation to align pre- and post-centerline windows per machine. Everything runs on refresh with no manual data manipulation required.
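The core of the anchor-date comparison can be sketched in a few lines of Python. The data shape and window logic are simplified assumptions standing in for the Power Query grouping described above:

```python
from datetime import date
from statistics import mean

def centerline_delta(oee_history, anchor):
    """Mean OEE after a centerline anchor date minus mean OEE before.

    oee_history: iterable of (date, oee) pairs for one machine.
    The anchor date is the dividing line between the pre- and
    post-centerline windows.
    """
    pre = [oee for d, oee in oee_history if d < anchor]
    post = [oee for d, oee in oee_history if d >= anchor]
    return mean(post) - mean(pre)

# Illustrative history for one machine; centerline completed Feb 1.
history = [(date(2024, 1, 5), 0.61), (date(2024, 1, 12), 0.59),
           (date(2024, 2, 2), 0.68), (date(2024, 2, 9), 0.70)]
delta = centerline_delta(history, anchor=date(2024, 2, 1))  # ~ +0.09 OEE
```

In practice the production version also controls window length per machine; the sketch shows only the before/after split that makes the delta attributable to the intervention.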
For the first time, the facility has a repeatable, data-driven way to evaluate the effectiveness of centerline events. CI can point to specific OEE deltas tied to specific interventions and build a historical record that informs future decisions about where to focus maintenance and improvement efforts.
Machine sensors generate faults that stop equipment when something goes out of tolerance — but there was no systematic way to analyze fault patterns across the fleet. High fault counts drive downtime and can signal conditions leading to quality defects. But the less obvious problem is equally important — very few or zero faults may indicate a sensor reliability issue. Both extremes are signals that were going undetected.
Sensor fault data is pulled from the MES via Power Query and the platform's web API. The data is transformed to count fault occurrences per machine, broken down by fault type and sensor, and organized into time-based sequences to surface how frequency changes over time.
The report flags machines at both ends of the spectrum — high fault counts surfaced for mechanical or process review, low or zero counts flagged as potential sensor reliability concerns. A side-by-side machine comparison view gives reviewers a consistent basis for prioritizing where to focus. Threshold values are currently being validated against known machine histories.
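The two-sided flagging logic reduces to a simple threshold check per machine. The threshold values below are placeholders — as noted above, the real cutoffs are still being validated against known machine histories:

```python
from collections import Counter

# Illustrative fault events (one entry per fault, keyed by machine).
fault_events = ["M1", "M1", "M1", "M1", "M1", "M2", "M3", "M3"]
fleet = ["M1", "M2", "M3", "M4"]

HIGH_FAULT_THRESHOLD = 4  # placeholder: review mechanical/process causes
LOW_FAULT_THRESHOLD = 1   # placeholder: fewer than this suggests a dead sensor

counts = Counter(fault_events)
flags = {}
for machine in fleet:
    n = counts.get(machine, 0)
    if n >= HIGH_FAULT_THRESHOLD:
        flags[machine] = "high: mechanical/process review"
    elif n < LOW_FAULT_THRESHOLD:
        flags[machine] = "low: possible sensor reliability issue"
```

Note that a machine absent from the fault data entirely (M4 here) is exactly the case that was previously invisible — silence from a sensor is itself a signal.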
Maintenance and CI will have a tool to proactively identify two categories of risk that were previously invisible — machines being stopped excessively, and machines where sensors may have stopped doing their job. Both are caught before they become larger problems.
The MES, QMS, and ERP systems each held a piece of the picture, but no one was looking at all three together. Cross-system insights — connecting quality results to production conditions, or scheduling data to actual throughput — required manual exports and hours of reconciliation.
Using ODBC connections and API responses, Power Query pulls data directly from all three source systems into a single Excel layer. Transformation logic joins, cleans, and shapes the data into consistent output tables that feed the dashboards. Reports refresh against live data — no manual exports, no copy-paste reconciliation.
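The shaping step — one consistent output row assembled from three systems — can be sketched as a key-based merge. Keys and field names below are assumptions about the real schemas:

```python
# Illustrative rows keyed by work order, one dict per source system.
mes = {"WO-123": {"oee": 0.72, "machine": "Former-3"}}
qms = {"WO-123": {"defect_rate": 0.013}}
erp = {"WO-123": {"scheduled_qty": 10000, "actual_qty": 9640}}

def merged_row(work_order: str) -> dict:
    """Assemble one consistent dashboard row from all three systems.

    Plays the same role as the Power Query transformation layer:
    join on a shared key, tolerate a missing source, emit one shape.
    """
    row = {"work_order": work_order}
    for source in (mes, qms, erp):
        row.update(source.get(work_order, {}))
    return row

row = merged_row("WO-123")
```

The value is in the shared key: once MES, QMS, and ERP records resolve to the same work order, quality results can sit next to the production conditions that produced them.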
Leadership now has a fuller picture of performance, quality, and operational efficiency in one place. Insights that required hours of manual work to surface are available on demand, and the reports serve as connective tissue between systems that were never designed to talk to each other.