LOGISTICS

Machine Learning Solution for Engine Anomaly Detection & Reporting Data Drift

Real-time Detection

Results

We automated incident detection on a tugboat, reducing the time spent combing through manual incident reports to summarize situations that could damage an engine. We also streamlined and automated the end-to-end machine learning lifecycle, from development and training through deployment and monitoring, allowing the client to add and deploy new models with ease.

Real-time Detection

Allows the business to identify unusual patterns as soon as they occur, reducing the likelihood of overlooked anomalies.

Data Driven Decision Making

Timely insights enable quicker responses and help identify potential risks before they escalate.

Data Drift Detection

Ensures models remain accurate and reliable, and allows the team to focus on strategic tasks rather than monitoring the data.

Background

Moran, New Canaan, CT

Our client lacked visibility into engine anomalies and faced delays in incident reporting, leading to millions of dollars in part-replacement costs and vessel downtime. Headstorm developed machine learning models to detect engine anomalies and data drift in engine sensors, proactively flagging incidents that could destroy an engine and avoiding significant downtime.

Tech Stack
  • Azure DevOps
  • Azure Synapse
  • Azure Machine Learning
  • Terraform

Build Type
  • Outlier Detection
  • Operational Excellence

Team
  • Project Leader
  • ML Architect
  • Sr. Data Scientist/ML Engineer
  • Data Scientist

What we did

Feasibility Study

Through the Headstorm Foundry process, we met with different stakeholders to identify and confirm the most feasible business problem to solve. We began with four use cases and, based on business value and data availability, landed on anomaly detection.

Modeling

We identified the key data attributes related to the use case and validated their relationships in the existing data. We then iterated through model selection, experimentation, testing, and performance-metric evaluation until we met the agreed evaluation criteria.
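To make the idea concrete, here is a minimal sketch of unsupervised outlier detection on simulated engine-temperature readings using a simple z-score rule. All sensor values, the injected faults, and the threshold are illustrative assumptions, not the client's actual data or model.

```python
import random
import statistics

random.seed(0)

# Simulated steady-state engine temperatures (°C) with two injected fault spikes.
readings = [random.gauss(85.0, 2.0) for _ in range(200)]
readings[50] = 120.0   # injected anomaly
readings[120] = 130.0  # injected anomaly

mean = statistics.fmean(readings)
stdev = statistics.stdev(readings)

def is_anomaly(value, threshold=4.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

anomalies = [i for i, v in enumerate(readings) if is_anomaly(v)]
print(anomalies)
```

In practice this rule would be one early baseline in the experimentation loop, to be compared against richer multivariate models on real sensor data before settling on an approach.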

MLOps & Monitoring

We set up Infrastructure as Code (IaC) for resource provisioning to ensure seamless integration of models into the production environment. We also set up alerts for pipeline failures and data drift, highlighting when the model needs to adapt to new data so that it remains effective as data patterns evolve.
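The drift-alerting idea above can be sketched as follows: compare a recent window of sensor readings against a reference (training-time) baseline and raise an alert when the window mean shifts by more than a set number of baseline standard errors. The window sizes, threshold, and simulated recalibration shift are illustrative assumptions.

```python
import random
import statistics

random.seed(1)

# Reference distribution captured at training time.
reference = [random.gauss(85.0, 2.0) for _ in range(1000)]
ref_mean = statistics.fmean(reference)
ref_stdev = statistics.stdev(reference)

def drift_detected(window, z_threshold=3.0):
    """Alert when the window mean drifts beyond z_threshold baseline standard errors."""
    standard_error = ref_stdev / len(window) ** 0.5
    z = abs(statistics.fmean(window) - ref_mean) / standard_error
    return z > z_threshold

stable_window = [random.gauss(85.0, 2.0) for _ in range(50)]
drifted_window = [random.gauss(90.0, 2.0) for _ in range(50)]  # e.g. sensor recalibrated

print(drift_detected(stable_window))
print(drift_detected(drifted_window))
```

With these simulated windows, the shifted window should trigger the alert while the stable one should not; in production the same check would run on a schedule and feed the alerting pipeline.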

Feedback

The stakeholder was satisfied with the deployment of the models in production and the automated detection of incidents and sensor data drift. The ability to add additional models to the MLOps pipeline with ease was a bonus.
