This page shows how the four core platform components introduced in the Platform Overview (model-driven architecture, data integration, machine learning, and elastic compute) are used together in a working application. The example is a turbine monitoring system that predicts mechanical failures. It collects live sensor data, analyzes that data for early signs of equipment issues, and recommends preventive actions based on those predictions.
This is a simplified example; C3 AI supports predictive maintenance in much greater depth. For details on extending predictive maintenance functionality, see C3 AI’s Reliability Guide.

Application model

Every C3 AI application starts by modeling the things it needs to reason about. In this example, we have physical assets, incoming data, machine learning components, and the outputs that drive maintenance operations. The diagram below shows the model used in this example.

[Diagram: Wind Turbine Example Structure]

First are the types that describe the physical system being monitored, from facilities containing wind turbines to the individual sensors collecting data on those turbines.
  • WindFarm: A named site or region containing multiple turbines.
  • Facility: A physical subdivision within a wind farm.
  • ReliabilityAsset: An individual wind turbine.
  • Sensor: A device installed on an asset to collect data like vibration or temperature.
  • SensorMeasurement: A time-stamped reading from a sensor.
These types are populated through data integration pipelines and provide the raw inputs for analytics and model scoring. Next is the type used to define a predictive model and track its version and configuration:
  • ReliabilityMLModel: A trained model used to generate predictions. Includes metadata like algorithm name and version.
This type is used in scoring pipelines and linked to the predictions it produces. Finally, the model includes types that store the outputs of prediction workflows and support decision-making:
  • TimeToEventPrediction: A prediction result. Stores a failure time estimate and confidence score for a specific asset.
  • FailureMode: A classification of the predicted failure type (example: a loose bearing).
  • Recommendation: A suggested action (example: inspect) based on a failure mode.
These outputs are used by the application to surface insights, drive operational workflows, and trigger alerts.
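The relationships between these Types can be pictured with a minimal sketch. The dataclasses below only illustrate the fields and links described above; the Python form and field names are assumptions for illustration, not actual C3 Type System definitions.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch only: the field names and Python dataclass form are
# assumptions, not actual C3 Type System definitions.

@dataclass
class WindFarm:
    id: str
    name: str                        # a named site or region

@dataclass
class Facility:
    id: str
    wind_farm_id: str                # physical subdivision within a wind farm

@dataclass
class ReliabilityAsset:
    id: str
    facility_id: str                 # an individual wind turbine

@dataclass
class Sensor:
    id: str
    asset_id: str
    kind: str                        # e.g. "vibration" or "temperature"

@dataclass
class SensorMeasurement:
    sensor_id: str
    timestamp: datetime
    value: float                     # a time-stamped reading from a sensor

@dataclass
class ReliabilityMLModel:
    algorithm: str
    version: str                     # metadata for the trained model

@dataclass
class TimeToEventPrediction:
    asset_id: str
    model_version: str               # links the prediction to the model used
    predicted_failure_time: datetime
    confidence: float
    failure_mode: str                # e.g. "loose bearing"
    recommendation: str              # e.g. "inspect"
```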

Data integration

The system ingests telemetry from wind turbines and maps it into the application model. Data arrives from external systems—often in inconsistent formats—and needs to be cleaned and linked to the correct sensors and assets. In this example:
  • Turbines stream sensor data using a vendor API or MQTT broker.
  • Incoming payloads are parsed and validated in a Connector Type.
  • Each record is transformed using a Transformation Type that handles field renaming, unit conversion, or filtering.
  • A Mapping Type matches the incoming sensor ID to a Sensor object and writes the reading as a new SensorMeasurement.
The platform supports both streaming and batch ingest. Failed records are logged so they can be inspected and reprocessed, and pipelines can be re-run with the same logic for corrections or backfills. Each SensorMeasurement is immediately queryable, joined to its parent Sensor, and available for model scoring. Because these ingestion pipelines operate on the same Types defined in the application, there is no need for external ETL tools or schema translation.
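Continuing with the dataclasses sketched above, the transformation and mapping steps might look roughly like the following. The payload fields, function names, and unit conversion are hypothetical; in a real application these steps are declared as Connector, Transformation, and Mapping Types rather than standalone Python functions.

```python
from datetime import datetime, timezone

# Hypothetical payload as it might arrive from a vendor API or MQTT broker.
raw = {"sensorId": "VIB-007", "temp_f": 98.6, "ts": 1717000000}

def transform(record: dict) -> dict:
    """Transformation step: field renaming, unit conversion, and filtering."""
    if "sensorId" not in record or "ts" not in record:
        raise ValueError("missing required fields")       # failed records are logged
    return {
        "sensor_id": record["sensorId"],                   # rename vendor field
        "value": (record["temp_f"] - 32) * 5 / 9,          # convert Fahrenheit to Celsius
        "timestamp": datetime.fromtimestamp(record["ts"], tz=timezone.utc),
    }

def map_to_measurement(record: dict, sensors_by_id: dict) -> SensorMeasurement:
    """Mapping step: match the incoming sensor ID to an existing Sensor
    and write the reading as a new SensorMeasurement."""
    sensor = sensors_by_id[record["sensor_id"]]
    return SensorMeasurement(
        sensor_id=sensor.id,
        timestamp=record["timestamp"],
        value=record["value"],
    )

# Example run against a hypothetical registered sensor.
sensors_by_id = {"VIB-007": Sensor(id="VIB-007", asset_id="T-42", kind="temperature")}
measurement = map_to_measurement(transform(raw), sensors_by_id)
```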

Machine learning

Once sensor data is flowing into the system, the application generates predictions about future equipment failures through the ReliabilityMLModel. The platform supports this through three main components: a feature store, model training, and scheduled scoring jobs.

C3 AI aids feature extraction and engineering with a centralized Feature Store, a repository of pre-computed feature data that provides functions for creating, materializing, and evaluating features. Once features are selected, models are trained on labeled data to learn the boundary between normal and anomalous behavior. Based on its performance, a model version can then be promoted and deployed to start generating predictions.

Once deployed, the platform schedules scoring jobs that use the current model and computed features to generate TimeToEventPrediction records for each monitored asset. Each prediction includes a failure time estimate, a confidence score, and references to both the source asset and the model version used.
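As a rough sketch of the scoring step, the job below computes summary features from recent measurements and records a TimeToEventPrediction for an asset. The feature names and the model.predict interface are assumptions for illustration; a real pipeline would draw features from the Feature Store and use the deployed ReliabilityMLModel.

```python
from datetime import datetime, timedelta, timezone
import statistics

def compute_features(measurements: list) -> dict:
    """Illustrative feature computation: summarize recent readings per asset.
    A real pipeline would materialize these through the Feature Store."""
    values = [m.value for m in measurements]
    return {"mean_value": statistics.fmean(values), "max_value": max(values)}

def score_asset(asset, measurements, model, model_version: str) -> TimeToEventPrediction:
    """Scheduled scoring step: apply the current model version to the
    computed features and store the result as a TimeToEventPrediction."""
    features = compute_features(measurements)
    days_to_failure, confidence = model.predict(features)   # hypothetical model interface
    return TimeToEventPrediction(
        asset_id=asset.id,
        model_version=model_version,
        predicted_failure_time=datetime.now(timezone.utc) + timedelta(days=days_to_failure),
        confidence=confidence,
        failure_mode="loose bearing",     # example failure classification
        recommendation="inspect",         # example recommended action
    )
```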

Elastic compute

Applications run inside isolated environments that package the full application definition: Types, pipelines, models, schedules, and interfaces. Each environment:
  • Uses the same model definition
  • Maintains its own data and configuration
  • Runs jobs in its own compute pool
This setup allows development and testing to happen independently of production. New Types or changes to ingestion logic can be deployed to Dev, validated in Test, and promoted to Prod without reconfiguring infrastructure or touching unrelated applications. Jobs for ingestion, feature computation, and model scoring are scheduled by the platform. Failures, retries, and logging are handled automatically.
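One way to picture this separation is as a single application definition paired with per-environment configuration. The structure below is purely illustrative and not a C3 deployment format; the names and cron-style schedule syntax are assumptions.

```python
from dataclasses import dataclass

APP_DEFINITION = "turbine-monitoring"    # the same model definition everywhere

@dataclass
class Environment:
    """Illustrative only: each environment shares the application definition
    but keeps its own data, configuration, and compute pool."""
    name: str                  # "dev", "test", or "prod"
    data_store: str            # environment-specific data and configuration
    compute_pool: str          # jobs run only in this environment's pool
    schedules: dict            # job name -> cron-style schedule

environments = [
    Environment("dev",  "dev-db",  "dev-pool",
                {"ingest": "*/5 * * * *", "score": "0 * * * *"}),
    Environment("test", "test-db", "test-pool",
                {"ingest": "*/5 * * * *", "score": "0 * * * *"}),
    Environment("prod", "prod-db", "prod-pool",
                {"ingest": "* * * * *",  "score": "*/15 * * * *"}),
]
```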

Next steps

This structure is reused across many applications. The same principles apply whether you’re monitoring turbines, forecasting supply chain demand, or analyzing manufacturing output.

Explore prebuilt applications
See how C3 AI application suites solve real-world problems in asset performance, supply chain, and generative AI use cases.
Learn how to build your own application
Start building with the C3 Type System and other core platform components.