NimbleEdge Platform
Effortlessly deploy and maintain real-time machine learning models on mobile edge for session-aware, privacy-preserving personalization
Benefits
NimbleEdge Platform Architecture
NimbleEdge Simulator
Run large-scale ML inference and training simulations to estimate uplift benefits from real-time personalization
NimbleEdge Cloud Service
Managed service orchestrating the machine learning pipelines for edge deployment
NimbleEdge SDK
Ultra-lightweight SDK integrated into the client mobile app to enable on-device ML capabilities
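As a rough sketch of what such an uplift simulation can estimate: replay logged sessions through a static baseline ranker and a session-aware ranker, then compare top-k hit rates. All data, ranker logic, and names below are synthetic stand-ins under assumed behavior, not the NimbleEdge Simulator's actual API:

```python
import random

random.seed(0)

# Synthetic clickstream: each session carries per-item recency scores from
# the user's recent interactions, plus the item actually clicked. Clicks
# are biased toward recently interacted items.
def make_sessions(n=1000, n_items=20):
    sessions = []
    for _ in range(n):
        recency = {i: random.random() for i in range(n_items)}
        weights = [0.2 + recency[i] for i in range(n_items)]
        clicked = random.choices(range(n_items), weights=weights)[0]
        sessions.append({"recency": recency, "clicked": clicked})
    return sessions

def baseline_rank(session):
    # Static order with no session awareness.
    return list(range(len(session["recency"])))

def personalized_rank(session):
    # Session-aware: rank items by real-time recency features.
    return sorted(session["recency"], key=session["recency"].get, reverse=True)

def hit_rate(sessions, ranker, k=3):
    hits = sum(s["clicked"] in ranker(s)[:k] for s in sessions)
    return hits / len(sessions)

sessions = make_sessions()
base = hit_rate(sessions, baseline_rank)
pers = hit_rate(sessions, personalized_rank)
print(f"baseline top-3 hit rate:     {base:.3f}")
print(f"personalized top-3 hit rate: {pers:.3f}")
print(f"estimated uplift:            {(pers - base) / base:+.1%}")
```

Because the synthetic clicks favor recently seen items, the session-aware ranker scores a higher hit rate; a production simulation would replay real logged sessions instead.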
NimbleEdge SDK
Managed, optimized on-device ML across the complete device landscape with a lightweight SDK under 200 KB
Store user interactions in real time with a single SDK API call in a managed on-device database, with support for dynamically adjusting ingested events
Use a Python-based data processing engine to execute operations on raw, real-time data to generate ML features
Minimize memory and compute footprint on app with managed, custom operator implementation
Precompute features based on real-time user interactions to preserve CPU and compute bandwidth for inference
Sync features between user devices and cloud feature stores to enable use of holistic feature sets for inference
Optimize on-device inference with low latency across all device models, OS versions, and chipset architectures
Minimize resource consumption (battery, RAM) with managed inference using levers like operator fusion and prioritization
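A minimal sketch of the event-ingestion and feature-precomputation pattern described above, in plain Python. The `EventStore` class, its method names, and the decayed-click feature are illustrative assumptions, not the actual NimbleEdge SDK API:

```python
import time
from collections import deque

# Hypothetical on-device event store: a single call logs an interaction,
# mirroring the "single SDK API call" pattern, with bounded memory use.
class EventStore:
    def __init__(self, max_events=500):
        self.events = deque(maxlen=max_events)

    def add_event(self, event_type, item_id, ts=None):
        self.events.append({"type": event_type, "item": item_id,
                            "ts": ts if ts is not None else time.time()})

# Hypothetical feature script: the kind of Python-based processing that
# could precompute session features from raw events ahead of inference.
# Each click contributes a weight that halves every `half_life_s` seconds.
def session_features(store, now=None, half_life_s=300.0):
    now = now if now is not None else time.time()
    decayed = {}
    for e in store.events:
        if e["type"] == "click":
            age = now - e["ts"]
            decayed[e["item"]] = decayed.get(e["item"], 0.0) + 0.5 ** (age / half_life_s)
    return decayed

store = EventStore()
store.add_event("click", "item_a", ts=0.0)
store.add_event("click", "item_a", ts=300.0)
store.add_event("click", "item_b", ts=600.0)
feats = session_features(store, now=600.0)
print(feats)  # → {'item_a': 0.75, 'item_b': 1.0}
```

Keeping this computation incremental and bounded is what lets features stay fresh without contending with the model for CPU at inference time.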
NimbleEdge Cloud Service
Seamlessly manage on-device ML pipelines, from deployment to monitoring, with the NimbleEdge cloud service, accessed via an easy-to-use customer portal
Effortlessly serialize and host edge ML models, and orchestrate execution and data flow management
One-click, over-the-air updates for ML models and data processing scripts to enable rapid experimentation
Deploy existing models written in PyTorch, TensorFlow, NumPy, ONNX, or XGBoost with no need to rewrite for edge
Leverage pre-built integrations with AWS, Azure, GCP and Databricks to use existing models and cloud feature stores
Cohort-based, verbose logs and dashboards providing complete visibility into on-device deployments
Ingest user clickstream data directly to cloud data stores as needed for real-time ML experimentation
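One common way to implement the over-the-air update flow described above is a checksum-based manifest diff: the client compares its local deployment manifest against the cloud's and downloads only the assets that changed. The manifest fields, asset names, and function names below are hypothetical, not the actual NimbleEdge protocol:

```python
import hashlib

def checksum(blob: bytes) -> str:
    # Content hash used to detect changed assets.
    return hashlib.sha256(blob).hexdigest()

def plan_update(local, remote):
    """Return the names of assets whose remote checksum differs from local."""
    return [name for name, meta in remote["assets"].items()
            if local["assets"].get(name, {}).get("sha256") != meta["sha256"]]

# Local device holds a model and a feature-processing script.
model_v1 = b"serialized-edge-model-v1"
script_v1 = b"def compute_features(events): ..."
local = {"assets": {
    "ranker.onnx": {"sha256": checksum(model_v1)},
    "features.py": {"sha256": checksum(script_v1)},
}}

# Cloud publishes a new model version but the same feature script.
model_v2 = b"serialized-edge-model-v2"
remote = {"assets": {
    "ranker.onnx": {"sha256": checksum(model_v2)},
    "features.py": {"sha256": checksum(script_v1)},
}}

to_fetch = plan_update(local, remote)
print(to_fetch)  # → ['ranker.onnx']  only the changed asset is downloaded
```

Diffing by checksum rather than version string is a deliberate choice in such schemes: it makes updates idempotent and keeps a fleet of devices convergent even after partial or interrupted rollouts.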