NimbleEdge Platform
Deploy machine learning models on the mobile edge for real-time, hyper-personalized, and privacy-aware ML.
NimbleEdge Cloud Orchestration
NimbleEdge Simulator
Run large-scale inference and training simulations to estimate uplift benefits from advanced techniques such as hyper-personalization and real-time ML.
NimbleEdge Cloud Service
Managed service that orchestrates machine learning pipelines for edge deployment.
NimbleEdge SDK
Ultra-lightweight SDK integrated into the mobile application to provide on-device ML capabilities.
NimbleEdge Cloud Service
Serializes and hosts ML models for download by mobile edge devices.
Orchestration engine that determines execution order and manages data flow.
Gives ML teams complete central control over on-device pipelines, including on-the-fly updates to data processing pipelines and models.
Pre-built integrations with AWS, Azure, and Databricks for leveraging existing models and cloud feature stores.
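As an illustration of the on-the-fly update flow above, the Kotlin sketch below shows what a versioned deployment descriptor pushed from the cloud service to devices might look like. The type, field names, and polling logic are assumptions for illustration, not the actual NimbleEdge schema or API.

```kotlin
// Hypothetical deployment descriptor pushed by the cloud service to devices.
// Field names and structure are illustrative assumptions, not the NimbleEdge schema.
data class EdgeDeployment(
    val modelId: String,            // which model the device should run
    val modelVersion: Int,          // bumped when a new serialized model is hosted
    val modelUrl: String,           // where the device downloads the serialized model
    val featurePipelineVersion: Int // bumped when the data processing pipeline changes
)

// A device can compare versions on each poll and download only what changed.
fun needsUpdate(current: EdgeDeployment?, latest: EdgeDeployment): Boolean =
    current == null ||
        latest.modelVersion > current.modelVersion ||
        latest.featurePipelineVersion > current.featurePipelineVersion
```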
Learning Platform
Aggregates individualized on-device models to update (train) the global models.
Ensures models are trained while personally identifiable information (PII) remains on the customer's device.
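This aggregation resembles federated averaging: devices contribute only locally updated model weights, never raw data or PII. The Kotlin sketch below shows a plain unweighted average over per-device weight vectors as a minimal illustration of that technique; it is not NimbleEdge's actual aggregation code, and the function name is an assumption.

```kotlin
// Federated-averaging-style aggregation (illustrative, not NimbleEdge's implementation).
// Each entry in deviceWeights is one device's locally updated weight vector;
// raw interaction data and PII never leave the device.
fun aggregateWeights(deviceWeights: List<FloatArray>): FloatArray {
    require(deviceWeights.isNotEmpty()) { "need at least one device update" }
    val dim = deviceWeights.first().size
    val global = FloatArray(dim)
    for (weights in deviceWeights) {
        require(weights.size == dim) { "all updates must share the same shape" }
        for (i in 0 until dim) global[i] += weights[i]
    }
    for (i in 0 until dim) global[i] /= deviceWeights.size
    return global
}
```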
NimbleEdge SDK
Warehouse
On-device managed database for storing persistent raw data such as user interaction events.
Data Processing Engine
On-device managed data processing engine that computes aggregate features from user interaction events.
Ultra-fast per-user feature computation, since processing is decentralized across devices (a sketch follows).
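As a concrete illustration of the Warehouse and Data Processing Engine working together, the Kotlin sketch below folds raw interaction events into simple windowed aggregate features. The event shape, feature names, and function are assumptions for illustration, not the NimbleEdge API.

```kotlin
import java.time.Duration
import java.time.Instant

// Illustrative only: event shape and feature names are assumptions, not the NimbleEdge API.
data class InteractionEvent(val category: String, val action: String, val timestamp: Instant)

// Fold raw events from the on-device warehouse into per-category aggregate features
// over a recent time window (e.g. click counts per category).
fun aggregateFeatures(
    events: List<InteractionEvent>,
    window: Duration,
    now: Instant = Instant.now()
): Map<String, Float> =
    events
        .filter { Duration.between(it.timestamp, now) <= window && it.action == "click" }
        .groupingBy { "clicks_${it.category}" }
        .eachCount()
        .mapValues { (_, count) -> count.toFloat() }
```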
Data Store
On-device, in-memory store holding a replica of processed data: feature vectors dynamically computed from user events and cloud-based data.
Precomputed feature vectors are served for inference within 0.5-1 millisecond, about 5-10 times faster than equivalent cloud data stores.
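The sub-millisecond serving figure follows from the store being local and in memory: a feature lookup is a hash-map read rather than a network round trip. The minimal Kotlin sketch below illustrates the idea; the class and method names are assumptions, not the NimbleEdge API.

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Illustrative in-memory feature store; class and method names are hypothetical.
// Feature vectors are precomputed (by the data processing engine or synced from the cloud),
// so inference-time reads are simple local lookups.
class InMemoryFeatureStore {
    private val vectors = ConcurrentHashMap<String, FloatArray>()

    fun put(featureName: String, vector: FloatArray) {
        vectors[featureName] = vector
    }

    // Lookup is a hash-map read, so serving stays well under a millisecond.
    fun get(featureName: String): FloatArray? = vectors[featureName]
}
```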
Inference Engine
OS-, CPU-architecture-, and model-specific compile-time and runtime optimizations for minimum latency and resource consumption.
Execution times of less than 30 milliseconds from user interaction to prediction (see the sketch after this list).
A unique model for every user, enabling truly hyper-personalized results.
5-10% uplift in model performance.
Privacy-aware ML, as PII is not transmitted to NimbleEdge.
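NimbleEdge's inference engine internals are not detailed here, so the Kotlin sketch below uses TensorFlow Lite purely to illustrate the general on-device flow it optimizes: load a compiled model, feed it the precomputed feature vector from the data store, and read back a prediction. The function, thread count, and tensor shapes are illustrative assumptions.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Illustrative on-device inference using TensorFlow Lite (not NimbleEdge's engine):
// load a compiled model, run it on a precomputed feature vector, return the prediction.
fun predict(modelFile: File, features: FloatArray): FloatArray {
    val options = Interpreter.Options().setNumThreads(2) // tune for the device CPU
    val interpreter = Interpreter(modelFile, options)
    try {
        val input = arrayOf(features)        // shape [1, featureDim]
        val output = arrayOf(FloatArray(1))  // shape [1, 1], e.g. a single score
        interpreter.run(input, output)
        return output[0]
    } finally {
        interpreter.close()
    }
}
```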