Fuel your computer vision AI models

In-cabin monitoring runs on our synthetic data

SKY ENGINE AI's platform provides an easy, flexible solution for acquiring high-quality, reliable data to train and test computer vision algorithms for in-cabin monitoring systems (ICMS).

Scroll down to discover how

Contact us

Browse the potential ways
you can use our Platform

Private Vehicles Driver and Occupant State Monitoring

In private vehicles, ICMS monitor drivers' eye movements, head positions, and blink rates to detect signs of fatigue. If drowsiness is identified, the system alerts the driver, thereby reducing the risk of fatigue-related accidents.

Aviation Pilot Vigilance Assurance

In aircraft, ICMS track pilots' attention and alertness levels by analyzing facial cues and eye movements. The system can provide alerts or take corrective actions if signs of distraction or drowsiness are detected, thereby enhancing flight safety.

Railway Transportation Operator Vigilance Monitoring 

In trains, ICMS track the attention and alertness of operators by monitoring eye movements and head positions. If the system detects signs of drowsiness or distraction, it can trigger alarms or initiate automated responses to ensure passenger safety.

Public Transportation Passenger Behavior Analysis

In buses and trains, ICMS can monitor passenger activities to identify unusual or potentially dangerous behaviors. This enhances security by enabling rapid responses to incidents such as vandalism or passenger distress.

Commercial Trucking Driver Fatigue Monitoring 

In long-haul trucking, ICMS ensure safety by detecting drowsiness, distraction, or unsafe behavior during long drives. These systems alert drivers to take corrective action, improving awareness and promoting safer driving. In-cabin monitoring systems help trucking companies reduce accidents, enhance fleet safety, and comply with rest regulations.

Marine Vessels Crew Alertness Monitoring 

On ships, ICMS monitor crew members' alertness by analyzing facial expressions and eye movements. If signs of fatigue or distraction are detected, the system can issue alerts to prevent accidents, enhancing maritime safety.

Selected Platform Features

In the cabin

Cabin interiors are available for a variety of vehicle makes and models, with adjustable front and rear seats. Seatbelt status (fastened, incorrectly fastened, not fastened), seat occupancy (driver, passenger, child), and the presence of pets or objects can also be customized.

Camera location

For in-cabin monitoring applications, we support multiple camera locations for the renders: the center stack, the rearview mirror, or either of the two pillars. Training your CV algorithm on many camera views makes your product more versatile and robust.
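
As an illustration, a render queue covering every mounting point could be set up along these lines. This is a minimal sketch with hypothetical names of our own; it is not the Platform's API:

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

# Hypothetical names for illustration only; not the SKY ENGINE AI API.
class CameraMount(Enum):
    CENTER_STACK = "center_stack"
    REARVIEW_MIRROR = "rearview_mirror"
    LEFT_PILLAR = "left_pillar"
    RIGHT_PILLAR = "right_pillar"

@dataclass
class RenderJob:
    mount: CameraMount
    scene_seed: int

# Render every scene from every mount so the trained model
# is not tied to a single viewpoint.
jobs = [RenderJob(mount, seed) for mount, seed in product(CameraMount, range(100))]
```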

Human models

Our characters come in diverse identities, varying in height, body mass, and ethnicity. They include males and females across all ages, from newborns to the elderly. Customization options also cover clothing, headwear, eyewear (spectacles, IR-benign and IR-blocking sunglasses), and face masks.

Hand-held objects

Drivers and passengers can hold various objects, including phones, cigarettes, food, and drinks. Additional handheld items like keys, wallets, books, cameras, and tablets add realism, while the driver can also grip the steering wheel.

Gestures

Passengers and drivers display various gestures, from waving and pointing to peace signs, facepalms, and finger guns. Additional interactions include clapping, clenched fists, and biker waves. Drivers can also vary hand placement: both hands on or off the wheel, one hand on another control element, or motions such as scratching or touching panels.

Activities

Occupants' head movements include looking around, checking mirrors, turning, tilting, and reacting to light, as well as blinking, yawning, crying, laughing, closing eyes, and interacting with other occupants. Body movements include sitting upright, slouching, leaning, turning toward the rear seat, using the headrest, and tilting.

Objects Placed in Cabin

The cabin can feature various child seats, from baby capsules to booster seats, along with everyday items like books, phones, and food. Larger objects, such as handbags, laptops, and sports gear, can also be placed on seats for realistic vehicle conditions.

External Environment

You can modify not only the cabin interior but also the external environment. Ambient light is balanced across times of day, and external scenes can change at random among urban, countryside, seaside, woodland, highway, and other settings.

Distractions

You can program your drivers and occupants to display a wide selection of emotions (anger, sadness, happiness), focus states (drowsy, asleep, alert), and 360° gaze vectors. With this information, you can train your models to detect distracted driving and other potentially dangerous behaviors.
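
As an example of how such labels are consumed downstream, a 3D gaze vector can be converted to the yaw/pitch angles many gaze-estimation models predict. A minimal sketch, assuming a camera-centered coordinate frame (x right, y up, z toward the camera):

```python
import numpy as np

def gaze_to_angles(gaze: np.ndarray) -> tuple[float, float]:
    """Convert a 3D gaze vector to (yaw, pitch) in radians.

    Assumed frame: x right, y up, z toward the camera, so a gaze
    looking straight into the camera is (0, 0, -1).
    """
    g = gaze / np.linalg.norm(gaze)
    yaw = np.arctan2(g[0], -g[2])  # left/right deviation from the camera axis
    pitch = np.arcsin(g[1])        # up/down deviation
    return float(yaw), float(pitch)

yaw, pitch = gaze_to_angles(np.array([0.2, -0.1, -0.97]))
```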

Datasets Tailored to Your
Machine Learning Needs

Multimodality
Support

Our Synthetic Data Cloud lets you generate images in whichever modalities you need: visible-light RGB, near infrared (NIR), thermal vision, radar, and ultra-wideband (UWB).

Ground
Truth

Ground truth includes metadata on humans, objects, seat configurations, and seat occupancy (by humans, objects, or children). It also covers seat belt status (fastened or not) and annotations on child seat types and occupancy. Additionally, it includes metadata on head and body pose, objects held by hand, and hand activity.
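
To make this concrete, a per-frame ground-truth record might look roughly like the following. The field names and layout are our own illustration, not the Platform's export schema:

```python
# Hypothetical per-frame ground-truth record (illustrative field names only).
frame_ground_truth = {
    "frame_id": 1042,
    "seats": [
        {"position": "driver", "occupied_by": "adult",
         "seatbelt": "fastened",
         "head_pose_deg": {"yaw": 12.0, "pitch": -4.5, "roll": 1.2},
         "held_object": "phone", "hand_activity": "texting"},
        {"position": "rear_left", "occupied_by": "child_seat",
         "child_seat": {"type": "booster", "occupied": True},
         "seatbelt": "fastened"},
        {"position": "front_passenger", "occupied_by": None,
         "seatbelt": "not_fastened"},
    ],
}
```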

Randomization
Tools

Our Synthetic Data Cloud employs deterministic machinery for randomizing scene parameters and optimizing active learning. Stable random number generators and sampling are crucial for such outcomes, and traditional rendering pipelines cannot match them. The Platform's infrastructure guarantees stability and reproducibility across modules.
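
The key property is that a seeded generator makes every scene reproducible: the same seed always yields the same configuration. A minimal sketch of the idea, with parameter names and ranges of our own choosing:

```python
import numpy as np

def sample_scene(seed: int) -> dict:
    """Sample one scene configuration; identical seeds give identical scenes."""
    rng = np.random.default_rng(seed)
    return {
        "seatbelt": rng.choice(["fastened", "incorrectly_fastened", "not_fastened"]),
        "environment": rng.choice(["urban", "countryside", "seaside", "woods", "highway"]),
        "time_of_day_h": float(rng.uniform(0.0, 24.0)),
        "driver_height_cm": float(rng.normal(172.0, 9.0)),
    }

assert sample_scene(7) == sample_scene(7)  # reproducible by construction
```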

More features

A synthetic data generation platform tailored to the development of driver monitoring systems (DMS)

SKY ENGINE AI’s platform offers scalable and realistic datasets that ensure both quality and privacy. Synthetic data provides:

  • Accelerated AI model training
  • Enhanced AI model generalizability
  • Zero data privacy concerns

Data generated with our platform offers information-rich, dependable ground truth

You can access sophisticated metadata, such as:

  • Semantic masks and bounding boxes
  • Depth and normal maps
  • Key points
  • Customizable annotations, such as COCO JSON (see the loading sketch below)
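
Because COCO JSON is a de-facto standard, the exported annotations drop straight into common tooling. A minimal loading sketch with pycocotools; the file path is a placeholder, not a real export:

```python
from pycocotools.coco import COCO

# Placeholder path; point this at the annotation file exported from the platform.
coco = COCO("annotations/in_cabin_train.json")

img_ids = coco.getImgIds()
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0]))
for ann in anns:
    print(ann["category_id"], ann["bbox"])  # bbox is [x, y, width, height]
```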

Trusted by

Efrat Swissa

Director Core ML, Google. Ex-Nvidia

My team and I take great pleasure in supporting SKY ENGINE AI; our collaboration is a win-win for both Nvidia and SKY ENGINE AI. SKY ENGINE is showing what is possible with Nvidia tech, and SKY ENGINE AI is leading the way with its synthetic data and ML platform, which ultimately will dominate how companies train DL models. I look forward to more collaboration opportunities.

Some answers to your most-asked questions

What details are available in labels generated on the SKY ENGINE AI Platform?

We provide a wide array of detailed labels required for training computer vision models. They include:

  • Classification (whole-image content labels)
  • Image-aligned 2D and 3D bounding boxes (horizontal and vertical box edges)
  • Object-aligned 2D and 3D bounding boxes (the box fits snugly around the object)
  • Segmentation (pixel-level shading of training object shapes)
  • Instance segmentation
  • 3D keypoints
  • Tracks of training object movement trajectories across sequences of generated imagery
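
As an illustration of how these label types relate to each other, the tightest image-aligned 2D box can be derived directly from an instance segmentation mask. A minimal numpy sketch:

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Tightest image-aligned (x, y, width, height) box around a binary mask."""
    ys, xs = np.nonzero(mask)  # pixel coordinates of the instance
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```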

What kinds of sensor outputs can be modelled and simulated on the Platform?

You can choose from a variety of modalities, among others:

  • Multispectral imagery (including RGB images)
  • Panchromatic imagery
  • Near-infrared imagery
  • Hyperspectral imagery
  • Lidar data
  • Motion data (electro-optical)
  • Motion data (infrared)
  • Ultra-wideband (UWB)

Are your renders physically based?

Yes, all our renders are based on physical models of light interactions with surfaces and sensors, such as microfacet models for refraction through rough surfaces [1] or Fresnel term approximations for metals [2].
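
For reference, a widely used Fresnel approximation is Schlick's, where F_0 is the reflectance at normal incidence and θ is the angle between the view direction and the surface normal (whether this is the exact approximation cited in [2] depends on that work):

```latex
F(\theta) \approx F_0 + (1 - F_0)\,(1 - \cos\theta)^5
```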

How are your images annotated?

Our images are annotated automatically on the Platform. Having complete control over the scene means you possess all the information about the 3D dependencies present. By eliminating manual labeling, you remove the biases and inconsistencies associated with human annotation.

What kinds of animation can be modelled and simulated on the Platform?       

Virtually any animation can be modelled and simulated in our Synthetic Data Cloud; it depends only on the resources and time you have to spare. Example animations include:

  • Dynamic shadowing
  • Dynamic illumination
  • Vehicle activity and scenarios
  • Human activity and scenarios
  • Weather
  • Sensor platform movement (platform motion, fly-through behavior)
  • Sensor movement (camera jitter, sway, or tilting; see the sketch below)
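
As a toy illustration of the last item, camera jitter and sway can be simulated by perturbing the camera orientation with small, seeded random angles. A sketch under our own assumptions, not the Platform's animation API:

```python
import numpy as np

def jitter_angles(n_frames: int, sigma_deg: float = 0.3, seed: int = 0) -> np.ndarray:
    """Per-frame (yaw, pitch, roll) offsets in degrees, smoothed into sway."""
    rng = np.random.default_rng(seed)
    raw = rng.normal(0.0, sigma_deg, size=(n_frames, 3))
    kernel = np.ones(5) / 5.0  # moving average adds low-frequency sway
    return np.stack([np.convolve(raw[:, k], kernel, mode="same")
                     for k in range(3)], axis=1)

offsets = jitter_angles(120)  # one (yaw, pitch, roll) row per rendered frame
```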