NITYMED stands for Nighttime-Yawning-Microsleep-Eyeblink-Distraction. It is a dataset developed in the framework of the CPSoSaware project by the ESDA LAB of project partner University of Peloponnese (UoP).

130 videos have been captured in Patras, Greece, displaying drivers in real cars moving under nighttime conditions, where drowsiness detection is most important. The participating drivers are 11 males and 10 females, all Caucasian. The selected drivers have different features (hair color, beard, glasses, etc.). This dataset has been created for two purposes:
a) to train customized AI/ML models for facial shape alignment in videos or photographs displaying Caucasian drivers in nighttime conditions
b) to test the accuracy of drowsiness detection and to compare more general AI/ML models trained on both daytime and nighttime data, under various environmental conditions

This dataset has been used to detect yawning and sleepy eye blinks. However, other face, mouth, and eye tracking applications can also be tested with it (driver distraction/microsleep, facial expressions, etc.).
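A common approach to sleepy-eye-blink detection on footage like this is the eye aspect ratio (EAR) computed from facial landmarks: the ratio drops sharply when the eye closes, and prolonged low values indicate microsleep. The sketch below is a generic illustration of that metric, not code shipped with NITYMED; the landmark ordering follows the widely used 68-point facial landmark convention.

```python
import math

def euclidean(p, q):
    # Plain 2D Euclidean distance between two (x, y) landmark points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6 as in
    the 68-point facial landmark convention (p1/p4 are the corners,
    p2/p6 and p3/p5 are the upper/lower lid pairs)."""
    vertical = euclidean(eye[1], eye[5]) + euclidean(eye[2], eye[4])
    horizontal = euclidean(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmark coordinates for a roughly open eye (hypothetical
# values): an open eye yields a higher EAR than a closed one, so a
# frame-count threshold on low EAR values flags prolonged closures.
open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
print(eye_aspect_ratio(open_eye))  # ~0.667 for this open-eye shape
```

In practice the landmarks would come from a face-alignment model run on each video frame, which is exactly the kind of model purpose (a) above targets.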

The technical details of the offered videos can be found in the following links:
ESDALAB website
IEEE dataport 

Realistic vehicle trajectories and driving parameters from the CARLA autonomous driving simulator

This dataset contains realistic trajectories from multiple vehicles moving in the simulated environment of the CARLA autonomous driving simulator. Two different maps (Map04 and Map10) have been exploited, corresponding to realistic driving conditions in simulated urban environments. Five sub-datasets have been extracted, corresponding to different numbers of vehicles (e.g., 50, 100, and 200) spawned in each map for 200 seconds. Every record of the dataset consists of: timestamp, vehicle ID, ground-truth position x, y, z in the CARLA reference system, ground-truth pitch, roll, and yaw angles (deg), ground-truth linear velocity in the x, y, and z directions (m/s), ground-truth linear acceleration in the x, y, and z directions (m/s^2), and ground-truth angular velocity in the x, y, and z directions (deg/s).
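The record layout above maps naturally onto a flat per-vehicle sample. The sketch below assumes a plain comma-separated line with the fields in the listed order; the actual file format and field delimiters should be checked against the IEEE DataPort release, so treat this as an illustrative parser, not the dataset's official loader.

```python
from dataclasses import dataclass

@dataclass
class VehicleRecord:
    timestamp: float                       # simulation time (s)
    vehicle_id: int
    x: float; y: float; z: float           # ground-truth position, CARLA frame (m)
    pitch: float; roll: float; yaw: float  # ground-truth orientation (deg)
    vx: float; vy: float; vz: float        # linear velocity (m/s)
    ax: float; ay: float; az: float        # linear acceleration (m/s^2)
    wx: float; wy: float; wz: float        # angular velocity (deg/s)

def parse_record(line: str) -> VehicleRecord:
    # Assumes one comma-separated record per line, 17 fields in the
    # order listed in the description above.
    fields = line.strip().split(",")
    return VehicleRecord(float(fields[0]), int(fields[1]),
                         *(float(f) for f in fields[2:17]))

# Hypothetical example line, used only to illustrate the field order.
rec = parse_record("12.5,42,10.0,-3.2,0.1,0.0,0.0,90.0,"
                   "5.0,0.0,0.0,0.2,0.0,0.0,0.0,0.0,1.5")
```

Grouping records by `vehicle_id` and sorting by `timestamp` then reconstructs each vehicle's trajectory over the 200-second run.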

Access the dataset at IEEE dataport.

CarlaScenes: A synthetic dataset for odometry in autonomous driving

Despite the great scientific effort to adequately capture the complex environments in which autonomous vehicles (AVs) operate, there are still use cases that even state-of-the-art methods fail to handle. In odometry problems specifically, geometric solutions operate under assumptions that are often violated in AVs, while deep learning methods do not achieve high accuracy. To address this, we present CarlaScenes, a large-scale simulation dataset captured using the CARLA simulator. The dataset is oriented toward the challenging odometry scenarios that cause current state-of-the-art odometers to deviate from their normal operation. Based on a case study of failures observed in experiments, we distinguished 7 different data sequences. Besides providing consistent reference poses, CarlaScenes includes instance-level semantic annotations for both image and lidar data.

The full dataset is available at GitHub.