This dataset contains face images drawn from different datasets, together with their reconstructions obtained through the StyleGAN2 inversion process.
TrueFace is a dataset containing real and synthetic human faces generated by the StyleGAN and StyleGAN2 generative models and shared on three popular social networks (Facebook, Telegram, and Twitter), for a total of 210k images.
SHADE (SHAring DEvice) is a collection of images shared on WhatsApp from different types of devices, operating systems, and user interfaces (e.g., mobile, desktop, browser).
Perception of Synthetic Faces
We release the dataset and the data collected for the work:
FF++ Social contains videos from the widely known dataset FaceForensics++ (validation and testing splits) that have been shared through Facebook and YouTube. The dataset is intended to support researchers in multimedia forensics in evaluating the generalization ability of forensic detectors when dealing with data circulating on the web through popular sharing platforms.
The FF++ Social dataset has been used in the work:
F. Marcon, C. Pasquini, G. Boato, "Detection of manipulated face videos over social networks:
The ISIMA (Images Sharing via Instant Messaging App) dataset contains images shared between an Android phone and an iOS phone via three instant messaging applications: Facebook Messenger, WhatsApp, and Telegram. All images are stored in JPEG format.
R-SMUD & V-SMUD
To facilitate research on tracking the social network origin of images, we collected two datasets: R-SMUD (RAISE Social Multiple Up-Download) and V-SMUD (VISION Social Multiple Up-Download).
All images are shared up to three times through three platforms: Facebook (FB), Flickr (FL), and Twitter (TW).
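With at most three sharing steps over three platforms, the possible sharing chains can be enumerated directly. The sketch below assumes that a platform may appear more than once in a chain (e.g., FB-FB); the function name and structure are illustrative, not part of the dataset's tooling.

```python
from itertools import product

# Platform codes used in R-SMUD / V-SMUD
PLATFORMS = ["FB", "FL", "TW"]  # Facebook, Flickr, Twitter

def sharing_chains(max_depth=3):
    """Enumerate all ordered sharing chains of length 1..max_depth.

    Assumes platforms may repeat within a chain; each chain is a tuple
    of platform codes in upload order.
    """
    chains = []
    for depth in range(1, max_depth + 1):
        chains.extend(product(PLATFORMS, repeat=depth))
    return chains

chains = sharing_chains()
print(len(chains))  # 3 + 9 + 27 = 39 chains under this assumption
```

Under these assumptions, each image in the datasets corresponds to one of 39 distinct up-download histories.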
Computer Vision and Behaviour Analysis
All the videos in the dataset are retrieved from YouTube; 25 of them were recorded using car-mounted dash cams, while the remaining ones were taken with other devices such as mobile phones.
The UNITN Social Interaction (USI) Dataset consists of four types of two-person interactions: Talking, Shaking, Hugging, and Fighting. Each interaction type has 16 samples, for a total of 16 × 4 = 64 samples.
The EventMask Dataset consists of two archives of images. Each archive contains three directories: one with the original images used for the game, and another with the event saliency maps already in binary form.