JPL Mars Yard Database

Introduction

The JPL Mars Yard Database is built for understanding terrain types from multiple sensors, such as RGB and IR cameras. It contains 2 datasets:
    1. Semantic Dataset 1: Understanding Terrain Types from RGB and IR,
    2. Virtual Sensor Dataset 1: Deriving RGB-to-IR mapping models.

Both datasets were collected at the JPL Mars Yard (Fig. 1) on Nov. 17th, 2017, with an RGB camera (FLIR Grasshopper 5M) and a thermal camera (FLIR AX65). We collected images every hour from 10 am to 5 pm (sunset), 8 times in total. About 52 images were captured at each time, with the camera positions changed between shots. The dataset includes challenging images with shadows, reflections due to the Sun, and direct sunlight into the cameras (Fig. 2). Figure 3 shows the IR image captured at the same time as Fig. 2; the IR image is not affected by the direct sunlight. Since the visible and thermal images were taken by different cameras, a registration process between the cameras is necessary. After removing distortion with the estimated intrinsic camera parameters, we warped the images using an estimated homography matrix.
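To illustrate this registration step, here is a minimal sketch in Python assuming OpenCV; the intrinsic matrix, distortion coefficients, homography, and file names are hypothetical placeholders, not the calibration estimated for this dataset.

    import cv2
    import numpy as np

    # Hypothetical calibration values -- replace with your own estimates.
    K = np.array([[800.0, 0.0, 320.0],       # intrinsic camera matrix
                  [0.0, 800.0, 256.0],
                  [0.0, 0.0, 1.0]])
    dist = np.array([-0.3, 0.1, 0.0, 0.0])   # distortion coefficients
    H = np.eye(3)                            # homography IR -> RGB (placeholder)

    rgb = cv2.imread("rgb.png")
    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)

    # 1) Remove lens distortion with the estimated intrinsic parameters.
    ir_undistorted = cv2.undistort(ir, K, dist)

    # 2) Warp the undistorted IR image onto the RGB image plane with the
    #    estimated homography matrix.
    ir_registered = cv2.warpPerspective(ir_undistorted, H,
                                        (rgb.shape[1], rgb.shape[0]))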

Figure 1: Mars Yard
Figure 2: RGB image at 4 pm
Figure 3: IR image at 4 pm


JPL Mars Yard Database, Semantic Dataset 1

Semantic Dataset 1 contains RGB, IR, and annotation images, as shown in Fig. 4. We manually annotated all images into 7 categories: unlabeled, sand, soil, rocks, bedrock, rocky terrain, and ballast. In the annotation images, pink, brown, green, light blue, purple, and red correspond to sand, soil, rocks, bedrock, rocky terrain, and ballast, respectively.

The dataset was first introduced in the MIPR 2019 paper, "TU-Net and TDeepLab: Deep Learning-based Terrain Classification Robust to Illumination Changes, Combining Visible and Thermal Imagery" [1].

The lists of training, validation, and test data are provided in the section below. In the paper [1], we evaluated the proposed approach under 3 different settings (a sketch of this time-based splitting appears after the list):
    (Exp. 1) train, validate, and test with the dataset captured at 17:00, which has no influence from the Sun,
    (Exp. 2) train, validate, and test with all data from 10:00 to 17:00,
    (Exp. 3) train and validate with data from 14:10 to 17:00, then run two tests: (i) with data from 10:00 to 13:00 and (ii) with data from 14:00 to 17:00.
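For reference, the sketch below shows how such a time-based split could be built from the provided lists. It rests on a loud assumption: each list entry is taken to begin with the capture time as an HHMM prefix, which is a hypothetical naming, not the dataset's documented format.

    # Minimal sketch of the time-based split used in Exp. 3.
    def capture_minutes(entry):
        # Hypothetical: assumes each entry starts with an "HHMM" prefix,
        # e.g. "1007_...". Adapt this to the actual file naming.
        return int(entry[:2]) * 60 + int(entry[2:4])

    with open("train.txt") as f:
        entries = [line.strip() for line in f if line.strip()]

    cutoff = 14 * 60 + 10  # 14:10
    afternoon_entries = [e for e in entries if capture_minutes(e) >= cutoff]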

The figure below shows examples of RGB, IR, and annotation images.

Figure 4: Example images at 10:07, 10:24, 12:35, 16:00, and 16:53. From top to bottom, the rows show RGB, IR, and annotation images, respectively. In the annotation images, pink, brown, green, light blue, purple, and red correspond to sand, soil, rocks, bedrock, rocky terrain, and ballast, respectively.


JPL Mars Yard Database, Virtual Sensor Dataset 1

Virtual Sensor Dataset 1 contains RGB and IR images. The lists of training, validation, and test data are provided below.
The dataset was first introduced in the PBVS 2019 paper, "MU-Net: Deep Learning-based Thermal IR Image Estimation from RGB Image" [2].

Download from Caltech data website

You can download the dataset from the Caltech data website. There are 4 tar.gz files:
- RGB images
- IR images
- Mask images
- Annotation images
All 4 tar.gz files are used in "JPL Mars Yard Database, Semantic Dataset 1", while the first 3 files are used in "JPL Mars Yard Database, Virtual Sensor Dataset 1".

There are 3 txt files listing the training, validation, and test data (see the loading sketch after this list):
- train.txt: the list of training data
- val.txt: the list of validation data
- test.txt: the list of test data
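As a convenience, here is a minimal loading sketch in Python; the tar.gz file names are hypothetical placeholders, since only their contents are listed above.

    import tarfile

    # Hypothetical archive names -- use the actual names from the
    # Caltech data website.
    archives = ["rgb.tar.gz", "ir.tar.gz", "mask.tar.gz", "annotation.tar.gz"]
    for archive in archives:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall("data")

    # Each txt file holds one entry per line.
    def read_list(path):
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    train_list = read_list("train.txt")
    val_list = read_list("val.txt")
    test_list = read_list("test.txt")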


Citation

If you make use of the JPL Mars Yard Database, Semantic Dataset 1 in any form, please cite the following paper:
[1] Y. Iwashita, K. Nakashima, A. Stoica, and R. Kurazume, "TU-Net and TDeepLab: Deep Learning-based Terrain Classification Robust to Illumination Changes, Combining Visible and Thermal Imagery", IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2019), San Jose, California, USA, March 28-30, 2019.

If you make use of the JPL Mars Yard Database, Virtual Sensor Dataset 1 in any form, please cite the following paper:
[2] Y. Iwashita, K. Nakashima, S. Rafol, A. Stoica, and R. Kurazume, "MU-Net: Deep Learning-based Thermal IR Image Estimation from RGB Image", IEEE Workshop on Perception Beyond the Visible Spectrum (PBVS), Long Beach, CA, USA, June 16, 2019.

Tips on implementation

Each pixel in the annotation images stores a numeric label that corresponds to a terrain type:
    0: unlabeled
    1: sand
    2: soil
    3: ballast
    4: rock
    5: bedrock
    6: rocky terrain

The label is encoded across the RGB channels of the annotation image and can be read in Python as follows:
    import numpy as np
    from PIL import Image

    # Decode each pixel of the annotation image into a single label:
    # channel 0 holds the low byte, channel 2 the high byte.
    encoded = np.array(Image.open(path_to_label))
    label = np.bitwise_or(np.bitwise_or(
        encoded[:, :, 0].astype(np.uint32),
        encoded[:, :, 1].astype(np.uint32) << 8),
        encoded[:, :, 2].astype(np.uint32) << 16)
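The decoded label array can then be used with the class ids listed above, for example to build a per-class mask (a small sketch; the dictionary simply restates the ids given in this section):

    # Class ids as listed above.
    CLASS_NAMES = {0: "unlabeled", 1: "sand", 2: "soil", 3: "ballast",
                   4: "rock", 5: "bedrock", 6: "rocky terrain"}

    rock_mask = (label == 4)             # boolean mask of "rock" pixels
    coverage = rock_mask.mean() * 100.0  # fraction of rock pixels, in percent
    print(f"rock covers {coverage:.1f}% of the image")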

Updated 01/01/2020

