Kyushu University Parametric Human Model
Kyushu University Parametric Human Model (KY human model) was built by Kurazume Laboratory in the Graduate School of Information Science and Electrical Engineering, Kyushu University, based on the AIST/HQL database of human body dimensions and shapes. The KY human model was built from 17 male models out of the 49 models in the AIST/HQL database. It consists of a standard human shape and 30 parameters; by changing the parameters, the shape can be deformed into various body shapes, as shown in Fig. 1. Please refer to [2] for more details.
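The exact parameterization is defined in [2]; as a rough sketch only (an assumption on our part, not the published formulation), parametric body models of this kind typically express a deformed shape as the standard shape plus a weighted combination of deformation bases:

```latex
\mathbf{v}(w_1,\dots,w_{30}) \;=\; \bar{\mathbf{v}} \;+\; \sum_{i=1}^{30} w_i\,\mathbf{b}_i
```

where $\bar{\mathbf{v}}$ is the standard human shape, $\mathbf{b}_i$ are deformation bases, and $w_1,\dots,w_{30}$ are the 30 shape parameters.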
[1] Makiko Kouchi, Masaaki Mochimaru, "National Institute of Advanced Industrial Science and Technology H18PRO-503"
[2] Shinji Tarumi, Yumi Iwashita, Ryo Kurazume, "Model-based motion capture system using person-adaptive model", Meeting on Image Recognition and Understanding, IS3-48, 2011 (in Japanese)
Release of the KY human model
If you want the KY human model (the standard model and parameters) and a program for visualizing the model as in Fig. 1, please follow these steps:
1. Download the database release agreement.
2. After you sign the agreement, send it to us by post or e-mail.
Consent
The researcher(s) agrees to the following restrictions on the KY human model:
- Redistribution: Without prior approval from Kyushu University Principal Investigator, the KY human model, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not. This includes further distributing, copying or disseminating to a different facility or organizational unit within the requesting university, organization or company.
- Modification and Commercial Use: Without prior approval from Kyushu University, the KY human model, in whole or in part, may not be modified or used for commercial purposes.
- Citation: Any document that reports on research that uses the KY human model must acknowledge its use by including an appropriate citation to [2].
- The terms of the AIST/HQL database agreement must also be observed: https://www.dh.aist.go.jp/database/fbodyDB/download/agreement.html
Papers
- Makiko Kouchi, Masaaki Mochimaru, "National Institute of Advanced Industrial Science and Technology H18PRO-503"
- Shinji Tarumi, Yumi Iwashita, Ryo Kurazume, "Model-based motion capture system using person-adaptive model", Meeting on Image Recognition and Understanding, IS3-48, 2011 (in Japanese)
Kyushu University Kinect Place Recognition Database
Kyushu University Kinect Place Recognition Database, which was collected and distributed by Kurazume Laboratory in the Graduate School of Information Science and Electrical Engineering, Kyushu University, consists of color and depth images for six categories: corridor, kitchen, laboratory, office, study room, and toilet. For more details, please refer to the following papers. Kurazume Laboratory owns the copyright of this collection of images and serves as the source of distribution of the Kyushu University Kinect Place Recognition Database.
Copyright
Copyright of the Kyushu University Kinect Place Recognition Database is owned by Kurazume laboratory, Kyushu University.
Citation
Any document that reports on research that uses the Kyushu University Kinect Place Recognition Database must acknowledge its use by including an appropriate citation to
Oscar Martinez Mozos, Hitoshi Mizutani, Ryo Kurazume, Tsutomu Hasegawa, Categorization of Indoor Places Using the Kinect Sensor, Sensors, Vol. 12, No. 5, pp.6695-6711, 2012
Commercial Use
Without prior approval from the principal investigator, the Kyushu University Kinect Place Recognition Database, in whole or in part, may not be modified or used for commercial purposes.
Redistribution
Without prior approval from the principal investigator, the Kyushu University Kinect Place Recognition Database, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not.
Kyushu University Kinect Place Recognition Database
| Category | Datasets |
|---|---|
| corridors (255 Mbyte, 5 datasets) | genkiclub_f3_corridor_01, genkiclub_f4_corridor_01, w2_10f_corridor_01, w2_7f_corridor_01, w2_9f_corridor_02 |
| kitchens (204 Mbyte, 8 datasets) | genkiclub_f3_kitchen_01, genkiclub_f3_kitchen_02, w2_10f_kitchen_01, w2_10f_kitchen_09, w2_9f_kitchen_01, w2_9f_kitchen_02, w2_9f_kitchen_10, w4_6f_kitchen_01 |
| labs (583 Mbyte, 4 datasets) | hasegawa_lab, kurazume_lab, taniguchi_lab, uchida_lab |
| offices (95 Mbyte, 3 datasets) | hasegawa_office, kurazume_office, morooka_office |
| studyrooms (328 Mbyte, 8 datasets) | w2_2f_studyroom_01, w2_2f_studyroom_02, w2_2f_tatamiroom_01, w2_2f_tatamiroom_02, w4_2f_studyroom_01, w4_2f_studyroom_02, w4_2f_tatamiroom_01, w4_2f_tatamiroom_02 |
| toilets (116 Mbyte, 3 datasets) | w2_10f_toilet_01, w2_2f_toilet_01, w2_9f_toilet_01 |
Datasets shown in red are used for the experiments in our papers.
To convert a depth file (*.txt) to a point cloud (*.pcd), you can use the sample code "depth2cloud.cpp". Text files are included in the same directory.
For more information about the .pcd format, please visit
http://pointclouds.org/documentation/tutorials/pcd_file_format.php
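The ASCII PCD header described at the link above is simple enough to write directly. As an illustration of the format only (this is not the distributed depth2cloud.cpp, and the function name is ours), here is a minimal sketch that writes a list of 3D points to an ASCII PCD v0.7 file:

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

struct Point { float x, y, z; };

// Write a minimal ASCII PCD v0.7 file (header layout as described in the
// PCL documentation). Returns false if the file cannot be opened.
bool writePcd(const char* path, const std::vector<Point>& pts) {
    std::FILE* f = std::fopen(path, "w");
    if (!f) return false;
    std::fprintf(f,
        "# .PCD v0.7 - Point Cloud Data file format\n"
        "VERSION 0.7\n"
        "FIELDS x y z\n"
        "SIZE 4 4 4\n"
        "TYPE F F F\n"
        "COUNT 1 1 1\n"
        "WIDTH %zu\n"
        "HEIGHT 1\n"          // unorganized cloud: HEIGHT 1, WIDTH = point count
        "VIEWPOINT 0 0 0 1 0 0 0\n"
        "POINTS %zu\n"
        "DATA ascii\n",
        pts.size(), pts.size());
    for (const Point& p : pts)
        std::fprintf(f, "%g %g %g\n", p.x, p.y, p.z);
    std::fclose(f);
    return true;
}
```

A file written this way loads with `pcl::io::loadPCDFile` or the `pcl_viewer` tool.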
Papers
- Oscar Martinez Mozos, Hitoshi Mizutani, Ryo Kurazume, Tsutomu Hasegawa, Categorization of Indoor Places Using the Kinect Sensor, Sensors, Vol. 12, No. 5, pp.6695-6711, 2012
Kyushu University 4D Gait Database
Kyushu University 4D Gait Database (KY 4D Gait Database) was built by Kurazume Laboratory in the Graduate School of Information Science and Electrical Engineering, Kyushu University. Kyushu University owns the copyright of the KY 4D Gait Database, including images and 3D models.
The KY 4D Gait Database consists of sequential 3D models of forty-two walking people, silhouette images taken by 16 cameras, and the camera parameters of each camera. Figure 1 shows examples of the 3D models. For more details, please refer to [1].
[1] "Person identification from spatio-temporal 3D gait", Yumi Iwashita, Ryosuke Baba, Koichi Ogawara, Ryo Kurazume, Int. Conf. on Emerging Security Technologies, pp.30-35, September, 2010.
Release of the KY 4D Gait Database
If you want the KY 4D Gait Database, please follow these steps:
- Download the database release agreement.
- Fill in the required items on the agreement, sign it, and send it to us by e-mail or by post to the address below.
- To receive the KY 4D Gait Database on DVD, provide your UPS or FedEx account number in your request e-mail, then ask the package delivery company (such as FedEx or UPS) to pick up the KY 4D Gait Database DVDs. Our address is: Yumi Iwashita, W2-928, 744 Motooka, Nishi-ku, Fukuoka, 819-0013, Japan (tel: +81-92-802-3605). The DVDs are free of charge, but Kyushu University will NOT pay for delivery; the shipping cost is borne by you.
Consent
The researcher(s) agrees to the following restrictions on the KY 4D Gait Database:
Redistribution
Without prior approval from Kyushu University Principal Investigator, the KY 4D Gait Database, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not. This includes further distributing, copying or disseminating to a different facility or organizational unit within the requesting university, organization or company.
Modification and Commercial Use
Without prior approval from Kyushu University, the KY 4D Gait Database, in whole or in part, may not be modified or used for commercial purposes.
Citation
Any document that reports on research that uses the KY 4D Gait Database must acknowledge its use by including an appropriate citation to [1].
Papers
- Yumi Iwashita, Ryosuke Baba, Koichi Ogawara, Ryo Kurazume, "Person identification from spatio-temporal 3D gait", Int. Conf. on Emerging Security Technologies, pp.30-35, September, 2010.
Kyushu University Indoor Semantic Place Dataset
High resolution range and reflectance panoramic images
Kyushu University Indoor Semantic Place Dataset, which was collected and distributed by Kurazume Laboratory in the Graduate School of Information Science and Electrical Engineering, Kyushu University, consists of reflectance and depth images for five categories: corridor, kitchen, laboratory, study room, and office, measured with a SICK LMS-151 laser scanner and a rotating table. Kurazume Laboratory owns the copyright of this collection of images and serves as the source of distribution of the Kyushu University Indoor Semantic Place Dataset.
Copyright
Copyright of the Kyushu University Indoor Semantic Place Dataset is owned by Kurazume laboratory, Kyushu University.
Citation
Any document that reports on research that uses the Kyushu University Indoor Semantic Place Dataset must acknowledge its use by including an appropriate citation to
[1] Oscar Martinez Mozos, Hitoshi Mizutani, Hojung Jung, Ryo Kurazume, Tsutomu Hasegawa, Categorization of Indoor Places by Combining Local Binary Pattern Histograms of Range and Reflectance Data from Laser Range Finders, Advanced Robotics, Vol.27, No.18, pp.1455-1464, 2013
Data format
Text file format: depth and reflectance values
Convert to PCD file (depth2pcd)
Image size
RGB and Reflectance images: 3750~3755 x 760 pixels
Vertical field of view: -85~105 degrees
Horizontal field of view: 360 degrees
Commercial use
Without prior approval from the principal investigator, the Kyushu University Indoor Semantic Place Dataset, in whole or in part, may not be modified or used for commercial purposes.
Redistribution
Without prior approval from the principal investigator, the Kyushu University Indoor Semantic Place Dataset, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not.
Kyushu University Indoor Semantic Place Dataset

| Category (depth and reflectance) |
|---|
| Corridor (3.0 Gbyte, 60 scans) |
| Kitchen (3.0 Gbyte, 60 scans) |
| Laboratory (3.0 Gbyte, 60 scans) |
| Study room (2.8 Gbyte, 60 scans) |
| Office (2.4 Gbyte, 45 scans) |
Number of scans

| Place | Set 1 | Set 2 | Set 3 | Set 4 | Total | File size (GB) |
|---|---|---|---|---|---|---|
| Corridor | 15 | 15 | 15 | 15 | 60 | 3.0 |
| Kitchen | 15 | 15 | 15 | 15 | 60 | 3.0 |
| Laboratory | 15 | 15 | 15 | 15 | 60 | 3.0 |
| Study room | 15 | 15 | 15 | 15 | 60 | 2.8 |
| Office | 15 | 15 | 15 | - | 45 | 2.4 |
| Total | 75 | 75 | 75 | 60 | 285 | 14.2 |
Papers
- Oscar Martinez Mozos, Hitoshi Mizutani, Hojung Jung, Ryo Kurazume, Tsutomu Hasegawa, Categorization of Indoor Places by Combining Local Binary Pattern Histograms of Range and Reflectance Data from Laser Range Finders, Advanced Robotics, Vol.27, No.18, pp.1455-1464, 2013
SICK Fukuoka Outdoor Semantic Place Dataset
High resolution range and reflectance panoramic images
SICK Fukuoka Outdoor Semantic Place Dataset, which was collected and distributed by Kurazume Laboratory in the Graduate School of Information Science and Electrical Engineering, Kyushu University, consists of reflectance and depth images for four categories: forest, residential area, parking lot, and urban area, measured with a SICK LMS-151 laser scanner and a rotating table. Kurazume Laboratory owns the copyright of this collection of images and serves as the source of distribution of the SICK Fukuoka Outdoor Semantic Place Dataset.
Copyright
Copyright of the SICK Fukuoka Outdoor Semantic Place Dataset is owned by Kurazume laboratory, Kyushu University.
Citation
Any document that reports on research that uses the SICK Fukuoka Outdoor Semantic Place Dataset must acknowledge its use by including an appropriate citation to
[1] Hojung Jung, Oscar Martinez Mozos, Yumi Iwashita, Ryo Kurazume, The Outdoor LiDAR Dataset for Semantic Place Labeling, International Conference on Advanced Mechatronics (ICAM2015), 2015
Data format
pts file format: points and reflectance values
Convert to PCD file (ptxrgb2pcd)
Convert to PGM file (ptxrgb2pgm)
Image size
RGB and Reflectance images: 3750~3755 x 760 pixels
Vertical field of view: -85~105 degrees
Horizontal field of view: 360 degrees
Commercial use
Without prior approval from the principal investigator, the SICK Fukuoka Outdoor Semantic Place Dataset, in whole or in part, may not be modified or used for commercial purposes.
Redistribution
Without prior approval from the principal investigator, the SICK Fukuoka Outdoor Semantic Place Dataset, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not.
SICK Fukuoka Outdoor Semantic Place Dataset

| Category (depth and reflectance) |
|---|
| Forest (695.1 Mbyte, 36 scans) |
| Residential area (440.4 Mbyte, 31 scans) |
| Parking lot (564.1 Mbyte, 32 scans) |
| Urban area (641.9 Mbyte, 44 scans) |
Number of scans

| Place | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 | Set 6 | Set 7 | Total | File size (MB) |
|---|---|---|---|---|---|---|---|---|---|
| Forest | 4 | 2 | 3 | 6 | 7 | 6 | 8 | 36 | 695.1 |
| Residential area | 5 | 5 | 4 | 4 | 13 | 0 | 0 | 31 | 440.4 |
| Parking lot | 6 | 6 | 8 | 8 | 4 | 0 | 0 | 32 | 564.1 |
| Urban area | 5 | 5 | 5 | 7 | 8 | 6 | 8 | 44 | 641.9 |
| Total | 20 | 18 | 20 | 25 | 32 | 12 | 16 | 143 | 2341.5 |
Papers
- Hojung Jung, Oscar Martinez Mozos, Yumi Iwashita, Ryo Kurazume, The Outdoor LiDAR Dataset for Semantic Place Labeling, International Conference on Advanced Mechatronics (ICAM2015), 2015
Dense Multi-modal Panoramic 3D Outdoor Dataset for Place Categorization
High resolution color 3D point clouds with reflectance using a FARO laser scanner and synchronized color images
Dense Multi-modal Panoramic 3D Outdoor (MPO) Dataset, which was collected and distributed by Kurazume Laboratory in the Graduate School of Information Science and Electrical Engineering, Kyushu University, consists of color, reflectance, and depth images for six categories: indoor parking, outdoor parking, coast, forest, residential, and urban areas, measured with a FARO Focus 3D laser scanner. Kurazume Laboratory owns the copyright of this collection of images and serves as the source of distribution of the Dense MPO Dataset for Place Categorization.
Copyright
Copyright of the Dense Multi-modal Panoramic 3D Outdoor Dataset for Place Categorization is owned by Kurazume laboratory, Kyushu University.
Citation
Any document that reports on research that uses the Dense MPO Dataset must acknowledge its use by including an appropriate citation to
Hojung Jung, Yuki Oto, Oscar Mozos, Yumi Iwashita, Ryo Kurazume, "Multi-modal Panoramic 3D Outdoor Datasets for Place Categorization", 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp.4545-4550, Daejeon, Korea, October 9-14, 2016
Data format
PTX file format : X, Y, Z, intensity, R, G, B
PTX is an ASCII-based interchange format for point cloud data. A PTX file consists of a header followed by point data in 7 columns. For more information, please see here.
Example:

```
col                     ; number of columns
row                     ; number of rows
st1 st2 st3             ; scanner registered position
sx1 sx2 sx3             ; scanner registered axis 'X'
sy1 sy2 sy3             ; scanner registered axis 'Y'
sz1 sz2 sz3             ; scanner registered axis 'Z'
r11 r12 r13 0           ; transformation matrix:
r21 r22 r23 0           ; a simple 4x4 rotation-and-translation matrix;
r31 r32 r33 0           ; apply it to each point to get the transformed coordinate
tr1 tr2 tr3 1           ; (use double-precision variables)
X Y Z intensity R G B   ; begin coordinate list
X Y Z intensity R G B   ; intensity range: 0 - 1
```
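Applying the header's 4x4 matrix to each point can be sketched as follows (the function name is ours; this assumes the PTX row-vector convention, in which the translation sits in the fourth row, so a point maps as p' = [p 1] * M):

```cpp
#include <cassert>

// Apply the 4x4 transformation stored in a PTX header to one point.
// m is stored row-major exactly as the four matrix rows appear in the file:
// the upper-left 3x3 block is the rotation, the fourth row is the translation.
void applyPtxTransform(const double m[4][4], const double in[3], double out[3]) {
    for (int c = 0; c < 3; ++c)
        out[c] = in[0] * m[0][c] + in[1] * m[1][c] + in[2] * m[2][c] + m[3][c];
}
```

As the header comments note, double-precision variables should be used throughout.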
Convert to PCD
PCL (Point Cloud Library) is required.
ptxrgb2pcd.zip
Image size
RGB image, Depth image, Reflectance image : 5140 x 1757 pixels
Vertical field of view : 300 degrees
Horizontal field of view : 360 degrees
Commercial use
Without prior approval from the principal investigator, the Dense MPO Dataset, in whole or in part, may not be modified or used for commercial purposes.
Redistribution
Without prior approval from the principal investigator, the Dense MPO Dataset, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not.
How to get this database
Please contact dbadmin@irvs.ait.kyushu-u.ac.jp.
Dense Multi-modal Panoramic 3D Outdoor Dataset

| Category | Example file | Data size |
|---|---|---|
| Indoor parking | ParkingIn.zip (227 Mbytes) | 38.2 Gbyte, 105 scans |
| Outdoor parking | ParkingOut.zip (148 Mbytes) | 40.0 Gbyte, 108 scans |
| Coast area | Coast.zip (137 Mbytes) | 38.7 Gbyte, 103 scans |
| Forest area | Forest.zip (176 Mbytes) | 43.2 Gbyte, 116 scans |
| Residential area | Residential.zip (142 Mbytes) | 39.6 Gbyte, 106 scans |
| Urban area | Urban.zip (166 Mbytes) | 42.3 Gbyte, 112 scans |

Each category also provides example RGB, depth, and reflectance images, a 3D movie (link to a YouTube video), and a map of places.
Each place category contains 7 sets of panoramic scans. Each set corresponds to a different place inside the same category as shown in the following table:
Number of scans

| Place | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 | Set 6 | Set 7 | Total | File size (GB) |
|---|---|---|---|---|---|---|---|---|---|
| Indoor Parking | 16 | 16 | 13 | 15 | 17 | 13 | 15 | 105 | 38.2 |
| Outdoor Parking | 15 | 17 | 16 | 15 | 15 | 14 | 16 | 108 | 40.0 |
| Coast | 14 | 14 | 16 | 12 | 17 | 14 | 16 | 103 | 38.7 |
| Forest | 16 | 16 | 17 | 18 | 16 | 16 | 17 | 116 | 43.2 |
| Residential | 14 | 16 | 14 | 15 | 16 | 15 | 16 | 106 | 39.6 |
| Urban | 16 | 17 | 16 | 16 | 15 | 16 | 16 | 112 | 42.3 |
| Total | 91 | 96 | 92 | 91 | 96 | 88 | 96 | 650 | 242 |
Papers
- Hojung Jung, Yuki Oto, Oscar Mozos, Yumi Iwashita, Ryo Kurazume, "Multi-modal Panoramic 3D Outdoor Datasets for Place Categorization", 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp.4545-4550, Daejeon, Korea, October 9-14, 2016
Sparse Multi-modal Panoramic 3D Outdoor Dataset for Place Categorization
Low-resolution 3D point clouds with reflectance using a Velodyne laser scanner
Sparse Multi-modal Panoramic 3D Outdoor (MPO) Dataset, which was collected and distributed by Kurazume Laboratory in the Graduate School of Information Science and Electrical Engineering, Kyushu University, consists of depth, reflectance, and color (reference) images for six categories: indoor parking, outdoor parking, coast, forest, residential, and urban areas, measured with a Velodyne HDL-32e 3D laser scanner. Kurazume Laboratory owns the copyright of this collection of images and serves as the source of distribution of the Sparse MPO Dataset for Place Categorization.
Copyright
Copyright of the Sparse Multi-modal Panoramic 3D Outdoor Dataset for Place Categorization is owned by Kurazume laboratory, Kyushu University. All requests for the Sparse MPO Dataset should be forwarded to the principal investigator.
Citation
Any document that reports on research that uses the Sparse MPO Dataset must acknowledge its use by including an appropriate citation to
Hojung Jung, Yuki Oto, Oscar Mozos, Yumi Iwashita, Ryo Kurazume, "Multi-modal Panoramic 3D Outdoor Datasets for Place Categorization", 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp.4545-4550, Daejeon, Korea, October 9-14, 2016
Commercial use
Without prior approval from the principal investigator, the Sparse MPO Dataset, in whole or in part, may not be modified or used for commercial purposes.
Redistribution
Without prior approval from the principal investigator, the Sparse MPO Dataset, in whole or in part, will not be further distributed, published, copied, or disseminated in any way or form whatsoever, whether for profit or not.
How to get this database
Please contact dbadmin@irvs.ait.kyushu-u.ac.jp.
Sensors
- Laser range finder : Velodyne HDL-32E laser scanner
- GPS : GARMIN GPS 18x LVC
- Omni-directional camera : Kodak PIXPRO SP360
Data format
PCD, NMEA, BMP format
Point cloud
- Equipment: HDL-32e (Velodyne)
- File format: .pcd
- Data format: PointCloud2 (see the PCL library)
- Parameters: x, y, z, intensity, ring

GPS
- Equipment: GPS 18x LVC, 5m (GARMIN)
- File format: .txt
- Data format: NMEA sentences ($GPRMC); a timestamp is included

IMG
- Equipment: PIXPRO SP360 (Kodak)
- File format: .bmp
- Data format: Bitmap
Color images are reference data and are not registered with the point cloud data.
Each GPS file includes its timestamp in the header.
Point cloud and GPS files are synchronized.
Each image file is named with its timestamp.
Example
In the "set01" dataset of the "Coast" category:
- Coast_001_00001.pcd … Coast_001_00511.pcd
- Coast_001_00001_gps.txt … Coast_001_00511_gps.txt
Contents of Coast_001_00001_gps.txt:

```
seq: 17760
stamp:
  secs: 1447644393
  nsecs: 178497076
frame_id: velodyne
$GPRMC,032605,A,3338.8718,N,13012.4973,E,009.9,213.9,161115,007.0,W,D*06
```
- Coast_01_0001_1447644399.20.bmp … Coast_01_1483_1447644650.35.bmp
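The latitude and longitude in a $GPRMC sentence are encoded as ddmm.mmmm and dddmm.mmmm (degrees and decimal minutes). As an illustration of that decoding only (the helper name is ours), a field plus its hemisphere character can be converted to signed decimal degrees like this:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Convert an NMEA latitude/longitude field (ddmm.mmmm or dddmm.mmmm) and
// its hemisphere character ('N'/'S'/'E'/'W') to signed decimal degrees.
double nmeaToDegrees(const char* field, char hemisphere) {
    double v = std::atof(field);             // e.g. "3338.8718" -> 3338.8718
    double degrees = std::floor(v / 100.0);  // 33
    double minutes = v - degrees * 100.0;    // 38.8718
    double d = degrees + minutes / 60.0;
    return (hemisphere == 'S' || hemisphere == 'W') ? -d : d;
}
```

For the sentence above, "3338.8718,N" decodes to roughly 33.648 degrees north and "13012.4973,E" to roughly 130.208 degrees east.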
Convert to PTX
PCL (Point Cloud Library) and ROS are required.
pcd2ptx(velodyne).zip
Convert to cylindrical images
OpenCV and contrib (ccalib/omnidir.hpp) are required.
omni2pers.zip
Sampling frequency
Point cloud and GPS : 2 Hz
RGB image : 6 ~ 7 Hz
Image size
Depth image, Reflectance image : 32 x 1260~1270 pixels
Vertical field of view : 10.67 ~ -30.67 degrees
Horizontal field of view : 360 degrees
Vertical angular resolution : 1.33 degrees
Horizontal angular resolution : 0.17 degrees
Color image : 1024 x 1024 pixels
Sparse Multi-modal Panoramic 3D Outdoor Dataset

| Category | Example file | Data size |
|---|---|---|
| Indoor parking | ParkingInVelodyne.zip (2 Mbytes) | 17.7 Gbyte, 4780 scans |
| Outdoor parking | ParkingOutVelodyne.zip (2 Mbytes) | 23.4 Gbyte, 5445 scans |
| Coast area | CoastVelodyne.zip (2 Mbytes) | 18.3 Gbyte, 4298 scans |
| Forest area | ForestVelodyne.zip (2 Mbytes) | 32.2 Gbyte, 6479 scans |
| Residential area | ResidentialVelodyne.zip (2 Mbytes) | 31.4 Gbyte, 7464 scans |
| Urban area | UrbanVelodyne.zip (2 Mbytes) | 24.0 Gbyte, 5734 scans |

Each category also provides a reference RGB image (link to a YouTube video), a 3D point cloud image, a 3D movie (link to a YouTube video), and a map of places.
Each place category contains 10 sets of panoramic scans. Each set corresponds to a different place inside the same category as shown in the following table:
Number of scans

| Place | Set 1 | Set 2 | Set 3 | Set 4 | Set 5 | Set 6 | Set 7 | Set 8 | Set 9 | Set 10 | Total | File size (GB) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Indoor Parking | 520 | 357 | 274 | 873 | 583 | 343 | 466 | 592 | 344 | 428 | 4780 | 17.7 |
| Outdoor Parking | 874 | 579 | 388 | 370 | 477 | 536 | 581 | 563 | 460 | 617 | 5445 | 23.4 |
| Coast | 511 | 254 | 571 | 221 | 314 | 376 | 872 | 506 | 386 | 287 | 4298 | 18.3 |
| Forest | 440 | 824 | 980 | 707 | 730 | 720 | 439 | 311 | 797 | 531 | 6479 | 32.2 |
| Residential | 674 | 787 | 667 | 724 | 563 | 973 | 717 | 720 | 977 | 662 | 7464 | 31.4 |
| Urban | 490 | 572 | 587 | 487 | 410 | 566 | 712 | 565 | 606 | 739 | 5734 | 24.0 |
| Total | 3509 | 3373 | 3467 | 3382 | 3077 | 3514 | 3787 | 3257 | 3570 | 3264 | 34200 | 147 |
Papers
- Hojung Jung, Yuki Oto, Oscar Mozos, Yumi Iwashita, Ryo Kurazume, "Multi-modal Panoramic 3D Outdoor Datasets for Place Categorization", 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp.4545-4550, Daejeon, Korea, October 9-14, 2016
JPL Mars Yard Database
The JPL Mars Yard Database was collected at the JPL Mars Yard and built to support understanding of terrain types from various sensors, such as RGB and IR cameras.
Figure 1: Mars Yard. Figure 2: RGB image at 4 pm. Figure 3: IR image at 4 pm.