DogCentric Activity Dataset

Introduction

The DogCentric Activity Dataset is composed of dog activity videos taken from a first-person animal viewpoint. The dataset contains 10 different types of activities, including activities performed by the dog itself, interactions between people and the dog, and activities performed by people or cars. This dataset was first introduced in the ICPR 2014 paper, "First-Person Animal Activity Recognition from Egocentric Videos" [1].

There are a couple of errors in our paper as officially published by ICPR 2014. The paper available on this website is the corrected version. For more details, please check "Corrections on the paper" below.

Dataset

We attached a GoPro camera to the back of each of four dogs, and their owners took them for walks along their familiar routes. The walking routes cover various environments, such as a residential area, a park along a river, a sandy beach, a field, and streets with traffic. Thus, even when different dogs perform the same activity, the backgrounds vary.

The videos contain various activities, from which we chose 10 target activities of interest: 'playing with a ball', 'waiting for a car to pass by', 'drinking water', 'feeding', 'turning the dog's head to the left', 'turning the dog's head to the right', 'petting', 'shaking the dog's body', 'sniffing', and 'walking'. The videos are in 320x240 image resolution at 48 frames per second.
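
If you want to verify these properties programmatically after downloading, a short OpenCV (Python) sketch such as the following can read them back. The filename below is a placeholder; actual file names depend on the contents of the FTP folder.

    import cv2

    # Open one segmented clip (the filename here is a placeholder).
    cap = cv2.VideoCapture("dogcentric/example_clip.avi")
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # expected: 320
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # expected: 240
    fps = cap.get(cv2.CAP_PROP_FPS)                   # expected: 48
    print("%dx%d @ %.0f fps" % (width, height, fps))
    cap.release()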

Download

Segmented videos:
Each video is temporally segmented to contain a single activity. You can download the segmented videos from our anonymous FTP server (ftp://robotics-ftp.ait.kyushu-u.ac.jp). The videos are in the "dogcentric" folder.
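
For scripted downloads, a minimal Python sketch using the standard ftplib module with anonymous login is shown below. It assumes the "dogcentric" folder contains the video files directly; adjust if the server layout differs.

    import os
    from ftplib import FTP

    ftp = FTP("robotics-ftp.ait.kyushu-u.ac.jp")
    ftp.login()                      # anonymous login
    ftp.cwd("dogcentric")

    os.makedirs("dogcentric", exist_ok=True)
    for name in ftp.nlst():          # list entries in the folder
        local_path = os.path.join("dogcentric", name)
        with open(local_path, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
    ftp.quit()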

Citation

If you make use of the DogCentric Activity Dataset in any form, please cite the following paper:

[1] Y. Iwashita, A. Takamine, R. Kurazume, and M. S. Ryoo, "First-Person Animal Activity Recognition from Egocentric Videos", International Conference on Pattern Recognition (ICPR) 2014.

@inproceedings{yumi2014first,
      title={First-Person Animal Activity Recognition from Egocentric Videos},
      author={Y. Iwashita and A. Takamine and R. Kurazume and M. S. Ryoo},
      booktitle={International Conference on Pattern Recognition (ICPR)},
      year={2014},
      month={August},
      address={Stockholm, Sweden},
}

Corrections on the paper

(1) In the paper officially published by ICPR, the results of Fig. 6 (e) and Fig. 6 (f) were switched. The figure below is the correct one.

[Figure: corrected Fig. 6 (e) and (f)]

(2) In the paper officially published by ICPR, some of the values in Table I were incorrect. The table below is the correct one; red circles indicate the corrected values.

[Table: corrected Table I]

Tips on implementation

(1) In our experiments, we randomly selected half of the video sequences of each activity as the training set and used the rest for testing. When the number of videos N for an activity is odd, we used (N-1)/2 videos for training.
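
A minimal Python sketch of this split is shown below; the dictionary of per-activity video lists and its filenames are hypothetical placeholders.

    import random

    def split_per_activity(videos_by_activity, seed=0):
        # Randomly split each activity's videos in half: N // 2 for
        # training (equal to (N - 1) / 2 when N is odd), rest for testing.
        rng = random.Random(seed)
        train, test = [], []
        for activity, videos in videos_by_activity.items():
            vids = list(videos)
            rng.shuffle(vids)
            n_train = len(vids) // 2
            train.extend(vids[:n_train])
            test.extend(vids[n_train:])
        return train, test

    # Hypothetical per-activity file lists:
    videos = {"walking": ["walk1.avi", "walk2.avi", "walk3.avi"],
              "sniffing": ["sniff1.avi", "sniff2.avi"]}
    train_set, test_set = split_per_activity(videos)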

Updated 07/20/2014

