Search results for: stip

Number of results: 127

Journal: Image Vision Comput., 2014
Worapan Kusakunniran

This paper proposes a new method to extract a gait feature directly from a raw gait video. Space-Time Interest Points (STIPs) are detected where there is significant movement of the human body along both the spatial and temporal directions in local spatio-temporal volumes of the video. A histogram of STIP descriptors (HSD) is then constructed as the gait feature. In the classification stage,...
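
As a rough illustration of the pipeline described in this abstract (not the authors' implementation), the sketch below detects crude space-time interest points in a gait volume and builds a histogram of quantized STIP descriptors. The energy response, patch descriptor, codebook size, and thresholds are illustrative assumptions; numpy and scikit-learn are assumed to be available.

```python
# Minimal sketch, assuming numpy and scikit-learn; the response function,
# patch descriptor, and codebook size are illustrative, not the paper's.
import numpy as np
from sklearn.cluster import KMeans

def detect_stips(video, pct=99.0, step=4):
    """video: (T, H, W) grayscale volume. Return (t, y, x) points whose
    spatio-temporal gradient energy lies in the top (100 - pct) percent."""
    gt, gy, gx = np.gradient(video.astype(np.float32))
    energy = gt**2 + gy**2 + gx**2
    pts = np.argwhere(energy > np.percentile(energy, pct))
    return pts[::step]                         # thin out points for brevity

def describe(video, pts, r=3):
    """Flatten a small space-time patch around each interior point."""
    T, H, W = video.shape
    descs = [video[t-r:t+r, y-r:y+r, x-r:x+r].ravel()
             for t, y, x in pts
             if r <= t < T - r and r <= y < H - r and r <= x < W - r]
    return np.array(descs, dtype=np.float32)

def hsd_feature(descs, codebook):
    """Histogram of STIP descriptors: quantize against a learned codebook."""
    words = codebook.predict(descs)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# usage: learn a codebook on training descriptors, then encode each video
video = np.random.rand(30, 64, 48)             # stand-in for a gait sequence
descs = describe(video, detect_stips(video))
codebook = KMeans(n_clusters=32, n_init=10).fit(descs)
print(hsd_feature(descs, codebook))
```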

Journal: CoRR, 2017
Mengyuan Liu, Hong Liu, Chen Chen

3D action recognition has broad applications in human-computer interaction and intelligent surveillance. However, recognizing similar actions remains challenging, since previous methods fail to capture motion and shape cues effectively from noisy depth data. In this paper, we propose a novel two-layer Bag-of-Visual-Words (BoVW) model, which suppresses noise disturbances and jointly encod...
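
The two-layer model itself is not spelled out in the truncated abstract, so the sketch below only shows the generic BoVW idea it builds on: motion and shape descriptors are quantized against separate codebooks and the two normalized histograms are concatenated. The descriptor dimensions, codebook sizes, and late-fusion choice are assumptions, not the authors' design.

```python
# Generic BoVW sketch, assuming numpy and scikit-learn; all dimensions,
# codebook sizes, and the simple concatenation are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def bovw_histogram(descriptors, codebook):
    """L1-normalized histogram of visual-word assignments."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def encode_sequence(motion_descs, shape_descs, motion_cb, shape_cb):
    """Concatenate per-channel BoVW histograms (simple late fusion)."""
    return np.concatenate([bovw_histogram(motion_descs, motion_cb),
                           bovw_histogram(shape_descs, shape_cb)])

# usage with stand-in local descriptors (one row per local feature)
motion = np.random.rand(500, 64)    # e.g. depth-motion descriptors
shape = np.random.rand(500, 96)     # e.g. local shape descriptors
motion_cb = KMeans(n_clusters=64, n_init=10).fit(motion)
shape_cb = KMeans(n_clusters=64, n_init=10).fit(shape)
print(encode_sequence(motion, shape, motion_cb, shape_cb).shape)   # (128,)
```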

2012
YingLi Tian, Liangliang Cao, Zicheng Liu, Zhengyou Zhang

This chapter addresses the problem of action detection in cluttered videos. In recent years, many feature extraction schemes have been designed to describe various aspects of actions. However, due to the difficulties of action detection, e.g., cluttered backgrounds and potential occlusions, a single type of feature cannot effectively solve the action detection problem in cluttered videos. ...

2011
Utkarsh Gaur, Yingying Zhu, Bi Song, Amit K. Roy-Chowdhury

Videos usually consist of activities involving interactions between multiple actors, sometimes referred to as complex activities. Recognition of such activities requires modeling the spatio-temporal relationships between the actors and their individual variabilities. In this paper, we consider the problem of recognition of complex activities in a video given a query example. We propose a new fe...

Journal: Computer Vision and Image Understanding, 2013
Yingying Zhu, Nandita M. Nayak, Utkarsh Gaur, Bi Song, Amit K. Roy-Chowdhury

In this paper, a novel generalized framework of activity representation and recognition based on a ‘string of feature graphs (SFG)’ model is introduced. The proposed framework represents a visual activity as a string of feature graphs, where the string elements are initially matched using a graph-based spectral technique, followed by a dynamic programming scheme for matching the complete string...
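
A rough sketch of the 'string of feature graphs' idea, under generic assumptions: each temporal window is represented by a graph over its feature points, the graph's Laplacian spectrum serves as a cheap matching signature in place of the paper's spectral matching, and two strings of graphs are aligned with a standard dynamic-programming (DTW) recursion. The affinity, distances, and window construction here are illustrative.

```python
# Illustrative sketch only, assuming numpy; the affinity, spectral signature,
# and DTW alignment are generic stand-ins for the paper's matching scheme.
import numpy as np

def graph_signature(points, sigma=1.0, k=5):
    """Smallest k Laplacian eigenvalues of a Gaussian-affinity graph over the
    feature points (rows) of one temporal window."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(w, 0.0)
    lap = np.diag(w.sum(1)) - w
    eig = np.linalg.eigvalsh(lap)              # ascending order
    return eig[:k] if len(eig) >= k else np.pad(eig, (0, k - len(eig)))

def string_match_cost(string_a, string_b):
    """Dynamic-programming (DTW) alignment cost between two strings of
    graphs, each given as a list of spectral signatures."""
    na, nb = len(string_a), len(string_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = np.linalg.norm(string_a[i - 1] - string_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]

# usage: two activities, each a string of per-window feature-point sets
rng = np.random.default_rng(0)
act1 = [graph_signature(rng.random((12, 3))) for _ in range(8)]
act2 = [graph_signature(rng.random((10, 3))) for _ in range(9)]
print(string_match_cost(act1, act2))   # lower cost => more similar strings
```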

2013
Maximilian Panzner, Oliver Beyer, Philipp Cimiano

In this paper we present an online approach to human activity classification based on Online Growing Neural Gas (OGNG). In contrast to state-of-the-art approaches that perform training in an offline fashion, our approach is online in the sense that it circumvents the need to store any training examples, processing the data on the fly and in one pass. The approach is thus particularly suitable i...
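
As a hedged, much-simplified stand-in for OGNG (not the authors' algorithm), the sketch below keeps only the online, one-pass flavor: no training example is stored, the nearest prototype is nudged toward each incoming feature vector, and a new labeled prototype is grown when nothing is close enough. The growth threshold and learning rate are arbitrary illustrative values.

```python
# One-pass sketch, assuming numpy; this is a deliberately simplified
# prototype learner, not the OGNG algorithm of the paper.
import numpy as np

class OnlinePrototypeClassifier:
    def __init__(self, grow_dist=0.8, lr=0.05):
        self.protos, self.labels = [], []
        self.grow_dist, self.lr = grow_dist, lr

    def partial_fit(self, x, y):
        """Update from a single (feature, label) pair; nothing is stored
        beyond the prototype set itself."""
        x = np.asarray(x, dtype=float)
        if self.protos:
            dists = [np.linalg.norm(x - p) for p in self.protos]
            i = int(np.argmin(dists))
            if dists[i] <= self.grow_dist and self.labels[i] == y:
                self.protos[i] += self.lr * (x - self.protos[i])  # nudge winner
                return
        self.protos.append(x.copy())          # grow a new labeled prototype
        self.labels.append(y)

    def predict(self, x):
        dists = [np.linalg.norm(np.asarray(x, dtype=float) - p) for p in self.protos]
        return self.labels[int(np.argmin(dists))]

# usage: stream activity features one at a time, no replay buffer
rng = np.random.default_rng(1)
clf = OnlinePrototypeClassifier()
for _ in range(200):
    label = int(rng.integers(2))
    clf.partial_fit(rng.normal(loc=label, scale=0.3, size=16), label)
print(clf.predict(rng.normal(loc=1.0, scale=0.3, size=16)))   # likely 1
```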

2013
Mahmood Karimian, Mostafa Tavassolipour, Shohreh Kasaei

In large databases, the lack of labeled training data leads to major difficulties in classification. Semi-supervised algorithms are employed to alleviate this problem, and video databases are the epitome of such a scenario. Fortunately, graph-based methods have been shown to form promising platforms for semi-supervised video classification. Based on the multimodal characteristics of video data, different fe...
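
To illustrate the graph-based semi-supervised setting in general terms (not this paper's multimodal method), the sketch below spreads labels from a few annotated videos over a similarity graph built on per-video feature vectors, using scikit-learn's LabelSpreading. The synthetic features and neighborhood size are assumptions.

```python
# Generic sketch, assuming numpy and scikit-learn; LabelSpreading on a k-NN
# similarity graph stands in for the multimodal graph methods discussed here.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
# stand-in features: 100 videos from 2 underlying classes, only 10 labeled
X = np.vstack([rng.normal(0.0, 1.0, (50, 32)), rng.normal(3.0, 1.0, (50, 32))])
y = np.r_[np.zeros(50, int), np.ones(50, int)]
y_partial = np.full(100, -1)                   # -1 marks unlabeled videos
labeled = rng.choice(100, size=10, replace=False)
y_partial[labeled] = y[labeled]

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
print((model.transduction_ == y).mean())       # transductive accuracy
```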

2012
Qianru Sun, Hong Liu

Classifying realistic human actions in video remains challenging due to the intra-class variability and inter-class ambiguity of action classes. Recently, Spatial-Temporal Interest Point (STIP) based local features have shown great promise in complex action analysis. However, these methods are limited in that they typically rely on the Bag-of-Words (BoW) algorithm, which can hardly discriminate actions...
