Search results for: disjoint camera views
Number of results: 149291
In augmented reality, accurate geometric alignment of the real scene and virtual 3D models is important. In this paper, we propose a new method for generating arbitrary views of 3D motion events accurately by using the mutual projections between the user’s cameras and cameras around the user. In particular, we show that the trifocal tensors computed from the mutual camera projections can be used effic...
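The trifocal transfer this snippet alludes to can be sketched with the standard point–line–point relation: a point in the first view and a line through its match in the second view determine the corresponding point in the third view. A minimal NumPy illustration, assuming the canonical tensor construction for a first camera of the form [I | 0]; the cameras, scene point and function name below are made up for the check and are not the paper’s mutual-projection construction:

```python
import numpy as np

def transfer_point(T, x1, l2):
    """Point-line-point transfer: x3^k = x1^i * l2_j * T[i, j, k].

    T  : (3, 3, 3) trifocal tensor relating views 1, 2 and 3.
    x1 : homogeneous point in view 1, shape (3,).
    l2 : homogeneous line through the matching point in view 2, shape (3,).
    Returns the transferred point in view 3, defined up to scale.
    """
    return np.einsum('i,j,ijk->k', x1, l2, T)

# Synthetic check with a canonical camera triplet (made-up cameras).
rng = np.random.default_rng(0)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera fixed to [I | 0]
P2 = rng.standard_normal((3, 4))
P3 = rng.standard_normal((3, 4))

# Trifocal tensor for P1 = [I | 0]:  T_i = a_i b_4^T - a_4 b_i^T,
# with a_i, b_i the columns of P2 and P3.
T = np.stack([np.outer(P2[:, i], P3[:, 3]) - np.outer(P2[:, 3], P3[:, i])
              for i in range(3)])

X = np.append(rng.standard_normal(3), 1.0)      # random 3D scene point
x1, x2, x3 = (P @ X for P in (P1, P2, P3))      # its projections into the three views
l2 = np.cross(x2, rng.standard_normal(3))       # any line through x2 (avoid the epipolar line)

x3_hat = transfer_point(T, x1, l2)
cosine = x3_hat @ x3 / (np.linalg.norm(x3_hat) * np.linalg.norm(x3))
print(abs(cosine))                              # ~1.0: transfer matches the true projection
```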
This paper demonstrates an approach which exploits an active camera as a projective pointing mechanism. The optical centre of a static camera is notionally substituted by the centre of rotation of the active camera, as is the image plane by a frontal plane, a plane perpendicular to the optical axis of the active camera in its resting direction. Algorithms devised for 3D motion and 3D structure ...
In a crowded public space, people often walk in groups, either with people they know or with strangers. Associating a group of people over space and time can assist in understanding individuals’ behaviours, as it provides vital visual context for matching individuals within the group. Although it seems an ‘easier’ task than person matching, given more and richer visual content, this problem is in fact v...
Previous studies on inferring the origin of routing changes in the Internet are limited to failure events that generate a large number of routing changes. In this paper, we present a novel approach to origin inference of small failure events. Our scheme focuses on routing changes imposed on preferred paths of prefixes and not on transient paths triggered by path exploration. We first infer the ...
We present a novel approach to selective sampling, co-testing, which can be applied to problems with redundant views (i.e., problems with multiple disjoint sets of attributes that can be used for learning). The main idea behind co-testing consists of selecting the queries among the unlabeled examples on which the existing ...
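Co-testing is usually described as querying the ‘contention points’ of the unlabeled pool, i.e. the examples on which hypotheses learned from the disjoint views disagree. A minimal sketch of that selection step, assuming scikit-learn logistic regression as a stand-in base learner and a hypothetical helper name contention_points:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def contention_points(X_lab, y, X_unl, view1, view2):
    """Return indices of unlabeled examples on which the two view-specific
    hypotheses disagree -- the candidates a co-testing learner would query.

    view1 / view2 are index lists selecting the two disjoint attribute sets.
    Logistic regression is only a stand-in base learner for this sketch.
    """
    h1 = LogisticRegression(max_iter=1000).fit(X_lab[:, view1], y)
    h2 = LogisticRegression(max_iter=1000).fit(X_lab[:, view2], y)
    disagree = h1.predict(X_unl[:, view1]) != h2.predict(X_unl[:, view2])
    return np.flatnonzero(disagree)

# Toy usage on random data (purely illustrative).
rng = np.random.default_rng(1)
X_lab, y = rng.standard_normal((40, 6)), rng.integers(0, 2, 40)
X_unl = rng.standard_normal((200, 6))
queries = contention_points(X_lab, y, X_unl, view1=[0, 1, 2], view2=[3, 4, 5])
print(len(queries), "contention points out of", len(X_unl))
```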
Most face recognition and tracking techniques employed in surveillance and human-computer interaction (HCI) systems rely on the assumption of a frontal view of the human face. In alternative approaches, knowledge of the orientation angle of the face in captured images can improve the performance of techniques based on non-frontal face views. In this paper, we propose a collaborative technique f...
A simple, stable and generic approach for estimation of relative positions and orientations of multiple rigidly coupled cameras is presented in this paper. The algorithm does not impose constraints on the field of view of the cameras and works even in the extreme case when the sequences from the different cameras are totally disjoint (i.e. when no part of the scene is captured by more than one ...
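The truncated snippet does not include the estimation procedure itself; a common way to relate rigidly coupled cameras whose views never overlap is to align their independently estimated ego-motions, in the spirit of hand-eye calibration. A rotation-only least-squares sketch along those lines (the function names and the synthetic check are assumptions, not the paper’s algorithm):

```python
import numpy as np

def random_rotation(rng):
    """Random rotation, used only for the synthetic check below."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

def coupled_rotation(rots_a, rots_b):
    """Rotation X between two rigidly coupled cameras from their ego-motions.

    Solves A_i X = X B_i in a least-squares sense, where A_i and B_i are the
    per-frame rotations estimated independently from each camera's own
    (possibly completely disjoint) sequence.  Uses the column-major identity
    vec(A X - X B) = (I kron A - B^T kron I) vec(X).
    """
    M = np.zeros((9, 9))
    I = np.eye(3)
    for A, B in zip(rots_a, rots_b):
        K = np.kron(I, A) - np.kron(B.T, I)
        M += K.T @ K
    vecX = np.linalg.svd(M)[2][-1]            # null vector of the stacked system
    X = vecX.reshape(3, 3, order='F')
    if np.linalg.det(X) < 0:                  # the null vector's sign is arbitrary
        X = -X
    U, _, Vt = np.linalg.svd(X)               # project onto SO(3)
    return U @ Vt

# Synthetic check: B_i = X^T A_i X for a ground-truth coupling rotation X.
rng = np.random.default_rng(2)
R_true = random_rotation(rng)
rots_a = [random_rotation(rng) for _ in range(10)]
rots_b = [R_true.T @ A @ R_true for A in rots_a]
print(np.allclose(coupled_rotation(rots_a, rots_b), R_true))
```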
An approach for estimating 3D body pose from multiple, uncalibrated views is proposed. First, a mapping from image features to 2D body joint locations is computed using a statistical framework that yields a set of several body pose hypotheses. The concept of a “virtual camera” is introduced that makes this mapping invariant to translation, image-plane rotation, and scaling of the input. As a co...
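The ‘virtual camera’ construction itself is not given in the truncated snippet; the sketch below only illustrates the kind of similarity normalization that removes translation, image-plane rotation and scale from a 2D input (the function name and the principal-axis convention are assumptions):

```python
import numpy as np

def virtual_camera_normalize(points_2d):
    """Similarity-normalize 2D points: remove translation, scale and
    image-plane rotation.  An illustration of the invariance only; the
    paper's construction may differ."""
    pts = np.asarray(points_2d, dtype=float)
    centred = pts - pts.mean(axis=0)                       # translation
    centred /= np.sqrt((centred ** 2).sum(axis=1).mean())  # scale (unit RMS radius)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    V = vt.T
    if np.linalg.det(V) < 0:                               # keep a pure rotation
        V[:, -1] *= -1
    return centred @ V                                     # principal axis -> x-axis

# A rotated, scaled, shifted copy normalizes to the same shape (up to axis signs).
rng = np.random.default_rng(3)
pts = rng.standard_normal((15, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts2 = 2.5 * pts @ R.T + np.array([4.0, -1.0])
a, b = virtual_camera_normalize(pts), virtual_camera_normalize(pts2)
print(np.allclose(np.abs(a), np.abs(b)))
```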
Multiple cameras are needed to completely cover an environment for monitoring activity. To track people successfully in multiple perspective imagery, one needs to establish correspondence between objects captured in multiple cameras. We present a system for tracking people in multiple uncalibrated cameras. The system is able to discover spatial relationships between the camera fields of view an...
This paper gives a practical algorithm for the self-calibration of a camera from several views. The method involves non-iterative techniques for finding an initial calibration for the camera, followed by least-squares iteration to an optimum solution. At the same time, a scaled Euclidean reconstruction of the scene appearing in the images is computed.
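The two-stage structure described here (a closed-form initial estimate followed by least-squares iteration) can be illustrated generically; the sketch below refines a deliberately simplified pinhole model by minimizing reprojection error with SciPy and is not the paper’s self-calibration algorithm (the parameterization and function names are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, X):
    """Minimal pinhole projection used only for this sketch.

    params = [f, rx, ry, rz, tx, ty, tz]: focal length, Rodrigues rotation
    vector and translation; principal point at the origin, square pixels.
    """
    f, rvec, t = params[0], params[1:4], params[4:7]
    theta = np.linalg.norm(rvec)
    K = np.zeros((3, 3))
    if theta > 1e-12:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    Xc = X @ R.T + t
    return f * Xc[:, :2] / Xc[:, 2:3]

def refine(params0, X, x_obs):
    """Least-squares iteration from the closed-form initial estimate."""
    residual = lambda p: (project(p, X) - x_obs).ravel()
    return least_squares(residual, params0).x

# Synthetic check: start from a perturbed guess and refine on noiseless data.
rng = np.random.default_rng(4)
true = np.array([800.0, 0.1, -0.2, 0.05, 0.3, -0.1, 5.0])
X = rng.standard_normal((50, 3))
x_obs = project(true, X)
init = true + rng.normal(scale=0.05, size=7) * np.array([50, 1, 1, 1, 1, 1, 1])
p_hat = refine(init, X, x_obs)
print(np.allclose(project(p_hat, X), x_obs, atol=1e-6))   # reprojection error vanishes
```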