SIA OpenIR
Visual-GPS: Ego-Downward and Ambient Video based Person Location Association
Authors: Yang L (杨亮)1,4; Jiang, Hao2; Huo, Zhouyuan3
Department: Laboratory of Process Equipment and Intelligent Robotics (工艺装备与智能机器人研究室)
Conference Name: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Conference Date: June 16-20, 2019
Conference Place: Long Beach, CA
Source Publication: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2019)
Publisher: IEEE
Publication Place: New York
Publication Date: 2019
Pages: 371-380
Indexed By: EI; CPCI (ISTP)
EI Accession Number: 20201608480602
WOS ID: WOS:000569983600044
Contribution Rank: 1
ISSN: 2160-7508
ISBN: 978-1-7281-2506-0
Abstract: In a crowded and cluttered environment, identifying a particular person is a challenging problem. Current identification approaches are not able to handle such dynamic environments. In this paper, we tackle the problem of identifying and tracking a person of interest in a crowded environment using egocentric and third-person-view videos. We propose a novel method (Visual-GPS) to identify, track, and localize the person who is capturing the egocentric video, using joint analysis of imagery from both videos. The output of our method is the bounding box of the target person detected in each frame of the third-person view, together with the person's 3D metric trajectory. At first glance, the views of the two cameras are quite different; this paper offers an insight into how they are correlated. Our proposed method uses several different cues: in addition to RGB images, it takes advantage of both body motion and action features to correlate the two views. We track and localize the person by finding the most correlated individual in the third-person view. Furthermore, the target person's 3D trajectory is recovered based on the mapping between 2D and 3D body joints. Our experiments confirm the effectiveness of the ETVIT network and show an 18.32% improvement in detection accuracy over the baseline methods.
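As the abstract describes, the person is localized by finding the individual in the third-person (ambient) view whose motion is most correlated with the egocentric video. The sketch below is only an illustrative stand-in for that matching step, assuming per-frame motion/action feature sequences have already been extracted for the ego video and for each candidate track; the paper's ETVIT network and feature extraction are not reproduced here, and all names are hypothetical.

import numpy as np

def correlation_score(ego_feats: np.ndarray, track_feats: np.ndarray) -> float:
    """Cosine similarity between zero-mean, flattened feature sequences.

    ego_feats, track_feats: arrays of shape (T, D) with per-frame motion/action
    features over the same T frames. A simple stand-in for the learned
    correlation in the paper, not the actual ETVIT network.
    """
    a = ego_feats.ravel() - ego_feats.mean()
    b = track_feats.ravel() - track_feats.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def associate_ego_to_track(ego_feats: np.ndarray, candidate_tracks: dict) -> str:
    """Return the id of the third-person-view track most correlated with the ego video."""
    scores = {tid: correlation_score(ego_feats, feats)
              for tid, feats in candidate_tracks.items()}
    return max(scores, key=scores.get)

# Toy usage: 3 candidate tracks, 30 frames, 16-D features per frame.
rng = np.random.default_rng(0)
ego = rng.normal(size=(30, 16))
tracks = {"person_0": rng.normal(size=(30, 16)),
          "person_1": ego + 0.1 * rng.normal(size=(30, 16)),  # correlated with ego
          "person_2": rng.normal(size=(30, 16))}
print(associate_ego_to_track(ego, tracks))  # expected: "person_1"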
Language: English
Document Type: Conference paper
Identifier: http://ir.sia.cn/handle/173321/27662
Collection: Laboratory of Process Equipment and Intelligent Robotics (工艺装备与智能机器人研究室)
Corresponding Author: Yang L (杨亮)
Affiliation1.Robotics Lab, The City College of New York, City University, New York, USA
2.Microsoft, Redmond, USA
3.University of Pittsburgh, Pittsburgh, USA
4.State Key Laboratory of Robotics, University of Chinese Academy of Sciences, China
Recommended Citation
GB/T 7714
Yang L, Jiang H, Huo Z. Visual-GPS: Ego-Downward and Ambient Video based Person Location Association[C]. New York: IEEE, 2019: 371-380.
Files in This Item:
File Name/Size: Visual-GPS_ Ego-Downward and Ambient Video based Person Location Association.pdf (1056 KB)
Format: Adobe PDF
DocType: Conference paper
Access: Open Access
License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.