Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu.
Published:
Code for our paper “DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion” is now on GitHub - CODE
Published:
Our team (PLUTO) just got back from a successful run at the DARPA STIX in Colorado.
Published:
Two papers that I was a part of:
Published:
Code for our paper “Real Time Dense Depth Estimation by Fusing Stereo with Sparse Depth Measurements” is now on GitHub - CODE
Published:
I was invited to speak at the NVIDIA GPU Technology Conference as part of their NVIDIA Jetson AGX Xavier Developer Day. Here is a video from the talk. The talk centers on the use of the NVIDIA Jetson platform on our quadrotors, along with some information about the autonomous UAV software stack designed at our lab.
Published:
Ever since the Jetson Xavier was announced, I’ve been itching to get my hands on one to put it through its paces. Thanks to James over at Ghost Robotics, I finally get to play with one of these. I’ve spent a fair amount of time with the Jetson TX1 and Jetson TX2, so I will be making direct comparisons to the Xavier’s predecessor, the TX2.
Published:
3D pose tracking of infants in occluding play settings
Published:
Semantic segmentation for fruit detection and counting
Published:
GPU-accelerated dense stereo semi-global matching (SGM) on the NVIDIA Jetson TX2, using CUDA, OpenCV and OpenVX
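For context, here is a minimal CPU-side sketch of the same algorithm using OpenCV’s built-in StereoSGBM. It only illustrates semi-global matching; it is not the project’s CUDA/OpenVX implementation, and the image file names and parameter values are placeholders.

```python
# Illustrative semi-global matching with OpenCV's StereoSGBM (CPU).
# Not the project's CUDA/OpenVX code; file names are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,          # must be a multiple of 16
    blockSize=block,
    P1=8 * block * block,       # SGM smoothness penalties
    P2=32 * block * block,
    mode=cv2.STEREO_SGBM_MODE_HH,
)

# compute() returns disparities in 16.4 fixed point, so divide by 16
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("disparity.png", vis)
```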
Published:
An old project from my undergrad - an Arduino and Leap Motion based wireless, gesture-controlled robotic arm
Published:
The goal of the FLA program is to explore non-traditional perception and autonomy methods that could enable a new class of algorithms for minimalistic high-speed navigation in cluttered environments.
Published in IEEE Robotics and Automation Letters (Volume: 2, Issue: 2, April 2017), 2017
This paper describes a deep learning based fruit counting pipeline that accurately counts fruit in unstructured environments.
Recommended citation: Chen, S.W. et al. (2017). "Counting Apples and Oranges with Deep Learning: A Data-Driven Approach." IEEE Robotics and Automation Letters, 2(2). https://ieeexplore.ieee.org/abstract/document/7814145/
Published in 2017 International Conference on Rehabilitation Robotics (ICORR), 2017
This paper describes the design and implementation of a multiple-view stereoscopic 3D vision system and a supporting infant tracker pipeline.
Recommended citation: Shivakumar, S.S. et al. (2017). "Stereo 3D Tracking of Infants in Natural Play Conditions." 2017 International Conference on Rehabilitation Robotics (ICORR). https://ieeexplore.ieee.org/document/8009353/
Published in IEEE Robotics and Automation Letters (Volume: 3, Issue: 3, July 2018), 2018
In this study, we propose an unsupervised learning algorithm that trains a deep convolutional neural network to estimate planar homographies (see the illustrative sketch after the citation below).
Recommended citation: Nguyen, T. et al. (2018). "Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model." IEEE Robotics and Automation Letters, 3(3). https://ieeexplore.ieee.org/document/8302515/
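As a rough illustration of the unsupervised idea above, the sketch below warps one image with a predicted homography and measures the photometric error against the other image. In the paper the warp is differentiable so such an error can train the network; here cv2.warpPerspective and the function name photometric_l1 are only assumptions used for the illustration.

```python
# Illustrative photometric error for a predicted homography.
# H_pred is assumed to be a 3x3 NumPy array mapping img_a onto img_b.
import cv2
import numpy as np

def photometric_l1(img_a, img_b, H_pred):
    """Mean L1 error between img_b and img_a warped by the predicted homography."""
    h, w = img_b.shape[:2]
    warped = cv2.warpPerspective(img_a, H_pred, (w, h))
    valid = warped > 0  # drop pixels filled with the zero border after warping
    return float(np.mean(np.abs(warped[valid].astype(np.float32) -
                                img_b[valid].astype(np.float32))))
```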
Published in IEEE Robotics and Automation Letters (27 February 2019) , 2019
We present a cheap, lightweight, and fast fruit counting pipeline. Our pipeline relies only on a monocular camera.
Recommended citation: Liu, X. et al. (2019). "Monocular Camera Based Fruit Counting and Mapping with Semantic Data Association." IEEE Robotics and Automation Letters. https://ieeexplore.ieee.org/document/8653965/
Published in 2019 International Conference on Robotics and Automation (ICRA), 2019
We present an approach to depth estimation that fuses information from a stereo pair with sparse range measurements derived from a LIDAR sensor or a range camera (see the illustrative sketch after the citation below).
Recommended citation: S. S. Shivakumar, K. Mohta, B. Pfrommer, V. Kumar and C. J. Taylor, "Real Time Dense Depth Estimation by Fusing Stereo with Sparse Depth Measurements," 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 2019, pp. 6482-6488. doi: 10.1109/ICRA.2019.8794023 https://ieeexplore.ieee.org/abstract/document/8794023
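The sketch below only illustrates how sparse range measurements relate to stereo disparity (d = f·b / z) by overriding the stereo estimate at pixels that have a trusted measurement; it is not the fusion scheme from the paper, and the function name and arguments are assumptions.

```python
# Illustrative override of stereo disparities with sparse metric depth.
# Assumes sparse_depth has already been projected into the left camera frame.
import numpy as np

def fuse_sparse_depth(disparity, sparse_depth, fx, baseline):
    """disparity: HxW stereo disparities in pixels;
    sparse_depth: HxW metric depths, 0 where there is no measurement."""
    fused = disparity.astype(np.float32).copy()
    valid = sparse_depth > 0
    fused[valid] = fx * baseline / sparse_depth[valid]  # d = f * b / z
    return fused
```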
Published in 2019 International Conference on Robotics and Automation (ICRA), 2019
In this paper, we describe the Open Vision Computer (OVC), which was designed to support high-speed, vision-guided autonomous drone flight.
Recommended citation: M. Quigley et al., "The Open Vision Computer: An Integrated Sensing and Compute System for Mobile Robots," 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 2019, pp. 1834-1840. doi: 10.1109/ICRA.2019.8794472 https://ieeexplore.ieee.org/abstract/document/8794472
Published in IEEE Robotics and Automation Letters (Volume: 4, Issue: 4, Oct. 2019), 2019
Real-time semantic image segmentation on platforms subject to size, weight, and power constraints is a key area of interest for air surveillance and inspection.
Recommended citation: T. Nguyen et al., "MAVNet: An Effective Semantic Segmentation Micro-Network for MAV-Based Tasks," in IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3908-3915, Oct. 2019. doi: 10.1109/LRA.2019.2928734 https://ieeexplore.ieee.org/document/8764006