Code for our paper “DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion” is now on GitHub: CODE
Our team (PLUTO) just got back from a successful run at the DARPA STIX in Colorado.
Two papers I co-authored:
Code for our paper “Real Time Dense Depth Estimation by Fusing Stereo with Sparse Depth Measurements” is now on GitHub: CODE
I was invited to speak at the NVIDIA GPU Technology Conference at their NVIDIA Jetson AGX Xavier Developer Day. Here is a video from the talk. The talk centers on the use of the NVIDIA Jetson platform on our quadrotors, along with some information about the autonomous UAV software stack designed at our lab.
PyTorch Scribble Pad
Ever since the Jetson Xavier was announced, I’ve been itching to get my hands on one and put it through its paces. Thanks to James over at Ghost Robotics, I finally got the chance. I’ve spent a fair amount of time with the Jetson TX1 and Jetson TX2, and I will be making direct comparisons to the Xavier’s predecessor, the TX2.
3D pose tracking of infants in occluding play settings
Semantic segmentation for fruit detection and counting
GPU accelerated dense stereo semi-global matching (SGM) on the NVIDIA Jetson TX2 using CUDA, OpenCV and OpenVX
An old project from my undergrad: an Arduino and Leap Motion based wireless gesture-controlled robotic arm
The goal of the FLA program is to explore non-traditional perception and autonomy methods that could enable a new class of algorithms for minimalistic high-speed navigation in cluttered environments.
Published in IEEE Robotics and Automation Letters (Volume: 2, Issue: 2, April 2017), 2017
This paper describes a fruit counting pipeline based on deep learning that accurately counts fruit in unstructured environments.
Recommended citation: Chen, S.W. (2017). "Counting Apples and Oranges with Deep Learning: A Data-Driven Approach." IEEE Robotics and Automation Letters, 2(2). https://ieeexplore.ieee.org/abstract/document/7814145/
Published in 2017 International Conference on Rehabilitation Robotics (ICORR), 2017
This paper describes the design and implementation of a multiple-view stereoscopic 3D vision system and a supporting infant tracker pipeline.
Recommended citation: Shivakumar, S.S. (2017). "Stereo 3D Tracking of Infants in Natural Play Conditions." 2017 International Conference on Rehabilitation Robotics (ICORR). https://ieeexplore.ieee.org/document/8009353/
Published in IEEE Robotics and Automation Letters (Volume: 3, Issue: 3, July 2018), 2018
In this study, we propose an unsupervised learning algorithm that trains a Deep Convolutional Neural Network to estimate planar homographies.
Recommended citation: Nguyen, T. (2018). "Unsupervised Deep Homography: A Fast and Robust Homography Estimation Model." IEEE Robotics and Automation Letters, 3(3). https://ieeexplore.ieee.org/document/8302515/
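For context on what the network in this paper learns to estimate: a planar homography maps points between two views of a plane, and the classical baseline is the Direct Linear Transform (DLT) from four or more point correspondences. Below is a minimal NumPy sketch of that classical estimator — not the paper's method, and the function name is illustrative.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 planar homography H (dst ~ H @ src, up to scale)
    via the Direct Linear Transform. src, dst: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints A h = 0
        # on the flattened homography h = (h1, ..., h9).
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the null-space direction of A: the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale so H[2, 2] == 1

# Synthetic check: build a known homography, project four points,
# and recover it from the correspondences.
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.05, 0.9, -3.0],
                   [1e-3, 2e-3, 1.0]])
src = np.array([[0., 0.], [100., 0.], [100., 100.], [0., 100.]])
src_h = np.hstack([src, np.ones((4, 1))])      # homogeneous coordinates
dst_h = src_h @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:]              # perspective divide

H_est = estimate_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))
```

With exact correspondences the null space of the 8x9 constraint matrix is one-dimensional, so the SVD recovers the homography exactly up to scale; the deep approach in the paper replaces this geometric solve with a learned regressor that is robust to noisy or ambiguous correspondences.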
Published in IEEE Robotics and Automation Letters (27 February 2019), 2019
We present a cheap, lightweight, and fast fruit counting pipeline. Our pipeline relies only on a monocular camera.
Recommended citation: Liu, X. (2019). "Monocular Camera Based Fruit Counting and Mapping with Semantic Data Association." IEEE Robotics and Automation Letters. https://ieeexplore.ieee.org/document/8653965/