Carefully integrating physical constraints into learning can dramatically improve data efficiency. Specifically, we study how motion facilitates visual information processing for machine learning. Using vision sensors such as RGB cameras, drone-mounted cameras, and event cameras, we develop learning mechanisms that are capable of acquiring an understanding of a scene's structure from minimal data - e.g. scene depth, object distances, and camera and object motions.
P. Bideau, E. Learned-Miller, C. Schmid, and K. Alahari
The Right Spin: Learning Object Motion from Rotation-Compensated Flow Fields
International Journal of Computer Vision, Jan. 2024
DOI: 10.1007/s11263-023-01859-x
M. Halawa, O. Hellwich, and P. Bideau
Action-Based Contrastive Learning for Trajectory Prediction
European Conference on Computer Vision (ECCV), Oct. 2022
DOI: 10.1007/978-3-031-19842-7_9
P. Bideau, A. RoyChowdhury, R.R. Menon, and E. Learned-Miller
The Best of Both Worlds: Combining CNNs and Geometric Constraints for Hierarchical Motion Segmentation
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2018
DOI: 10.1109/CVPR.2018.00060
P. Bideau and E. Learned-Miller
A Detailed Rubric for Motion Segmentation
arXiv preprint, Oct. 2016
arXiv: 1610.10033 [cs.CV]