CSCE Capstone
Student Site for Individual and Collaborative Activities
Team 13 – Action Recognition
Team Members:
Garrett Bartlow
Braxton Parker
Daniel Miao
Joshua Stadtmueller
Jonathan Zamudio
Sponsor/Mentor:
Dr. Khoa Luu
Description:
Automatic action recognition is one of the primary tasks in video understanding, with practical applications such as human behavior analysis, virtual reality, and gesture recognition. Advances in artificial intelligence have driven the study of deep learning, a machine learning technique inspired by the structure of the human brain, and many deep learning methods have been proposed to address action recognition. The objective of this project is to apply the techniques of the Temporal Shift Module (TSM) to solve a real-world problem in automatic action recognition. This will be done using technologies such as deep learning libraries (PyTorch), computer vision and video processing (OpenCV), and Scikit-Learn. The goal is that a successful application of TSM will inspire or extrapolate to applications for more advanced problems, providing a clear and non-trivial benefit to the community.
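The core idea behind TSM is to shift a fraction of the feature channels along the temporal dimension, so that a 2D backbone can exchange information between neighboring frames at near-zero cost. A minimal sketch of that shift operation, written here in NumPy for clarity (the real module operates on PyTorch tensors inside the network, and the channel layout and shift fraction below are illustrative assumptions):

```python
import numpy as np

def temporal_shift(x, shift_div=8):
    """Shift a fraction of channels across the time axis (TSM-style).

    x: array of shape (T, C, H, W) -- per-frame feature maps.
    The first C//shift_div channels are shifted toward earlier time steps,
    the next C//shift_div toward later ones, and the rest are unchanged.
    Vacated positions are zero-filled, as in the original module.
    """
    t, c, h, w = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]              # shift from t+1 down to t
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]  # shift from t-1 up to t
    out[:, 2 * fold:] = x[:, 2 * fold:]         # remaining channels untouched
    return out
```

Because the shift is just memory movement with no multiplications, inserting it before each convolution lets the network model temporal relations while keeping the computation of a purely 2D model.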
Contact:
abparker@uark.edu
Tasks:
- Understand/gain background on using hand gesture inputs to control outside applications/software.
- Define exactly what actions/controls will be implemented.
- Determine design of GUI for hand gesture controller (front end)
- Determine design of how to implement hand gesture controller
- Implement basic version of GUI
- Implement the first hand gesture control
- Test/debug basic version of application
- Implement a more advanced version of GUI
- Implement the other twelve hand gesture controls
- Test more advanced version of application
- Put finishing touches on application/allow time for unforeseen errors
- Document everything we have done
- Work on demonstration of our application