Vocalytics analyzes your body language to make you a better public speaker

Vocalytics, one of the many projects that came out of our Disrupt SF hackathon this weekend, wants to make you a better public speaker.

The project uses machine learning to analyze videos of your performances and gives you feedback on your body language. The team trained the system to look for your hand gestures and pose, but the plan is to expand the project to also look at your eye gaze, facial expressions and other non-verbal cues.

The ultimate goal of Vocalytics, which was built by Danish Dhamani and Paritosh Gupta, is an A.I. that gives you feedback on your body language on par with what you'd get from a professional coach. A typical speaker training session with a human coach generally involves a deep analysis of a previous performance, and analyzing your body language is always part of that.

The team used a Microsoft data set to train its algorithm and Dhamani tells me that this is one of the first times this data has been used for this kind of application. In addition, Vocalytics built on the Caffe deep learning framework, which came out of UC Berkeley, and Facebook's React.js library.
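The article doesn't spell out the pipeline, but a rough sketch of per-frame pose estimation with a pre-trained Caffe model, run through OpenCV's DNN module, gives a sense of what this kind of analysis involves. Everything specific below (the OpenPose MPII model files, the 368x368 input size, the 15-keypoint layout, the confidence threshold) is an assumption for illustration, not a description of what Vocalytics actually runs:

```python
# Sketch: per-frame body keypoint extraction from a talk recording with a
# pre-trained Caffe pose model. Model files and keypoint count assume the
# public OpenPose MPII release; Vocalytics' actual setup is not documented.
import cv2

PROTO = "pose_deploy_linevec_faster_4_stages.prototxt"  # assumed filenames
WEIGHTS = "pose_iter_160000.caffemodel"
N_KEYPOINTS = 15  # MPII body model: head, shoulders, elbows, wrists, etc.

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)
cap = cv2.VideoCapture("talk.mp4")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # The network expects a fixed-size, scaled input blob.
    blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368),
                                 (0, 0, 0), swapRB=False, crop=False)
    net.setInput(blob)
    heatmaps = net.forward()  # one confidence heatmap per keypoint

    points = []
    for i in range(N_KEYPOINTS):
        heatmap = heatmaps[0, i]
        _, conf, _, (x, y) = cv2.minMaxLoc(heatmap)
        # Map heatmap coordinates back to the original frame size.
        points.append((int(x * w / heatmap.shape[1]),
                       int(y * h / heatmap.shape[0])) if conf > 0.1 else None)
    # Gesture-level feedback (how much the wrists move, whether the arms
    # stay pinned to the torso) would be computed from these per-frame
    # keypoints over the length of the video.

cap.release()
```

In a system like the one the team describes, the interesting work happens downstream of this loop, in turning keypoint trajectories into the kind of coaching feedback a human trainer would give.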

One thing Vocalytics can’t do is critique your actual speech, but all too often, it’s your body language that distracts from the rest of your performance anyway. Still, the good news is that SpeechCoach.ai, another hackathon project, does analyze your speech, and with Orai, the team behind Vocalytics has already built a full speech-coaching app that’s in Apple’s App Store today (and will come to Android in the near future).