How to train/test/validate hierarchical classifiers?

I am writing an algorithm that detects activities from wearable data. I would like to try out a hierarchical approach (Local Classifier Per Parent Node structure). At the first level, I determine the intensity of the activity (1 classifier), and at the second level I determine the activity label (3 classifiers). However, I am struggling with how to approach the training/testing/validation of such a structure. What I did now is: Split data into 2 …
Category: Data Science
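A Local Classifier Per Parent Node setup can be sketched roughly as below: one root classifier for intensity, then one child classifier per intensity level, each trained only on the samples of its parent. The data, label layout, and RandomForest choice here are all illustrative assumptions, not taken from the question.

```python
# LCPN sketch: level 1 predicts intensity, level 2 has one classifier
# per intensity (parent) node. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                          # toy wearable features
intensity = rng.integers(0, 3, size=300)               # 3 intensity levels
activity = intensity * 3 + rng.integers(0, 3, 300)     # activities nested per level

X_tr, X_te, int_tr, int_te, act_tr, act_te = train_test_split(
    X, intensity, activity, test_size=0.3, random_state=0)

# Level 1: a single classifier for intensity.
root = RandomForestClassifier(random_state=0).fit(X_tr, int_tr)

# Level 2: one classifier per parent node, trained only on the samples
# belonging to that parent.
children = {}
for parent in np.unique(int_tr):
    mask = int_tr == parent
    children[parent] = RandomForestClassifier(random_state=0).fit(
        X_tr[mask], act_tr[mask])

# Inference: route each test sample through its predicted parent.
pred_int = root.predict(X_te)
pred_act = np.array([children[p].predict(x.reshape(1, -1))[0]
                     for p, x in zip(pred_int, X_te)])
```

Evaluating the same held-out split at both levels (and end-to-end, so level-1 mistakes propagate as they would in deployment) is the usual way to test such a structure.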

Pedestrian cell-phone usage recognition

Not sure if this is the correct SE to ask this. If not, kindly refer me to the correct one. I am following this work, as well as a small number of other works on the subject. I am looking for a ready-to-use implementation of a model that can classify distracted pedestrians. I want to be able to upload it onto an Arduino UNO, but I can convert it from any programming language, so I wish to have something that …
Category: Data Science

Division of data into training and validation sets

I have a multi-sensor dataset for activities of daily living. It contains data from 10 volunteers, each performing 9 activities. Each volunteer wears 6 sensors on their body, with the recorded data types being quaternions, acceleration, and angular velocity. For each volunteer, I have a total of 7 CSV files, i.e. 6 for the sensors and one for annotation. Now, I would like to divide the data of 7 volunteers into training and validation and use the remaining 3 for testing. For …
Category: Data Science
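Holding out whole volunteers (a subject-wise split) avoids the same person's data leaking between training and testing. A minimal sketch with scikit-learn's `GroupShuffleSplit`, using invented data in place of the real CSV files:

```python
# Subject-wise split: 3 whole volunteers go to the test set, the other 7
# stay for training/validation. Features and labels are synthetic.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))              # toy sensor features
y = rng.integers(0, 9, size=n)           # 9 activity labels
subject = rng.integers(1, 11, size=n)    # volunteer IDs 1..10

# With groups, an integer test_size is the number of held-out groups.
gss = GroupShuffleSplit(n_splits=1, test_size=3, random_state=0)
trainval_idx, test_idx = next(gss.split(X, y, groups=subject))

test_subjects = set(subject[test_idx])
trainval_subjects = set(subject[trainval_idx])
```

The same idea applied again to the 7 remaining volunteers gives a subject-wise train/validation split, keeping the evaluation honest at every stage.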

Action Recognition for multiple objects and localization

I want to ask a question regarding action detection on video with proposed frames. I've used a Temporal 3D ConvNet for action recognition on video. I successfully trained it and can recognize actions in videos. When I do inference, I just collect 20 frames from the video, feed them to the model, and it gives me the result. The point is that events in different videos are not similar in size. Some of them cover 90% of the frame, but some …
Category: Data Science
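One common way to handle events of very different spatial sizes is to crop each proposed region out of the 20-frame clip and resize it to the model's input resolution, so small and large events reach the classifier at the same scale. A minimal sketch with assumed shapes (the box would come from a detector or region proposal, which is not part of the question):

```python
# Crop a proposed box out of every frame of a 20-frame clip and resize,
# so the event fills the model input regardless of its size on screen.
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize of an (H, W, 3) image to (size, size, 3)."""
    ys = np.linspace(0, img.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, size).astype(int)
    return img[ys][:, xs]

def clip_from_box(frames, box, size=112):
    """frames: (20, H, W, 3) clip; box: (x1, y1, x2, y2) pixel coords."""
    x1, y1, x2, y2 = box
    return np.stack([resize_nn(f[y1:y2, x1:x2], size) for f in frames])

frames = np.zeros((20, 240, 320, 3), dtype=np.uint8)  # toy clip
clip = clip_from_box(frames, (40, 30, 200, 210))      # one event proposal
```

In practice a library resize (e.g. bilinear) would replace `resize_nn`; the point is feeding the 3D ConvNet a per-proposal clip rather than the whole frame.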

Detecting punch type using CoreML Activity classifier

I’m trying to train an activity classifier (made by Apple) to detect which kind of punch is thrown during boxing training. Accelerations are taken directly from an Arduino Nano 33 using Bluetooth Low Energy. I have acceleration on 3 axes and gyroscope data on 3 axes at a 100 Hz sample rate. A punch, depending on the experience of the boxer, lasts from 0.3 to 0.7 seconds. To get accelerations I have a special configuration in my application that produces CSV …
Category: Data Science
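Since a punch spans 30–70 samples at 100 Hz, one simple segmentation strategy is to cut a fixed ~0.8 s window around each acceleration peak and classify those windows. A sketch with an invented threshold and synthetic data (the real threshold would be tuned on recorded punches):

```python
# Cut fixed-length windows around acceleration peaks. At 100 Hz, an
# 80-sample window (0.8 s) covers a 0.3-0.7 s punch with some margin.
import numpy as np

FS = 100      # sample rate in Hz
WIN = 80      # window length in samples (0.8 s)
THRESH = 2.0  # peak magnitude threshold, assumed for illustration

def punch_windows(acc_xyz, thresh=THRESH, win=WIN):
    """acc_xyz: (n, 3) accelerometer samples. Returns (win, 3) segments."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    windows, i = [], 0
    while i < len(mag):
        if mag[i] > thresh:
            start = max(0, i - win // 2)     # centre the window on the peak
            seg = acc_xyz[start:start + win]
            if len(seg) == win:
                windows.append(seg)
            i = start + win                  # skip past this punch
        else:
            i += 1
    return windows

# Toy signal: quiet baseline with two isolated spikes.
acc = np.zeros((600, 3))
acc[150] = [0.0, 0.0, 5.0]
acc[400] = [0.0, 0.0, 5.0]
segs = punch_windows(acc)
```

Each returned segment can then be fed to the activity classifier as one training or prediction sample.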

Activity recognition with binary sensors

I have a bunch of streams coming from a set of 28 binary sensors around a SmartHome, like this: Where: OBJECT: the name of a binary sensor; STATE: the state of that sensor at that precise instant (think Movement/Pressure/Open = 1 and NoMovement/NoPressure/Close = 0); ACTIVITY: the label I would like to predict, indicating the activity the human was doing; TIMESTAMP/HOUR: you know what they mean... The activities to be recognised are 24 and …
Category: Data Science
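A common first step with such event streams is to turn them into a per-event state vector: the last known state of all 28 sensors after each event, which any standard classifier can consume. The sensor names and events below are invented for illustration:

```python
# Convert an (timestamp, object, state) event stream into a (n, 28)
# matrix of sensor states, one row per event.
import numpy as np

SENSORS = [f"S{i:02d}" for i in range(28)]     # placeholder sensor names
IDX = {name: i for i, name in enumerate(SENSORS)}

def state_vectors(events):
    """events: iterable of (timestamp, object, state).
    Returns an (n, 28) matrix with the state of every sensor after each event."""
    state = np.zeros(len(SENSORS), dtype=int)
    rows = []
    for ts, obj, st in sorted(events):
        state[IDX[obj]] = st        # update the sensor that fired
        rows.append(state.copy())   # snapshot of the whole house
    return np.array(rows)

events = [(0, "S03", 1), (5, "S10", 1), (9, "S03", 0)]
X = state_vectors(events)
```

Each row, paired with the ACTIVITY label at that timestamp, gives a supervised dataset; time-of-day features from TIMESTAMP/HOUR are usually added alongside.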

About

Geeks Mental is a community that publishes articles and tutorials about Web, Android, Data Science, new techniques and Linux security.