Human Activity Recognition, or HAR for short, is the problem of predicting what a person is doing based on a trace of their movement captured by sensors.
The movements are often normal indoor activities such as standing, sitting, jumping, and going up stairs. The sensors are usually worn by the subject, for example in a smartphone or a vest, and typically record accelerometer data in three dimensions (x, y, z).
The idea is that once the subject’s activity is recognized and known, an intelligent computer system can then offer assistance.
It is a challenging problem because there is no clear analytical way to relate the sensor data to specific actions in a general way. It is also technically challenging because of the large volume of sensor data collected (e.g. tens or hundreds of observations per second) and the classical reliance on hand-crafted features and heuristics derived from this data when developing predictive models.
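To make the idea of hand-crafted features concrete, here is a minimal, illustrative sketch (my own example, not part of the pipeline below, which classifies raw samples directly): it slices a single accelerometer axis into fixed-length windows and computes simple summary statistics per window.

import numpy as np

def window_features(signal, window=52, step=26):
    # slice a 1-D signal into overlapping windows and compute
    # simple hand-crafted features (mean, std, min, max) per window
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)

# example: ten seconds of stand-in data at 52 Hz, windows of one second with 50% overlap
x = np.random.randn(520)
print(window_features(x).shape)   # (19, 4)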
Problem Description
The dataset “Activity Recognition from Single Chest-Mounted Accelerometer Data Set” was collected and made available by Casale, Pujol et al. from the University of Barcelona in Spain. It is freely available from the UCI Machine Learning repository:
https://archive.ics.uci.edu/ml/datasets/Activity+Recognition+from+Single+Chest-Mounted+Accelerometer
The dataset is comprised of un-calibrated accelerometer data from 15 different subjects, each performing 7 activities. Each subject wore a custom-developed chest-mounted accelerometer and data was collected at 52 Hz (52 observations per second).
Data Description
Un-calibrated Accelerometer Data are collected from 15 participants performing 7 activities. The dataset provides challenges for identification and authentication of people using motion patterns.
Data Set Information:
— The dataset collects data from a wearable accelerometer mounted on the chest
— Sampling frequency of the accelerometer: 52 Hz
— Accelerometer Data are Un-calibrated
— Number of Participants: 15
— Number of Activities: 7
— Data Format: CSV
Attribute Information:
— Data are separated by participant
— Each file contains the following information
—- Sequential number, x acceleration, y acceleration, z acceleration, label
— Labels are codified by numbers
— 1: Working at Computer
— 2: Standing Up, Walking and Going Up/Down Stairs
— 3: Standing
— 4: Walking
— 5: Going Up/Down Stairs
— 6: Walking and Talking with Someone
— 7: Talking while Standing
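To keep these codes readable later on, it can help to hold the label-to-name mapping in one place. The small dictionary below is a convenience of this write-up (it is not shipped with the dataset files); label 0 is included because unlabelled samples also appear in the raw data.

activity_names = {
    0: 'No activity / unlabelled',
    1: 'Working at Computer',
    2: 'Standing Up, Walking and Going Up/Down Stairs',
    3: 'Standing',
    4: 'Walking',
    5: 'Going Up/Down Stairs',
    6: 'Walking and Talking with Someone',
    7: 'Talking while Standing',
}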
Data Reading
data_dir = '/home/ubuntu/Documents/LearnersHeaven/content_preparation/course_content/development/case_studies/ML/data/HumanActivityRecogmnition/Activity Recognition from Single Chest-Mounted Accelerometer_'
Import libraries and tools
from glob import glob
import pandas as pd
%matplotlib inline

all_data = glob(data_dir + "/*.csv")
all_data[:3]
Output:
['/home/ubuntu/Documents/LearnersHeaven/content_preparation/course_content/development/case_studies/ML/data/HumanActivityRecogmnition/Activity Recognition from Single Chest-Mounted Accelerometer_/4.csv',
 '/home/ubuntu/Documents/LearnersHeaven/content_preparation/course_content/development/case_studies/ML/data/HumanActivityRecogmnition/Activity Recognition from Single Chest-Mounted Accelerometer_/2.csv',
 '/home/ubuntu/Documents/LearnersHeaven/content_preparation/course_content/development/case_studies/ML/data/HumanActivityRecogmnition/Activity Recognition from Single Chest-Mounted Accelerometer_/8.csv']
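Note that glob returns files in arbitrary order (the listing above starts with 4.csv), so the subject_id assigned during loading will not necessarily match the number in the file name. If that matters, one option (a sketch, assuming the files are named 1.csv to 15.csv as in this dataset) is to sort the paths numerically first:

import os

# sort the csv paths by the numeric part of their file names (1.csv, 2.csv, ...)
all_data = sorted(all_data, key=lambda p: int(os.path.splitext(os.path.basename(p))[0]))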
Load data set
def load_dataset(all_data):
    # load every subject file and stack them into one DataFrame
    frames = []
    for i, name in enumerate(all_data):
        df = pd.read_csv(name, header=None)
        df['subject_id'] = i + 1
        # drop the first column (the sequential sample number)
        frames.append(df.iloc[:, 1:])
    return pd.concat(frames, ignore_index=True)

subjects_df = load_dataset(all_data)
subjects_df.columns = ['x', 'y', 'z', 'label', 'subject_id']
subjects_df.head()
Output: the first five rows of the combined DataFrame, with columns x, y, z, label and subject_id (table not reproduced here).
print('Loaded %d subjects' % len(subjects_df.subject_id.unique()))
Output: Loaded 15 subjects
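As a quick sanity check, the number of samples per subject can also be inspected; something like the following (output omitted here) prints the row count for each participant and the approximate recording length in minutes at 52 Hz:

# number of samples recorded for each subject
print(subjects_df.groupby('subject_id').size())

# approximate recording length per subject in minutes at 52 Hz
print(subjects_df.groupby('subject_id').size() / 52 / 60)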
Plot a subject
from matplotlib import pyplot

# plot the x, y, z acceleration and activity label for a single subject
def plot_subject(subject):
    pyplot.figure()
    # create a plot for each column
    for col in range(subject.shape[1]):
        pyplot.subplot(subject.shape[1], 1, col + 1)
        pyplot.plot(subject[:, col])
    pyplot.show()

# plot activities for the first subject
plot_subject(subjects_df[subjects_df.subject_id == 1].iloc[:, :4].values)
Running the example creates a line plot for each variable for the first loaded subject. We can see some very large movement at the beginning of the sequence that may be an outlier or unusual behaviour that could be removed. We can also see that the subject performed some actions multiple times. For example, a closer look at the plot of the class variable (bottom plot) suggests the subject performed activities in the following order: 1, 2, 0, 3, 0, 4, 3, 5, 3, 6, 7. Note that activities 0 and 3 each appear more than once.
Plot Total Activity Durations
subjects = []
for k, values in subjects_df.groupby('subject_id'):
    subjects.append(values.iloc[:, :4].values)

# returns a list of dicts, where each dict has one sequence per activity
def group_by_activity(subjects, activities):
    grouped = [{a: s[s[:, -1] == a] for a in activities} for s in subjects]
    return grouped

# calculate the total duration in seconds of each activity for each subject
def calculate_durations(grouped, activities):
    freq = 52
    durations = [[len(s[a]) / freq for s in grouped] for a in activities]
    return durations

# plot the durations as a boxplot per activity
def plot_durations(grouped, activities):
    durations = calculate_durations(grouped, activities)
    pyplot.boxplot(durations, labels=activities)
    pyplot.show()

# group the data by activity and plot the durations
activities = [i for i in range(0, 8)]
grouped = group_by_activity(subjects, activities)
plot_durations(grouped, activities)
We can see that there are relatively few observations for activities 0 (no activity), 2 (standing up, walking and going up/down stairs), 5 (going up/down stairs) and 6 (walking and talking). We can also see that each subject spent a lot of time on activity 1 (working at computer) and activity 7 (talking while standing).
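The imbalance is easy to quantify directly from the labels; a one-liner such as the following (output omitted) shows the fraction of samples per activity across all subjects:

# fraction of all samples belonging to each activity label
print(subjects_df['label'].value_counts(normalize=True).sort_index())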
calculate_durations(grouped, activities)

# plot histograms of the x, y, z acceleration for each subject
def plot_subjects(subjects):
    pyplot.figure()
    # create a plot for each subject, sharing the x axis across subplots
    xaxis = None
    for i in range(len(subjects)):
        ax = pyplot.subplot(len(subjects), 1, i + 1, sharex=xaxis)
        if i == 0:
            xaxis = ax
        # plot a histogram of each accelerometer axis
        for j in range(subjects[i].shape[1] - 1):
            pyplot.hist(subjects[i][:, j], bins=100)
    pyplot.show()

plot_subjects(subjects)
Running the example creates a single figure with 15 plots, one per subject, and 3 histograms on each plot, one for each of the 3 axes of accelerometer data. The three colors blue, orange and green represent the x, y and z axes. This plot suggests that the distribution of each accelerometer axis is Gaussian, or very close to it, which may help with simple outlier detection and removal along each axis. The plot also helps to show both the relationship between the distributions within a subject and the differences in the distributions between the subjects.
Within each subject, a common pattern is for the x (blue) and z (green) data to be grouped together towards the left and the y data (orange) to sit separately towards the right. The distribution of y is often sharper, whereas the distributions of x and z are flatter.
Across subjects, we can see a general clustering of values around 2,000 (whatever the units are), although with a lot of spread. This marked difference in distributions does suggest the need to at least standardize (shift to zero mean and unit variance) the data per axis and per subject before any cross-subject modelling is performed.
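Such a per-subject, per-axis standardization can be done directly with a grouped transform; a minimal sketch is shown below (it is not applied in the modelling that follows, which uses the raw values):

# standardize x, y, z to zero mean and unit variance within each subject
standardized = subjects_df.copy()
standardized[['x', 'y', 'z']] = (
    subjects_df.groupby('subject_id')[['x', 'y', 'z']]
    .transform(lambda col: (col - col.mean()) / col.std())
)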
Defining the variables
X = subjects_df[['x', 'y', 'z']]
y = subjects_df['label']
Train/test split
# evaluate the model by splitting the data into train and test sets
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=12)
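One thing to keep in mind with a purely random split is that samples from the same subject end up in both the training and test sets, which tends to make the problem easier than recognizing the activities of an unseen person. If a subject-independent evaluation is wanted, a grouped split can be used instead; this is only a sketch and is not what is done in the remainder of this case study:

from sklearn.model_selection import GroupShuffleSplit

# hold out entire subjects rather than random rows
gss = GroupShuffleSplit(n_splits=1, test_size=0.30, random_state=12)
train_idx, test_idx = next(gss.split(X, y, groups=subjects_df['subject_id']))
X_train_g, X_test_g = X.iloc[train_idx], X.iloc[test_idx]
y_train_g, y_test_g = y.iloc[train_idx], y.iloc[test_idx]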
Modeling
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=5)
model.fit(X_train, y_train)

Output:

RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
                       max_depth=None, max_features='auto', max_leaf_nodes=None,
                       min_impurity_decrease=0.0, min_impurity_split=None,
                       min_samples_leaf=1, min_samples_split=2,
                       min_weight_fraction_leaf=0.0, n_estimators=5, n_jobs=None,
                       oob_score=False, random_state=None, verbose=0,
                       warm_start=False)

# use the model to make predictions with the test data
predicted = model.predict(X_test)

# import metrics and generate evaluation metrics
from sklearn import metrics
print(metrics.accuracy_score(y_test, predicted))
Output:
0.6952820511046259
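Because classes 1 and 7 dominate the data, plain accuracy is largely driven by those two activities. A complementary number worth checking (a sketch; not reported in the original run, and requiring scikit-learn 0.20 or later) is the balanced accuracy, i.e. recall averaged over classes:

# accuracy averaged over classes, so minority activities count equally
print(metrics.balanced_accuracy_score(y_test, predicted))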
# print out the confusion matrix
print(metrics.confusion_matrix(y_test, predicted))
Output:
[[   117    502     22     79    166     11     19    180]
 [   272 160448   2903   3187   6897    639    351   8123]
 [    14   5364   3061   1065   2809    123     90   1940]
 [    55   4260    896  28911  13701   2166   1469  13322]
 [   123   8671   1589  11863  69524   1950   1059  12247]
 [    23   1441    182   3323   5457   2113    491   2427]
 [    25    750    130   2140   2773    644   3077   4887]
 [   125   9465   1513  12593  14535   1662   3435 134670]]

# print out the classification report and check the f1 scores
print(metrics.classification_report(y_test, predicted))
Output:
precision recall f1-score support
0 0.16 0.11 0.13 1096
1 0.84 0.88 0.86 182820
2 0.30 0.21 0.25 14466
3 0.46 0.45 0.45 64780
4 0.60 0.65 0.62 107026
5 0.23 0.14 0.17 15457
6 0.31 0.21 0.25 14426
7 0.76 0.76 0.76 177998
micro avg 0.70 0.70 0.70 578069
macro avg 0.46 0.42 0.44 578069
weighted avg 0.68 0.70 0.69 578069
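The per-class recall values in the report can also be recovered directly from the confusion matrix by normalizing each row by its sum; a small sketch:

import numpy as np

cm = metrics.confusion_matrix(y_test, predicted)
# each row is a true class; dividing the diagonal by the row sums gives per-class recall
recall_per_class = cm.diagonal() / cm.sum(axis=1)
print(np.round(recall_per_class, 2))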
Tree Visualisation
# extract a single tree from the forest
estimator = model.estimators_[2]

import pydotplus
from sklearn import tree
from IPython.display import Image, display

# export the tree as dot source (out_file=None returns the dot string)
dot_data = tree.export_graphviz(estimator,
                                out_file=None,
                                feature_names=X.columns.tolist(),
                                class_names=[str(c) for c in sorted(y.unique())],
                                rounded=True,
                                proportion=False,
                                precision=2,
                                filled=True)

# render the dot source and display it in the Jupyter notebook (requires Graphviz)
graph = pydotplus.graph_from_dot_data(dot_data)
display(Image(graph.create_png()))
graph.write_png('decTreeOutput.png')
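A full tree from this forest is very deep and the rendered image can be hard to read. If needed, export_graphviz accepts a max_depth argument to draw only the top few levels; a short sketch reusing the objects above:

# draw only the first three levels of the tree for readability
dot_small = tree.export_graphviz(estimator,
                                 out_file=None,
                                 max_depth=3,
                                 feature_names=X.columns.tolist(),
                                 class_names=[str(c) for c in sorted(y.unique())],
                                 filled=True,
                                 rounded=True)
display(Image(pydotplus.graph_from_dot_data(dot_small).create_png()))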