Clustering, the unsupervised machine learning technique of grouping similar data points together, is both useful and fairly efficient in most cases. The tricky part is often choosing the number of clusters to make (the K hyperparameter). The optimal number of clusters can vary dramatically depending on the characteristics of the data, the mix of variable types (numeric or categorical), how the data is normalized/encoded, and the distance metric used.

**For this notebook we're going to focus specifically on the following:**

- Optimizing the number of clusters (K hyperparameter) using Silhouette Scoring
- Utilizing an algorithm (DBSCAN) that automatically determines the number of clusters

**Software**

- Zeek Analysis Tools (ZAT): https://github.com/SuperCowPowers/zat
- Pandas: https://github.com/pandas-dev/pandas
- Scikit-Learn: http://scikit-learn.org/stable/index.html

**Techniques**

- One Hot Encoding: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html
- t-SNE: https://distill.pub/2016/misread-tsne/
- Kmeans: http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html
- Silhouette Score: https://en.wikipedia.org/wiki/Silhouette_(clustering)
- DBSCAN: https://en.wikipedia.org/wiki/DBSCAN

In [1]:

```
# Third Party Imports
import pandas as pd
import numpy as np
import sklearn
from sklearn.manifold import TSNE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans, DBSCAN
# Local imports
import zat
from zat.log_to_dataframe import LogToDataFrame
from zat.dataframe_to_matrix import DataFrameToMatrix
# Good to print out versions of stuff
print('ZAT: {:s}'.format(zat.__version__))
print('Pandas: {:s}'.format(pd.__version__))
print('Scikit Learn Version:', sklearn.__version__)
```

In [2]:

```
# Create a Pandas dataframe from the Zeek log
log_to_df = LogToDataFrame()
http_df = log_to_df.create_dataframe('../data/http.log')
# Print out the head of the dataframe
http_df.head()
```

Out[2]:

When we look at the HTTP records, some of the data is numerical and some is categorical, so we'll need a way of handling both data types in a generalized way. The ZAT DataFrameToMatrix class handles the details and mechanics of combining numerical and categorical data, and we'll use it below.

**We'll now use the scikit-learn style transformer class to convert the Pandas DataFrame to a numpy ndarray (matrix). The transformer class takes care of many low-level details:**

- Applies 'one-hot' encoding for the Categorical fields
- Normalizes the Numeric fields
- The class can be serialized for use in training and evaluation
- The categorical mappings are saved during training and applied at evaluation
- The normalized field ranges are stored during training and applied at evaluation
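To make those bullets concrete, here's a minimal sketch of the two core operations, one-hot encoding and min-max normalization, using plain pandas on a hypothetical toy frame. The actual DataFrameToMatrix class adds the heuristic type detection, serialization, and train/eval bookkeeping on top of this:

```python
import pandas as pd

# Hypothetical toy frame mixing a categorical and a numeric column
df = pd.DataFrame({'method': ['GET', 'POST', 'GET'],
                   'request_body_len': [0, 1024, 512]})

# One-hot encode the categorical field (each category becomes a 0/1 column)
encoded = pd.get_dummies(df, columns=['method'])

# Min-max normalize the numeric field into the [0, 1] range
col = 'request_body_len'
encoded[col] = (encoded[col] - encoded[col].min()) / (encoded[col].max() - encoded[col].min())
print(encoded)
```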

In [3]:

```
# We're going to pick some features that might be interesting
# some of the features are numerical and some are categorical
features = ['id.resp_p', 'method', 'resp_mime_types', 'request_body_len']
# Use the DataframeToMatrix class (handles categorical data)
# You can see below it uses a heuristic to detect category data. When doing
# this for real we should explicitly convert before sending to the transformer.
to_matrix = DataFrameToMatrix()
http_feature_matrix = to_matrix.fit_transform(http_df[features], normalize=True)
print('\nNOTE: The resulting numpy matrix has 12 dimensions based on one-hot encoding')
print(http_feature_matrix.shape)
http_feature_matrix[:1]
```

Out[3]:

In [6]:

```
# Plotting defaults
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 12.0
plt.rcParams['figure.figsize'] = 14.0, 7.0
```

"The silhouette value is a measure of how similar an object is to its own cluster (cohesion) compared to other clusters (separation). The silhouette ranges from -1 to 1, where a high value indicates that the object is well matched to its own cluster and poorly matched to neighboring clusters. If most objects have a high value, then the clustering configuration is appropriate. If many points have a low or negative value, then the clustering configuration may have too many or too few clusters."

In [8]:

```
from sklearn.metrics import silhouette_score

scores = []
clusters = range(2, 16)
for K in clusters:
    clusterer = KMeans(n_clusters=K)
    cluster_labels = clusterer.fit_predict(http_feature_matrix)
    score = silhouette_score(http_feature_matrix, cluster_labels)
    scores.append(score)

# Plot it out
pd.DataFrame({'Num Clusters': clusters, 'score': scores}).plot(x='Num Clusters', y='score')
```

Out[8]:

- 'Optimal': human intuition and interpretation are involved, so picking K is often partially subjective :)
- For large datasets, running an exhaustive search over K can be time consuming
- For large datasets, the maximum score often lands on a large K, so pick the 'knee' of the graph as your K instead
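One way to automate that choice is sketched below with made-up silhouette scores (in the notebook you'd feed in the `scores` list computed above; the 0.05 gain threshold is an arbitrary assumption): either take the K with the maximum score, or use a crude 'knee' rule that stops before the marginal gain gets small:

```python
import numpy as np

# Hypothetical silhouette scores for K = 2..9 (made-up numbers purely to
# illustrate the selection logic)
clusters = list(range(2, 10))
scores = [0.40, 0.55, 0.72, 0.74, 0.75, 0.75, 0.74, 0.73]

# Option 1: K with the maximum silhouette score
best_k = clusters[int(np.argmax(scores))]

# Option 2: crude 'knee' -- the last K before the marginal gain
# drops below an (arbitrary) threshold
gains = np.diff(scores)
knee_k = clusters[int(np.argmax(gains < 0.05))]
print('max-score K: {}  knee K: {}'.format(best_k, knee_k))
```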

In [9]:

```
# So we know that the highest (closest to 1) silhouette score is at 10 clusters
kmeans = KMeans(n_clusters=10).fit_predict(http_feature_matrix)
# TSNE is a great projection algorithm. In this case we're going from 12 dimensions to 2
projection = TSNE().fit_transform(http_feature_matrix)
# Now we can put our ML results back onto our dataframe!
http_df['cluster'] = kmeans
http_df['x'] = projection[:, 0] # Projection X Column
http_df['y'] = projection[:, 1] # Projection Y Column
```

In [10]:

```
# Now use dataframe group by cluster
cluster_groups = http_df.groupby('cluster')
# Plot the Machine Learning results
colors = {-1:'black', 0:'green', 1:'blue', 2:'red', 3:'orange', 4:'purple', 5:'brown', 6:'pink', 7:'lightblue', 8:'grey', 9:'yellow'}
fig, ax = plt.subplots()
for key, group in cluster_groups:
    group.plot(ax=ax, kind='scatter', x='x', y='y', alpha=0.5, s=250,
               label='Cluster: {:d}'.format(key), color=colors[key])
```

In [11]:

```
# Now print out the details for each cluster
pd.set_option('display.width', 1000)
for key, group in cluster_groups:
    print('\nCluster {:d}: {:d} observations'.format(key, len(group)))
    print(group[features].head(3))
```

Density-based spatial clustering (DBSCAN) is a clustering algorithm that, given a set of points in space, groups together points that are closely packed and marks points in low-density regions as outliers.

- You don't have to pick K
- There are other hyperparameters (eps and min_samples) but defaults often work well
- https://en.wikipedia.org/wiki/DBSCAN
- Hierarchical version: https://github.com/scikit-learn-contrib/hdbscan
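A tiny sketch with hypothetical points shows both properties at once: DBSCAN discovers the cluster count on its own, and it tags the stray point with the special label -1 (outlier):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one stray point (hypothetical data)
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [20.0, 20.0]])

# No K to pick: DBSCAN finds the two clusters and labels the stray point -1
labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)
print(labels)
```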

In [12]:

```
# Now try DBScan
http_df['cluster_db'] = DBSCAN().fit_predict(http_feature_matrix)
print('Number of Clusters: {:d}'.format(http_df['cluster_db'].nunique()))
```

In [13]:

```
# Now use dataframe group by cluster
cluster_groups = http_df.groupby('cluster_db')
# Plot the Machine Learning results
fig, ax = plt.subplots()
for key, group in cluster_groups:
    group.plot(ax=ax, kind='scatter', x='x', y='y', alpha=0.5, s=250,
               label='Cluster: {:d}'.format(key), color=colors[key])
```

So obviously we got a bit lucky here; for different datasets with different feature distributions, DBSCAN may not give you the optimal number of clusters right off the zat. There are two hyperparameters (eps and min_samples) that can be tweaked, but as we said, the defaults often work well. See the DBSCAN and Hierarchical DBSCAN links for more information.

- https://en.wikipedia.org/wiki/DBSCAN
- Hierarchical version: https://github.com/scikit-learn-contrib/hdbscan
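If the defaults don't cooperate, a quick sweep over eps (shown here on hypothetical points; on real data you'd sweep against your feature matrix) makes it easy to see how the cluster and outlier counts respond:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical mini-sweep over eps to see how cluster/outlier counts respond
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
for eps in [0.05, 0.5, 10.0]:
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(X)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 is not a cluster
    n_outliers = int(np.sum(labels == -1))
    print('eps={:5.2f}  clusters={}  outliers={}'.format(eps, n_clusters, n_outliers))
```

Too small an eps marks everything as an outlier; too large an eps merges everything into one cluster.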

Well, that's it for this notebook. Given the usefulness and relative efficiency of clustering, it's a good technique to include in your toolset. Understanding the K hyperparameter and how to determine an optimal K (or sidestep the choice entirely with DBSCAN) is a good trick to know.

If you liked this notebook please visit the zat project for more notebooks and examples.