# Lab 8: $k$-means clustering

[COM6012 Scalable Machine Learning **2022**](https://github.com/haipinglu/ScalableML) by [Haiping Lu](https://haipinglu.github.io/) at The University of Sheffield

**Accompanying lectures**: [YouTube video lectures recorded in Year 2020/21](https://www.youtube.com/watch?v=eLlwMhfbqAo&list=PLuRoUKdWifzxZfwTMvWlrnvQmPWtHZ32U)

## Study schedule

- [Task 1](#1-k-means-clustering): To finish in the lab session on 31st March. **Essential**
- [Task 2](#2-exercises): To finish by the following Tuesday, 26th April (due to the 3-week vacation). ***Exercise***
- [Task 3](#3-additional-ideas-to-explore-optional): To explore further. *Optional*

### Suggested reading

- Chapters *Clustering* and *RFM Analysis* of the [PySpark tutorial](https://runawayhorse001.github.io/LearningApacheSpark/pyspark.pdf)
- [Clustering in Spark](https://spark.apache.org/docs/3.2.1/ml-clustering.html)
- [PySpark API on clustering](https://spark.apache.org/docs/3.2.1/api/python/reference/api/pyspark.ml.clustering.KMeans.html)
- [PySpark code on clustering](https://github.com/apache/spark/blob/master/python/pyspark/ml/clustering.py)
- [$k$-means clustering on Wiki](https://en.wikipedia.org/wiki/K-means_clustering)
- [$k$-means++ on Wiki](https://en.wikipedia.org/wiki/K-means%2B%2B)
- [$k$-means|| paper](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf)

## 1. $k$-means clustering

[$k$-means](http://en.wikipedia.org/wiki/K-means_clustering) is one of the most commonly used clustering algorithms; it partitions the data points into a predefined number of clusters. The Spark MLlib implementation includes a parallelized variant of the [$k$-means++](https://en.wikipedia.org/wiki/K-means%2B%2B) method called [$k$-means||](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf).

`KMeans` is implemented as an `Estimator` and generates a [`KMeansModel`](https://spark.apache.org/docs/3.2.1/api/python/reference/api/pyspark.ml.clustering.KMeansModel.html) as the base model.

[API](https://spark.apache.org/docs/3.2.1/api/python/reference/api/pyspark.ml.clustering.KMeans.html): `class pyspark.ml.clustering.KMeans(featuresCol='features', predictionCol='prediction', k=2, initMode='k-means||', initSteps=2, tol=0.0001, maxIter=20, seed=None, distanceMeasure='euclidean', weightCol=None)`

The following parameters are available (a short configuration example follows the list):

- *k*: the number of desired clusters.
- *maxIter*: the maximum number of iterations.
- *initMode*: specifies either random initialization or initialization via k-means||.
- *initSteps*: determines the number of steps in the k-means|| algorithm (default = 2, advanced).
- *tol*: the distance threshold within which we consider k-means to have converged.
- *seed*: the **random seed** (so that multiple runs give the same results).
- *distanceMeasure*: either the Euclidean (default) or cosine distance measure.
- *weightCol*: optional weighting of data points.

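As an illustration, the snippet below shows two equivalent ways of setting these parameters, via the constructor or via setters. The specific values (e.g., `k=3`, `seed=6012`) and the variable name `kmeans_demo` are arbitrary choices for demonstration, not part of the lab code.

```python
from pyspark.ml.clustering import KMeans

# Constructor style: pass the parameters as keyword arguments.
kmeans_demo = KMeans(featuresCol='features', k=3, maxIter=20, seed=6012,
                     initMode='k-means||', distanceMeasure='euclidean')
# Setter style: equivalent, one parameter at a time.
kmeans_demo = KMeans().setK(3).setMaxIter(20).setSeed(6012)
print(kmeans_demo.explainParams())  # list every parameter with its current value
```
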
Let us request 2 cores using a regular queue. We activate the environment as usual and then install `matplotlib` (if you have not done so already).

```sh
qrshx -pe smp 2
source myspark.sh # myspark.sh should be under the root directory
conda install -y matplotlib
cd com6012/ScalableML # our main working directory
pyspark --master local[2] # start pyspark with the 2 cores requested above
```

We will do some plotting in this lab. To plot and save figures on the HPC, we need to select a non-interactive backend **before** importing pyplot:

```python
import matplotlib
matplotlib.use('Agg')  # must come before importing matplotlib.pyplot or pylab!
```

Now import the modules needed in this lab:

```python
from pyspark.ml.clustering import KMeans
from pyspark.ml.clustering import KMeansModel
from pyspark.ml.evaluation import ClusteringEvaluator
from pyspark.ml.linalg import Vectors
import matplotlib.pyplot as plt
```

### Clustering of simple synthetic data

Here, we study $k$-means clustering on a simple example with four well-separated data points, as follows.

```python
data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])
kmeans = KMeans(k=2, seed=1)  # two clusters, with seed = 1
model = kmeans.fit(df)
```

We examine the cluster centers (centroids) and use the trained model to "predict" the cluster index for a data point.

```python
centers = model.clusterCenters()
len(centers)
# 2
for center in centers:
    print(center)
# [0.5 0.5]
# [8.5 8.5]
model.predict(df.head().features)
# 0
```

We can use the trained model to cluster any data points in the same space, where the cluster index is given as the `prediction`.

```python
transformed = model.transform(df)
transformed.show()
# +---------+----------+
# | features|prediction|
# +---------+----------+
# |[0.0,0.0]| 0|
# |[1.0,1.0]| 0|
# |[9.0,8.0]| 1|
# |[8.0,9.0]| 1|
# +---------+----------+
```

We can examine the training summary of the trained model.

```python
model.hasSummary
# True
summary = model.summary
summary
# <pyspark.ml.clustering.KMeansSummary object at 0x2b1662948d30>
summary.k
# 2
summary.clusterSizes
# [2, 2]
summary.trainingCost  # sum of squared distances of points to their nearest center
# 2.0
```

You can check out the [KMeansSummary API](https://spark.apache.org/docs/3.2.1/api/java/org/apache/spark/ml/clustering/KMeansSummary.html) for details of the summary information, e.g., we can find out that the training cost is the sum of squared distances to the nearest centroid over all points in the training dataset.

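The summary also exposes, for example, the predictions made on the training data and the number of iterations actually run; a small sketch, continuing from the `summary` object above:

```python
summary.predictions.show()  # the training data with the assigned cluster index
summary.numIter             # the number of iterations the algorithm actually ran
```
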
### Save and load an algorithm/model

We can save an algorithm/model to a temporary location (see the [API on save](https://spark.apache.org/docs/3.2.1/api/python/reference/api/pyspark.ml.PipelineModel.html?highlight=pipelinemodel%20save#pyspark.ml.PipelineModel.save)) and then load it later.

Save and load the $k$-means algorithm (settings):

```python
import tempfile

temp_path = tempfile.mkdtemp()
kmeans_path = temp_path + "/kmeans"
kmeans.save(kmeans_path)
kmeans2 = KMeans.load(kmeans_path)
kmeans2.getK()
# 2
```

Save and load the learned $k$-means model (note that only the learned model is saved; the summary is not included):

```python
model_path = temp_path + "/kmeans_model"
model.save(model_path)
model2 = KMeansModel.load(model_path)
model2.hasSummary
# False
model2.clusterCenters()
# [array([0.5, 0.5]), array([8.5, 8.5])]
```

### Iris clustering

Clustering of the [Iris flower data set](https://en.wikipedia.org/wiki/Iris_flower_data_set) is a classical example [discussed on the Wikipedia page of $k$-means clustering](https://en.wikipedia.org/wiki/K-means_clustering#Discussion). This data set was introduced by [Ronald Fisher](https://en.wikipedia.org/wiki/Ronald_Fisher), "the father of modern statistics and experimental design" (and thus machine learning) and also "the greatest biologist since Darwin". The code below is based on Chapter *Clustering* of the [PySpark tutorial](https://runawayhorse001.github.io/LearningApacheSpark/pyspark.pdf), with some changes introduced.

#### Load and inspect the data

```python
df = spark.read.load("Data/iris.csv", format="csv", inferSchema="true", header="true").cache()
df.show(5, True)
# +------------+-----------+------------+-----------+-------+
# |sepal_length|sepal_width|petal_length|petal_width|species|
# +------------+-----------+------------+-----------+-------+
# | 5.1| 3.5| 1.4| 0.2| setosa|
# | 4.9| 3.0| 1.4| 0.2| setosa|
# | 4.7| 3.2| 1.3| 0.2| setosa|
# | 4.6| 3.1| 1.5| 0.2| setosa|
# | 5.0| 3.6| 1.4| 0.2| setosa|
# +------------+-----------+------------+-----------+-------+
# only showing top 5 rows
df.printSchema()
# root
# |-- sepal_length: double (nullable = true)
# |-- sepal_width: double (nullable = true)
# |-- petal_length: double (nullable = true)
# |-- petal_width: double (nullable = true)
# |-- species: string (nullable = true)
```

We can use `.describe().show()` to inspect summary statistics of the data:

```python
df.describe().show()
# +-------+------------------+-------------------+------------------+------------------+---------+
# |summary| sepal_length| sepal_width| petal_length| petal_width| species|
# +-------+------------------+-------------------+------------------+------------------+---------+
# | count| 150| 150| 150| 150| 150|
# | mean| 5.843333333333335| 3.0540000000000007|3.7586666666666693|1.1986666666666672| null|
# | stddev|0.8280661279778637|0.43359431136217375| 1.764420419952262|0.7631607417008414| null|
# | min| 4.3| 2.0| 1.0| 0.1| setosa|
# | max| 7.9| 4.4| 6.9| 2.5|virginica|
# +-------+------------------+-------------------+------------------+------------------+---------+
```

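Since `species` is the ground-truth label that we will compare against later, it is also worth checking how many rows each species has (this also tells us the true number of clusters):

```python
# Count the rows per species: each of the three species has 50 rows.
df.groupBy('species').count().show()
```
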
#### Convert the data to dense vector (features)

Use a `transData` function similar to that in Lab 2 to convert the attributes into feature vectors.

```python
def transData(data):
    return data.rdd.map(lambda r: [Vectors.dense(r[:-1])]).toDF(['features'])

dfFeatureVec = transData(df).cache()
dfFeatureVec.show(5, False)
# +-----------------+
# |features |
# +-----------------+
# |[5.1,3.5,1.4,0.2]|
# |[4.9,3.0,1.4,0.2]|
# |[4.7,3.2,1.3,0.2]|
# |[4.6,3.1,1.5,0.2]|
# |[5.0,3.6,1.4,0.2]|
# +-----------------+
# only showing top 5 rows
```

#### Determine $k$ via silhouette analysis

We can perform a [silhouette analysis](https://en.wikipedia.org/wiki/Silhouette_(clustering)) to determine $k$ by running $k$-means multiple times with different $k$ and evaluating the clustering results. See the [ClusteringEvaluator API](https://spark.apache.org/docs/3.2.1/api/python/reference/api/pyspark.ml.evaluation.ClusteringEvaluator.html), where `silhouette` is the default metric. You can also refer to this [scikit-learn notebook on the same topic](https://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html). Other ways of determining the best $k$ can be found on [a dedicated wiki page](https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set).

```python
import numpy as np

numK = 10
silhouettes = np.zeros(numK)
costs = np.zeros(numK)
for k in range(2, numK):  # k = 2, 3, ..., 9
    kmeans = KMeans().setK(k).setSeed(11)
    model = kmeans.fit(dfFeatureVec)
    predictions = model.transform(dfFeatureVec)
    costs[k] = model.summary.trainingCost
    evaluator = ClusteringEvaluator()  # computes the silhouette score by default
    silhouettes[k] = evaluator.evaluate(predictions)
```

We can take a look at the clustering results from the last iteration of the loop (i.e., $k=9$ here); the `prediction` column is the cluster index/label.

```python
predictions.show(15)
# +-----------------+----------+
# | features|prediction|
# +-----------------+----------+
# |[5.1,3.5,1.4,0.2]| 1|
# |[4.9,3.0,1.4,0.2]| 1|
# |[4.7,3.2,1.3,0.2]| 1|
# |[4.6,3.1,1.5,0.2]| 1|
# |[5.0,3.6,1.4,0.2]| 1|
# |[5.4,3.9,1.7,0.4]| 5|
# |[4.6,3.4,1.4,0.3]| 1|
# |[5.0,3.4,1.5,0.2]| 1|
# |[4.4,2.9,1.4,0.2]| 1|
# |[4.9,3.1,1.5,0.1]| 1|
# |[5.4,3.7,1.5,0.2]| 5|
# |[4.8,3.4,1.6,0.2]| 1|
# |[4.8,3.0,1.4,0.1]| 1|
# |[4.3,3.0,1.1,0.1]| 1|
# |[5.8,4.0,1.2,0.2]| 5|
# +-----------------+----------+
# only showing top 15 rows
```

Plot the cost (the sum of squared distances of points to their nearest centroid; the smaller the better) against $k$.

```python
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.plot(range(2, numK), costs[2:numK], marker="o")
ax.set_xlabel('$k$')
ax.set_ylabel('Cost')
plt.grid()
plt.savefig("Output/Lab8_cost.png")
```

We can see that this cost measure is biased towards a large $k$. Let us plot the silhouette metric (the larger the better) against $k$.

```python
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.plot(range(2, numK), silhouettes[2:numK], marker="o")
ax.set_xlabel('$k$')
ax.set_ylabel('Silhouette')
plt.grid()
plt.savefig("Output/Lab8_silhouette.png")
```

We can see that the silhouette measure is biased towards a small $k$. By the silhouette metric we should choose $k=2$, but we know the ground truth is $k=3$ (read the [data description](https://archive.ics.uci.edu/ml/datasets/iris) or count the unique species), so this metric does not give the ideal result in this case either. [Determining the optimal number of clusters](https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set) is an open problem.

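For completeness, a minimal sketch of how one might read off the silhouette-preferred $k$ programmatically from the arrays computed in the loop above (indices 0 and 1 are unused):

```python
best_k = int(np.argmax(silhouettes[2:numK])) + 2  # shift back to the actual k value
print(best_k, silhouettes[best_k])
```
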
## 2. Exercises

### Further study on iris clustering

Carry out some further studies on the iris clustering problem above.

1. Choose $k=3$ and evaluate the clustering results against the ground truth (class labels) using the [Normalized Mutual Information (NMI) available in scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.normalized_mutual_info_score.html). You need to install `scikit-learn` in the `myspark` environment via `conda install -y scikit-learn`. This allows us to study the clustering quality when we know the true number of clusters. A starting-point sketch is given after this list.
2. Use multiple (e.g., 10 or 20) random seeds to generate different clustering results and plot the respective NMI values (with respect to the ground truth with $k=3$, as in the question above) to observe the effect of initialisation.

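A possible starting point for the first exercise is sketched below; it only shows the mechanics of bringing the cluster assignments and the species labels to the driver and calling the scikit-learn function, and it assumes `df`, `dfFeatureVec`, and the imports from earlier in the lab are still in scope.

```python
from sklearn.metrics import normalized_mutual_info_score

model3 = KMeans().setK(3).setSeed(11).fit(dfFeatureVec)
pred3 = model3.transform(dfFeatureVec)

# The iris data is tiny, so collecting to the driver is fine; only narrow
# transformations are involved, so the two lists line up row by row.
labels_pred = [int(r.prediction) for r in pred3.select('prediction').collect()]
species = [r.species for r in df.select('species').collect()]
classes = sorted(set(species))
labels_true = [classes.index(s) for s in species]  # map species names to 0, 1, 2

print(normalized_mutual_info_score(labels_true, labels_pred))
```
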
## 3. Additional ideas to explore (*optional*)

### RFM Customer Value Analysis

- Follow Chapter *RFM Analysis* of the [PySpark tutorial](https://runawayhorse001.github.io/LearningApacheSpark/pyspark.pdf) to perform an [RFM Customer Value Analysis](https://en.wikipedia.org/wiki/RFM_(customer_value)).
- The data can be downloaded from the [Online Retail Data Set](https://archive.ics.uci.edu/ml/datasets/online+retail) at UCI.
- Note the **data cleaning** step that checks for and removes rows containing null values via `.dropna()` (a minimal sketch is given after this list). You may need to do the same when dealing with real data.
- The **data manipulation** steps are also useful to learn.

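A minimal data-cleaning sketch along those lines is below; the path is an assumption, and the UCI download is an Excel file, so export it to CSV first (or read it with another tool) before pointing Spark at it.

```python
df_retail = spark.read.load('Data/Online_Retail.csv', format='csv',
                            inferSchema='true', header='true')
print(df_retail.count())                 # number of raw rows
df_clean = df_retail.dropna(how='any')   # remove rows containing any null value
print(df_clean.count())                  # number of rows after cleaning
```
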
### Network intrusion detection

- The original task is a classification task. We can ignore the class labels and perform clustering on the data.
- Write a standalone program (and submit it as a batch job to the HPC) to do $k$-means clustering on the [KDDCUP1999 data](https://archive.ics.uci.edu/ml/datasets/KDD+Cup+1999+Data) with about 4 million points. You may start with the smaller 10% subset. A skeleton sketch is given after this list.

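A skeleton sketch of such a standalone program follows; the file names, the choice of `k`, and the decision to keep only the numeric columns are assumptions for illustration, and you would run the script with `spark-submit` from your batch job script.

```python
# Lab8_kdd_kmeans.py -- skeleton sketch, not a complete solution.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName('Lab8_KDD_kmeans').getOrCreate()

# The KDD CUP 1999 data has no header row; categorical columns are inferred as
# strings and are simply dropped here (encoding them is left as an extension).
df = spark.read.csv('Data/kddcup.data_10_percent', inferSchema=True, header=False)
numeric_cols = [c for c, t in df.dtypes if t in ('int', 'bigint', 'double')]
assembler = VectorAssembler(inputCols=numeric_cols, outputCol='features')
data = assembler.transform(df).select('features').cache()

model = KMeans(k=8, seed=6012).fit(data)   # k = 8 is an arbitrary choice
print(model.summary.clusterSizes)

spark.stop()
```
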
### Color Quantization using K-Means

- Follow the scikit-learn example [Color Quantization using K-Means](https://scikit-learn.org/stable/auto_examples/cluster/plot_color_quantization.html#sphx-glr-auto-examples-cluster-plot-color-quantization-py) to perform the same using PySpark on your high-resolution photos. A rough sketch of the idea follows.

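A rough sketch of the idea is below: cluster the RGB values of the pixels and replace each pixel by its cluster centroid. The image path and `k` are assumptions, and for a genuinely high-resolution photo you would fit the model on a random sample of pixels (as the scikit-learn example does) rather than on every pixel.

```python
import numpy as np
import matplotlib.pyplot as plt
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

img = plt.imread('Data/my_photo.png')[:, :, :3]   # H x W x 3 array of RGB values in [0, 1]
h, w, _ = img.shape
pixels = img.reshape(-1, 3).astype(float)

# One row per pixel, with the RGB triple as the feature vector.
pixel_df = spark.createDataFrame([(Vectors.dense(p.tolist()),) for p in pixels],
                                 ['features'])
model = KMeans(k=16, seed=6012).fit(pixel_df)     # 16 colours, an arbitrary choice

# Replace each pixel by its cluster centroid to obtain the quantised image.
centres = np.array(model.clusterCenters())
labels = np.array([r.prediction for r in
                   model.transform(pixel_df).select('prediction').collect()])
quantised = centres[labels].reshape(h, w, 3)
plt.imsave('Output/Lab8_quantised.png', np.clip(quantised, 0, 1))
```
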
