Jupyter Notebook is a web-based interactive computational environment. A Jupyter Notebook document is a JSON document following a versioned schema; it contains an ordered list of input/output cells which can hold code, text (using Markdown), mathematics, plots, and rich media, and is usually saved with the ".ipynb" extension. BeakerX is a collection of kernels and extensions to the Jupyter interactive computing environment. It provides JVM support, Spark cluster support, polyglot programming, interactive plots, tables, forms, publishing, and more.
BeakerX is available via conda, pip, and Docker, or you can try it live online with Binder. To install it with conda:
conda install -c conda-forge ipywidgets beakerx
Docker is a great way to run and manage servers, including Jupyter with BeakerX built in. BeakerX has a container image distributed on Docker Hub, so you can download and run it with a single command:
docker run -p 8888:8888 beakerx/beakerx
Smile (Statistical Machine Intelligence and Learning Engine) is a fast and comprehensive machine learning, NLP, linear algebra, graph, interpolation, and visualization system in Java and Scala. With advanced data structures and algorithms, Smile delivers state-of-the-art performance. Smile covers every aspect of machine learning, including classification, regression, clustering, association rule mining, feature selection, manifold learning, multidimensional scaling, genetic algorithms, missing value imputation, efficient nearest neighbor search, etc. See the project website for programming guides and more information.
%%classpath add mvn
com.github.haifengl smile-scala_2.11 1.5.2
import smile._
import smile.util._
import smile.math._
import java.lang.Math._
import smile.math.Math.{log2, logistic, factorial, choose, random, randomInt, permutate, c, cbind, rbind, sum, mean, median, q1, q3, `var` => variance, sd, mad, min, max, whichMin, whichMax, unique, dot, distance, pdist, KullbackLeiblerDivergence => kld, JensenShannonDivergence => jsd, cov, cor, spearman, kendall, norm, norm1, norm2, normInf, standardize, normalize, scale, unitize, unitize1, unitize2, root}
import smile.math.distance._
import smile.math.kernel._
import smile.math.matrix._
import smile.math.matrix.Matrix._
import smile.stat.distribution._
import smile.data._
import java.awt.Color, smile.plot._
import smile.interpolation._
import smile.validation._
import smile.association._
import smile.classification._
import smile.regression.{ols, ridge, lasso, svr, gpr}
import smile.feature._
import smile.clustering._
import smile.vq._
import smile.manifold._
import smile.mds._
import smile.sequence._
import smile.projection._
import smile.nlp._
import smile.wavelet._
Most Smile algorithms take a simple double[] as input, but we often use the encapsulation class AttributeDataset. In fact, the output of most Smile data parsers is an AttributeDataset object. An AttributeDataset contains a fixed number of attributes. All attribute values are stored as double even if an attribute may be nominal, ordinal, string, or date. The dataset is stored row-wise internally, which is fast for frequently accessing instances of the dataset.
A data object may have an associated class label (for classification) or real-valued response value (for regression). Optionally, a data object or attribute may have a (positive) weight value, whose meaning depends on the application. However, most machine learning methods are not able to utilize this extra weight information.
An AttributeDataset may also contain metadata such as the data name and description.
Suppose we have an AttributeDataset object, which may be created by a parser. To feed the data to a learning algorithm, we need to unwrap the data with the function unzip. If the data also has labels, the function unzipInt can be used to return a tuple (x, y), of which x is an array of feature vectors and y is an array of labels. If the response variable is real valued, as in regression, unzipDouble can be used.
val iris = read.arff("iris.arff", 4)
iris
      class            sepallength  sepalwidth  petallength  petalwidth
[1]   Iris-setosa      5.1000       3.5000      1.4000       0.2000
[2]   Iris-setosa      4.9000       3.0000      1.4000       0.2000
[3]   Iris-setosa      4.7000       3.2000      1.3000       0.2000
[4]   Iris-setosa      4.6000       3.1000      1.5000       0.2000
[5]   Iris-setosa      5.0000       3.6000      1.4000       0.2000
[6]   Iris-setosa      5.4000       3.9000      1.7000       0.4000
[7]   Iris-setosa      4.6000       3.4000      1.4000       0.3000
[8]   Iris-setosa      5.0000       3.4000      1.5000       0.2000
[9]   Iris-setosa      4.4000       2.9000      1.4000       0.2000
[10]  Iris-setosa      4.9000       3.1000      1.5000       0.1000
140 more rows...
iris.summary
iris Summary
             min     q1      median  mean    q3      max
sepallength  4.3000  5.1000  5.8000  5.8433  6.4000  7.9000
sepalwidth   2.0000  2.8000  3.0000  3.0540  3.3000  4.4000
petallength  1.0000  1.6000  4.4000  3.7587  5.1000  6.9000
petalwidth   0.1000  0.3000  1.3000  1.1987  1.8000  2.5000
iris.tail(10)
iris[140, 150]
      class            sepallength  sepalwidth  petallength  petalwidth
[1]   Iris-virginica   6.7000       3.1000      5.6000       2.4000
[2]   Iris-virginica   6.9000       3.1000      5.1000       2.3000
[3]   Iris-virginica   5.8000       2.7000      5.1000       1.9000
[4]   Iris-virginica   6.8000       3.2000      5.9000       2.3000
[5]   Iris-virginica   6.7000       3.3000      5.7000       2.5000
[6]   Iris-virginica   6.7000       3.0000      5.2000       2.3000
[7]   Iris-virginica   6.3000       2.5000      5.0000       1.9000
[8]   Iris-virginica   6.5000       3.0000      5.2000       2.0000
[9]   Iris-virginica   6.2000       3.4000      5.4000       2.3000
[10]  Iris-virginica   5.9000       3.0000      5.1000       1.8000
val (x, y) = iris.unzipInt
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
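The same dataset can also be unwrapped without the labels, and for a regression dataset unzipDouble plays the role of unzipInt. A minimal sketch (the regression dataset name below is hypothetical):
val features = iris.unzip   // feature vectors only, no labels
// for a regression dataset with a real-valued response one would write:
// val (xr, yr) = someRegressionData.unzipDouble   // hypothetical dataset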
In Scala, we can also wrap an AttributeDataset into a DataFrame, which provides advanced data manipulation functions. For example, we can get a row with the array syntax or refer to a column by its name.
val df = DataFrame(iris)
DataFrame(iris
      class            sepallength  sepalwidth  petallength  petalwidth
[1]   Iris-setosa      5.1000       3.5000      1.4000       0.2000
[2]   Iris-setosa      4.9000       3.0000      1.4000       0.2000
[3]   Iris-setosa      4.7000       3.2000      1.3000       0.2000
[4]   Iris-setosa      4.6000       3.1000      1.5000       0.2000
[5]   Iris-setosa      5.0000       3.6000      1.4000       0.2000
[6]   Iris-setosa      5.4000       3.9000      1.7000       0.4000
[7]   Iris-setosa      4.6000       3.4000      1.4000       0.3000
[8]   Iris-setosa      5.0000       3.4000      1.5000       0.2000
[9]   Iris-setosa      4.4000       2.9000      1.4000       0.2000
[10]  Iris-setosa      4.9000       3.1000      1.5000       0.1000
140 more rows...)
df.sepallength
sepallength
[1]   5.1000
[2]   4.9000
[3]   4.7000
[4]   4.6000
[5]   5.0000
[6]   5.4000
[7]   4.6000
[8]   5.0000
[9]   4.4000
[10]  4.9000
140 more values...
It is also easy to create a subset of data frame.
df(10,100)
DataFrame(iris[10, 100]
      class            sepallength  sepalwidth  petallength  petalwidth
[1]   Iris-setosa      5.4000       3.7000      1.5000       0.2000
[2]   Iris-setosa      4.8000       3.4000      1.6000       0.2000
[3]   Iris-setosa      4.8000       3.0000      1.4000       0.1000
[4]   Iris-setosa      4.3000       3.0000      1.1000       0.1000
[5]   Iris-setosa      5.8000       4.0000      1.2000       0.2000
[6]   Iris-setosa      5.7000       4.4000      1.5000       0.4000
[7]   Iris-setosa      5.4000       3.9000      1.3000       0.4000
[8]   Iris-setosa      5.1000       3.5000      1.4000       0.3000
[9]   Iris-setosa      5.7000       3.8000      1.7000       0.3000
[10]  Iris-setosa      5.1000       3.8000      1.5000       0.3000
80 more rows...)
Similarly, we can select a few columns to create a new data frame.
df("sepallength", "sepalwidth")
DataFrame(iris
      class            sepallength  sepalwidth
[1]   Iris-setosa      5.1000       3.5000
[2]   Iris-setosa      4.9000       3.0000
[3]   Iris-setosa      4.7000       3.2000
[4]   Iris-setosa      4.6000       3.1000
[5]   Iris-setosa      5.0000       3.6000
[6]   Iris-setosa      5.4000       3.9000
[7]   Iris-setosa      4.6000       3.4000
[8]   Iris-setosa      5.0000       3.4000
[9]   Iris-setosa      4.4000       2.9000
[10]  Iris-setosa      4.9000       3.1000
140 more rows...)
Advanced operations such as exists, forall, find, and filter are also supported. The predicate of these functions expects a Row, which has the field x for the attribute vector and y for the response variable. If the response variable is a class label, the user may use label to access the response variable as a string.
df.filter { row => row.x(1) > 3 && row.y != 0 }
DataFrame(iris
      class             sepallength  sepalwidth  petallength  petalwidth
[1]   Iris-versicolor   7.0000       3.2000      4.7000       1.4000
[2]   Iris-versicolor   6.4000       3.2000      4.5000       1.5000
[3]   Iris-versicolor   6.9000       3.1000      4.9000       1.5000
[4]   Iris-versicolor   6.3000       3.3000      4.7000       1.6000
[5]   Iris-versicolor   6.7000       3.1000      4.4000       1.4000
[6]   Iris-versicolor   5.9000       3.2000      4.8000       1.8000
[7]   Iris-versicolor   6.0000       3.4000      4.5000       1.6000
[8]   Iris-versicolor   6.7000       3.1000      4.7000       1.5000
[9]   Iris-virginica    6.3000       3.3000      6.0000       2.5000
[10]  Iris-virginica    7.2000       3.6000      6.1000       2.5000
15 more rows...)
In machine learning and pattern recognition, classification refers to an algorithmic procedure for assigning a given input object into one of a given number of categories. The input object is formally termed an instance, and the categories are termed classes.
Classification normally refers to a supervised procedure, i.e. a procedure that produces an inferred function to predict the output value of new instances based on a training set of pairs consisting of an input object and a desired output value. The inferred function is called a classifier if the output is discrete or a regression function if the output is continuous.
The inferred function should predict the correct output value for any valid input object. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way.
A wide range of supervised learning algorithms is available, each with its strengths and weaknesses. There is no single learning algorithm that works best on all supervised learning problems. The most widely used learning algorithms are AdaBoost and gradient boosting, support vector machines, linear regression, linear discriminant analysis, logistic regression, naive Bayes, decision trees, k-nearest neighbor algorithm, and neural networks (multilayer perceptron).
If the feature vectors include features of many different kinds (discrete, discrete ordered, counts, continuous values), some algorithms cannot be easily applied. Many algorithms, including linear regression, logistic regression, neural networks, and nearest neighbor methods, require that the input features be numerical and scaled to similar ranges (e.g., to the [-1, 1] interval). Methods that employ a distance function, such as nearest neighbor methods and support vector machines with Gaussian kernels, are particularly sensitive to this. An advantage of decision trees (and boosting algorithms based on decision trees) is that they easily handle heterogeneous data.
If the input features contain redundant information (e.g., highly correlated features), some learning algorithms (e.g., linear regression, logistic regression, and distance based methods) will perform poorly because of numerical instabilities. These problems can often be solved by imposing some form of regularization.
If each of the features makes an independent contribution to the output, then algorithms based on linear functions (e.g., linear regression, logistic regression, linear support vector machines, naive Bayes) generally perform well. However, if there are complex interactions among features, then algorithms such as nonlinear support vector machines, decision trees and neural networks work better. Linear methods can also be applied, but the engineer must manually specify the interactions when using them.
Smile's classification algorithms are in the package smile.classification, and all algorithms implement the interface Classifier that has a method predict to predict the class label of an instance. An overloaded version in SoftClassifier can also calculate the a posteriori probabilities besides the class label. For all algorithms, the model can be trained through the constructor. Meanwhile, each algorithm has a Trainer companion class that can hold model hyper-parameters and be applied to multiple training datasets.
Some algorithms with online learning capability also implement the interface OnlineClassifier. Online learning is a model of induction that learns one instance at a time. The method learn updates the model with a new instance.
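For instance, a minimal sketch of this interface, using the k-NN classifier from the next section as the concrete model (the overload that fills in a posteriori probabilities is assumed to come from SoftClassifier):
val model = knn(x, y, 3)              // train a classifier on the iris features and labels
val label = model.predict(x(0))       // Classifier.predict returns the predicted class label
val posteriori = new Array[Double](3) // one entry per class
model.predict(x(0), posteriori)       // SoftClassifier overload also fills the posterior probabilities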
The k-nearest neighbor algorithm (k-NN) is a method for classifying objects by a majority vote of its neighbors, with the object being assigned to the class most common amongst its k nearest neighbors (k is typically small). k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification.
cv(x, y, 10) { case (x, y) => knn(x, y, 3) }
cv 1...cv 2...cv 3...cv 4...cv 5...cv 6...cv 7...cv 8...cv 9...cv 10...
Confusion Matrix: ROW=truth and COL=predicted
class 0 | 50 |  0 |  0 |
class 1 |  0 | 47 |  3 |
class 2 |  0 |  2 | 48 |
Accuracy: 96.67%
[0.9666666666666667]
Random forest is an ensemble classifier that consists of many decision trees and outputs the majority vote of the individual trees. The method combines the bagging idea with the random selection of features.
Each tree is constructed using the following algorithm:
1. If the number of cases in the training set is N, randomly sample N cases with replacement from the original data. This sample will be the training set for growing the tree.
2. If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node. The value of m is held constant during the forest growing.
In the randomForest function, ntrees is the number of trees and mtry is the number of attributes randomly selected at each node to choose the best split. Although the original random forest algorithm trains each tree fully without pruning, it is sometimes useful to control the tree size, which can be achieved with the parameter maxNodes. The tree can also be regularized by limiting the minimum number of observations in the trees' terminal nodes with the parameter nodeSize. When subsample = 1.0, we use sampling with replacement to train each tree as described above. If subsample < 1.0, we instead select a subset of samples (without replacement) to train each tree. If the classes are not balanced, the user should provide the classWeight (proportional to the class priors) so that the sampling is done in a stratified way. Otherwise, small classes may not be sampled sufficiently.
cv(x, y, 10) { case (x, y) => randomForest(x, y, iris.attributes) }
cv 1...cv 2...cv 3...cv 4...cv 5...cv 6...cv 7...cv 8...cv 9...cv 10...
Confusion Matrix: ROW=truth and COL=predicted
class 0 | 50 |  0 |  0 |
class 1 |  0 | 46 |  4 |
class 2 |  0 |  3 | 47 |
Accuracy: 95.33%
[0.9533333333333334]
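For illustration, one might also train a single forest with explicit hyper-parameters and inspect the variable importance. This is only a sketch: the parameter names follow the Smile 1.5 Scala API used above, the values are illustrative rather than tuned, and the importance accessor is assumed to be exposed by the trained RandomForest.
val forest = randomForest(x, y, iris.attributes, ntrees = 300, maxNodes = 100, mtry = 2)
val imp = forest.importance   // variable importance, one value per attribute (assumed API)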
The basic support vector machine (SVM) is a binary linear classifier which chooses the hyperplane that represents the largest separation, or margin, between the two classes. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum margin classifier.
cv(x, y, 10) { case (x, y) => svm(x, y, new LinearKernel, C = 1, epoch = 2) }
cv 1...SVM training epoch 1... SVM training epoch 2...
cv 2...SVM training epoch 1... SVM training epoch 2...
cv 3...SVM training epoch 1... SVM training epoch 2...
cv 4...SVM training epoch 1... SVM training epoch 2...
cv 5...SVM training epoch 1... SVM training epoch 2...
cv 6...SVM training epoch 1... SVM training epoch 2...
cv 7...SVM training epoch 1... SVM training epoch 2...
cv 8...SVM training epoch 1... SVM training epoch 2...
cv 9...SVM training epoch 1... SVM training epoch 2...
cv 10...SVM training epoch 1... SVM training epoch 2...
Confusion Matrix: ROW=truth and COL=predicted
class 0 | 50 |  0 |  0 |
class 1 |  0 | 48 |  2 |
class 2 |  0 |  1 | 49 |
Accuracy: 98.00%
[0.98]
If there exists no hyperplane that can perfectly split the positive and negative instances, the soft margin method will choose a hyperplane that splits the instances as cleanly as possible, while still maximizing the distance to the nearest cleanly split instances.
The nonlinear SVMs are created by applying the kernel trick to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high-dimensional. For example, the feature space corresponding to the Gaussian kernel is a Hilbert space of infinite dimension. Thus, though the classifier is a hyperplane in the high-dimensional feature space, it may be nonlinear in the original input space. Maximum margin classifiers are well regularized, so the infinite dimension does not spoil the results.
The effectiveness of SVM depends on the selection of the kernel, the kernel's parameters, and the soft margin parameter C. Given a kernel, the best combination of C and the kernel's parameters is often selected by a grid search with cross validation.
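For example, a minimal grid-search sketch over an assumed grid of C values and Gaussian kernel widths, scored by 10-fold cross-validation accuracy (assuming cv returns the accuracy array as in the cells above):
val grid = for (c <- Seq(0.1, 1.0, 10.0); sigma <- Seq(0.5, 1.0, 2.0)) yield {
  // accuracy is the default (first) measure returned by cv
  val acc = cv(x, y, 10) { case (x, y) => svm(x, y, new GaussianKernel(sigma), C = c, epoch = 2) }
  ((c, sigma), acc(0))
}
val ((bestC, bestSigma), bestAcc) = grid.maxBy(_._2)   // keep the best-scoring combination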
A multilayer perceptron neural network consists of several layers of nodes, interconnected through weighted acyclic arcs from each preceding layer to the following, without lateral or feedback connections. Each node calculates a transformed weighted linear combination of its inputs (output activations from the preceding layer), with one of the weights acting as a trainable bias connected to a constant input. The transformation, called the activation function, is a bounded non-decreasing (non-linear) function, such as the sigmoid function (which ranges from 0 to 1). Another popular activation function is the hyperbolic tangent, which is equivalent to the sigmoid function in shape but ranges from -1 to 1. More specialized activation functions include radial basis functions, which are used in RBF networks.
For neural networks, the input patterns usually should be scaled/standardized. Commonly, each input variable is scaled into the interval [0, 1] or standardized to have mean 0 and standard deviation 1.
cv(x, y, 10) { case (x, y) => mlp(x, y, numUnits = c(4, 20, 3), error = NeuralNetwork.ErrorFunction.CROSS_ENTROPY, activation = NeuralNetwork.ActivationFunction.SOFTMAX) }
cv 1...cv 2...cv 3...cv 4...cv 5...cv 6...cv 7...cv 8...cv 9...cv 10...
Confusion Matrix: ROW=truth and COL=predicted
class 0 | 50 |  0 |  0 |
class 1 |  0 | 47 |  3 |
class 2 |  0 |  2 | 48 |
Accuracy: 96.67%
[0.9666666666666667]
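The iris features are already on comparable scales, so the cell above trains on the raw data. For data with heterogeneous scales, one might standardize first; a minimal sketch, assuming the standardize function imported above operates on the matrix in place:
val xs = x.map(_.clone)   // copy so the original feature matrix is untouched
standardize(xs)           // zero mean, unit variance per column (assumed in-place)
cv(xs, y, 10) { case (x, y) => mlp(x, y, numUnits = c(4, 20, 3), error = NeuralNetwork.ErrorFunction.CROSS_ENTROPY, activation = NeuralNetwork.ActivationFunction.SOFTMAX) }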
Clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis used in many fields.
Hierarchical algorithms find successive clusters using previously established clusters. These algorithms usually are either agglomerative ("bottom-up") or divisive ("top-down"). Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters.
Partitional algorithms typically determine all clusters at once, but can also be used as divisive algorithms in the hierarchical clustering. Many partitional clustering algorithms require the specification of the number of clusters to produce in the input data set, prior to execution of the algorithm. Barring knowledge of the proper value beforehand, the appropriate value must be determined, a problem on its own for which a number of techniques have been developed.
Density-based clustering algorithms are devised to discover arbitrary-shaped clusters. In this approach, a cluster is regarded as a region in which the density of data objects exceeds a threshold.
Subspace clustering methods look for clusters that can only be seen in a particular projection (subspace, manifold) of the data. These methods thus can ignore irrelevant attributes. The general problem is also known as correlation clustering, while the special case of axis-parallel subspaces is also known as two-way clustering, co-clustering, or biclustering in bioinformatics: in these methods not only the objects are clustered but also the features of the objects, i.e., if the data is represented in a data matrix, the rows and columns are clustered simultaneously. However, they usually do not work with arbitrary feature combinations as general subspace methods do.
Agglomerative hierarchical clustering seeks to build a hierarchy of clusters in a bottom up approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. The results of hierarchical clustering are usually presented in a dendrogram.
In general, the merges are determined in a greedy manner. In order to decide which clusters should be combined, a measure of dissimilarity between sets of observations is required. In most methods of hierarchical clustering, this is achieved by the use of an appropriate metric and a linkage criterion which specifies the dissimilarity of sets as a function of the pairwise distances of observations in the sets.
Hierarchical clustering has the distinct advantage that any valid measure of distance can be used. In fact, the observations themselves are not required: all that is used is a matrix of distances.
val clusters = hclust(pdist(x), "complete")
clusters.getTree
[[9, 34], [37, 150], [101, 142], [0, 17], [7, 39], [10, 48], [128, 132], [27, 28], [1, 12], [29, 30], [3, 47], [19, 21], [8, 38], [57, 93], [80, 81], [82, 92], [96, 99], [63, 91], [65, 75], [116, 137], [127, 138], [40, 153], [4, 171], [49, 154], [151, 158], [88, 95], [112, 139], [123, 126], [23, 26], [53, 89], [66, 84], [74, 97], [94, 166], [110, 147], [120, 143], [2, 160], [46, 161], [78, 167], [54, 58], [136, 148], [140, 144], [141, 145], [103, 169], [45, 174], [107, 130], [43, 178], [69, 164], [51, 56], [68, 87], [105, 122], [50, 52], [113, 152], [20, 31], [67, 165], [172, 173], [157, 204], [25, 193], [11, 159], [42, 185], [70, 170], [124, 184], [104, 156], [55, 90], [121, 201], [6, 208], [5, 18], [83, 133], [86, 200], [175, 182], [125, 129], [36, 155], [13, 162], [32, 33], [149, 209], [111, 146], [76, 188], [115, 189], [73, 187], [98, 163], [190, 210], [61, 71], [117, 131], [44, 186], [77, 183], [24, 207], [202, 220], [72, 216], [79, 196], [35, 206], [177, 236], [85, 197], [16, 222], [203, 218], [176, 191], [168, 225], [114, 213], [180, 212], [181, 227], [102, 219], [192, 211], [59, 179], [195, 205], [118, 199], [14, 15], [194, 248], [230, 242], [64, 237], [224, 239], [215, 235], [214, 221], [234, 251], [233, 243], [217, 244], [241, 253], [100, 229], [119, 198], [60, 228], [108, 249], [240, 247], [238, 260], [250, 256], [223, 245], [109, 135], [226, 264], [232, 258], [22, 259], [246, 255], [134, 257], [231, 272], [62, 270], [262, 268], [261, 267], [263, 274], [269, 275], [271, 277], [276, 279], [273, 281], [252, 278], [41, 283], [265, 280], [254, 287], [106, 285], [284, 289], [266, 291], [286, 292], [282, 288], [290, 294], [293, 295], [296, 297]]
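To obtain flat cluster assignments from the dendrogram, one can cut the tree; a minimal sketch, assuming the partition method on the hierarchical clustering result:
val membership = clusters.partition(3)   // cut the dendrogram into 3 clusters, one label per observation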
K-Means clustering partitions n observations into k clusters in which each observation belongs to the cluster with the nearest mean. Although finding an exact solution to the K-Means problem for arbitrary input is NP-hard, the standard approach to finding an approximate solution (often called Lloyd's algorithm or the K-Means algorithm) is used widely and frequently finds reasonable solutions quickly.
K-Means is a hard clustering method, i.e. each sample is assigned to one specific cluster. In contrast, soft clustering methods, e.g. the Expectation-Maximization algorithm for Gaussian mixtures, assign samples to different clusters with different probabilities.
The K-Means algorithm has at least two major theoretic shortcomings:
First, it has been shown that the worst case running time of the algorithm is super-polynomial in the input size.
Second, the approximation found can be arbitrarily bad with respect to the objective function compared to the optimal clustering.
In Smile, we use K-Means++, which addresses the second of these obstacles by specifying a procedure to initialize the cluster centers before proceeding with the standard K-Means optimization iterations. With the K-Means++ initialization, the algorithm is guaranteed to find a solution that is O(log k) competitive to the optimal K-Means solution.
We also use K-D trees to speed up each K-Means step as described in the filter algorithm by Kanungo, et al.
val clusters = kmeans(x, 6, runs = 20)
val y = clusters.getClusterLabel
[1, 5, 5, 5, 1, 1, 5, 1, 5, 5, 1, 5, 5, 5, 1, 1, 1, 1, 1, 1, 1, 1, 5, 1, 5, 5, 1, 1, 1, 5, 5, 1, 1, 1, 5, 5, 1, 5, 5, 1, 1, 5, 5, 1, 1, 5, 1, 5, 1, 5, 0, 0, 0, 4, 0, 0, 0, 4, 0, 4, 4, 0, 4, 0, 4, 0, 0, 4, 0, 4, 0, 4, 0, 0, 0, 0, 0, 0, 0, 4, 4, 4, 4, 0, 4, 0, 0, 0, 4, 4, 4, 0, 4, 4, 4, 4, 4, 0, 4, 4, 3, 0, 2, 3, 3, 2, 4, 2, 3, 2, 3, 3, 3, 0, 3, 3, 3, 2, 2, 0, 3, 0, 2, 0, 3, 2, 0, 0, 3, 2, 2, 2, 3, 0, 0, 2, 3, 3, 0, 3, 3, 3, 0, 3, 3, 3, 0, 3, 3, 0]
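To compare the K-Means assignment against the known species labels, one might compute an external cluster-validation measure; a minimal sketch, assuming AdjustedRandIndex from smile.validation (note that y was reassigned to the cluster labels above, so the class labels are re-extracted first):
val (_, truth) = iris.unzipInt                       // re-extract the species labels
val ari = new AdjustedRandIndex().measure(truth, y)  // agreement between species and cluster labels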