Hierarchical Clustering: Agglomerative and Divisive — Explained




An overview of agglomerative and divisive clustering algorithms and their implementation

Aug 2 · 5 min read


Photo by Lukas Blazek on Unsplash

Hierarchical clustering is a method of cluster analysis used to group similar data points together. It builds clusters following either a top-down or a bottom-up approach.

What is Clustering?

Clustering is an unsupervised machine learning technique that divides the population into several clusters such that data points in the same cluster are more similar and data points in different clusters are dissimilar.

  • Points in the same cluster are closer to each other.
  • Points in different clusters are far apart.


(Image by Author) Sample two-dimensional dataset

In the sample two-dimensional dataset above, the data visibly forms 3 clusters that are far apart from each other, while points within the same cluster are close together.
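A sample dataset like this can be generated with scikit-learn's make_blobs. The sketch below is only illustrative; the parameters are my own choice, not the ones behind the figure above.

# Minimal sketch: generate a 2-D dataset with 3 well-separated clusters.
# Assumes scikit-learn and matplotlib are installed; all parameters here
# are illustrative, not taken from the original figure.
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt

X, y = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

plt.scatter(X[:, 0], X[:, 1], c=y)
plt.title("Sample 2-dimensional dataset with 3 clusters")
plt.show()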

There are several clustering algorithms other than hierarchical clustering, such as k-means clustering, DBSCAN, and many more.

In this article, you will learn about hierarchical clustering and its types.

There are two types of hierarchical clustering methods:

  1. Divisive Clustering
  2. Agglomerative Clustering

Divisive Clustering:

The divisive clustering algorithm is a top-down clustering approach: initially, all the points in the dataset belong to one cluster, and splits are performed recursively as one moves down the hierarchy.

Steps of Divisive Clustering:

  1. Initially, all points in the dataset belong to one single cluster.
  2. Partition the cluster into the two least similar clusters.
  3. Proceed recursively to form new clusters until the desired number of clusters is obtained (a runnable sketch of these steps follows the list).
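The steps above leave the actual splitting routine open; a common concrete choice is to split the selected cluster with 2-means (the bisecting k-means idea). Below is a minimal sketch under that assumption, using NumPy and scikit-learn; the function names are my own. It already selects the cluster with the largest SSE, which is explained in the sections that follow.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def sse(points):
    # Sum of squared distances of the points to their centroid.
    return ((points - points.mean(axis=0)) ** 2).sum()

def divisive_clustering(X, n_clusters):
    # Step 1: all points start out in one single cluster.
    clusters = [np.arange(len(X))]
    while len(clusters) < n_clusters:
        # Pick the cluster with the largest SSE (see the next sections).
        worst = max(range(len(clusters)), key=lambda i: sse(X[clusters[i]]))
        idx = clusters.pop(worst)
        # Step 2: partition the chosen cluster into two with 2-means.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[idx])
        # Step 3: keep both halves and repeat until n_clusters is reached.
        clusters.append(idx[labels == 0])
        clusters.append(idx[labels == 1])
    return clusters

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)
for i, idx in enumerate(divisive_clustering(X, n_clusters=3)):
    print(f"cluster {i}: {len(idx)} points")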


(Image by Author) 1st image: all the data points belong to one cluster; 2nd image: one cluster is split off from the single cluster; 3rd image: a further cluster is split off from the previous set of clusters.

In the sample dataset above, there are 3 clusters that are well separated from each other, so we stop after obtaining 3 clusters.

If we continue splitting to form more clusters, the result below is obtained.


(Image by Author) Sample dataset separated into 4 clusters


How to choose which cluster to split?

Check the sum of squared errors (SSE) of each cluster and choose the cluster with the largest value. In the two-dimensional dataset below, the data points are currently separated into 2 clusters; to form the 3rd cluster, compute the SSE of the points in the red cluster and of the points in the blue cluster.


(Image by Author) Sample dataset separated into 2 clusters

The cluster with the largest SSE is split into 2 clusters, forming a new cluster. In the image above, the red cluster has the larger SSE, so it is split into 2 clusters, giving 3 clusters in total.
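For an off-the-shelf version of this strategy, scikit-learn (1.1 and later) provides BisectingKMeans; its bisecting_strategy="biggest_inertia" option splits the cluster with the largest SSE (inertia) at each step. A minimal usage sketch:

# Off-the-shelf divisive (bisecting) k-means, scikit-learn 1.1+.
# bisecting_strategy="biggest_inertia" splits the largest-SSE cluster first.
from sklearn.cluster import BisectingKMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)
model = BisectingKMeans(n_clusters=3, bisecting_strategy="biggest_inertia",
                        random_state=0)
labels = model.fit_predict(X)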

How to split the above-chosen cluster?

Once we have decided which cluster to split, the question arises of how to split the chosen cluster into 2 clusters. One way is to use Ward's criterion: choose the split that yields the largest reduction in SSE.
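In code, Ward's criterion amounts to scoring a candidate 2-way split by how much it reduces the SSE and keeping the split with the largest reduction. A minimal sketch (the helper and function names are my own):

import numpy as np

def sse(points):
    # Sum of squared distances of the points to their centroid.
    return ((points - points.mean(axis=0)) ** 2).sum()

def sse_reduction(points, labels):
    # Ward-style score for a candidate 2-way split: the decrease in SSE
    # obtained by replacing the parent cluster with its two children.
    left, right = points[labels == 0], points[labels == 1]
    return sse(points) - (sse(left) + sse(right))

Among several candidate splits (for example, 2-means runs with different initializations), the one with the largest sse_reduction is kept.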

How to handle noise or outliers?

An outlier or noise point can end up forming a cluster of its own. To handle noise in the dataset, use a threshold as part of the termination criterion: do not generate clusters that are too small.
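As a sketch, this termination rule is just a guard around the split: reject any split that would generate a cluster smaller than a threshold. The min_size value below is an assumed, dataset-dependent parameter:

import numpy as np
from sklearn.cluster import KMeans

def split_unless_too_small(points, min_size=5):
    # min_size is an assumed threshold; tune it per dataset.
    # Try a 2-means split, but reject it if either side would be a
    # cluster that is too small (likely just an outlier or noise point).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
    if np.bincount(labels).min() < min_size:
        return None  # terminate: keep the cluster as it is
    return labels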

