Hierarchical Clustering: Agglomerative and Divisive — Explained




An overview of agglomerative and divisive clustering algorithms and their implementation

Aug 2 · 5 min read


Photo by Lukas Blazek on Unsplash

Hierarchical clustering is a method of cluster analysis that is used to cluster similar data points together. Hierarchical clustering follows either the top-down or bottom-up method of clustering.

What is Clustering?

Clustering is an unsupervised machine learning technique that divides the population into several clusters such that data points in the same cluster are more similar and data points in different clusters are dissimilar.

  • Points in the same cluster are closer to each other.
  • Points in different clusters are far apart.
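A tiny synthetic dataset makes this concrete. The sketch below (an illustrative setup, not the author's original data) generates 3 well-separated 2-dimensional clusters with scikit-learn's `make_blobs`:

```python
# Generate a small synthetic dataset similar to the figure:
# 3 well-separated 2-dimensional clusters. The parameters here
# (sample count, spread, seed) are illustrative choices.
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=150, centers=3, cluster_std=0.8, random_state=42)

print(X.shape)        # (150, 2) -> 150 points, 2 dimensions
print(len(set(y)))    # 3 -> three ground-truth clusters
```

Points drawn from the same blob land close together, and the three blobs are far apart, matching the intuition above.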


(Image by Author), Sample 2-dimensional dataset

In the sample 2-dimensional dataset above, it is visible that the dataset forms 3 clusters that are far apart, with points in the same cluster close to each other.

There are several clustering algorithms other than hierarchical clustering, such as k-means clustering and DBSCAN.

This article covers hierarchical clustering and its types.

There are two types of hierarchical clustering methods:

  1. Divisive Clustering
  2. Agglomerative Clustering

Divisive Clustering:

The divisive clustering algorithm is a top-down clustering approach: initially, all the points in the dataset belong to one cluster, and splits are performed recursively as one moves down the hierarchy.

Steps of Divisive Clustering:

  1. Initially, all points in the dataset belong to one single cluster.
  2. Partition the cluster into the two least similar clusters.
  3. Proceed recursively to form new clusters until the desired number of clusters is obtained.
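The steps above can be sketched as a bisecting procedure. In this illustrative version, the cluster with the largest SSE is chosen at each step and split with 2-means, which is one common way (bisecting k-means) to realize a divisive split; the function names are my own, not from any library:

```python
# Minimal top-down (divisive) clustering sketch: start with one cluster
# and repeatedly bisect the largest-SSE cluster with 2-means until the
# desired number of clusters is reached.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs


def sse(points):
    """Sum of squared distances of points to their own centroid."""
    return float(((points - points.mean(axis=0)) ** 2).sum())


def divisive(X, n_clusters, seed=0):
    clusters = [X]                      # step 1: everything in one cluster
    while len(clusters) < n_clusters:
        # choose the cluster with the largest SSE to split next
        i = max(range(len(clusters)), key=lambda j: sse(clusters[j]))
        target = clusters.pop(i)
        # step 2: partition it into two sub-clusters with 2-means
        labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(target)
        clusters += [target[labels == 0], target[labels == 1]]
    return clusters                     # step 3: stop at n_clusters


X, _ = make_blobs(n_samples=120, centers=3, cluster_std=0.7, random_state=1)
parts = divisive(X, n_clusters=3)
print([len(p) for p in parts])
```

With 3 well-separated blobs, the three returned partitions line up with the visible groups; the later sections explain how the cluster to split and the split itself are chosen.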


(Image by Author), 1st Image: All the data points belong to one cluster, 2nd Image: 1 cluster is separated from the previous single cluster, 3rd Image: A further cluster is separated from the previous set of clusters.

In the above sample dataset, it is observed that there are 3 clusters that are far apart from each other, so we stop after obtaining 3 clusters.

If we continue splitting into further clusters, the result below is obtained.


(Image by Author), Sample dataset separated into 4 clusters


How to choose which cluster to split?

Check the sum of squared errors (SSE) of each cluster and choose the one with the largest value. In the 2-dimensional dataset below, the data points are currently separated into 2 clusters. To form the 3rd cluster, find the SSE of the points in the red cluster and the blue cluster.


(Image by Author), Sample dataset separated into 2 clusters

The cluster with the largest SSE value is split into 2 clusters, hence forming a new cluster. In the above image, it is observed that the red cluster has the larger SSE, so it is split into 2 clusters, giving 3 clusters in total.
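As a sketch, the SSE comparison might look like this; the `red` and `blue` arrays below are illustrative stand-ins for the two clusters in the figure, with the red one deliberately more spread out:

```python
# Picking which cluster to split next: compute each cluster's SSE
# (sum of squared distances to its own centroid) and take the largest.
import numpy as np


def cluster_sse(points):
    centroid = points.mean(axis=0)
    return float(((points - centroid) ** 2).sum())


rng = np.random.default_rng(0)
red = rng.normal(loc=0.0, scale=2.0, size=(50, 2))    # widely spread -> large SSE
blue = rng.normal(loc=10.0, scale=0.5, size=(50, 2))  # tight -> small SSE

scores = {"red": cluster_sse(red), "blue": cluster_sse(blue)}
to_split = max(scores, key=scores.get)
print(to_split)  # the red cluster has the larger SSE, so it is split next
```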

How to split the chosen cluster?

Once we have decided which cluster to split, the question arises of how to split the chosen cluster into 2 clusters. One way is to use Ward's criterion: choose the split that yields the largest reduction in the SSE.
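Ward's criterion can be illustrated on 1-D data, where every candidate two-way split is a cut point in sorted order; `best_ward_split` is a hypothetical helper written for this sketch, not part of any library:

```python
# Ward-style split selection: among candidate two-way splits of a
# cluster, pick the one with the largest reduction in SSE relative
# to keeping the cluster whole. In 1-D, every split is a cut point
# in sorted order, so we can check them all.
import numpy as np


def sse(points):
    return float(((points - points.mean()) ** 2).sum())


def best_ward_split(values):
    values = np.sort(values)
    parent = sse(values)
    best_cut, best_reduction = None, -np.inf
    for cut in range(1, len(values)):          # every sorted cut point
        left, right = values[:cut], values[cut:]
        reduction = parent - (sse(left) + sse(right))
        if reduction > best_reduction:
            best_cut, best_reduction = cut, reduction
    return best_cut, best_reduction


# Two obvious groups: the best cut separates the 1s from the 10s.
data = np.array([1.0, 1.1, 0.9, 10.0, 10.2, 9.8])
cut, reduction = best_ward_split(data)
print(cut)  # 3 -> splits into [0.9, 1.0, 1.1] and [9.8, 10.0, 10.2]
```

In higher dimensions, exhaustively checking splits is infeasible, which is why implementations typically approximate the best split, for example with 2-means as in the earlier sketch.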

How to handle noise or outliers?

An outlier or noisy point can end up forming a cluster of its own. To handle noise in the dataset, use a threshold as part of the termination criterion: that is, do not generate clusters that are too small.
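One minimal way to realize such a threshold is a minimum-cluster-size check before accepting a split; the threshold value and function name below are illustrative assumptions, not a standard API:

```python
# Handling noise: reject a split when it would produce a cluster
# smaller than a minimum size threshold, so a lone outlier cannot
# become a "cluster" of its own.
import numpy as np

MIN_CLUSTER_SIZE = 5  # assumed threshold; tune for your dataset


def accept_split(left, right, min_size=MIN_CLUSTER_SIZE):
    """Reject splits that carve off a cluster smaller than min_size."""
    return len(left) >= min_size and len(right) >= min_size


cluster = np.arange(20).reshape(10, 2)       # 10 points, 2 dimensions
outlier_split = (cluster[:1], cluster[1:])   # 1 point vs 9 points
even_split = (cluster[:5], cluster[5:])      # 5 points vs 5 points

print(accept_split(*outlier_split))  # False: would isolate a single outlier
print(accept_split(*even_split))     # True
```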

