Machine learning

Today, there is growing interest in developing intelligent machines that can learn from data and make decisions independently. As such, the need for machine learning (ML) solutions continues to rise at an accelerating pace. Machine learning allows for advanced capabilities like self-driving vehicles and individualized content suggestions on digital streaming services. In fact, machine learning is now a ubiquitous technology that powers numerous aspects of modern life. To better comprehend how this field works, let’s delve into several kinds of machine learning techniques and their benefits, drawbacks, and use cases.

Machine learning explained

Machine learning, at its core, is a subfield of artificial intelligence (AI). It can accomplish tasks that would be impractical or impossible to solve with conventional, hand-written programs.

And although many ML algorithms exist, it is widely argued that the data matters more than the chosen algorithm. Data can suffer from numerous problems, including insufficient quantity, poor quality, inaccuracies, missing values, irrelevant features, duplication, and similar issues.

Supervised learning

Supervised learning is when a machine learning model learns from labeled data, meaning the training data already has the correct answers marked on it. For example, if we want a model to recognize handwritten digits, we give it a set of digit images, each labeled with the digit it depicts, and train it on that.

In addition, there are two main types of supervised learning algorithms: classification and regression. Classification algorithms predict a categorical output, such as whether an example belongs to a certain group or not. Regression algorithms, on the other hand, predict a continuous output, such as the price of a house on the real estate market.

That being said, the advantages of supervised learning include its ability to make accurate predictions and its ease of use. Its main limitation is the need for labeled data, which can be time-consuming and expensive to obtain.
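The two flavors of supervised learning can be sketched in a few lines of plain Python. The sketch below is illustrative only: the data points, labels, and helper names are all made up, the "classifier" is a minimal 1-nearest-neighbor rule, and the "regressor" is an ordinary least-squares line fit.

```python
import math

# Labeled training data: (feature vector, class label). Invented for illustration.
train = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def classify_1nn(point):
    """Classification: predict the label of the nearest labeled training point."""
    return min(train, key=lambda ex: math.dist(ex[0], point))[1]

def fit_line(xs, ys):
    """Regression: least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

label = classify_1nn((1.1, 0.9))        # a point near the "cat" group
slope, intercept = fit_line([1, 2, 3], [2, 4, 6])
```

In both cases the training data carries the right answers (the labels, or the observed prices), which is exactly what makes the learning "supervised."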

Unsupervised learning

Unsupervised learning employs machine learning algorithms to analyze and cluster unlabeled datasets. These algorithms autonomously discover hidden patterns or data groupings without the need for human intervention. Thus, unsupervised learning models are primarily used for clustering, association, and dimensionality reduction tasks.

Below are brief explanations of unsupervised machine learning algorithms related to clustering:


k-Means

Picture this: you have a whole bunch of data points, and you want to put them in groups based on how similar they are. Enter k-Means, which finds these groups, also known as clusters. Here's the deal: you pick the number of clusters you want, "k". The algorithm then picks k initial cluster centers and assigns each data point to the nearest one. Each cluster is represented by its centroid, which is just the average of all the data points in that cluster; the algorithm alternates between reassigning points and recomputing centroids until the assignments stop changing.

Note: To use the k-Means algorithm, the user must specify the number of clusters, 'k', and provide the initial cluster centers. The algorithm is sensitive to the initial cluster centers, so it is crucial to select them carefully. One widely used technique is to randomly select 'k' data points from the dataset to serve as the initial cluster centers.

Once the algorithm has converged, the user can analyze the resulting clusters to gain insights into the data. For example, in customer segmentation, the clusters can be used to identify different customer groups based on their purchasing behavior. In image segmentation, the clusters can be used to separate different objects in an image.
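The loop described above fits in a short pure-Python sketch. The data points and the choice of initial centers below are invented for illustration; per the note above, the caller supplies both 'k' (implicitly, via the number of centers) and the initial centers themselves.

```python
import math

def k_means(points, centers, iters=10):
    """Minimal k-means sketch: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        # Recompute each centroid as the coordinate-wise mean of its cluster;
        # keep the old center if a cluster ended up empty.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else ctr
                   for cl, ctr in zip(clusters, centers)]
    return centers, clusters

# Two obvious groups; the initial centers are data points chosen by hand (k = 2).
pts = [(0.0, 0.0), (0.1, 0.2), (9.0, 9.0), (9.1, 8.8)]
centers, clusters = k_means(pts, [pts[0], pts[2]])
```

On this toy data the algorithm converges after a single pass, with one centroid near the origin and one near (9, 9).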

Hierarchical cluster analysis (HCA)

Hierarchical cluster analysis (HCA) is a useful unsupervised machine learning technique for identifying patterns and relationships within a dataset, especially when the number of clusters is unknown. The output of HCA is a dendrogram, which is a tree-like diagram that shows the hierarchical relationship between clusters. The dendrogram can be used to determine the optimal number of clusters by identifying the point at which the distance between clusters becomes too large.

Note: The main advantage of HCA is its ability to create a hierarchical structure of clusters, allowing for the extraction of information at different levels of granularity. This is particularly useful when dealing with high-dimensional data, such as hyperspectral images, where traditional clustering methods may struggle to identify meaningful patterns.
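The agglomerative (bottom-up) form of HCA can be sketched as repeatedly merging the two closest clusters. This simplified sketch uses single linkage (the distance between two clusters is the distance between their closest members) and stops at a target cluster count rather than building the full dendrogram; the data points are made up.

```python
import math

def single_linkage(points, target_k):
    """Agglomerative clustering sketch: start with one cluster per point and
    repeatedly merge the two closest clusters (single linkage) until only
    target_k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > target_k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between the closest pair of members.
                d = min(math.dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
result = single_linkage(pts, 2)
```

A full HCA implementation would record every merge and its distance, which is exactly the information the dendrogram visualizes.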

Expectation maximization

The expectation–maximization (EM) algorithm involves two main steps: the E-step, where the algorithm calculates the expected values of the latent variables based on the current model parameter estimates, and the M-step, where the algorithm updates the model parameters by maximizing the likelihood of the observed data using the expected values of the latent variables computed in the E-step. This iterative process continues until convergence is achieved.

One of the key advantages of the EM algorithm is its ability to handle incomplete or missing data, making it suitable for unsupervised learning tasks where the true labels of the data points are not available. The algorithm is also generic: it can be integrated with various clustering methods and, more broadly, with latent-variable models such as Gaussian mixture models.

Of course, as with any method, EM isn’t perfect. One potential drawback is that it is sensitive to the initial parameter estimates and can converge to local optima, which may not represent the true underlying distribution of the data. Furthermore, the convergence of the algorithm can be slow, especially in cases with a large number of parameters or complex models.
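The E-step/M-step loop described above can be sketched for the simplest interesting case: a two-component 1-D Gaussian mixture. To keep the sketch short it makes strong simplifying assumptions (fixed, equal variances and equal mixing weights, so only the two means are learned); the data and initial guesses are invented.

```python
import math

def em_gmm_1d(data, mu1, mu2, sigma=1.0, iters=20):
    """EM sketch for a two-component 1-D Gaussian mixture with fixed, equal
    variances and equal mixing weights."""
    for _ in range(iters):
        # E-step: for each point, the expected membership ("responsibility")
        # of component 1, given the current means.
        resp = []
        for x in data:
            p1 = math.exp(-((x - mu1) ** 2) / (2 * sigma ** 2))
            p2 = math.exp(-((x - mu2) ** 2) / (2 * sigma ** 2))
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate each mean as a responsibility-weighted average,
        # which maximizes the expected likelihood under this simple model.
        mu1 = sum(r * x for r, x in zip(resp, data)) / sum(resp)
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / sum(1 - r for r in resp)
    return mu1, mu2

# Two clearly separated groups of points, near 0 and near 10.
data = [0.0, 0.2, -0.1, 10.0, 9.8, 10.1]
m1, m2 = em_gmm_1d(data, mu1=1.0, mu2=8.0)
```

Note how the deliberately poor initial guesses (1.0 and 8.0) still land in separate basins here; with unluckier initialization, EM could converge to a local optimum, as discussed above.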

Now, below are brief explanations of unsupervised machine learning algorithms related to dimensionality reduction:

PCA (principal component analysis)

Principal component analysis, or PCA for short, is a powerful technique in the world of unsupervised machine learning. It's all about shrinking down the dimensions of your data and pulling out the most informative features from it. So, say you've got a lot of high-dimensional data: PCA transforms it into a lower-dimensional representation while still holding onto as much of the original data's variation as possible.

Principal component analysis involves several steps. First, the data is centered by subtracting the mean from each feature. Then, the covariance matrix of the centered data is computed. Next, the eigenvectors and eigenvalues of the covariance matrix are calculated, and the eigenvectors are sorted in descending order of their corresponding eigenvalues; the eigenvectors with the highest eigenvalues are the principal components of the data. Finally, the data is projected onto the principal components to obtain the lower-dimensional representation.
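The steps above can be sketched for 2-D data, where the covariance matrix is just 2x2. Instead of a full eigendecomposition, this sketch finds the leading eigenvector (the first principal component) by power iteration, a simplification that only recovers the top component; the sample points are made up.

```python
import math

def first_principal_component(data, iters=100):
    """PCA sketch for 2-D points: center the data, build the 2x2 covariance
    matrix, then find its leading eigenvector by power iteration and project
    each point onto it (a 1-D representation)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    centered = [(x - mx, y - my) for x, y in data]
    # Covariance matrix entries of the centered data.
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    # Power iteration: repeatedly multiply by the matrix and renormalize.
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(*w)
        v = (w[0] / norm, w[1] / norm)
    return v, [x * v[0] + y * v[1] for x, y in centered]

# Points lying roughly along the line y = x.
pc, projected = first_principal_component([(0, 0), (1, 1.1), (2, 1.9), (3, 3.0)])
```

Because the sample points lie almost on y = x, the recovered component is close to the unit vector (0.707, 0.707), the direction of maximum variance.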

However, one should note that PCA may not always be the optimal choice for reducing dimensionality, and certain applications might benefit more from alternative techniques like LDA (linear discriminant analysis). Also, the effectiveness of PCA relies on the data’s quality and structure, and it might not perform well with highly nonlinear or complex data.

LLE (locally linear embedding)

Locally linear embedding (LLE) is an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs.

First, it constructs a neighborhood graph by identifying k-nearest neighbors for each data point. Second, LLE seeks to preserve local linear relationships by approximating each data point as a linear combination of its neighbors. The reconstruction weights are determined through optimization techniques such as minimizing the squared differences between original and reconstructed data points. Finally, LLE projects the data into a lower-dimensional space while preserving the local relationships, ultimately revealing the intrinsic structure.
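Of the three steps above, the first (building the neighborhood graph) is easy to sketch directly; the weight-solving and embedding steps require solving linear systems and an eigenproblem and are omitted here. The points below are invented for illustration.

```python
import math

def k_nearest_neighbors(points, k):
    """Sketch of LLE's first step: for each point, find the indices of its
    k nearest neighbors -- the points it will later be reconstructed from."""
    neighbors = []
    for i, p in enumerate(points):
        # Distance to every other point, then keep the k closest indices.
        others = [(math.dist(p, q), j) for j, q in enumerate(points) if j != i]
        neighbors.append([j for _, j in sorted(others)[:k]])
    return neighbors

pts = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (5.0, 5.0)]
nbrs = k_nearest_neighbors(pts, k=2)
```

In a real LLE implementation, each point would next be approximated as a weighted combination of exactly these neighbors, and those weights would then constrain the low-dimensional embedding.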

Locally linear embedding vs PCA

Locally linear embedding is often compared to principal component analysis (PCA). Whereas PCA finds the directions of maximum variance in the data, LLE finds the directions that best preserve the local structure of the data. This makes LLE particularly useful for data that lies on a nonlinear manifold, where PCA may not be able to capture the underlying structure of the data.

t-SNE (t-distributed stochastic neighbor embedding)

t-SNE works by first calculating the similarity between pairs of high-dimensional data points and then mapping these similarities to a low-dimensional space. It does this by minimizing the divergence between two probability distributions: a high-dimensional distribution that measures pairwise similarities between data points and a low-dimensional distribution that measures pairwise similarities between their corresponding points in the low-dimensional space.
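The first stage described above, turning pairwise distances into a high-dimensional probability distribution, can be sketched in isolation. This sketch uses one fixed Gaussian bandwidth for every point, whereas real t-SNE tunes a per-point bandwidth to match a chosen perplexity; the low-dimensional distribution and the divergence minimization are omitted, and the sample points are made up.

```python
import math

def gaussian_affinities(points, sigma=1.0):
    """Sketch of t-SNE's first stage: convert pairwise distances into
    conditional probabilities p(j|i) via a Gaussian kernel, so each row
    sums to 1 and nearby points get most of the probability mass."""
    n = len(points)
    p = [[0.0] * n for _ in range(n)]
    for i in range(n):
        weights = [0.0 if j == i else
                   math.exp(-math.dist(points[i], points[j]) ** 2 / (2 * sigma ** 2))
                   for j in range(n)]
        total = sum(weights)
        for j in range(n):
            p[i][j] = weights[j] / total
    return p

# Two nearby points and one distant outlier.
pts = [(0.0, 0.0), (0.5, 0.0), (8.0, 8.0)]
p = gaussian_affinities(pts)
```

The full algorithm would then define a Student-t distribution over the low-dimensional points and move them by gradient descent to minimize the divergence between the two distributions.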

Nevertheless, while t-distributed stochastic neighbor embedding offers numerous benefits, it is essential to be aware of its limitations. The algorithm is computationally intensive and can be time-consuming for large datasets. It is also sensitive to its hyperparameters, such as perplexity and learning rate, which need to be carefully tuned to obtain optimal results. Finally, t-SNE does not preserve distances well in the low-dimensional space, which can make it unsuitable for tasks that require accurate distance preservation.

Semi-supervised learning

Semi-supervised learning is a machine learning paradigm in which an algorithm learns not only from carefully labeled data but also from a large body of unlabeled data. In this approach, the algorithm is trained on a small subset of labeled data together with a much larger amount of unlabeled data.

For instance, a semi-supervised learning algorithm might classify images of various animal species using a small collection of labeled images, while drawing on a much larger pool of unlabeled images to improve the classification.

The key benefit of semi-supervised learning is its ability to leverage both labeled and unlabeled data, improving performance even when labeled data is scarce. That said, it has real constraints: it requires a large body of unlabeled data, and choosing the most suitable algorithm for a given problem can be difficult.
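One simple way to combine a few labels with many unlabeled points is self-training (pseudo-labeling). The sketch below is a deliberately minimal stand-in for real semi-supervised methods: it repeatedly gives the unlabeled point closest to any labeled point that point's label, growing the labeled set; the seed points and labels are invented.

```python
import math

def self_train(labeled, unlabeled):
    """Self-training sketch: repeatedly pseudo-label the unlabeled point
    closest to any already-labeled point, then treat it as labeled too."""
    labeled = list(labeled)
    unlabeled = list(unlabeled)
    while unlabeled:
        # Find the (unlabeled point, labeled point) pair with minimum distance.
        _, u, lab = min(
            ((math.dist(u, p), u, lab) for u in unlabeled for p, lab in labeled),
            key=lambda t: t[0],
        )
        labeled.append((u, lab))   # adopt the neighbor's label
        unlabeled.remove(u)
    return labeled

# Two labeled seeds and several unlabeled points near one seed or the other.
seeds = [((0.0, 0.0), "A"), ((10.0, 10.0), "B")]
extra = [(0.5, 0.1), (9.5, 9.9), (1.0, 0.2)]
result = self_train(seeds, extra)
```

Labels propagate outward from the two seeds, so even points that are not especially close to a seed, such as (1.0, 0.2), inherit the right label through an intermediate pseudo-labeled point.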