Deep learning has been revolutionizing many subfields such as natural language processing, computer vision, and robotics. Training deep neural networks involves many careful design decisions, and these decisions impact the training regime of the networks. Some of these design decisions include:

  • Which type of network layer to use, such as a convolution layer, linear layer, or recurrent layer, and how many layers deep should the network be?
  • What kind of normalization layer should we use, if any?
  • Which loss function should we optimize?

Largely, these design decisions depend upon the…

What is Self-supervised Learning?

Self-supervised learning is an unsupervised learning method in which a supervised learning task is created out of the unlabelled input data.

This task could be as simple as: given the upper half of an image, predict its lower half; or given the grayscale version of a colored image, predict its RGB channels.
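The two pretext tasks above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming the image is an H×W×3 array; the function names are hypothetical, not from any particular library:

```python
import numpy as np

def half_prediction_task(image):
    """Build an (input, target) pair: upper half -> lower half."""
    h = image.shape[0] // 2
    return image[:h], image[h:]

def colorization_task(image):
    """Build an (input, target) pair: grayscale version -> original RGB image."""
    # Standard luminance weights for RGB -> grayscale conversion.
    gray = image @ np.array([0.299, 0.587, 0.114])
    return gray, image

image = np.random.rand(32, 32, 3)            # a dummy 32x32 RGB image
upper, lower = half_prediction_task(image)
gray, target = colorization_task(image)
print(upper.shape, lower.shape, gray.shape)  # (16, 32, 3) (16, 32, 3) (32, 32)
```

The key point is that both the input and the "label" come from the same unlabelled image, so no human annotation is needed.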

Learn from yourself (Image by Author)

Why Self-supervised Learning?

Supervised learning usually requires a lot of labelled data. Getting good-quality labelled data is expensive and time-consuming, especially for complex tasks such as object detection and instance segmentation, where more detailed annotations are desired. On the…

Lately, self-supervised learning methods have become the cornerstone of unsupervised visual representation learning. One such method, Bootstrap Your Own Latent (BYOL), introduced recently, is reviewed in this post. I have already covered other interesting self-supervised learning methods based on contrastive learning that came before BYOL, such as SimCLR and MoCo, thoroughly in another post, which should be read to understand the fundamentals of this post. Please find them here.

Unlike other contrastive learning methods, BYOL achieves state-of-the-art performance without using any negative samples. …
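As a minimal sketch of what BYOL optimizes: its regression loss is a mean squared error between L2-normalized vectors, which works out to 2 minus twice their cosine similarity. In the full method this is applied to predictions and projections of two augmented views through an online and a momentum-updated target network; the snippet below shows only the loss itself, on plain NumPy vectors:

```python
import numpy as np

def byol_loss(pred, target):
    """MSE between L2-normalized vectors = 2 - 2 * cosine similarity.
    Note: no negative samples appear anywhere, unlike contrastive losses."""
    p = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return np.sum((p - t) ** 2, axis=-1)  # equals 2 - 2 * (p * t).sum(-1)

pred = np.array([[1.0, 0.0]])
target = np.array([[1.0, 0.0]])
loss = byol_loss(pred, target)  # ~0 when the two representations already agree
```

Minimizing this loss pulls the online prediction toward the target projection; the asymmetric predictor and the momentum target network are what prevent the trivial collapsed solution.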

With several advancements in deep learning, complex networks such as giant transformer networks and wider, deeper ResNets have evolved, and these have a large memory footprint. More often than not, while training these networks, deep learning practitioners need multiple GPUs to train them efficiently. In this post, I am going to walk you through how distributed neural network training can be set up over a GPU cluster using PyTorch.

Photo by Nana Dua on Unsplash

Usually, distributed training comes into the picture in two use cases.

  1. Model Splitting across GPUs: When the model is so large that it cannot fit into a single GPU’s memory…
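Model splitting (model parallelism) can be sketched in PyTorch by placing different parts of the network on different devices and moving activations between them in `forward`. This is a minimal illustration, not the full cluster setup; both devices are set to `"cpu"` here so the snippet runs anywhere, but in practice you would use `"cuda:0"` and `"cuda:1"`:

```python
import torch
import torch.nn as nn

# In practice: dev0, dev1 = torch.device("cuda:0"), torch.device("cuda:1")
dev0, dev1 = torch.device("cpu"), torch.device("cpu")

class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(128, 64).to(dev0)  # first half on device 0
        self.part2 = nn.Linear(64, 10).to(dev1)   # second half on device 1

    def forward(self, x):
        x = torch.relu(self.part1(x.to(dev0)))
        return self.part2(x.to(dev1))             # move activations across devices

model = SplitModel()
out = model(torch.randn(4, 128))
print(out.shape)  # torch.Size([4, 10])
```

The cost of this scheme is the activation transfer between devices on every forward and backward pass, which is why it is reserved for models that genuinely do not fit on one GPU.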

Objects In Mirror Are Closer Than They Appear


“Objects In Mirror Are Closer Than They Appear”. I am sure most of you have seen this disclaimer. Ever wondered why? Read along to find out!

Due to several advancements in camera technology, cameras are capable of capturing shots at various angles and views. Different angles, for instance a high angle from a drone, a bird's-eye view, or a side view, and different camera lenses, such as a fisheye lens, project the scene onto the camera image plane differently from the actual position of the object in the world. This phenomenon is termed perspective distortion in photography.
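How distance shapes apparent size can be seen from the pinhole camera model: the projected height of an object is roughly the focal length times the object's height divided by its distance, so doubling the distance halves the image size. A quick sketch with made-up numbers:

```python
def projected_height(focal_mm, object_height_m, distance_m):
    """Pinhole camera model: image height = f * H / Z (thin-lens approximation)."""
    return focal_mm * object_height_m / distance_m

# A 1.5 m tall car seen through a 35 mm lens:
near = projected_height(35, 1.5, 5)   # 5 m away  -> 10.5 mm on the sensor
far = projected_height(35, 1.5, 10)   # 10 m away -> 5.25 mm, half the size
print(near, far)
```

A convex side mirror shrinks the image further than a flat mirror would, and since we judge distance partly by apparent size, the shrunken image makes the object look farther away than it is, hence the disclaimer.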

The disclaimer “Objects In Mirror Are Closer…

Nilesh Vijayrania

Intrigued by Deep Learning and all things ML.
