Deep learning has been revolutionizing many subfields such as natural language processing, computer vision, and robotics. It involves training carefully designed deep neural networks, and various design decisions impact the training regime of these networks. Some of these design decisions include
These design decisions largely depend upon the…
Self-supervised learning is an unsupervised learning method in which a supervised learning task is created out of the unlabelled input data.
This task could be as simple as predicting the lower half of an image given its upper half, or predicting the RGB channels of an image given its grayscale version.
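As a minimal sketch of such a pretext task (using NumPy, with a random array standing in for a real unlabelled image, purely for illustration):

```python
import numpy as np

# A stand-in for an unlabelled image: height x width x channels.
image = np.random.rand(32, 32, 3)

def make_half_prediction_pair(img):
    """Build an (input, target) pair for the 'predict the lower half
    of the image given the upper half' pretext task."""
    h = img.shape[0] // 2
    upper, lower = img[:h], img[h:]
    return upper, lower

x, y = make_half_prediction_pair(image)
# A model would then be trained to predict y (lower half) from x (upper half),
# with no human labels required.
```

The supervision signal comes entirely from the data itself: the "label" is just another part of the same image.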
Why Self-supervised Learning?
Supervised learning usually requires a lot of labelled data. Getting good-quality labelled data is expensive and time-consuming, especially for complex tasks such as object detection and instance segmentation, where more detailed annotations are desired. On the…
Lately, self-supervised learning methods have become the cornerstone of unsupervised visual representation learning. One such method, Bootstrap Your Own Latent (BYOL), introduced recently, is reviewed in this post. I have already covered other interesting self-supervised learning methods based on contrastive learning that came before BYOL, such as SimCLR and MoCo, thoroughly in another post, which should be read for the fundamentals underlying this one. Please find them here.
Unlike other contrastive learning methods, BYOL achieves state-of-the-art performance without using any negative samples. …
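To make the "no negative samples" point concrete, here is a minimal NumPy sketch of BYOL's regression loss, assuming `q` holds the online network's predictions and `z` the target network's projections for a batch (names and shapes are illustrative):

```python
import numpy as np

def byol_loss(q, z):
    """BYOL's regression loss: mean squared error between the
    L2-normalized online predictions q and target projections z.
    For unit vectors this equals 2 - 2 * cosine_similarity(q, z),
    averaged over the batch. No negative samples are involved."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return np.mean(np.sum((q - z) ** 2, axis=1))

# Identical representations give (near-)zero loss.
v = np.random.rand(4, 8)
loss_same = byol_loss(v, v)
```

The loss only pulls the two augmented views of the same image together; contrastive methods additionally push apart representations of different images, which is the part BYOL dispenses with.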
With several advancements in deep learning, complex networks such as giant transformers and wider, deeper ResNets have evolved, and these carry a large memory footprint. More often than not, deep learning practitioners need to use multiple GPUs to train them efficiently. In this post, I walk you through how distributed neural network training can be set up over a GPU cluster using PyTorch.
Usually, distributed training comes into the picture in two use-cases.
“Objects In Mirror Are Closer Than They Appear”. I am sure most of you have seen this disclaimer. Ever wondered why? Read along to find out!
Thanks to several advancements in camera technology, cameras can capture shots from various angles and views. Different angles (for instance, a drone's high-angle shot, a bird's-eye view, or a side view) and different lenses (such as a fisheye lens) project the scene onto the camera's image plane differently from the actual position of the object in the world. This phenomenon is termed perspective distortion in photography.
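To see why the projected position depends on camera geometry, here is a minimal ideal pinhole-camera sketch (the focal length and point coordinates are arbitrary illustrative values): a 3D point (X, Y, Z) in camera coordinates lands at image coordinates (f·X/Z, f·Y/Z), so the same object projects to different places as distance and viewpoint change.

```python
import numpy as np

def project_pinhole(point_3d, focal_length):
    """Project a 3D point (camera coordinates, Z pointing forward)
    onto the image plane of an ideal pinhole camera:
    (x, y) = focal_length * (X/Z, Y/Z)."""
    X, Y, Z = point_3d
    return np.array([focal_length * X / Z, focal_length * Y / Z])

# The same offset from the optical axis, seen from twice the distance,
# appears at half the image offset -- the essence of perspective.
near = project_pinhole((1.0, 0.5, 2.0), focal_length=1.0)
far = project_pinhole((1.0, 0.5, 4.0), focal_length=1.0)
```

Curved mirrors and wide-angle lenses bend this mapping further, which is what makes objects appear smaller (and hence farther) than they really are.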
The disclaimer “Objects In Mirror Are Closer…
Intrigued by deep learning and all things ML.