We recently gave a talk on Semantic Segmentation to Danbury AI.
One of the first fundamental problems of computer vision was the classification of images: given a matrix of pixel data, we must assign the image a categorical class label. Once we can classify images, our computer vision applications can reason about class labels instead of comparing raw pixel values. Semantic segmentation takes image classification to the next level of complexity: not only do we need to classify an image, we must also delineate the specific regions of the image that belong to one or more trained classes representing objects of interest. In effect, every pixel receives a class label. When we can obtain successful semantic segmentations, we can achieve impressive feats of machine reasoning, such as self-driving cars.
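To make the idea of per-pixel classification concrete, here is a minimal sketch in NumPy. It assumes we already have a map of per-pixel class scores (in practice produced by a trained segmentation network; the values below are hand-written for illustration) and shows how a segmentation mask is just an argmax over the class dimension at every pixel:

```python
import numpy as np

# Toy per-pixel class scores for a 2x2 image over 3 hypothetical classes
# (say: 0 = background, 1 = road, 2 = car). In a real pipeline these
# scores would come from a trained model; here they are made up.
scores = np.array([
    [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
    [[0.2, 0.2, 0.6],   [0.7, 0.2, 0.1]],
])  # shape: (height, width, num_classes)

# Semantic segmentation assigns each pixel the class with the highest
# score: an argmax over the last (class) axis.
mask = scores.argmax(axis=-1)
print(mask)
# [[0 1]
#  [2 0]]
```

The resulting `mask` has the same height and width as the image, with one class label per pixel, which is exactly the region map described above.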