
Python Code for Descriptors in Image Processing

A comprehensive guide to feature extraction and matching using Python and OpenCV

Key Takeaways

  • Comprehensive Integration: Explore various feature descriptors such as ORB, SIFT, SURF, and HOG, as well as shape descriptors like Zernike Moments.
  • Implementation Practices: Gain practical insights into using OpenCV for detecting keypoints, extracting descriptors, and matching features using both brute-force and FLANN approaches.
  • Application Versatility: Understand how these descriptors enhance tasks like image matching, object recognition, and shape comparison in modern computer vision applications.

Introduction

Feature descriptors are fundamental components in image processing and computer vision. They transform visual information from images into numerical representations that describe key points or regions. These descriptors are indispensable for tasks such as image matching, object recognition, image stitching, and shape analysis.

When dealing with images, extracting and comparing features are the first steps in building robust computer vision systems. Python, combined with the OpenCV library, offers an extensive suite of tools that empower developers to implement various feature extraction algorithms easily. In this guide, we delve into the implementation of some of the most popular feature descriptors and matching techniques, providing code samples and explanations.


Understanding Feature Descriptors

Feature descriptors capture essential information from images by recognizing unique patterns and keypoints that remain stable under various conditions such as scale, orientation, and illumination. They convert these keypoints into feature vectors that can later be compared across different images. The quality and robustness of these descriptors directly impact the performance of higher-level tasks such as object detection and image matching.
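
To make this concrete, the short sketch below builds a small synthetic test image (so no image files are assumed) and prints the shape and type of the feature vectors produced by two common OpenCV extractors: ORB yields 32-byte binary descriptors per keypoint, while SIFT yields 128-dimensional floating-point vectors.

import cv2
import numpy as np

# Synthetic test image: a filled square with a hole gives the detectors
# corners and edges to respond to, so no image files are needed
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (40, 40), (160, 160), 255, -1)
cv2.circle(img, (100, 100), 30, 0, -1)

for name, detector in [("ORB", cv2.ORB_create()), ("SIFT", cv2.SIFT_create())]:
    keypoints, descriptors = detector.detectAndCompute(img, None)
    if descriptors is None:
        print(name, "found no keypoints on this image")
    else:
        # ORB: (N, 32) uint8 rows; SIFT: (N, 128) float32 rows
        print(name, descriptors.shape, descriptors.dtype)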

Types of Descriptors

Commonly used feature descriptors include:

  • ORB (Oriented FAST and Rotated BRIEF): A fast and efficient descriptor that is free of patent restrictions. It is widely used for real-time applications.
  • SIFT (Scale-Invariant Feature Transform): Highly robust to scale and rotation changes, SIFT produces distinctive 128-dimensional descriptors. Its patent expired in 2020, and it has shipped in the main OpenCV distribution since version 4.4.
  • SURF (Speeded Up Robust Features): Similar to SIFT but faster, SURF trades some precision for speed. It is patented and only available in OpenCV builds compiled with the OPENCV_ENABLE_NONFREE flag.
  • HOG (Histogram of Oriented Gradients): Primarily used for object detection, HOG captures edge orientation distributions effectively, most notably used in human detection.
  • Zernike Moments: These are shape descriptors that provide a rotation-invariant representation of object shapes, making them ideal for shape comparison tasks.

Implementing Feature Descriptors with Python Code

Below you will find a collection of Python code examples that demonstrate how to detect keypoints, extract descriptors, and perform matching using OpenCV. These examples cover both standard feature descriptors (ORB, SIFT) and shape descriptors (Zernike Moments).

1. Using ORB for Feature Detection and Description

ORB is an efficient and effective binary descriptor that is ideal for real-time applications. The following example demonstrates how to load two images in grayscale, detect keypoints, extract their descriptors with ORB, and match them using a Brute-Force matcher.

ORB Feature Extraction and Matching Code


# Import necessary libraries
import cv2
import numpy as np

# Load images in grayscale
img1 = cv2.imread('image1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('image2.jpg', cv2.IMREAD_GRAYSCALE)

# Initialize ORB detector
orb = cv2.ORB_create()

# Detect keypoints and compute descriptors for both images
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Create a Brute-Force Matcher with Hamming distance
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors between the two images
matches = bf.match(des1, des2)

# Sort matches based on their distance (the lower, the better)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the top 10 matches for visualization
img_matches = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

# Display matched results
cv2.imshow('ORB Matches', img_matches)
cv2.waitKey(0)
cv2.destroyAllWindows()
  

In this script, we first load the images and use ORB to detect keypoints and compute descriptors for both images. The Brute-Force matcher with Hamming distance (ideal for binary descriptors) is used to find matching pairs between the images. Finally, the top matches are drawn and displayed.
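
Matched keypoints commonly feed a geometric estimation step. As a hedged illustration, the snippet below could be appended to the end of the ORB script above (it reuses kp1, kp2, and matches) to estimate a homography between the two views with RANSAC, the same building block used in image stitching:

# Continuation of the ORB script above: reuses kp1, kp2 and matches.
# A homography needs at least 4 point correspondences.
if len(matches) >= 4:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC fits the 3x3 transform while rejecting outlier matches
    H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    if H is not None:
        print("Estimated homography:\n", H)
        print("Inlier matches:", int(mask.sum()))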


2. Using SIFT for Superior Descriptor Robustness

SIFT is known for its robustness in detecting and describing keypoints, making it well suited to challenging image conditions. The following example demonstrates how to use SIFT in much the same way as ORB.

SIFT Feature Extraction and Matching Code


import cv2
import numpy as np
import matplotlib.pyplot as plt

def main():
    # Load an image and convert it to grayscale
    image_path = 'image.jpg'  # Replace with your image file
    image = cv2.imread(image_path)
    if image is None:
        print("Image not found or unable to load.")
        return

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Create a SIFT detector (in the main opencv-python package since OpenCV 4.4)
    sift = cv2.SIFT_create()

    # Detect keypoints and compute descriptors
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    
    print("Number of keypoints detected:", len(keypoints))
    print("Descriptor shape:", descriptors.shape if descriptors is not None else None)
    
    # Draw keypoints with additional size and orientation details
    img_with_keypoints = cv2.drawKeypoints(gray, keypoints, None, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

    # Display the keypoints using matplotlib
    # drawKeypoints returns a BGR image; convert it for matplotlib's RGB display
    plt.imshow(cv2.cvtColor(img_with_keypoints, cv2.COLOR_BGR2RGB))
    plt.title("SIFT Keypoints")
    plt.axis('off')
    plt.show()

if __name__ == "__main__":
    main()
  

This code initializes the SIFT detector, computes keypoints and descriptors, and draws the keypoints together with their scale and orientation, the properties that make SIFT robust to image transformations.
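
Because SIFT descriptors are floating-point vectors, they are matched with Euclidean (L2) distance rather than Hamming distance. A minimal sketch, assuming two image files 'image1.jpg' and 'image2.jpg' exist, combines BFMatcher with Lowe's ratio test:

import cv2

# Assumed input files; replace with your own images
img1 = cv2.imread('image1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('image2.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# L2 norm suits float descriptors; crossCheck must stay off for knnMatch
bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(des1, des2, k=2)

# Keep a match only if it is clearly better than the second-best candidate
good = [p[0] for p in matches
        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
print("Good matches:", len(good))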


3. Exploring Other Descriptors: SURF and HOG

While ORB and SIFT are popular, there are other descriptors available:

SURF Descriptor

SURF (Speeded Up Robust Features) offers a good compromise between speed and accuracy. Note that SURF is patented: it lives in the opencv-contrib xfeatures2d module and is only enabled in builds compiled with the OPENCV_ENABLE_NONFREE flag, which the standard prebuilt wheels are not.


import cv2
import numpy as np

# Load image in grayscale
img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Create a SURF detector (requires an OpenCV build with OPENCV_ENABLE_NONFREE)
surf = cv2.xfeatures2d.SURF_create()

# Detect keypoints and compute descriptors
keypoints, descriptors = surf.detectAndCompute(img, None)

print("Number of keypoints detected with SURF:", len(keypoints))
print("SURF descriptor shape:", descriptors.shape if descriptors is not None else None)
  

HOG Descriptor

The Histogram of Oriented Gradients (HOG) is not a keypoint detector but a dense descriptor: it captures the distribution of gradient directions over a grid of cells. HOG is especially effective for object detection.


import cv2
import numpy as np

# Load image in grayscale
img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Define HOG parameters
win_size = (64, 128)
block_size = (16, 16)
block_stride = (8, 8)
cell_size = (8, 8)
num_bins = 9

# Initialize the HOG descriptor
hog = cv2.HOGDescriptor(win_size, block_size, block_stride, cell_size, num_bins)

# Resize to the detection window so compute() returns a single descriptor
img = cv2.resize(img, win_size)

# Compute the HOG descriptor for the resized window
descriptors = hog.compute(img)
print("HOG descriptor shape:", descriptors.shape)
  
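
With these parameters the descriptor length is fixed by the geometry: a 64×128 window holds ((64 − 16)/8 + 1) × ((128 − 16)/8 + 1) = 7 × 15 block positions, each block covers 2 × 2 cells, and each cell contributes a 9-bin histogram, so the vector has 7 × 15 × 4 × 9 = 3780 values.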

Comparing Feature Descriptors

Beyond extraction, one of the primary uses of feature descriptors is to compare images through matching. OpenCV offers different methods to perform this matching:

Brute-Force Matching

The Brute-Force (BF) matcher compares every descriptor from one image with every descriptor in the other. It uses distance metrics such as L2 for floating-point descriptors (e.g., SIFT) and Hamming for binary descriptors (e.g., ORB). The BFMatcher can also enable cross-checking, which keeps only matches that agree in both directions.

Brute-Force Matching Example with ORB Descriptors


import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load query and train images in grayscale
query_img = cv2.imread('box.png', cv2.IMREAD_GRAYSCALE)
train_img = cv2.imread('box_in_scene.png', cv2.IMREAD_GRAYSCALE)

# Initialize ORB detector
orb = cv2.ORB_create()

# Detect keypoints and compute descriptors for both images
kp1, des1 = orb.detectAndCompute(query_img, None)
kp2, des2 = orb.detectAndCompute(train_img, None)

# Create the BFMatcher object using Hamming distance and cross-check enabled
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the top 10 matches
matched_img = cv2.drawMatches(query_img, kp1, train_img, kp2, matches[:10], None,
                              flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

# Display the result using matplotlib
# drawMatches returns a BGR image; convert it for matplotlib's RGB display
plt.imshow(cv2.cvtColor(matched_img, cv2.COLOR_BGR2RGB))
plt.title("ORB Brute-Force Matching")
plt.axis('off')
plt.show()
  

FLANN Based Matching

For large datasets or high-dimensional descriptors, the FLANN (Fast Library for Approximate Nearest Neighbors) matcher provides a more efficient alternative. FLANN builds index structures to approximate nearest neighbors quickly: KD-trees for floating-point descriptors such as SIFT, and an LSH index for binary descriptors such as ORB (sketched after the example below).

FLANN Matching Example with SIFT Descriptors


import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load images in grayscale
img1 = cv2.imread('box.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('box_in_scene.png', cv2.IMREAD_GRAYSCALE)

# Create a SIFT detector
sift = cv2.SIFT_create()

# Detect keypoints and compute descriptors using SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Define FLANN parameters for SIFT (using KDTree)
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)

# Initialize the FLANN based matcher
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Using knnMatch to get the 2 best matches for each descriptor
matches = flann.knnMatch(des1, des2, k=2)

# Apply Lowe's ratio test to filter good matches
good_matches = []
for pair in matches:
    # knnMatch can return fewer than 2 neighbors for some queries
    if len(pair) < 2:
        continue
    m, n = pair
    if m.distance < 0.75 * n.distance:
        good_matches.append([m])

# Draw the good matches using drawMatchesKnn
result_img = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good_matches, None,
                                flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
# drawMatchesKnn returns a BGR image; convert it for matplotlib's RGB display
plt.imshow(cv2.cvtColor(result_img, cv2.COLOR_BGR2RGB))
plt.title("SIFT FLANN Matching with Ratio Test")
plt.axis('off')
plt.show()
  
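
The KD-tree index above suits floating-point descriptors such as SIFT. For binary descriptors like ORB, FLANN is used with an LSH index instead; here is a sketch of the setup, using the parameter values suggested in the OpenCV feature-matching tutorial (they can be tuned):

import cv2

# LSH index for binary descriptors; parameter values follow the
# OpenCV feature-matching tutorial and can be tuned
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,       # number of hash tables
                    key_size=12,          # bits per hash key
                    multi_probe_level=1)  # neighboring buckets to probe
search_params = dict(checks=50)

flann = cv2.FlannBasedMatcher(index_params, search_params)
# knnMatch is then applied to ORB descriptors exactly as in the SIFT example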

Advanced Descriptor: Zernike Moments for Shape Analysis

Zernike moments are used to extract rotationally invariant shape descriptors from images. They are particularly useful when the task involves comparing the shapes of objects irrespective of their orientation.

Implementing Zernike Moments in Python

The following example computes Zernike moments using the mahotas library (an off-the-shelf implementation; pip install mahotas) and demonstrates an approach to comparing images based on the resulting feature vectors.

Zernike Moments and Image Comparison Code


import cv2
import numpy as np
import mahotas  # pip install mahotas; provides the Zernike implementation
from scipy.spatial import distance

class ZernikeMoments:
    def __init__(self, radius, degree=8):
        # radius: disc over which moments are computed;
        # degree: maximum order of the polynomial basis
        self.radius = radius
        self.degree = degree

    def describe(self, image):
        # Rotation-invariant Zernike moments of a binary shape image,
        # computed here with mahotas (any Zernike implementation works)
        return mahotas.features.zernike_moments(image, self.radius,
                                                degree=self.degree)

class Searcher:
    def __init__(self, index):
        self.index = index
    
    def search(self, queryFeatures):
        results = {}
        for key, features in self.index.items():
            d = distance.euclidean(queryFeatures, features)
            results[key] = d
        # Sort results based on Euclidean distance
        results = sorted([(v, k) for k, v in results.items()])
        return results

# Load and preprocess the image
image_path = 'query_image.jpg'
image = cv2.imread(image_path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Apply adaptive thresholding to obtain a binary image
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 11, 7)
outline = np.zeros(gray.shape, dtype="uint8")

# Find contours from the thresholded image and draw the largest one filled
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                        cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # OpenCV 3 vs. 4 return values
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
cv2.drawContours(outline, cnts[:1], -1, 255, -1)

# Compute Zernike moments of the shape outline
desc = ZernikeMoments(21)
queryFeatures = desc.describe(outline)
print("Computed Zernike Moments:", queryFeatures)

# Assume an index of pre-computed features exists (for demonstration)
feature_index = {
    "img_001": np.zeros(queryFeatures.shape),
    "img_002": np.ones(queryFeatures.shape)
}

# Compare using a searcher instance
searcher = Searcher(feature_index)
results = searcher.search(queryFeatures)
print("Image similarity results:", results)
  

In this example, Zernike moments are computed from the binarized outline of a query image. A searcher class then compares the resulting feature vector against pre-computed descriptors for other images by Euclidean distance, demonstrating a method for shape-based image comparison.


Descriptor Summary Table

The table below summarizes key properties, strengths, and considerations for each of the discussed descriptors:

Descriptor | Key Attributes | Best Used For
ORB | FAST keypoint detection, binary descriptor, efficient, patent-free | Real-time applications, matching, mobile apps
SIFT | Scale and rotation invariance, floating-point descriptor, distinctive | Robust matching under varying conditions, object recognition
SURF | Faster alternative to SIFT, patented | Speed-critical applications with acceptable trade-offs in precision
HOG | Edge orientation histograms, not a keypoint detector | Object detection, especially human detection
Zernike Moments | Rotation-invariant shape descriptor, numerical moments | Shape analysis, pattern recognition

Conclusion

In this guide, we explored several feature descriptors used in image processing with Python, focusing on a variety of methods ranging from binary descriptors like ORB to robust methods like SIFT and SURF, as well as shape descriptors such as Zernike Moments and gradient-based HOG. The provided code examples demonstrate how to extract features, visualize detected keypoints, and perform matching using both the Brute-Force and FLANN matching techniques.

Feature descriptors play a critical role in accurately comparing images and recognizing objects, and the choice of descriptor largely depends on the specific application requirements, computational resources, and patent considerations. By mastering these techniques, developers can implement cutting-edge computer vision systems capable of robust image analysis.

