CQNN

Convolutional Quadratic Neural Networks

Standard convolutional neural networks rely on linear transformations followed by an activation function (like ReLU). CQNN rethinks this fundamental building block by introducing Quadratic Neurons, which inherently capture second-order interactions between input features.

Why Quadratic Neurons?

In many computer vision tasks, the decision boundaries are naturally non-linear. By using a quadratic form $y = x^T W x$ as the basic operation (sketched in code after this list), CQNNs can:

  • Increased Model Capacity: Capture complex patterns with fewer total layers.
  • Improved Decision Boundaries: Achieve better class separation in high-dimensional feature spaces than standard linear layers.
  • Robust Feature Extraction: Enhance performance in texture analysis and other intricate pattern recognition tasks.
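
To make the operation concrete, here is a minimal PyTorch sketch of a convolutional quadratic layer: each sliding patch is flattened into a vector $x$, and the response per output channel is the quadratic form $x^T W x$. The class name `QuadraticConv2d`, the bias term, and the initialization scale are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuadraticConv2d(nn.Module):
    """Sketch of a convolutional quadratic (CQ) layer: the response of each
    output channel to a flattened patch x is the quadratic form x^T W x.
    Illustrative only; not necessarily the paper's parameterization."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.kernel_size, self.stride, self.padding = kernel_size, stride, padding
        d = in_channels * kernel_size * kernel_size  # flattened patch dimension
        # One d x d weight matrix per output channel (assumed init scale).
        self.W = nn.Parameter(torch.randn(out_channels, d, d) / d)
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        b, _, h, w = x.shape
        # Extract sliding patches: (batch, d, num_patches).
        patches = F.unfold(x, self.kernel_size, padding=self.padding, stride=self.stride)
        # y[b, o, p] = patches[b, :, p]^T @ W[o] @ patches[b, :, p]
        y = torch.einsum('bip,oij,bjp->bop', patches, self.W, patches)
        y = y + self.bias.view(1, -1, 1)
        # Fold the flat patch index back into a spatial map.
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return y.view(b, -1, h_out, w_out)
```

For example, `QuadraticConv2d(3, 8, 3, padding=1)` maps a `(2, 3, 32, 32)` batch to `(2, 8, 32, 32)`, the same geometry as an `nn.Conv2d` with matching arguments.
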
Figure: Architectural comparison: Standard Convolution vs. Convolutional Quadratic (CQ) layers.

Key Contributions

  • Non-Linear Primitives: Quadratic operations integrated directly into the convolutional framework.
  • Experimental Validation: Superior performance demonstrated on standard image classification and pattern recognition benchmarks.
  • Efficiency: High accuracy achieved with a more compact representation of non-linearities (see the low-rank sketch after this list).
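
The full quadratic form above costs $d^2$ parameters per output channel (with $d$ the flattened patch dimension). A common way to obtain the compactness the efficiency claim points to is a low-rank factorization $W = U V^T$, so that $x^T W x = (U^T x) \cdot (V^T x)$. This is a generic device, hypothetical here, and not necessarily the factorization used in the paper; the class name `LowRankQuadraticConv2d` and the default `rank` are likewise assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankQuadraticConv2d(nn.Module):
    """Low-rank variant of the quadratic layer: W = U @ V^T with rank
    r << d, so x^T W x = sum_r (U^T x)_r * (V^T x)_r. Parameters per
    channel drop from d^2 to 2*r*d. A hypothetical sketch, not the
    paper's parameterization."""

    def __init__(self, in_channels, out_channels, kernel_size, rank=4, stride=1, padding=0):
        super().__init__()
        self.kernel_size, self.stride, self.padding = kernel_size, stride, padding
        d = in_channels * kernel_size * kernel_size  # flattened patch dimension
        self.U = nn.Parameter(torch.randn(out_channels, d, rank) / d ** 0.5)
        self.V = nn.Parameter(torch.randn(out_channels, d, rank) / d ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        b, _, h, w = x.shape
        patches = F.unfold(x, self.kernel_size, padding=self.padding, stride=self.stride)
        # Project each patch onto the rank-r factors: (b, o, r, p).
        u = torch.einsum('bip,oir->borp', patches, self.U)
        v = torch.einsum('bip,oir->borp', patches, self.V)
        # Contract over the rank dimension to recover x^T W x per channel.
        y = (u * v).sum(dim=2) + self.bias.view(1, -1, 1)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return y.view(b, -1, h_out, w_out)
```

For a 3-channel 3x3 kernel ($d = 27$), this cuts per-channel parameters from $729$ to $2 \cdot 4 \cdot 27 = 216$ at rank 4. Since $x^T W x$ depends only on the symmetric part of $W$, tying $V = U$ is a natural further restriction that yields a positive semi-definite form.
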

References

  1. CQNN: Convolutional Quadratic Neural Networks
     Pranav Mantini and Shishir K. Shah
     In 2020 25th International Conference on Pattern Recognition (ICPR), 2021