CNN Model Compression and Acceleration: Design Ideas, Algorithms, and Experimental Examples

Since AlexNet won the ILSVRC 2012 ImageNet image classification competition, convolutional neural networks (CNNs) have sparked a revolution in computer vision. By replacing hand-crafted features and traditional classifiers with end-to-end learning, CNNs dramatically improved the accuracy of image recognition tasks, even surpassing human performance on some benchmarks such as the LFW face recognition dataset.

As CNN models became more powerful, their depth and size also grew rapidly, raising concerns about practical deployment. Large models ran only on high-performance platforms and were difficult to deploy on mobile or embedded devices; transferring them over a network required substantial bandwidth, and their power consumption and inference latency posed serious obstacles for real-world applications. Despite their impressive accuracy, such models were therefore still far from being widely usable in practice.

In response to these challenges, researchers began exploring ways to miniaturize and accelerate CNN models. Early methods included weight pruning and matrix singular value decomposition (SVD), but these approaches often failed to achieve satisfactory compression ratios or efficiency gains. In recent years, model compression work has split into two main categories: methods that reduce the number of weights and methods that optimize the network architecture. From the perspective of computational efficiency, compression methods can also be divided into those that only shrink the model size and those that additionally improve speed.

This paper discusses several representative works and methods, including SqueezeNet, Deep Compression, XNOR-Net, Distillation, MobileNet, and ShuffleNet. These methods can be broadly classified by their compression strategy and by whether they also target speed improvement. Table 2 compares several classic compression methods:

| Method           | Compression Approach | Speed Consideration |
|------------------|----------------------|---------------------|
| SqueezeNet       | Architecture         | No                  |
| Deep Compression | Weights              | No                  |
| XNOR-Net         | Weights              | Yes                 |
| Distillation     | Architecture         | No                  |
| MobileNet        | Architecture         | Yes                 |
| ShuffleNet       | Architecture         | Yes                 |

### 1.1 Design Ideas

SqueezeNet was introduced in the 2016 paper "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size" by F. N. Iandola, S. Han, et al. It is a compact CNN architecture that matches AlexNet's accuracy with about 50x fewer parameters; combined with Deep Compression, the resulting model is roughly 510 times smaller than AlexNet (under 0.5 MB). The core idea behind SqueezeNet is to minimize the number of parameters while maintaining accuracy, which is the ultimate goal of all model compression techniques. To achieve this, SqueezeNet follows a three-point design strategy:

**Strategy 1: Replace 3x3 convolutions with 1x1 convolutions.** A 1x1 convolution has only 1/9 the parameters of a 3x3 convolution, so this substitution theoretically reduces the parameter count of the affected layers by a factor of 9.
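To make the 1/9 factor concrete, here is a minimal PyTorch sketch comparing the weight counts of a 3x3 and a 1x1 convolution with the same input and output channels. The channel counts are illustrative assumptions, not values from the paper.

```python
import torch.nn as nn

# Illustrative channel counts (assumptions, not taken from the SqueezeNet paper).
in_channels, out_channels = 64, 64

conv3x3 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=False)
conv1x1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

params_3x3 = sum(p.numel() for p in conv3x3.parameters())  # 64 * 64 * 3 * 3 = 36864
params_1x1 = sum(p.numel() for p in conv1x1.parameters())  # 64 * 64 * 1 * 1 = 4096

print(params_3x3, params_1x1, params_3x3 / params_1x1)  # ratio is exactly 9.0
```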
**Strategy 2: Reduce the number of input channels to 3x3 convolutions.** The parameter count of a convolutional layer with 3x3 kernels is

$$ \text{Parameters} = N \times C \times 3 \times 3 $$

where $N$ is the number of filters (output channels) and $C$ is the number of input channels. To reduce the overall parameter count, both the number of 3x3 filters $N$ and the number of input channels $C$ feeding them must be kept small.

**Strategy 3: Delay downsampling as much as possible.** Downsampling operations, such as strided convolutions or pooling layers, reduce the spatial dimensions of feature maps. Delaying downsampling keeps higher-resolution feature maps in more layers of the network, which tends to improve classification accuracy. This strategy helps preserve model performance while keeping the model compact.

Among the three strategies, the first two focus on reducing the number of parameters, while the third aims to maximize accuracy. A minimal code sketch of how the first two strategies combine is given below.
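As a concrete illustration of strategies 1 and 2, the following is a simplified PyTorch sketch in the spirit of SqueezeNet's Fire module: a 1x1 "squeeze" convolution first shrinks the channel count, so the subsequent 3x3 "expand" convolution sees far fewer input channels. The specific channel sizes below are illustrative assumptions rather than a prescribed configuration.

```python
import torch
import torch.nn as nn

class SqueezeExpandBlock(nn.Module):
    """Simplified sketch in the spirit of SqueezeNet's Fire module.

    A 1x1 "squeeze" layer reduces the channel count (Strategy 2) before a
    mix of 1x1 and 3x3 "expand" convolutions (Strategy 1). Channel sizes
    here are illustrative assumptions.
    """

    def __init__(self, in_channels=96, squeeze=16, expand1x1=64, expand3x3=64):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, squeeze, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat(
            [self.relu(self.expand1x1(x)), self.relu(self.expand3x3(x))], dim=1
        )

block = SqueezeExpandBlock()
out = block(torch.randn(1, 96, 55, 55))
print(out.shape)  # torch.Size([1, 128, 55, 55]): 64 + 64 expand channels
```

Because the 3x3 expand convolution operates on only 16 squeezed channels instead of 96, its weight count drops from 96 x 64 x 3 x 3 to 16 x 64 x 3 x 3, exactly following the $N \times C \times 3 \times 3$ formula above.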
