[AI] Deep Learning Model Compression and Acceleration Techniques (5): Compact Networks
Related posts: Deep Learning Model Compression and Acceleration Techniques (5): Compact Networks | (6): Knowledge Distillation | (7): Hybrid Methods | Summary
The survey's taxonomy splits methods into A: compressing parameters and B: compressing structure.

Definition: Designing new, more compact network architectures is an emerging approach to network compression and acceleration, in which specially structured filters, layers, or even entire networks are constructed.

Characteristics: A compact network is trained from scratch and reaches performance suitable for deployment on resource-constrained devices such as mobile platforms. Unlike parameter-compression methods, it needs neither a stored pre-trained model nor fine-tuning to recover accuracy, which lowers time cost; it combines small storage footprint, low computation, and good network performance.

Drawbacks: Its special structure is hard to combine with other compression and acceleration methods, and its generalization is weak, so it is poorly suited to serve as a pre-trained model for training other models.

1. Kernel level

New convolution kernels
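The works cited under this theme include low-rank filters [134], lookup-based convolutions (LCNN) [135], and versatile filters [136]. As a self-contained illustration of replacing a dense kernel with a cheaper factorized one, here is a minimal PyTorch sketch of a depthwise separable convolution, the building block of MobileNets [125]; the layer sizes and the BN/ReLU placement are illustrative assumptions, not any single cited paper's exact design.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Factorize a dense 3x3 convolution into a per-channel (depthwise)
    3x3 convolution plus a 1x1 pointwise convolution, as in MobileNets
    [125]. BN/ReLU placement follows common practice (an assumption)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch: each 3x3 filter sees exactly one input channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))

# Parameter count vs. a dense 3x3 convolution (64 -> 128 channels):
dense = nn.Conv2d(64, 128, 3, padding=1, bias=False)
sep = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in dense.parameters()))  # 73728
print(sum(p.numel() for p in sep.parameters()))    # 9152 (incl. BN)
```

The factorization trades one large kernel for two small ones: here roughly an 8x reduction in parameters for the same receptive field.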
Simple filter combinations
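Among the cited works, the shift operation [141, 142] reads most directly as a combination of trivially simple filters: spatial information is mixed by shifting channel groups in different directions (zero FLOPs, zero parameters), and all learning is left to 1x1 convolutions. A minimal PyTorch sketch under stated assumptions: the five shift directions and the equal channel split are illustrative, and torch.roll gives circular shifts where the paper effectively zero-pads.

```python
import torch
import torch.nn as nn

class ShiftConv(nn.Module):
    """Shift + pointwise convolution in the spirit of [141].
    Assumptions: 5 shift directions (up/down/left/right/identity) over
    an even channel split; circular shifts via torch.roll stand in for
    the paper's zero-padded shifts."""
    SHIFTS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]

    def __init__(self, in_ch, out_ch):
        super().__init__()
        assert in_ch % len(self.SHIFTS) == 0
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        groups = torch.chunk(x, len(self.SHIFTS), dim=1)
        shifted = [torch.roll(g, shifts=s, dims=(2, 3))
                   for g, s in zip(groups, self.SHIFTS)]
        return self.pointwise(torch.cat(shifted, dim=1))

x = torch.randn(1, 60, 32, 32)
print(ShiftConv(60, 120)(x).shape)  # torch.Size([1, 120, 32, 32])
```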
2. Layer level
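Of the layer-level works cited, stochastic depth [137] is compact enough to sketch: during training, each residual block's transform branch is dropped at random, shortening the network's expected depth; at inference, the branch output is scaled by its survival probability. The block body and the survival probability below are illustrative, not the paper's exact ResNet configuration.

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """Residual block with stochastic depth [137]. In training the
    residual branch survives with probability p (otherwise the block
    acts as an identity); in evaluation the branch is scaled by p."""
    def __init__(self, ch, survival_prob=0.8):
        super().__init__()
        self.p = survival_prob
        self.branch = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        if self.training:
            if torch.rand(()).item() < self.p:   # branch survives
                return torch.relu(x + self.branch(x))
            return x                              # whole block skipped
        return torch.relu(x + self.p * self.branch(x))
```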
3. Network-structure level
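At this level the reference list includes complete efficiency-oriented architectures such as SqueezeNet [124], MobileNets [125, 126], and ShuffleNet [127, 128]. ShuffleNet's signature channel-shuffle operation is small enough to show here: it lets cheap group convolutions exchange information across groups using only a reshape and a transpose. The demo tensor below is illustrative.

```python
import torch

def channel_shuffle(x, groups):
    """ShuffleNet channel shuffle [127]: interleave channels across
    groups (reshape -> transpose -> flatten) so the next group
    convolution sees channels from every group. No parameters and no
    arithmetic are involved."""
    n, c, h, w = x.size()
    assert c % groups == 0
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

x = torch.arange(8.0).view(1, 8, 1, 1)  # channels tagged 0..7
print(channel_shuffle(x, groups=2).flatten().tolist())
# [0.0, 4.0, 1.0, 5.0, 2.0, 6.0, 3.0, 7.0]
```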
References

Primary reference: Gao H, Tian YL, Xu FY, Zhong S. Survey of deep learning model compression and acceleration. Journal of Software (软件学报), 2021, 32(1): 68-92. DOI: 10.13328/j.cnki.jos.006096.

[124] Iandola FN, Han S, Moskewicz MW, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360, 2016.
[125] Howard AG, Zhu M, Chen B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[126] Sandler M, Howard A, Zhu M, et al. MobileNetV2: Inverted residuals and linear bottlenecks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 4510-4520.
[127] Zhang X, Zhou X, Lin M, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 6848-6856.
[128] Ma N, Zhang X, Zheng HT, et al. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. In: Proc. of the European Conf. on Computer Vision (ECCV). 2018. 116-131.
[129] Zhang T, Qi GJ, Xiao B, et al. Interleaved group convolutions. In: Proc. of the IEEE Int'l Conf. on Computer Vision. 2017. 4373-4382.
[130] Xie G, Wang J, Zhang T, et al. Interleaved structured sparse convolutional neural networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 8847-8856.
[131] Wang X, Kan M, Shan S, et al. Fully learnable group convolution for acceleration of deep neural networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2019. 9049-9058.
[132] Park J, Li S, Wen W, et al. Faster CNNs with direct sparse convolutions and guided pruning. arXiv preprint arXiv:1608.01409, 2016.
[133] Zhang J, Franchetti F, Low TM. High performance zero-memory overhead direct convolutions. arXiv preprint arXiv:1809.10170, 2018.
[134] Ioannou Y, Robertson D, Shotton J, et al. Training CNNs with low-rank filters for efficient image classification. arXiv preprint arXiv:1511.06744, 2015.
[135] Bagherinezhad H, Rastegari M, Farhadi A. LCNN: Lookup-based convolutional neural network. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 7120-7129.
[136] Wang Y, Xu C, Xu CJ, et al. Learning versatile filters for efficient convolutional neural networks. In: Advances in Neural Information Processing Systems. 2018. 1608-1618.
[137] Huang G, Sun Y, Liu Z, et al. Deep networks with stochastic depth. In: Proc. of the European Conf. on Computer Vision. Cham: Springer-Verlag, 2016. 646-661.
[138] Dong X, Huang J, Yang Y, et al. More is less: A more complicated network with less inference complexity. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 5840-5848.
[139] Li D, Wang X, Kong D. DeepRebirth: Accelerating deep neural network execution on mobile devices. In: Proc. of the 32nd AAAI Conf. on Artificial Intelligence. 2018.
[140] Prabhu A, Varma G, Namboodiri A. Deep expander networks: Efficient deep networks from graph theory. In: Proc. of the European Conf. on Computer Vision (ECCV). 2018. 20-35.
[141] Wu B, Wan A, Yue X, et al. Shift: A zero FLOP, zero parameter alternative to spatial convolutions. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 9127-9135.
[142] Chen W, Xie D, Zhang Y, et al. All you need is a few shifts: Designing efficient convolutional neural networks for image classification. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2019. 7241-7250.
[143] Kim J, Park Y, Kim G, et al. SplitNet: Learning to semantically split deep networks for parameter reduction and model parallelization. In: Proc. of the 34th Int'l Conf. on Machine Learning, Vol. 70. JMLR.org, 2017. 1866-1874.
[144] Gordon A, Eban E, Nachum O, et al. MorphNet: Fast & simple resource-constrained structure learning of deep networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 1586-1595.
[145] Kim E, Ahn C, Oh S. NestedNet: Learning nested sparse structures in deep neural networks. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 8669-8678.