distillation

Repost: Deep Learning: Distillation (Distill)

Repost; well written and worth a read: https://blog.csdn.net/pipisorry/article/details/117257414 Distilling the Knowledge in a Neural Network. The method Hinton proposes in the paper is simple: have the student model's predicted distribution ......
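
For context, here is a minimal PyTorch sketch of the temperature-scaled soft-label objective that Hinton's paper describes: the student is trained to match the teacher's softened predicted distribution. The function name, temperature, and mixing weights below are illustrative assumptions, not code from the linked post.

```python
import torch
import torch.nn.functional as F

def soft_distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions (the classic soft-label objective); T is an assumed default."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # multiply by T^2 so gradient magnitudes stay comparable to the hard-label loss
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)

# Typical usage mixes the soft term with the usual cross-entropy on ground-truth labels:
# loss = 0.9 * soft_distillation_loss(s_logits, t_logits) + 0.1 * F.cross_entropy(s_logits, labels)
```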

Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods

Contents: Overview, Notation, Cold Brew, Code. Zheng W., Huang E. W., Rao N., Katariya S., Wang Z., Subbian K. Cold brew: Distilling graph node representations with incomplete or ......

Linkless Link Prediction via Relational Distillation

Contents: Overview, Notation, LLP, Code. Guo Z., Shiao W., Zhang S., Liu Y., Chawla N. V., Shah N. and Zhao T. Linkless link prediction via relational distillation. ICML, 2023. Overview ......

Distilling Knowledge from Graph Convolutional Networks

Contents: Overview, Notation, DistillGCN, Local Structure Preserving, Code. Yang Y., Qiu J., Song M., Tao D. and Wang X. Distilling knowledge from graph convolutional networks. CVP ......

Decoupled Knowledge Distillation

Contents: Overview, Notation, DKD, Code. Zhao B., Cui Q., Song R., Qiu Y. and Liang J. Decoupled knowledge distillation. CVPR, 2022. Overview: the vanilla KD (Knowledge Distillation) loss is decoupled to obtain a Tar ......
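
The excerpt is cut off, but the decomposition DKD is known for, a target-class term and a non-target-class term weighted separately, can be sketched as follows. This is an illustrative re-implementation under assumed defaults (alpha, beta, temperature), not the paper's released code.

```python
import torch
import torch.nn.functional as F

def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    """Hedged sketch of decoupled KD: TCKD on the binary (target vs. rest)
    distribution plus NCKD on the renormalised non-target classes."""
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)

    # TCKD: KL between binary distributions "true class vs. everything else"
    pt_s = p_s.gather(1, target.unsqueeze(1))
    pt_t = p_t.gather(1, target.unsqueeze(1))
    bin_s = torch.cat([pt_s, 1.0 - pt_s], dim=1).clamp_min(1e-8)
    bin_t = torch.cat([pt_t, 1.0 - pt_t], dim=1).clamp_min(1e-8)
    tckd = (bin_t * (bin_t.log() - bin_s.log())).sum(dim=1).mean() * (T ** 2)

    # NCKD: KL between distributions renormalised over the non-target classes only
    mask = 1.0 - F.one_hot(target, p_s.size(1)).float()
    ns_s = (p_s * mask) / (p_s * mask).sum(dim=1, keepdim=True)
    ns_t = (p_t * mask) / (p_t * mask).sum(dim=1, keepdim=True)
    nckd = (ns_t * (ns_t.clamp_min(1e-8).log() - ns_s.clamp_min(1e-8).log()) * mask)
    nckd = nckd.sum(dim=1).mean() * (T ** 2)

    return alpha * tckd + beta * nckd
```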

Paper reading: Knowledge Distillation via the Target-aware Transformer

Abstract: Knowledge distillation becomes a de facto standard to improve the performance of small neural networks. Most of the previo ......

8 Innovative BERT Knowledge Distillation Papers That Have Changed The Landscape of NLP

8 Innovative BERT Knowledge Distillation Papers That Have Changed The Landscape of NLP Contemporary state-of-the-art NLP models are difficult to be ut ......

Unbiased Knowledge Distillation for Recommendation

Contents: Overview, UnKD, Code. Chen G., Chen J., Feng F., Zhou S. and He X. Unbiased knowledge distillation for recommendation. WSDM, 2023. Overview: knowledge distillation that accounts for popularity bias, applied to recommender systems. UnKD M ......

Knowledge Distillation from A Stronger Teacher

Contents: Overview, DIST, Code. Huang T., You S., Wang F., Qian C. and Xu C. Knowledge distillation from a stronger teacher. NIPS, 2022. Overview: uses the Pearson correlation coefficient ......
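
Since the excerpt breaks off at the key detail, here is a hedged sketch of what a Pearson-correlation-based distillation loss looks like: instead of a KL divergence, the student's and teacher's softened predictions are compared via correlation, across classes for each sample and across the batch for each class. The function names and temperature are assumptions, not the DIST release.

```python
import torch
import torch.nn.functional as F

def pearson_rows(a, b, eps=1e-8):
    """Row-wise Pearson correlation between two matrices of equal shape."""
    a = a - a.mean(dim=1, keepdim=True)
    b = b - b.mean(dim=1, keepdim=True)
    return (a * b).sum(dim=1) / (a.norm(dim=1) * b.norm(dim=1) + eps)

def correlation_distillation_loss(student_logits, teacher_logits, T=4.0):
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    inter = (1.0 - pearson_rows(p_s, p_t)).mean()          # per sample, across classes
    intra = (1.0 - pearson_rows(p_s.t(), p_t.t())).mean()  # per class, across the batch
    return inter + intra
```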

[Paper reading] Anomaly detection via reverse distillation from one-class embedding

Anomaly detection via reverse distillation from one-class embedding. Introduction: In knowledge distillation (KD), knowledge is transferred within a teacher-student (T-S) pair. In the setting of unsupervised anomaly detection, the student only sees normal samples during training, so when a query is ......
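
To make the premise concrete, here is a hedged sketch of the feature-discrepancy scoring this family of methods relies on: because the student is trained only on normal data, regions where its features disagree with the teacher's are flagged as anomalous. The cosine-distance anomaly map below is illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def anomaly_map(teacher_feat, student_feat, out_size=(256, 256)):
    """Per-pixel anomaly score as 1 - cosine similarity between teacher and
    student feature maps of shape (B, C, H, W), upsampled to image resolution."""
    amap = 1.0 - F.cosine_similarity(teacher_feat, student_feat, dim=1)  # (B, H, W)
    amap = F.interpolate(amap.unsqueeze(1), size=out_size,
                         mode="bilinear", align_corners=False)
    return amap.squeeze(1)  # higher value = larger T-S disagreement = more anomalous
```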

DE-RRD: A Knowledge Distillation Framework for Recommender System

Contents: Overview, DE-RRD, Distillation Experts (DE), Relaxed Ranking Distillation (RRD), Code. Kang S., Hwang J., Kweon W. and Yu H. DE-RRD: A knowledge distillation framewor ......

Topology Distillation for Recommender System

Contents: Overview, Topology Distillation, Full Topology Distillation (FTD), Hierarchical Topology Distillation (HTD), Code. Kang S., Hwang J., Kweon W. and Yu H. Topology dist ......

Collaborative Distillation for Top-N Recommendation

Contents: Overview, Notation, Collaborative Distillation (CD). Lee J., Choi M., Lee J. and Shim H. Collaborative distillation for top-N recommendation. ICDM, 2019. Overview: Ranking- ......

Relational Knowledge Distillation

Contents: Overview, Notation, RKD, Code. Park W., Kim D., Lu Y. and Cho M. Relational knowledge distillation. CVPR, 2019. Overview. Notation: \(f_T, f_S\), teacher and student models; \(\mathc ......
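
Since the notation is cut off, a hedged sketch of the distance-wise relational term may help: rather than matching outputs pointwise, RKD-style losses match the pairwise-distance structure of a batch of embeddings between \(f_T\) and \(f_S\). The normalisation and loss choice below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def rkd_distance_loss(f_s, f_t, eps=1e-8):
    """Match the normalised pairwise Euclidean distances of student embeddings
    f_s (B, d_s) to those of teacher embeddings f_t (B, d_t)."""
    with torch.no_grad():
        d_t = torch.cdist(f_t, f_t)
        d_t = d_t / (d_t.mean() + eps)   # scale-invariant teacher relations
    d_s = torch.cdist(f_s, f_s)
    d_s = d_s / (d_s.mean() + eps)
    return F.smooth_l1_loss(d_s, d_t)    # Huber-style penalty on relation mismatch
```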

Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System

Contents: Overview, Notation, Ranking Distillation, Code. Tang J. and Wang K. Ranking Distillation: Learning compact ranking models with high performance for recommender system. ......

Paper interpretation (TAMEPT): A Two-Stage Framework with Self-Supervised Distillation For Cross-Domain Text Classification

Paper information. Title: A Two-Stage Framework with Self-Supervised Distillation For Cross-Domain Text Classification. Authors: Yunlong Feng, Bohan Li, Libo Qin, Xiao Xu, ......

Paper interpretation (BSFDA): Black-box Source-free Domain Adaptation via Two-stage Knowledge Distillation

Paper information. Title: Black-box Source-free Domain Adaptation via Two-stage Knowledge Distillation. Authors: Shuai Wang, Daoan Zhan ......

Paper interpretation (KDSSDA): Knowledge distillation for semi-supervised domain adaptation

Paper information. Title: Knowledge distillation for semi-supervised domain adaptation. Authors: Mauricio Orbes-Arteaga, Jorge Cardoso ......

Paper interpretation (KD-UDA): Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation

Paper information. Title: Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation. Authors: Yanping Fu, Yun Liu. Source: ......

Paper interpretation (AAD): Knowledge distillation for BERT unsupervised domain adaptation

Paper information. Title: Knowledge distillation for BERT unsupervised domain adaptation. Authors: Minho Ryu, Geonseok Lee, Kichun Lee ......

Incrementer: Transformer for Class-Incremental Semantic Segmentation with Knowledge Distillation Focusing on Old Class (paper reading notes)

Abstract: Existing continual semantic segmentation methods are mostly based on convolutional neural networks; they have to add extra convolutional layers to recognize new classes, and they do not distinguish regions belonging to old classes from those belonging to new classes when distilling features. To address this, the authors propose Incrementer, a Transformer-based network that only needs to add the corresponding tokens to the decoder when learning new classes. The authors also propose a ......

Teachable Reinforcement Learning via Advice Distillation

Published: 2021 (NeurIPS 2021). Key points: This paper proposes a supervised paradigm for learning a policy. The rough idea is to first structure the advice, then learn to interpret the advice, and finally learn the policy from the advice. The advice comes from an external teacher, which amounts to a kind of human-in-the-l ......

[Paper reading notes] Distilling Causal Effect of Data in Class-Incremental Learning

Authors: Hanwang Zhang, Xinting Hu. Create_time: April 24, 2022 11:01 AM. Edited_by: Huang Yujun. Publisher: CVPR 2021. Org: Nanyang Technological Universi ......

Literature reading: Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study

Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023 ......

Literature reading: The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image

Y. M. Asano and A. Saeed, ‘THE AUGMENTED IMAGE PRIOR: DISTILLING 1000 CLASSES BY EXTRAPOLATING FROM A SINGLE IMAGE’, 2023. ICLR 2023, by two authors from the University of Amsterdam and Eindhoven University of Technology ......