
Unified Conversational Recommendation Policy Learning via Graph-based Reinforcement Learning

Role of the graph: the graph structure captures rich relational information among the different node types (users, items, and attributes), which lets us uncover collaborative user preferences over attributes and items. The graph therefore integrates the recommendation and conversation components organically: a conversation session can be viewed as a sequence of nodes maintained on the graph, dynamically exploiting the dialogue history to predict the action for the next turn. The framework consists of four main components: a graph-based MDP ......
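To make the "session as a node sequence on the graph" idea concrete, here is a minimal sketch assuming a toy heterogeneous graph with invented node names; it illustrates the general idea only, not the paper's actual MDP or policy implementation:

```python
# Minimal sketch: a heterogeneous user-item-attribute graph, where a
# conversation session is the sequence of nodes touched so far and the
# next-turn candidate actions are the unvisited neighbors of those nodes.
# All node names are hypothetical, for illustration only.
from collections import defaultdict

graph = defaultdict(set)

def add_edge(u, v):
    graph[u].add(v)
    graph[v].add(u)

# Toy user -- attribute -- item edges.
add_edge("user:alice", "attr:jazz")
add_edge("attr:jazz", "item:song_a")
add_edge("attr:jazz", "item:song_b")
add_edge("user:alice", "attr:piano")
add_edge("attr:piano", "item:song_b")

session = ["user:alice"]      # turn 0: the session starts at the user node
session.append("attr:jazz")   # turn 1: the user confirms the attribute "jazz"

# Candidate actions for the next turn: neighbors of visited nodes, minus visited.
candidates = set()
for node in session:
    candidates |= graph[node]
candidates -= set(session)
print(candidates)  # {'attr:piano', 'item:song_a', 'item:song_b'}
```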

A first pass over Multi-Task Recommendations with Reinforcement Learning

Paper: Multi-Task Recommendations with Reinforcement Learning. Link: https://arxiv.org/abs/2302.03328 # Abstract In recent years, Multi-task Learning (MTL) has yi ......

MEANTIME Mixture of Attention Mechanisms with Multi-temporal Embeddings for Sequential Recommendation

[TOC] > [Cho S., Park E. and Yoo S. MEANTIME: Mixture of attention mechanisms with multi-temporal embeddings for sequential recommendation. RecSys, 20 ......

Memory Augmented Graph Neural Networks for Sequential Recommendation

[TOC] > [Ma C., Ma L., Zhang Y., Sun J., Liu X. and Coates M. Memory augmented graph neural networks for sequential recommendation. AAAI, 2021.](http: ......

4.1 Self-attention

# 1. Problem statement In the previous lessons, the input was always a single vector and the output a class label or a scalar. But what if the input is a set of vectors, and the number of vectors can vary? ![image](https://img2023.cnblogs.com/blog/2264614/202307/2264614-202307021649 ......
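For reference, a minimal scaled dot-product self-attention sketch in NumPy, which handles any number of input vectors; the dimensions and weights below are arbitrary placeholders, not the lecture's code:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (n, d_in).

    Works for any sequence length n, which is exactly why self-attention
    suits variable-length vector-set inputs.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # (n, d_k), (n, d_k), (n, d_v)
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (n, n) pairwise attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                         # (n, d_v) contextualized outputs

rng = np.random.default_rng(0)
n, d_in, d_k, d_v = 5, 8, 4, 4     # 5 input vectors; any n would work
X = rng.normal(size=(n, d_in))
out = self_attention(X,
                     rng.normal(size=(d_in, d_k)),
                     rng.normal(size=(d_in, d_k)),
                     rng.normal(size=(d_in, d_v)))
print(out.shape)  # (5, 4)
```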

Some thoughts on, and an implementation of, Deep Neural Networks for YouTube Recommendations

Points worth thinking about that the author ran into while implementing the paper: - [Some thoughts on, and an implementation of, Deep Neural Networks for YouTube Recommendations](https://cloud.tencent.com/developer/article/1170340) - [Backup link] ......

Self-attention with Functional Time Representation Learning

[TOC] > [Xu D., Ruan C., Kumar S., Korpeoglu E. and Achan K. Self-attention with functional time representation learning. NIPS, 2019.](http://arxiv.or ......

Graph Masked Autoencoder for Sequential Recommendation

[TOC] > [Ye Y., Xia L. and Huang C. Graph masked autoencoder for sequential recommendation. SIGIR, 2023.](http://arxiv.org/abs/2305.04619) ## Overview Graph + MA ......

Mixed-type dialogue: Towards Conversational Recommendation over Multi-Type Dialogs

## Mixed-type dialogue Traditional human-machine dialogue research focuses on a single dialogue type and usually presupposes that the user knows the dialogue goal from the start. In real applications, however, human-machine dialogue often mixes several types, such as chit-chat, task-oriented dialogue, recommendation dialogue, and question answering, and the user's goal is unknown. In such mixed-type dialogue, the bot needs to make conversational recommendations proactively and naturally. The novel task of "mixed-type dialogue" was proposed in 2020 ......

Time Interval Aware Self-Attention for Sequential Recommendation

[TOC] > [Li J., Wang Y., McAuley J. Time interval aware self-attention for sequential recommendation. WSDM, 2020.](https://dl.acm.org/doi/10.1145/3336 ......

Estimating a Stochastic Volatility (SV) Model in Matlab with Markov Chain Monte Carlo (MCMC) | with code and data

Full-text download link: http://tecdat.cn/?p=16708 A client recently asked us to write a research report on stochastic volatility, including some graphics and statistical output. Volatility is an important concept with many applications in finance and trading. It is the basis of option pricing. Volatility also lets you determine asset allocation and compute the Value at Risk (VaR) of a portfolio, and even volatility itself is a kind of financial ......

Attention, Self-Attention, and Multi-Head Attention

Corpus vs. database (DB). World knowledge bases: OALD (Oxford Advanced Learner's Dictionary)/Synonyms/Phrases/…, the Xinhua Dictionary/idiom dictionaries/Cihai, domain-specific dictionaries, Encyclopaedia Britannica, Wikipedia, … Global information: corpora and general industry databases (e.g. Springer/Google Schola ......

Business scenario (user interaction) + building the corpus/database + Attention and Self-Attention: world knowledge base | global information | grammatical information | syntactic information | context information

I. Scenario (user interaction): 1. The user starts a new session; the interactive system is initialized and waits for user input or an incoming task document. 2. The user types in real time, triggering real-time interaction; let the current input sentence be S. The length of S is not fixed in advance, and the input may arrive as a dynamic character stream, so a sliding window can be used to extract the current input word Wo ......
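A minimal sketch of the sliding-window idea over a streaming input; the window size, step, and character-level granularity below are invented for illustration, since the post's actual design is truncated here:

```python
from collections import deque

def sliding_windows(char_stream, size=8, step=1):
    """Yield fixed-size windows over a dynamically arriving character stream."""
    window = deque(maxlen=size)
    for i, ch in enumerate(char_stream):
        window.append(ch)
        # Emit a window once full, then every `step` characters after that.
        if len(window) == size and (i - size + 1) % step == 0:
            yield "".join(window)

# Any iterable of characters works, so this also fits streamed input.
for w in sliding_windows("the user is typing this", size=8, step=4):
    print(w)   # "the user", "user is ", " is typi", "typing t"
```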

Exploiting Positional Information for Session-based Recommendation

[TOC] > [Qiu R., Huang Z., Chen T. and Yin H. Exploiting positional information for session-based recommendation. ACM Transactions on Information Syst ......

Contrastive Learning for Representation Degeneration Problem in Sequential Recommendation

[TOC] > [Qiu R., Huang Z., Ying H. and Wang Z. Contrastive learning for representation degeneration problem in sequential recommendation. WSDM, 2022.] ......

Self-Supervised Hypergraph Convolutional Networks for Session-based Recommendation

[TOC] > [Xia X., Yin H., Yu J., Wang Q., Cui L and Zhang X. Self-supervised hypergraph convolutional networks for session-based recommendation. AAAI, ......

Self-Supervised Graph Co-Training for Session-based Recommendation

[TOC] > [Xia X., Yin H., Yu J., Shao Y. and Cui L. Self-supervised graph co-training for session-based recommendation. CIKM, 2021.](http://arxiv.org/a ......

Global Context Enhanced Graph Neural Networks for Session-based Recommendation

[TOC] > [Wang Z., Wei W., Cong G., Li X., Mao X. and Qiu M. Global context enhanced graph neural networks for session-based recommendation. SIGIR, 202 ......

Neural Attentive Session-based Recommendation

[TOC] > [Li J., Ren P., Chen Z., Ren Z., Lian T. and Ma J. Neural attentive session-based recommendation. CIKM, 2017.](http://arxiv.org/abs/1711.04725 ......

Memory Priority Model for Session-based Recommendation

[TOC] > [Liu Q., Zeng Y., Mokhosi R. and Zhang H. STAMP: Short-term attention/memory priority model for session-based recommendation. KDD, 2018.](http ......

Reading the feature-interaction paper "AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks"

Background: this paper uses a multi-head attention mechanism for feature interaction. Model structure: as shown in the figure above, the AutoInt model consists of three parts: an Embedding Layer, an Interacting Layer, and an Output Layer; the Embedding Layer and Output Layer are no different from those in ordinary models ......
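A minimal sketch of the Interacting Layer's core idea, multi-head self-attention over the per-field embeddings with a residual connection; the sizes are arbitrary, and this simplified version stands in for, rather than reproduces, the paper's implementation:

```python
import torch
import torch.nn as nn

class InteractingLayer(nn.Module):
    """One AutoInt-style interacting layer: multi-head self-attention over
    the field embeddings, plus a residual connection (a simplified sketch)."""
    def __init__(self, d_model=16, n_heads=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.res = nn.Linear(d_model, d_model, bias=False)  # residual projection

    def forward(self, x):              # x: (batch, n_fields, d_model)
        out, _ = self.attn(x, x, x)    # each field attends to every other field
        return torch.relu(out + self.res(x))

batch, n_fields, d = 4, 10, 16        # e.g. 10 feature fields, arbitrary sizes
field_emb = torch.randn(batch, n_fields, d)   # output of the embedding layer
layer = InteractingLayer(d_model=d, n_heads=2)
print(layer(field_emb).shape)         # torch.Size([4, 10, 16])
```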

Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding

Tang J. and Wang K. Personalized top-n sequential recommendation via convolutional sequence embedding. WSDM, 2018. Overview A classic of sequential recommendation that applies convolution to the sequential recommendation task. Notation $\ma ......
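A minimal sketch of the model's horizontal convolution, which treats the last L item embeddings as an L×d "image" and max-pools each filter's response over time; the sizes are arbitrary, and the full model also has vertical filters and a fully connected prediction layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

L, d = 5, 16                     # last L items, embedding size d (arbitrary)
batch = 4
E = torch.randn(batch, 1, L, d)  # the L x d embedding matrix as a 1-channel image

# Horizontal filters: a filter of height h spans h consecutive items and the
# full embedding width d; one conv per height, then max-pool over time.
convs = nn.ModuleList(nn.Conv2d(1, 8, kernel_size=(h, d)) for h in (2, 3, 4))

feats = []
for conv in convs:
    c = F.relu(conv(E)).squeeze(3)                       # (batch, 8, L - h + 1)
    feats.append(F.max_pool1d(c, c.size(2)).squeeze(2))  # (batch, 8)
z = torch.cat(feats, dim=1)      # (batch, 24) sequence representation
print(z.shape)                   # torch.Size([4, 24])
```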

Handling Information Loss of Graph Neural Networks for Session-based Recommendation

Chen T. and Wong R. C. Handling information loss of graph neural networks for session-based recommendation. KDD, 2020. Overview The authors find that using graphs for session-based recommendation suffers from: lossy ......

Notes on Hung-yi Lee's self-attention lecture

What problem are we facing? Complex input: multiple variable-length vectors. RNNs naturally come to mind here; a comparison follows later. Concrete scenarios: the input can be a passage of text, one vector per word, encoded as one-hot vectors, though embeddings are used most of the time; it can be a segment of audio, one vector per 25 ms frame with a 10 ms stride, which shows how large audio data can get; it can also be an image... The output ......
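A minimal sketch of turning audio into a sequence of vectors with a 25 ms window and 10 ms hop, as the notes describe; the sample rate and signal below are placeholders:

```python
import numpy as np

sr = 16_000                           # sample rate (Hz), assumed for illustration
signal = np.random.randn(sr * 2)      # 2 seconds of fake audio

win = int(0.025 * sr)                 # 25 ms window  -> 400 samples
hop = int(0.010 * sr)                 # 10 ms hop     -> 160 samples

n_frames = 1 + (len(signal) - win) // hop
frames = np.stack([signal[i * hop : i * hop + win] for i in range(n_frames)])
print(frames.shape)   # (198, 400): nearly 200 vectors for just 2 s of audio
```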

Paper reading notes: Stochastic Grounded Action Transformation for Robot Learning in Simulation

Stochastic Grounded Action Transformation for Robot Learning in Simulation, published at IROS 2020 (CCF C). Desai S, Karnan H, Hanna J P, et al. ......

Fixing the c3p0 error "Establishing SSL connection without server's identity verification is not recommended"

Fixing the c3p0 error "Establishing SSL connection without server's identity verification is not recommended": append ?useSSL=false to the JDBC URL. <c3p0-config> <default-config> <property ......

The Transformer network: Self-attention is all you need

I. Transformer The Transformer was originally used for machine translation; its architecture is the seq2seq encoder-decoder architecture. Its core is the self-attention mechanism: every input position can see global information, which alleviates the long-range dependency problem of RNNs. Input: (learned) input word embeddings + positional encoding (relative position). Encoder structure: 6 encoder layers, where one encoder layer = ......
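A minimal sketch of one encoder layer (multi-head self-attention and a position-wise feed-forward network, each wrapped in a residual connection and LayerNorm, post-norm as in the original paper); the sizes are arbitrary:

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One Transformer encoder layer: multi-head self-attention + position-wise
    feed-forward network, each followed by a residual connection and LayerNorm."""
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        a, _ = self.attn(x, x, x)             # every position sees all positions
        x = self.norm1(x + a)                 # residual + LayerNorm
        return self.norm2(x + self.ff(x))     # residual + LayerNorm

x = torch.randn(2, 7, 64)                     # (batch, seq_len, d_model)
print(EncoderLayer()(x).shape)                # torch.Size([2, 7, 64])
```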

Stochastic Training of Graph Convolutional Networks with Variance Reduction

Chen J., Zhu J. and Song L. Stochastic training of graph convolutional networks with variance reduction. ICML, 2018. Overview As we all know, although the GCN is simple in form, for graphs with a very large number of nodes ......

DiffuRec: A Diffusion Model for Sequential Recommendation

Li Z., Sun A. and Li C. DiffuRec: A diffusion model for sequential recommendation. arXiv preprint arXiv:2304.00686, 2023. Overview A diffusion model for sequential recommendation with a large performance gain. DiffuR ......

Sequential Recommendation via Stochastic Self-Attention

Fan Z., Liu Z., Wang A., Nazari Z., Zheng L., Peng H. and Yu P. S. Sequential recommendation via stochastic self-attention. International World Wide W ......