Preference

RLHF · PBRL | B-Pref: generating diverse, irrational preferences to build a PBRL benchmark

Contribution: proposes a method for generating irrational (human-simulating) preferences; using this diverse set of preferences, it evaluates how the algorithmic design choices at each stage of PBRL (selecting informative queries, feedback schedule) affect performance (a hedged sketch of such a synthetic labeler follows this entry). ......
irrational PBRL preference benchmark B-Pref
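B-Pref's key ingredient is a synthetic teacher that labels pairs of trajectory segments imperfectly, the way a real human might. The abstract only names the idea, so here is a minimal Python sketch of one such irrational labeler; the parameter names (beta, eps_mistake, skip_thresh, equal_thresh) and the return-based two-segment interface are illustrative assumptions, not B-Pref's exact API.

```python
import numpy as np

def simulated_teacher(seg_returns, beta=1.0, eps_mistake=0.1,
                      skip_thresh=0.0, equal_thresh=0.0, rng=None):
    """Label a pair of segments with an imperfect, human-like preference.

    seg_returns:   tuple (R0, R1) of ground-truth returns of the two segments
                   (assumed available to the simulator, as in a benchmark setting).
    beta:          rationality temperature of the Boltzmann choice
                   (larger beta -> closer to a perfectly rational argmax).
    eps_mistake:   probability of flipping the label outright.
    skip_thresh:   if both returns fall below this, the query is skipped (None).
    equal_thresh:  if the returns are closer than this, mark the pair equal (0.5).
    Returns 0, 1, 0.5, or None.
    """
    rng = rng or np.random.default_rng()
    r0, r1 = seg_returns
    if max(r0, r1) < skip_thresh:       # uninformative pair: teacher declines to answer
        return None
    if abs(r0 - r1) < equal_thresh:     # indistinguishable pair: equally preferred
        return 0.5
    # Boltzmann-rational (stochastic) choice instead of a hard argmax
    p1 = 1.0 / (1.0 + np.exp(-beta * (r1 - r0)))
    label = int(rng.random() < p1)
    if rng.random() < eps_mistake:      # occasional outright mistake
        label = 1 - label
    return label
```

Varying these knobs yields the "diverse" teachers the benchmark uses to stress-test PBRL algorithms under different kinds of label noise.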

RLHF · PBRL | PEBBLE: learning a reward model from human preferences

① Agent pre-training with an entropy-based intrinsic reward, ② selecting the most informative queries for preference feedback, ③ relabeling the replay buffer with the updated reward model (a sketch of step ③ follows this entry). ......
preference PEBBLE reward human model
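Step ③ (reward relabeling) is what lets PEBBLE keep reusing old off-policy data after the learned reward changes. Below is a minimal sketch of that step, assuming a flat dict-of-arrays replay buffer and a batched reward_model callable; both layouts are assumptions for illustration, not PEBBLE's actual implementation.

```python
import numpy as np

def relabel_replay_buffer(buffer, reward_model, batch_size=1024):
    """Overwrite stored rewards with predictions from the latest reward model.

    buffer:       dict with arrays 'obs', 'act', 'rew' of equal length (assumed layout).
    reward_model: callable mapping (obs, act) batches to predicted rewards.
    """
    n = len(buffer["rew"])
    for start in range(0, n, batch_size):
        end = min(start + batch_size, n)
        # Re-score old transitions in batches so the whole buffer stays consistent
        # with the current reward model.
        buffer["rew"][start:end] = reward_model(
            buffer["obs"][start:end], buffer["act"][start:end]
        )
    return buffer
```

In the full loop this would be called each time the reward model is updated from newly collected preferences, before further policy and critic updates on the relabeled data.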

Learning Heterogeneous Temporal Patterns of User Preference for Timely Recommendation

Contents: Overview; Notation; TimelyRec; Multi-aspect Time Encoder (MATE); Time-aware History Encoder (TAHE); Prediction; Code. Cho J., Hyun D., Kang S. and Yu H. Learning heterogeneou ......

Measuring the diversity of recommendations: a preference-aware approach for evaluating and adjusting diversity

Meymandpour R. and Davis J. G. Measuring the diversity of recommendations: a preference-aware approach for evaluating and adjusting diversity. Knowled ......