Reinforcement Learning of Code Search Sessions

Cited by: 5
Authors
Li, Wei [1 ]
Yan, Shuhan [1 ]
Shen, Beijun [1 ]
Chen, Yuting [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai, Peoples R China
Keywords
Code search; session search; reinforcement learning; Markov decision process; software
DOI
10.1109/APSEC48747.2019.00068
Chinese Library Classification
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Searching for and reusing online code is a common activity in software development. Like many general-purpose search tasks, code search faces the session search problem: within a code search session, the user iteratively issues queries for code snippets, exploring new snippets that meet his/her needs and/or promoting relevant results to higher ranks. This paper presents Cosoch, a reinforcement learning approach to session search over code documents (code snippets with textual explanations). Cosoch aims to model a session in a way that reveals user intentions, and accordingly searches and reranks the resulting documents. More specifically, Cosoch casts a code search session as a Markov decision process, in which rewards measuring the relevance between queries and the resulting code documents guide the whole session search. We have built a dataset, named CosoBe, from StackOverflow, containing 103 code search sessions with 378 pieces of user feedback, and evaluated Cosoch on it. The evaluation results show that Cosoch achieves an average NDCG@3 score of 0.7379, outperforming StackOverflow by 21.3%.
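For orientation, the sketch below (Python) shows the standard NDCG@k metric reported in the abstract, together with a toy session loop in which each reformulated query produces a reranking whose NDCG@3 serves as the step reward. The `retrieve` and `score` functions and the loop structure are illustrative assumptions, not Cosoch's actual MDP formulation.

```python
import math
from typing import Callable, List, Sequence, Tuple


def dcg_at_k(relevances: Sequence[float], k: int) -> float:
    """Discounted cumulative gain of the top-k results (graded relevance)."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))


def ndcg_at_k(relevances: Sequence[float], k: int = 3) -> float:
    """NDCG@k: DCG of the given ranking divided by the DCG of the ideal ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0


def run_session(
    queries: List[str],
    retrieve: Callable[[str], List[Tuple[str, float]]],  # query -> [(doc, relevance)]
    score: Callable[[str, str], float],                   # learned ranking score (hypothetical)
    k: int = 3,
) -> List[float]:
    """Toy session loop: rerank candidates for each query; reward = NDCG@k of the ranking."""
    rewards = []
    for query in queries:
        candidates = retrieve(query)
        ranked = sorted(candidates, key=lambda pair: score(query, pair[0]), reverse=True)
        rewards.append(ndcg_at_k([rel for _, rel in ranked], k))
    return rewards


if __name__ == "__main__":
    # A ranking whose top three positions have graded relevances 3, 2, 0:
    print(round(ndcg_at_k([3.0, 2.0, 0.0, 1.0], k=3), 4))  # ~0.9468
```

Under this standard definition, an average NDCG@3 of 0.7379 means the top three returned code documents recover roughly 74% of the gain of an ideal ranking for the same query.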
Pages: 458-465
Page count: 8
Related Papers
50 records in total
  • [21] Automatically Generating Data Exploration Sessions Using Deep Reinforcement Learning
    Bar El, Ori
    Milo, Tova
    Somech, Amit
    SIGMOD'20: PROCEEDINGS OF THE 2020 ACM SIGMOD INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA, 2020, : 1527 - 1537
  • [22] Reinforcement learning and A* search for the unit commitment problem
    de Mars, Patrick
    O'Sullivan, Aidan
    ENERGY AND AI, 2022, 9
  • [23] On Monte Carlo Tree Search and Reinforcement Learning
    Vodopivec, Tom
    Samothrakis, Spyridon
    Šter, Branko
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2017, 60 : 881 - 936
  • [24] Reinforcement Learning Enhanced PicHunter for Interactive Search
    Ma, Zhixin
    Wu, Jiaxin
    Loo, Weixiong
    Ngo, Chong-Wah
    MULTIMEDIA MODELING, MMM 2023, PT I, 2023, 13833 : 690 - 696
  • [25] Search-Based Testing of Reinforcement Learning
    Tappler, Martin
    Cordoba, Filip Cano
    Aichernig, Bernhard K.
    Koenighofer, Bettina
    PROCEEDINGS OF THE THIRTY-FIRST INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2022, 2022, : 503 - 510
  • [26] Feature Search in the Grassmanian in Online Reinforcement Learning
    Bhatnagar, Shalabh
    Borkar, Vivek S.
    Prabuchandran, K. J.
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2013, 7 (05) : 746 - 758
  • [27] Reinforcement Learning with Automated Auxiliary Loss Search
    He, Tairan
    Zhang, Yuge
    Ren, Kan
    Liu, Minghuan
    Wang, Che
    Zhang, Weinan
    Yang, Yuqing
    Li, Dongsheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [28] Discussion of "Reinforcement learning behaviors in sponsored search"
    Zhu, Yada
    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, 2016, 32 (03) : 369 - 370
  • [29] IRLAS: Inverse Reinforcement Learning for Architecture Search
    Guo, Minghao
    Zhong, Zhao
    Wu, Wei
    Lin, Dahua
    Yan, Junjie
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 9013 - 9021
  • [30] Heuristic search based exploration in reinforcement learning
    Vien, Ngo Anh
    Viet, Nguyen Hoang
    Lee, SeungGwan
    Chung, TaeChoong
    COMPUTATIONAL AND AMBIENT INTELLIGENCE, 2007, 4507 : 110+