Evaluating the performance and neutrality/bias of search engines

Cited by: 2
|
Authors
Kamoun, Ahmed [1 ]
Maille, Patrick [1 ]
Tuffin, Bruno [2 ]
Affiliations
[1] IMT Atlantique, IRISA, UBL, F-29238 Brest, France
[2] Univ Rennes, IRISA, CNRS, INRIA, Rennes, France
Keywords
Search engines; consensus; search neutrality; search bias
DOI
10.1145/3306309.3306325
CLC classification number
TP301 [Theory and Methods];
Subject classification code
081202;
Abstract
Different search engines provide different outputs for the same keyword. This may be due to different definitions of relevance, different ranking-aggregation methods, and/or different knowledge or anticipation of users' preferences, but rankings are also suspected of being biased toward an engine's own content, which may be prejudicial to other content providers. In this paper, we take some initial steps toward a rigorous comparison and analysis of search engines by proposing a definition of the consensual relevance of a page with respect to a keyword, derived from a set of search engines. More specifically, we look at the results of several search engines for a sample of keywords and define, for each keyword, the visibility of a page based on its ranking over all search engines. This allows us to define a score for each search engine on a keyword, and then its average score over all keywords. Based on the pages' visibility, we can also define the consensus search engine as the one showing the most visible results for each keyword, and we discuss how results biased toward specific pages can be highlighted and quantified to inform the search neutrality debate. We have implemented this model and present an analysis of the results.
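The scoring scheme described in the abstract can be sketched in a few lines. This is a minimal illustration only, not the paper's actual model: the abstract does not specify the position-weighting function, so the `1/(rank+1)` discounting and all function names below are assumptions.

```python
from collections import defaultdict

def page_visibility(rankings):
    """Visibility of each page for one keyword, aggregated over engines.

    rankings: dict mapping engine name -> ordered result list of pages.
    Assumption: a page at position i contributes weight 1/(i+1); the
    abstract does not state the actual weighting used in the paper.
    """
    vis = defaultdict(float)
    for pages in rankings.values():
        for i, page in enumerate(pages):
            vis[page] += 1.0 / (i + 1)
    return dict(vis)

def engine_score(pages, vis):
    """Score of one engine's ranking for a keyword: position-discounted
    visibility of the pages it displays (higher = closer to consensus)."""
    return sum(vis.get(p, 0.0) / (i + 1) for i, p in enumerate(pages))

def consensus_ranking(vis, k):
    """The 'consensus search engine' for this keyword: the k most
    visible pages, in decreasing order of visibility."""
    return sorted(vis, key=vis.get, reverse=True)[:k]
```

For example, with `rankings = {"A": ["p1", "p2", "p3"], "B": ["p2", "p1", "p4"]}`, pages p1 and p2 each get visibility 1.5 and top the consensus ranking. Averaging `engine_score` over a keyword sample gives the per-engine score the abstract mentions, and a systematic gap between an engine's ranking and the consensus (e.g., its own pages ranked far above their aggregate visibility) is the kind of bias signal the paper aims to quantify.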
Pages: 103-109
Page count: 7
Related papers
50 items in total
  • [21] Mitigating Bias in Search Results Through Contextual Document Reranking and Neutrality Regularization
    Zerveas, George
    Rekabsaz, Navid
    Cohen, Daniel
    Eickhoff, Carsten
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 2532 - 2538
  • [22] Improving Search Engines Performance on Multithreading Processors
    Bonacic, Carolina
    Garcia, Carlos
    Marin, Mauricio
    Prieto, Mannel
    Tirado, Francisco
    Vicente, Cesar
    HIGH PERFORMANCE COMPUTING FOR COMPUTATIONAL SCIENCE - VECPAR 2008, 2008, 5336 : 201 - +
  • [23] Estimating the recall performance of Web search engines
    Clarke, SJ
    Willett, P
    ASLIB PROCEEDINGS, 1997, 49 (07) : 184 - 189
  • [24] Automatic performance evaluation of Web search engines
    Can, F
    Nuray, R
    Sevdik, AB
    INFORMATION PROCESSING & MANAGEMENT, 2004, 40 (03) : 495 - 514
  • [25] The little engines that could: Modeling the performance of World Wide Web search engines
    Bradlow, ET
    Schmittlein, DC
    MARKETING SCIENCE, 2000, 19 (01) : 43 - 62
  • [26] Automatic performance evaluation of web search engines using judgments of metasearch engines
    Sadeghi, Hamid
    ONLINE INFORMATION REVIEW, 2011, 35 (06) : 957 - 971
  • [27] Evaluating search engines and large language models for answering health questions
    Fernandez-Pichel, Marcos
    Pichel, Juan C.
    Losada, David E.
    NPJ DIGITAL MEDICINE, 2025, 8 (01)
  • [28] Evaluating the Retrieval Effectiveness of Search Engines using Persian Navigational Queries
    Mahmoudi, Maryam
    Badie, Reza
    Zahedi, Mohammad Sadegh
    Azimzadeh, Masoumeh
    2014 7TH INTERNATIONAL SYMPOSIUM ON TELECOMMUNICATIONS (IST), 2014, : 563 - 568
  • [29] Evaluating Study of Search Engines on E-commerce Websites in China
    Li Fenglin
    Liu Yaqi
    EIGHTH WUHAN INTERNATIONAL CONFERENCE ON E-BUSINESS, VOLS I-III, 2009, : 162 - 167
  • [30] Proposing a New Combined Indicator for Measuring Search Engine Performance and Evaluating Google, Yahoo, DuckDuckGo, and Bing Search Engines based on Combined Indicator
    Hoseinabadi, Azadeh Hajian
    CheshmehSohrabi, Mehrdad
    JOURNAL OF LIBRARIANSHIP AND INFORMATION SCIENCE, 2024, 56 (01) : 178 - 197