DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia

Cited by: 1698
|
Authors
Lehmann, Jens [1]
Isele, Robert [7]
Jakob, Max [5]
Jentzsch, Anja [4]
Kontokostas, Dimitris [1]
Mendes, Pablo N. [6]
Hellmann, Sebastian [1]
Morsey, Mohamed [1]
van Kleef, Patrick [3]
Auer, Soeren [1,8,9]
Bizer, Christian [2]
Affiliations
[1] Univ Leipzig, Inst Comp Sci, AKSW Grp, D-04009 Leipzig, Germany
[2] Univ Mannheim, Res Grp Data & Web Sci, D-68159 Mannheim, Germany
[3] OpenLink Software, Burlington, MA 01803 USA
[4] Hasso Plattner Inst IT Syst Engn, D-14482 Potsdam, Germany
[5] Neofonie GmbH, D-10115 Berlin, Germany
[6] Wright State Univ, Kno.e.sis Ohio Ctr of Excellence in Knowledge-Enabled Comp, Dayton, OH 45435 USA
[7] Brox IT Solut GmbH, D-30625 Hannover, Germany
[8] Univ Bonn, Enterprise Informat Syst, D-53117 Bonn, Germany
[9] Fraunhofer IAIS, D-53117 Bonn, Germany
Keywords
Knowledge extraction; Wikipedia; multilingual knowledge bases; Linked Data; RDF; Semantic Web
DOI
10.3233/SW-140134
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base, which is extracted from the English edition of Wikipedia, consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. The DBpedia project maps Wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties. The mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different Wikipedia editions to be combined. The project publishes releases of all DBpedia knowledge bases for download and provides SPARQL query access to 14 of the 111 language editions via a global network of local DBpedia chapters. In addition to the regular releases, the project maintains a live knowledge base which is updated whenever a page in Wikipedia changes. DBpedia sets 27 million RDF links pointing into over 30 external data sources and thus enables data from these sources to be used together with DBpedia data. Several hundred data sets on the Web publish RDF links pointing to DBpedia themselves, making DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud. In this system report, we give an overview of the DBpedia community project, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications.
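The abstract notes that DBpedia exposes its knowledge bases via SPARQL endpoints. As a minimal sketch of what querying such an endpoint looks like, the snippet below builds an HTTP GET request URL for the public endpoint (assumed here to be https://dbpedia.org/sparql) using a sample query against the real `dbo:abstract` ontology property; the choice of resource (`Berlin`) and result format are illustrative assumptions, not details from the paper.

```python
# Sketch: constructing a SPARQL GET request URL for a DBpedia endpoint.
# Network access is not performed here; we only build the request URL.
from urllib.parse import urlencode

ENDPOINT = "https://dbpedia.org/sparql"  # assumed public endpoint

# Illustrative query: fetch the English abstract of the Berlin resource.
query = """\
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Berlin> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

def build_request_url(endpoint: str, sparql: str) -> str:
    """Encode a SPARQL query as an HTTP GET URL, requesting JSON results."""
    params = urlencode({
        "query": sparql,
        "format": "application/sparql-results+json",
    })
    return f"{endpoint}?{params}"

url = build_request_url(ENDPOINT, query)
print(url[:60])
```

The resulting URL can be fetched with any HTTP client; per the SPARQL 1.1 Protocol, the query travels in the `query` parameter of a GET request.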
Pages: 167-195
Page count: 29