Learning motion primitives and annotative texts from crowd-sourcing

Cited by: 0
Authors
Takano W. [1]
Affiliation
[1] The Univ. of Tokyo, Bunkyo-ku, Hongo, Tokyo
Source
ROBOMECH Journal, Volume 2, Issue 1
Funding
Japan Society for the Promotion of Science
Keywords
Crowd-sourcing; Motion primitives; Natural language
DOI
10.1186/s40648-014-0022-7
Abstract
Humanoid robots are expected to be integrated into daily life, where a large variety of human actions and language expressions are observed. They need to learn the referential relations between actions and language, and to understand actions in the form of language, in order to communicate with human partners or to make inferences using language. Intensive research on imitation learning of human motions has produced robots that can recognize human activity and synthesize human-like motions, and this research has subsequently been extended to the integration of motion and language. The present research aims at developing robots that understand human actions in the form of natural language. One difficulty lies in handling the large variety of words and sentences used in daily life, because it is too time-consuming for researchers to annotate human actions with such varied expressions. Recent developments in information and communication technology provide an efficient crowd-sourcing process in which many users are available to complete a large number of simple tasks. This paper proposes a novel concept for collecting a large training dataset of motions and their descriptive sentences, and for developing an intelligent framework that learns relations between the motions and sentences. This framework enables humanoid robots to understand human actions expressed in various forms of sentences. We tested it on recognition of human daily full-body motions and demonstrated its validity. © 2015, Takano; licensee Springer.
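The abstract does not specify the learning model that relates motions to crowd-sourced sentences. Purely as an illustrative sketch, and not the paper's method, the sketch below assumes each observed motion has already been encoded as a discrete motion-primitive label (e.g. by segmentation and clustering) and that crowd-sourcing workers supply free-form sentences for it; the class name, methods, and example data are all hypothetical, and the association is modeled with a simple bag-of-words, naive-Bayes-style score.

```python
# Illustrative sketch only -- not the method described in the paper.
# Assumption: motions are already encoded as discrete motion-primitive labels,
# and crowd-sourced workers provide free-form descriptive sentences for them.
from collections import Counter, defaultdict
import math

class MotionLanguageModel:
    """Bag-of-words association between motion primitives and sentences."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)   # primitive -> word frequencies
        self.primitive_counts = Counter()         # how often each primitive was annotated
        self.vocabulary = set()

    def add_annotation(self, primitive: str, sentence: str) -> None:
        """Accumulate one crowd-sourced sentence for a motion primitive."""
        words = sentence.lower().split()
        self.word_counts[primitive].update(words)
        self.primitive_counts[primitive] += 1
        self.vocabulary.update(words)

    def describe(self, primitive: str, top_n: int = 3) -> list[str]:
        """Words most strongly associated with a primitive."""
        return [w for w, _ in self.word_counts[primitive].most_common(top_n)]

    def recognize(self, sentence: str) -> str:
        """Return the primitive whose annotations best explain the sentence,
        scored with a naive-Bayes log-likelihood and add-one smoothing."""
        words = sentence.lower().split()
        v = len(self.vocabulary)
        total = sum(self.primitive_counts.values())
        best, best_score = None, float("-inf")
        for p in self.primitive_counts:
            n = sum(self.word_counts[p].values())
            score = math.log(self.primitive_counts[p] / total)
            for w in words:
                score += math.log((self.word_counts[p][w] + 1) / (n + v))
            if score > best_score:
                best, best_score = p, score
        return best

# Example: two crowd-sourced annotations per primitive (toy data).
model = MotionLanguageModel()
model.add_annotation("walk", "a person walks forward slowly")
model.add_annotation("walk", "someone is walking across the room")
model.add_annotation("wave", "a person waves the right hand")
model.add_annotation("wave", "someone waves hello with one hand")
print(model.describe("wave"))                    # top words, e.g. ['waves', 'hand', ...]
print(model.recognize("the person is walking"))  # -> 'walk'
```

In this toy setup, many noisy sentences per primitive gathered from crowd workers would gradually sharpen the word statistics, which is the kind of benefit the paper attributes to crowd-sourced annotation; the actual framework in the paper is not reproduced here.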