Memory-limited non-U-shaped learning with solved open problems

Cited by: 2
Authors
Case, John [1 ]
Koetzing, Timo [2 ]
Affiliations
[1] Univ Delaware, Dept Comp & Informat Sci, Newark, DE 19716 USA
[2] Max Planck Inst Informat, Dept Algorithms & Complexity, D-66123 Saarbrucken, Germany
Keywords
Learning from positive data; U-shaped learning; Inductive inference; Language; Texts
DOI
10.1016/j.tcs.2012.10.010
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
In empirical cognitive science, for human learning, a semantic or behavioral U-shape occurs when a learner first learns, then unlearns, and finally relearns some target concept. Within the formal framework of Inductive Inference, for learning from positive data, previous results have shown, for example, that such U-shapes are unnecessary for explanatory learning, but are necessary for behaviorally correct and non-trivial vacillatory learning. Herein we also distinguish between semantic and syntactic U-shapes. We answer a number of open questions from the prior literature and provide new results regarding syntactic U-shapes. Importantly for cognitive science, we extend a previously noticed pattern: for parameterized learning criteria, beyond the first few parameter values, U-shapes are necessary for full learning power. We analyze the necessity of U-shapes in two memory-limited settings. The first setting is Bounded Memory State (BMS) learning, where a learner has an explicitly bounded state memory and otherwise knows only its current datum. We show that there are classes learnable with three (or more) memory states that are not learnable non-U-shapedly with any finite number of memory states. This result is surprising, since, for learning with one or two memory states, U-shapes are known to be unnecessary; it solves an open question from the literature. The second setting is Memoryless Feedback (MLF) learning, where a learner may ask a bounded number of questions about what data has been seen so far, and otherwise knows only its current datum. We show that there is a class learnable memorylessly with a single feedback query that is not learnable non-U-shapedly memorylessly with any finite number of feedback queries. We employ self-learning classes together with the Operator Recursion Theorem for many of our results, but we also introduce two new techniques. The first transfers inclusion results from one setting to another. The main part of the second is the Hybrid Operator Recursion Theorem, which enables us to separate some learning criteria featuring complexity-bounded learners, employing self-learning classes. Neither technique is specific to U-shaped learning; both are applicable to a wide range of settings. (C) 2012 Elsevier B.V. All rights reserved.
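To make the central notion concrete: a learner maps each finite prefix of a text (a presentation of positive data) to a conjectured language, and a semantic U-shape occurs when the conjecture sequence is correct, then incorrect, then correct again. The following toy sketch is purely illustrative (a hypothetical learner on a finite target set, not code or a construction from the paper):

```python
# Toy illustration of a semantic U-shape: the learner's conjectures on a
# text for the target language go correct -> incorrect -> correct.

TARGET = frozenset({0, 1, 2})  # target language (finite, for simplicity)

def learner(seen):
    """Map the finite sequence of data seen so far to a conjectured language."""
    content = frozenset(seen)
    if len(seen) <= 1:
        return TARGET   # an early (lucky) correct conjecture
    if len(seen) == 2:
        return content  # overreacts to the data: abandons the target
    return TARGET       # relearns the target in the limit

text = [0, 1, 2, 2, 1, 0]  # a presentation (text) of TARGET
conjectures = [learner(text[:i + 1]) for i in range(len(text))]
correct = [c == TARGET for c in conjectures]
print(correct)  # [True, False, True, True, True, True] -- a U-shape
```

A non-U-shaped learner is one that never abandons a correct conjecture in this way; the paper's results concern when such learners are strictly weaker under memory limitations.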
Pages: 100-123 (24 pages)