- Sebastian Berndt, Maciej Liskiewicz, Matthias Lutter, Rüdiger Reischuk:
Learning Residual Alternating Automata.
Electronic Colloquium on Computational Complexity (ECCC), 24(46), 2017.
Residuality plays an essential role in learning finite automata. While residual deterministic and nondeterministic automata are quite well understood, fundamental questions concerning alternating automata (AFA) remain open. Recently, Angluin, Eisenstat, and Fisman initiated a systematic study of residual AFAs and proposed an algorithm called AL*, an extension of the popular L* algorithm, to learn AFAs. Based on computer experiments they conjectured that AL* produces residual AFAs, but were not able to give a proof. In this paper we disprove this conjecture by constructing a counterexample. As our main positive result we design an efficient learning algorithm, named AL**, and prove that it outputs only residual AFAs. In addition, we investigate the succinctness of these different FA types in more detail.
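The notion of residuality at the heart of this work can be made concrete with a small experiment. The following is a minimal sketch, not taken from the paper: it tests, up to a word-length bound, whether every state of a toy NFA accepts a residual u^{-1}L of the automaton's language L. The automaton, names, and the bound are illustrative assumptions; the real property quantifies over all words.

```python
from itertools import product

# Minimal sketch (not the paper's construction): a bounded test of whether
# every state of a toy NFA accepts a residual u^{-1}L of the NFA's language L.
# The automaton and the length bound are illustrative assumptions.

ALPHABET = "ab"

def accepts(delta, finals, from_states, word):
    """Standard NFA subset simulation starting from a given set of states."""
    current = set(from_states)
    for ch in word:
        current = {t for q in current for t in delta.get((q, ch), set())}
    return bool(current & finals)

def words_up_to(n):
    """All words over ALPHABET of length at most n."""
    for length in range(n + 1):
        for letters in product(ALPHABET, repeat=length):
            yield "".join(letters)

def looks_residual(states, delta, finals, initials, bound=5):
    """Approximate residuality check: every state's language must agree with
    some residual u^{-1}L on all test words up to the given length bound."""
    tests = list(words_up_to(bound))
    residual_profiles = {
        tuple(accepts(delta, finals, initials, u + x) for x in tests)
        for u in words_up_to(bound)
    }
    return all(
        tuple(accepts(delta, finals, {q}, x) for x in tests) in residual_profiles
        for q in states
    )

# Toy automaton for L = (a+b)*a: state 1 is reached exactly after reading 'a'.
# Its two state languages are the residuals e^{-1}L and a^{-1}L, so the
# bounded test reports True.
states = {0, 1}
delta = {(0, "a"): {1}, (0, "b"): {0}, (1, "a"): {1}, (1, "b"): {0}}
finals = {1}
initials = {0}
print(looks_residual(states, delta, finals, initials))  # True
```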
- Sebastian Berndt, Maciej Liskiewicz, Matthias Lutter, Rüdiger Reischuk:
Learning Residual Alternating Automata.
Proc. 31st AAAI Conference on Artificial Intelligence (AAAI 2017), pp. 1749-1755, AAAI Press, 2017.
Residuality plays an essential role in learning finite automata. While residual deterministic and nondeterministic automata are quite well understood, fundamental questions concerning alternating automata (AFA) remain open. Recently, Angluin, Eisenstat, and Fisman (2015) initiated a systematic study of residual AFAs and proposed an algorithm called AL*, an extension of the popular L* algorithm, to learn AFAs. Based on computer experiments they conjectured that AL* produces residual AFAs, but were not able to give a proof. In this paper we disprove this conjecture by constructing a counterexample. As our main positive result we design an efficient learning algorithm, named AL**, and prove that it outputs only residual AFAs. In addition, we investigate the succinctness of these different FA types in more detail.
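For readers unfamiliar with the model, here is a minimal, illustrative sketch of AFA acceptance: transitions map a state and a letter to a positive boolean formula over states, and acceptance is defined by evaluating these formulas recursively. The tuple encoding and the toy automaton are assumptions for illustration, not the paper's construction.

```python
# Minimal, illustrative sketch of the alternating automaton (AFA) model: a
# transition maps (state, letter) to a positive boolean formula over states,
# encoded here as nested tuples tagged 'and'/'or'/'state'. The encoding and
# the toy automaton are assumptions, not the paper's construction.

def eval_formula(formula, word, delta, finals):
    """Evaluate a transition formula against the remaining input word."""
    tag = formula[0]
    if tag == "state":
        return accepts_from(formula[1], word, delta, finals)
    branches = [eval_formula(f, word, delta, finals) for f in formula[1:]]
    return all(branches) if tag == "and" else any(branches)

def accepts_from(state, word, delta, finals):
    """On the empty word, accept iff the state is final; otherwise the
    transition formula must hold on the rest of the word."""
    if not word:
        return state in finals
    return eval_formula(delta[(state, word[0])], word[1:], delta, finals)

# Toy AFA for L = {words containing both an 'a' and a 'b'}: the initial
# formula is a conjunction of two single-letter trackers, something a DFA
# can only express via a product construction.
delta = {
    ("seen_a?", "a"): ("state", "done"),
    ("seen_a?", "b"): ("state", "seen_a?"),
    ("seen_b?", "a"): ("state", "seen_b?"),
    ("seen_b?", "b"): ("state", "done"),
    ("done", "a"): ("state", "done"),
    ("done", "b"): ("state", "done"),
}
finals = {"done"}
initial = ("and", ("state", "seen_a?"), ("state", "seen_b?"))

print(eval_formula(initial, "ba", delta, finals))  # True: both letters occur
print(eval_formula(initial, "aa", delta, finals))  # False: no 'b'
```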
- Maciej Liskiewicz, Matthias Lutter, Rüdiger Reischuk:
Proper Learning of k-term DNF Formulas from Satisfying Assignments.
Electronic Colloquium on Computational Complexity (ECCC), 24(114), 2017.
In certain applications only positive samples may be available for learning concepts of a class of interest. Moreover, learning has to be done properly, i.e., the hypothesis space has to coincide with the concept class, and without false positives, i.e., the hypothesis always has to be a subset of the real concept (one-sided error). For the well-studied class of k-term DNF formulas it has been known that learning is difficult: unless RP = NP, it is not feasible to learn k-term DNF formulas properly in a distribution-free sense, even if both positive and negative samples are available and even if false positives are allowed.
This paper constructs an efficient algorithm that, for arbitrary fixed k, properly learns the class of k-term DNFs from positive samples alone, without false positives and with arbitrarily small relative error, provided the samples are drawn from distributions such as the uniform or q-bounded ones.
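To illustrate the flavor of learning from satisfying assignments, here is a minimal sketch for the classical special case k = 1 (a single conjunction); it is not the paper's algorithm for general k. Outputting the most specific conjunction consistent with the positive samples guarantees one-sided error: every assignment the hypothesis accepts also satisfies the target term. All names below are illustrative.

```python
# Minimal sketch for the special case k = 1 (a single conjunction), which is
# classical and NOT the paper's algorithm for general k: output the most
# specific conjunction consistent with the positive samples, so the
# hypothesis never produces a false positive (one-sided error).

def learn_one_term(positive_samples):
    """Each sample maps variable -> bool; keep exactly the literals
    (variable, value) that every positive sample satisfies."""
    literals = set(positive_samples[0].items())
    for sample in positive_samples[1:]:
        literals &= set(sample.items())
    return literals

def accepts(literals, assignment):
    """The hypothesis accepts iff all of its literals are satisfied."""
    return all(assignment[var] == val for var, val in literals)

# Target term: x1 AND NOT x2; the samples below are satisfying assignments.
positives = [
    {"x1": True, "x2": False, "x3": True},
    {"x1": True, "x2": False, "x3": False},
]
hypothesis = learn_one_term(positives)
print(sorted(hypothesis))  # [('x1', True), ('x2', False)]
print(accepts(hypothesis, {"x1": True, "x2": False, "x3": True}))  # True
```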