- Arfst Nickelsen, Till Tantau:
Partial Information Classes.
SIGACT News, 34(1):32-46, 2003.
In this survey we present partial information
classes, which have been studied under different
names and in different contexts in the
literature. They are defined in terms of partial
information algorithms. Such algorithms take a word
tuple as input and yield a small set of
possibilities for its characteristic string as
output. We define a unified framework for the study
of partial information classes and show how previous
notions fit into the framework. The framework allows
us to study the relationship of a large variety of
partial information classes in a uniform way. We
survey how partial information classes are related
to other complexity theoretic notions like advice
classes, lowness, bi-immunity, NP-completeness, and
decidability.
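To make the central notion concrete, here is a small illustration of our own (not taken from the survey), using a "left cut" language A = {x : x < r} for an unknown threshold r: for any pair of inputs, one of the four candidate characteristic strings can be excluded without knowing r at all.

```python
# Toy partial information algorithm for a left cut A = {x : x < r}
# with r unknown: output at most 3 of the 4 candidates for the
# characteristic string chi_A(x) chi_A(y).

def partial_info(x: int, y: int) -> set[str]:
    if x <= y:
        # y in A means y < r, hence x < r, hence x in A,
        # so the candidate "01" is impossible.
        return {"00", "10", "11"}
    # symmetric case: x in A implies y in A, so "10" is impossible.
    return {"00", "01", "11"}
```

Languages admitting such exclusion algorithms are exactly the kind of object the survey's framework classifies.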
- Till Tantau:
Query Complexity of Membership Comparable Sets.
Theoretical Computer Science, 302(1-3):467-474, 2003.
This paper investigates how many queries to
k-membership comparable sets are needed in
order to decide all (k+1)-membership
comparable sets. For k >= 2 this query
complexity is at least linear and at most cubic. As
a corollary we obtain that more languages are
O(log n)-membership comparable than
truth-table reducible to P-selective sets.
- Mitsunori Ogihara, Till Tantau:
On the Reducibility of Sets Inside NP to Sets with Low Information Content.
Journal of Computer and System Sciences, 69(4):299-324, 2004.
This paper studies for various natural problems in
NP whether they can be reduced to sets with low
information content, such as branches, P-selective
sets, and membership comparable sets. The problems
that are studied include the satisfiability problem,
the graph automorphism problem, the undirected graph
accessibility problem, the determinant function, and
all logspace self-reducible languages. Some of these
are complete for complexity classes within NP, but
for others an exact complexity-theoretic
characterization is not known. Reducibility of these
problems is studied in a general framework
introduced in this paper: prover-verifier protocols
with low-complexity provers. It is shown that all
these natural problems indeed have such
protocols. This fact is used to show, for certain
reduction types, that these problems are not
reducible to sets with low information content
unless their complexity is much less than what it is
currently believed to be. The general framework is
also used to obtain a new characterization of the complexity class L: L is the class of all logspace self-reducible sets in $\mathrm{L}^{\text{L-sel}}$.
- Till Tantau:
Comparing Verboseness for Finite Automata and Turing Machines.
Theory of Computing Systems, 37(1):95-109, 2004.
A language is called (m,n)-verbose if
there exists a Turing machine that enumerates for
any n words at most m possibilities
for their characteristic string. We compare this
notion to (m,n)-fa-verboseness, where
instead of a Turing machine a finite automaton is
used. Using a new structural diagonalisation
method, where finite automata trick Turing machines,
we prove that all (m,n)-verbose languages are (h,k)-verbose if, and only if, all (m,n)-fa-verbose languages are (h,k)-fa-verbose. In other words,
Turing machines and finite automata behave in
exactly the same way with respect to inclusion of
verboseness classes. This identical behaviour
implies that the Nonspeedup Theorem also holds for
finite automata. As an application of the
theoretical framework, we prove a lower bound on the number of bits that need to be communicated to finite-automaton protocol checkers for nonregular protocols.
- Till Tantau:
Strahlende Präsentationen in LaTeX.
Die TeXnische Komödie, 2/04:54-80, 2004.
In German.
The LaTeX class beamer is used to create presentations with LaTeX, pdfLaTeX, or LyX. As the name of the class suggests ("Beamer" is the German word for a video projector), beamer aims to make it particularly easy to produce presentations given with a projector; classic overhead-slide talks are supported just as well. As far as possible, beamer builds on LaTeX commands familiar to the user, such as \section for creating sections or enumerate for creating itemized lists. Projected presentations often profit from the use of overlays, whose creation is eased by a simple but powerful syntax extension. The class follows the LaTeX philosophy of separating form and content: from a single source, differently styled presentations can be produced by varying layout parameters. The predefined layouts range from classically austere to fashionably flashy, but always follow the Bauhaus principle "form follows function".
- Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
Context-Free Languages Can Be Accepted with Absolutely No Space Overhead.
Information and Computation, 203(2):163-180, 2005.
We study Turing machines that are allowed absolutely
no space overhead. The only work space the machines
have, beyond the fixed amount of memory implicit in
their finite-state control, is that which they can
create by cannibalizing the input bits' own
space. This model more closely reflects the
fixed-sized memory of real computers than does the
standard complexity-theoretic model of linear
space. Though some context-sensitive languages
cannot be accepted by such machines, we show that
all context-free languages can be accepted
nondeterministically in polynomial time with
absolutely no space overhead, and that all
deterministic context-free languages can be accepted
deterministically in polynomial time with absolutely
no space overhead.
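To convey the flavor of space-overhead-free computation, here is a toy sketch of our own (emphatically not the paper's construction, which handles all context-free languages): it decides the Dyck language of balanced parentheses, a deterministic context-free language, in polynomial time using no work space beyond the input cells themselves.

```python
# Sketch: decide balanced parentheses by repeatedly matching an
# innermost pair and overwriting both cells with 'X' -- the only
# work space used is the input itself plus two indices.

def balanced(word: str) -> bool:
    tape = list(word)                      # the input is the work tape
    while True:
        # find the first ')' on the tape
        j = next((j for j, c in enumerate(tape) if c == ')'), None)
        if j is None:                           # no ')' left:
            return all(c == 'X' for c in tape)  # accept iff no '(' remains
        i = j - 1                          # nearest non-'X' symbol to the left
        while i >= 0 and tape[i] == 'X':
            i -= 1
        if i < 0 or tape[i] != '(':        # unmatched ')'
            return False
        tape[i] = tape[j] = 'X'            # cannibalize both cells
```

Each iteration erases one pair, so the running time is quadratic; a Turing machine would replace the two Python indices by head positions, leaving no space overhead at all.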
- Arfst Nickelsen, Till Tantau:
The Complexity of Finding Paths in Graphs with Bounded Independence Number.
SIAM Journal on Computing, 34(5):1176-1195, 2005.
We study the problem of finding a path between two
vertices in finite directed graphs whose
independence number is bounded by some constant
k. The independence number of a graph is the
largest number of vertices that can be picked such
that there is no edge between any two of them. The
complexity of this problem depends on the exact
question we ask: Do we only wish to tell whether a
path exists? Do we also wish to construct such a
path? Are we required to construct the shortest one?
Concerning the first question, we show that the
reachability problem is first-order definable for
all k and that its succinct version is $\Pi_2^{\mathrm p}$-complete for all k. In contrast, the reachability problems for
many other types of finite graphs, including dags
and trees, are not first-order definable and their
succinct versions are PSPACE-complete. Concerning
the second question, we show that not only can we
construct paths in logarithmic space, there even
exists a logspace approximation scheme for this
problem. The scheme gets a ratio r ≥ 1 as
additional input and outputs a path that is at most
r times as long as the shortest
path. Concerning the third question, we show that
even telling whether the shortest path has a certain
length is NL-complete and thus as difficult as for
arbitrary directed graphs.
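For readers who want the parameter pinned down, a hypothetical brute-force checker (purely definitional, exponential time, and not the paper's algorithm):

```python
# Independence number of a directed graph: the largest set of
# vertices with no edge between any two of them, in either direction.
from itertools import combinations

def independence_number(n: int, edges: list[tuple[int, int]]) -> int:
    joined = {frozenset(e) for e in edges}
    for size in range(n, 1, -1):           # try large sets first
        for candidate in combinations(range(n), size):
            if all(frozenset(p) not in joined
                   for p in combinations(candidate, 2)):
                return size
    return 1 if n > 0 else 0               # a single vertex is independent
```

The paper's results apply to graph classes where this number is bounded by a constant k; tournaments, for instance, have independence number 1.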
- Till Tantau:
Weak Cardinality Theorems.
Journal of Symbolic Logic, 70(3):861-878, 2005.
Kummer's Cardinality Theorem states that a language A must be recursive if a Turing machine can exclude, for any n words $w_1, \ldots, w_n$, one of the $n+1$ possibilities for the cardinality of $\{w_1, \ldots, w_n\} \cap A$. There was good reason to believe
that this theorem is a peculiarity of recursion theory: neither the
Cardinality Theorem nor weak forms of it hold for classical resource-bounded
computational models like polynomial time. This belief may be flawed. In this
paper it is shown that Weak Cardinality Theorems hold for finite automata and
also for other models. A mathematical explanation is given as to ``why''
recursion-theoretic and automata-theoretic Weak Cardinality Theorems hold,
but not corresponding `middle-ground theorems': the recursion- and
automata-theoretic Weak Cardinality Theorems are instantiations of purely
logical Weak Cardinality Theorems. The logical theorems can be instantiated
for logical structures characterizing recursive computations and finite
automata computations. A corresponding structure characterizing polynomial
time does not exist.
- Jens Gramm, Till Nierhoff, Roded Sharan, Till Tantau:
Haplotyping with Missing Data via Perfect Path Phylogenies.
Discrete and Applied Mathematics, 155:788-805, 2007.
Computational methods for inferring haplotype information from genotype data are used in studying the association between genomic variation and medical condition. Recently, Gusfield proposed a haplotype inference method that is based on perfect phylogeny principles. A fundamental problem arises when one tries to apply this approach in the presence of missing genotype data, which is common in practice. We show that the resulting theoretical problem is NP-hard even in very restricted cases. To cope with missing data, we introduce a variant of haplotyping via perfect phylogeny in which a path phylogeny is sought. Searching for perfect path phylogenies is strongly motivated by the characteristics of human genotype data: 70% of real instances that admit a perfect phylogeny also admit a perfect path phylogeny. Our main result is a fixed-parameter algorithm for haplotyping with missing data via perfect path phylogenies. We also present a simple linear-time algorithm for the problem on complete data.
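Some background the abstract presupposes: on complete binary data, whether a matrix admits a perfect phylogeny is characterized by the classical four-gamete test, sketched below (our illustration; the paper's contribution concerns the much harder missing-data and path-phylogeny variants).

```python
# Four-gamete test: a binary matrix (rows = taxa, columns = sites)
# admits a perfect phylogeny iff no two columns contain all four
# patterns 00, 01, 10, 11.
from itertools import combinations

def admits_perfect_phylogeny(matrix: list[list[int]]) -> bool:
    columns = list(zip(*matrix))
    return all(len(set(zip(a, b))) < 4
               for a, b in combinations(columns, 2))
```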
- Till Tantau:
Logspace Optimization Problems and Their Approximability Properties.
Theory of Computing Systems, 41(2):327-350, 2007.
Logspace optimization problems are the logspace
analogues of the well-studied polynomial-time
optimization problems. Similarly to them, logspace
optimization problems can have vastly different
approximation properties, even though the underlying
decision problems have the same computational
complexity. Natural problems -- including the
shortest path problems for directed graphs,
undirected graphs, tournaments, and forests --
exhibit such a varying complexity. In order to study
the approximability of logspace optimization
problems in a systematic way, polynomial-time
approximation classes and polynomial-time reductions
between optimization problems are transferred to
logarithmic space. It is proved that natural
problems are complete for different logspace
approximation classes. This is used to show that
under the assumption $\mathrm{L} \neq \mathrm{NL}$ some logspace
optimization problems cannot be approximated with a
constant ratio; some can be approximated with a
constant ratio, but do not permit a logspace
approximation scheme; and some have a logspace
approximation scheme, but cannot be solved in
logarithmic space.
- Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, Till Tantau:
Dynamic Kernels for Hitting Sets and Set Packing.
Algorithmica, 84:3459-3488, 2022.
Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k^d (which is a polynomial as d is a constant), and they do so in time m · 2^d poly(d) for a small polynomial poly(d) (which is linear in the hypergraph size for fixed d). We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k^d kernel. This paper presents a deterministic solution with worst-case time 3^d poly(d) for updating the kernel upon insertions and time 5^d poly(d) for updates upon deletions. These bounds nearly match the time 2^d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deterministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c^k), where c = d - 1 + O(1/d) equals the best base known for the static setting.
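For orientation, the static kernel for the special case d = 2, i.e., vertex cover via Buss's rule, fits in a few lines; the sketch below is our illustration of that static baseline, not the paper's dynamic data structure.

```python
# Buss kernelization for vertex cover (hitting set with d = 2):
# a vertex of degree > budget must lie in every small cover; once
# the rule is exhausted, more than budget^2 surviving edges rule
# out a size-k cover.

def vertex_cover_kernel(edges, k):
    """Return (forced_vertices, kernel_edges), or None if no size-k cover."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        budget = k - len(forced)
        if budget < 0:
            return None
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > budget]
        if not high:
            break
        v = high[0]
        forced.add(v)                          # v is in every size-k cover
        edges = {e for e in edges if v not in e}
    if len(edges) > (k - len(forced)) ** 2:
        return None                            # kernel too large
    return forced, edges
```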
- Max Bannach, Till Tantau:
On the Descriptive Complexity of Color Coding.
MDPI Algorithms, 2021.
Special Issue: Parameterized Complexity and Algorithms for Nonclassical Logics
Color coding is an algorithmic technique used in parameterized complexity theory to detect “small” structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing—purely in terms of the syntactic structure of describing formulas—when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas.
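A minimal sketch of the color coding technique itself (our code, independent of the paper's descriptive-complexity machinery): to decide whether a graph contains a simple path on k vertices, randomly k-color the vertices and look for a path using each color once, via dynamic programming over color sets; repetitions drive down the error probability.

```python
# Randomized color coding for the k-path problem. One-sided error:
# a True answer is always correct, a False answer is correct with
# high probability (for small k and enough trials).
import random

def has_k_path(n, edges, k, trials=200):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for _ in range(trials):
        color = [random.randrange(k) for _ in range(n)]
        # paths[v] = color sets of "colorful" paths ending at v
        paths = {v: {frozenset([color[v]])} for v in range(n)}
        for _ in range(k - 1):                 # extend paths k-1 times
            extended = {v: set() for v in range(n)}
            for v in range(n):
                for u in adj[v]:
                    for s in paths[u]:
                        if color[v] not in s:  # keep the path colorful
                            extended[v].add(s | {color[v]})
            paths = extended
        if any(paths[v] for v in range(n)):    # some colorful k-path found
            return True
    return False
```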
- Max Bannach, Till Tantau:
Computing Hitting Set Kernels by AC^0-Circuits.
Theory of Computing Systems, 64(3):374-399, 2020.
Given a hypergraph $H = (V,E)$, what is the smallest subset $X
\subseteq V$ such that $e \cap X \neq \emptyset$ holds for all $e \in E$?
This problem, known as the \emph{hitting set problem,} is a
basic problem in parameterized complexity theory. There are well-known
kernelization algorithms for it, which get a hypergraph~$H$ and a
number~$k$ as input and output a hypergraph~$H'$ such that (1)
$H$ has a hitting set of size~$k$ if, and only if, $H'$ has such a
hitting set and (2) the size of $H'$ depends only on $k$
and on the maximum cardinality $d$ of edges in~$H$. The
algorithms run in polynomial time and can be parallelized
to a certain degree: one can compute hitting set kernels in parallel
time $O(d)$ -- but it was conjectured that this is
the best parallel algorithm possible. We
refute this conjecture and show how hitting set kernels can be
computed in \emph{constant} parallel time. For our proof, we
introduce a new, generalized notion of hypergraph sunflowers and
show how iterated applications of the color coding technique can
sometimes be collapsed into a single application.
- Michael Elberfeld, Martin Grohe, Till Tantau:
Where First-Order and Monadic Second-Order Logic Coincide.
ACM Transactions on Computational Logic, 17(4):Article No. 25, 2016.
- Till Tantau:
A Gentle Introduction to Applications of Algorithmic Metatheorems for Space and Circuit Classes.
Algorithms, 9(3):1-44, 2016.
Algorithmic metatheorems state that if a problem can be described in a certain logic and the inputs are structured in a certain way, then the problem can be solved with a certain amount of resources. As an example, by Courcelle’s Theorem, all monadic second-order (“in a certain logic”) properties of graphs of bounded tree width (“structured in a certain way”) can be solved in linear time (“with a certain amount of resources”). Such theorems have become valuable tools in algorithmics: if a problem happens to have the right structure and can be described in the right logic, they immediately yield a (typically tight) upper bound on the time complexity of the problem. Perhaps even more importantly, several complex algorithms rely on algorithmic metatheorems internally to solve subproblems, which considerably broadens the range of applications of these theorems. This paper is intended as a gentle introduction to the ideas behind algorithmic metatheorems, especially behind some recent results concerning space and circuit classes, and tries to give a flavor of the range of their applications.
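As a concrete instance (our example, not taken from the paper): 3-colorability is a monadic second-order property of graphs, so Courcelle's Theorem immediately yields a linear-time algorithm for it on graphs of bounded tree width.

```latex
% MSO formula for 3-colorability: there are vertex sets R, G, B that
% cover all vertices such that no edge joins two vertices of the
% same set.
\[
\exists R\, \exists G\, \exists B\;
  \forall x\, (x \in R \lor x \in G \lor x \in B) \;\land\;
  \forall x\, \forall y\, \bigl( E(x,y) \rightarrow
    \neg\bigl( (x \in R \land y \in R) \lor
               (x \in G \land y \in G) \lor
               (x \in B \land y \in B) \bigr) \bigr)
\]
```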
- Michael Elberfeld, Christoph Stockhusen, Till Tantau:
On the Space and Circuit Complexity of Parameterized Problems: Classes and Completeness.
Algorithmica, 71(3):661-701, 2015.
The parameterized complexity of a problem is generally considered
``settled'' once it has been shown to be fixed-parameter
tractable or to be complete for a class in a parameterized
hierarchy such as the weft hierarchy. Several natural
parameterized problems have, however, resisted such a
classification. In the present paper we argue that, at least in some cases, this is due to the fact that the parameterized
complexity of these problems can be better understood in terms of
their \emph{parameterized space} or \emph{parameterized circuit}
complexity. This includes well-studied, natural problems like the
feedback vertex set problem, the associative generability
problem, or the longest common subsequence problem. We show that
these problems lie in and may even be complete for different
parameterized space classes, leading to new insights into the
problems' complexity. The classes we study are defined in terms
of different forms of bounded nondeterminism and simultaneous
time--space bounds.
- Till Tantau:
Graph Drawing in TikZ.
Journal of Graph Algorithms and Applications, 17(4):495-513, 2013.
At the heart of every good graph drawing algorithm lies an efficient procedure for assigning canvas positions to a graph's nodes and the bend points of its edges. However, every real-world implementation of such an algorithm must address numerous problems that have little to do with the actual algorithm, like handling input and output formats, formatting node texts, and styling nodes and edges. We present a new framework, implemented in the Lua programming language and integrated into the TikZ graphics description language, that aims at simplifying the implementation of graph drawing algorithms. Implementers using the framework can focus on the core algorithmic ideas and will automatically profit from the framework's pre- and post-processing steps as well as from the extensive capabilities of the TikZ graphics language and the TeX typesetting engine. Algorithms already implemented using the framework include the Reingold-Tilford tree drawing algorithm, a modular version of Sugiyama's layered algorithm, and several force-based multilevel algorithms.
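For flavor, a complete example using the framework (our sketch; note that the graph drawing engine requires compilation with LuaLaTeX):

```latex
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{graphs, graphdrawing}
\usegdlibrary{trees}            % Reingold-Tilford tree drawing
\begin{document}
\tikz \graph [tree layout] {    % node positions computed automatically
  root -> { left -> {a, b}, right -> c }
};
\end{document}
```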
- Valentina Damerow, Bodo Manthey, Friedhelm Meyer auf der Heide, Harald Räcke, Christian Scheideler, Christian Sohler, Till Tantau:
Smoothed Analysis of Left-To-Right Maxima with Applications.
ACM Transactions on Algorithms, 8(3):Article No. 30, 2012.
A left-to-right maximum in a sequence of n numbers $s_1, \ldots, s_n$ is a number that is strictly larger than all preceding numbers. In this article we present a smoothed analysis of the number of left-to-right maxima in the presence of additive random noise. We show that for every sequence of n numbers $s_i \in [0,1]$ that are perturbed by uniform noise from the interval $[-\epsilon,\epsilon]$, the expected number of left-to-right maxima is $\Theta(\sqrt{n/\epsilon} + \log n)$ for $\epsilon > 1/n$. For Gaussian noise with standard deviation $\sigma$ we obtain a bound of $O((\log^{3/2} n)/\sigma + \log n)$.
We apply our results to the analysis of the smoothed height of binary search trees and the smoothed number of comparisons in the quicksort algorithm and prove bounds of $\Theta(\sqrt{n/\epsilon} + \log n)$ and $\Theta(\frac{n}{\epsilon+1}\sqrt{n/\epsilon} + n \log n)$, respectively, for uniform random noise from the interval $[-\epsilon,\epsilon]$. Our results can also be applied to bound the smoothed number of points on the convex hull of a point set in the two-dimensional plane and to smoothed motion complexity, a concept we describe in this article. We bound how often one needs to update a data structure storing the smallest axis-aligned box enclosing a set of points moving in d-dimensional space.
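A quick empirical illustration of the phenomenon (our toy experiment; a simulation of this kind shows the trend, not the exact constants):

```python
# Count left-to-right maxima of an adversarial increasing sequence,
# with and without uniform noise from [-eps, eps].
import random

def left_to_right_maxima(seq):
    count, best = 0, float('-inf')
    for x in seq:
        if x > best:                 # strictly larger than all before
            count, best = count + 1, x
    return count

n, eps = 100_000, 0.1
increasing = [i / n for i in range(n)]                  # n maxima
noisy = [x + random.uniform(-eps, eps) for x in increasing]
print(left_to_right_maxima(increasing))                 # exactly n
print(left_to_right_maxima(noisy))                      # about sqrt(n/eps)
```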
- Michael Elberfeld, Ilka Schnoor, Till Tantau:
Influence of Tree Topology Restrictions on the Complexity of Haplotyping with Missing Data.
Theoretical Computer Science, 432:38-51, 2012.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. Unfortunately, when data entries are missing, which is often the case in laboratory data, the resulting formal problem IPPH, which stands for incomplete perfect phylogeny haplotyping, is NP-complete. Even radically simplified versions, such as the restriction to phylogenetic trees consisting of just two directed paths from a given root, are still NP-complete; but here, at least, a fixed-parameter algorithm is known. Such drastic and ad hoc simplifications turn out to be unnecessary to make IPPH tractable: we present the first theoretical analysis of a parametrized algorithm, which we develop in the course of the paper, that works for arbitrary instances of IPPH. This tractability result is optimal insofar as we prove IPPH to be NP-complete whenever any of the parameters we consider is not fixed, but part of the input.
- Michael Elberfeld, Till Tantau:
Phylogeny- and Parsimony-Based Haplotype Inference with Constraints.
Information and Computation, 213:33-47, 2012.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast computational haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. An extension of this approach tries to incorporate prior knowledge in the form of a set of candidate haplotypes from which the right haplotypes must be chosen. The objective is to increase the accuracy of haplotyping methods, but it was conjectured that the resulting formal problem constrained perfect phylogeny haplotyping might be NP-complete. In the paper at hand we present a polynomial-time algorithm for it. Our algorithmic ideas also yield new fixed-parameter algorithms for related haplotyping problems based on the maximum parsimony assumption.
- Edith Hemaspaandra, Lane A. Hemaspaandra, Till Tantau, Osamu Watanabe:
On the Complexity of Kings.
Theoretical Computer Science, 411:783-798, 2010.
A k-king in a directed graph is a node from which each node in the graph can be reached via paths of length at most~k. Recently, kings have proven useful in theoretical computer science, in particular in the study of the complexity of reachability problems and semifeasible sets. In this paper, we study the complexity of recognizing k-kings. For each succinctly specified family of tournaments (completely oriented digraphs), the k-king problem is easily seen to belong to $\Pi_2^{\mathrm p}$. We prove that the complexity of kingship problems is a rich enough vocabulary to pinpoint every nontrivial many-one degree in $\Pi_2^{\mathrm p}$. That is, we show that for every $k \ge 2$ every set in $\Pi_2^{\mathrm p}$ other than $\emptyset$ and $\Sigma^*$ is equivalent to a k-king problem under $\leq_{\mathrm m}^{\mathrm p}$-reductions. The equivalence can be instantiated via a simple padding function. Our results can be used to show that the radius problem for arbitrary succinctly represented graphs is $\Sigma_3^{\mathrm p}$-complete. In contrast, the diameter problem for arbitrary succinctly represented graphs (or even tournaments) is $\Pi_2^{\mathrm p}$-complete.
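For explicitly given digraphs, recognizing a k-king is a simple polynomial-time task; the hardness results above hinge entirely on the succinct representation. A plain sketch of the explicit check (our illustration):

```python
# Is v a k-king, i.e., can every node be reached from v via a path
# of length at most k? Breadth-first search truncated at depth k.
from collections import deque

def is_k_king(adj: dict, v, k: int) -> bool:
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return len(dist) == len(adj)
```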
- Jens Gramm, Tzvika Hartman, Till Nierhoff, Roded Sharan, Till Tantau:
On the complexity of SNP block partitioning under the perfect phylogeny model.
Discrete Mathematics, 309(18):5610-5617, 2009.
Recent technologies for typing single nucleotide polymorphisms (SNPs) across a population are producing genome-wide genotype data for tens of thousands of SNP sites. The emergence of such large data sets underscores the importance of algorithms for large-scale haplotyping. Common haplotyping approaches first partition the SNPs into blocks of high linkage-disequilibrium, and then infer haplotypes for each block separately. We investigate an integrated haplotyping approach where a partition of the SNPs into a minimum number of non-contiguous subsets is sought, such that each subset can be haplotyped under the perfect phylogeny model. We show that finding an optimum partition is NP-hard even if we are guaranteed that two subsets suffice. On the positive side, we show that a variant of the problem, in which each subset is required to admit a perfect path phylogeny haplotyping, is solvable in polynomial time.
- Jens Gramm, Arfst Nickelsen, Till Tantau:
Fixed-Parameter Algorithms in Phylogenetics.
The Computer Journal, 51(1):79-101, 2008.
We survey the use of fixed-parameter algorithms in
the field of phylogenetics, which is the study of
evolutionary relationships. The central problem in
phylogenetics is the reconstruction of the
evolutionary history of biological species, but its
methods also apply to linguistics, philology, or
architecture. A basic computational problem is the
reconstruction of a likely phylogeny (genealogical
tree) for a set of species based on observed
differences in the phenotype like color or form of
limbs, based on differences in the genotype like
mutated nucleotide positions in the DNA sequence, or
based on given partial phylogenies. Ideally, one
would like to construct so-called perfect
phylogenies, which arise from a very simple
evolutionary model, but in practice one must often
be content with phylogenies whose ``distance from
perfection'' is as small as possible. The
computation of phylogenies has applications in
seemingly unrelated areas such as genomic sequencing
and finding and understanding genes. The numerous
computational problems arising in phylogenetics
often are NP-complete, but for many natural
parametrizations they can be solved using
fixed-parameter algorithms.