- Klaus Didrich, Wolfgang Grieskamp, Florian Schintke, Till Tantau, Baltasar Trancón-y-Widemann:
Reflections in Opal - Meta Information in a Functional Programming Language.
In Proceedings of the 11th International Workshop on Implementation of Functional Languages, Lochem, The Netherlands, September 1999, (IFL'99), Selected Papers, Volume 1868 of Lecture Notes in Computer Science, pp. 146--164.
Springer,
2000.
We report on an extension of the Opal system that
allows the use of reflections. Using reflections, a
programmer can query information like the type of an
object at runtime. The type can in turn be queried
for properties like the constructor and
deconstructor functions, and the resulting reflected
functions can be evaluated. These facilities can be
used for generic meta-programming. We describe the
reflection interface of Opal and its applications,
and sketch the implementation. For an existing
language implementation like Opal's the extension by
a reflection facility is challenging: in a
statically typed language the management of runtime
type information seems to be an alien
objective. However, it turns out that runtime type
information can be incorporated in an elegant way by
a source-level transformation and an appropriate set
of library modules. We show how this transformation
can be done without changing the Opal core system,
incurring runtime overhead only where reflections
are actually used.
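A rough analogue of these reflection queries can be sketched in Python, which exposes similar runtime type information (illustration only; the function below is hypothetical and Opal's actual reflection interface differs):

```python
# Illustration only: a Python analogue of the reflection queries the
# abstract describes -- querying a value's type at runtime, obtaining a
# reflected constructor from it, and evaluating that constructor.
# Opal's actual interface differs; `describe` is a hypothetical name.

def describe(obj):
    """Return the runtime type name of obj and a value rebuilt via the
    reflected constructor."""
    t = type(obj)          # query the type of an object at runtime
    rebuilt = t(obj)       # in Python, the type object is the constructor
    return t.__name__, rebuilt

name, copy_ = describe([1, 2, 3])
print(name, copy_)
```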
- Arfst Nickelsen, Till Tantau:
Closure of Polynomial Time Partial Information Classes under Polynomial Time Reductions.
In Proceedings of the 13th International Symposium on Fundamentals of Computation Theory (FCT 2001), Volume 2138 of Lecture Notes in Computer Science, pp. 299-310.
Springer,
2001.
Polynomial time partial information classes are
extensions of the class P of languages decidable in
polynomial time. A partial information algorithm for
a language $A$ computes, for fixed $n$, on input of
words $x_1, \ldots, x_n$ a set $P$ of bitstrings,
called a pool, such that $\chi_A(x_1, \ldots, x_n)
\in P$, where $P$ is chosen from a family $D$ of
pools. A language $A$ is in $P[D]$ if there is a
polynomial time partial information algorithm which
for all inputs $(x_1, \ldots, x_n)$ outputs a pool
$P$ in $D$ with $\chi_A(x_1, \ldots, x_n) \in P$.
Many extensions of P studied in the literature,
including approximable languages, cheatability,
p-selectivity and frequency computations, form a
class $P[D]$ for an appropriate family $D$. We
characterise those families $D$ for which $P[D]$ is
closed under certain polynomial time reductions,
namely bounded truth-table, truth-table, and Turing
reductions. We also treat positive reductions. A
class $P[D]$ is presented which strictly contains
the class P-sel of p-selective languages and is
closed under positive truth-table reductions.
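The pool condition above can be made concrete for a toy language and $n = 2$; the family $D$ below (all 3-element sets of 2-bit strings, so the algorithm may always exclude one of the four possibilities) is our illustrative choice, not one taken from the paper:

```python
# Toy illustration (not from the paper) of the pool condition for n = 2.
# A is the set of even numbers; D is the family of all 3-element sets of
# 2-bit strings, so a partial information algorithm may exclude exactly
# one of the four possibilities for chi_A(x1, x2).

from itertools import combinations

A = lambda x: x % 2 == 0          # membership in the toy language

def chi(x1, x2):
    """Characteristic string chi_A(x1, x2)."""
    return f"{int(A(x1))}{int(A(x2))}"

D = [set(p) for p in combinations(["00", "01", "10", "11"], 3)]

def partial_info_algorithm(x1, x2):
    """Output a pool from D containing chi_A(x1, x2). Membership in A is
    trivially computable here; we merely illustrate the containment
    condition by picking any pool of D that contains the right string."""
    s = chi(x1, x2)
    return next(P for P in D if s in P)

P = partial_info_algorithm(4, 7)
print(P, chi(4, 7) in P)
```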
- Arfst Nickelsen, Till Tantau:
On Reachability in Graphs with Bounded Independence Number.
In Proceedings of the Eighth Annual International Computing and Combinatorics Conference (COCOON 2002), Volume 2387 of Lecture Notes in Computer Science, pp. 554--563.
Springer,
2002.
We study the reachability problem for finite
directed graphs whose independence number is bounded
by some constant $k$. We show that this problem is
first-order definable for all $k$. In contrast, the
reachability problem for many other types of finite
graphs, including dags and trees, is not first-order
definable. We also study the reachability problem
for succinctly represented graphs with independence
number at most $k$ and show that this succinct
version is $\Pi_2^p$-complete for all $k$.
- Till Tantau:
Comparing Verboseness for Finite Automata and Turing Machines.
In Proceedings of the 19th International Symposium on Theoretical Aspects of Computer Science (STACS 2002), Volume 2285 of Lecture Notes in Computer Science, pp. 465-476.
Springer,
2002.
A language is called (m,n)-verbose if
there exists a Turing machine that enumerates for
any n words at most m possibilities
for their characteristic string. We compare this
notion to (m,n)-fa-verboseness, where
instead of a Turing machine a finite automaton is
used. Using a new structural diagonalisation
method, where finite automata trick Turing machines,
we prove that all (m,n)-verbose
languages are (h,k)-verbose if, and only if, all
(m,n)-fa-verbose languages are
(h,k)-fa-verbose. In other words,
Turing machines and finite automata behave in
exactly the same way with respect to inclusion of
verboseness classes. This identical behaviour
implies that the Nonspeedup Theorem also holds for
finite automata. As an application of the
theoretical framework, we prove a lower bound on the
number of bits needed to be communicated to finite
automata protocol checkers for nonregular
protocols.
- Till Tantau:
Towards a Cardinality Theorem for Finite Automata.
In Proceedings of the 27th International Symposium on Mathematical Foundations of Computer Science (MFCS 2002), Volume 2420 of Lecture Notes in Computer Science, pp. 625-636.
Springer,
2002.
Kummer's cardinality theorem states that a language
is recursive if a Turing machine can exclude for any
n words one of the n+1 possibilities
for the number of words in the language. This paper
gathers evidence that the cardinality theorem might
also hold for finite automata. Three reasons are
given. First, Beigel's nonspeedup theorem also holds
for finite automata. Second, the cardinality theorem
for finite automata holds for n=2. Third, the
restricted cardinality theorem for finite automata
holds for all n.
- Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
Computation with Absolutely No Space Overhead.
In Proceedings of the Seventh International Conference on Developments in Language Theory (DLT 2003), Volume 2710 of Lecture Notes in Computer Science, pp. 325-336.
Springer,
2003.
We study Turing machines that are allowed absolutely
no space overhead. The only work space the machines
have, beyond the fixed amount of memory implicit in
their finite-state control, is that which they can
create by cannibalizing the input bits' own
space. This model more closely reflects the
fixed-sized memory of real computers than does the
standard complexity-theoretic model of linear
space. Though some context-sensitive languages
cannot be accepted by such machines, we show that
subclasses of the context-free languages can even be
accepted in polynomial time with absolutely no space
overhead.
- Till Tantau:
Weak Cardinality Theorems for First-Order Logic.
In Proceedings of the 14th International Symposium on Fundamentals of Computation Theory (FCT 2003), Volume 2751 of Lecture Notes in Computer Science, pp. 400-411.
Springer,
2003.
Kummer's cardinality theorem states that a language
is recursive if a Turing machine can exclude for any
n words one of the n + 1 possibilities
for the number of words in the language. It is known
that this theorem does not hold for polynomial-time
computations, but there is evidence that it holds
for finite automata: at least weak cardinality
theorems hold for finite automata. This paper shows
that some of the recursion-theoretic and
automata-theoretic weak cardinality theorems are
instantiations of purely logical theorems. Apart
from unifying previous results in a single
framework, the logical approach allows us to prove
new theorems for other computational models. For
example, weak cardinality theorems hold for
Presburger arithmetic.
- Jens Gramm, Till Nierhoff, Roded Sharan, Till Tantau:
On the Complexity of Haplotyping via Perfect Phylogeny.
In Proceedings of the Second RECOMB Satellite Workshop on Computational Methods for SNPs and Haplotypes, pp. 35-46.
2004.
The problem of haplotyping via perfect phylogeny has
received a lot of attention lately due to its
applicability to real haplotyping problems and its
theoretical elegance. However, two main research
issues remained open: The complexity of haplotyping
with missing data, and whether the problem is
linear-time solvable. In this paper we settle the
first question and make progress toward answering
the second one. Specifically, we prove that Perfect
Phylogeny Haplotyping with missing data is
NP-complete even when the phylogeny is a path and
only one allele of every polymorphic site is present
in the population in its homozygous state. Our
result implies the hardness of several variants of
the missing data problem, including the general
Perfect Phylogeny Haplotyping Problem with missing
data, and Hypergraph Tree Realization with missing
data. On the positive side, we give a linear-time
algorithm for Perfect Phylogeny Haplotyping when the
phylogeny is a path. This variant is important due
to the abundance of yin-yang haplotypes in the human
genome. Our algorithm relies on a reduction of the
problem to that of deciding whether a partially
ordered set has width~2.
- Jens Gramm, Till Nierhoff, Till Tantau:
Perfect Path Phylogeny Haplotyping with Missing Data is Fixed-Parameter Tractable.
In Proceedings of the 2004 International Workshop on Parameterized and Exact Computation, Volume 3162 of Lecture Notes in Computer Science, pp. 174-186.
Springer,
2004.
Haplotyping via perfect phylogeny is a method for
retrieving haplotypes from genotypes. Fast
algorithms are known for computing perfect
phylogenies from complete and error-free input
instances---these instances can be organized as a
genotype matrix whose rows are the genotypes and
whose columns are the single nucleotide
polymorphisms under consideration. Unfortunately, in
the more realistic setting of missing entries in the
genotype matrix, even restricted forms of the
perfect phylogeny haplotyping problem become
NP-hard. We show that haplotyping via perfect
phylogeny with missing data becomes computationally
tractable when imposing additional biologically
motivated constraints. Firstly, we focus on asking
for perfect phylogenies that are paths, which is
motivated by the discovery that yin-yang haplotypes
span large parts of the human genome. A yin-yang
haplotype implies that every corresponding perfect
phylogeny has to be a path. Secondly, we
assume that the number of missing entries in every
column of the input genotype matrix is bounded. We
show that the perfect path phylogeny haplotyping
problem is fixed-parameter tractable when we
consider the maximum number of missing entries per
column of the genotype matrix as parameter. The
restrictions we impose are met by a majority of the
problem instances encountered in publicly available
human genome data.
- Till Tantau:
Über strukturelle Gemeinsamkeiten der Aufzählbarkeitsklassen von Turingmaschinen und endlichen Automaten.
In Ausgezeichnete Informatikdissertationen 2003, Lecture Notes in Informatics, pp. 189-198.
Springer,
2004.
This article summarizes the dissertation On
Structural Similarities of Finite Automata and
Turing Machine Enumerability Classes.
- Richard Karp, Till Nierhoff, Till Tantau:
Optimal Flow Distribution Among Multiple Channels with Unknown Capacities.
In Proceedings of the 2nd Brazilian Symposium on Graphs, Algorithms and Combinatorics (GRACO 2005), Volume 19 of Electronic Notes in Discrete Mathematics, pp. 225-231.
Springer,
2005.
Consider a simple network flow problem in which
there are n channels directed from a source
to a sink. The channel capacities are unknown and we
wish to determine a feasible network flow of value
D. Flow problems with unknown capacities
arise naturally in numerous applications, including
inter-domain traffic routing in the Internet
[Chandrayana et al. 2004], bandwidth allocation for
sending files in peer-to-peer networks, and the
distribution of physical goods like newspapers among
different points of sale. We study protocols that
probe the network by attempting to send a flow of at
most D units through the network. If the flow
is not feasible, the protocol is told on which
channels the capacity was exceeded (binary feedback)
and possibly also how many units of flow were
successfully sent on these channels (throughput
feedback). For the latter, more informative, type of
feedback we present optimal protocols for minimizing
the number of rounds needed to find a feasible flow
and for minimizing the total amount of wasted
flow. For binary feedback, we show that one can
exploit the fact that network capacities are often
larger than the demand D: We present a
protocol for this situation that is asymptotically
optimal and finds a solution more quickly than the
generalized binary search protocol previously
proposed in the literature [Chandrayana et
al. 2004]. For the special case of two channels we
present a protocol that is optimal and outperforms
binary search.
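For intuition, the throughput-feedback model can be simulated with a deliberately naive round-based protocol (our sketch, not one of the paper's optimal protocols; `probe` stands in for the network oracle):

```python
# Naive probing protocol for throughput feedback (illustration only, not
# the paper's optimal protocol): split the remaining demand evenly over
# channels not yet known to be saturated and repeat.

def probe(flows, capacities):
    """Network oracle: per-channel throughput for the attempted flows."""
    return [min(f, c) for f, c in zip(flows, capacities)]

def distribute(D, n, capacities, max_rounds=100):
    """Find a feasible flow of value D (assuming one exists)."""
    assigned = [0.0] * n           # flow already known to go through
    open_ch = list(range(n))       # channels not yet observed saturated
    for _ in range(max_rounds):
        rest = D - sum(assigned)
        if rest <= 1e-9 or not open_ch:
            break
        attempt = assigned[:]
        for i in open_ch:
            attempt[i] += rest / len(open_ch)
        got = probe(attempt, capacities)
        open_ch = [i for i in open_ch if got[i] >= attempt[i] - 1e-9]
        assigned = got
    return assigned

flow = distribute(8.0, 3, [3.0, 1.0, 6.0])
print(flow)
```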
- Till Tantau:
Logspace Optimization Problems and Their Approximability Properties.
In Proceedings of the 15th International Symposium on Fundamentals of Computation Theory (FCT 2005), Volume 3623 of Lecture Notes in Computer Science, pp. 92-103.
Springer,
2005.
This paper introduces logspace optimization problems
as analogues of the well-studied polynomial-time
optimization problems. Similarly to them, logspace
optimization problems can have vastly different
approximation properties, even though the underlying
decision problems have the same computational
complexity. Natural problems, including the shortest
path problems for directed graphs, undirected
graphs, tournaments, and forests, exhibit such a
varying complexity. In order to study the
approximability of logspace optimization problems in
a systematic way, polynomial-time approximation
classes are transferred to logarithmic
space. Appropriate reductions are defined and
optimization problems are presented that are
complete for these classes. It is shown that under
the assumption $L \neq NL$, some logspace optimization
problems cannot be approximated with a constant
ratio; some can be approximated with a constant
ratio, but do not permit a logspace approximation
scheme; and some have a logspace approximation
scheme, but cannot be solved in logarithmic space. A
new natural NL-complete problem is presented that
has a logspace approximation scheme.
- Jens Gramm, Tzvika Hartman, Till Nierhoff, Roded Sharan, Till Tantau:
On the Complexity of SNP Block Partitioning Under the Perfect Phylogeny Model.
In Proceedings of Workshop on Algorithms in Bioinformatics (WABI), Volume 4175 of Lecture Notes in Computer Science, pp. 92-102.
Springer,
2006.
Recent technologies for typing single nucleotide
polymorphisms (SNPs) across a population are
producing genome-wide genotype data for tens of
thousands of SNP sites. The emergence of such large
data sets underscores the importance of algorithms
for large-scale haplotyping. Common haplotyping
approaches first partition the SNPs into blocks of
high linkage-disequilibrium, and then infer
haplotypes for each block separately. We investigate
an integrated haplotyping approach where a partition
of the SNPs into a minimum number of non-contiguous
subsets is sought, such that each subset can be
haplotyped under the perfect phylogeny model. We
show that finding an optimum partition is NP-hard
even if we are guaranteed that two subsets
suffice. On the positive side, we show that a
variant of the problem, in which each subset is
required to admit a perfect path phylogeny
haplotyping, is solvable in polynomial time.
- Edith Hemaspaandra, Lane A. Hemaspaandra, Till Tantau, Osamu Watanabe:
On the Complexity of Kings.
In Proceedings of FCT 2007, Volume 4639 of Lecture Notes in Computer Science, pp. 328--340.
Springer,
2007.
A k-king in a directed graph is a node from which
each node in the graph can be reached via paths of
length at most~k. Recently, kings have proven
useful in theoretical computer science, in
particular in the study of the complexity of
reachability problems and semifeasible sets. In
this paper, we study the complexity of recognizing
k-kings. For each succinctly specified family of
tournaments (completely oriented digraphs), the
k-king problem is easily seen to belong to
$\Pi_2^{\mathrm p}$. We prove that the complexity of
kingship problems is a rich enough vocabulary to
pinpoint every nontrivial many-one degree in
$\Pi_2^{\mathrm p}$. That is, we show that for every
$k \ge 2$ every set in $\Pi_2^{\mathrm p}$ other
than $\emptyset$ and $\Sigma^*$ is equivalent to a
k-king problem under $\leq_{\mathrm m}^{\mathrm
p}$-reductions. The equivalence can be instantiated
via a simple padding function. Our results can be
used to show that the radius problem for arbitrary
succinctly represented graphs is $\Sigma_3^{\mathrm
p}$-complete. In contrast, the diameter problem for
arbitrary succinctly represented graphs (or even
tournaments) is $\Pi_2^{\mathrm p}$-complete.
- Andreas Jakoby, Till Tantau:
Logspace Algorithms for Computing Shortest and Longest Paths in Series-Parallel Graphs.
In Proceedings of the 27th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2007), Volume 4855 of Lecture Notes in Computer Science, pp. 216-227.
Springer,
2007.
For many types of graphs, including directed acyclic graphs, undirected graphs, tournament
graphs, and graphs with bounded independence number, the shortest path problem is NL-complete.
The longest path problem is even NP-complete for many types of graphs, including undirected K5-minor-free graphs and even planar graphs. In the present paper we present logspace algorithms
for computing shortest and longest paths in series-parallel graphs where the edges can be directed
arbitrarily. The class of series-parallel graphs that we study can be characterized alternatively as the
class of K4-minor-free graphs and also as the class of graphs of tree-width 2. It is well-known that
for graphs of bounded tree-width many intractable problems can be solved efficiently, but previous
work was focused on finding algorithms with low parallel or sequential time complexity. In contrast,
our results concern the space complexity of shortest and longest path problems. In particular, our
results imply that for graphs of tree-width 2 these problems are L-complete.
- Bodo Manthey, Till Tantau:
Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise.
In Proceedings of MFCS 2008, Volume 5162 of Lecture Notes in Computer Science, pp. 467-478.
Springer,
2008.
Binary search trees are a fundamental data structure and their height plays a
key role in the analysis of divide-and-conquer algorithms like quicksort.
We analyze their smoothed height under additive uniform noise: An
adversary chooses a sequence of~$n$ real numbers in the range
$[0,1]$, each number is individually perturbed by adding a value
drawn uniformly at random from an interval of size~$d$, and the
resulting numbers are inserted into a search tree.
An analysis of the smoothed tree height in terms of $n$ and $d$ lies
at the heart of our paper: We prove that the smoothed height of binary search
trees is $\Theta (\sqrt{n/d} + \log n)$, where $d \ge 1/n$ may depend on~$n$.
Our analysis starts with the simpler problem of determining the
smoothed number of left-to-right maxima in a sequence.
We establish matching bounds, namely
once more $\Theta (\sqrt{n/d} + \log n)$. We also apply our findings
to the performance of the quicksort algorithm and prove that the
smoothed number of comparisons made by quicksort is
$\Theta(\frac{n}{d+1} \sqrt{n/d} + n \log n)$.
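The perturbation model is easy to simulate (our sketch; the scaling below, where the unit interval is blown up by a factor of $n$ so that noise of width $d$ becomes uniform noise from $[0, dn]$, is our choice of an equivalent formulation):

```python
# A quick experiment matching the smoothed model (illustration only): an
# adversarial sorted sequence is perturbed by uniform noise and inserted
# into an unbalanced binary search tree, whose height we measure. For
# d = 1 the Theta(sqrt(n/d) + log n) bound predicts a height far below
# the worst-case height n of the unperturbed sorted sequence.

import random

class Node:
    __slots__ = ("val", "left", "right")
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def bst_height(values):
    """Insert the values in the given order into an unbalanced binary
    search tree and return the resulting height."""
    root, height = None, 0
    for v in values:
        if root is None:
            root, height = Node(v), 1
            continue
        cur, depth = root, 1
        while True:
            nxt = cur.left if v < cur.val else cur.right
            if nxt is None:
                if v < cur.val:
                    cur.left = Node(v)
                else:
                    cur.right = Node(v)
                height = max(height, depth + 1)
                break
            cur, depth = nxt, depth + 1
    return height

random.seed(0)
n, d = 10_000, 1.0
adversarial = list(range(n))        # sorted input: worst case, height n
# Scaling by n turns values in [0,1] with noise width d into the model below.
smoothed = [x + random.uniform(0, d * n) for x in adversarial]
h = bst_height(smoothed)
print(h)   # on the order of sqrt(n/d) + log n, far below the worst case n
```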
- Max Bannach, Malte Skambath, Till Tantau:
On the Parallel Parameterized Complexity of MaxSAT Variants.
In Proceedings of the 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022), LIPIcs,
2022.
- Max Bannach, Zacharias Heinrich, Till Tantau, Rüdiger Reischuk:
Dynamic Kernels for Hitting Sets and Set Packing.
In Proceedings of the 16th International Symposium on Parameterized and Exact Computation (IPEC 2021), Volume 214 of LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik,
2021.
Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k^d (which is a polynomial kernel size as d is a constant), and they do so in time m · 2^d poly(d) for a small polynomial poly(d) (which is a linear runtime as d is again a constant).
We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k^d hitting set kernel in memory (including moments when no size-k hitting set exists). This paper presents a deterministic solution with worst-case time 3^d poly(d) for updating the kernel upon hyperedge inserts and time 5^d poly(d) for updates upon deletions. These bounds nearly match the time 2^d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a dynamic hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deterministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c^k) where c = d - 1 + O(1/d) equals the best base known for the static setting.
- Max Bannach, Malte Skambath, Till Tantau:
Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time.
In Proceedings of the 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020), LIPIcs,
2020.
We analyze a reduction rule for computing kernels for the hitting set problem: In a hypergraph, the link of a set c of vertices consists of all edges that are supersets of c. We call such a set critical if its link has certain easy-to-check size properties. The rule states that the link of a critical c can be replaced by c. It is known that a simple linear-time algorithm for computing hitting set kernels with at most k^d edges (k is the hitting set size, d is the maximum edge size) can be derived from this rule. We parallelize this algorithm and obtain the first AC^0 kernel algorithm that outputs polynomial-size kernels. Previously, such algorithms were not even known for artificial problems. An interesting application of our methods lies in traditional, non-parameterized approximation theory: Our results imply that uniform AC^0-circuits can compute a hitting set whose size is polynomial in the size of an optimal hitting set.
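The link notion and the replacement rule can be made concrete in a few lines (our sketch; the paper's exact size conditions for criticality are deliberately omitted, so the replacement below must only be applied to sets known to be critical):

```python
# Sketch of the reduction rule's ingredients (illustration only: the
# easy-to-check size conditions that make a set c "critical" are omitted
# here, so reduce_by must only be applied to sets known to be critical).

def link(edges, c):
    """All hyperedges that are supersets of the vertex set c."""
    return [e for e in edges if c <= e]

def reduce_by(edges, c):
    """Replace the link of c by c itself (the rule from the abstract)."""
    lnk = set(link(edges, c))
    return [e for e in edges if e not in lnk] + [c]

edges = [frozenset(e) for e in ({1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5})]
c = frozenset({1, 2})
print(link(edges, c))       # the three edges containing both 1 and 2
print(reduce_by(edges, c))  # those edges replaced by {1, 2} itself
```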
- Max Bannach, Till Tantau:
On the Descriptive Complexity of Color Coding.
In Proceedings of STACS 2019, LIPIcs,
2019.
Color coding is an algorithmic technique used in parameterized
complexity theory to detect ``small'' structures inside graphs. The
idea is to derandomize algorithms that first randomly
color a graph and then search for an easily-detectable, small color
pattern. We transfer color coding to the world of descriptive
complexity theory by characterizing -- purely in terms of the
syntactic structure of describing formulas -- when the powerful
second-order quantifiers representing a random coloring can be
replaced by equivalent, simple first-order formulas. Building on
this result, we identify syntactic properties of first-order
quantifiers that can be eliminated from formulas describing
parameterized problems. The result applies to many packing and
embedding problems, but also to the long path problem. Together with a
new result on the parameterized complexity of formula families
involving only a fixed number of variables, we get that many
problems lie in FPT just because of the way they are
commonly described using logical formulas.
- Max Bannach, Malte Skambath, Till Tantau:
Towards Work-Efficient Parallel Parameterized Algorithms.
In Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), Springer,
2019.
Parallel parameterized complexity theory studies how
fixed-parameter tractable (fpt) problems can be solved in
parallel. Previous theoretical work focused on parallel
algorithms that are very fast in principle, but did not take into
account that when we only have a small number of processors
(between 2 and, say, 1024), it is more important
that the parallel algorithms are work-efficient. In the
present paper we investigate how work-efficient fpt algorithms can
be designed. We review standard methods from fpt theory, like
kernelization, search trees, and interleaving, and prove
trade-offs for them between work efficiency and runtime
improvements. This results in a toolbox for
developing work-efficient parallel fpt algorithms.
- Max Bannach, Till Tantau:
Computing Hitting Set Kernels By AC^0-Circuits.
In Proceedings of the 35th International Symposium on Theoretical Aspects of Computer Science, LIPIcs,
2018.
Given a hypergraph $H = (V,E)$, what is the smallest subset $X
\subseteq V$ such that $e \cap X \neq \emptyset$ holds for all $e \in E$?
This problem, known as the \emph{hitting set problem,} is a
basic problem in parameterized complexity theory. There are well-known
kernelization algorithms for it, which get a hypergraph~$H$ and a
number~$k$ as input and output a hypergraph~$H'$ such that (1)
$H$ has a hitting set of size~$k$ if, and only if, $H'$ has such a
hitting set and (2) the size of $H'$ depends only on $k$
and on the maximum cardinality $d$ of edges in~$H$. The
algorithms run in polynomial time, but are highly
sequential. Recently, it has been shown that one of them can be parallelized
to a certain degree: one can compute hitting set kernels in parallel
time $O(d)$ -- but it was conjectured that this is
the best parallel algorithm possible. We
refute this conjecture and show how hitting set kernels can be
computed in \emph{constant} parallel time. For our proof, we
introduce a new, generalized notion of hypergraph sunflowers and
show how iterated applications of the color coding technique can
sometimes be collapsed into a single application.
- Max Bannach, Till Tantau:
Computing Kernels in Parallel: Lower and Upper Bounds.
In Proceedings of the 13th International Symposium on Parameterized and Exact Computation (IPEC 2018), LIPIcs,
2018.
Parallel fixed-parameter tractability studies how parameterized problems can be solved in parallel. A surprisingly large number of parameterized problems admit a high level of parallelization, but this does not mean that we can also efficiently compute small problem kernels in parallel: known kernelization algorithms are typically highly sequential. In the present paper, we establish a number of upper and lower bounds concerning the sizes of kernels that can be computed in parallel. An intriguing finding is that there are complex trade-offs between kernel size and the depth of the circuits needed to compute them: For the vertex cover problem, an exponential kernel can be computed by AC$^0$-circuits, a quadratic kernel by TC$^0$-circuits, and a linear kernel by randomized NC-circuits with derandomization being possible only if it is also possible for the matching problem. Other natural problems for which similar (but quantitatively different) effects can be observed include tree decomposition problems parameterized by the vertex cover number, the undirected feedback vertex set problem, the matching problem, or the point line cover problem. We also present natural problems for which computing kernels is inherently sequential.
- Till Tantau:
Applications of Algorithmic Metatheorems to Space Complexity and Parallelism (Invited Talk).
In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017), pp. 4:1--4:4.
DROPS,
2017.
Algorithmic metatheorems state that if a problem can be described in a certain logic and the inputs are structured in a certain way, then the problem can be solved with a certain amount of resources. As an example, by Courcelle's Theorem all monadic second-order ("in a certain logic") properties of graphs of bounded tree width ("structured in a certain way") can be solved in linear time ("with a certain amount of resources"). Such theorems have become a valuable tool in algorithmics: If a problem happens to have the right structure and can be described in the right logic, they immediately yield a (typically tight) upper bound on the time complexity of the problem. Perhaps even more importantly, several complex algorithms rely on algorithmic metatheorems internally to solve subproblems, which considerably broadens the range of applications of these theorems. The talk is intended as a gentle introduction to the ideas behind algorithmic metatheorems, especially behind some recent results concerning space classes and parallel computation, and tries to give a flavor of the range of applications.
- Max Bannach, Till Tantau:
Parallel Multivariate Meta-Theorems.
In Proceedings of the 11th International Symposium on Parameterized and Exact Computation (IPEC 2016), LIPIcs,
2016.
Fixed-parameter tractability is based on the observation that many hard problems become tractable even on large inputs as long as certain input parameters are small. Originally, ``tractable'' just meant ``solvable in polynomial time,'' but especially modern hardware raises the question of whether we can also achieve ``solvable in polylogarithmic parallel time.'' A framework for this study of \emph{parallel fixed-parameter tractability} is available and a number of isolated algorithmic results have been obtained in recent years, but one of the unifying core tools of classical FPT theory has been missing: algorithmic meta-theorems. We establish two such theorems by giving new upper bounds on the circuit depth necessary to solve the model checking problem for monadic second-order logic, once parameterized by the tree width and the formula (this is a parallel version of Courcelle's Theorem) and once by the tree depth and the formula. For our proofs we refine the analysis of earlier algorithms, especially of Bodlaender's, but also need to add new ideas, especially in the context where the parallel runtime is bounded by a function of the parameter and does not depend on the length of the input.
- Malte Skambath, Till Tantau:
Offline Drawing of Dynamic Trees: Algorithmics and Document Integration.
In GD 2016: Graph Drawing and Network Visualization, Volume 9801 of LNCS, pp. 572-586.
Springer,
2016.
While the algorithmic drawing of static trees is well-understood and well-supported by software tools, creating animations depicting how a tree changes over time is currently difficult: software support, if available at all, is not integrated into a document production workflow and algorithmic approaches only rarely take temporal information into consideration. During the production of a presentation or a paper, most users will visualize how, say, a search tree evolves over time by manually drawing a sequence of trees. We present an extension of the popular LaTeX typesetting system that allows users to specify dynamic trees inside their documents, together with a new algorithm for drawing them. Running LaTeX on the documents then results in documents in the SVG format with visually pleasing embedded animations. Our algorithm produces animations that satisfy a set of natural aesthetic criteria when possible. On the negative side, we show that one cannot always satisfy all criteria simultaneously and that minimizing their violations is NP-complete.
- Max Bannach, Christoph Stockhusen, Till Tantau:
Fast Parallel Fixed-Parameter Algorithms via Color Coding.
In Proceedings of the 10th International Symposium on Parameterized and Exact Computation (IPEC 2015), LIPIcs,
2015.
Go to website | Show PDF | Show abstract
Fixed-parameter algorithms have been successfully applied to solve numerous difficult problems within acceptable time bounds on large inputs. However, most fixed-parameter algorithms are inherently \emph{sequential} and, thus, make no use of the parallel hardware present in modern computers. We show that parallel fixed-parameter algorithms not only exist for numerous parameterized problems from the literature -- including vertex cover, packing problems, cluster editing, cutting vertices, finding embeddings, or finding matchings -- but that there are parallel algorithms working in \emph{constant} time or at least in time \emph{depending only on the parameter} (and not on the size of the input) for these problems. Phrased in terms of complexity classes, we place numerous natural parameterized problems in parameterized versions of AC$^0$. On a more technical level, we show how the \emph{color coding} method can be implemented in constant time and apply it to embedding problems for graphs of bounded tree-width or tree-depth and to model checking first-order formulas in graphs of bounded degree.
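To make the color coding method mentioned in the abstract concrete, here is a minimal sequential sketch for detecting a simple path on k vertices: color the vertices randomly with k colors and search for a "colorful" path by dynamic programming over color sets. This is illustration only; the paper's contribution is a constant-time parallel implementation, which this sketch does not attempt.

```python
import random

def has_colorful_path(adj, coloring, k):
    """Dynamic program: after round i, dp[v] holds all color sets S of
    size i+1 such that some path ending at v uses exactly the colors
    in S. A colorful path is automatically simple."""
    n = len(adj)
    dp = [{frozenset([coloring[v]])} for v in range(n)]
    for _ in range(k - 1):
        new_dp = [set() for _ in range(n)]
        for u in range(n):
            for v in adj[u]:
                for S in dp[u]:
                    if coloring[v] not in S:
                        new_dp[v].add(S | {coloring[v]})
        dp = new_dp
    return any(dp[v] for v in range(n))

def find_k_path(adj, k, trials=100):
    """Color coding: under a random k-coloring, a fixed k-vertex path
    becomes colorful with probability k!/k^k, so repeating the test
    boosts the success probability."""
    n = len(adj)
    for _ in range(trials):
        coloring = [random.randrange(k) for _ in range(n)]
        if has_colorful_path(adj, coloring, k):
            return True
    return False
```

For example, `find_k_path([[1], [0, 2], [1, 3], [2]], 3)` searches the path graph on four vertices for a simple path on three vertices.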
- Till Tantau:
Existential Second-order Logic over Graphs: A Complete Complexity-theoretic Classification.
In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015), Volume 30 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 703-715.
Schloss Dagstuhl-Leibniz-Zentrum für Informatik,
2015.
Go to website | Show abstract
Descriptive complexity theory aims at inferring a problem's computational complexity from the syntactic complexity of its description. A cornerstone of this theory is Fagin's Theorem, by which a property is expressible in existential second-order logic (ESO logic) if, and only if, it is in NP. A natural question, from the theory's point of view, is which syntactic fragments of ESO logic also still characterize NP. Research on this question has culminated in a dichotomy result by Gottlob, Kolaitis, and Schwentick: for each possible quantifier prefix of an ESO formula, the resulting prefix class over graphs either contains an NP-complete problem or is contained in P. However, the exact complexity of the prefix classes inside P remained elusive. In the present paper, we clear up the picture by showing that for each prefix class of ESO logic, its reduction closure under first-order reductions is either FO, L, NL, or NP. For undirected self-loop-free graphs two containment results are especially challenging to prove: containment in L for the prefix \exists R_1\cdots \exists R_n \forall x \exists y and containment in FO for the prefix \exists M \forall x \exists y for monadic M. The complex argument by Gottlob et al. concerning polynomial time needs to be carefully reexamined and either combined with the logspace version of Courcelle's Theorem or directly improved to first-order computations. A different challenge is posed by formulas with the prefix \exists M \forall x\forall y, which we show to express special constraint satisfaction problems that lie in L.
- Christoph Stockhusen, Till Tantau:
Completeness Results for Parameterized Space Classes.
In 8th International Symposium on Parameterized and Exact Computation (IPEC 2013), Lecture Notes in Computer Science, Springer,
2013 (to appear).
Show abstract
The parameterized complexity of a problem is generally considered
``settled'' once it has been shown to lie in FPT or to be complete
for a class in the W-hierarchy or a similar parameterized hierarchy.
Several natural parameterized problems have, however, resisted such
a classification. At least in some cases, the reason is that upper
and lower bounds for their parameterized \emph{space} complexity
have recently been obtained that rule out completeness results for
parameterized \emph{time} classes. In this paper, we make progress
in this direction by proving that the associative generability
problem and the longest common subsequence problem are complete for
parameterized space classes. These classes are defined in terms of
different forms of bounded nondeterminism and in terms of
simultaneous time--space bounds. As a technical tool we introduce a
``union operation'' that translates between problems complete for
classical complexity classes and for W-classes.
- Till Tantau:
Graph Drawing in TikZ.
In Proceedings of Graph Drawing 2012, Volume 7704 of Lecture Notes in Computer Science, pp. 517-528.
Springer,
2013.
Show abstract
At the heart of every good graph drawing algorithm lies an efficient procedure for assigning canvas positions to a graph's nodes and the bend points of its edges. However, every real-world implementation of such an algorithm must address numerous problems that have little to do with the actual algorithm, like handling input and output formats, formatting node texts, and styling nodes and edges. We present a new framework, implemented in the Lua programming language and integrated into the TikZ graphics description language, that aims at simplifying the implementation of graph drawing algorithms. Implementers using the framework can focus on the core algorithmic ideas and will automatically profit from the framework's pre- and post-processing steps as well as from the extensive capabilities of the TikZ graphics language and the TeX typesetting engine. Algorithms already implemented using the framework include the Reingold-Tilford tree drawing algorithm, a modular version of Sugiyama's layered algorithm, and several force-based multilevel algorithms.
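The user-facing syntax built on top of this framework is compact; a minimal sketch of a document that invokes one of the shipped layouts (this assumes a current pgf installation with the graph drawing libraries and must be compiled with LuaLaTeX, since the layout engine runs on Lua):

```latex
% Compile with lualatex: the graph drawing engine is written in Lua.
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{graphs, graphdrawing}
\usegdlibrary{trees}
\begin{document}
% Reingold-Tilford tree layout, one of the algorithms
% implemented in the framework:
\tikz \graph [tree layout] { root -> { left -> {a, b}, right } };
\end{document}
```

Positions for all nodes and edges are computed by the layout algorithm; the user only specifies the graph's structure.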
- Michael Elberfeld, Andreas Jakoby, Till Tantau:
Algorithmic Meta Theorems for Circuit Classes of Constant and Logarithmic Depth.
In Proceedings of the 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012), Volume 14 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 66-77.
Schloss Dagstuhl-Leibniz-Zentrum für Informatik,
2012.
Show PDF | Go to website | Show abstract
An algorithmic meta theorem for a logic and a class C of structures states that all problems expressible in this logic can be solved efficiently for inputs from C. The prime example is Courcelle's Theorem, which states that monadic second-order (mso) definable problems are linear-time solvable on graphs of bounded tree width. We contribute new algorithmic meta theorems, which state that mso-definable problems are (a) solvable by uniform constant-depth circuit families (AC0 for decision problems and TC0 for counting problems) when restricted to input structures of bounded tree depth and (b) solvable by uniform logarithmic-depth circuit families (NC1 for decision problems and #NC1 for counting problems) when a tree decomposition of bounded width in term representation is part of the input. Applications of our theorems include a TC0-completeness proof for the unary version of integer linear programming with a fixed number of equations and extensions of a recent result that counting the number of accepting paths of a visible pushdown automaton lies in #NC1. Our main technical contributions are a new tree automata model for unordered, unranked, labeled trees; a method for representing the tree automata's computations algebraically using convolution circuits; and a lemma on computing balanced width-3 tree decompositions of trees in TC0, which encapsulates most of the technical difficulties surrounding earlier results connecting tree automata and NC1.
- Michael Elberfeld, Christoph Stockhusen, Till Tantau:
On the Space Complexity of Parameterized Problems.
In Proceedings of the 7th International Symposium on Parameterized and Exact Computation (IPEC 2012), Volume 7535 of Lecture Notes in Computer Science, pp. 206-217.
Springer,
2012.
Go to website | Show abstract
Parameterized complexity theory measures the complexity of computational problems predominantly in terms of their parameterized time complexity. The purpose of the present paper is to demonstrate that the study of parameterized space complexity can give new insights into the complexity of well-studied parameterized problems like the feedback vertex set problem. We show that the undirected and the directed feedback vertex set problems have different parameterized space complexities, unless L = NL, which explains why the two problem variants seem to necessitate different algorithmic approaches even though their parameterized time complexity is the same. For a number of further natural parameterized problems, including the longest common subsequence problem and the acceptance problem for multi-head automata, we show that they lie in or are complete for different parameterized space classes, which explains why previous attempts at proving completeness of these problems for parameterized time classes have failed.
- Michael Elberfeld, Martin Grohe, Till Tantau:
Where First-Order and Monadic Second-Order Logic Coincide.
In Proceedings of the 27th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2012), pp. 265-274.
IEEE Computer Society,
2012.
Show PDF | Go to website | Show abstract
We study on which classes of graphs first-order logic (FO) and monadic second-order logic (MSO) have the same expressive power. We show that for each class of graphs that is closed under taking subgraphs, FO and MSO have the same expressive power on the class if, and only if, it has bounded tree depth. Tree depth is a graph invariant that measures the similarity of a graph to a star in a similar way that tree width measures the similarity of a graph to a tree. For classes just closed under taking induced subgraphs, we show an analogous result for guarded second-order logic (GSO), the variant of MSO that not only allows quantification over vertex sets but also over edge sets. A key tool in our proof is a Feferman-Vaught-type theorem that is constructive and still works for unbounded partitions.
- Michael Elberfeld, Andreas Jakoby, Till Tantau:
Logspace Versions of the Theorems of Bodlaender and Courcelle.
In Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), pp. 143-152.
IEEE Computer Society,
2010.
Go to website | Show abstract
Bodlaender's Theorem states that for every k there is a linear-time algorithm that decides whether an input graph has tree width k and, if so, computes a width-k tree decomposition. Courcelle's Theorem builds on Bodlaender's Theorem and states that for every monadic second-order formula φ and for
every k there is a linear-time algorithm that decides whether a given logical structure A of tree width at most k satisfies φ. We prove that both theorems still hold when ``linear time'' is replaced by ``logarithmic space.'' The transfer of the powerful theoretical framework of monadic second-order logic and bounded tree width to logarithmic space allows us to settle a number of both old and recent open problems in the logspace world.
- Michael Elberfeld, Till Tantau:
Phylogeny- and Parsimony-Based Haplotype Inference with Constraints.
In Proceedings of the 21st Annual Symposium on Combinatorial Pattern Matching (CPM 2010), Volume 6129 of Lecture Notes in Computer Science, pp. 177-189.
Springer,
2010.
Go to website | Show abstract
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast computational haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. In their CPM 2009 paper, Fellows et al. studied an extension of this approach that incorporates prior knowledge in the form of a set of candidate haplotypes from which the right haplotypes must be chosen. While this approach may help to increase the accuracy of haplotyping methods, it was conjectured that the resulting formal problem constrained perfect phylogeny haplotyping might be NP-complete. In the present paper we present a polynomial-time algorithm for it. Our algorithmic ideas also yield new fixed-parameter algorithms for related haplotyping problems based on the maximum parsimony assumption.
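The perfect-phylogeny model underlying this line of work rests on a classical combinatorial characterization for binary character matrices: a perfect phylogeny exists if, and only if, no two columns together exhibit all four gametes 00, 01, 10, 11. The following sketch implements this standard four-gamete test as background; it is not the constrained haplotyping algorithm of the paper.

```python
from itertools import combinations

def admits_perfect_phylogeny(matrix):
    """Four-gamete test: a binary matrix (rows = taxa, columns =
    characters) admits an undirected perfect phylogeny iff no pair of
    columns contains all four value pairs 00, 01, 10, 11."""
    cols = list(zip(*matrix))
    for c1, c2 in combinations(cols, 2):
        if len(set(zip(c1, c2))) == 4:
            return False
    return True
```

For instance, the matrix with rows 00, 10, 11 passes the test, while adding the row 01 makes its two columns exhibit all four gametes and rules out a perfect phylogeny.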
- Michael Elberfeld, Ilka Schnoor, Till Tantau:
Influence of Tree Topology Restrictions on the Complexity of Haplotyping with Missing Data.
In Proceedings of the 6th Annual Conference on Theory and Applications of Models of Computation (TAMC 2009), Volume 5532 of Lecture Notes in Computer Science, pp. 201-210.
Springer,
2009.
Show PDF | Go to website | Show abstract
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes from genotype data. One fast haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. Unfortunately, when data entries are missing as is often the case in laboratory data, the resulting incomplete perfect phylogeny haplotyping problem IPPH is NP-complete and no theoretical results are known concerning its approximability, fixed-parameter tractability, or exact algorithms for it. Even radically simplified versions, such as the restriction to phylogenetic trees consisting of just two directed paths from a given root, are still NP-complete; but here a fixed-parameter algorithm is known. We show that such drastic and ad hoc simplifications are not necessary to make IPPH fixed-parameter tractable: We present the first theoretical analysis of an algorithm, which we develop in the course of the paper, that works for arbitrary instances of IPPH. On the negative side we show that restricting the topology of perfect phylogenies does not always reduce the computational complexity: while the incomplete directed perfect phylogeny problem is well-known to be solvable in polynomial time, we show that the same problem restricted to path topologies is NP-complete.
- Michael Elberfeld, Till Tantau:
Computational Complexity of Perfect-Phylogeny-Related Haplotyping Problems.
In Proceedings of the 33rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2008), Volume 5162 of Lecture Notes in Computer Science, pp. 299-310.
Springer,
2008.
Go to website | Show abstract
Haplotyping, also known as haplotype phase prediction, is the
problem of predicting likely haplotypes based on genotype data. This
problem, which has strong practical applications, can be approached
using both statistical as well as combinatorial methods. While the
most direct combinatorial approach, maximum
parsimony, leads to NP-complete problems, the perfect phylogeny
model proposed by Gusfield yields a problem, called PPH, that can be
solved in polynomial (even linear) time. Even this may
not be fast enough when the whole genome is studied, leading to the
question of whether parallel algorithms can be used to solve the PPH
problem. In the present paper we answer this question affirmatively,
but we also give lower bounds on its complexity. In
detail, we show that the problem lies in Mod$_2$L, a subclass of the
circuit complexity class NC$^2$, and is hard for
logarithmic space and thus presumably not in NC$^1$. We also
investigate variants of the PPH problem that have been studied in
the literature, like the perfect path phylogeny haplotyping problem
and the combined problem where a perfect phylogeny of maximal parsimony
is sought, and show that some of these variants are TC$^0$-complete or
lie in AC$^0$.
- Till Tantau:
A Logspace Approximation Scheme for the Shortest Path Problem for Graphs with Bounded Independence Number.
In Proceedings of the 21st International Symposium on Theoretical Aspects of Computer Science (STACS 2004), Volume 2996 of Lecture Notes in Computer Science, pp. 326-337.
Springer,
2004.
Show PDF | Show abstract
How difficult is it to find a path between two
vertices in finite directed graphs whose
independence number is bounded by some constant
k? The independence number of a graph is the
largest number of vertices that can be picked such
that there is no edge between any two of them. The
complexity of this problem depends on the exact
question we ask: Do we only wish to tell whether a
path exists? Do we also wish to construct such a
path? Are we required to construct the shortest
path? Concerning the first question, it is known
that the reachability problem is first-order
definable for all k. In contrast, the
corresponding reachability problems for many other
types of finite graphs, including dags and trees,
are not first-order definable. Concerning the second
question, in this paper it is shown that we can not
only construct paths in logarithmic space, but that
there even exists a logspace approximation scheme
for constructing them. In contrast, for directed
graphs, undirected graphs, and dags we cannot
construct paths in logarithmic space (let alone
approximate the shortest one), unless complexity
class collapses occur. Concerning the third
question, it is shown that even telling whether the
shortest path has a certain length is NL-complete
and thus as difficult as for arbitrary directed
graphs.