- Max Bannach, Zacharias Heinrich, Till Tantau, Rüdiger Reischuk:
Dynamic Kernels for Hitting Sets and Set Packing.
In Proceedings of the 16th International Symposium on Parameterized and Exact Computation (IPEC 2021), Volume 214 of LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik,
2021.
Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k^d (which is a polynomial kernel size as d is a constant), and they do so in time m · 2^d poly(d) for a small polynomial poly(d) (which is a linear runtime as d is again a constant).
We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k^d hitting set kernel in memory (including moments when no size-k hitting set exists). This paper presents a deterministic solution with worst-case time 3^d poly(d) for updating the kernel upon hyperedge inserts and time 5^d poly(d) for updates upon deletions. These bounds nearly match the time 2^d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a dynamic hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deterministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c^k) where c = d - 1 + O(1/d) equals the best base known for the static setting.
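To make the maintained object concrete: once a size-k^d kernel is available, a query can be answered by running a search-tree algorithm on the kernel alone. The following Python sketch (illustrative, not from the paper) shows the classic d^k branching for d-hitting set; the paper's query procedure improves the base to c = d - 1 + O(1/d).

    from typing import FrozenSet, List, Optional, Set

    def find_hitting_set(edges: List[FrozenSet[int]], k: int,
                         chosen: Set[int] = None) -> Optional[Set[int]]:
        """Classic search tree for hitting set: pick an edge that is not
        yet hit and branch on which of its (at most d) vertices joins
        the solution, giving O(d^k) branches."""
        if chosen is None:
            chosen = set()
        unhit = next((e for e in edges if not (e & chosen)), None)
        if unhit is None:
            return set(chosen)      # every hyperedge is already hit
        if k == 0:
            return None             # budget exhausted, but edges remain
        for v in unhit:             # branch on the vertices of one edge
            chosen.add(v)
            solution = find_hitting_set(edges, k - 1, chosen)
            chosen.remove(v)
            if solution is not None:
                return solution
        return None

For example, find_hitting_set([frozenset({1, 2}), frozenset({2, 3})], 1) returns {2}.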
- Max Bannach, Till Tantau:
On the Descriptive Complexity of Color Coding.
MDPI Algorithms, 2021.
Special Issue: Parameterized Complexity and Algorithms for Nonclassical Logics
Color coding is an algorithmic technique used in parameterized complexity theory to detect “small” structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing—purely in terms of the syntactic structure of describing formulas—when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas.
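For readers unfamiliar with the technique itself, the following Python sketch (illustrative, not from the paper) shows the randomized version of color coding for detecting a simple path on k vertices; it is exactly this random coloring step that the paper replaces by first-order formulas.

    import random

    def has_k_path(adj, k, trials=500):
        """Color coding for the k-vertex path problem, where adj maps
        each vertex to the set of its neighbors: color every vertex with
        one of k colors uniformly at random; a "colorful" path (all
        colors distinct) is then found by dynamic programming over color
        sets.  A fixed k-path is colorful with probability k!/k^k, so
        the number of trials should grow like e^k for a fixed error
        bound."""
        vertices = list(adj)
        for _ in range(trials):
            color = {v: random.randrange(k) for v in vertices}
            # reach[v] = color sets of colorful paths ending in v
            reach = {v: {frozenset([color[v]])} for v in vertices}
            for _ in range(k - 1):
                nxt = {v: set() for v in vertices}
                for v in vertices:
                    for s in reach[v]:
                        for w in adj[v]:
                            if color[w] not in s:
                                nxt[w].add(s | {color[w]})
                reach = nxt
            if any(reach[v] for v in vertices):   # some colorful k-path
                return True
        return False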
- Marcel Wienöbst, Max Bannach, Maciej Liskiewicz:
Extendability of Causal Graphical Models: Algorithms and Computational Complexity.
In Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI 2021), pp. 1248-1257.
PMLR,
2021.
- Marcel Wienöbst, Max Bannach, Maciej Liskiewicz:
Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent DAGs.
In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI'21), pp. 12198-12206.
AAAI Press,
2021.
Counting and sampling directed acyclic graphs
from a Markov equivalence class are fundamental tasks in graphical causal
analysis. In this paper, we show that these tasks can be performed in
polynomial time, solving a long-standing open problem in this
area. Our algorithms are effective and easily implementable.
Experimental results show that the algorithms significantly
outperform state-of-the-art methods.
- Marcel Wienöbst, Max Bannach, Maciej Liskiewicz:
Recent Advances in Counting and Sampling Markov Equivalent DAGs.
In Proceedings of the 44th German Conference on AI (KI 2021), Volume 12873 of Lecture Notes in Computer Science, pp. 271-275.
Springer,
2021.
Counting and sampling directed acyclic graphs (DAGs) from a Markov equivalence class are fundamental tasks in graphical causal analysis. In this paper, we discuss recently proposed polynomial-time algorithms for these tasks. The presented result solves a long-standing open problem in graphical modelling. Experiments show that the proposed algorithms are implementable and effective in practice. Our paper is an extended abstract of the work [24], honored as an AAAI-21 Distinguished Paper at the 35th AAAI Conference on Artificial Intelligence.
- Max Bannach, Till Tantau:
Computing Hitting Set Kernels By AC^0-Circuits.
Theory of Computing Systems, 64(3):374-399, 2020.
Given a hypergraph $H = (V,E)$, what is the smallest subset $X
\subseteq V$ such that $e \cap X \neq \emptyset$ holds for all $e \in E$?
This problem, known as the \emph{hitting set problem,} is a
basic problem in parameterized complexity theory. There are well-known
kernelization algorithms for it, which get a hypergraph~$H$ and a
number~$k$ as input and output a hypergraph~$H'$ such that (1)
$H$ has a hitting set of size~$k$ if, and only if, $H'$ has such a
hitting set and (2) the size of $H'$ depends only on $k$
and on the maximum cardinality $d$ of edges in~$H$. The
algorithms run in polynomial time and can be parallelized
to a certain degree: one can compute hitting set kernels in parallel
time $O(d)$ -- but it was conjectured that this is
the best parallel algorithm possible. We
refute this conjecture and show how hitting set kernels can be
computed in \emph{constant} parallel time. For our proof, we
introduce a new, generalized notion of hypergraph sunflowers and
show how iterated applications of the color coding technique can
sometimes be collapsed into a single application.
- Max Bannach, Malte Skambath, Till Tantau:
Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time.
In Proceedings of the 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020), LIPIcs,
2020.
We analyze a reduction rule for computing kernels for the hitting set problem: In a hypergraph, the link of a set c of vertices consists of all edges that are supersets of c. We call such a set critical if its link has certain easy-to-check size properties. The rule states that the link of a critical c can be replaced by c. It is known that a simple linear-time algorithm for computing hitting set kernels with at most k^d edges (where k is the hitting set size and d is the maximum edge size) can be derived from this rule. We parallelize this algorithm and obtain the first AC^0 kernel algorithm that outputs polynomial-size kernels. Previously, such algorithms were not even known for artificial problems. An interesting application of our methods lies in traditional, non-parameterized approximation theory: Our results imply that uniform AC^0-circuits can compute a hitting set whose size is polynomial in the size of an optimal hitting set.
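A minimal Python sketch of this reduction rule, under the assumption that "critical" means the link of c contains more than k^(d-|c|) edges; the paper's exact criticality test and its linear-time implementation are more refined.

    from itertools import combinations

    def kernelize(edges, k, d):
        """Replace the link of every critical set by the set itself,
        processing core sizes from d-1 down to 0.  Afterwards at most
        k^d edges remain; an empty edge signals a negative instance."""
        E = {frozenset(e) for e in edges}
        for i in range(d - 1, -1, -1):        # core size, large to small
            cores = {frozenset(c)
                     for e in E for c in combinations(sorted(e), i)}
            for c in cores:
                link = {e for e in E if c <= e}
                if len(link) > k ** (d - i):  # c is critical
                    E -= link                 # drop the whole link ...
                    E.add(c)                  # ... and keep only c
        return [set(e) for e in E]

Processing large cores first matters: once every superset of size i+1 has a small link, a critical set c of size i can no longer be avoided by a size-k hitting set, which is what makes replacing its link by c safe.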
- Max Bannach, Sebastian Berndt, Martin Schuster, Marcel Wienöbst:
PACE Solver Description: PID*.
In Proceedings of the 15th International Symposium on Parameterized and Exact Computation (IPEC 2020), LIPIcs,
2020.
This document provides a short overview of our treedepth solver PID^{*} in the version that we submitted to the exact track of the PACE challenge 2020. The solver relies on the positive-instance driven dynamic programming (PID) paradigm that was discovered in light of earlier iterations of the PACE in the context of treewidth. It was recently shown that PID can be used to solve a general class of vertex pursuit-evasion games, which include the game-theoretic characterization of treedepth. Our solver PID^{*} is built on top of this characterization.
- Max Bannach, Sebastian Berndt, Martin Schuster, Marcel Wienöbst:
PACE Solver Description: Fluid.
In Proceedings of the 15th International Symposium on Parameterized and Exact Computation (IPEC 2020), LIPIcs,
2020.
This document describes the heuristic for computing treedepth decompositions of undirected graphs used by our solver Fluid. The heuristic runs four different strategies to find a solution and finally outputs the best solution obtained by any of them. Two strategies are score-based and iteratively remove the vertex with the best score. The other two strategies iteratively search for vertex separators and remove them. We also present implementation strategies and data structures that significantly improve the running time and might be interesting on their own.
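As an illustration of the score-based pattern (a hedged sketch only; the actual scores and data structures used by Fluid differ), a treedepth decomposition can be obtained by repeatedly rooting each connected component at the vertex with the best score, here simply its degree, and recursing.

    def treedepth_heuristic(adj):
        """Score-based elimination heuristic; adj maps each vertex to
        the set of its neighbors.  In every component, the vertex with
        the highest degree inside the component becomes the root and is
        removed; the parts that fall apart are handled recursively.
        Returns the achieved depth and the parent relation of the
        treedepth decomposition."""
        parent = {}

        def components(vertices):
            seen, comps = set(), []
            for s in vertices:
                if s in seen:
                    continue
                comp, stack = set(), [s]
                seen.add(s)
                while stack:
                    v = stack.pop()
                    comp.add(v)
                    for w in adj[v]:
                        if w in vertices and w not in seen:
                            seen.add(w)
                            stack.append(w)
                comps.append(comp)
            return comps

        def solve(vertices, root):
            depth = 0
            for comp in components(vertices):
                best = max(comp, key=lambda v: len(adj[v] & comp))
                parent[best] = root
                depth = max(depth, 1 + solve(comp - {best}, best))
            return depth

        return solve(set(adj), None), parent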
- Max Bannach, Sebastian Berndt, Marten Maack, Matthias Mnich, Alexandra Lassota, Malin Rau, Malte Skambath:
Solving Packing Problems with Few Small Items Using Rainbow Matchings.
In Proceedings of the 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020), LIPIcs,
2020.
Packing problems are an important area of combinatorial optimization. While fundamental problems like bin packing, multiple knapsack, and bin covering have been studied intensively from the viewpoint of approximation algorithms, the use of parameterized techniques for these problems is rather limited. For instances containing no "small" items, known matching algorithms yield exact polynomial-time algorithms. Following the distance-from-triviality approach, our parameter k is the distance from such easy instances, i.e., the number of small items.
Our main results are randomized fixed-parameter tractable algorithms for vector-versions of bin packing, multiple knapsack, and bin covering parameterized by k, which run in time 4^k * k! * n^O(1) with one-sided error. To achieve this, we introduce a colored matching problem to which we reduce all these packing problems. The colored matching problem is natural in itself and we expect it to be useful for other applications as well. We also present a deterministic parameterized algorithm for bin packing with run time O( (k!)^2 * k * 2^k + n log(n) ).
- Tom Hartmann, Max Bannach, Martin Middendorf, Peter F. Stadler, Nicolas Wieseke, Marc Hellmuth:
Complete Edge-Colored Permutation Graphs.
Technical Report,
2020.
We introduce the concept of complete edge-colored permutation graphs as complete graphs that are the edge-disjoint union of "classical" permutation graphs. We show that a graph G = (V,E) is a complete edge-colored permutation graph if and only if each monochromatic subgraph of G is a "classical" permutation graph and G does not contain a triangle with 3 different colors. Using the modular decomposition as a framework, we demonstrate that complete edge-colored permutation graphs are characterized in terms of their strong prime modules, which also induce complete edge-colored permutation graphs. This leads to an O(|V|^2)-time recognition algorithm. We show, moreover, that complete edge-colored permutation graphs form a superclass of so-called symbolic ultrametrics and that the coloring of such graphs is always a Gallai coloring.
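The triangle condition of this characterization is straightforward to test; a small Python sketch follows (the permutation-graph test for each color class is a separate, more involved step and is omitted here).

    from itertools import combinations

    def has_rainbow_triangle(vertices, color):
        """Tests whether a complete edge-colored graph contains a
        triangle whose three edges carry three pairwise different
        colors; color[frozenset((u, v))] is the color of edge {u, v}."""
        for u, v, w in combinations(vertices, 3):
            a = color[frozenset((u, v))]
            b = color[frozenset((v, w))]
            c = color[frozenset((u, w))]
            if a != b and b != c and a != c:
                return True
        return False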
- Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, Till Tantau:
Dynamic Kernels for Hitting Sets and Set Packing.
Technical Report,
Electronic Colloquium on Computational Complexity,
2019.
Computing kernels for the hitting set problem (the problem of
finding a size-$k$ set that intersects each hyperedge of a
hypergraph) is a well-studied computational problem. For hypergraphs
with $m$ hyperedges, each of size at most~$d$, the best algorithms
can compute kernels of size $O(k^d)$ in time $O(2^d m)$. In this
paper we generalize the task to the \emph{dynamic} setting where
hyperedges may be continuously added and deleted and we always have
to keep track of a hitting set kernel (including moments when no
size-$k$ hitting set exists). We present a deterministic solution,
based on a novel data structure, that needs worst-case time
$O^*(3^d)$ for updating the kernel upon hyperedge inserts and
time~$O^*(5^d)$ for updates upon deletions -- thus nearly matching
the time $O^*(2^d)$ needed by the best static algorithm per
hyperedge. As a novel technical feature, our approach does not use
the standard replace-sunflowers-by-their-cores methodology, but
introduces a generalized concept that is actually easier to compute
and that allows us to achieve a kernel size of $\sum_{i=0}^d k^i$
rather than the typical size $d!\cdot k^d$ resulting from the Sunflower
Lemma. We also show that our approach extends to the dual problem of
finding packings in hypergraphs (the problem of finding $k$ pairwise
disjoint hyperedges), albeit with a slightly larger kernel size of
$\sum_{i=0}^d d^i(k-1)^i$.
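To put the two kernel bounds in perspective: the new bound is the geometric series $\sum_{i=0}^d k^i = \frac{k^{d+1}-1}{k-1} < 2k^d$ (for $k \geq 2$), while the sunflower-based bound carries the extra factor $d!$; already for $d = 3$ and $k = 10$ this is $1111$ versus $3! \cdot 10^3 = 6000$ edges.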
- Max Bannach, Till Tantau:
On the Descriptive Complexity of Color Coding.
In Proceedings of STACS 2019, LIPIcs,
2019.
Color coding is an algorithmic technique used in parameterized
complexity theory to detect ``small'' structures inside graphs. The
idea is to derandomize algorithms that first randomly
color a graph and then search for an easily-detectable, small color
pattern. We transfer color coding to the world of descriptive
complexity theory by characterizing -- purely in terms of the
syntactic structure of describing formulas -- when the powerful
second-order quantifiers representing a random coloring can be
replaced by equivalent, simple first-order formulas. Building on
this result, we identify syntactic properties of first-order
quantifiers that can be eliminated from formulas describing
parameterized problems. The result applies to many packing and
embedding problems, but also to the long path problem. Together with a
new result on the parameterized complexity of formula families
involving only a fixed number of variables, we get that many
problems lie in FPT just because of the way they are
commonly described using logical formulas.
- Max Bannach:
Parallel Parameterized Algorithms.
Dissertation, Universität zu Lübeck, Institut für Theoretische Informatik,
2019.
Reviewers: Till Tantau, Heribert Vollmer.
Parameterized algorithmics is a key area of modern algorithm design and complexity theory. The resource studied almost exclusively in this field is sequential time, even though parallel processing is a central and well-studied subfield of classical algorithmics. Identifying a suitable parameter and speeding up a computation directly through parallelization pursue the same goal: to solve as many instances as possible of a problem that is, in general, not efficiently solvable. It is therefore natural to combine both fields of research, and exactly this kind of integration is the goal of this thesis.
I present a variety of parameterized complexity classes and develop a collection of parallel parameterized basic algorithms. In doing so, nearly all techniques that parameterized complexity theory has to offer are studied from a parallel point of view, among them color coding, bounded search trees, kernelization, structural graph decompositions, and algorithmic meta-theorems.
Furthermore, I illustrate how the last two techniques can be used in practice by presenting two software libraries: one for computing optimal tree decompositions and one for model checking a fragment of monadic second-order logic.
- Max Bannach:
Parallele Parametrisierte Algorithmen.
In Ausgezeichnete Informatikdissertationen 2019, Volume D-20 of LNI, pp. 29-38.
GI,
2019.
Parameterized algorithmics is a key area of modern algorithm design and complexity theory. The resource studied almost exclusively in this field is sequential time, even though parallel processing is a central and well-studied subfield of classical algorithmics. Identifying a suitable parameter and speeding up a computation directly through parallelization pursue the same goal: to solve as many instances as possible of a problem that is, in general, not efficiently solvable. It is therefore natural to combine both fields of research, and exactly this kind of integration is the goal of this thesis. I present a variety of parameterized complexity classes and develop a collection of parallel parameterized basic algorithms. In doing so, nearly all techniques that parameterized complexity theory has to offer are studied from a parallel point of view, among them color coding, bounded search trees, kernelization, structural graph decompositions, and algorithmic meta-theorems. Furthermore, I illustrate how the last two techniques can be used in practice by presenting two software libraries: one for computing optimal tree decompositions and one for model checking a fragment of monadic second-order logic.
- Max Bannach, Sebastian Berndt:
Positive-Instance Driven Dynamic Programming for Graph Searching.
In Proceedings of the 16th Algorithms and Data Structures Symposium (WADS 2019), Springer,
2019.
Research on the similarity of a graph to being a tree – called the treewidth of the graph – has seen an enormous rise within the last decade, but a practically fast algorithm for this task has been discovered only recently by Tamaki (ESA 2017). It is based on dynamic programming and makes use of the fact that the number of positive subinstances is typically substantially smaller than the number of all subinstances. Algorithms producing only such subinstances are called positive-instance driven (PID). We give an alternative and intuitive view on this algorithm from the perspective of the corresponding configuration graphs in certain two-player games. This allows us to develop PID-algorithms for a wide range of important graph parameters such as treewidth, pathwidth, q-branched treewidth, treedepth and dependency-treewidth. We analyze the worst case behavior of the approach on some well-known graph classes and perform an experimental evaluation on real world and random graphs.
- Max Bannach, Sebastian Berndt:
Practical Access to Dynamic Programming on Tree Decompositions.
MDPI Algorithms, 2019.
Special Issue: New Frontiers in Parameterized Complexity and Algorithms
Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last few decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold. One of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (as computing the decomposition, preparing it, …). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code. The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle’s celebrated theorem, which states that every MSO-definable problem can be efficiently solved if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model checker for a small fragment of MSO_1 (that is, we do not consider “edge-set-based” problems). This fragment is powerful enough to describe many natural problems, and our model checker turns out to be very competitive against similar state-of-the-art tools.
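The "only define the update rules" idea can be mocked up in a few lines; the following Python sketch (the library's actual interface is in Java and differs) spells out the four update rules of the 3-coloring dynamic program on a nice tree decomposition.

    from collections import namedtuple

    # kind is "leaf", "introduce", "forget", or "join"; vertex is the
    # introduced/forgotten vertex; children is a tuple of child nodes.
    Node = namedtuple("Node", "kind vertex children")

    def three_colorable(root, graph):
        """A state is a proper coloring of the current bag, stored as a
        frozenset of (vertex, color) pairs; graph maps each vertex to
        its set of neighbors.  Assumes a valid nice tree decomposition
        whose root has an empty bag."""
        def states(node):
            if node.kind == "leaf":
                return {frozenset()}                # the empty coloring
            if node.kind == "introduce":
                v, out = node.vertex, set()
                for s in states(node.children[0]):
                    bag = dict(s)
                    for c in range(3):              # try each color for v
                        if all(bag.get(w) != c for w in graph[v]):
                            out.add(frozenset(bag.items() | {(v, c)}))
                return out
            if node.kind == "forget":
                v = node.vertex
                return {frozenset(p for p in s if p[0] != v)
                        for s in states(node.children[0])}
            if node.kind == "join":                 # agree on the bag
                left, right = map(states, node.children)
                return left & right
        return bool(states(root))

Each of the four branches is one update rule; swapping them out for the rules of another problem (say, vertex cover with solution sizes attached to the states) leaves the traversal untouched, which is precisely the appeal of such an interface.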
- Max Bannach, Malte Skambath, Till Tantau:
Towards Work-Efficient Parallel Parameterized Algorithms.
In Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), Springer,
2019.
Parallel parameterized complexity theory studies how
fixed-parameter tractable (fpt) problems can be solved in
parallel. Previous theoretical work focused on parallel
algorithms that are very fast in principle, but did not take into
account that when we only have a small number of processors
(between 2 and, say, 1024), it is more important
that the parallel algorithms are work-efficient. In the
present paper we investigate how work-efficient fpt algorithms can
be designed. We review standard methods from fpt theory, like
kernelization, search trees, and interleaving, and prove
trade-offs for them between work efficiency and runtime
improvements. This results in a toolbox for
developing work-efficient parallel fpt algorithms.
- Tom Hartmann, Max Bannach, Martin Middendorf:
Sorting Signed Permutations by Inverse Tandem Duplication Random Losses.
In Proceedings of the 17th Asia Pacific Bioinformatics Conference (APBC 2019),
2019.
Gene order evolution of unichromosomal genomes, for example mitochondrial genomes, has been modelled mostly by four major types of genome rearrangements: inversions, transpositions, inverse transpositions, and tandem duplication random losses. Generalizing models that include all those rearrangements while admitting computational tractability are rare. In this paper we study such a rearrangement model, namely the inverse tandem duplication random loss (iTDRL) model, where an iTDRL duplicates and inverts a continuous segment of a gene order, followed by the random loss of one of the redundant copies of each gene. The iTDRL rearrangement has recently been proposed by several authors as a possible mechanism of mitochondrial gene order evolution. We initiate the algorithmic study of this new model of genome rearrangement on signed permutations by proving that a shortest rearrangement scenario that transforms one given gene order into another given gene order can be obtained in quasilinear time. Furthermore, we show that the length of such a scenario, i.e., the minimum number of iTDRLs in the transformation, can be computed in linear time.
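A single step of this model is easy to simulate; the following Python sketch reflects my reading of an iTDRL (duplicate a segment, invert the copy, then lose one copy of each gene at random) and is meant only as an illustration.

    import random

    def apply_itdrl(perm, i, j):
        """Applies one inverse tandem duplication random loss to the
        signed permutation perm: the segment perm[i:j] is duplicated,
        the copy is inverted (order reversed, signs flipped), and for
        each gene one of its two copies is lost at random."""
        segment = perm[i:j]
        inverted = [-g for g in reversed(segment)]
        keep_original = {abs(g): random.random() < 0.5 for g in segment}
        kept = [g for g in segment if keep_original[abs(g)]] + \
               [g for g in inverted if not keep_original[abs(g)]]
        return perm[:i] + kept + perm[j:]

For example, apply_itdrl([1, 2, 3, 4], 1, 3) may return [1, 2, -3, 4] or [1, 3, -2, 4], among other outcomes.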
- Tom Hartmann, Max Bannach, Martin Middendorf:
Sorting Signed Permutations by Inverse Tandem Duplication Random Losses.
IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2019.
Gene order evolution of unichromosomal genomes, for example mitochondrial genomes, has been modelled mostly by four major types of genome rearrangements: inversions, transpositions, inverse transpositions, and tandem duplication random losses. Generalizing models that include all those rearrangements while admitting computational tractability are rare. In this paper we study such a rearrangement model, namely the inverse tandem duplication random loss (iTDRL) model, where an iTDRL duplicates and inverts a continuous segment of a gene order, followed by the random loss of one of the redundant copies of each gene. The iTDRL rearrangement has recently been proposed by several authors as a possible mechanism of mitochondrial gene order evolution. We initiate the algorithmic study of this new model of genome rearrangement by proving that a shortest rearrangement scenario that transforms one given gene order into another given gene order can be obtained in quasilinear time. Furthermore, we show that the length of such a scenario, i.e., the minimum number of iTDRLs in the transformation, can be computed in linear time.
- Max Bannach, Till Tantau:
Computing Hitting Set Kernels By AC^0-Circuits.
In Proceedings of the 35th International Symposium on Theoretical Aspects of Computer Science, LIPIcs,
2018.
Given a hypergraph $H = (V,E)$, what is the smallest subset $X
\subseteq V$ such that $e \cap X \neq \emptyset$ holds for all $e \in E$?
This problem, known as the \emph{hitting set problem,} is a
basic problem in parameterized complexity theory. There are well-known
kernelization algorithms for it, which get a hypergraph~$H$ and a
number~$k$ as input and output a hypergraph~$H'$ such that (1)
$H$ has a hitting set of size~$k$ if, and only if, $H'$ has such a
hitting set and (2) the size of $H'$ depends only on $k$
and on the maximum cardinality $d$ of edges in~$H$. The
algorithms run in polynomial time, but are highly
sequential. Recently, it has been shown that one of them can be parallelized
to a certain degree: one can compute hitting set kernels in parallel
time $O(d)$ -- but it was conjectured that this is
the best parallel algorithm possible. We
refute this conjecture and show how hitting set kernels can be
computed in \emph{constant} parallel time. For our proof, we
introduce a new, generalized notion of hypergraph sunflowers and
show how iterated applications of the color coding technique can
sometimes be collapsed into a single application.
- Max Bannach, Till Tantau:
Computing Kernels in Parallel: Lower and Upper Bounds.
In Proceedings of the 13th International Symposium on Parameterized and Exact Computation (IPEC 2018), LIPIcs,
2018.
Parallel fixed-parameter tractability studies how parameterized problems can be solved in parallel. A surprisingly large number of parameterized problems admit a high level of parallelization, but this does not mean that we can also efficiently compute small problem kernels in parallel: known kernelization algorithms are typically highly sequential. In the present paper, we establish a number of upper and lower bounds concerning the sizes of kernels that can be computed in parallel. An intriguing finding is that there are complex trade-offs between kernel size and the depth of the circuits needed to compute them: For the vertex cover problem, an exponential kernel can be computed by AC$^0$-circuits, a quadratic kernel by TC$^0$-circuits, and a linear kernel by randomized NC-circuits with derandomization being possible only if it is also possible for the matching problem. Other natural problems for which similar (but quantitatively different) effects can be observed include tree decomposition problems parameterized by the vertex cover number, the undirected feedback vertex set problem, the matching problem, or the point line cover problem. We also present natural problems for which computing kernels is inherently sequential.
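For orientation, the quadratic vertex cover kernel mentioned above is classically obtained by the sequential Buss rule; the following Python sketch shows that baseline (the paper's concern is which circuit depth suffices to compute such kernels, not this sequential version).

    def buss_kernel(edges, k):
        """Buss kernelization for vertex cover: a vertex of degree > k
        must be in every size-k cover, so it is taken greedily; in the
        end a yes-instance can retain at most k^2 edges.  Returns the
        reduced edges, the remaining budget, and the forced vertices,
        or None if the instance is already seen to be negative."""
        edges, forced = [tuple(e) for e in edges], []
        while True:
            degree = {}
            for u, v in edges:
                degree[u] = degree.get(u, 0) + 1
                degree[v] = degree.get(v, 0) + 1
            high = next((v for v, d in degree.items() if d > k), None)
            if high is None:
                break
            forced.append(high)                # in every size-k cover
            edges = [e for e in edges if high not in e]
            k -= 1
            if k < 0:
                return None
        return (edges, k, forced) if len(edges) <= k * k else None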
- Max Bannach, Sebastian Berndt:
Practical Access to Dynamic Programming on Tree Decompositions.
In Proceedings of the 26th Annual European Symposium on Algorithms, LIPIcs,
2018.
Parameterized complexity theory has led to a wide range of algorithmic
breakthroughs within the last decades, but the practicability of these
methods for real-world problems is still not well understood. We investigate the practicability of one
of the fundamental approaches of this field: dynamic programming on tree
decompositions. Indisputably, this is a key technique in parameterized
algorithms and modern algorithm design. Despite the enormous impact of this
approach in theory, it still has very little influence on practical
implementations. The reasons for this phenomenon are manifold. One of them is
the simple fact that such an implementation requires a long chain of
non-trivial tasks (as computing the decomposition, preparing
it,\dots). We provide an easy way to implement such dynamic programs that only
requires the definition of the update rules.
With this interface, dynamic programs for
various problems, such as \textsc{3-coloring}, can be implemented easily in
about 100 lines of structured Java code.
The theoretical foundation of the success of dynamic programming on tree
decompositions is well understood due to Courcelle's celebrated theorem, which
states that every \textsc{MSO}-definable problem can be efficiently solved if
a tree decomposition of small width is given. We seek to provide practical
access to this theorem as well, by presenting a lightweight model-checker for
a small fragment of \textsc{MSO}. This fragment is powerful enough to
describe many natural problems, and our model-checker turns out to be very
competitive against similar state-of-the-art tools.
- Max Bannach, Sebastian Berndt, Thorsten Ehlers, Dirk Nowotka:
SAT-Encodings of Tree Decompositions.
In Proceedings of SAT Competition 2018: Solver and Benchmark Descriptions,
2018.
We suggest some benchmarks based on a propositional encoding of tree decompositions of graphs.