Accepted Papers

Nikolai Karpov, Marcin Pilipczuk and Anna Zych-Pawlewicz: An exponential lower bound for cut sparsifiers in planar graphs

Given an edge-weighted graph $G$ with a set $T$ of $k$ terminals, a \emph{mimicking network} is a graph with the same set of terminals that exactly preserves the sizes of minimum cuts between any partition of the terminals. A natural question in the area of graph compression is to provide mimicking networks that are as small as possible, both when the input graph $G$ is arbitrary and when it comes from a specific graph class.

In this note we show an exponential lower bound for cut mimicking networks in planar graphs: there are edge-weighted planar graphs with $k$ terminals that require $2^{k-2}$ edges in any mimicking network. This nearly matches the upper bound of $O(k 2^{2k})$ of Krauthgamer and Rika [SODA 2013, arXiv:1702.05951], and is in sharp contrast with the $O(k^2)$ upper bound under the assumption that all terminals lie on a single face [Goranci, Henzinger, Peng, arXiv:1702.01136]. As a side result we exhibit a hard instance for the double-exponential upper bounds given by Hagerup, Katajainen, Nishimura, and Ragde [JCSS 1998] and by Khan and Raghavendra [IPL 2014].

Tom van der Zanden and Hans L. Bodlaender: Computing Treewidth on the GPGPU

We present a parallel algorithm for computing the treewidth of a graph on a GPU. We implement this algorithm in OpenCL and experimentally evaluate its performance. Our algorithm is based on an $O^*(2^{n})$-time algorithm that explores the elimination orderings of the graph using a Held-Karp-like dynamic programming approach. We use Bloom filters to detect duplicate solutions.

GPU programming presents unique challenges and constraints, such as restrictions on the use of memory and the need to limit warp divergence. We experiment with various optimizations to work around these issues, and achieve a very large speedup (up to $80\times$) compared to running the same algorithm on the CPU.
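
To make the underlying recurrence concrete, here is a minimal sequential sketch of the $O^*(2^n)$ dynamic program over elimination orderings that the paper builds on (in the spirit of Bodlaender et al.); it is not the authors' OpenCL implementation, and all names are ours.

```python
def treewidth_dp(n, adj):
    """Held-Karp-style O*(2^n) dynamic program over elimination orderings
    (sequential sketch); NOT the authors' OpenCL implementation.
    adj[v] is the set of neighbours of vertex v, vertices are 0..n-1.
    layer[S] = smallest width achievable when the vertices of S are
    eliminated first, in the best possible order."""

    def width_of(S, v):
        # Degree of v once S has been eliminated (with fill-in): the number
        # of vertices outside S + {v} reachable from v via paths whose
        # internal vertices all lie in S.
        seen, stack, reach = {v}, [v], 0
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    if w in S:
                        stack.append(w)
                    else:
                        reach += 1
        return reach

    layer = {frozenset(): 0}
    for _ in range(n):
        nxt = {}
        for S, best in layer.items():
            for v in range(n):
                if v not in S:
                    T = S | {v}
                    val = max(best, width_of(S, v))
                    # The same set T is produced by many orderings; the dict
                    # merges these duplicates (the GPU version uses Bloom
                    # filters for duplicate detection instead).
                    if val < nxt.get(T, n):
                        nxt[T] = val
        layer = nxt
    return layer[frozenset(range(n))]
```

For example, `treewidth_dp(3, [{1, 2}, {0, 2}, {0, 1}])` returns 2 for the triangle.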

Radu Curticapean, Holger Dell, Fedor Fomin, Leslie Ann Goldberg and John Lapinskas: A Fixed-Parameter Perspective on #BIS

The problem of (approximately) counting the independent sets of a bipartite graph (#BIS) is the canonical approximate counting problem that is complete in the intermediate complexity class $\#\mathrm{RH}\Pi_1$. It is believed that #BIS does not have an efficient approximation algorithm but also that it is not NP-hard. We study the robustness of the intermediate complexity of #BIS by considering variants of the problem parameterised by the size of the independent set. We exhaustively map the complexity landscape for three problems, with respect to exact computation and approximation and with respect to conventional and parameterised complexity. The three problems are counting independent sets of a given size, counting independent sets with a given number of vertices in one vertex class, and counting maximum independent sets amongst those with a given number of vertices in one vertex class. Among other things, we show that all of these problems are NP-hard to approximate within any polynomial ratio. (This is surprising because the corresponding problems without the size parameter are complete in $\#\mathrm{RH}\Pi_1$, and hence are not believed to be NP-hard.) We also show that the first problem is #W[1]-hard to solve exactly but admits an FPTRAS, whereas the other two are W[1]-hard to approximate even within any polynomial ratio. Finally, we show that, when restricted to graphs of bounded degree, all three problems have efficient exact fixed-parameter algorithms.

Marin Bougeret and Ignasi Sau: How much does a treedepth modulator help to obtain polynomial kernels beyond sparse graphs?

In recent years, kernelization with structural parameters has been an active area of research within the field of parameterized complexity. As a relevant example, Gajarský \emph{et al.}~[ESA 2013] proved that every graph problem satisfying a property called finite integer index admits a linear kernel on graphs of bounded expansion and an almost linear kernel on nowhere dense graphs, parameterized by the size of a $c$-treedepth modulator, which is a vertex set whose removal results in a graph of treedepth at most $c$ for a fixed integer $c \geq 1$. The authors left it as further research to investigate this parameter on general graphs, and in particular to find problems that, while admitting polynomial kernels on sparse graphs, behave differently on general graphs.

In this article we answer this question by finding two very natural such problems: we prove that \textsc{Vertex Cover} admits a polynomial kernel on general graphs for any integer $c \geq 1$, and that \textsc{Dominating Set} does {\sl not} for any integer $c \geq 2$, even on degenerate graphs, unless $\text{NP} \subseteq \text{coNP}/\text{poly}$. For the positive result, we build on the techniques of Jansen and Bodlaender~[STACS 2011], and for the negative result we use a polynomial parameter transformation for $c \geq 3$ and an \textsc{or}-cross-composition for $c = 2$. As existing results imply that \textsc{Dominating Set} admits a polynomial kernel on degenerate graphs for $c = 1$, our result provides a dichotomy for the existence of polynomial kernels for \textsc{Dominating Set} on degenerate graphs with this parameter.

Julien Baste, Didem Gözüpek, Christophe Paul, Ignasi Sau, Mordechai Shalom and Dimitrios M. Thilikos: Parameterized complexity of finding a spanning tree with minimum reload cost diameter

We study the minimum diameter spanning tree problem under the reload cost model (\textsc{Diameter-Tree} for short), introduced by Wirth and Steffan (2001). In this problem, given an undirected edge-colored graph $G$, reload costs on a path arise at a node where the path uses consecutive edges of different colors. The objective is to find a spanning tree of $G$ of minimum diameter with respect to the reload costs. We initiate a systematic study of the parameterized complexity of the \textsc{Diameter-Tree} problem by considering the following parameters: the cost of a solution, and the treewidth and the maximum degree $\Delta$ of the input graph. We prove that \textsc{Diameter-Tree} is para-NP-hard for any combination of two of these three parameters, and that it is FPT parameterized by all three of them. We also prove that the problem can be solved in polynomial time on cactus graphs. This result is somewhat surprising, since we prove \textsc{Diameter-Tree} to be NP-hard on graphs of treewidth two, which is best possible as the problem can be trivially solved on forests. When the reload costs satisfy the triangle inequality, Wirth and Steffan (2001) proved that the problem can be solved in polynomial time on graphs with $\Delta = 3$, and Galbiati (2008) proved that it is NP-hard if $\Delta = 4$. Our results show, in particular, that without the requirement of the triangle inequality, the problem is NP-hard if $\Delta = 3$, which is also best possible. Finally, in the case where the reload costs are polynomially bounded by the size of the input graph, we prove that \textsc{Diameter-Tree} is in XP, and W[1]-hard parameterized by the treewidth plus $\Delta$.

Julien Baste, Ignasi Sau and Dimitrios M. Thilikos: Optimal algorithms for hitting (topological) minors on graphs of bounded treewidth

For a fixed collection of graphs ${\cal F}$, the \textsc{${\cal F}$-M-Deletion} problem consists in, given a graph $G$ and an integer $k$, deciding whether there exists $S \subseteq V(G)$ with $|S| \leq k$ such that $G \setminus S$ does not contain any of the graphs in ${\cal F}$ as a minor. We are interested in the parameterized complexity of \textsc{${\cal F}$-M-Deletion} when the parameter is the treewidth of $G$, denoted by $\mathsf{tw}$. Our objective is to determine, for a fixed ${\cal F}$, the smallest function $f_{{\cal F}}$ such that \textsc{${\cal F}$-M-Deletion} can be solved in time $f_{{\cal F}}(\mathsf{tw}) \cdot n^{\mathcal{O}(1)}$ on $n$-vertex graphs. Using and enhancing the machinery of boundaried graphs and small sets of representatives introduced by Bodlaender \emph{et al.}~[J ACM, 2016], we prove that when all the graphs in ${\cal F}$ are connected and at least one of them is planar, then $f_{{\cal F}}(w) = 2^{\mathcal{O}(w \cdot \log w)}$. When ${\cal F}$ is a singleton containing a clique, a cycle, or a path on $i$ vertices, we prove the following asymptotically tight bounds:
\begin{itemize}
\item $f_{\{K_4\}}(w) = 2^{\Theta (w \cdot \log w)}$.
\item $f_{\{C_i\}}(w) = 2^{\Theta (w)}$ for every $i \leq 4$, and $f_{\{C_i\}}(w) = 2^{\Theta (w \cdot\log w)}$ for every $i \geq 5$.
\item $f_{\{P_i\}}(w) = 2^{\Theta (w)}$ for every $i \leq 4$, and $f_{\{P_i\}}(w) = 2^{\Theta (w \cdot \log w)}$ for every $i \geq 6$.
\end{itemize}
The lower bounds hold unless the Exponential Time Hypothesis fails, and the superexponential ones are inspired by a reduction of Marcin Pilipczuk~[Discrete Appl Math, 2016]. The single-exponential algorithms use, in particular, the rank-based approach introduced by Bodlaender \emph{et al.}~[Inform Comput, 2015]. We also consider the version of the problem where the graphs in ${\cal F}$ are forbidden as {\sl topological} minors, and prove that essentially the same set of results holds.

Eva-Maria Hols and Stefan Kratsch: Smaller parameters for vertex cover kernelization

We revisit the topic of polynomial kernels for Vertex Cover relative to structural parameters. Our starting point is a recent paper due to Fomin and Strømme [WG 2016], who gave a kernel with O(|X|^12) vertices when X is a vertex set such that each connected component of G-X contains at most one cycle, i.e., X is a modulator to a pseudoforest. We strongly generalize this result by using modulators to d-quasi-forests, i.e., graphs where each connected component has a feedback vertex set of size at most d, and obtain kernels with O(|X|^{3d+9}) vertices. Our result relies on proving that minimal blocking sets in a d-quasi-forest have size at most d+2. This bound is tight, and there is a related lower bound excluding kernels of bit size O(|X|^{d+2-ε}).

In fact, we also obtain bounds for minimal blocking sets in more general graph classes: for d-quasi-bipartite graphs, where each connected component can be made bipartite by deleting at most d vertices, we get the same tight bound of d+2 vertices. For graphs whose connected components each have a vertex cover of cost at most d more than the best fractional vertex cover, which we call d-quasi-integral, we show that minimal blocking sets have size at most 2d+2, which is also tight. Combined with existing randomized polynomial kernelizations, this leads to randomized polynomial kernelizations for modulators to d-quasi-bipartite and d-quasi-integral graphs. There are corresponding lower bounds excluding kernels of bit size O(|X|^{d+2-ε}) and O(|X|^{2d+2-ε}), respectively.

Édouard Bonnet, Nick Brettell, O-Joung Kwon and Dániel Marx: Generalized feedback vertex set problems on bounded-treewidth graphs: chordality is the key to single-exponential parameterized algorithms

It has long been known that \textsc{Feedback Vertex Set} can be solved in time $2^{\mathcal{O}(w\log w)}n^{\mathcal{O}(1)}$ on graphs of treewidth $w$, but it was only recently that this running time was improved to $2^{\mathcal{O}(w)}n^{\mathcal{O}(1)}$, that is, to single-exponential parameterized by treewidth. We investigate which generalizations of \textsc{Feedback Vertex Set} can be solved in a similar running time. Formally, for a class of graphs~$\mathcal{P}$, \textsc{Bounded $\mathcal{P}$-Block Vertex Deletion} asks, given a graph~$G$ on $n$ vertices and positive integers~$k$ and~$d$, whether $G$ contains a set~$S$ of at most $k$ vertices such that each block of $G-S$ has at most $d$ vertices and is in $\mathcal{P}$. Assuming that $\mathcal{P}$ is recognizable in polynomial time and satisfies a certain natural hereditary condition, we give a sharp characterization of when single-exponential parameterized algorithms are possible for fixed values of $d$:
\begin{itemize}
\item if $\mathcal{P}$ consists only of chordal graphs, then the problem can be solved in time $2^{\mathcal{O}(wd^2)} n^{\mathcal{O}(1)}$,
\item if $\mathcal{P}$ contains a graph with an induced cycle of length $\ell\ge 4$, then the problem is not solvable in time $2^{o(w\log w)} n^{\mathcal{O}(1)}$ even for fixed $d=\ell$, unless the ETH fails.
\end{itemize}
We also study a similar problem, called \textsc{Bounded $\mathcal{P}$-Component Vertex Deletion}, where the target graphs have connected components of small size instead of blocks of small size, and we show analogous results. For this problem, we also show that if $d$ is part of the input and $\mathcal{P}$ contains all chordal graphs, then it cannot be solved in time $f(w)n^{o(w)}$ for any function $f$, unless the ETH fails.

Bart M. P. Jansen, Marcin Pilipczuk and Marcin Wrochna: Turing Kernelization for Finding Long Paths in Graph Classes Excluding a Topological Minor

The notion of Turing kernelization investigates whether a polynomial-time algorithm can solve an NP-hard problem, when it is aided by an oracle that can be queried for the answers to bounded-size subproblems. One of the main open problems in this direction is whether k-Path admits a polynomial Turing kernel: can a polynomial-time algorithm determine whether an undirected graph has a simple path of length k, using an oracle that answers queries of size k^O(1)? We show this can be done when the input graph avoids a fixed graph H as a topological minor, thereby significantly generalizing an earlier result for bounded-degree and K_{3,t}-minor-free graphs. Moreover, we show that k-Path even admits a polynomial Turing kernel when the input graph is not H-topological-minor-free itself, but contains a known vertex modulator of size bounded polynomially in the parameter, whose deletion makes it so. To obtain our results, we build on the graph minors decomposition to show that any H-topological-minor-free graph that does not contain a k-path, has a separation that can safely be reduced after communication with the oracle.

Petr Hlineny, Filip Pokrývka and Bodhayan Roy: FO model checking of geometric graphs

Over the past two decades the main focus of research into first-order (FO) model checking algorithms has been on sparse relational structures – culminating in the FPT algorithm by Grohe, Kreutzer and Siebertz for FO model checking of nowhere dense classes of graphs [STOC’14].
In contrast, little is currently known about FO model checking of dense classes of graphs or other structures; one remarkable exception is the FPT algorithm by Gajarský et al. for FO model checking of posets of bounded width [FOCS’15]. We study the FO model checking problem for dense graph classes definable by geometric means (intersection and visibility graphs), with the help of the aforementioned poset algorithm. We obtain new nontrivial FPT results, e.g., for restricted subclasses of circular-arc, circle, box, disk, and polygon-visibility graphs. We also complement the tractability results with related hardness reductions.

Bart M. P. Jansen and Astrid Pieterse: Optimal Data Reduction for Graph Coloring Using Low-Degree Polynomials

The theory of kernelization can be used to rigorously analyze data reduction for graph coloring problems. Here, the aim is to reduce a q-Coloring input to an equivalent but smaller input whose size is provably bounded in terms of structural properties, such as the size of a minimum vertex cover. In this paper we settle two open problems about data reduction for q-Coloring.
First, we use a recent technique of finding redundant constraints by representing them as low-degree polynomials to obtain a kernel of bit size O(k^(q−1) log k) for q-Coloring parameterized by Vertex Cover, for any q >= 3. This size bound is optimal up to k^o(1) factors assuming NP is not in coNP/poly, and improves on the previous-best kernel of size O(k^q).
Our second result shows that 3-Coloring does not admit non-trivial sparsification: assuming NP is not in coNP/poly, the parameterization by the number of vertices n admits no (generalized) kernel of size O(n^(2−ε)) for any ε > 0. Until now, such a lower bound was only known for coloring with q >= 4 colors.

Julien Baste and Dimitrios Thilikos: Contraction-Bidimensionality of Geometric Intersection Graphs

Given a graph G, we define bcg(G) as the minimum k for which G can be contracted to the uniformly triangulated grid Γ_k. A graph class G has the SQGC property if every graph G ∈ G has treewidth O(bcg(G)^c) for some 1 ≤ c < 2. The SQGC property is important for algorithm design as it defines the applicability horizon of a series of meta-algorithmic results, in the framework of bidimensionality theory, related to fast parameterized algorithms, kernelization, and approximation schemes. These results apply to a wide family of problems, namely problems that are contraction-bidimensional. Our main combinatorial result reveals a general family of graph classes that satisfy the SQGC property and includes bounded-degree string graphs. This considerably extends the applicability of bidimensionality theory for several intersection graph classes of 2-dimensional geometrical objects.

Lei Shang, Michele Garraffa, Federico Della Croce and Vincent T’Kindt: Merging nodes in search trees: an exact exponential algorithm for the single machine total tardiness scheduling problem

This paper proposes an exact exponential algorithm for the single machine total tardiness problem. It exploits the structure of a basic branch-and-reduce framework based on the well-known decomposition property of Lawler. The proposed algorithm, called branch-and-merge, is an improvement of the branch-and-reduce technique with the embedding of a node merging operation. Its time complexity converges to O^*(2.247^n), keeping the space complexity polynomial. The branch-and-merge technique is likely to be generalizable to other sequencing problems with similar decomposition properties.

Lars Jaffke, O-Joung Kwon and Jan Arne Telle: Polynomial-time algorithms for the Longest Induced Path and Induced Disjoint Paths problems on graphs of bounded mim-width

 We give the first polynomial-time algorithms on graphs of bounded maximum induced matching width (mim-width) for problems that are not locally checkable. In particular, we give n^O(w)-time algorithms on graphs of mim-width at most w, when given a decomposition, for the following problems: Longest Induced Path, Induced Disjoint Paths and H-Induced Topological Minor for fixed H. Our results imply that the following graph classes have polynomial-time algorithms for these three problems: Interval and Bi-Interval graphs, Circular Arc, Permutation and Circular Permutation graphs, Convex graphs, k-Trapezoid, Circular k-Trapezoid, k-Polygon, Dilworth-k and Co-k-Degenerate graphs for fixed k.

Édouard Bonnet, Panos Giannopoulos and Michael Lampis: On the Parameterized Complexity of Red-Blue Points Separation

We study the following geometric separation problem: given a set $R$ of red points and a set $B$ of blue points in the plane, find a minimum-size set of lines that separate $R$ from $B$. We show that, in its full generality, parameterized by the number of lines $k$ in the solution, the problem is unlikely to be solvable significantly faster than the brute-force $n^{O(k)}$-time algorithm, where $n$ is the total number of points. Indeed, we show that an algorithm running in time $f(k)n^{o(k/\log k)}$, for any computable function $f$, would disprove the ETH. Our reduction crucially relies on selecting lines from a set with a large number of different slopes (i.e., this number is not a function of $k$).

Conjecturing that the problem variant where the lines are required to be axis-parallel is FPT in the number of lines, we show the following preliminary result. Separating $R$ from $B$ with a minimum-size set of axis-parallel lines is FPT in the size of either set, and can be solved in time $O^*(9^{|\mathcal B|})$ (assuming that $B$ is the smaller set).
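
For concreteness, a set of axis-parallel lines separates $R$ from $B$ exactly when every red-blue pair is split by at least one line; the feasibility check below only pins down the problem being solved. It is our own illustration, not the paper's branching algorithm.

```python
def separates(red, blue, vertical, horizontal):
    """Decide whether a given set of axis-parallel lines separates every red
    point from every blue point; equivalently, every red-blue pair must be
    split by at least one line. Purely an illustration of the problem (ours),
    not the paper's O*(9^{|B|}) branching algorithm. Points are (x, y) pairs;
    `vertical`/`horizontal` are x-/y-coordinates of lines, assumed not to
    pass through any of the points."""
    def split(p, q):
        return (any(min(p[0], q[0]) < x < max(p[0], q[0]) for x in vertical) or
                any(min(p[1], q[1]) < y < max(p[1], q[1]) for y in horizontal))
    return all(split(r, b) for r in red for b in blue)
```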

Akanksha Agrawal, Saket Saurabh and Prafullkumar Tale: On the Parameterized Complexity of Contraction to Generalization of Trees

For a family of graphs $\cal F$, the $\mathcal{F}$-\textsc{Contraction} problem takes as input a graph $G$ and an integer $k$, and the goal is to decide if there exists $S \subseteq E(G)$ of size at most $k$ such that $G/S$ belongs to $\cal F$. Here, $G/S$ is the graph obtained from $G$ by contracting all the edges in $S$. Heggernes et al.~[{\em Algorithmica (2014)}] were the first to study edge contraction problems in the realm of parameterized complexity. They studied $\cal F$-\textsc{Contraction} when $\cal F$ is a simple family of graphs such as trees and paths. In this paper, we study the $\mathcal{F}$-\textsc{Contraction} problem where $\cal F$ generalizes the family of trees. In particular, we define this generalization in a “parameterized way”: let $\mathbb{T}_\ell$ be the family of graphs such that each graph in $\mathbb{T}_\ell$ can be made into a tree by deleting at most $\ell$ edges. Thus, the problem we study is $\mathbb{T}_\ell$-\textsc{Contraction}. We design an FPT algorithm for $\mathbb{T}_\ell$-\textsc{Contraction} running in time $\mathcal{O}((\ncol)^{\mathcal{O}(k + \ell)} \cdot n^{\mathcal{O}(1)})$. Furthermore, we show that the problem does not admit a polynomial kernel when parameterized by $k$. Inspired by the negative result for the kernelization, we design a lossy kernel for $\mathbb{T}_\ell$-\textsc{Contraction} of size $\mathcal{O}\big([k(k + 2\ell)]^{\lceil \frac{\alpha}{\alpha-1} \rceil + 1}\big)$.

Mark de Berg, Sándor Kisfaludi-Bak and Gerhard J. Woeginger: The Dominating Set Problem in Geometric Intersection Graphs

We study the parameterized complexity of dominating sets in geometric intersection graphs. In one dimension, we investigate intersection graphs induced by translates of a fixed pattern Q that consists of a finite number of intervals and a finite number of isolated points. We prove that Dominating Set on such intersection graphs is polynomially solvable whenever Q contains at least one interval, and whenever Q contains no intervals and for any two point pairs in Q the distance ratio is rational. The remaining case where Q contains no intervals but does contain an irrational distance ratio is shown to be NP-complete and contained in FPT (when parameterized by the solution size). In two and higher dimensions, we prove that Dominating Set is contained in W[1] for intersection graphs of semi-algebraic sets with constant description complexity. This generalizes known results from the literature. Finally, we establish W[1]-hardness for a large class of intersection graphs.

Petr Golovach, Pinar Heggernes, Paloma Lima and Pedro Montealegre: Finding Connected Secluded Subgraphs

Problems related to finding induced subgraphs satisfying given properties form one of the most studied areas within graph algorithms. Such problems have given rise to breakthrough results and led to the development of new techniques both within the traditional P/NP dichotomy and within parameterized complexity. The \Pi-Subgraph problem asks whether an input graph contains an induced subgraph on at least k vertices satisfying a graph property \Pi. For many applications, it is desirable that the found subgraph has as few connections to the rest of the graph as possible, which gives rise to the Secluded \Pi-Subgraph problem. Here, input k is the size of the desired subgraph, and input t is a limit on the number of neighbors this subgraph has in the rest of the graph. This problem has been studied from a parameterized perspective, and unfortunately it turns out to be W[1]-hard for many graph properties \Pi, even when parameterized by k+t. We show that the situation changes when we are looking for a connected induced subgraph satisfying \Pi. In particular, we show that the Secluded \Pi-Subgraph problem is fixed-parameter tractable when parameterized by just t for many important graph properties \Pi.

Jean Cardinal, Jerri Nummenpalo and Emo Welzl: Solving and Sampling with Many Solutions: Satisfiability and Other Hard Problems

We investigate parameterizing hard combinatorial problems by the size of the solution set compared to all solution candidates. Our main result is a uniform sampling algorithm for solutions of 2-CNF formulas that runs in expected time $O^*(\varepsilon^{-c})$ for a constant $c < 1$, where $\varepsilon$ is the fraction of assignments that are satisfying. This improves significantly over the trivial sampling bound of expected $\Theta^*(\varepsilon^{-1})$. The regime of $\varepsilon$ where we improve over previous algorithms is $\varepsilon = \Omega(0.708^n)$. We also consider algorithms for 3-SAT with an $\varepsilon$ fraction of satisfying assignments, and prove that it can be solved in $O^*(\varepsilon^{-2.27})$ deterministic time, and in $O^*(\varepsilon^{-0.936})$ randomized time. Finally, to further demonstrate the applicability of this framework, we also explore how similar techniques can be used for vertex cover problems.
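
As a point of reference, the trivial $\Theta^*(\varepsilon^{-1})$ baseline mentioned above is plain rejection sampling; a minimal sketch (ours, not the paper's improved sampler) follows.

```python
import random

def rejection_sample_2cnf(n, clauses):
    """The trivial Theta*(1/eps) baseline: draw uniform assignments until one
    satisfies the 2-CNF. The expected number of draws is 1/eps, where eps is
    the fraction of satisfying assignments; assumes the formula is satisfiable.
    A clause is a pair of non-zero literals; +i / -i means variable i is
    true / false (variables are numbered 1..n). Illustrative sketch only."""
    def satisfied(assignment):
        return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
    while True:
        assignment = [random.random() < 0.5 for _ in range(n)]
        if satisfied(assignment):
            return assignment
```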

Ralph Bottesch: Relativization and Interactive Proof Systems in Parameterized Complexity Theory

We introduce some classical complexity-theoretic techniques to Parameterized Complexity. First, we study relativization for the machine models that were used by Chen, Flum, and Grohe (2005) to characterize a number of parameterized complexity classes. Here we obtain a new and non-trivial characterization of the A-Hierarchy in terms of oracle machines, and parameterize a famous result of Baker, Gill, and Solovay (1975) by proving that, relative to specific oracles, FPT and A[1] can either coincide or differ (a similar statement holds for FPT and W[P]). Second, we initiate the study of interactive proof systems in the parameterized setting, and show that every problem in the class AW[SAT] has a proof system with “short” interactions, in the sense that the number of rounds is upper-bounded in terms of the parameter value alone.

Yasuaki Kobayashi, Hiromu Ohtsuka and Hisao Tamaki: An improved fixed-parameter algorithm for one-page crossing minimization

In this paper, we consider the problem of minimizing the number of crossings in a one-page book embedding, which we call {\em one-page crossing minimization}. Here, we are given a graph $G$ with $n$ vertices together with a non-negative integer $k$ and are asked whether $G$ can be embedded into a single page with at most $k$ crossings. Bannister and Eppstein (GD 2014) showed that this problem is fixed-parameter tractable. Their algorithm is derived through the application of Courcelle’s theorem (on graph properties definable in the monadic second-order logic of graphs) and runs in $f(L) n$ time, where $L = 2^{O(k^2)}$ is the length of the formula defining the property that the one-page crossing number is at most $k$, and $f$ is a computable function without any known upper bound expressible as an elementary function. We give an explicit dynamic programming algorithm with a drastically improved running time of $2^{O(k \log k)}n$.
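
To fix notation: with all vertices placed on the spine in some order, two edges of a one-page embedding cross exactly when their endpoints interleave. The small checker below (ours, purely illustrative, and not the paper's $2^{O(k \log k)}n$ dynamic program) counts the crossings of a given vertex order.

```python
def one_page_crossings(order, edges):
    """Count the crossings of a one-page book embedding: vertices lie on the
    spine in the given order and every edge is drawn as an arc on one side,
    so two edges cross exactly when their endpoints interleave. This merely
    defines the quantity being minimized."""
    pos = {v: i for i, v in enumerate(order)}
    arcs = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    crossings = 0
    for i, (a, b) in enumerate(arcs):
        for c, d in arcs[i + 1:]:
            if a < c < b < d or c < a < d < b:
                crossings += 1
    return crossings
```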

George Manoussakis: An Output Sensitive Algorithm for Maximal Clique Enumeration in Sparse Graphs

The \textit{degeneracy} of a graph $G$ is the smallest integer $k$ such that every subgraph of $G$ contains a vertex of degree at most $k$. Given an $n$-order $k$-degenerate graph $G$, we present an algorithm enumerating all its maximal cliques. Assuming that $\alpha$ is the number of maximal cliques of $G$, our algorithm has setup time $\mathcal{O}(n(k^2+s(k)))$ and enumeration time $\alpha \cdot \mathcal{O}((k+1)f(k))$, where $s(k)$ (resp. $f(k)$) is the preprocessing time (resp. enumeration time) for maximal clique enumeration in a general $k$-order graph. This is the first output-sensitive algorithm whose enumeration time depends only on the degeneracy of the graph.
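
As background, the degeneracy and its associated peeling order can be computed greedily by repeatedly removing a minimum-degree vertex. The sketch below shows only this standard preprocessing step; it is not the output-sensitive enumeration algorithm of the paper.

```python
from collections import defaultdict

def degeneracy_ordering(adj):
    """Standard greedy peeling: repeatedly delete a vertex of minimum degree.
    Returns the elimination order and the degeneracy k. adj maps every
    vertex to its set of neighbours."""
    degree = {v: len(adj[v]) for v in adj}
    buckets = defaultdict(set)
    for v, d in degree.items():
        buckets[d].add(v)
    order, removed, k = [], set(), 0
    for _ in range(len(adj)):
        d = min(b for b in buckets if buckets[b])   # smallest non-empty bucket
        k = max(k, d)
        v = buckets[d].pop()
        order.append(v)
        removed.add(v)
        for u in adj[v]:
            if u not in removed:                    # move u to its new bucket
                buckets[degree[u]].discard(u)
                degree[u] -= 1
                buckets[degree[u]].add(u)
    return order, k
```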

Andreas Björklund, Petteri Kaski and Ryan Williams: Generalized Kakeya sets for polynomial evaluation and faster computation of fermionants

We present two new data structures for computing values of an $n$-variate polynomial $P$ of degree at most $d$ over a finite field of $q$ elements. Assuming that $d$ divides $q-1$, our first data structure relies on $(d+1)^{n+2}$ tabulated values of $P$ to produce the value of $P$ at any of the $q^n$ points using $O(nqd^2)$ arithmetic operations in the finite field. Assuming that $s$ divides $d$ and $d/s$ divides $q-1$, our second data structure assumes that $P$ satisfies a degree-separability condition and relies on $(d/s+1)^{n+s}$ tabulated values to produce the value of $P$ at any point using $O\!\left(nq^ssq\right)$ arithmetic operations. Our data structures are based on generalizing upper-bound constructions due to Mockenhaupt and Tao (2004), Saraf and Sudan (2008), and Dvir (2009) for Kakeya sets in finite vector spaces from linear to higher-degree polynomial curves.

As an application we show that the new data structures enable a faster algorithm for computing integer-valued {\em fermionants}, a family of self-reducible polynomial functions introduced by Chandrasekharan and Wiese (2011) that captures numerous fundamental algebraic and combinatorial invariants such as the determinant, the permanent, the number of Hamiltonian cycles in a directed multigraph, as well as certain partition functions of strongly correlated electron systems in statistical physics. In particular, a corollary of our main theorem for fermionants is that the permanent of an $m\times m$ integer matrix with entries bounded in absolute value by a constant can be computed in time $2^{m-\Omega\left(\sqrt{m/\log\log m}\right)}$, improving an earlier algorithm of Björklund (2016) that runs in time $2^{m-\Omega\left(\sqrt{m/\log m}\right)}$.
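
For context only: the classical baseline for the permanent is Ryser's $O(2^m \cdot m^2)$ inclusion-exclusion formula, sketched below; the algorithms discussed above shave a $2^{\Omega(\sqrt{m/\log\log m})}$ factor off this kind of $2^m$ bound. The code is ours and is unrelated to the paper's Kakeya-based data structures.

```python
from itertools import combinations

def permanent_ryser(A):
    """Ryser's classical inclusion-exclusion formula for the permanent of an
    m x m matrix, using O(2^m * m^2) arithmetic operations. Textbook baseline
    only; not the algorithm of the paper."""
    m = len(A)
    total = 0
    for r in range(1, m + 1):
        sign = (-1) ** r
        for cols in combinations(range(m), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += sign * prod
    return (-1) ** m * total
```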

Vikraman Arvind, Johannes Köbler, Sebastian Kuhnert and Jacobo Torán: Finding Small Weight Isomorphisms with Additional Constraints is Fixed-Parameter Tractable

Lubiw showed [SICOMP 10, 1981] that several variants of Graph Isomorphism are NP-complete, where the solutions are required to satisfy certain additional constraints. One of these, called Isomorphism With Restrictions, is to decide for two given graphs X₁=(V,E₁) and X₂=(V,E₂) and a subset R⊆V×V of forbidden pairs whether there is an isomorphism π from X₁ to X₂ such that π(i)≠j for all (i,j)∈R. We prove that this problem and several of its generalizations are in fact in FPT:

– The problem of deciding whether there is an isomorphism between two graphs that moves k vertices and satisfies Lubiw-style constraints is in FPT, with k and the size of R as parameters. The problem remains in FPT even if a conjunction of disjunctions of such constraints is allowed. As a consequence of the main result, it follows that the problem of deciding whether there is an isomorphism that moves exactly k vertices is in FPT. This solves a question left open in our article on exact weight automorphisms [STACS 2017].

– Checking if there is an isomorphism π between two graphs with compl(π)=t is also in FPT with t as parameter, where compl(π) is the Cayley measure defined as the minimum number t such that π can be expressed as a product of t transpositions.

– We consider a more general problem in which the vertex set of a graph X is partitioned into Red and Blue, and we are interested in an automorphism that stabilizes Red and Blue and moves exactly k vertices in Blue, where k is the parameter. This problem was introduced by Downey and Fellows [1999], and we showed [STACS 2017] that it is W[1]-hard even with color classes of size 4 inside Red. Now, for color classes of size at most 3 inside Red, we show that the problem is in FPT.

In the non-parameterized setting, all these problems are NP-complete. Also, they all generalize, in several ways, the problem of deciding whether there is an isomorphism between two graphs that moves at most k vertices, which was shown to be in FPT by Schweitzer [ESA 2011].
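
To make the base problem concrete, here is a tiny brute-force checker for Isomorphism With Restrictions. It is exponential in n and meant only to pin down the problem; it has nothing to do with the FPT algorithms of the paper.

```python
from itertools import permutations

def iso_with_restrictions(n, E1, E2, R):
    """Brute-force illustration of Lubiw's Isomorphism With Restrictions on a
    common vertex set {0, ..., n-1}: find a permutation pi mapping edge set
    E1 onto E2 such that pi(i) != j for every forbidden pair (i, j) in R.
    Returns such a permutation, or None if none exists."""
    target = {frozenset(e) for e in E2}
    for pi in permutations(range(n)):
        if any(pi[i] == j for (i, j) in R):
            continue                     # violates a forbidden pair
        if {frozenset((pi[u], pi[v])) for (u, v) in E1} == target:
            return pi
    return None
```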

Michael Lampis and Valia Mitsou: Treewidth with a Quantifier Alternation Revisited

In this paper we take a closer look at the parameterized complexity of $\exists\forall$-SAT, the prototypical complete problem of the class $\Sigma_2^p$, the second level of the polynomial hierarchy. We provide a number of tight fine-grained bounds on the complexity of this problem and its variants with respect to the most important structural graph parameters. Specifically, we show the following lower bounds (assuming the ETH):

– It is impossible to decide $\exists\forall$-SAT in time less than double-exponential in the input formula’s treewidth. More strongly, we establish the same bound with respect to the vertex cover number of the formula’s primal graph, a much more restrictive measure. This lower bound, which matches the performance of known algorithms, shows that the degeneration of the performance of treewidth-based algorithms to a tower of exponentials already begins in problems with one quantifier alternation.

– For the more general $\exists\forall$-CSP problem over a non-boolean domain of size $B$, there is no algorithm running in time $2^{B^{o(vc)}}$, where $vc$ is the vertex cover number of the input’s primal graph.

– $\exists\forall$-SAT is already NP-hard even when the input formula has constant modular treewidth (or clique-width), indicating that dense graph parameters are less useful for problems in $\Sigma_2^p$.

– For the two weighted versions of $\exists\forall$-SAT recently introduced by de Haan and Szeider, called $\exists_k\forall$-SAT and $\exists\forall_k$-SAT, we give tight upper and lower bounds parameterized by treewidth (or primal vertex cover) and the weight $k$. Interestingly, the complexity of these two problems turns out to be quite different: one is double-exponential in treewidth, while the other is double-exponential in $k$.

We complement the above negative results by showing a double-exponential FPT algorithm for QBF parameterized by vertex cover, showing that for this parameter the complexity never goes beyond double-exponential, for any number of quantifier alternations.

Karthekeyan Chandrasekaran and Sahand Mozaffari: Odd Multiway Cut in Directed Acyclic Graphs

We investigate the odd multiway node (edge) cut problem, where the input is a graph with a specified collection of terminal nodes and the goal is to find a smallest subset of non-terminal nodes (edges) to delete so that no two terminal nodes are connected by an odd-length path. In earlier work, Lokshtanov and Ramanujan showed that both odd multiway node cut and odd multiway edge cut are fixed-parameter tractable (FPT) when parameterized by the size of the solution in undirected graphs. In this work, we focus on directed acyclic graphs (DAGs) and design a fixed-parameter algorithm. Our main contribution is an extension of the shadow-removal framework for parity problems in DAGs. We complement our FPT results with tight approximability as well as polyhedral results for 2 terminals in DAGs. Additionally, we show inapproximability results for odd multiway edge cut in undirected graphs even for 2 terminals.

Johannes K. Fichte, Markus Hecher, Stefan Woltran and Michael Morak: DynASP2.5: Dynamic Programming on Tree Decompositions in Action

Efficient exact parameterized algorithms are a vibrant area of theoretical research. Recent solving competitions, such as the PACE challenge, show that there is also increasing practical interest in the parameterized algorithms community. An important research question is whether dedicated parameterized exact algorithms have practical relevance and can even beat well-established problem solvers. We consider the logic-based declarative modeling language and problem solving framework Answer Set Programming (ASP). State-of-the-art ASP solvers rely considerably on SAT-based algorithms. An ASP solver (DynASP2), which is based on classical dynamic programming on tree decompositions, has been published very recently. Unfortunately, DynASP2 can outperform modern ASP solvers on programs of small treewidth only if the question of interest is to count the number of solutions. In this paper, we describe the underlying concepts of our new implementation (DynASP2.5), which is competitive with state-of-the-art ASP solvers even for finding just one solution, when solving problems such as the Steiner tree problem that have been modeled in ASP on graphs of low treewidth. Our implementation is based on a novel approach that we call multi-pass dynamic programming (M-DP_SINC).

Lech Duraj, Marvin Künnemann and Adam Polak: Tight Conditional Lower Bounds for Longest Common Increasing Subsequence

We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called k-LCIS: given k integer sequences X_1,…,X_k of length at most n, the task is to determine the length of the longest common subsequence of X_1,…,X_k that is also strictly increasing. In particular, for the case k=2 (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case.

Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound; specifically, no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as LCS. We further strengthen this lower bound to rule out O((nL)^{1−ε})-time algorithms for LCIS, where L denotes the solution size, and to rule out O(n^{k−ε})-time algorithms for k-LCIS. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.
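
For reference, the classical quadratic-time algorithm for the two-sequence case, whose conditional optimality the above establishes, can be sketched as follows; this is the textbook dynamic program, not a contribution of the paper.

```python
def lcis_length(A, B):
    """Classical O(|A| * |B|) dynamic program for the Longest Common
    Increasing Subsequence of two integer sequences.
    dp[j] = length of an LCIS of the processed prefix of A and of B
    that ends exactly at B[j]."""
    dp = [0] * len(B)
    for a in A:
        best = 0                     # best dp[j'] seen so far with B[j'] < a
        for j, b in enumerate(B):
            if b < a:
                best = max(best, dp[j])
            elif b == a:
                dp[j] = max(dp[j], best + 1)
    return max(dp, default=0)
```

For instance, `lcis_length([3, 1, 2], [1, 2, 3])` returns 2 (a longest common increasing subsequence is 1, 2).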

David Eppstein and Denis Kurz: K-Best Solutions of MSO Problems on Tree-Decomposable Graphs

We show that, for any graph optimization problem in which the feasible solutions can be expressed by a formula in monadic second-order logic describing sets of vertices or edges and in which the goal is to minimize the sum of the weights in the selected sets, we can find the k best solutions for n-vertex graphs of bounded treewidth in time O(n + k log n). In particular, this applies to the problem of finding the k shortest simple paths between given vertices in directed graphs of bounded treewidth, giving an exponential speedup in the per-path cost over previous algorithms.