
Reducing the value of the optimum: FPT inapproximability for Set Cover and Clique, in super exponential time in opt

July 16, 2013
Abstract
In Fixed Parameter Tractability (FPT) theory, we are given a problem with a special parameter k. In this paper we are only interested in k equal to the size of the optimum. An FPT algorithm for a problem is an exact algorithm that runs in time h(k)·n^{O(1)} for a function h that may be arbitrarily large. In FPT approximation we seek a g(k) ratio achievable in time h(k)·n^{O(1)}, where h, g are two increasing functions. For such results we want to minimize g and h.
FPT inapproximability is the opposite of FPT approximability. It is not hard to see that for minimization problems we need the inequality k ≥ opt, and for maximization we need k ≤ opt (with opt the optimum of some concrete instance).
A stricter notion of FPT inapproximability assumes that k = opt, with opt the optimum value of some concrete instance. Clearly, if we allow k ≥ opt (for minimization problems) it may be easier to prove inapproximability than when restricting k to equal opt. In this paper we adopt the stricter definition. As will become clear later, opt will always be the value of a yes instance in some gap reduction from 3-SAT.
Thus FPT inapproximability in opt is defined as follows. Given a problem and an instance I of the problem, we show that the problem cannot be approximated within g(opt) in time h(opt)·n^{O(1)} for some increasing functions h, g, with opt the value of the optimal solution for I. An inapproximability result aims to make h and g as large as possible.
We study FPT inapproximability in opt for Clique, Set Cover, and the Minimum Size Maximal Independent Set problem. We restrict h(opt) to be super exponential in opt; otherwise, some hardness results may be directly translated into FPT inapproximability results. Fellows [6] conjectured that Clique and Set Cover admit no g(opt) ratio approximation in time h(opt)·n^{O(1)} for any pair of increasing functions h, g.
We prove that under the exponential time hypothesis (ETH) [11] and the projection games conjecture [15], there are two constants f, δ > 0 so that Set Cover is (log opt)^{1+δ}-inapproximable, even in time exp(opt·(log opt)^f). This running time is significantly higher than exponential time in opt. Under a qualitatively better version of the projection games conjecture, we can show that Set Cover admits no opt^{d'} approximation for some constant d', in time exp(2^{opt^{d''}}) for some constant d'', which is almost doubly exponential time in opt.
For the Clique problem, we show that under the ETH there exists a constant ε > 0 so that for any increasing function h, Clique admits no (1 − ε)-approximation in time h(opt)·n^{O(1)}. In [5] it was shown that Clique for opt ≤ log n cannot be solved in time significantly smaller than n^{opt}. We improve one aspect of [5], namely, we can show inapproximability for opt ≤ log n, which is stronger than ruling out an exact solution.

The Minimum Maximal Independent Set (MMIS) problem is: given a parameter k, decide if there exists a maximal independent set of size at most k. For Minimum Maximal Independent Set we are able to show that unless P = NP, for any increasing functions g, h, there is no g(opt) approximation for the problem in time h(opt)·n^{O(1)}; namely, we are able to prove the Fellows conjecture for MMIS. This seems to be the only non-trivial problem for which we can prove such a result so far.

1 Introduction

1.1 Motivation

Our paper is motivated by an important conjecture by Fellows. The conjecture concerns parameterized approximation for Set Cover and Clique. We use a convention in fixed parameter tractability for maximization problems: a solution is not admissible unless it is of size ω(1) (namely, super constant). Otherwise, by returning a single vertex we could approximate Clique within opt.
Given this, a version of the Fellows conjecture can be described as follows:
Conjecture 1.1 (The Fellows conjecture [6]). For any pair of increasing functions h, g, Clique and Set Cover admit no g(opt) approximation that runs in time h(opt)·n^{O(1)}.
We are only interested in time h(opt) that is super exponential in opt. We try to avoid a situation in which hardness results are directly translated to FPT inapproximability. Indeed, if h may be subexponential, then in many cases a polynomial gap reduction will imply FPT inapproximability. In any polynomial gap reduction from 3-SAT, an instance I of 3-SAT is mapped to an instance of size |I|^D of some problem, for a constant D > 1. In addition, it often holds that opt ≤ |I|^D. Then for any D' > D, 2^{opt^{1/D'}} = 2^{o(|I|)}. Thus setting h(opt) = 2^{opt^{1/D'}} is a direct way to get FPT inapproximability from a regular inapproximability result. However, if we insist that h(opt) be super exponential in opt, proving FPT inapproximability is equivalent to creating a gap reduction with very small optimum. This is by no means an easy task. The two inapproximability results for Clique [10] and for Set Cover [17] seem hard to use if super exponential time is required: the reductions from an instance I of 3-SAT to Clique and Set Cover derive an optimum of size quite close to |I|^D for D > 1. Thus even exponential time in opt is out of the question, unless we transform the instance into a different instance with smaller opt, something that seems highly difficult to do. To overcome this difficulty, we use other inapproximability results. Proving the Fellows conjecture seems beyond the reach of current knowledge. Thus we try to prove such results for large g and h.
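To see why a subexponential h trivializes matters, the calculation behind the observation above can be written out (a sketch; D is the blow-up exponent of the polynomial gap reduction):

```latex
\[
\mathrm{opt} \le |I|^{D}
\;\Longrightarrow\;
\mathrm{opt}^{1/D'} \le |I|^{D/D'} = o(|I|)
\quad \text{for any } D' > D,
\]
\[
\text{hence}\qquad
h(\mathrm{opt}) \;=\; 2^{\mathrm{opt}^{1/D'}} \;=\; 2^{o(|I|)} .
\]
```

So with this subexponential choice of h, any polynomial-time inapproximability result is, formally, already an FPT inapproximability result, which is exactly the degenerate situation we rule out by demanding super exponential h.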
Another problem we study is the Minimum Maximal Independent Set problem, which seeks a minimum size independent set that is also maximal (namely, a dominating set). This problem seems quite hard to approximate within better than n^{1−ε}: for every feasible set S, every superset of S and every subset of S may be infeasible. The problem is known to be W[2]-hard [7] and unless P = NP it admits no n^{1−ε} approximation [9]. There are very few problems we are aware of that admit such a strong hardness without using any PCP theorem. Another example is a beautiful √n inapproximability for the maximum number of disjoint paths in a directed graph [8].
Remark: In all future calculations we assume that all numbers are integral. Fixing this using floors and ceilings is elementary.

2 Previous work

The following relation is known among the parameterized complexity classes: FPT ⊆ W[1] ⊆ W[2]. We are going to assume the ETH, and thus use the fact that under the ETH, Clique and Set Cover do not have an FPT algorithm when parameterized by the size of the solution.
Corollary 2.1. (FPT ≠ W[1]), i.e., CLIQUE cannot be solved in time f(k)·n^c where f is any function depending only on k, and n is the number of vertices of the graph G.
Corollary 2.1 implies the following weaker claim:
Corollary 2.2. (FPT ≠ W[2]), i.e., SET COVER cannot be solved in time f(k)·n^c where f is any function depending only on k, and n is the size of the universe.
In proving FPT results for a problem, there is always a parameter k. We are only interested in k that is the value of the optimum. For inapproximability we set k = opt, which is a stricter inapproximability notion. In parameterized-by-k inapproximability for a minimization problem, the goal is to show that there is no g(k) ratio in time h(k)·n^{O(1)} for some k ≥ opt (in maximization we impose k ≤ opt). Setting k = opt (namely, k is the optimum of some concrete instance) is a stricter notion than showing inapproximability for some k ≥ opt. Hence we adopt this stricter version. We note that opt is the optimum of some instance and, as we shall see, it will be the optimum of a yes instance in gap reductions from 3-SAT.
To the best of our knowledge, the effort of showing FPT inapproximability for Clique and Set
Cover started with [2].
Theorem 2.3. [2] Under the ETH and the PGC, there exist constants F1, F2 > 0 such that the Set Cover problem does not admit an FPT approximation algorithm with ratio opt^{F1} in time 2^{opt^{F2}}.
The above theorem uses F2 < 1, hence uses time subexponential in opt, and is not suited for this paper.
Theorem 2.4. Unless NP ⊆ SUBEXP, for every 0 < ε < 1 there exists a constant F = F(ε) > 0 such that Clique admits no FPT approximation within opt^{1−ε}, in time 2^{opt^F}.
As F < 1 in the above construction, the running time here too is subexponential in opt, and this theorem is not suited for this paper.
We note that the price of requiring super exponential time is a much weaker inapproximability result than [2]. However, it is our belief that the definition of FPT inapproximability should demand super exponential running time in opt.
In [2], a large collection of W[1]-hard problems for which the Fellows conjecture does not apply is presented; namely, these problems are given an f(opt) ratio, and in fact the running time is just polynomial in the size of the input. All these problems are not only W[1]-hard but also admit strong inapproximability results (at least Label-Cover hardness; see [15]). Thus, for some reason, some W[1]-hard problems seem to behave differently than Clique.
Theorem 2.5. [2] Directed Multicut, Directed Steiner Tree, Directed Steiner Forest, Directed Steiner Network and Minimum Size Edge Cover admit g(opt)-approximation algorithms for some small function g (the largest approximation ratio we give is opt^2). The running time is polynomial in n.
Theorem 2.6. [2] The Strongly Connected Steiner Subgraph problem (which is W[1]-hard and does not have any polynomial time constant factor approximation) admits a factor 2 approximation in time h(opt)·n^{O(1)}.
This seems to be the only natural problem for which such a result is known.
Previous work from inapproximability theory:
Theorem 2.7. [10, 18] Unless P = NP, Clique cannot be approximated within n^{1−ε}.
The [10] reduction was randomized, but in [18] this result is derandomized and achieved under the assumption that P ≠ NP. Note that there are stronger inapproximability results for Clique [13], but we cannot use them because the running time of the reduction is quasi-polynomial (in fact, if the reduction were polynomial, this would contradict the ETH).
Theorem 2.8. [17] Unless P = NP, Set Cover admits no better than c·ln n approximation for some constant c.
These inapproximability results are close to best possible. Clique admits a trivial n ratio approximation. For Set Cover it is known (folklore) that the natural greedy algorithm gives a ln n + 1 approximation.

3 A technique to reduce opt

This paper illustrates the importance of linear reductions. There is a linear reduction from an instance I of 3-SAT to Clique. More precisely, the number of edges in the graph is not linear in |I|, but the number of vertices is linear in |I|. This is enough for proving our result for Clique.
To give hardness for Set Cover we use an almost linear PCP (see [15]).¹ Say that a PCP has size |I|·Φ(|I|) for some function Φ(|I|) = o(|I|). Our technique changes the optimum to roughly Φ(|I|), hence making it smaller. There are standard reductions from an almost linear PCP [15] to Set Cover, for example the one of [14]. Note that the number of elements in the Set Cover reduction is not almost linear in |I| but rather polynomial in |I| (as we shall see). But for our result, it is enough that the number of sets is almost linear in |I|.
In [15] it is conjectured that there exists a PCP of size |I|·Φ(|I|)·poly(1/ε), with gap 1/ε and alphabet of size poly(1/ε), for Φ(|I|) = o(|I|). By a proper choice of ε, and the use of our technique, we turn the optimum into a quantity that is o(|I|), which is enough to guarantee time super exponential in opt.

4 Complexity Conjectures Assumed In The Paper

In this section, we briefly describe the complexity conjectures assumed in this paper.

4.1 Exponential Time Hypothesis

Impagliazzo et al. [11] formulated the following conjecture which is known as the Exponential
Time Hypothesis (ETH).
EXPONENTIAL TIME HYPOTHESIS (ETH)
3-SAT cannot be solved in 2^{o(n)}·(n + m)^{O(1)} time, where n is the number of variables and m is the number of clauses.
Using the Sparsification Lemma of Calabro et al. [1], the following lemma was shown.
Lemma 4.1. Assuming the ETH, 3-SAT cannot be solved in 2^{o(m)}·(n + m)^{O(1)} time, where n is the number of variables and m is the number of clauses.
¹ It is not hard to see that if there is a reduction from 3-SAT to Set Cover with a linear number of sets, the ETH fails.

4.2 The Projection Games Conjecture

The main conjecture used in this paper from the world of approximation algorithms is the Projection Games Conjecture due to Moshkovitz [15].
PROJECTION GAMES CONJECTURE (PGC)
There exists c > 0 such that for every ε ≥ 1/n^c, it is possible to design an almost-linear size PCP with completeness 1, soundness ε, and alphabet size poly(1/ε).
Moshkovitz is able to show that under the PGC there is no (1 − ε) ln n-approximation for Set Cover unless P = NP. The PCP of [15] is in fact a Label-Cover instance [15]. In other words, the verifier queries two entries of the PCP string. The following can be obtained from the PGC using standard reductions (see [14]).
Conjecture 4.2. [15] There exists a reduction from a SAT instance of size |I| to an instance of Set Cover with at most n = |I|·2^{log^α |I|}·poly(1/ε) sets, for some constant α < 1, so that the gap between a yes and a no instance is Ω(√(1/ε)).
While the almost linear term of the [15] conjecture is not specified, we did the natural thing: we used the almost linear expression of the paper [16], which is the last PCP paper by Moshkovitz before [15].
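For orientation, instantiating the conjectured parameters with the choice ε = c/ln² n made in Section 6 gives (a sketch; α < 1 denotes the almost-linear exponent taken from [16]):

```latex
\[
n \;=\; |I|\cdot 2^{\log^{\alpha}|I|}\cdot \mathrm{poly}(\ln n)
\;=\; |I|^{1+o(1)},
\qquad
\text{gap} \;=\; \Omega\!\bigl(\sqrt{1/\varepsilon}\bigr) \;=\; \Omega(\log n).
\]
```

That is, the number of sets stays almost linear in |I| while the gap matches the c·ln n hardness threshold of Theorem 2.8 up to constants.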
The conjecture of Fellows is:
Conjecture 4.3. For any two increasing functions h, g, Clique and Set Cover admit no g(opt) approximation in time h(opt)·n^{O(1)}.

4.3 Formal definition of the problems


CLIQUE
Input : An undirected graph G = (V, E), and an integer k
Problem: Does G have a clique of size at least k?
Parameter: k

SET COVER
Input: A universe U = {u_1, u_2, ..., u_n} and a collection S = {S_1, S_2, ..., S_m} of subsets of U such that ⋃_{j=1}^{m} S_j = U.
Problem: Does there exist a subcollection S′ ⊆ S of size k such that ⋃_{S_i ∈ S′} S_i = U?
Parameter: k
MINIMUM SIZE MAXIMAL INDEPENDENT SET:
Input : An undirected graph G = (V, E), and an integer k
Problem: Does G have a maximal independent set of size at most k?
Parameter: k

5 Our Results

Theorem 5.1. Under the ETH and the PGC, there exist two constants f, δ > 0 so that Set Cover is (log opt)^{1+δ}-inapproximable, even in time exp(opt·(log opt)^f).
This time is much larger than exponential in opt and so meets the main requirement of the paper.
Theorem 5.2. If there exists a PCP of size |I|·polylog(|I|)·poly(1/ε) with gap 1/ε, then Set Cover admits no opt^{d'} ratio, for some constant d', even in time exp(2^{opt^{d''}}) for some constant d''.
This PCP was conjectured to exist, in a private communication, by Dana Moshkovitz, who is a leading expert on PCPs. Note that the running time is almost doubly exponential in opt.
Theorem 5.3. Under the ETH, there exists a constant ε so that for every increasing function h, Clique admits no (1 − ε)-approximation, even in h(opt)·n^{O(1)} time.
The above theorem is proved by creating an instance of Clique with constant gap, so that the optimum size clique can be made f(opt) for any f, however small. This seems to be a new construction.
We can compare our result to the one of [5]. In [5] it is shown that for opt ≤ log n, Clique cannot be solved in time significantly smaller than n^{opt}, unless NP has a subexponential simulation. More specifically, if Clique can be solved in time much smaller than n^{opt}, any solution for an NPC problem that makes f(n) non-deterministic steps implies a deterministic solution in time roughly exp(√(f(n))). This implies that NP admits a subexponential algorithm.
Theorem 5.3 works for any opt, and opt ≤ log n in particular. Thus for opt ≤ log n, Clique has no (1 − ε)-approximation in time h(opt)·n^{O(1)} for any increasing function h. The improvement over [5] is that we show inapproximability rather than only rule out exact solutions. For such small values of opt it may well be that getting an exact solution is considerably harder than getting a (1 − ε)-approximation. However, our improvement is not strict in the following sense. The times in [5] and our paper are incomparable, because [5] allows slightly super polynomial time while we allow h(opt)·poly(n) time for any function h. It may be the case that combining the ideas of this paper and [5] would give a running time which is h(opt) times the running time of [5].
Remark: We prove our Clique inapproximability under the ETH, while in [5] an exact solution is ruled out under a somewhat stronger assumption, namely, that NP admits no subexponential simulation.
We also find a well known problem for which we are able to show that the Fellows conjecture holds true.
Theorem 5.4. For any two increasing functions g, h, MMIS admits no g(opt) ratio in time h(opt)·n^{O(1)}.
Except for a trivial such result for 3-Coloring (see below), this is the only problem for which we are able to prove the Fellows conjecture, so far. We will discuss in detail the special properties of the (simple) reduction from 3-SAT to MMIS, and why it allows proving the conjecture.
In the rest of the paper we will assume that all numbers are integers. This is done to simplify the claims. An elementary use of ceilings and floors can correct this point.
Avoiding a constant optimum: Consider the 3-Coloring problem. We (trivially) show that the Fellows conjecture holds for this problem. In [4] it is shown that 3-Coloring admits no constant approximation, for any constant, assuming a variant of the (well known) Khot 2-to-1 projection games conjecture. Take the yes instance of the problem (in which opt = 3). For any g, g(opt) = g(3) is constant, and h(opt)·n^{O(1)} is just polynomial time. By the above hardness of [4], it is clear that the Fellows conjecture applies to 3-Coloring. The lesson to be derived from this example is that we should only try to prove the Fellows conjecture for problems with non-constant opt.

6 Super exponential time in opt inapproximability for SET COVER

6.1 Proof of Theorem 5.1

We choose ε = c/ln² n for a large enough constant c in Conjecture 4.2. (Interestingly, in [15] a choice of ε = c/log⁴ n is required; we however can afford to choose a slightly larger ε.)
Thus the PGC implies:
Corollary 6.1. (A variant of [14] applied to [15]) There exists a reduction from the projection games conjecture to Set Cover so that the number of sets is |I|·2^{log^α |I|}·polylog(|I|)·poly(1/ε), for some constant α < 1, and the number of elements is O(|I|³·poly(n)), so that in the case of a yes instance the value of the optimum is |I|·2^{log^α |I|}·polylog(|I|), and in the case of a no instance, the solution size is at least c·ln |I| times |I|·2^{log^α |I|}·polylog(|I|) for some constant c.

We briefly explain how we reach the above corollary. The length of the PCP in [15] is taken from [16]. This length is |I|·2^{log^α |I|}·polylog(|I|)·(1/ε)^{c₁} for some constants α < 1 and c₁. The PCP of [15] is equivalent to a Label-Cover instance. In Label-Cover, we have two provers and a collection of questions for the two provers. The total number of questions is the size of the PCP. Two questions that can be asked together are called a query. The verifier asks a query; after getting the answers from both provers, the verifier performs a consistency test and answers yes if this test is successful, and no otherwise.
Note that the alphabet size in [15] is (1/ε)^{c₂} for some constant c₂, and so clearly the number of answers per question cannot be more than that. The size of the Label-Cover instance (for both yes and no instances) is the number of questions times the number of answers per question, and so is |I|·2^{log^α |I|}·polylog(|I|)·(1/ε)^{c₁+c₂}. The 2^{log^α |I|} term is taken from [16]. The choice of ε allows us to write the size of the Label-Cover instance as |I|·2^{log^α |I|}·polylog(|I|).
It is always the case that for a yes instance we can choose a single answer to every question and satisfy all queries (see both [14] and [15]). Thus for a yes instance we can pick |I|·2^{log^α |I|}·polylog(|I|) answers and satisfy all queries.
Given any strategy (a single answer per question), for a no instance the verifier accepts at most an ε = 1/log² n fraction of the queries. This 1/ε gap, combined with a gadget in which every answer for a prover is a set (see [14]), gives a gap reduction for Set Cover with gap √(1/ε) = Θ(log |I|). The size of the optimum in a yes instance for Set Cover is the number of answers needed to satisfy all questions, namely |I|·2^{log^α |I|}·polylog(|I|). For a no instance the optimum is Θ(log |I|) times larger.
Note that in [14] a ground set of elements is defined for every query. A collection of partitions is deterministically chosen per ground set of elements. There is a bijection between partitions and sets. Even though the construction is deterministic, to understand it better one can think of every partition as a random half of the ground set of elements. The size of every ground set of elements is M = Θ(2^{d·log n}) for some constant d, for reasons that will become clear soon. For a yes instance there is a trap door: it is possible to cover any ground set of elements with 2 sets (partitions), namely, a partition and its complement. But in a no instance we need about ln M = Θ(log |I|) sets (partitions) to cover the ground set (just as when covering a set with random partitions). This leads to a ln n/2 inapproximability. The size of the ground set is M = Θ(2^{d·log n}) for some constant d because, as we saw, the gap for Set Cover is roughly min{log M, √(1/ε)}, which by the choices of M and ε is c·ln n for some constant c.
Thus the number of elements is indeed bounded by |I|³·poly(n), because the number of queries is less than |I|³.

6.2 The new construction

We now describe a way to change the Set Cover instance into a new one. The idea is to make the optimum much smaller. Let the Set Cover instance be G(X, Y, E), with X the sets and Y the elements. The set E is a simple way to describe which element belongs to which set: there is an edge between a set and an element if the element belongs to the set.
We are interested in large subsets of X. Select all subsets of X of size |I|/log(|I|). Such a big set will be called a collection set, to distinguish these super sets from the sets of the instance we start with. Let S be a typical collection set, S = {s₁, s₂, ...}, with every sᵢ a set, namely, a collection of elements. Connect every such S to ⋃_{s∈S} s, namely, to all the elements that belong to some s ∈ S.
This defines a new Set Cover instance with the collection sets as the sets, and the same elements. The inclusion of elements in a collection set is as described above.
Claim 6.2. The size of the new graph is 2^{o(|I|)}, as required by the ETH.
Proof. The number of collection sets is
  C( |I|·2^{log^α |I|}·polylog(|I|), |I|/log |I| ),
where α < 1 is the constant of the almost linear PCP size. We use (n choose k) ≤ (ne/k)^k to get that the number of collection sets is at most
  ( e·2^{log^α |I|}·polylog(|I|) )^{|I|/log |I|} = 2^{o(|I|)},
proving the claim. □
Remark: The log(|I|) term arising from the |I|/log(|I|) term is swallowed by the polylog(|I|) term.
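The counting step of Claim 6.2 in full (a sketch; α < 1 denotes the almost-linear exponent of the PCP size):

```latex
\[
\binom{\,|I|\cdot 2^{\log^{\alpha}|I|}\cdot \mathrm{polylog}(|I|)\,}{\,|I|/\log|I|\,}
\;\le\;
\Bigl(e\cdot 2^{\log^{\alpha}|I|}\cdot \mathrm{polylog}(|I|)\Bigr)^{|I|/\log|I|},
\]
\[
\log\bigl(\#\text{collection sets}\bigr)
\;=\; O\!\Bigl(\frac{|I|}{\log|I|}\cdot \log^{\alpha}|I|\Bigr)
\;=\; O\!\bigl(|I|\cdot \log^{\alpha-1}|I|\bigr)
\;=\; o(|I|),
\]
```

where the last step uses α < 1, so the exponent is sublinear in |I| as the Sparsification Lemma requires.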
Claim 6.3. The gap for Set Cover between a yes and a no instance remains the same.
Proof. We only need to know that the optimum of the original Set Cover instance is some value X for a yes instance, and X·c·ln n for a no instance.
Clearly, any minimum solution will take the original optimum and break it into disjoint sets. For a minimization cover problem, there is no point in taking two collection sets A, B with A ∩ B ≠ ∅, because this would just mean covering some elements more than once. Thus the size of the new optimum for a yes instance is
  X / (|I|/log(|I|)).
On the other hand, the size of the new optimum for a no instance is
  X·c·ln n / (|I|/log(|I|)),
proving the claim. □
Claim 6.4. The new optimum value for a yes instance is 2^{log^α |I|}·polylog(|I|).
Proof. As discussed above, we should divide |I|·2^{log^α |I|}·polylog(|I|), for α < 1, by |I|/log(|I|). The claim follows, as the additional log |I| term is swallowed by the polylog(|I|) term. □
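The division in Claim 6.4, spelled out (a sketch, with α < 1 the almost-linear exponent):

```latex
\[
\mathrm{opt}_{\mathrm{new}}
\;=\;
\frac{|I|\cdot 2^{\log^{\alpha}|I|}\cdot \mathrm{polylog}(|I|)}{|I|/\log|I|}
\;=\;
2^{\log^{\alpha}|I|}\cdot \mathrm{polylog}(|I|)\cdot \log|I|
\;=\;
2^{\log^{\alpha}|I|}\cdot \mathrm{polylog}(|I|).
\]
```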
Since the gap is ln |I| up to constants, we get:
Claim 6.5. The gap is larger than (log opt)^{1+δ} for some constant δ > 0.
Proof. The new optimum is 2^{log^α |I|}·polylog(|I|), with α < 1. Thus log opt = O(log^α |I|). Thus for any constant δ > 0 so that 1 + δ < 1/α we get:
  (log opt)^{1+δ} = O(log^{α(1+δ)} |I|) = o(log |I|),   (1)
which implies that the gap is actually larger than (log opt)^{1+δ}. □
We now finish the proof of Theorem 5.1. Clearly, for any d > 1,
  2^{log^α |I|}·polylog(|I|)·(log |I|)^{1−α} ≤ |I|^{1/d} = o(|I|)
for large enough |I|. Hence exp(opt·(log |I|)^{1−α}) = 2^{o(|I|)}.
We re-write this term as a function of opt. As log opt = Θ(log^α |I|), if we set f = 1/α − 1 we get that (log opt)^f = Θ((log |I|)^{1−α}). Therefore exp(opt·(log opt)^f) = exp(Θ(opt·(log |I|)^{1−α})) = 2^{o(|I|)}.
Assume the contrary of Theorem 5.1 for the sake of contradiction. Building the new Set Cover graph requires 2^{o(|I|)} time. After that, the new optimum is as described above, and the gap between a yes and a no instance is Ω((log opt)^{1+δ}), as discussed above. Also, as we saw above, a running time of exp(opt·(log opt)^f) is 2^{o(|I|)}. Thus, by choosing c small enough, a c·(log opt)^{1+δ} approximation implies that we can tell a yes instance of 3-SAT from a no instance of 3-SAT. If this is done in time exp(opt·(log opt)^f) = 2^{o(|I|)}, then building the graph plus the running time of the approximation algorithm is bounded by 2^{o(|I|)}. This contradicts the ETH. □
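The whole accounting of the proof can be summarized in one chain (a sketch; α < 1 is the almost-linear exponent, δ the constant of Claim 6.5, and f = 1/α − 1):

```latex
\[
\mathrm{opt} \;=\; 2^{\log^{\alpha}|I|}\cdot\mathrm{polylog}(|I|),
\qquad
\log \mathrm{opt} \;=\; \Theta(\log^{\alpha}|I|),
\]
\[
\exp\!\bigl(\mathrm{opt}\cdot(\log \mathrm{opt})^{f}\bigr)
\;=\; \exp\!\bigl(\mathrm{opt}\cdot\Theta((\log|I|)^{1-\alpha})\bigr)
\;=\; 2^{o(|I|)},
\qquad
(\log \mathrm{opt})^{1+\delta} \;=\; o(\log|I|).
\]
```

The left-hand computation certifies the super exponential time budget, and the right-hand one certifies that the ln |I| gap dominates the claimed (log opt)^{1+δ} ratio.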

6.3 Proof of Theorem 5.2

For proving this theorem we assume:
Conjecture 6.6. There exists a PCP of size |I|·polylog(|I|)·poly(1/ε) with gap 1/ε.
Is the conjecture reliable? Note that there already exists a PCP of size even smaller than the above: in [3] a PCP is presented whose size is |I|·polylog(|I|). The downside is that the inapproximability gap of this PCP [3] is 2. Improving the inapproximability to polylogarithmic does not seem far-fetched.
We now use the above conjecture and show a much stronger FPT inapproximability for Set Cover.
Proof of Theorem 5.2: Choose ε so that 1/ε = log² |I|. This choice implies that the number of sets is |I|·polylog(|I|).
Now, we make every subset of X of size |I|/(d·log log |I|) a collection set, with d a large enough constant.
The number of collection sets in the new instance is
  C( |I|·polylog(|I|), |I|/(d·log log |I|) ),
and it is 2^{o(|I|)} if d is large enough. This is implied by the inequality (n choose k) ≤ (ne/k)^k. The reason for the major improvement is that the 2^{log^α |I|} term is gone.
After this change, the size of the optimum for a yes instance becomes polylog(|I|). Thus for some constant d', opt^{d'} = o(c·ln |I|). This gives the promised gap.
Also, as opt^{d'} = o(c·log |I|), we have 2^{opt^{d'}} = o(|I|) and exp(2^{opt^{d''}}) = 2^{o(|I|)} for some constant d''. This ends the proof of Theorem 5.2. □
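The counting in the proof of Theorem 5.2, written out (a sketch):

```latex
\[
\binom{\,|I|\cdot\mathrm{polylog}(|I|)\,}{\,|I|/(d\log\log|I|)\,}
\;\le\;
\bigl(e\,d\cdot\mathrm{polylog}(|I|)\cdot\log\log|I|\bigr)^{\frac{|I|}{d\log\log|I|}}
\;=\;
\exp\!\Bigl(O\!\Bigl(\frac{|I|}{d}\Bigr)\Bigr),
\]
```

since the logarithm of the base is O(log log |I|), which cancels against the denominator of the exponent; taking d large enough makes the instance size smaller than any fixed exponential in |I|.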

6.4 Discussion of this specific technique

Our technique is very specialized, and this technique alone cannot be used to prove the Fellows conjecture for Set Cover. Only a linear reduction from 3-SAT to Set Cover could be used to prove the conjecture; however, such a reduction cannot exist, as it contradicts the ETH.
There is large evidence that a linear PCP cannot exist. The ultimate PCP we may expect (albeit this is not known even for constant ε) is a PCP of size |I|·poly(1/ε) with gap 1/ε. For the choice 1/ε = polylog(n), the inapproximability is almost the same as in Theorem 5.2. This shows that Theorem 5.2 obtained the best result possible if we only use an almost linear PCP and our technique.
Other methods: It is plausible that if we could make the optimum smaller by random sampling techniques, and combine this with the above technique of making collection sets, we would be able to prove the Fellows conjecture. However, at this stage we were not able to achieve that. It may be the case that current PCP theory does not allow a proof of the Fellows conjecture for Set Cover. In fact, we suspect that a new type of PCP, namely, a parameterized version of the PCP theorem, has to be invented to prove the Fellows conjecture, at least for Set Cover.

7 A constant lower bound for Clique in arbitrarily large time in opt
We use the elementary reduction from 3-SAT with a constant gap to the Clique problem. In this case the size of an instance I of 3-SAT is denoted by |I| = m + n, with n the number of variables and m the number of clauses.

In the reduction, we add 7 vertices for every clause of the 3-SAT instance, one per each of the 7 satisfying assignments to the clause. Thus we get a correspondence: a vertex in the clique graph is a legal assignment to some clause. We join two vertices (assignments for two clauses) by an edge if the assignments do not contradict each other. An edge can be a fake edge, namely an edge from an assignment to C₁ to an assignment to C₂ so that C₁ and C₂ share no variables. However, if C₁ and C₂ share a literal or a variable, the two assignments are joined by an edge only if the truth values of the mutual variable(s) agree. These edges are called true edges. Clearly, the seven possible assignments for a fixed clause C form an independent set, because any two of them disagree (by definition) on the value of at least one variable.
Theorem 7.1 (The standard PCP theorem). There exists a reduction from any NPC language L to 3-SAT with n variables and m clauses so that a yes instance is mapped into a 3-SAT instance in which all m clauses can be simultaneously satisfied, while in the case of a no instance, at most a (1 − ε) fraction of the clauses can be simultaneously satisfied.
The following simple claim is standard and is proved for completeness.
Claim 7.2. If the 3-SAT instance is a yes instance, there is a clique of size m, with m the number of clauses. For a no instance, the maximum clique is of size at most (1 − ε)m.
Proof. In the case of a yes instance there is a truth assignment F that satisfies all clauses. Fix a clause C. Among the 7 possible truth assignments for C, we take the one that agrees with F. The claim for a yes instance follows. For a no instance, say for the sake of contradiction that the clique is of size larger than (1 − ε)m. Ignore the fake edges. Recall that the 7 assignments to the same clause form an independent set, because by definition they disagree on some variable. Hence there can be at most one vertex (assignment of values to the variables of a clause) per clause. In particular, there is a one-to-one function from vertices in the clique to clauses.
A clique of size larger than (1 − ε)m thus corresponds to a collection of more than (1 − ε)m clauses, because the above function is one-to-one. However, this gives a collection of more than (1 − ε)m clauses that can be simultaneously satisfied for a no instance: indeed, every two clauses that share a variable assign the same value to that variable, because the two vertices are part of a clique. This is a contradiction. □
We do the following transformation, a modification of what we did for Set Cover. The number of vertices is 7m. Let f(m) be any (slowly) increasing function of m. We make every subset of m/f(m) vertices of the 7m vertices a single vertex. Such a vertex is called a collection vertex. Two collection vertices A, B are connected by an edge if A ∪ B is a clique and, even more importantly, A ∩ B = ∅. The last condition, namely, that two connected sets must be disjoint, is not needed in the Set Cover reduction, but it is crucial here.
Claim 7.3. The new instance of the Clique problem has size 2^{o(m)}.
Proof. Using
  (n choose k) ≤ (ne/k)^k,
we get that the number of collection vertices is at most (7e·f(m))^{7m/f(m)}. Note that this is also equal to 2^{(7m/f(m))·log(7e·f(m))}. The number of edges in the new Clique instance is at most the number of vertices squared, hence at most 2^{(14m/f(m))·log(7e·f(m))}, which is 2^{o(m)} because f is an increasing function. This size is within the limitation of the ETH (because of the Sparsification Lemma). □
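The bound of Claim 7.3 in symbols (a sketch, using the subset size m/f(m)):

```latex
\[
\binom{7m}{\,m/f(m)\,}
\;\le\;
\Bigl(\frac{7m\,e}{m/f(m)}\Bigr)^{m/f(m)}
\;=\;
\bigl(7e\,f(m)\bigr)^{m/f(m)},
\qquad
\log\bigl(7e\,f(m)\bigr)^{m/f(m)}
\;=\; O\!\Bigl(\frac{m\log f(m)}{f(m)}\Bigr) \;=\; o(m),
\]
```

since f(m) → ∞ with m, so the exponent is sublinear in m and the ETH (via the Sparsification Lemma) still applies to the constructed instance.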

Claim 7.4. The gap of 1 − ε remains.


Proof. Because two different collection vertices A and B are adjacent, only if A B is a clique,
and A, B are disjoint, it follows that largest clique size of collection vertices for a yes instance
is m/(m/ f (m) = f (m). Indeed, the vertices inside collection vertices are disjoint. Therefore a
clique can not constrain more than f (m) collection vertices, otherwise we find a clique of size
larger than m. On the other hand, this value f (m) can be reached if we decompose the maximum
clique of size m into f (m) vertex disjoint cliques of size m/ f (m).
For a no instance, because the sets are disjoint, the max clique for new clique instance is at
most (1 ) f (m). Otherwise, since the sets are disjoint, we get a clique in the original graph of
size larger than (1 )m. This means that the gap is kept. 2
The optimum for a yes instance is f (m). Additionally, we can make f as small as we wish.
Claim 7.5. For any strictly increasing function h, it is not possible to approximate the Clique problem within a factor better than 1 − ε, even if h(opt) · (m + n)^{O(1)} time is allowed.
Proof. Let h^{−1} be the inverse function of h. Setting f(m) = h^{−1}(m), we get that the optimum for a yes instance is at most h^{−1}(m). As h is increasing, h^{−1} is increasing as well, hence we can apply the above. Note that the clique instance built is of size 2^{o(m)}, as required by the ETH.

Thus if we had a better than 1 − ε approximation running in time h(opt) · (n + m)^{O(1)}, then because the gap is 1 − ε we could distinguish between a yes and a no instance of 3-SAT. The running time h(opt) turns out to be h(h^{−1}(m)) = m, which is well within ETH time. The claim follows. □
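The parameter choice in the proof above can be summarized in one line (this is a restatement of the argument, with h^{−1} denoting the inverse of h):

```latex
% Setting f(m) = h^{-1}(m): the yes-instance optimum is at most h^{-1}(m),
% so the allowed FPT time budget collapses to a polynomial in m:
\mathrm{opt} \le h^{-1}(m)
\;\Longrightarrow\;
h(\mathrm{opt}) \cdot (n+m)^{O(1)}
  \le h\bigl(h^{-1}(m)\bigr) \cdot (n+m)^{O(1)}
  = m \cdot (n+m)^{O(1)},
```

which is far below the 2^{Ω(m)} lower bound that the ETH (via the Sparsification Lemma) gives for deciding the gap instance.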
8 Proving the Fellows conjecture for the MMIS problem
We start from the following simple observation.
Claim 8.1. Say that we have a 3-SAT instance with n variables and m clauses. Let q be an arbitrary increasing function. If there exists a gap reduction from 3-SAT to an instance I′ of some new problem, so that the optimum opt′ for a yes instance can be made opt′ = f(opt) for as small an f as we want, and the value in the no instance is always at least q(opt), then the Fellows conjecture applies for the problem.

Proof. Consider any two increasing functions g, h. As we can make the value of the optimum in a yes instance opt′ = f(opt) for f as small as we wish, we can make it small enough so that h(opt′) = h(f(opt)) is polynomial in n. Additionally, since the value for a no instance is at least q(opt) for the increasing function q, we can make f small enough so that g(opt′) · opt′ = g(f(opt)) · f(opt) < q(opt). This implies a g(opt′) gap between the yes instance and the no instance of the new problem. □
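The two requirements placed on f in the proof above can be written compactly as a pair of conditions (a restatement only; no new requirement is added):

```latex
% Conditions the scaling function f must satisfy in Claim 8.1:
% (1) the FPT time budget becomes polynomial in n;
% (2) a g(opt') ratio cannot bridge the yes/no gap.
h\bigl(f(\mathrm{opt})\bigr) \le n^{O(1)},
\qquad
g\bigl(f(\mathrm{opt})\bigr) \cdot f(\mathrm{opt}) < q(\mathrm{opt}).
```

Since f may be taken arbitrarily small while g, h are fixed in advance, both conditions can always be met simultaneously.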
We use an elementary reduction by [12].
Let q(n) be an arbitrary increasing function of n. We start with 3-SAT with m clauses and n variables. We only use the fact that the problem is NP-complete. For every variable x_i add a vertex v_i, and a vertex v̄_i representing x̄_i. For every x_i add an edge between v_i and v̄_i. Duplicate every clause C, q(n) times (namely, we will have q(n) new vertices per clause). The vertices of C will be denoted W(C) = {w_C^1, …, w_C^{q(n)}}.
Do the following for every clause C. For every variable x_i, if x_i appears in C, add an edge between v_i and all vertices in W(C). Otherwise, if x̄_i ∈ C, add edges from v̄_i to all of the vertices of W(C).
We now discuss the properties of this reduction. Note that when we say opt it is the value of
the optimum of a yes instance in some gap reduction.
We note that if the formula is satisfiable, we can take a subset of vertices that corresponds to a satisfying assignment. This is the union of two sets Q_Y ⊆ {v_i} and Q_N ⊆ {v̄_i}, so that for every x_j, either v_j ∈ Q_Y or v̄_j ∈ Q_N. Namely, the subsets are defined according to a satisfying assignment. As the assignment is legal, the union Q_Y ∪ Q_N will not contain an edge, because every x_i gets either a T value or an F value but never both. In addition, every clause vertex will be dominated by a vertex that satisfies the clause. More precisely, for every clause C there exists some x_i so that either x_i ∈ C and v_i ∈ Q_Y, or x̄_i ∈ C and v̄_i ∈ Q_N. The corresponding vertex v_i or v̄_i will be a neighbor of all the copies of C. Thus the size of the optimum is n, linear in the number of variables.
As we shall see, the optimum for a no instance is not linear in the size of the 3-SAT instance. But the fact that the optimum for a yes instance is linear in the number of variables (even if it is not linear in any other sense) turns out to be enough.
We now apply the usual trick for reducing the value of the optimum in the case of a yes instance. For some arbitrarily small f(n), we treat every set of n/f(n) vertices that does not contain both v_i and v̄_i for any i as a super vertex. Two super vertices are joined by an edge if their union corresponds to an independent set. A super vertex is said to contain the vertices it is associated with. In addition, every super vertex is joined to the union of all neighbors of the vertices that belong to it. By the analysis we have already seen, this reduces the optimum in the case of a yes instance to f(n), as we can take a collection of vertices that corresponds to a satisfying assignment and split it into f(n) disjoint sets of size n/f(n) each. Also, we have seen (when we proved the result for Clique) that the size of the construction is 2^{o(n)}.
For a no instance, regardless of which collection of super vertices is taken, we must get an
independent set. This means that the collection of super vertices chosen, in case of a no instance,
is a (maybe partial) legal assignment of values to the variables (in the usual sense, namely if a
super vertex containing vi is chosen, then we assign T to xi ).
However, the truth assignment for a no instance cannot satisfy all clauses. This means that for some C, no copy of C is connected to any of the super vertices chosen by the optimum for the no instance. Since every C corresponds to q(n) vertices, the size of the optimum in a no instance, regardless of the super vertices chosen, is at least q(n) (all the copies of the unsatisfied clause have to be added to the independent set, for otherwise it would not be a dominating set).
Thus the situation is as in Claim 8.1: we can make the optimum in a yes instance f(n) for as small a function f as we want, and the value of a no instance is at least q(n) for some increasing function q. This implies that the Fellows conjecture holds for MMIS.
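In the notation of Claim 8.1 (here opt = n, the yes-instance optimum of the reduction of [12]), the construction above yields:

```latex
% Parameters of the MMIS reduction, matching the hypotheses of Claim 8.1:
\mathrm{opt}' = f(n) \quad \text{(yes instance, $f$ arbitrarily small)},
\qquad
\mathrm{opt}' \ge q(n) \quad \text{(no instance, $q$ arbitrarily large)},
```

with a construction of size 2^{o(n)}, so the claim applies directly.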
9 Open problems
To disprove the Fellows conjecture for a problem we need to give a g(opt) approximation for the problem that runs in h(opt) · n^{O(1)} time. This question was already systematically addressed in [2]. A large collection of W[1]-hard problems that also have strong inapproximability results are given a g(opt) approximation for some increasing function g. The algorithms of [2] run in polynomial time. However, all problems presented there were only W[1]-hard.
Open question 1: Is there an f (opt) approximation for any W[2]-hard problem, that runs in
polynomial time?
It may be hard to prove that such an algorithm does not exist, as that would imply W[1] ≠ W[2]. Still, this suggests an indirect way to try and prove this important and widely believed conjecture.
Another way to make the optimum smaller is by taking a random sample.
Open question 2: Can random sampling drastically reduce the size of the optimum in certain
problems? In particular, is it possible for Clique?
References
[1] C. Calabro, R. Impagliazzo, and R. Paturi. A duality between clause width and clause density for SAT. In IEEE Conference on Computational Complexity, pages 252–260, 2006.
[2] R. Chitnis, M. Hajiaghayi, and G. Kortsarz. Fixed parameter and approximation algorithms:
a new look. Manuscript, 2013.
[3] I. Dinur. The PCP theorem by gap amplification. J. ACM, 54(3):12, 2007.
[4] I. Dinur, E. Mossel, and O. Regev. Conditional hardness for approximate coloring. SIAM J. Comput., 39(3):843–873, 2009.
[5] U. Feige and J. Kilian. On limited versus polynomial nondeterminism. Chicago J. Theor.
Comput. Sci., 1997, 1997.
[6] M. Fellows, J. Guo, D. Marx, and S. Saurabh. Data reduction and problem kernels (Dagstuhl seminar 12241). Dagstuhl Reports, 2(6):26–50, 2012.
[7] M. R. Fellows, C. Knauer, N. Nishimura, P. Ragde, F. A. Rosamond, U. Stege, D. M. Thilikos, and S. Whitesides. Faster fixed-parameter tractable algorithms for matching and packing problems. In ESA, pages 311–322, 2004.
[8] V. Guruswami, S. Khanna, R. Rajaraman, F. B. Shepherd, and M. Yannakakis. Near-optimal hardness results and approximation algorithms for edge-disjoint paths and related problems. J. Comput. Syst. Sci., 67(3):473–496, 2003.
[9] M. M. Halldórsson. Approximating the minimum maximal independence number. Inf. Process. Lett., 46(4):169–172, 1993.
[10] J. Håstad. Clique is hard to approximate within n^{1−ε}. In FOCS, pages 627–636, 1996.
[11] R. Impagliazzo, R. Paturi, and F. Zane. Which problems have strongly exponential complexity? In FOCS, pages 653–663, 1998.
[12] R. W. Irving. On approximating the minimum independent dominating set. Inf. Process. Lett., 37(4):197–200, 1991.
[13] S. Khot. Improved inapproximability results for Max Clique, chromatic number and approximate graph coloring. In FOCS, pages 600–609, 2001.
[14] C. Lund and M. Yannakakis. On the hardness of approximating minimization problems. J. ACM, 41(5):960–981, 1994.
[15] D. Moshkovitz. The Projection Games Conjecture and the NP-hardness of ln n-approximating Set-Cover. In APPROX, 2012.
[16] D. Moshkovitz and R. Raz. Two-query PCP with subconstant error. J. ACM, 57(5), 2010.
[17] R. Raz and S. Safra. A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In STOC, pages 475–484, 1997.
[18] D. Zuckerman. Linear degree extractors and the inapproximability of Max Clique and chromatic number. Theory of Computing, 3(1):103–128, 2007.