Set Cover Problem
Given a universe U and a collection S of sets whose union equals the universe, the set
cover problem is to identify the smallest subcollection of S whose union is the
universe.
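As a concrete sketch, the classic greedy approximation (repeatedly take the set covering the most uncovered elements) could look like this in Python; the function name and example data are my own, not from the notes:

```python
# A sketch of the classic greedy approximation for set cover
# (names and example data are illustrative).
def greedy_set_cover(universe, sets):
    """Return indices into `sets` whose union covers `universe`."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Greedily pick the set covering the most uncovered elements.
        i = max(range(len(sets)), key=lambda k: len(uncovered & sets[k]))
        if not uncovered & sets[i]:
            raise ValueError("the sets do not cover the universe")
        chosen.append(i)
        uncovered -= sets[i]
    return chosen

print(greedy_set_cover({1, 2, 3, 4, 5}, [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
# → [0, 3]
```

Note the greedy strategy only approximates the optimum (within a factor of about ln n); finding the exact minimum cover is NP-hard.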
Simply put, the algorithm initializes the distance to the source to 0 and all other nodes to infinity. Then,
for all edges, if the distance to the destination can be shortened by taking the edge, the distance is
updated to the new lower value. At each iteration i that the edges are scanned, the algorithm finds all
shortest paths of at most length i edges. Since the longest possible shortest path without a cycle can have at most |V| - 1
edges, the edges must be scanned |V| - 1 times to ensure the shortest path
has been found for all nodes. A final scan of all the edges is performed and if any distance is updated,
then a path of length |V| edges has been found, which can only occur if at least one negative cycle
exists in the graph.
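A minimal sketch of the procedure just described (the function name and edge-list representation are my own choices):

```python
# A minimal sketch of the scan-all-edges procedure described above.
def bellman_ford(n, edges, source):
    """edges: list of (u, v, weight). Returns (dist, has_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # After pass i, all shortest paths of at most i edges are correct,
    # so |V| - 1 passes suffice.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Final scan: any further improvement implies a negative cycle.
    neg_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, neg_cycle
```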
3.
a) len(x, i) = 1 + max(len(x1, i-1) (if x1 is less than x), len(x2, i-2) (if x2 is less
than x), len(x3, i-3) (if x3 is less than x), ..., len(xn, 1) (if xn is less than x), 0)
b) You could use the exact same equation, but this time add a third
parameter: a vector containing the longest ascending subsequence
ending with xi. To compute it for xi in the previous equation, simply take the
largest subvector previously computed whose tail is less than x and
then append x.
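The recurrence in (a), together with the extension in (b), might be coded bottom-up like this (my reading of the equation; names are illustrative):

```python
# Bottom-up O(n^2) version of the recurrence above; `best` is the
# third parameter from part (b), the subsequence itself.
def longest_ascending_subsequence(xs):
    if not xs:
        return []
    n = len(xs)
    length = [1] * n            # length of the best subsequence ending at xs[i]
    best = [[x] for x in xs]    # part (b): the subsequence ending at xs[i]
    for i in range(n):
        for j in range(i):
            if xs[j] < xs[i] and length[j] + 1 > length[i]:
                length[i] = length[j] + 1
                best[i] = best[j] + [xs[i]]
    return best[max(range(n), key=length.__getitem__)]

print(longest_ascending_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))
# → [3, 4, 5, 9]
```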
This can be solved in O(n^2) using dynamic programming. Basically, the problem is
about building the longest palindromic subsequence in x[i,...,j] using the longest
subsequences for x[i+1,...,j], x[i,...,j-1] and x[i+1,...,j-1] (if the first and last letters are the
same).
Firstly, the empty string and a single-character string are trivially palindromes. Notice
that for a substring x[i,...,j], if x[i]==x[j], the length of the longest
palindrome is the longest palindrome over x[i+1,...,j-1] plus 2. If they don't match, the
longest palindrome is the maximum of that of x[i+1,...,j] and x[i,...,j-1].
This gives us the function:
longest(i,j) = j-i+1                               if j-i <= 0,
               2 + longest(i+1,j-1)                if x[i] == x[j],
               max(longest(i+1,j), longest(i,j-1)) otherwise
You can simply implement a memoized version of that function, or code a table of
longest[i][j] bottom up.
This gives you only the length of the longest subsequence, not the actual
subsequence itself. But it can easily be extended to do that as well.
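For instance, a memoized version of longest(i, j) could be sketched as follows (function names are mine):

```python
from functools import lru_cache

# A memoized version of longest(i, j) from the text.
def longest_palindromic_subsequence(x):
    @lru_cache(maxsize=None)
    def longest(i, j):
        if j - i <= 0:                  # empty or single-character substring
            return j - i + 1
        if x[i] == x[j]:
            return 2 + longest(i + 1, j - 1)
        return max(longest(i + 1, j), longest(i, j - 1))
    return longest(0, len(x) - 1)

print(longest_palindromic_subsequence("bbbab"))  # → 4 ("bbbb")
```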
The knapsack problem is interesting from the perspective of computer science for many reasons:
The decision problem form of the knapsack problem (Can a value of at least V be achieved
without exceeding the weight W?) is NP-complete, thus there is no possible algorithm both correct
and fast (polynomial-time) on all cases, unless P=NP.
While the decision problem is NP-complete, the optimization problem is NP-hard: its resolution
is at least as difficult as the decision problem, and there is no known polynomial algorithm which
can tell, given a solution, whether it is optimal (which would mean that there is no solution with a
larger V, thus solving the NP-complete decision problem).
Many cases that arise in practice, and "random instances" from some distributions, can
nonetheless be solved exactly.
The problem can nonetheless be solved in pseudo-polynomial time using dynamic programming, as shown below.
Meet-in-the-middle algorithm
input: a set of items with weights and values
output: the greatest combined value of a subset

partition the set {1...n} into two sets A and B of approximately equal size
compute the weights and values of all subsets of each set
for each subset of A
    find the subset of B of greatest value such that the combined weight is less than W
keep track of the greatest combined value seen so far
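A rough Python translation of this pseudocode (the names and the sorted prefix-maximum trick for searching B's subsets are my own):

```python
from itertools import combinations
from bisect import bisect_right

# A sketch of meet-in-the-middle for knapsack, per the pseudocode above.
def knapsack_mitm(items, W):
    """items: list of (weight, value) pairs; returns the greatest value
    of a subset with combined weight <= W."""
    def subsets(half):
        return [(sum(w for w, _ in c), sum(v for _, v in c))
                for r in range(len(half) + 1)
                for c in combinations(half, r)]

    A, B = items[:len(items) // 2], items[len(items) // 2:]
    b = sorted(subsets(B))                  # B's subsets, ordered by weight
    weights = [w for w, _ in b]
    best_vals, best = [], 0
    for _, v in b:                          # best_vals[i] = best value among
        best = max(best, v)                 # B-subsets of weight <= weights[i]
        best_vals.append(best)

    answer = 0
    for wa, va in subsets(A):
        if wa > W:
            continue
        i = bisect_right(weights, W - wa) - 1   # heaviest B-subset that fits
        if i >= 0:
            answer = max(answer, va + best_vals[i])
    return answer
```

Sorting B's subsets and precomputing running maxima lets each lookup be a binary search rather than a linear scan.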
To simplify things, assume all weights are strictly positive (wi > 0). We wish to maximize total value
subject to the constraint that total weight is less than or equal to W. Then for each w <= W,
define m[w] to be the maximum value that can be attained with total weight less than or equal
to w. m[W] then is the solution to the problem.
Observe that m[w] has the following properties:

m[0] = 0 (the sum of zero items, i.e., the summation of the empty set)

m[w] = max(vi + m[w - wi] : wi <= w), where vi is the value of the i-th kind of item.

(To formulate the equation above, the idea used is that the solution for a knapsack is the same as the
value of one correct item plus the solution for a knapsack with smaller capacity, specifically one with
the capacity reduced by the weight of that chosen item.)

Here the maximum of the empty set is taken to be zero. Tabulating the results from m[0]
up through m[W] gives the solution. Since the calculation of each m[w] involves examining n items,
and there are W values of m[w] to calculate, the running time of the dynamic
programming solution is O(nW). This
complexity does not contradict the fact that the knapsack problem is NP-complete,
since W, unlike n, is not polynomial in the length of the input to the problem. The length of the
W input to the problem is proportional to the number of bits in W, log W, not to W
itself.
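Tabulating m[w] per this recurrence might be sketched as (names are illustrative; the maximum of the empty set is taken to be zero via `default=0`):

```python
# Unbounded knapsack: tabulate m[0] up through m[W] per the recurrence above.
def unbounded_knapsack(weights, values, W):
    m = [0] * (W + 1)                       # m[0] = 0: the empty set
    for w in range(1, W + 1):
        m[w] = max((values[i] + m[w - weights[i]]
                    for i in range(len(weights)) if weights[i] <= w),
                   default=0)               # no item fits -> empty max = 0
    return m[W]

print(unbounded_knapsack([2, 3], [3, 5], 7))  # → 11 (values 3 + 3 + 5)
```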
A similar dynamic programming solution for the 0/1 knapsack problem also runs in pseudo-polynomial
time. Assume m[i, w] to be the maximum value that can be attained with weight less than or equal to w
using items up to i (first i items).
We can define m[i, w] recursively as follows:

m[0, w] = 0

m[i, w] = m[i-1, w] if wi > w (the new item is more than the current weight limit)

m[i, w] = max(m[i-1, w], m[i-1, w - wi] + vi) if wi <= w.

The solution can then be found by calculating m[n, W]. To do this efficiently we can use a table
to store previous computations.
The following is pseudo code for the dynamic program:
// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)

for j from 0 to W do
    m[0, j] := 0
end for

for i from 1 to n do
    for j from 0 to W do
        if w[i] <= j then
            m[i, j] := max(m[i-1, j], m[i-1, j-w[i]] + v[i])
        else
            m[i, j] := m[i-1, j]
        end if
    end for
end for
This solution will therefore run in O(nW) time and O(nW) space. Additionally, if we use
only a 1-dimensional array m[w] to store the current optimal values and pass over this
array i + 1 times, rewriting from m[W] down to m[w1] each time, we get the same result for
only O(W) space.
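The 1-dimensional-array variant could be sketched like this (names are mine; iterating the weights downward is what ensures each item is used at most once):

```python
# 0/1 knapsack with a 1-dimensional array: O(nW) time, O(W) space.
def knapsack_01(weights, values, W):
    m = [0] * (W + 1)
    for i in range(len(weights)):
        # Iterate j downward so m[j - weights[i]] still refers to the
        # previous row, i.e. each item is taken at most once.
        for j in range(W, weights[i] - 1, -1):
            m[j] = max(m[j], m[j - weights[i]] + values[i])
    return m[W]

print(knapsack_01([3, 4, 2], [4, 5, 3], 6))  # → 8 (items of value 5 and 3)
```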
Note: Algorithm V should work for EVERY G, for SOME C. For every input there should
EXIST information (a certificate) that could help us verify whether the input is in the problem
domain or not. That is, there should not be an input for which such information doesn't
exist.
2) Prove it's NP-hard.
This involves taking a known NP-complete problem like SAT (the set of boolean
expressions in the form "(A OR B OR C) AND (D OR E OR F) AND ..." where the
expression is satisfiable, i.e. there exists some setting of these booleans which makes
the expression true).
Then reduce the NP-complete problem to your problem in polynomial time.
That is, given some input X for SAT (or whatever NP-complete problem you are
using), create some input Y for your problem such that X is in SAT if and only if Y is in
your problem. The function f:X -> Y must run in polynomial time.
In the example above the input Y would be the graph G and the size of the vertex
cover k.
For a full proof, you'd have to prove both: