Generalized Assignment Problem Approximation Worksheets

  • Lecture 1 (Jan 16): Introduction, TSP, ATSP. (rough notes)

    The 1.5-factor approximation algorithm for (metric) TSP is due to Christofides from 1976. It is an outstanding open problem to improve this factor. The O(log n)-factor approximation algorithm for ATSP is due to Frieze, Galbiati, and Maffioli from 1982. This was improved in 2011 by Asadpour et al. in this paper. We may cover this later in the course. A much newer awesome result is due to Anari and Oveis Gharan.

  • Lecture 2 (Jan 23): Greedy Algorithms. (rough notes)

    The greedy algorithm for the set cover problem was discovered independently by many (Johnson, Lovász, Chvátal), mainly in the late seventies. The greedy algorithm for (monotone) submodular function maximization is due to Nemhauser, Wolsey, and Fisher from 1978. Submodular set cover was studied by Wolsey in 1982. The unconstrained (not-necessarily-monotone) submodular function maximization algorithm alluded to in class is from this millennium -- a FOCS 2012 paper by Buchbinder, Feldman, Naor, and Schwartz.

  • Lecture 3 (Jan 27): Euclidean TSP. (rough notes)

    Our presentation is mainly drawn from Sanjeev Arora's excellent survey on approximation algorithms for geometric problems. The current best running time for a Euclidean TSP PTAS is due to Bartal and Gottlieb from 2013. Another related impressive paper is Klein's PTAS for TSP on planar graphs.

  • Lecture 4 (Jan 30): Linear Programming Relaxations, Facility Location, GAP. (rough notes)

    The facility location algorithm covered in class is due to Shmoys, Tardos, and Aardal in 1998. This was the first constant factor approximation. The current best factor is 1.488 by Shi Li in this paper. The approximation algorithm for GAP is due to Shmoys and Tardos from 1993.

  • Lecture 5 (Feb 6): Iterated Rounding. (rough notes)

    Iterated rounding in approximation algorithms was introduced by Kamal Jain in his 1998 paper on the survivable network design problem, which is explored in Problem Set 5. The proof done in class is due to Nagarajan, Ravi, and Singh from 2010. There is a whole book on iterative methods by Lau, Ravi, and Singh, which has a lot of applications.

  • Lecture 6 (Feb 13): Randomized Rounding. (rough notes)

    We looked at the use of randomization in approximation algorithms. The use of randomness often makes algorithms easier both to design and to analyze. Often such randomized algorithms can be "derandomized", that is, one can get deterministic algorithms which achieve the same approximation factor, but this is not known to be possible in general. (Whether randomness strictly helps in computation is one of the striking open questions in computer science.) The algorithm for minimum congestion routing is due to Raghavan and Thompson from 1985 and is one of the first uses of Chernoff bounds in algorithm design. There has been plenty of work following it, and this blurb doesn't suffice to summarize it. The algorithm for GAP is due to Fleischer, Goemans, Mirrokni, and Sviridenko from 2006. A good reference for concentration inequalities is the book (draft here) by Dubhashi and Panconesi.
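
    To make the rounding step concrete, here is a minimal Python sketch of Raghavan-Thompson-style randomized rounding. It assumes each demand pair already comes with a fractional distribution over candidate paths (say, from a path decomposition of the LP solution); the input format and function name are ours, not from the paper or the lecture.

    ```python
    import random

    def round_paths(fractional_paths, seed=0):
        """Randomized rounding for congestion minimization (sketch).

        fractional_paths: {pair_id: [(path_as_tuple_of_edges, weight), ...]},
        where each pair's weights sum to 1 (assumed input format).
        Independently picks one path per pair with probability equal to its
        weight and reports the resulting load on every edge.
        """
        rng = random.Random(seed)
        chosen, congestion = {}, {}
        for pair, options in fractional_paths.items():
            paths, weights = zip(*options)
            path = rng.choices(paths, weights=weights, k=1)[0]
            chosen[pair] = path
            for edge in path:
                congestion[edge] = congestion.get(edge, 0) + 1
        return chosen, congestion
    ```

    The expected load of an edge equals its fractional load, and since each edge's load is a sum of independent indicators (one per demand pair), a Chernoff bound plus a union bound over edges controls the maximum congestion.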

  • Lecture 7 (Feb 17): Sublinear Approximation. (rough notes)

    We got a glimpse into the world of sublinear algorithms for approximation problems. I had planned to talk about vertex cover and set cover but only got through vertex cover due to the discussions in class. I summarize and clarify that discussion at the end of the posted notes. These topics are covered excellently in Krzysztof Onak's Ph.D. thesis. More recent work by Onak et al. improves the query complexity to essentially optimal.

  • Lecture 8 (Feb 20): Using the Dual. (rough notes)

    We looked at the dual of an LP. Then we described dual fitting and the primal-dual methodology on set cover. Then we talked a bit about how the dual can be used to solve LPs with exponentially many variables and polynomially many constraints (like the configuration LP from last class). Finally, we looked at how primal-dual, dual fitting, and randomized rounding come together to solve the online set cover problem.
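
    As a concrete companion to the primal-dual discussion, here is a minimal Python sketch of the primal-dual method on set cover. The input format and names are ours, and this is a sketch rather than exactly what was done in class.

    ```python
    def primal_dual_set_cover(universe, sets, cost):
        """Primal-dual method on set cover (sketch).

        universe: iterable of elements; sets: {name: set of elements};
        cost: {name: nonnegative cost}.  (Input format is an assumption;
        every element is assumed to lie in at least one set.)
        Repeatedly pick an uncovered element, raise its dual variable until
        some set containing it becomes tight (fully paid for by its
        elements' duals), and buy that set.  The cover costs at most f * OPT,
        where f is the largest number of sets any single element belongs to.
        """
        elements = set(universe)
        y = {e: 0.0 for e in elements}      # dual variable per element
        paid = {S: 0.0 for S in sets}       # dual payment accumulated by each set
        cover, covered = [], set()
        while covered != elements:
            e = next(iter(elements - covered))               # any uncovered element
            containing = [S for S in sets if e in sets[S]]
            tight = min(containing, key=lambda S: cost[S] - paid[S])
            delta = cost[tight] - paid[tight]                # raise y[e] until `tight` is paid for
            y[e] += delta
            for S in containing:
                paid[S] += delta
            cover.append(tight)
            covered |= sets[tight]
        return cover, y
    ```

    The analysis is visible in the code: each purchased set is fully paid for by its elements' duals, and every unit of dual raised is charged to at most f sets, so the cover costs at most f times the (feasible) dual objective, which by weak duality is at most f times OPT.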

  • Lecture 9 (Mar 6th): Cuts in Graphs. (scribe notes by Saravanan)

    We looked at three problems -- the minimum s-t cut, the multiway cut, and the multicut problem. The algorithms done in class are due to Calinescu, Karloff, and Rabani from 1998. The current best approximation factor for the multiway cut problem is (as of March 2015) 1.2965, due to Sharma and Vondrak.

  • Lecture 10 (Mar 13th): The Sparsest Cut. (scribe notes by Sayan and Indranil)

    We spent most of our time on the uniform sparsest cut and gave an O(log n) approximation algorithm. The result is due to Leighton and Rao from 1988, but the journal version from 1999 is a beautiful read and has many generalizations in it. In particular, it gives O(log n) approximations for what is called the minimum conductance cut. The conductance of a cut is defined as cap(S)/(E[S]·E[bar(S)]) rather than cap(S)/(|S|·|bar(S)|). The general sparsest cut is handled via metric embeddings and is due to Linial, London, and Rabinovich from 1995. The metric embedding result itself is due to Bourgain. We didn't have time to cover this and may do so next class.
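
    For the record, the two objectives in cleaner notation (reading E[S] as the total capacity of the edges incident to S, which is our reading of the notes' notation):

    \[
    \text{sparsity}(S) \;=\; \frac{\mathrm{cap}(S,\bar S)}{|S|\cdot|\bar S|},
    \qquad
    \text{conductance}(S) \;=\; \frac{\mathrm{cap}(S,\bar S)}{E[S]\cdot E[\bar S]},
    \qquad \emptyset \neq S \subsetneq V,
    \]

    and in both cases one minimizes over all cuts (S, bar(S)).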

  • Lecture 11a (Mar 20th): Bourgain's theorem via padded decomposition. (rough notes)

  • Lecture 11b (Mar 20th): Semidefinite Programming. (scribe notes by Nitin and Amleshwar)

    We introduced semidefinite programming, a vector relaxation of integer programs. Goemans and Williamson's MAX-CUT algorithm that we saw in class is randomized. The derandomization is nontrivial and is due to Hariharan and Mahajan. An instance with a tight integrality gap was given by Feige and Schechtman; the analysis is a gem.
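
    A minimal Python sketch of the random-hyperplane rounding step, assuming the unit vectors v_i from the SDP solution are already in hand (e.g. via a Cholesky factorization of the Gram matrix); the function names and input format are ours.

    ```python
    import numpy as np

    def hyperplane_round(vectors, seed=0):
        """Goemans-Williamson rounding (sketch): cut the unit vectors v_i
        (the rows of `vectors`) with a uniformly random hyperplane through
        the origin.  Returns +1/-1 labels; vertices with different labels
        lie on opposite sides of the cut."""
        rng = np.random.default_rng(seed)
        r = rng.standard_normal(vectors.shape[1])   # random normal direction
        labels = np.sign(vectors @ r)
        labels[labels == 0] = 1                     # break (measure-zero) ties
        return labels

    def cut_value(labels, edges):
        """Number of edges (i, j) crossing the cut."""
        return sum(1 for i, j in edges if labels[i] != labels[j])
    ```

    In the analysis, each edge {i, j} is cut with probability θ_ij/π, where θ_ij is the angle between v_i and v_j; comparing this with the edge's SDP contribution (1 − cos θ_ij)/2 gives the 0.878 guarantee.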

  • Lecture 12 (Mar 27th): Graph coloring and Miniaturization. (scribe notes by Ajith and Dheeraj)

    We discussed Karger, Motwani, and Sudan's algorithm for coloring 3-colorable graphs. The current best in this line of work is that of Kawarabayashi and Thorup (2014), who achieve O(n^{0.19996}) colors; this paper uses combinatorics to fight the hard case for the previous best SDP-based algorithm. In the second part of the class, we talked about an approach pioneered by Prasad Raghavendra that may turn out to be the "ultimate" rounding method for a wide class of semidefinite relaxations. We discussed it for the MAX-CUT problem. Our presentation is based on the treatment in Gärtner and Matoušek's book.

  • Lecture 13 (Apr 7th): SDP Relaxations for Max CSP. (rough notes)

    In Lecture 11b, we'd already given an SDP relaxation for Max-2-SAT. Here, we saw how to make it tighter by adding "triangle inequality constraints". Then, we talked about the so-called canonical SDP relaxation for any Max-k-CSP, which involves scalar variables that give a probability distribution over the local assignments to each clause, and vector variables that play the role of a joint distribution on the global assignment to all the variables, together with "first moment" and "second moment" consistency constraints between the two sets of variables. We also briefly discussed how this SDP can be rounded using the miniaturization method introduced in the last lecture.
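
    For concreteness, here is one common way to write this canonical relaxation (our notation; a sketch, not necessarily verbatim what was done in class). For each constraint C there are scalars x_{C,α}, one per local assignment α to the variables in C's scope, and for each variable i and value a there is a vector v_{i,a}:

    \[
    \max \;\; \sum_{C} w_C \sum_{\alpha} x_{C,\alpha}\, C(\alpha)
    \]
    subject to
    \[
    \sum_{\alpha} x_{C,\alpha} = 1, \quad x_{C,\alpha} \ge 0
    \qquad \text{for every constraint } C,
    \]
    \[
    \langle v_{i,a},\, v_{j,b} \rangle \;=\; \sum_{\alpha:\; \alpha(i)=a,\ \alpha(j)=b} x_{C,\alpha}
    \qquad \text{for every } C \text{ and all } i, j \text{ in its scope.}
    \]

    The scalars form the local distributions, and the Gram matrix of the vectors is forced to match their first moments (take i = j, a = b) and second moments.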

  • Lecture 14 (Apr 11th): Proving Hardness of Approximation. (scribe notes by Anurita and Divya)

    We shifted to showing hardness of approximation. For standard decision problems, we have the tool of polynomial-time reductions and NP-completeness. For optimization problems, we introduced the notion of "gap problems", which are decision problems with a promise (here's a great survey on problems with promised instances). We looked at reductions from standard NP-complete problems and also at approximation-preserving reductions from gap problems. Finally, we introduced PCP's.

  • Lecture 15 (Apr 17th): PCP's and Label Cover. (scribe notes by Cressida and Marilyn (part 1) and Chinmay and Raghav (part 2))

    Hastad's theorem on hardness of Gap-E3Lin mentioned in class is from this landmark paper. Hastad also showed that [1,7/8+ε]-Gap-Max3SAT is NP-hard. We saw in class the weaker fact, through a reduction from Gap-E3Lin, that it is NP-hard to approximate Max3SAT to better than a 7/8 factor. We also introduced the Label Cover problem, discussed its NP-hardness, and saw a reduction from Label Cover to Set Cover. Our presentation was based on Chapter 16 in Williamson-Shmoys.
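
    For the record, here is the standard gadget behind such a reduction (a sketch; the presentation in class may have differed in details). Each equation x ⊕ y ⊕ z = 0 is replaced by the four clauses

    \[
    (\bar x \vee y \vee z),\quad (x \vee \bar y \vee z),\quad (x \vee y \vee \bar z),\quad (\bar x \vee \bar y \vee \bar z),
    \]

    and for x ⊕ y ⊕ z = 1 one uses the same four clauses with every literal negated. An assignment satisfying the equation satisfies all four clauses, while an assignment violating it satisfies exactly three. Hence a (1 − ε)-satisfiable E3Lin instance maps to a (1 − ε/4)-satisfiable 3SAT instance, while an at most (1/2 + ε)-satisfiable one maps to an at most (7/8 + ε/4)-satisfiable one, which is where the 7/8 factor comes from.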

  • Lecture 16 (Apr 25th): Unique Games. (scribe notes by Abhijat and Suprovat)

    The definition of unique games is due to a foresightful work of Subhash Khot. This survey of his is a very nice overview of the unique games conjecture and the progress so far. We saw a direct reduction from unique games to the multicut problem. We then discussed at a very high level the "long code"-based approach to constructing PCP's. A detailed discussion is beyond the scope of this course, but there are some great lecture notes available online: here and here.
