Dagstuhl-Seminar 18051
Proof Complexity
(28 Jan – 02 Feb 2018)
Organizers
- Albert Atserias (UPC - Barcelona, ES)
- Jakob Nordström (KTH Royal Institute of Technology - Stockholm, SE)
- Pavel Pudlák (The Czech Academy of Sciences - Prague, CZ)
- Rahul Santhanam (University of Oxford, GB)
Contact
- Shida Kunz (for scientific matters)
- Annette Beyer (for administrative matters)
Impacts
- Cops-Robber Games and the Resolution of Tseitin Formulas - Galesi, Nicola; Talebanfard, Navid; Toran, Jacobo. In: SAT 2018: Theory and Applications of Satisfiability Testing. Berlin: Springer, 2018, pp. 311-326 (Lecture Notes in Computer Science; 10929).
- Short Proofs in QBF Expansion - Beyersdorff, Olaf; Chew, Leroy; Clymo, Judith; Mahajan, Meena. Haifa: Electronic Colloquium on Computational Complexity, 2018, 16 pp. (Electronic Colloquium on Computational Complexity Report; 102).
The study of proof complexity was initiated by Cook and Reckhow in 1979 as a way to attack the P vs. NP problem, and in the ensuing decades many powerful techniques have been discovered for understanding different proof systems. Proof complexity also gives a way of studying subsystems of Peano Arithmetic where the power of mathematical reasoning is restricted, and of quantifying how complex different mathematical theorems are when measured in terms of the strength of the methods of reasoning required to establish their validity. Moreover, it makes it possible to analyse the power and limitations of the satisfiability algorithms (SAT solvers) used in industrial applications on formulas containing up to millions of variables.
During the last 10-15 years the area of proof complexity has seen a revival with many exciting results, and new connections have also been revealed with other areas such as, e.g., cryptography, algebraic complexity theory, communication complexity, and combinatorial optimization. While many longstanding open problems from the 1980s and 1990s still remain unsolved, recent progress gives hope that the area may be ripe for decisive breakthroughs.
In this Dagstuhl Seminar, we aim to bring together leading experts covering the whole spectrum of proof complexity from Frege proof systems and circuit-inspired lower bounds via geometric and algebraic proof systems all the way to bounded arithmetic, as well as prominent researchers from neighbouring fields. We hope and believe that such a seminar could have a major impact at this critical juncture. Besides the regular program including talks about recent research results, we plan to organize open problem sessions, and expect ongoing research discussions throughout the week in smaller groups.
This workshop brought together the whole proof complexity community spanning from Frege proof systems and circuit-inspired lower bounds via geometric and algebraic proof systems all the way to bounded arithmetic. In this executive summary, we first give an overview of proof complexity, and then describe the goals of the seminar week. Finally, we discuss the relation to previous workshops and conferences.
Topic of the Seminar
Ever since the groundbreaking NP-completeness paper of Cook [18], the problem of deciding whether a given propositional logic formula is satisfiable or not has been on centre stage in theoretical computer science. During the last two decades, Satisfiability has also developed from a problem of mainly theoretical interest into a practical approach for solving applied problems. Although all known Boolean satisfiability solvers (SAT solvers) have exponential running time in the worst case, enormous progress in performance has led to satisfiability algorithms becoming a standard tool for solving large-scale problems in, for example, hardware and software verification, artificial intelligence, bioinformatics, operations research, and sometimes even pure mathematics.
The study of proof complexity originated with the seminal paper of Cook and Reckhow [19]. In its most general form, a proof system for a formal language L is a predicate P(x, pi), computable in time polynomial in the sizes |x| and |pi| of the input, and having the property that for all x in L there exists a string pi (a proof) for which P(x, pi) evaluates to true, whereas for any x not in L it holds for all strings pi that P(x, pi) evaluates to false. A proof system is said to be polynomially bounded if for every x in L there exists a proof pi_{x} for x that has size at most polynomial in |x|. A propositional proof system is a proof system for the language of tautologies in propositional logic, i.e. for formulas that always evaluate to true no matter how the values true and false are assigned to variables in the formula.
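As a toy illustration of the Cook–Reckhow definition (a sketch of ours, not part of the report), the following Python snippet implements a "truth-table" proof system for tautologies: verification runs in time polynomial in the combined input and proof size, and the system is sound and complete, but since the proofs enumerate all assignments they are exponentially long, so the system is not polynomially bounded.

```python
from itertools import product

def verify(formula, n_vars, proof):
    """A toy 'truth-table' proof system for propositional tautologies.

    A proof is simply the list of all 2^n assignments; the verifier checks
    that every assignment is present and that each satisfies the formula.
    This runs in time polynomial in |formula| + |proof|, as the Cook-Reckhow
    definition requires, but the proofs themselves are exponentially long.
    """
    if sorted(proof) != sorted(product([False, True], repeat=n_vars)):
        return False  # proof does not enumerate every assignment
    return all(formula(*a) for a in proof)

# x OR NOT x is a tautology; its proof is the full (here: two-row) truth table.
taut = lambda x: x or not x
proof = list(product([False, True], repeat=1))
print(verify(taut, 1, proof))  # True
```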
From a theoretical point of view, one important motivation for proof complexity is the intimate connection with the fundamental problem of P versus NP. Since NP is exactly the set of languages with polynomially bounded proof systems, and since Tautology can be seen to be the dual problem of Satisfiability, we have the famous theorem of [19] that NP = coNP if and only if there exists a polynomially bounded propositional proof system. Thus, if it could be shown that there are no polynomially bounded proof systems for tautologies, P ≠ NP would follow as a corollary since P is closed under complement. One way of approaching this problem is to study stronger and stronger proof systems and try to prove superpolynomial lower bounds on proof size. However, although great progress has been made in the last couple of decades for a variety of proof systems, this goal still appears very distant. A second theoretical motivation is that simple propositional proof systems provide analogues of subsystems of Peano Arithmetic where the power of mathematical reasoning is restricted. Of particular interest here are various bounded arithmetic systems, which in some sense are intended to capture feasible/polynomial-time reasoning. Proving strong lower bounds on propositional logic encodings of some combinatorial principle, say, in a propositional proof system can in this way show that establishing the validity of this principle requires more powerful mathematics than what is provided by the corresponding subsystem of Peano Arithmetic. One can thus quantify how "deep" different mathematical truths are, as well as shed light on the limits of our (human, rather than automated) proof techniques. At the same time, since it is an empirically verified fact that low-complexity proofs generalize better and are often more constructive, classifying which truths have feasible proofs is also a way to approach the classification of algorithmic problems by their computational complexity. 
The precise sense in which this can be formalized into a tool for the complexity theorist is one of the goals of bounded arithmetic.
A third prominent motivation for the study of proof complexity is also algorithmic but of a more practical nature. As was mentioned above, designing efficient algorithms for proving tautologies—or, equivalently, testing satisfiability—is a very important problem not only in the theory of computation but also in applied research and industry. All SAT solvers, regardless of whether they produce a written proof or not, explicitly or implicitly define a system in which proofs are searched for and rules which determine what proofs in this system look like. Proof complexity analyses what it takes to simply write down and verify the proofs that such a solver might find, ignoring the computational effort needed to actually find them. Thus, a lower bound for a proof system tells us that any algorithm, even an optimal (non-deterministic) one magically making all the right choices, must necessarily use at least the amount of a certain resource specified by this bound. In the other direction, theoretical upper bounds on some proof complexity measure give us hope of finding good proof search algorithms with respect to this measure, provided that we can design algorithms that search for proofs in the system in an efficient manner. The field of proof complexity also has rich connections to algorithmic analysis, combinatorial optimization, cryptography, artificial intelligence, and mathematical logic. A few good sources providing more details are [6, 17, 47].
A Very Selective Survey of Proof Complexity
Any propositional logic formula can be converted to a formula in conjunctive normal form (CNF) that is only linearly larger and is unsatisfiable if and only if the original formula is a tautology. Therefore, any sound and complete system that certifies the unsatisfiability of CNF formulas can be considered as a general propositional proof system.
The extensively studied resolution proof system, which appeared in [9] and began to be investigated in connection with automated theorem proving in the 1960s [21, 22, 48], is such a system where one derives new disjunctive clauses from an unsatisfiable CNF formula until an explicit contradiction is reached. Despite the apparent simplicity of resolution, the first superpolynomial lower bounds on proof size were obtained only after decades of study in 1985 [33], after which truly exponential size lower bounds soon followed in [15, 52]. It was shown in [8] that these lower bounds can be established by instead studying the width of proofs, i.e., the maximal size of clauses in the proofs, and arguing that any resolution proof for a certain formula must contain a large clause. It then follows by a generic argument that any such proof must also consist of very many clauses. Later research has led to a well-developed machinery for showing width lower bounds, and hence also size lower bounds, for resolution.
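To make the resolution rule concrete, here is a minimal sketch (ours, not from the report) in which clauses are sets of signed integers and a small unsatisfiable CNF formula is refuted in two resolution steps:

```python
def resolve(c1, c2, var):
    """Resolution rule: from C OR x and D OR NOT-x derive the resolvent C OR D.
    Clauses are frozensets of nonzero ints; k means variable k, -k its negation."""
    assert var in c1 and -var in c2
    return (c1 - {var}) | (c2 - {-var})

# Refute the unsatisfiable CNF (x OR y) AND (x OR NOT y) AND (NOT x):
c = resolve(frozenset({1, 2}), frozenset({1, -2}), 2)  # resolvent {x}
empty = resolve(c, frozenset({-1}), 1)                 # empty clause: contradiction
print(empty == frozenset())  # True
```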
The more general proof system polynomial calculus (PC), introduced in [1, 16], instead uses algebraic geometry to reason about SAT. In polynomial calculus clauses are translated to multilinear polynomials over some (fixed) field, and a CNF formula F is shown to be unsatisfiable by proving that there is no common root for the polynomials corresponding to all the clauses, or equivalently that the multiplicative identity 1 lies in the ideal generated by these polynomials. Here the size of a proof is measured as the number of monomials in a proof when all polynomials are expanded out as linear combinations of monomials, and the width of a clause corresponds to the (total) degree of the polynomial representing the clause. It can be shown that PC is at least as strong as resolution with respect to both size and width/degree, and there are families of formulas for which PC is exponentially stronger.
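The clause-to-polynomial translation can be illustrated with a small sketch of our own (using the standard {0,1} encoding in which a literal x becomes 1 - x and a literal NOT-x becomes x, so a clause polynomial vanishes exactly when the clause is satisfied). For an unsatisfiable CNF, the translated polynomials then have no common {0,1} root, which a brute-force check confirms on a tiny example:

```python
from itertools import product

def clause_poly(clause, assignment):
    """Evaluate the PC translation of a clause: literal x becomes (1 - x),
    literal NOT-x becomes x, and the clause is the product of its literals,
    so the polynomial is 0 exactly when the clause is satisfied."""
    val = 1
    for lit in clause:
        v = assignment[abs(lit)]
        val *= (1 - v) if lit > 0 else v
    return val

cnf = [{1, 2}, {1, -2}, {-1}]  # an unsatisfiable CNF
no_root = all(
    any(clause_poly(c, {1: a, 2: b}) != 0 for c in cnf)
    for a, b in product([0, 1], repeat=2)
)
print(no_root)  # True: the clause polynomials have no common {0,1} root
```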
In the work [36], which served, interestingly enough, as a precursor to [8], it was shown that strong lower bounds on the degree of polynomial calculus proofs are sufficient to establish strong size lower bounds. In contrast to the situation for resolution after [8], however, this has not been followed by a corresponding development of a generally applicable machinery for proving degree lower bounds. For fields of characteristic distinct from 2 it is sometimes possible to obtain lower bounds by doing an affine transformation from {0, 1} to the "Fourier basis" {−1, +1}, an idea that seems to have appeared first in [13, 28]. For fields of arbitrary characteristic a powerful technique for general systems of polynomial equations was developed in [2], which when restricted to CNF formulas F yields that polynomial calculus proofs require high degree if the corresponding clause-variable incidence graphs G(F) are good enough bipartite expander graphs. There are several provably hard formula families for which this criterion fails to apply, however, and even more formulas that are believed to be hard for both resolution and PC, but where lower bounds are only known for the former proof system and not the latter.
Another proof system that has been the focus of much research is cutting planes (CP), which was introduced in [20] as a way of formalizing the integer linear programming algorithm in [14, 27]. Here the disjunctive clauses in a CNF formula are translated to linear inequalities, and these linear inequalities are then manipulated to derive a contradiction. Thus, questions about the satisfiability of Boolean formulas are reduced to the geometry of polytopes over the real numbers. Cutting planes is easily seen to be at least as strong as resolution, since a CP proof can mimic any resolution proof line by line. An intriguing fact is that encodings of the pigeonhole principle, which are known to be hard to prove for resolution [33] and many other proof systems, are very easy to prove in cutting planes. It follows from this that not only is cutting planes never worse than resolution, but it can be exponentially stronger.
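As an illustration (a sketch of ours, not taken from the report), the following shows the standard clause-to-inequality translation over 0/1 variables, together with the Chvátal–Gomory division rule, which exploits integrality and is a source of the power of cutting planes:

```python
from math import ceil, gcd
from functools import reduce

def clause_to_ineq(clause):
    """Translate a clause into a CP inequality sum(coeffs * vars) >= rhs over
    0/1 variables: literal x contributes +x, literal NOT-x contributes (1 - x)."""
    coeffs, rhs = {}, 1
    for lit in clause:
        v = abs(lit)
        if lit > 0:
            coeffs[v] = coeffs.get(v, 0) + 1
        else:
            coeffs[v] = coeffs.get(v, 0) - 1
            rhs -= 1  # the constant 1 from (1 - x) moves to the right-hand side
    return coeffs, rhs

def divide(coeffs, rhs):
    """Chvátal-Gomory division rule: divide by the gcd of the coefficients and
    round the right-hand side up -- sound because the variables are integral."""
    d = reduce(gcd, (abs(c) for c in coeffs.values()))
    return {v: c // d for v, c in coeffs.items()}, ceil(rhs / d)

print(clause_to_ineq({1, -2}))  # x - y >= 0, i.e., x + (1 - y) >= 1
print(divide({1: 2, 2: 2}, 1))  # from 2x + 2y >= 1 derive x + y >= 1
```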
Exponential lower bounds on proof length for cutting planes were first proven in [10] for the restricted subsystem CP*, where all coefficients in the linear inequalities can be at most polynomial in the formula size, and were later extended to general CP in [34, 44]. The proof technique in [44] is very specific, however, in that it works by interpolating monotone Boolean circuits for certain problems from CP proofs of related formulas with a very particular structure, and then appealing to lower bounds in circuit complexity. A longstanding open problem is to develop techniques that would apply to other formula families. For example, establishing that randomly sampled k-CNF formulas are hard to refute for CP, or that CP cannot efficiently prove the fact that the sum of all vertex degrees in an undirected graph is even (encoded in so-called Tseitin formulas), would constitute major breakthroughs.
We remark that there are also other proof systems inspired by linear and semidefinite programming, e.g., in [38, 39, 50], which are somewhat similar to but incomparable with cutting planes, and a deeper understanding of which appears even more challenging. Some notable early papers in proof complexity investigating these so-called semialgebraic proof systems were published around the turn of the millennium in [30, 31, 45], but then this area of research seems to have gone dormant. In the last few years, these proof systems have made an exciting reemergence in the context of hardness of approximation, revealing unexpected and intriguing connections between approximation and proof complexity. A precursor to this is the work by Schoenebeck [49], which gave strong integrality gaps in the so-called Lasserre SDP hierarchy using results from proof complexity. These results were later realized to be a rediscovery of results by Grigoriev [29] proving degree lower bounds for what he called the Positivstellensatz Calculus [31]. More recently we have the work of Barak et al. [4], which was the first to explicitly point out this intriguing connection between approximability and proof complexity. Following this paper, several papers have appeared that continue the fruitful exploration of the interplay between approximability and proof complexity. Results from this area also appeared in the invited talk of Boaz Barak at the International Congress of Mathematicians in 2014 (see [5]).
The paper [19] initiated research in proof complexity focused on a more general and powerful family of propositional proof systems called Frege systems. Such a system consists of a finite implicationally complete set of axioms and inference rules (let us say over the connectives AND, OR, and NOT for concreteness), where new formulas are derived by substitution into the axioms and inference rules. Various forms of Frege systems (also called Hilbert systems) appear in logic textbooks, and the exact definitions typically vary. Such distinctions do not matter for our purposes, however, since it was shown in [19] that all such systems are equivalent up to an at most polynomial blow-up in the proof size.
Frege systems are well beyond what we can prove nontrivial lower bounds for; the situation is similar to the problem of proving lower bounds on the size of Boolean circuits. Therefore, restricted versions of Frege systems have been studied. One natural restriction is to allow unbounded fan-in AND-OR formulas (where negations appear only in front of atomic variables) but to require that all formulas appearing in a proof have bounded depth (i.e., a bounded number of alternations between AND and OR). Such a model is an analogue of the bounded-depth circuits studied in circuit complexity, but it first arose in the context of bounded first-order arithmetic in logic [12, 41]. For such bounded-depth Frege systems, exponential lower bounds on proof size were obtained in [37, 42], but these lower bounds only work for depth smaller than log log n. This depth bound was very recently improved to √(log n) in [43], although in terms of the size lower bound this recent result is much weaker. By comparison, for the corresponding class in circuit complexity strong size lower bounds are known all the way up to depth log n / log log n. Also, if one extends the set of connectives with exclusive or (also called parity) to obtain bounded-depth Frege with parity gates, then no lower bounds are known at all, although strong lower bounds have been shown for the analogous class in circuit complexity [46, 51].
The quest for lower bounds for bounded-depth Frege systems and beyond is mainly motivated by the P vs. NP problem. Regarding connections to SAT solving, it is mostly weaker proof systems such as resolution, polynomial calculus, and cutting planes that are of interest, whereas the variants of Frege systems discussed above do not seem to be suitable foundations for SAT solvers. The issue here is that not only do we want our proof system to be as powerful as possible, i.e., having short proofs for the formulas under consideration, but we also want to be able to find these proofs efficiently.
We quantify this theoretically by saying that a proof system is automatizable if there is an algorithm that finds proofs in this system in time polynomial in the length of an optimal proof. This seems to be the right notion: If there is no short proof of a formula in the system, then we cannot expect any algorithm to find a proof quickly, but if there is a short proof to be found we want an algorithm that is competitive with respect to the length of such a proof. Unfortunately, there seems to be a trade-off here in the sense that if a proof system is sufficiently powerful, then it is not automatizable. For instance, bounded-depth Frege systems are not automatizable under plausible computational complexity assumptions [11]. However, analogous results have later been shown also for resolution [3], and yet proof search is implemented successfully in this proof system in practice. This raises intriguing questions that seem to merit further study.
Goals of the Seminar
There is a rich selection of open problems that could be discussed at a workshop focused on proof complexity. Below we just give a few samples of such problems that came up during the workshop; it should be emphasized that this list is very far from exhaustive and is only intended to serve as an illustration.
For starters, there are a number of NP-complete problems for which we would like to understand the hardness with respect to polynomial calculus and other algebraic proof systems. For the problem of cliques of constant size k in graphs, there is an obvious polynomial-time algorithm (since only (n choose k) ≤ n^k possible candidate cliques need to be checked). Whether this brute-force algorithm is optimal or not is a deep question with connections to fixed-parameter tractability and parameterized proof complexity. This is completely open for polynomial calculus, and even for resolution. The ultimate goal here would be to prove average-case lower bounds for k-clique formulas over Erdős–Rényi random graphs G(n, p) with edge probability just below the threshold p = n^{-2/(k-1)} for the appearance of k-cliques.
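The brute-force clique algorithm mentioned above can be sketched in a few lines of Python (an illustration of ours; the example graph and names are not from the report). For constant k the loop tries at most (n choose k) ≤ n^k candidate sets, so it runs in polynomial time:

```python
from itertools import combinations

def has_k_clique(n, edges, k):
    """Brute-force k-clique check: try all C(n, k) <= n^k candidate vertex
    sets; polynomial time for constant k."""
    e = set(map(frozenset, edges))
    return any(
        all(frozenset(p) in e for p in combinations(s, 2))
        for s in combinations(range(n), k)
    )

# A 4-cycle contains 2-cliques (its edges) but no triangle.
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_k_clique(4, cycle4, 3))            # False
print(has_k_clique(4, cycle4 + [(0, 2)], 3)) # True: the chord creates a triangle
```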
In contrast to the clique problem, graph colouring is NP-complete already for a constant number of colours, namely 3. If we believe that P ≠ NP, then, in particular, it seems reasonable to expect that this problem should be hard for polynomial calculus. No such results are known, however. On the contrary, in the papers [23, 24, 25], recognized with the INFORMS Computing Society Prize 2010, the authors report that they used algebraic methods formalizable in polynomial calculus that "successfully solved graph problem instances having thousands of nodes and tens of thousands of edges" and that they could not find hard instances for these algorithms. This is very surprising. For resolution, it was shown in [7] that random graphs with the right edge density are exponentially hard to deal with, and it seems likely that the same should hold for polynomial calculus. This appears to be a very challenging problem, but we hope that techniques from [2, 40] can be brought to bear on it.
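For concreteness, here is a sketch of ours (not necessarily the exact encoding used in [23, 24, 25]) of a standard polynomial encoding of 3-colourability via cube roots of unity: each vertex variable must satisfy x_v^3 = 1, and each edge constraint x_u^2 + x_u·x_v + x_v^2 = 0 forces the endpoints to take distinct roots. The system has a solution exactly when the graph is 3-colourable, which we check here by brute force:

```python
from cmath import exp, pi
from itertools import product

ROOTS = [exp(2j * pi * k / 3) for k in range(3)]  # three cube roots of 1 = colours

def edge_ok(xu, xv, eps=1e-9):
    # x_u^2 + x_u*x_v + x_v^2 = 0 holds iff xu, xv are *distinct* cube roots of 1,
    # since it equals (xu^3 - xv^3) / (xu - xv) when xu != xv.
    return abs(xu**2 + xu * xv + xv**2) < eps

def three_colourable(n, edges):
    """Check feasibility of the roots-of-unity encoding by brute force."""
    return any(
        all(edge_ok(a[u], a[v]) for u, v in edges)
        for a in product(ROOTS, repeat=n)
    )

triangle = [(0, 1), (1, 2), (0, 2)]
k4 = triangle + [(0, 3), (1, 3), (2, 3)]
print(three_colourable(3, triangle))  # True
print(three_colourable(4, k4))        # False: K4 needs 4 colours
```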
For cutting planes, a longstanding open problem is to prove lower bounds for random k-CNF formulas or Tseitin formulas over expander graphs. An interesting direction in the last few years has been the development of new techniques for size-space trade-offs, showing that if short cutting planes proofs do exist, such proofs must at least have high space complexity in that they require a lot of memory to be verified. Such results were first obtained via a somewhat unexpected connection to communication complexity in [35], and have more recently been strengthened in [26, 32].
Admittedly, proving lower bounds for bounded-depth Frege systems and beyond is another formidable challenge, and it only seems prudent to say that this is a high-risk proposal. However, the very recent and exciting progress in [43] gives hope that new techniques might be developed to attack this problem as well.
Relation to Previous Dagstuhl Seminars
The area of proof complexity has a large intersection with computational complexity theory, and there are two recurring workshops at Dagstuhl dedicated to complexity theory broadly construed, namely Computational Complexity of Discrete Problems and Algebraic Methods in Computational Complexity. However, these two workshops have had very limited coverage of topics related to proof complexity in the past.
On the more applied side, there have been two workshops, SAT and Interactions and Theory and Practice of SAT Solving, that have explored the connections between computational complexity and the more applied satisfiability algorithms used in industry (so-called SAT solvers). These workshops have focused on very weak proof systems, however, which are the ones that are of interest in connection to SAT solving, but have not made any connections to stronger proof systems or to bounded arithmetic.
Although proof complexity has turned out to have deep connections to both complexity theory and SAT solving, proof complexity is an interesting and vibrant enough area to merit a seminar week in its own right. This workshop at Dagstuhl provided a unique opportunity for the community to meet during a full week focusing on the latest news in various subareas and major challenges going forward.
References
- Michael Alekhnovich, Eli Ben-Sasson, Alexander A. Razborov, and Avi Wigderson. Space complexity in propositional calculus. SIAM Journal on Computing, 31(4):1184–1211, 2002. Preliminary version in STOC ’00.
- Michael Alekhnovich and Alexander A. Razborov. Lower bounds for polynomial calculus: Non-binomial case. Proceedings of the Steklov Institute of Mathematics, 242:18–35, 2003. Available at http://people.cs.uchicago.edu/~razborov/files/misha.pdf. Preliminary version in FOCS ’01.
- Michael Alekhnovich and Alexander A. Razborov. Resolution is not automatizable unless W[P] is tractable. SIAM Journal on Computing, 38(4):1347–1363, October 2008. Preliminary version in FOCS ’01.
- Boaz Barak, Fernando G. S. L. Brandão, Aram Wettroth Harrow, Jonathan A. Kelner, David Steurer, and Yuan Zhou. Hypercontractivity, sum-of-squares proofs, and their applications. In Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC ’12), pages 307–326, May 2012.
- Boaz Barak and David Steurer. Sum-of-squares proofs and the quest toward optimal algorithms. In Proceedings of International Congress of Mathematicians (ICM), 2014.
- Paul Beame. Proof complexity. In Steven Rudich and Avi Wigderson, editors, Computational Complexity Theory, volume 10 of IAS/Park City Mathematics Series, pages 199–246. American Mathematical Society, 2004.
- Paul Beame, Joseph C. Culberson, David G. Mitchell, and Cristopher Moore. The resolution complexity of random graph k-colorability. Discrete Applied Mathematics, 153(1-3):25–47, December 2005.
- Eli Ben-Sasson and Avi Wigderson. Short proofs are narrow—resolution made simple. Journal of the ACM, 48(2):149–169, March 2001. Preliminary version in STOC ’99.
- Archie Blake. Canonical Expressions in Boolean Algebra. PhD thesis, University of Chicago, 1937.
- María Bonet, Toniann Pitassi, and Ran Raz. Lower bounds for cutting planes proofs with small coefficients. In Proceedings of the 27th Annual ACM Symposium on Theory of Computing (STOC ’95), pages 575–584, May 1995.
- María Luisa Bonet, Carlos Domingo, Ricard Gavaldà, Alexis Maciel, and Toniann Pitassi. Non-automatizability of bounded-depth Frege proofs. In Proceedings of the 14th Annual IEEE Conference on Computational Complexity (CCC ’99), pages 15–23, May 1999.
- Samuel R. Buss. Bounded Arithmetic. Bibliopolis, Naples, 1986. Revision of PhD thesis.
- Samuel R. Buss, Dima Grigoriev, Russell Impagliazzo, and Toniann Pitassi. Linear gaps between degrees for the polynomial calculus modulo distinct primes. Journal of Computer and System Sciences, 62(2):267–289, March 2001. Preliminary version in CCC ’99.
- Vašek Chvátal. Edmonds polytopes and a hierarchy of combinatorial problems. Discrete Mathematics, 4(1):305–337, 1973.
- Vašek Chvátal and Endre Szemerédi. Many hard examples for resolution. Journal of the ACM, 35(4):759–768, October 1988.
- Matthew Clegg, Jeffery Edmonds, and Russell Impagliazzo. Using the Groebner basis algorithm to find proofs of unsatisfiability. In Proceedings of the 28th Annual ACM Symposium on Theory of Computing (STOC ’96), pages 174–183, May 1996.
- Peter Clote and Evangelos Kranakis. Boolean Functions and Computation Models. Springer, 2002.
- Stephen A. Cook. The complexity of theorem-proving procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing (STOC ’71), pages 151–158, 1971.
- Stephen A. Cook and Robert Reckhow. The relative efficiency of propositional proof systems. Journal of Symbolic Logic, 44(1):36–50, March 1979.
- William Cook, Collette Rene Coullard, and György Turán. On the complexity of cutting-plane proofs. Discrete Applied Mathematics, 18(1):25–38, November 1987.
- Martin Davis, George Logemann, and Donald Loveland. A machine program for theorem proving. Communications of the ACM, 5(7):394–397, July 1962.
- Martin Davis and Hilary Putnam. A computing procedure for quantification theory. Journal of the ACM, 7(3):201–215, 1960.
- Jesús A. De Loera, Jon Lee, Peter N. Malkin, and Susan Margulies. Hilbert’s Nullstellensatz and an algorithm for proving combinatorial infeasibility. In Proceedings of the 21st International Symposium on Symbolic and Algebraic Computation (ISSAC ’08), pages 197–206, July 2008.
- Jesús A. De Loera, Jon Lee, Peter N. Malkin, and Susan Margulies. Computing infeasibility certificates for combinatorial problems through Hilbert’s Nullstellensatz. Journal of Symbolic Computation, 46(11):1260–1283, November 2011.
- Jesús A. De Loera, Jon Lee, Susan Margulies, and Shmuel Onn. Expressing combinatorial problems by systems of polynomial equations and Hilbert’s Nullstellensatz. Combinatorics, Probability and Computing, 18(04):551–582, July 2009.
- Susanna F. de Rezende, Jakob Nordström, and Marc Vinyals. How limited interaction hinders real communication (and what it means for proof and circuit complexity). In Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS ’16), pages 295–304, October 2016.
- Ralph E. Gomory. An algorithm for integer solutions of linear programs. In R.L. Graves and P. Wolfe, editors, Recent Advances in Mathematical Programming, pages 269–302. McGraw-Hill, New York, 1963.
- Dima Grigoriev. Tseitin’s tautologies and lower bounds for Nullstellensatz proofs. In Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science (FOCS ’98), pages 648–652, November 1998.
- Dima Grigoriev. Linear lower bound on degrees of Positivstellensatz calculus proofs for the parity. Theoretical Computer Science, 259(1–2):613–622, May 2001.
- Dima Grigoriev, Edward A. Hirsch, and Dmitrii V. Pasechnik. Complexity of semialgebraic proofs. Moscow Mathematical Journal, 2(4):647–679, 2002. Preliminary version in STACS ’02.
- Dima Grigoriev and Nicolai Vorobjov. Complexity of Null- and Positivstellensatz proofs. Annals of Pure and Applied Logic, 113(1–3):153–160, December 2001.
- Mika Göös and Toniann Pitassi. Communication lower bounds via critical block sensitivity. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC ’14), pages 847–856, May 2014.
- Armin Haken. The intractability of resolution. Theoretical Computer Science, 39(2-3):297–308, August 1985.
- Armin Haken and Stephen A. Cook. An exponential lower bound for the size of monotone real circuits. Journal of Computer and System Sciences, 58(2):326–335, 1999.
- Trinh Huynh and Jakob Nordström. On the virtue of succinct proofs: Amplifying communication complexity hardness to time-space trade-offs in proof complexity (Extended abstract). In Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC ’12), pages 233–248, May 2012.
- Russell Impagliazzo, Pavel Pudlák, and Jirí Sgall. Lower bounds for the polynomial calculus and the Gröbner basis algorithm. Computational Complexity, 8(2):127–144, 1999.
- Jan Krajícek, Pavel Pudlák, and Alan R. Woods. An exponential lower bound to the size of bounded depth Frege proofs of the pigeonhole principle. Random Structures and Algorithms, 7(1):15–40, 1995. Preliminary version in STOC ’92.
- Jean B. Lasserre. An explicit exact SDP relaxation for nonlinear 0-1 programs. In Proceedings of the 8th International Conference on Integer Programming and Combinatorial Optimization (IPCO ’01), volume 2081 of Lecture Notes in Computer Science, pages 293–303. Springer, June 2001.
- László Lovász and Alexander Schrijver. Cones of matrices and set-functions and 0-1 optimization. SIAM Journal on Optimization, 1(2):166–190, 1991.
- Mladen Mikša and Jakob Nordström. A generalized method for proving polynomial calculus degree lower bounds. In Proceedings of the 30th Annual Computational Complexity Conference (CCC ’15), volume 33 of Leibniz International Proceedings in Informatics (LIPIcs), pages 467–487, June 2015.
- Jeff B. Paris and Alex J. Wilkie. Counting problems in bounded arithmetic. In Methods in Mathematical Logic: Proceedings of the 6th Latin American Symposium on Mathematical Logic, volume 1130 of Lecture Notes in Mathematics, pages 317–340. Springer, 1985.
- Toniann Pitassi, Paul Beame, and Russell Impagliazzo. Exponential lower bounds for the pigeonhole principle. Computational Complexity, 3:97–140, 1993. Preliminary version in STOC ’92.
- Toniann Pitassi, Benjamin Rossman, Rocco Servedio, and Li-Yang Tan. Poly-logarithmic Frege depth lower bounds. In Proceedings of the 48th Annual ACM Symposium on Theory of Computing (STOC ’16), pages 644–657, June 2016.
- Pavel Pudlák. Lower bounds for resolution and cutting plane proofs and monotone computations. Journal of Symbolic Logic, 62(3):981–998, September 1997.
- Pavel Pudlák. On the complexity of propositional calculus. In S. Barry Cooper and John K. Truss, editors, Sets and Proofs, volume 258 of London Mathematical Society Lecture Note Series, pages 197–218. Cambridge University Press, 1999.
- Alexander A. Razborov. Lower bounds on the size of bounded depth networks over a complete basis with logical addition. Matematicheskie Zametki, 41(4):598–607, 1987. English translation in Mathematical Notes of the Academy of Sciences of the USSR, 41(4):333–338, 1987.
- Alexander A. Razborov. Proof complexity and beyond. ACM SIGACT News, 47(2):66–86, June 2016.
- John Alan Robinson. A machine-oriented logic based on the resolution principle. Journal of the ACM, 12(1):23–41, January 1965.
- Grant Schoenebeck. Linear level Lasserre lower bounds for certain k-CSPs. In Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS ’08), pages 593–602, October 2008.
- Hanif D. Sherali and Warren P. Adams. A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM Journal on Discrete Mathematics, 3:411–430, 1990.
- Roman Smolensky. Algebraic methods in the theory of lower bounds for Boolean circuit complexity. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing (STOC ’87), pages 77–82, 1987.
- Alasdair Urquhart. Hard examples for resolution. Journal of the ACM, 34(1):209–219, January 1987.
- Amirhossein Akbar Tabatabai (The Czech Academy of Sciences - Prague, CZ)
- Albert Atserias (UPC - Barcelona, ES) [dblp]
- Paul Beame (University of Washington - Seattle, US) [dblp]
- Arnold Beckmann (Swansea University, GB) [dblp]
- Olaf Beyersdorff (University of Leeds, GB) [dblp]
- Ilario Bonacina (UPC - Barcelona, ES) [dblp]
- Igor Carboni Oliveira (University of Oxford, GB) [dblp]
- Marco Carmosino (University of California - San Diego, US) [dblp]
- Leroy Nicholas Chew (University of Leeds, GB) [dblp]
- Stefan Dantchev (Durham University, GB) [dblp]
- Joanna Fijalkow (University Paris-Diderot, FR) [dblp]
- Yuval Filmus (Technion - Haifa, IL) [dblp]
- Noah Fleming (University of Toronto, CA) [dblp]
- Michael A. Forbes (University of Illinois - Urbana-Champaign, US) [dblp]
- Nicola Galesi (Sapienza University of Rome, IT) [dblp]
- Michal Garlik (UPC - Barcelona, ES) [dblp]
- Joshua A. Grochow (University of Colorado - Boulder, US) [dblp]
- Tuomas Hakoniemi (UPC - Barcelona, ES) [dblp]
- Johan Håstad (KTH Royal Institute of Technology - Stockholm, SE) [dblp]
- Edward A. Hirsch (Steklov Institute - St. Petersburg, RU) [dblp]
- Pavel Hrubeš (The Czech Academy of Sciences - Prague, CZ) [dblp]
- Dmitry Itsykson (Steklov Institute - St. Petersburg, RU) [dblp]
- Raheleh Jalali (Czech Academy of Sciences - Brno, CZ) [dblp]
- Emil Jeřábek (The Czech Academy of Sciences - Prague, CZ) [dblp]
- Jan Johannsen (LMU München, DE) [dblp]
- Leszek Kolodziejczyk (University of Warsaw, PL) [dblp]
- Antonina Kolokolova (Memorial University of Newfoundland - St. John's, CA) [dblp]
- Oliver Kullmann (Swansea University, GB) [dblp]
- Massimo Lauria (Sapienza University of Rome, IT) [dblp]
- Meena Mahajan (Institute of Mathematical Sciences - Chennai, IN) [dblp]
- Barnaby Martin (Durham University, GB) [dblp]
- Moritz Müller (Universität Wien, AT) [dblp]
- Jakob Nordström (KTH Royal Institute of Technology - Stockholm, SE) [dblp]
- Jan Pich (Universität Wien, AT) [dblp]
- Aaron Potechin (KTH Royal Institute of Technology - Stockholm, SE) [dblp]
- Pavel Pudlák (The Czech Academy of Sciences - Prague, CZ) [dblp]
- Ninad Rajgopal (University of Oxford, GB) [dblp]
- Kilian Risse (KTH Royal Institute of Technology - Stockholm, SE)
- Robert Robere (University of Toronto, CA) [dblp]
- Rahul Santhanam (University of Oxford, GB) [dblp]
- Dmitry Sokolov (KTH Royal Institute of Technology - Stockholm, SE) [dblp]
- Neil Thapen (The Czech Academy of Sciences - Prague, CZ) [dblp]
- Jacobo Torán (Universität Ulm, DE) [dblp]
- Iddo Tzameret (Royal Holloway, University of London, GB) [dblp]
- Alasdair Urquhart (University of Toronto, CA) [dblp]
- Marc Vinyals (TIFR Mumbai, IN) [dblp]
Classification
- data structures / algorithms / complexity
- verification / logic
Keywords
- proof complexity
- computational complexity
- logic
- bounded arithmetic
- satisfiability algorithms