Symmetric functions show up in several areas of mathematics including enumerative combinatorics and representation theory. Tewodros Amdeberhan conjectures equalities of Σn character sums over a new set called Ev(λ). When investigating the alternating sum of characters for Ev(λ) written in terms of the inner product of Schur functions and power sum symmetric functions, we found an equality between the alternating sum of power sum symmetric polynomials and a product of monomial symmetric polynomials. As a consequence, a special case of an alternating sum of Σn characters over the set Ev(λ) equals 0.
Seminars in Combinatorics
The theory of symmetric polynomials plays a key role in Representation Theory, Schubert Calculus, and Algebraic Combinatorics. Fundamental rules like the Pieri, Murnaghan-Nakayama, and Littlewood-Richardson rules describe the decomposition of products of Schubert classes into Schubert classes. We focus on the decomposition of polynomial representatives of Schubert classes in homology and K-homology of the affine Grassmannian of SLn, as well as quantum Schubert classes in quantum cohomology and K-cohomology of the full flag manifold of type A. Specifically, we explore how to use the Peterson isomorphism to connect formulas between homology and quantum cohomology, and between K-homology and quantum K-cohomology, extending techniques from the work of Lam-Shimozono on Schubert classes.
Post-Lie algebras appeared in 2007 in algebraic combinatorics, and independently in 2008 in the study of numerical schemes on homogeneous spaces. Gavrilov's K-map is a particular Hopf algebra isomorphism, which can be naturally described in the context of free post-Lie algebras. Post-groups, which are to post-Lie algebras what groups are to Lie algebras, were defined in 2023 by C. Bai, L. Guo, Y. Sheng and R. Tang. Although skew-braces and braided groups are older equivalent notions, their reformulation as post-groups brings crucial new information on their structure. After giving an account of the above-mentioned structures, I shall introduce free post-groups, and describe a group isomorphism which can be seen as an analogue of Gavrilov's K-map for post-groups.
The classical Schur duality is a simple yet powerful concept which relates the representations of the symmetric group and general linear Lie algebra, as well as combinatorics of symmetric functions. This admits a quantum deformation to a duality between a quantum group and Hecke algebra of type A. In this talk, we will describe several new simple diagrammatic (monoidal/quotient) categories, where old and new algebras behind (affine/cyclotomic) Schur duality emerge naturally. Our construction has new combinatorial implications on symmetric functions and RSK correspondence.
In type A, the modified Macdonald polynomials and the integral form Macdonald polynomials are related by a plethystic transformation. We interpret this plethystic transformation geometrically as a relationship between nilpotent parabolic Springer fibres and nilpotent Lusztig varieties. This points the way to a generalization of modified Macdonald polynomials and integral form Macdonald polynomials to all Lie types. These generalizations are not polynomials, however; they are elements of the Iwahori-Hecke algebra of the finite Weyl group. This work concerns the generalization of, and connection between, a 1997 paper of Halverson-Ram (which counts points of nilpotent Lusztig varieties over a finite field) and a 2017 paper of Mellit (which counts points of nilpotent parabolic affine Springer fibres over a finite field).
We will discuss a graph that encodes the divisibility properties of integers by primes. We will prove that this graph has a strong local expander property almost everywhere. We then obtain several consequences in number theory, beyond the traditional parity barrier, by combining the main result with Matomäki-Radziwiłł. (This is joint work with M. Radziwiłł.) For instance: for λ the Liouville function (that is, the completely multiplicative function with λ(p) = −1 for every prime),
(1/log x) ∑_{n ≤ x} λ(n)λ(n+1)/n = O(1/√(log log x)),
which is stronger than well-known results of Tao and Tao-Teräväinen. We also manage to prove, for example, that λ(n+1) averages to 0 at almost all scales when n is restricted to have a specific number of prime divisors Ω(n) = k, for any "popular" value of k (that is, k = log log N + O(√(log log N)) for n ≤ N).
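To make the quantity in the displayed bound concrete, here is a minimal Python sketch (function names are my own, not from the talk) that computes the Liouville function by trial-division factorization and evaluates the logarithmically averaged correlation for small x:

```python
import math

def liouville(n):
    """Liouville function lambda(n) = (-1)^Omega(n), where Omega counts
    prime factors with multiplicity (trial division; fine for small n)."""
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

def log_avg_correlation(x):
    """The logarithmically averaged correlation (1/log x) * sum_{n<=x} lambda(n)lambda(n+1)/n."""
    total = sum(liouville(n) * liouville(n + 1) / n for n in range(1, x + 1))
    return total / math.log(x)

print(log_avg_correlation(10**4))
```

Numerically the average is already small for modest x; the theorem controls its decay rate as x grows.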
Any non-negative univariate polynomial over the reals can be written as a sum of squares. This gives a simple-to-verify certificate of non-negativity of the polynomial. Rooted in Hilbert's 17th problem, there is now more than a century of work finding the right multivariate generalizations, called Positivstellensatz theorems (due to Krivine, Stengle, and Putinar). Beginning in the late 1980s, researchers (initially working independently) in optimization, quantum information, and proof complexity found an algorithmic counterpart of these results, the sum-of-squares algorithm. Over the past decade, this algorithmic theory has matured into a powerful tool for designing efficient algorithms for basic algorithmic problems.
In this talk, I will outline a couple of highlights from these recent developments:
1) Algorithmic Robust Statistics: In the 1960s, Tukey and Huber observed that most statistical estimators are brittle -- they break down with almost no guarantee if the model postulated for data has minor misspecification (say because of 1% outliers). In response, they initiated the field of robust statistics. Over the past five years, a new blueprint, based on the sum-of-squares algorithm, has emerged for efficient robust statistics in high dimensions with new connections to finding efficiently verifiable certificates of concentration and anti-concentration properties of high dimensional probability distributions.
2) The Kikuchi Matrix Method: Finding a solution that satisfies 99% of a given system of k-sparse linear equations (i.e., k non-zero coefficients in each equation) over finite fields, or proving that no such solution exists, is a basic NP-hard problem and thus unlikely to admit efficient algorithms. In the early 2000s, motivated by the question of whether the hard instances are "pathological", researchers explored whether "semirandom" equations (arbitrary systems whose right-hand sides are generated uniformly and independently at random) could admit efficient algorithms that output efficiently verifiable certificates of unsatisfiability.
Recently, a restricted class of sum-of-squares proofs was at the heart of efficient algorithms for such semirandom sparse linear equations. Surprisingly, these algorithms have led to new progress in extremal combinatorics and coding theory.
Suppose that A is a finite, non-empty subset of a cyclic group of either infinite or prime order. We show that if the difference set A-A is 'not too large', then there is a non-zero group element with at least (2+o(1))|A|²/|A-A| representations as a difference of two elements of A; that is, the second largest number of representations is, essentially, twice the average. Here the coefficient 2 is best possible.
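To see why the coefficient 2 is best possible, consider an arithmetic progression: its most popular nonzero difference has about |A| representations, while the average |A|²/|A-A| is about |A|/2. A minimal numerical sketch (names are my own):

```python
from collections import Counter
from itertools import product

def representation_counts(A):
    """r(d) = #{(a, b) in A x A : a - b = d} for every nonzero difference d."""
    return Counter(a - b for a, b in product(A, repeat=2) if a != b)

# An arithmetic progression is the extremal example for the coefficient 2.
A = set(range(1000))
r = representation_counts(A)
diff_set_size = len(r) + 1              # |A - A|, counting d = 0
average = len(A) ** 2 / diff_set_size   # |A|^2 / |A - A|
best_nonzero = max(r.values())          # second-largest count overall (d = 0 attains |A|)
print(best_nonzero / average)           # approaches 2 as |A| grows
```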
This talk will explore properties and algorithms pertaining to very large graphs. These graphs serve as models for a multitude of complex systems, showcasing traits like robustness, randomness, dynamics, and distinctiveness, while also sharing essential commonalities. Understanding these graphs' fundamental characteristics, including low global density but high local density, degree distribution, low diameter, and clustering coefficient, is vital for analysing complex system behaviour. Complex systems research also addresses critical aspects such as partitioning populations into communities with shared behaviours and interests. The presentation will delve into the modelling of large graphs and discuss algorithms focused on community detection within them.
The Newell-Littlewood numbers are defined in terms of the Littlewood-Richardson coefficients from algebraic combinatorics. Both appear in representation theory as tensor product multiplicities for a classical Lie group. This talk concerns the question: Which multiplicities are non-zero? In 1998, Klyachko established common linear inequalities defining both the eigencone for sums of Hermitian matrices and the saturated Littlewood-Richardson cone. We prove some analogues of Klyachko's non-vanishing results for the Newell-Littlewood numbers.
We consider the Schramm-Loewner evolution (SLEκ) with κ = 4, the critical value of κ > 0 at or below which SLEκ is a simple curve and above which it is self-intersecting. We show that the range of an SLE4 curve is a.s. conformally removable. Such curves arise as the conformal welding of a pair of independent critical (γ = 2) Liouville quantum gravity (LQG) surfaces along their boundaries and our result implies that this conformal welding is unique. In order to establish this result, we give a new sufficient condition for a set X ⊆ ℂ to be conformally removable which applies in the case that X is not necessarily the boundary of a simply connected domain. We will also describe how this theorem can be applied to obtain the conformal removability of the SLEκ curves for κ ∈ (4,8) in the case that the adjacency graph of connected components of the complement is a.s. connected. This talk will assume no prior knowledge of SLE or LQG.
We overview recent progress on two of the most classic problems in combinatorial optimization: the matching problem and the travelling salesman problem. We focus on deterministic parallel algorithms for the perfect matching problem and the first constant-factor approximation algorithm for the asymmetric travelling salesman problem. While these questions pose seemingly different challenges, recent progress has been achieved using similar polyhedral techniques. In particular, for both problems, we will explain the use of linear programming formulations, even exponential-sized ones, to extract structure from problem instances and guide the design of better algorithms.
The FKN theorem states that if a Boolean function on the Boolean cube is close to degree 1, then it is close to a dictator. Similarly, the Kindler-Safra theorem states that if a Boolean function on the Boolean cube is close to degree d, then it is close to a junta.
The picture becomes more interesting when we study functions on the p-biased Boolean cube. If a Boolean function is 𝜀-close to degree 1, then (up to negation) it is O(𝜀)-close to a maximum of O(√𝜀/p+1) coordinates, and so O(√𝜀+p)-close to a constant function. A similar statement holds for functions close to degree d, but the corresponding structure is more complicated.

Another setting we might discuss is functions on the symmetric group. If a Boolean function is 𝜀-close to degree 1, then it is O(√𝜀)-close to a dictator (suitably defined), and O(𝜀)-close (up to negation) to a union of “mostly disjoint” cosets. A similar statement should hold for degree d, where the function ought to be close to a (suitably defined) decision tree of depth poly(d).
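The notion of a Boolean function being "close to degree d" can be made concrete by brute-force Fourier analysis on the uniform cube (the setting of the FKN and Kindler-Safra theorems above). In this minimal sketch (names my own), a dictator carries no Fourier weight above degree 1, while Majority on 3 bits carries weight exactly 1/4 there:

```python
import math
from itertools import product

def fourier_weight_above(f, n, d):
    """Total Fourier weight of f : {-1,1}^n -> R on character sets S with |S| > d,
    computed by direct enumeration under the uniform measure."""
    pts = list(product([-1, 1], repeat=n))
    total = 0.0
    for mask in range(1 << n):
        S = [i for i in range(n) if (mask >> i) & 1]
        if len(S) <= d:
            continue
        # hat f(S) = E_x[ f(x) * prod_{i in S} x_i ]
        coeff = sum(f(x) * math.prod(x[i] for i in S) for x in pts) / len(pts)
        total += coeff ** 2
    return total

dictator = lambda x: x[0]                  # exactly degree 1
maj3 = lambda x: 1 if sum(x) > 0 else -1   # Majority on 3 bits
print(fourier_weight_above(dictator, 3, 1), fourier_weight_above(maj3, 3, 1))
```

A function 𝜀-close to degree d is one whose weight above degree d is at most 𝜀; the theorems above convert that analytic condition into combinatorial structure.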
We show that the Erdős-Rényi random graph with constant density has correspondence chromatic number O(n/√(log n)); this matches a prediction from the linear Hadwiger’s conjecture for correspondence colouring. The proof follows from a sufficient condition for correspondence colourability in terms of the numbers of independent sets. We conjecture the truth to be of order O(n/log n), as suggested by the random correspondence assignment.
In 1947, Paul Erdős famously gave a construction of graphs with no large cliques and independent sets via the probabilistic method. Since then, Erdős’ ingenious probabilistic insight and the emergence of structures in random graphs have motivated fundamental developments in combinatorics and probability. In this talk, I highlight several recent results in the study of random graphs with interesting connections to additive combinatorics, theoretical computer science and probability.
In joint work with David Conlon, Jacob Fox and Liana Yepremyan, we study structures in random Cayley graphs of general groups. Given a fixed finite group, random Cayley graphs are constructed by choosing the generating set at random. These graphs thus reflect interesting symmetries and properties of the group, at the cost of inducing complex dependencies. We will discuss results on clique and independence numbers in random Cayley graphs of general groups, as well as progress towards a conjecture of Alon on Cayley graphs with small clique and independence number. These questions are naturally connected with some fundamental problems in additive combinatorics, where we surprisingly discover that in some of them the group structure is superfluous.
Certain important aspects in the study of structures in random graphs can be phrased in terms of thresholds. In joint work with Jinyoung Park, building on insights in our resolution of Talagrand’s selector process conjecture, we prove the Kahn-Kalai conjecture, that thresholds of general monotone properties are closely predicted by expectation obstructions. The Kahn-Kalai conjecture is a beautiful milestone towards the understanding of emergence of general structures, and yet to complete the quest, it remains to study these expectation obstructions. This latter task can prove to be highly challenging in several cases and bring in interesting connections. As an illustration, I will discuss joint work with Vishesh Jain that determines the threshold for edge-colorings of complete bipartite graphs by exploiting connections to the Lovász Local Lemma and local uniformity properties, which have played an important role in sampling solutions to constraint satisfaction problems.
For a subset A of a finite-dimensional vector space over a finite field, it is known that 2A-2A must contain a subgroup whose density is quasi-polynomial in the density of A. We show an analogous result for the special linear group over a finite field, except that we achieve polynomial density for the subgroup. The main tool is a new hypercontractive estimate for the special linear group, which has its origins in the study of 2-to-2 games.
A graph is said to be a (1-dimensional) expander if the second eigenvalue of its adjacency matrix is bounded away from 1, or almost-equivalently, if it has no sparse vertex cuts. There are several natural ways to generalize the notion of expansion to hypergraphs/simplicial complexes, but one such way is 2-dimensional spectral expansion, in which the local expansion of vertex links/neighborhoods (remarkably) witnesses global expansion. While 1-dimensional expansion is known to be achieved by, e.g., random regular graphs, very few examples of sparse 2-dimensional expanders are known, and at present all are algebraic. It is an open question whether sparse 2-dimensional expanders are natural and "abundant" or "rare." In this talk, we'll give some evidence towards abundance: we show that the set of triangles in a random geometric graph on a high-dimensional sphere yields an expanding simplicial complex of arbitrarily small polynomial degree.
In the random geometric graph model, we identify each of our n vertices with an independently and uniformly sampled vector from the d-dimensional unit sphere, and we connect pairs of vertices whose vectors are 'sufficiently close' such that the marginal probability of an edge is p.
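The model just described can be sampled directly. The following sketch (my own, not from the talk) calibrates the connection threshold empirically from the observed inner products rather than computing it from the spherical cap measure, so it is the empirical edge density, rather than the marginal edge probability, that is pinned to p:

```python
import math
import random

def sample_rgg(n, d, p, seed=0):
    """Sample a random geometric graph: n i.i.d. uniform points on the unit
    sphere in R^d, connecting the pairs with the largest inner products so
    that the edge density is approximately p."""
    rng = random.Random(seed)

    def unit_vector():
        # A normalized Gaussian vector is uniform on the sphere.
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        return [c / norm for c in v]

    pts = [unit_vector() for _ in range(n)]
    dots = sorted(
        (sum(a * b for a, b in zip(pts[i], pts[j])), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    m = len(dots)
    k = max(1, round(p * m))                 # keep the k closest pairs as edges
    return {(i, j) for _, i, j in dots[m - k:]}

edges = sample_rgg(n=60, d=20, p=0.1)
density = len(edges) / (60 * 59 / 2)
print(density)
```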
We investigate the problem of distinguishing an Erdős-Rényi graph from a random geometric graph. When p = c/n for a constant c, we prove that if d ≥ poly log n, the total variation distance between the two distributions is close to 0; this improves upon the best previous bound of Brennan, Bresler, and Nagaraj (2020), which required d ≫ n^(3/2). Moreover, our result is nearly tight, resolving a conjecture of Bubeck, Ding, Eldan, and Rácz (2016) up to logarithmic factors. We also obtain improved upper bounds on the statistical indistinguishability thresholds in d for the full range of p satisfying 1/n ≤ p ≤ 1/2, improving upon the previous bounds by polynomial factors.
In this talk, we will discuss the key ideas in our proof, which include:
• Analyzing the Belief Propagation algorithm to characterize the distributions of (subsets of) the random vectors conditioned on producing a particular graph.
• Sharp estimates for the area of the intersection of a random sphere cap with an arbitrary subset of the sphere, which are proved using optimal transport maps and entropy-transport inequalities on the unit sphere.
A regular partition 𝒫 of a 3-uniform hypergraph H=(V,E) consists of a partition V=V1 ∪ ... ∪ Vt and, for each pair ij with 1 ≤ i < j ≤ t, a partition Vi × Vj = Pij1 ∪ ... ∪ Pij𝓁, so that certain quasirandomness properties hold. The complexity of 𝒫 is the pair (t,𝓁). In this talk, we present results which show that if a 3-uniform hypergraph H has VC2-dimension at most k, then there is a regular partition for H of complexity (t,𝓁), where 𝓁 is bounded by a polynomial in the degree of regularity. This is a vast improvement on the bound arising from the proof of this regularity lemma in general, in which the bound generated for 𝓁 is of Wowzer type. This result can be seen as a higher arity analogue of the efficient regularity lemmas for graphs and hypergraphs of bounded VC-dimension due to Alon-Fischer-Newman, Lovász-Szegedy, and Fox-Pach-Suk.
A central question in additive combinatorics is to understand how large arithmetic progression-free sets can be. In this talk, I will focus on this question for a high-dimensional generalization of arithmetic progressions (APs) known as corners. A (2-dimensional) corner is a triple of points (x,y), (x+d,y), (x,y+d) in [N] × [N] for some d>0. Extending this definition to higher dimensions, a k-dimensional corner in [N]^k is a (k+1)-tuple defined similarly for some d. While it is known that corner-free sets have a vanishingly small density, the precise bounds on their size remain unknown. Until recently, the best-known corner-free sets were derived from constructions of AP-free sets: a construction of a 3-term AP-free set by Behrend from 1946, and a generalization by Rankin for k-term APs in 1961. New results by Linial and Shraibman (CCC 2021) and Green (New Zealand Journal of Mathematics 2021) changed this picture; they improved the upper bound for k=2 by adopting a communication complexity point of view.
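The corner definition can be checked directly by brute force; a minimal sketch (function name my own):

```python
def has_corner(S):
    """Check whether a set S of grid points contains a (2-dimensional) corner
    (x, y), (x + d, y), (x, y + d) for some d > 0 (brute force, O(|S|^2))."""
    S = set(S)
    for (x, y) in S:
        for (a, b) in S:
            d = a - x
            if d > 0 and b == y and (x, y + d) in S:
                return True
    return False

print(has_corner({(1, 1), (2, 1), (1, 2)}))   # True: the smallest corner, d = 1
print(has_corner({(1, 1), (2, 1), (2, 2)}))   # False: an L-shape that is not a corner
```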
I will discuss our recent work, in which we employ the same communication complexity perspective and obtain the first improvement to the upper bound on the size of high-dimensional (k > 2) corner-free sets since Rankin's original construction.
