
IME++ ACM-ICPC Team Notebook

Contents

1 Flags + Template + vimrc
  1.1 Flags
  1.2 Template
  1.3 vimrc
2 Data Structures
  2.1 Bit Binary Search
  2.2 Bit Range
  2.3 Bit
  2.4 Bit 2D
  2.5 Centroid Decomposition
  2.6 Color Update
  2.7 Heavy-Light Decomposition (new)
  2.8 Heavy-Light Decomposition
  2.9 Heavy-Light Decomposition (Lamarca)
  2.10 Lichao Tree (ITA)
  2.11 Merge Sort Tree
  2.12 Minimum Queue
  2.13 Ordered Set
  2.14 Dynamic Segment Tree (Lazy Update)
  2.15 Dynamic Segment Tree
  2.16 Iterative Segment Tree
  2.17 Mod Segment Tree
  2.18 Persistent Segment Tree (Naum)
  2.19 Persistent Segment Tree
  2.20 Struct Segment Tree
  2.21 Segment Tree
  2.22 Segment Tree 2D
  2.23 Set Of Intervals
  2.24 Sparse Table
  2.25 Sparse Table 2D
  2.26 Splay Tree
  2.27 KD Tree (Stanford)
  2.28 Treap
  2.29 Trie
  2.30 Union Find
  2.31 Union Find (Partial Persistent)
  2.32 Union Find (Rollback)
3 Dynamic Programming
  3.1 Convex Hull Trick (emaxx)
  3.2 Convex Hull Trick
  3.3 Divide and Conquer Optimization
  3.4 Knuth Optimization
  3.5 Longest Increasing Subsequence
  3.6 SOS DP
  3.7 Steiner tree
4 Graphs
  4.1 2-SAT Kosaraju
  4.2 2-SAT Tarjan
  4.3 Shortest Path (Bellman-Ford)
  4.4 BFS
  4.5 Block Cut
  4.6 Articulation points and bridges
  4.7 DFS
  4.8 Shortest Path (Dijkstra)
  4.9 Max Flow
  4.10 Dominator Tree
  4.11 Erdos Gallai
  4.12 Eulerian Path
  4.13 Fast Kuhn
  4.14 Find Cycle of size 3 and 4
  4.15 Floyd Warshall
  4.16 Hungarian
  4.17 Hungarian Navarro
  4.18 Toposort
  4.19 Strongly Connected Components
  4.20 MST (Kruskal)
  4.21 Max Bipartite Cardinality Matching (Kuhn)
  4.22 Lowest Common Ancestor
  4.23 Max Weight on Path
  4.24 Min Cost Max Flow
  4.25 MST (Prim)
  4.26 Shortest Path (SPFA)
  4.27 Small to Large
  4.28 Stoer Wagner (Stanford)
  4.29 Tarjan
  4.30 Zero One BFS
5 Strings
  5.1 Aho-Corasick
  5.2 Aho-Corasick (emaxx)
  5.3 Booth's Algorithm
  5.4 Knuth-Morris-Pratt (Automaton)
  5.5 Knuth-Morris-Pratt
  5.6 Manacher
  5.7 Manacher 2
  5.8 Rabin-Karp
  5.9 Recursive-String Matching
  5.10 String Hashing
  5.11 String Multihashing
  5.12 Suffix Array
  5.13 Suffix Automaton
  5.14 Suffix Tree
  5.15 Z Function
6 Mathematics
  6.1 Basics
  6.2 Advanced
  6.3 Discrete Log (Baby-step Giant-step)
  6.4 Euler Phi
  6.5 Extended Euclidean and Chinese Remainder
  6.6 Fast Fourier Transform (Tourist)
  6.7 Fast Fourier Transform
  6.8 Fast Walsh-Hadamard Transform
  6.9 Gaussian Elimination (extended inverse)
  6.10 Gaussian Elimination (modulo prime)
  6.11 Gaussian Elimination (xor)
  6.12 Gaussian Elimination (double)
  6.13 Golden Section Search (Ternary Search)
  6.14 Josephus
  6.15 Matrix Exponentiation
  6.16 Mobius Inversion
  6.17 Mobius Function
  6.18 Number Theoretic Transform
  6.19 Pollard-Rho
  6.20 Pollard-Rho Optimization
  6.21 Prime Factors
  6.22 Primitive Root
  6.23 Sieve of Eratosthenes
  6.24 Simpson Rule
  6.25 Simplex (Stanford)
7 Geometry
  7.1 Miscellaneous
  7.2 Basics (Point)
  7.3 Radial Sort
  7.4 Circle
  7.5 Closest Pair of Points
  7.6 Half Plane Intersection
  7.7 Lines
  7.8 Minkowski Sum
  7.9 Nearest Neighbour
  7.10 Polygons
  7.11 Stanford Delaunay
  7.12 Ternary Search
  7.13 Delaunay Triangulation
  7.14 Voronoi Diagram
  7.15 Delaunay Triangulation (emaxx)
  7.16 Closest Pair of Points 3D
8 Miscellaneous
  8.1 Bitset
  8.2 builtin
  8.3 Date
  8.4 Parenthesis to Polish (ITA)
  8.5 Merge Sort (Inversion Count)
  8.6 Modular Int (Struct)
  8.7 Parallel Binary Search
  8.8 prime numbers
  8.9 Python
  8.10 Sqrt Decomposition
  8.11 Latitude Longitude (Stanford)
  8.12 Week day
9 Math Extra
  9.1 Combinatorial formulas
  9.2 Number theory identities
  9.3 Stirling Numbers of the second kind
  9.4 Burnside's Lemma
  9.5 Numerical integration

1 Flags + Template + vimrc

1.1 Flags

g++ -fsanitize=address,undefined -fno-omit-frame-pointer -g -Wall -Wshadow -std=c++17 -Wno-unused-result -Wno-sign-compare -Wno-char-subscripts

1.2 Template

#include <bits/stdc++.h>
using namespace std;

#define st first
#define nd second
#define mp make_pair
#define cl(x, v) memset((x), (v), sizeof(x))
#define gcd(x,y) __gcd((x),(y))

#ifndef ONLINE_JUDGE
#define db(x) cerr << #x << " == " << x << endl
#define dbs(x) cerr << x << endl
#define _ << ", " <<
#else
#define db(x) ((void)0)
#define dbs(x) ((void)0)
#endif

typedef long long ll;
typedef long double ld;
typedef pair<int, int> pii;
typedef pair<int, pii> piii;
typedef pair<ll, ll> pll;
typedef pair<ll, pll> plll;

const ld EPS = 1e-9, PI = acos(-1.);
const ll LINF = 0x3f3f3f3f3f3f3f3f;
const int INF = 0x3f3f3f3f, MOD = 1e9+7;
const int N = 1e5+5;

int main() {
  ios_base::sync_with_stdio(false);
  cin.tie(NULL);
  //freopen("in", "r", stdin);
  //freopen("out", "w", stdout);
  return 0;
}
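The db, dbs and _ macros above only print when ONLINE_JUDGE is not defined. A minimal usage sketch (the variable names are illustrative, not part of the template):

// example of the debug macros from the template above
int cnt = 42; string name = "edge";
db(cnt);                     // prints: cnt == 42
dbs("state:" _ cnt _ name);  // _ expands to << ", " <<, so this prints: state:, 42, edge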
1.3 vimrc

syntax on
set et ts=2 sw=0 sts=-1 ai nu hls cindent
nnoremap ; :
vnoremap ; :
noremap <c-j> 15gj
noremap <c-k> 15gk
nnoremap <s-k> i<CR><ESC>
inoremap ,. <esc>
vnoremap ,. <esc>
nnoremap ,. <esc>

2 Data Structures

2.1 Bit Binary Search

// --- Bit Binary Search in O(log(n)) ---
const int M = 20;
const int N = 1 << M;

int lower_bound(int val){
  int ans = 0, sum = 0;
  for(int i = M - 1; i >= 0; i--){
    int x = ans + (1 << i);
    if(sum + bit[x] < val)
      ans = x, sum += bit[x];
  }
  return ans + 1;
}

2.2 Bit Range

struct BIT {
  ll b[N]={};
  ll sum(int x) {
    ll r=0;
    for(x+=2;x;x-=x&-x)
      r += b[x];
    return r;
  }
  void upd(int x, ll v) {
    for(x+=2;x<N;x+=x&-x)
      b[x]+=v;
  }
};
struct BITRange {
  BIT a,b;
  ll sum(int x) {
    return a.sum(x)*x+b.sum(x);
  }
  void upd(int l, int r, ll v) {
    a.upd(l,v), a.upd(r+1,-v);
    b.upd(l,-v*(l-1)), b.upd(r+1,v*r);
  }
};

2.3 Bit

// Fenwick Tree / Binary Indexed Tree
ll bit[N];

void add(int p, int v) {
  for (p += 2; p < N; p += p & -p) bit[p] += v;
}

ll query(int p) {
  ll r = 0;
  for (p += 2; p; p -= p & -p) r += bit[p];
  return r;
}

2.4 Bit 2D

// Thank you for the code tfg!
// O(N(logN)^2)
template<class T = int>
struct Bit2D{
  vector<T> ord;
  vector<vector<T>> fw, coord;

  // pts needs all points that will be used in the upd
  // if range upds remember to build with {x1, y1}, {x1, y2 + 1}, {x2 + 1, y1}, {x2 + 1, y2 + 1}
  Bit2D(vector<pair<T, T>> pts){
    sort(pts.begin(), pts.end());
    for(auto a : pts)
      if(ord.empty() || a.first != ord.back())
        ord.push_back(a.first);
    fw.resize(ord.size() + 1);
    coord.resize(fw.size());
    for(auto &a : pts)
      swap(a.first, a.second);
    sort(pts.begin(), pts.end());
    for(auto &a : pts){
      swap(a.first, a.second);
      for(int on = std::upper_bound(ord.begin(), ord.end(), a.first) - ord.begin(); on < fw.size(); on += on & -on)
        if(coord[on].empty() || coord[on].back() != a.second)
          coord[on].push_back(a.second);
    }
    for(int i = 0; i < fw.size(); i++)
      fw[i].assign(coord[i].size() + 1, 0);
  }
  // point upd
  void upd(T x, T y, T v){
    for(int xx = upper_bound(ord.begin(), ord.end(), x) - ord.begin(); xx < fw.size(); xx += xx & -xx)
      for(int yy = upper_bound(coord[xx].begin(), coord[xx].end(), y) - coord[xx].begin(); yy < fw[xx].size(); yy += yy & -yy)
        fw[xx][yy] += v;
  }
  // point qry
  T qry(T x, T y){
    T ans = 0;
    for(int xx = upper_bound(ord.begin(), ord.end(), x) - ord.begin(); xx > 0; xx -= xx & -xx)
      for(int yy = upper_bound(coord[xx].begin(), coord[xx].end(), y) - coord[xx].begin(); yy > 0; yy -= yy & -yy)
        ans += fw[xx][yy];
    return ans;
  }
  // range qry
  T qry(T x1, T y1, T x2, T y2){
    return qry(x2, y2) - qry(x2, y1 - 1) - qry(x1 - 1, y2) + qry(x1 - 1, y1 - 1);
  }
  // range upd
  void upd(T x1, T y1, T x2, T y2, T v) {
    upd(x1, y1, v);
    upd(x1, y2 + 1, -v);
    upd(x2 + 1, y1, -v);
    upd(x2 + 1, y2 + 1, v);
  }
};

2.5 Centroid Decomposition

// Centroid decomposition
vector<int> adj[N];
int forb[N], sz[N], par[N];
int n, m;
unordered_map<int, int> dist[N];

void dfs(int u, int p) {
  sz[u] = 1;
  for(int v : adj[u]) {
    if(v != p and !forb[v]) {
      dfs(v, u);
      sz[u] += sz[v];
    }
  }
}

int find_cen(int u, int p, int qt) {
  for(int v : adj[u]) {
    if(v == p or forb[v]) continue;
    if(sz[v] > qt / 2) return find_cen(v, u, qt);
  }
  return u;
}

void getdist(int u, int p, int cen) {
  for(int v : adj[u]) {
    if(v != p and !forb[v]) {
      dist[cen][v] = dist[v][cen] = dist[cen][u] + 1;
      getdist(v, u, cen);
    }
  }
}

void decomp(int u, int p) {
  dfs(u, -1);
  int cen = find_cen(u, -1, sz[u]);
  forb[cen] = 1;
  par[cen] = p;
  dist[cen][cen] = 0;
  getdist(cen, -1, cen);
  for(int v : adj[cen]) if(!forb[v])
    decomp(v, cen);
}

// main
decomp(1, -1);

2.6 Color Update

// Color Update - O(q log n)
// Heavily inspired by Um_nik's implementation
// q -> number of inserts

struct ColorUpdate {
  struct Seg {
    int l, r, c;
    Seg(int _l = 0, int _r = 0, int _c = 0) : l(_l), r(_r), c(_c) {};
    bool operator<(const Seg& b) const { return l < b.l; }
  };
  set<Seg> segs;

  void cut(int x) {
    auto it = segs.lower_bound({ x, 0, 0 });
    if (it == segs.begin()) return;
    it--;
    if (it->r == x - 1) return;
    Seg s = *it;
    segs.erase(it);
    segs.insert(Seg(s.l, x - 1, s.c));
    segs.insert(Seg(x, s.r, s.c));
  }

  void add(int l, int r, int c) {
    cut(l), cut(r + 1);
    Seg s(l, r, c);
    auto it = segs.lower_bound(s);
    while (it != segs.end() and it->l <= s.r) {
      auto it2 = it++;
      segs.erase(it2);
    }
    segs.insert(s);
  }
};
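A short usage sketch for ColorUpdate (assumed workflow, not from the notebook): after the inserts, segs always holds the current coloring as disjoint, sorted intervals, so it can be scanned directly.

ColorUpdate cu;
cu.add(1, 10, 2);              // paint [1, 10] with color 2
cu.add(4, 6, 5);               // repaint [4, 6]; [1, 3] and [7, 10] keep color 2
for (auto &s : cu.segs)        // iterates [1,3]->2, [4,6]->5, [7,10]->2
  printf("[%d, %d] color %d\n", s.l, s.r, s.c);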
set<Seg> segs; int hld_query(int u, int v) { typedef long long ll;
int l = lca(u, v);
void cut(int x) { return mult(query_up(u, l), query_up(v, l)); template<int N> struct Seg{
auto it = segs.lower_bound({ x, 0, 0 }); } ll s[4*N], lazy[4*N];
if (it == segs.begin()) return; void build(int no = 1, int l = 0, int r = N){
it--; if(r-l==1){
if (it->r == x - 1) return; s[no] = 0;
Seg s = *it; return;
segs.erase(it);
segs.insert(Seg(s.l, x - 1, s.c));
2.8 Heavy-Light Decomposition }
int mid = (l+r)/2;
segs.insert(Seg(x, s.r, s.c)); build(2*no,l,mid);
} build(2*no+1,mid,r);
// Heavy-Light Decomposition s[no] = max(s[2*no],s[2*no+1]);
void add(int l, int r, int c) { vector<int> adj[N]; }
cut(l), cut(r + 1); int par[N], h[N]; Seg(){ //build da HLD tem de ser assim, pq chama sem os
Seg s(l, r, c); parametros
auto it = segs.lower_bound(s); int chainno, chain[N], head[N], chainpos[N], chainsz[N], pos[N], build();
while (it != segs.end() and it->l <= s.r) { arrsz; }
auto it2 = it++; int sc[N], sz[N]; void updlazy(int no, int l, int r, ll x){
segs.erase(it2); s[no] += x;
} void dfs(int u) { lazy[no] += x;
segs.insert(s); sz[u] = 1, sc[u] = 0; // nodes 1-indexed (0-ind: sc[u]=-1) }
} for (int v : adj[u]) if (v != par[u]) { void pass(int no, int l, int r){
}; par[v] = u, h[v] = h[u]+1, dfs(v); int mid = (l+r)/2;
sz[u]+=sz[v]; updlazy(2*no,l,mid,lazy[no]);
if (sz[sc[u]] < sz[v]) sc[u] = v; // 1-indexed (0-ind: sc[u updlazy(2*no+1,mid,r,lazy[no]);
]<0 or ...) lazy[no] = 0;
} }
2.7 Heavy-Light Decomposition (new) } void upd(int lup, int rup, ll x, int no = 1, int l = 0, int r =
N){
void hld(int u) { if(rup<=l or r<=lup) return;
if (!head[chainno]) head[chainno] = u; // 1-indexed if(lup<=l and r<=rup){
vector<int> adj[N]; chain[u] = chainno; updlazy(no,l,r,x);
int sz[N], nxt[N]; chainpos[u] = chainsz[chainno]; return;
int h[N], par[N]; chainsz[chainno]++; }
int in[N], rin[N], out[N]; pos[u] = ++arrsz; pass(no,l,r);
int t; int mid = (l+r)/2;
if (sc[u]) hld(sc[u]); upd(lup,rup,x,2*no,l,mid);
void dfs_sz(int u = 1){ upd(lup,rup,x,2*no+1,mid,r);
sz[u] = 1; for (int v : adj[u]) if (v != par[u] and v != sc[u]) s[no] = max(s[2*no],s[2*no+1]);
for(auto &v : adj[u]) if(v != par[u]) { chainno++, hld(v); }
h[v] = h[u] + 1; } ll qry(int lq, int rq, int no = 1, int l = 0, int r = N){
par[v] = u; if(rq<=l or r<=lq) return -LLONG_MAX;
int lca(int u, int v) { if(lq<=l and r<=rq){
dfs_sz(v); while (chain[u] != chain[v]) { return s[no];
sz[u] += sz[v]; if (h[head[chain[u]]] < h[head[chain[v]]]) swap(u, v); }
if(sz[v] > sz[adj[u][0]]) u = par[head[chain[u]]]; pass(no,l,r);
swap(v, adj[u][0]); } int mid = (l+r)/2;
} if (h[u] > h[v]) swap(u, v); return max(qry(lq,rq,2*no,l,mid),qry(lq,rq,2*no+1,mid,r));
} return u; }
} };
void dfs_hld(int u = 1){
in[u] = t++; int query_up(int u, int v) { template<int N, bool IN_EDGES> struct HLD {
rin[in[u]] = u; if (u == v) return 0; int t;
for(auto v : adj[u]) if(v != par[u]) { int ans = -1; vector<int> g[N];
nxt[v] = (v == adj[u][0] ? nxt[u] : v); while (1) { int pai[N], sz[N], d[N];
dfs_hld(v); if (chain[u] == chain[v]) { int root[N], pos[N]; /// vi rpos;
} if (u == v) break; void ae(int a, int b) { g[a].push_back(b), g[b].
ans = max(ans, query(1, 1, n, chainpos[v]+1, chainpos[u])) push_back(a); }
out[u] = t - 1; ; void dfsSz(int no = 0) {
} break; if (˜pai[no]) g[no].erase(find(all(g[no]),pai[no
} ]));
int lca(int u, int v){ sz[no] = 1;
while(nxt[u] != nxt[v]){ ans = max(ans, query(1, 1, n, chainpos[head[chain[u]]], for(auto &it : g[no]) {
if(h[nxt[u]] < h[nxt[v]]) swap(u, v); chainpos[u])); pai[it] = no; d[it] = d[no]+1;
u = par[nxt[u]]; u = par[head[chain[u]]]; dfsSz(it); sz[no] += sz[it];
} } if (sz[it] > sz[g[no][0]]) swap(it, g[no
return ans; ][0]);
if(h[u] > h[v]) swap(u, v); } }
return u; }
} int query(int u, int v) { void dfsHld(int no = 0) {
int l = lca(u, v); pos[no] = t++; /// rpos.pb(no);
int query_up(int u, int v) { return max(query_up(u, l), query_up(v, l)); for(auto &it : g[no]) {
if(u == v) return 1; } root[it] = (it == g[no][0] ? root[no] :
int ans = 0; it);
while(1){ dfsHld(it); }
if(nxt[u] == nxt[v]){ }
if(u == v) break; void init() {
ans = max(ans, query(1, 0, n - 1, in[v] + 1, in[u]));
break;
2.9 Heavy-Light Decomposition root[0] = d[0] = t = 0; pai[0] = -1;
dfsSz(); dfsHld(); }
} (Lamarca) Seg<N> tree; //lembrar de ter build da seg sem nada
template <class Op>
ans = max(ans, query(1, 0, n - 1, in[nxt[u]], in[u])); void processPath(int u, int v, Op op) {
u = par[nxt[u]]; for (; root[u] != root[v]; v = pai[root[v]]) {
} #include <bits/stdc++.h> if (d[root[u]] > d[root[v]]) swap(u, v);
using namespace std; op(pos[root[v]], pos[v]); }
return ans; if (d[u] > d[v]) swap(u, v);
} #define fr(i,n) for(int i = 0; i<n; i++) op(pos[u]+IN_EDGES, pos[v]);
}

3
#define all(v) (v).begin(),(v).end()
IME++
/*
if (yl <= xl && yr <= xr) {
m[t] = nm, b[t] = nb; return;
2.12 Minimum Queue
void changeNode(int v, node val){ }
tree.upd(pos[v],val); int mid = (l + r) / 2; // O(1) complexity for all operations, except for clear,
}*/ update(t<<1, l, mid, nm, nb); // which could be done by creating another deque and using swap
void modifySubtree(int v, int val) { update(1+(t<<1), mid+1, r, nm, nb);
tree.upd(pos[v]+IN_EDGES,pos[v]+sz[v],val); } struct MinQueue {
} public: int plus = 0;
ll querySubtree(int v){ LiChao(ll *st, ll *en) : x(st) { int sz = 0;
return tree.qry(pos[v]+IN_EDGES,pos[v]+sz[v]); sz = int(en - st); deque<pair<int, int>> dq;
} for(n = 1; n < sz; n <<= 1);
void modifyPath(int u, int v, int val) { m.assign(2*n, 0); b.assign(2*n, -INF); bool empty() { return dq.empty(); }
processPath(u,v,[this, &val](int l,int r) { } void clear() { plus = 0; sz = 0; dq.clear(); }
tree.upd(l,r+1,val); }); void insert_line(ll nm, ll nb) { void add(int x) { plus += x; } // Adds x to every element in
} update(1, 0, n-1, nm, nb); the queue
ll queryPath(int u, int v) { //modificacoes geralmente } int min() { return dq.front().first + plus; } // Returns the
vem aqui (para hld soma) ll query(int i) { minimum element in the queue
ll res = -LLONG_MAX; processPath(u,v,[this,&res ll ans = -INF; int size() { return sz; }
](int l,int r) { for(int t = i+n; t; t >>= 1)
res = max(tree.qry(l,r+1),res); }); ans = max(ans, m[t] * x[i] + b[t]); void push(int x) {
return res; return ans; x -= plus;
} } int amt = 1;
}; }; while (dq.size() and dq.back().first >= x)
amt += dq.back().second, dq.pop_back();
//solves https://www.hackerrank.com/challenges/subtrees-and- /* dq.push_back({ x, amt });
paths/problem * UVa 12524 sz++;
//other problems here: https://blog.anudeep2011.com/heavy-light- */ }
decomposition/
ll w[MAXN], x[MAXN], A[MAXN], B[MAXN], dp[MAXN][MAXN]; void pop() {
const int N = 1e5+10; dq.front().second--, sz--;
char str[100]; int main(){ if (!dq.front().second) dq.pop_front();
int main(){ int N, K; }
HLD<N,false> hld; while(scanf("%d %d", &N, &K)!=EOF) { };
int n; for(int i=0; i<N; i++){
cin >> n; scanf("%lld %lld", x+i, w+i);
A[i] = w[i] + (i>0 ? A[i-1] : 0);
fr(i,n-1){ B[i] = w[i]*x[i] + (i>0 ? B[i-1] : 0);
int u, v; dp[i][1] = x[i]*A[i] - B[i];
scanf("%d%d", &u, &v); } 2.13 Ordered Set
u--,v--; for(int k=2; k<=K; k++){
hld.ae(u,v); dp[0][k] = 0;
} LiChao lc(x, x+N); #include<bits/stdc++.h>
hld.init(); for(int i=1; i<N; i++){ #include <ext/pb_ds/assoc_container.hpp>
int q; lc.insert_line(A[i-1], -dp[i-1][ using namespace std;
scanf("%d", &q); k-1]-B[i-1]); using namespace __gnu_pbds;
fr(qq,q){ dp[i][k] = x[i]*A[i] - B[i] - lc
scanf("%s", str); .query(i); typedef tree<int, null_type, less<int>, rb_tree_tag,
if(str[0]==’a’){ } tree_order_statistics_node_update> ordered_set;
int t, val; }
scanf("%d%d", &t, &val); printf("%lld\n", dp[N-1][K]); ordered_set s;
t--; } s.insert(2), s.insert(3), s.insert(7), s.insert(9);
hld.modifySubtree(t,val); return 0;
} else{ } //find_by_order returns an iterator to the element at a given
int u, v; position
scanf("%d%d", &u, &v); auto x = s.find_by_order(2);
u--,v--; cout << *x << "\n"; // 7
printf("%lld\n", hld.queryPath(u,v));
} //order_of_key returns the position of a given element
} 2.11 Merge Sort Tree cout << s.order_of_key(7) << "\n"; // 2
}
//If the element does not appear in the set, we get the position
that the element would have in the set
// Mergesort Tree - Time <O(nlogn), O(logˆ2n)> - Memory O(nlogn) cout << s.order_of_key(6) << "\n"; // 2
// Mergesort Tree is a segment tree that stores the sorted cout << s.order_of_key(8) << "\n"; // 3
subarray
2.10 Lichao Tree (ITA) // on each node.
vi st[4*N];

#include <cstdio>
#include <vector>
void build(int p, int l, int r) {
if (l == r) { st[p].pb(s[l]); return; } 2.14 Dynamic Segment Tree (Lazy Up-
build(2*p, l, (l+r)/2);
#define INF 0x3f3f3f3f3f3f3f3f
#define MAXN 1009 build(2*p+1, (l+r)/2+1, r); date)
using namespace std; st[p].resize(r-l+1);
merge(st[2*p].begin(), st[2*p].end(),
typedef long long ll; st[2*p+1].begin(), st[2*p+1].end(), #include <bits/stdc++.h>
st[p].begin());
/* } /* tested:
* LiChao Segment Tree int query(int p, int l, int r, int i, int j, int a, int b) {
https://www.spoj.com/problems/BGSHOOT/
*/ if (j < l or i > r) return 0;
ref:
https://maratona.ic.unicamp.br/MaratonaVerao2022/slides/
class LiChao { if (i <= l and j >= r) AulaSummer-SegmentTree-Aula2.pdf
vector<ll> m, b; return upper_bound(st[p].begin(), st[p].end(), b) -
lower_bound(st[p].begin(), st[p].end(), a); */
int n, sz; ll *x; vector<int> e, d, mx, lazy;
#define gx(i) (i < sz ? x[i] : x[sz-1]) return query(2*p, l, (l+r)/2, i, j, a, b) + //begin creating node 0, then start your segment tree creating
void update(int t, int l, int r, ll nm, ll nb) { query(2*p+1, (l+r)/2+1, r, i, j, a, b); node 1
ll xl = nm * gx(l) + nb, xr = nm * gx(r) + nb; } int create(){
ll yl = m[t] * gx(l) + b[t], yr = m[t] * gx(r) + mx.push_back(0);
b[t]; lazy.push_back(0);

4
if (yl >= xl && yr >= xr) return; e.push_back(0);
IME++
d.push_back(0);
return mx.size() - 1;
https://www.spoj.com/problems/ORDERSET/
https://www.eolymp.com/en/contests/8463/problems/72212
2.17 Mod Segment Tree
} https://codeforces.com/contest/474/problem/E
https://codeforces.com/problemset/problem/960/F // SegTree with mod
void push(int pos, int ini, int fim){ ref: // op1 (l, r) -> sum a[i], i = { l .. r }
if(pos == 0) return; https://maratona.ic.unicamp.br/MaratonaVerao2022/slides/ // op2 (l, r, x) -> a[i] = a[i] mod x, i = { l .. r }
if (lazy[pos]) { AulaSummer-SegmentTree-Aula2.pdf // op3 (idx, x) -> a[idx] = x;
mx[pos] += lazy[pos]; */
// RMQ (max/min) -> update: = lazy[p], incr: const int N = 1e5 + 5;
+= lazy[p] vector<int> e, d, mn;
// RSQ (sum) -> update: = (r-l+1)*lazy[p], incr: //begin creating node 0, then start your segment tree creating struct segTreeNode { ll sum, mx, mn, lz = -1; };
+= (r-l+1)*lazy[p] node 1
// Count lights on -> flip: = (r-l+1)-st[p]; int create(){ int n, m;
if (ini != fim) { mn.push_back(0); ll a[N];
if(e[pos] == 0){ e.push_back(0); segTreeNode st[4 * N];
int aux = create(); d.push_back(0);
e[pos] = aux; return mn.size() - 1; void push(int p, int l, int r) {
} } if (st[p].lz != -1) {
if(d[pos] == 0){
int aux = create(); void update(int pos, int ini, int fim, int id, int val){ st[p].mx = st[p].mn = st[p].lz;
d[pos] = aux; if(id < ini || id > fim) return; st[p].sum = (r - l + 1) * st[p].lz;
}
lazy[e[pos]] += lazy[pos]; if(ini == fim){ if (l != r) st[2 * p].lz = st[2 * p + 1].lz = st[p].lz;
lazy[d[pos]] += lazy[pos]; mn[pos] = val; st[p].lz = -1;
// update: lazy[2*p] = lazy[p], lazy[2*p+1] = return; }
lazy[p]; } }
// increment: lazy[2*p] += lazy[p], lazy[2*p+1] +=
lazy[p]; int m = (ini + fim) >> 1; void merge(int p) {
// flip: lazy[2*p] ˆ= 1, lazy[2*p+1] ˆ= if(id <= m){ st[p].mx = max(st[2 * p].mx, st[2 * p + 1].mx);
1; if(e[pos] == 0){ st[p].mn = min(st[2 * p].mn, st[2 * p + 1].mn);
} int aux = create(); st[p].sum = st[2 * p].sum + st[2 * p + 1].sum;
lazy[pos] = 0; e[pos] = aux; }
} }
} update(e[pos], ini, m, id, val); void build(int p = 1, int l = 1, int r = n) {
} if (l == r) {
void update(int pos, int ini, int fim, int p, int q, int val){ else{ st[p].mn = st[p].mx = st[p].sum = a[l];
if(pos == 0) return; if(d[pos] == 0){ return;
int aux = create(); }
push(pos, ini, fim); d[pos] = aux;
} int mid = (l + r) >> 1;
if(q < ini || p > fim) return; update(d[pos], m + 1, fim, id, val); build(2 * p, l, mid);
} build(2 * p + 1, mid + 1, r);
if(p <= ini and fim <= q){
lazy[pos] += val; mn[pos] = min(mn[e[pos]], mn[d[pos]]); merge(p);
// update: lazy[p] = k; } }
// increment: lazy[p] += k;
// flip: lazy[p] = 1; int query(int pos, int ini, int fim, int p, int q){ ll query(int i, int j, int p = 1, int l = 1, int r = n) {
push(pos, ini, fim); if(q < ini || p > fim) return INT_MAX; push(p, l, r);
return; if (r < i or l > j) return 0ll;
} if(pos == 0) return 0; if (i <= l and r <= j) return st[p].sum;
int mid = (l + r) >> 1;
int m = (ini + fim) >> 1; if(p <= ini and fim <= q) return mn[pos]; return query(i, j, 2 * p, l, mid) + query(i, j, 2 * p + 1, mid
if(e[pos] == 0){ + 1, r);
int aux = create(); int m = (ini + fim) >> 1; }
e[pos] = aux; return min(query(e[pos], ini, m, p, q), query(d[pos], m + 1,
} fim, p, q)); void module_op(int i, int j, ll x, int p = 1, int l = 1, int r =
update(e[pos], ini, m, p, q, val); } n) {
if(d[pos] == 0){ push(p, l, r);
int aux = create(); if (r < i or l > j or st[p].mx < x) return;
d[pos] = aux; if (i <= l and r <= j and st[p].mx == st[p].mn) {
} st[p].lz = st[p].mx % x;
update(d[pos], m + 1, fim, p, q, val); push(p, l, r);
}
mx[pos] = max(mx[e[pos]], mx[d[pos]]); 2.16 Iterative Segment Tree return;
}
int mid = (l + r) >> 1;
int query(int pos, int ini, int fim, int p, int q){ module_op(i, j, x, 2 * p, l, mid);
if(pos == 0) return 0; int n; // Array size module_op(i, j, x, 2 * p + 1, mid + 1, r);
int st[2*N];
push(pos, ini, fim); merge(p);
int query(int a, int b) { }
a += n; b += n;
if(q < ini || p > fim) return 0; int s = 0; void set_op(int i, int j, ll x, int p = 1, int l = 1, int r = n)
while (a <= b) { {
if(p <= ini and fim <= q) return mx[pos]; if (a%2 == 1) s += st[a++]; push(p, l, r);
if (b%2 == 0) s += st[b--]; if (r < i or l > j) return;
int m = (ini + fim) >> 1; a /= 2; b /= 2;
return max(query(e[pos], ini, m, p, q) , query(d[pos], m + if (i <= l and r <= j) {
} st[p].lz = x;
1, fim, p, q)); return s;
} push(p, l, r);
} return;
}
void update(int p, int val) { int mid = (l + r) >> 1;
p += n; set_op(i, j, x, 2 * p, l, mid);
st[p] += val;
2.15 Dynamic Segment Tree for (p /= 2; p >= 1; p /= 2)
set_op(i, j, x, 2 * p + 1, mid + 1, r);
st[p] = st[2*p]+st[2*p+1]; merge(p);
} }
#include <bits/stdc++.h>

5
/* tested:
IME++
2.18 Persistent Segment Tree (Naum) int update(int u, int i, int v) {
if (i < li[u] or ri[u] < i) return u;
void build(int p = 1, int l = 1, int r = n) {
if (l == r) { st[p] = v[l]; return; }
build(2*p, l, (l+r)/2);
// Persistent Segment Tree clone(++stsz, u); build(2*p+1, (l+r)/2+1, r);
int n; u = stsz; st[p] = min(st[2*p], st[2*p+1]); // RMQ -> min/max, RSQ -> +
int rcnt; rc[u] = update(rc[u], i, v); }
int lc[M], rc[M], st[M]; rc[u] = update(rc[u], i, v);
void push(int p, int l, int r) {
int update(int p, int l, int r, int i, int v) { if (li[u] == ri[u]) st[u] += v; if (lz[p]) {
int rt = ++rcnt; else st[u] = st[rc[u]] + st[rc[u]]; st[p] = lz[p];
if (l == r) { st[rt] = v; return rt; } // RMQ -> update: = lz[p], increment: += lz[p]
return u; // RSQ -> update: = (r-l+1)*lz[p], increment: += (r-l+1)*lz[
int mid = (l+r)/2; } p]
if (i <= mid) lc[rt] = update(lc[p], l, mid, i, v), rc[rt] = if(l!=r) lz[2*p] = lz[2*p+1] = lz[p]; // update: =,
rc[p]; increment +=
else rc[rt] = update(rc[p], mid+1, r, i, v), lc[rt] = lz[p] = 0;
lc[p]; }
st[rt] = st[lc[rt]] + st[rc[rt]]; 2.20 Struct Segment Tree }

return rt; int query(int i, int j, int p = 1, int l = 1, int r = n) {


} push(p, l, r);
// Segment Tree (range query and point update) if (l > j or r < i) return INF; // RMQ -> INF, RSQ -> 0
// Update - O(log n) if (l >= i and j >= r) return st[p];
int query(int p, int l, int r, int i, int j) { // Query - O(log n)
if (l > j or r < i) return 0; return min(query(i, j, 2*p, l, (l+r)/2),
// Memory - O(n) query(i, j, 2*p+1, (l+r)/2+1, r));
if (i <= l and r <= j) return st[p];
// RMQ -> min/max, RSQ -> +
struct Node { }
return query(lc[p], l, (l+r)/2, i, j)+query(rc[p], (l+r)/2+1, ll val;
r, i, j);
} void update(int i, int j, int v, int p = 1, int l = 1, int r = n
Node(ll _val = 0) : val(_val) {} ) {
Node(const Node& l, const Node& r) : val(l.val + r.val) {} push(p, l, r);
int main() {
scanf("%d", &n); if (l > j or r < i) return;
friend ostream& operator<<(ostream& os, const Node& a) { if (l >= i and j >= r) { lz[p] = v; push(p, l, r); return; }
for (int i = 1; i <= n; ++i) { os << a.val;
int a; update(i, j, v, 2*p, l, (l+r)/2);
return os; update(i, j, v, 2*p+1, (l+r)/2+1, r);
scanf("%d", &a); }
r[i] = update(r[i-1], 1, n, i, 1); st[p] = min(st[2*p], st[2*p+1]); // RMQ -> min/max, RSQ -> +
}; }
}
template <class T = Node, class U = int>
return 0; struct SimpleSegTree {
} int n;
vector<T> st;
SimpleSegTree(int _n) : n(_n), st(4 * n) {}
2.22 Segment Tree 2D
2.19 Persistent Segment Tree SimpleSegTree(vector<U>& v) : n((int)v.size()), st(4 * n) {
build(v, 1, 0, n - 1); // Segment Tree 2D - O(nlog(n)log(n)) of Memory and Runtime
} const int N = 1e8+5, M = 2e5+5;
int n, k=1, st[N], lc[N], rc[N];
// Persistent Segtree void build(vector<U>& v, int p, int l, int r) {
// Memory: O(n logn) if (l == r) { st[p] = T(v[l]); return; } void addx(int x, int l, int r, int u) {
// Operations: O(log n) int mid = (l + r) / 2; if (x < l or r < x) return;
build(v, 2 * p, l, mid);
int li[N], ri[N]; // [li(u), ri(u)] is the interval of node u build(v, 2 * p + 1, mid + 1, r); st[u]++;
int st[N], lc[N], rc[N]; // Value, left son and right son of st[p] = T(st[2 * p], st[2 * p + 1]); if (l == r) return;
node u }
int stsz; // Size of segment tree if(!rc[u]) rc[u] = ++k, lc[u] = ++k;
T query(int i, int j, int p, int l, int r) { addx(x, l, (l+r)/2, lc[u]);
// Returns root of initial tree. if (l >= i and j >= r) return st[p]; addx(x, (l+r)/2+1, r, rc[u]);
// i and j are the first and last elements of the tree. if (l > j or r < i) return T(); }
int init(int i, int j) { int mid = (l + r) / 2;
int v = ++stsz; return T(query(i, j, 2 * p, l, mid), query(i, j, 2 * p + 1, // Adds a point (x, y) to the grid.
li[v] = i, ri[v] = j; mid + 1, r)); void add(int x, int y, int l, int r, int u) {
} if (y < l or r < y) return;
if (i != j) {
rc[v] = init(i, (i+j)/2); T query(int i, int j) { return query(i, j, 1, 0, n - 1); } if (!st[u]) st[u] = ++k;
rc[v] = init((i+j)/2+1, j); addx(x, 1, n, st[u]);
st[v] = /* calculate value from rc[v] and rc[v] */; void update(int idx, U v, int p, int l, int r) {
} else { if (l == r) { st[p] = T(v); return; } if (l == r) return;
st[v] = /* insert initial value here */; int mid = (l + r) / 2;
} if (idx <= mid) update(idx, v, 2 * p, l, mid); if(!rc[u]) rc[u] = ++k, lc[u] = ++k;
else update(idx, v, 2 * p + 1, mid + 1, r); add(x, y, l, (l+r)/2, lc[u]);
return v; st[p] = T(st[2 * p], st[2 * p + 1]); add(x, y, (l+r)/2+1, r, rc[u]);
} } }
// Gets the sum from i to j from tree with root u void update(int idx, U v) { update(idx, v, 1, 0, n - 1); } int countx(int x, int l, int r, int u) {
int sum(int u, int i, int j) { }; if (!u or x < l) return 0;
if (j < li[u] or ri[u] < i) return 0; if (r <= x) return st[u];
if (i <= li[u] and ri[u] <= j) return st[u];
return sum(rc[u], i, j) + sum (rc[u], i, j); return countx(x, l, (l+r)/2, lc[u]) +
} countx(x, (l+r)/2+1, r, rc[u]);
// Copies node j into node i 2.21 Segment Tree }
void clone(int i, int j) { // Counts number of points dominated by (x, y)
li[i] = li[j], ri[i] = ri[j]; // Should be called with l = 1, r = n and u = 1
st[i] = st[j]; // Segment Tree (Range Query and Range Update) int count(int x, int y, int l, int r, int u) {
rc[i] = rc[j], rc[i] = rc[j]; // Update and Query - O(log n) if (!u or y < l) return 0;
} if (r <= y) return countx(x, 1, n, st[u]);
int n, v[N], lz[4*N], st[4*N];

6
// Sums v to index i from the tree with root u return count(x, y, l, (l+r)/2, lc[u]) +
IME++
count(x, y, (l+r)/2+1, r, rc[u]); } array() : top(v) {}
} T *alloc(const T &val = T()) {
for(int i = 1; i <= n; i++) cout << c[i] << " " << len[i] << " return &(*top++ = val);
\n"; }
} void dealloc(T *p) {}
};
2.23 Set Of Intervals template<class T, int MAXSIZE> struct stack {
T v[MAXSIZE], *spot[MAXSIZE], **top;
stack() {
// Set of Intervals
// Use when you have disjoint intervals
2.24 Sparse Table for(int i = 0; i < MAXSIZE; i++) {
spot[i] = v + i;
}
#include <bits/stdc++.h> top = spot + MAXSIZE;
const int N; }
using namespace std; const int M; //log2(N) T *alloc(const T &val = T()) {
int sparse[N][M]; return &(**--top = val);
const int N = 2e5 + 5;
}
void build() { void dealloc(T *p) {
#define pb push_back for(int i = 0; i < n; i++)
#define st first *top++ = p;
sparse[i][0] = v[i]; }
#define nd second
};
for(int j = 1; j < M; j++) }
typedef pair<int, int> pii; for(int i = 0; i < n; i++)
typedef pair<pii, int> piii; sparse[i][j] = namespace splay {
i + (1 << j - 1) < n template<class T> struct node {
int n, m, x, t; ? min(sparse[i][j - 1], sparse[i + (1 << j - 1)][j - 1])
set<piii> s; T *f, *c[2];
: sparse[i][j - 1]; int size;
set<pii> mosq; }
vector<piii> frogs; node() {
int c[N], len[N], p, b[N]; f = c[0] = c[1] = nullptr;
int query(int a, int b){ size = 1;
int pot = 32 - __builtin_clz(b - a) - 1; }
void in(int l, int r, int i) { return min(sparse[a][pot], sparse[b - (1 << pot) + 1][pot]);
vector<piii> add, rem; void push_down() {}
} void update() {
auto it = s.lower_bound({{l, 0}, 0});
if(it != s.begin()) it--; size = 1;
for(; it != s.end(); it++) { for(int t = 0; t < 2; t++) {
int ll = it->st.st; if(c[t]) {
int rr = it->st.nd; size += c[t]->size;
int idx = it->nd; 2.25 Sparse Table 2D }
}

if(ll > r) break; }


if(rr < l) continue; };
// 2D Sparse Table - <O(nˆ2 (log n) ˆ 2), O(1)> template<class T> struct reversible_node : node<T> {
if(ll < l) add.pb({{ll, l-1}, idx}); const int N = 1e3+1, M = 10;
if(rr > r) add.pb({{r+1, rr}, idx}); int r;
int t[N][N], v[N][N], dp[M][M][N][N], lg[N], n, m; reversible_node() : node<T>() {
rem.pb(*it);
} r = 0;
void build() { }
add.pb({{l, r}, i}); int k = 0;
for(auto x : rem) s.erase(x); void push_down() {
for(int i=1; i<N; ++i) { node<T>::push_down();
for(auto x : add) s.insert(x); if (1<<k == i/2) k++;
} if(r) {
lg[i] = k; for(int t = 0; t < 2; t++) {
} if(node<T>::c[t]) {
void process(int l, int idx) {
auto it2 = s.lower_bound({{l, 0}, 0}); node<T>::c[t]->reverse();
// Set base cases }
if(it2 != s.begin()) it2--; for(int x=0; x<n; ++x) for(int y=0; y<m; ++y) dp[0][0][x][y]
if(it2 != s.end() and it2->st.nd < l) it2++; r = 0;
= v[x][y]; }
for(int j=1; j<M; ++j) for(int x=0; x<n; ++x) for(int y=0; y }
mosq.insert({l, idx}); +(1<<j)<=m; ++y)
if(it2 == s.end() or !(it2->nd)) return; }
dp[0][j][x][y] = max(dp[0][j-1][x][y], dp[0][j-1][x][y void update() {
+(1<<j-1)]); node<T>::update();
vector<pii> rem;
int ll = it2->st.st, rr = it2->st.nd, id = it2->nd; }
// Calculate sparse table values void reverse() {
for(int i=1; i<M; ++i) for(int j=0; j<M; ++j) swap(node<T>::c[0], node<T>::c[1]);
auto it = mosq.lower_bound({ll, 0}); for(int x=0; x+(1<<i)<=n; ++x) for(int y=0; y+(1<<j)<=m;
for(; it != mosq.end(); it++) { r = r ˆ 1;
++y) }
if(it->st > rr) break; dp[i][j][x][y] = max(dp[i-1][j][x][y], dp[i-1][j][x
c[id]++; };
+(1<<i-1)][y]); template<class T, int MAXSIZE = (int)5e5, class alloc =
len[id] += b[it->nd]; }
rr += b[it->nd]; allocat::array<T, MAXSIZE + 2>> struct tree {
rem.pb(*it); alloc pool;
int query(int x1, int x2, int y1, int y2) { T *root;
} int i = lg[x2-x1+1], j = lg[y2-y1+1];
for(auto x : rem) mosq.erase(x); T *new_node(const T &val = T()) {
int m1 = max(dp[i][j][x1][y1], dp[i][j][x2-(1<<i)+1][y1]); return pool.alloc(val);
in(ll, rr, id); int m2 = max(dp[i][j][x1][y2-(1<<j)+1], dp[i][j][x2-(1<<i)
} }
+1][y2-(1<<j)+1]); tree() {
return max(m1, m2); root = new_node();
int main() { }
ios_base::sync_with_stdio(0), cin.tie(0); root->c[1] = new_node();
cin >> n >> m; root->size = 2;
for(int i = 1; i <= n; i++) { root->c[1]->f = root;
cin >> x >> t; }
void rotate(T *n) {
len[i] = t;
frogs.push_back({{x, x+t}, i}); 2.26 Splay Tree int v = n->f->c[0] == n;
} T *p = n->f, *m = n->c[v];
s.insert({{0, int(1e9)}, 0}); if(p->f) {
sort(frogs.begin(), frogs.end()); //amortized O(logn) for every operation p->f->c[p->f->c[1] == p] = n;
for(int i = frogs.size() - 1; i >= 0; i--) }
in(frogs[i].st.st, frogs[i].st.nd, frogs[i].nd); using namespace std; n->f = p->f;
n->c[v] = p;
for(int i = 1; i <= m; i++) { namespace allocat { p->f = n;
cin >> p >> b[i]; template<class T, int MAXSIZE> struct array { p->c[v ˆ 1] = m;
if(m) {

7
process(p, i); T v[MAXSIZE], *top;
IME++
m->f = p; struct node: splay::reversible_node<node> { bool cmp2(int a,int b)
} long long val, val_min, lazy; {
p->update(); node(long long v = 0) : splay::reversible_node<node>(), val( return y[a]<y[b];
n->update(); v) { }
} val_min = lazy = 0;
void splay(T *n, T *s = nullptr) { } bool cmp3(int a,int b)
while(n->f != s) { void add(long long v) { {
T *m = n->f, *l = m->f; val += v; return z[a]<z[b];
if(l == s) { val_min += v; }
rotate(n); lazy += v;
} else if((l->c[0] == m) == (m->c[0] == n)) { } void makekdtree(int node,int l,int r,int flag)
rotate(m); void push_down() { {
rotate(n); splay::reversible_node<node>::push_down(); if (l>r)
} else { for(int t = 0; t < 2; t++) { {
rotate(n); if(c[t]) { tree[node].max=-maxlongint;
rotate(n); c[t]->add(lazy); return;
} } }
} } int xl=maxlongint,xr=-maxlongint;
if(!s) { lazy = 0; int yl=maxlongint,yr=-maxlongint;
root = n; } int zl=maxlongint,zr=-maxlongint,maxc=-maxlongint;
} void update() { for (int i=l;i<=r;i++)
} splay::reversible_node<node>::update(); xl=min(xl,x[i]),xr=max(xr,x[i]),
int size() { val_min = val; yl=min(yl,y[i]),yr=max(yr,y[i]),
return root->size - 2; for(int t = 0; t < 2; t++) { zl=min(zl,z[i]),zr=max(zr,z[i]),
} if(c[t]) { maxc=max(maxc,wei[i]),
int walk(T *n, int &v, int &pos) { val_min = min(val_min, c[t]->val_min); xc[i]=x[i],yc[i]=y[i],zc[i]=z[i],wc[i]=wei[i],
n->push_down(); } biao[i]=i;
int s = n->c[0] ? n->c[0]->size : 0; } tree[node].flag=flag;
(v = s < pos) && (pos -= s + 1); } tree[node].xl=xl,tree[node].xr=xr,tree[node].yl=yl;
return s; }; tree[node].yr=yr,tree[node].zl=zl,tree[node].zr=zr;
} tree[node].max=maxc;
void insert(T *n, int pos) { const int N = 2e5 + 7; if (l==r) return;
T *c = root; splay::tree<node, N, allocat::stack<node, N + 2>> t; if (flag==0) sort(biao+l,biao+r+1,cmp1);
int v; if (flag==1) sort(biao+l,biao+r+1,cmp2);
pos++; // in main if (flag==2) sort(biao+l,biao+r+1,cmp3);
while(walk(c, v, pos), c->c[v] and (c = c->c[v])); for (int i=l;i<=r;i++)
c->c[v] = n; // to insert: x[i]=xc[biao[i]],y[i]=yc[biao[i]],
n->f = c; t.insert(t.new_node(node(x)), t.size()); z[i]=zc[biao[i]],wei[i]=wc[biao[i]];
splay(n); makekdtree(node*2,l,(l+r)/2,(flag+1)%3);
} //adding a certain value to a certain range makekdtree(node*2+1,(l+r)/2+1,r,(flag+1)%3);
T *find(int pos, int sp = true) { t.find_range(x - 1, y)->add(d); }
T *c = root;
int v; //reversing a certain range int getmax(int node,int xl,int xr,int yl,int yr,int zl,int zr)
pos++; t.find_range(x - 1, y)->reverse(); {
while((pos < walk(c, v, pos) or v) and (c = c->c[v]) xl=max(xl,tree[node].xl);
); //cycling to the right a certain range xr=min(xr,tree[node].xr);
if(sp) { d %= (y - x + 1); yl=max(yl,tree[node].yl);
splay(c); if(d) { yr=min(yr,tree[node].yr);
} node *right = t.find_range(y - d, y); zl=max(zl,tree[node].zl);
return c; right->f->c[1] = nullptr, right->f->update(), right->f->f-> zr=min(zr,tree[node].zr);
} update(), right->f = nullptr; if (tree[node].max==-maxlongint) return 0;
T *find_range(int posl, int posr) { t.insert(right, x - 1); if ((xr<tree[node].xl)||(xl>tree[node].xr)) return 0;
T *r = find(posr), *l = find(posl - 1, false); } if ((yr<tree[node].yl)||(yl>tree[node].yr)) return 0;
splay(l, r);
if(l->c[1]) { //inserting value p at position x + 1 if ((zr<tree[node].zl)||(zl>tree[node].zr)) return 0;
l->c[1]->push_down(); t.insert(t.new_node(node(p)), x); if ((tree[node].xl==xl)&&(tree[node].xr==xr)&&
} (tree[node].yl==yl)&&(tree[node].yr==yr)&&
return l->c[1]; //deleting a certain value/range (tree[node].zl==zl)&&(tree[node].zr==zr))
} t.erase_range(x - 1, y); return tree[node].max;
void insert_range(T **nn, int nn_size, int pos) { else
T *r = find(pos), *l = find(pos - 1, false), *c = l; //getting the minimum of a certain range (change this return max(getmax(node*2,xl,xr,yl,yr,zl,zr),
splay(l, r); accordingly) getmax(node*2+1,xl,xr,yl,yr,zl,
for(int i = 0; i < nn_size; i++) { t.find_range(x - 1, y)->val_min zr));
c->c[1] = nn[i]; }
nn[i]->f = c;
c = nn[i]; int main()
} {
// N 3D-rect with weights
for(int i = nn_size - 1; i >= 0; i--) {
nn[i]->update(); 2.27 KD Tree (Stanford) // find the maximum weight containing the given 3D-point
} return 0;
l->update(), r->update(), splay(nn[nn_size - 1]); }
} const int maxn=200005;
void dealloc(T *n) {
if(!n) { struct kdtree
return;
}
{
int xl,xr,yl,yr,zl,zr,max,flag; // flag=0:x axis 1:y 2: 2.28 Treap
dealloc(n->c[0]); z
dealloc(n->c[1]); } tree[5000005];
pool.dealloc(n); // Treap (probabilistic BST)
} int N,M,lastans,xq,yq; // O(logn) operations (supports lazy propagation)
void erase_range(int posl, int posr) { int a[maxn],pre[maxn],nxt[maxn];
T *n = find_range(posl, posr); int x[maxn],y[maxn],z[maxn],wei[maxn]; mt19937_64 llrand(random_device{}());
n->f->c[1] = nullptr, n->f->update(), n->f->f-> int xc[maxn],yc[maxn],zc[maxn],wc[maxn],hash[maxn],biao[maxn];
update(), n->f = nullptr; struct node {
dealloc(n); bool cmp1(int a,int b) int val;
} { int cnt, rev;
}; return x[a]<x[b]; int mn, mx, mindiff; // value-based treap only!
} } ll pri;

8
  node* l;
  node* r;

  node() {}
  node(int x) : val(x), cnt(1), rev(0), mn(x), mx(x), mindiff(INF), pri(llrand()), l(0), r(0) {}
};

struct treap {
  node* root;
  treap() : root(0) {}
  ~treap() { clear(); }

  int cnt(node* t) { return t ? t->cnt : 0; }
  int mn (node* t) { return t ? t->mn : INF; }
  int mx (node* t) { return t ? t->mx : -INF; }
  int mindiff(node* t) { return t ? t->mindiff : INF; }

  void clear() { del(root); }
  void del(node* t) {
    if (!t) return;
    del(t->l); del(t->r);
    delete t;
    t = 0;
  }

  void push(node* t) {
    if (!t or !t->rev) return;
    swap(t->l, t->r);
    if (t->l) t->l->rev ^= 1;
    if (t->r) t->r->rev ^= 1;
    t->rev = 0;
  }

  void update(node*& t) {
    if (!t) return;
    t->cnt = cnt(t->l) + cnt(t->r) + 1;
    t->mn = min(t->val, min(mn(t->l), mn(t->r)));
    t->mx = max(t->val, max(mx(t->l), mx(t->r)));
    t->mindiff = min(mn(t->r) - t->val, min(t->val - mx(t->l), min(mindiff(t->l), mindiff(t->r))));
  }

  node* merge(node* l, node* r) {
    push(l); push(r);
    node* t;
    if (!l or !r) t = l ? l : r;
    else if (l->pri > r->pri) l->r = merge(l->r, r), t = l;
    else r->l = merge(l, r->l), t = r;
    update(t);
    return t;
  }

  // pos: amount of nodes in the left subtree or
  // the smallest position of the right subtree in a 0-indexed array
  pair<node*, node*> split(node* t, int pos) {
    if (!t) return {0, 0};
    push(t);
    if (cnt(t->l) < pos) {
      auto x = split(t->r, pos-cnt(t->l)-1);
      t->r = x.st;
      update(t);
      return { t, x.nd };
    }
    auto x = split(t->l, pos);
    t->l = x.nd;
    update(t);
    return { x.st, t };
  }

  // Position-based treap
  // used when the values are just additional data
  // the positions are known when it's built, after that you
  // query to get the values at specific positions
  // 0-indexed array!
  /*
  void insert(int pos, int val) {
    push(root);
    node* x = new node(val);
    auto t = split(root, pos);
    root = merge(merge(t.st, x), t.nd);
  }

  void erase(int pos) {
    auto t1 = split(root, pos);
    auto t2 = split(t1.nd, 1);
    delete t2.st;
    root = merge(t1.st, t2.nd);
  }

  int get_val(int pos) { return get_val(root, pos); }
  int get_val(node* t, int pos) {
    push(t);
    if (cnt(t->l) == pos) return t->val;
    if (cnt(t->l) < pos) return get_val(t->r, pos-cnt(t->l)-1);
    return get_val(t->l, pos);
  }
  */
  // -------------------

  // Value-based treap
  // used when the values needs to be ordered
  int order(node* t, int val) {
    if (!t) return 0;
    push(t);
    if (t->val < val) return cnt(t->l) + 1 + order(t->r, val);
    return order(t->l, val);
  }

  bool has(node* t, int val) {
    if (!t) return 0;
    push(t);
    if (t->val == val) return 1;
    return has((t->val > val ? t->l : t->r), val);
  }

  void insert(int val) {
    if (has(root, val)) return; // avoid repeated values
    push(root);
    node* x = new node(val);
    auto t = split(root, order(root, val));
    root = merge(merge(t.st, x), t.nd);
  }

  void erase(int val) {
    if (!has(root, val)) return;
    auto t1 = split(root, order(root, val));
    auto t2 = split(t1.nd, 1);
    delete t2.st;
    root = merge(t1.st, t2.nd);
  }

  // Get the maximum difference between values
  int querymax(int i, int j) {
    if (i == j) return -1;
    auto t1 = split(root, j+1);
    auto t2 = split(t1.st, i);
    int ans = mx(t2.nd) - mn(t2.nd);
    root = merge(merge(t2.st, t2.nd), t1.nd);
    return ans;
  }

  // Get the minimum difference between values
  int querymin(int i, int j) {
    if (i == j) return -1;
    auto t2 = split(root, j+1);
    auto t1 = split(t2.st, i);
    int ans = mindiff(t1.nd);
    root = merge(merge(t1.st, t1.nd), t2.nd);
    return ans;
  }
  // ------------------

  void reverse(int l, int r) {
    auto t2 = split(root, r+1);
    auto t1 = split(t2.st, l);
    t1.nd->rev = 1;
    root = merge(merge(t1.st, t1.nd), t2.nd);
  }

  void print() { print(root); printf("\n"); }
  void print(node* t) {
    if (!t) return;
    push(t);
    print(t->l);
    printf("%d ", t->val);
    print(t->r);
  }
};

2.29 Trie

// Trie <O(|S|), O(|S|)>
int trie[N][26], trien = 1;

int add(int u, char c){
  c -= 'a';
  if (trie[u][c]) return trie[u][c];
  return trie[u][c] = ++trien;
}

//to add a string s in the trie
int u = 1;
for(char c : s) u = add(u, c);

2.30 Union Find

/*
**************************************************************************
* DSU (DISJOINT SET UNION / UNION-FIND)
*
* Time complexity: Unite - O(alpha n)
*                  Find - O(alpha n)
*
* Usage: find(node), unite(node1, node2), sz[find(node)]
*
* Notation: par: vector of parents
*           sz: vector of subsets sizes, i.e. size of the subset a node is in
**************************************************************************
*/
int par[N], sz[N];

int find(int a) { return par[a] == a ? a : par[a] = find(par[a]); }

void unite(int a, int b) {
  if ((a = find(a)) == (b = find(b))) return;
  if (sz[a] < sz[b]) swap(a, b);
  par[b] = a; sz[a] += sz[b];
}

// in main
for (int i = 1; i <= n; i++) par[i] = i, sz[i] = 1;

2.31 Union Find (Partial Persistent)

/*
**************************************************************************
* DSU (DISJOINT SET UNION / UNION-FIND)
*
* Time complexity: Unite - O(log n)
*                  Find - O(log n)
*
* Usage: find(node), unite(node1, node2), sz[find(node)]
*
* Notation: par: vector of parents
*           sz: vector of subsets sizes, i.e. size of the subset a node is in
*           his: history: time when it got a new parent
*           t: current time
**************************************************************************
*/

int t, par[N], sz[N], his[N];

int find(int a, int t){
  if(par[a] == a) return a;
  if(his[a] > t) return a;
  return find(par[a], t);
}

void unite(int a, int b){
  if(find(a, t) == find(b, t)) return;
  a = find(a, t), b = find(b, t), t++;
  if(sz[a] < sz[b]) swap(a, b);
  sz[a] += sz[b], par[b] = a, his[b] = t;
}

//in main
for(int i = 0; i < N; i++) par[i] = i, sz[i] = 1, his[i] = 0;
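A small usage sketch for the partial persistent DSU above (assumed, not from the notebook): each unite() advances the global time t, and find(a, time) answers the query as it was at that time.

// assumes par/sz/his were initialized as in the snippet above
unite(1, 2);                             // recorded at time 1
unite(2, 3);                             // recorded at time 2
bool now  = find(1, t) == find(3, t);    // true: 1 and 3 are connected at the current time
bool past = find(1, 1) == find(3, 1);    // false: at time 1 only {1, 2} had been united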
2.32 Union Find (Rollback)

/*
**************************************************************************
* DSU (DISJOINT SET UNION / UNION-FIND)
*
* Time complexity: Unite - O(alpha n)
*                  Rollback - O(1)
*                  Find - O(alpha n)
*
* Usage: find(node), unite(node1, node2), sz[find(node)]
*
* Notation: par: vector of parents
*           sz: vector of subsets sizes, i.e. size of the subset a node is in
*           sp: stack containing node and par from last op
*           ss: stack containing node and size from last op
**************************************************************************
*/
int par[N], sz[N];
stack <pii> sp, ss;

int find (int a) { return par[a] == a ? a : find(par[a]); }

void unite (int a, int b) {
  if ((a = find(a)) == (b = find(b))) return;
  if (sz[a] < sz[b]) swap(a, b);
  ss.push({a, sz[a]});
  sp.push({b, par[b]});
  sz[a] += sz[b];
  par[b] = a;
}

void rollback() {
  par[sp.top().st] = sp.top().nd; sp.pop();
  sz[ss.top().st] = ss.top().nd; ss.pop();
}

int main(){
  for (int i = 0; i < N; i++) par[i] = i, sz[i] = 1;
  return 0;
}

3 Dynamic Programming

3.1 Convex Hull Trick (emaxx)

struct Point{
  ll x, y;
  Point(ll x = 0, ll y = 0):x(x), y(y) {}
  Point operator-(Point p){ return Point(x - p.x, y - p.y); }
  Point operator+(Point p){ return Point(x + p.x, y + p.y); }
  Point ccw(){ return Point(-y, x); }
  ll operator%(Point p){ return x*p.y - y*p.x; }
  ll operator*(Point p){ return x*p.x + y*p.y; }
  bool operator<(Point p) const { return x == p.x ? y < p.y : x < p.x; }
};

pair<vector<Point>, vector<Point>> ch(Point *v){
  vector<Point> hull, vecs;
  for(int i = 0; i < n; i++){
    if(hull.size() and hull.back().x == v[i].x) continue;
    while(vecs.size() and vecs.back()*(v[i] - hull.back()) <= 0)
      vecs.pop_back(), hull.pop_back();
    if(hull.size())
      vecs.pb((v[i] - hull.back()).ccw());
    hull.pb(v[i]);
  }
  return {hull, vecs};
}

ll get(ll x) {
  Point query = {x, 1};
  auto it = lower_bound(vecs.begin(), vecs.end(), query, [](Point a, Point b) {
    return a%b > 0;
  });
  return query*hull[it - vecs.begin()];
}

3.2 Convex Hull Trick

// Convex Hull Trick
// ATTENTION: This is the maximum convex hull. If you need the minimum
// CHT use {-b, -m} and modify the query function.
// In case of floating point parameters swap long long with long double
typedef long long type;
struct line { type b, m; };

line v[N]; // lines from input
int n; // number of lines

// Sort slopes in ascending order (in main):
sort(v, v+n, [](line s, line t){
  return (s.m == t.m) ? (s.b < t.b) : (s.m < t.m); });

// nh: number of lines on convex hull
// pos: position for linear time search
// hull: lines in the convex hull
int nh, pos;
line hull[N];

bool check(line s, line t, line u) {
  // verify if it can overflow. If it can just divide using long double
  return (s.b - t.b)*(u.m - s.m) < (s.b - u.b)*(t.m - s.m);
}

// Add new line to convex hull, if possible
// Must receive lines in the correct order, otherwise it won't work
void update(line s) {
  // 1. if first lines have the same b, get the one with bigger m
  // 2. if line is parallel to the one at the top, ignore
  // 3. pop lines that are worse
  // 3.1 if you can do a linear time search, use
  // 4. add new line
  if (nh == 1 and hull[nh-1].b == s.b) nh--;
  if (nh > 0 and hull[nh-1].m >= s.m) return;
  while (nh >= 2 and !check(hull[nh-2], hull[nh-1], s)) nh--;
  pos = min(pos, nh);
  hull[nh++] = s;
}

type eval(int id, type x) { return hull[id].b + hull[id].m * x; }

// Linear search query - O(n) for all queries
// Only possible if the queries always move to the right
type query(type x) {
  while (pos+1 < nh and eval(pos, x) < eval(pos+1, x)) pos++;
  return eval(pos, x);
  // return -eval(pos, x); ATTENTION: Uncomment for minimum CHT
}

// Ternary search query - O(logn) for each query
/*
type query(type x) {
  int lo = 0, hi = nh-1;
  while (lo < hi) {
    int mid = (lo+hi)/2;
    if (eval(mid, x) > eval(mid+1, x)) hi = mid;
    else lo = mid+1;
  }
  return eval(lo, x);
  // return -eval(lo, x); ATTENTION: Uncomment for minimum CHT
}
*/

// better use geometry line_intersect (this assumes s and t are not parallel)
ld intersect_x(line s, line t) { return (t.b - s.b)/(ld)(s.m - t.m); }
ld intersect_y(line s, line t) { return s.b + s.m * intersect_x(s, t); }

3.3 Divide and Conquer Optimization

/*
**************************************************************************
* DIVIDE AND CONQUER OPTIMIZATION ( dp[i][k] = min j<k {dp[j][k-1] + C(j,i)} )
*
* Description: searches for bounds to optimal point using the monotonicity condition
* Condition: L[i][k] <= L[i+1][k]
*
* Time Complexity: O(K*N^2) becomes O(K*N*logN)
*
* Notation: dp[i][k]: optimal solution using k positions, until position i
*           L[i][k]: optimal point, smallest j which minimizes dp[i][k]
*           C(i,j): cost for splitting range [j,i] to j and i
**************************************************************************
*/

const int N = 1e3+5;

ll dp[N][N];

//Cost for using i and j
ll C(ll i, ll j);

void compute(ll l, ll r, ll k, ll optl, ll optr){
  // stop condition
  if(l > r) return;

  ll mid = (l+r)/2;
  //best : cost, pos
  pair<ll,ll> best = {LINF,-1};

  //searchs best: lower bound to right, upper bound to left
  for(ll i = optl; i <= min(mid, optr); i++){
    best = min(best, {dp[i][k-1] + C(i,mid), i});
  }
  dp[mid][k] = best.first;
  ll opt = best.second;

  compute(l, mid-1, k, optl, opt);
  compute(mid + 1, r, k, opt, optr);
}

//Iterate over k to calculate
ll solve(){
  //dimensions of dp[N][K]
  int n, k;

  //Initialize DP
  for(ll i = 1; i <= n; i++){
    //dp[i,1] = cost from 0 to i
    dp[i][1] = C(0, i);
  }

  for(ll l = 2; l <= k; l++){
    compute(1, n, l, 1, n);
  }

  /*+ Iterate over i to get min{dp[i][k]}, don't forget cost from n to i
  for(ll i=1;i<=n;i++){
    ll rest = ;
    ans = min(ans,dp[i][k] + rest);
  }
  */
}
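One concrete cost that satisfies the monotonicity condition above is the squared group sum: split an array into k contiguous groups minimizing the sum of squared group sums. A minimal sketch (pref[] and the non-negative input array are assumptions, not part of the notebook):

ll pref[N]; // pref[i] = a[1] + ... + a[i], pref[0] = 0, with a[i] >= 0

// cost of making one group out of positions i+1 .. j (matches dp[i][1] = C(0, i))
ll C(ll i, ll j) {
  ll s = pref[j] - pref[i];
  return s * s; // convex in the group sum, so the optimal split point is monotone
}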
3.4 Knuth Optimization

// Knuth DP Optimization - O(n^3) -> O(n^2)
//
// 1) dp[i][j] = min i<k<j { dp[i][k] + dp[k][j] } + C[i][j]
// 2) dp[i][j] = min k<i { dp[k][j-1] + C[k][i] }
//
// Condition: A[i][j-1] <= A[i][j] <= A[i+1][j]
// A[i][j] is the smallest k that gives an optimal answer to dp[i][j]
//
// reference (pt-br): https://algorithmmarch.wordpress.com/2016/08/12/a-otimizacao-de-pds-e-o-garcom-da-maratona/

// 1) dp[i][j] = min i<k<j { dp[i][k] + dp[k][j] } + C[i][j]
int n;
int dp[N][N], a[N][N];

// declare the cost function
int cost(int i, int j) {
  // ...
}

void knuth() {
  // calculate base cases
  memset(dp, 63, sizeof(dp));
  for (int i = 1; i <= n; i++) dp[i][i] = 0;

  // set initial a[i][j]
  for (int i = 1; i <= n; i++) a[i][i] = i;

  for (int j = 2; j <= n; ++j)
    for (int i = j; i >= 1; --i){
      for (int k = a[i][j-1]; k <= a[i+1][j]; ++k) {
        ll v = dp[i][k] + dp[k][j] + cost(i, j);

        // store the minimum answer for d[i][k]
        // in case of maximum, use v > dp[i][k]
        if (v < dp[i][j])
          a[i][j] = k, dp[i][j] = v;
      }
      //+ Iterate over i to get min{dp[i][j]} for each j, don't forget cost from n to i
    }
}

// 2) dp[i][j] = min k<i { dp[k][j-1] + C[k][i] }
int n, maxj;
int dp[N][J], a[N][J];

// declare the cost function
int cost(int i, int j) {
  // ...
}

void knuth() {
  // calculate base cases
  memset(dp, 63, sizeof(dp));
  for (int i = 1; i <= n; i++) dp[i][1] = // ...

  // set initial a[i][j]
  for (int i = 1; i <= n; i++) a[i][1] = 1, a[n+1][i] = n;

  for (int j = 2; j <= maxj; j++)
    for (int i = n; i >= 1; i--){
      for (int k = a[i][j-1]; k <= a[i+1][j]; k++) {
        ll v = dp[k][j-1] + cost(k, i);

        if (v < dp[i][j])
          a[i][j] = k, dp[i][j] = v;
      }
      //+ Iterate over i to get min{dp[i][j]} for each j, don't forget cost from n to i
    }
}

3.5 Longest Increasing Subsequence

// Longest Increasing Subsequence - O(nlogn)
//
// dp(i) = max j<i { dp(j) | a[j] < a[i] } + 1
//
// int dp[N], v[N], n, lis;

memset(dp, 63, sizeof dp);
for (int i = 0; i < n; ++i) {
  // increasing: lower_bound
  // non-decreasing: upper_bound
  int j = lower_bound(dp, dp + lis, v[i]) - dp;
  dp[j] = min(dp[j], v[i]);
  lis = max(lis, j + 1);
}

3.6 SOS DP

// O(N * 2^N)
// A[i] = initial values
// Calculate F[i] = Sum of A[j] for j subset of i
for(int i = 0; i < (1 << N); i++)
  F[i] = A[i];
for(int i = 0; i < N; i++)
  for(int j = 0; j < (1 << N); j++)
    if(j & (1 << i))
      F[j] += F[j ^ (1 << i)];

3.7 Steiner tree

// Steiner-Tree O(2^t*n^2 + n*3^t + APSP)
//
// N - number of nodes
// T - number of terminals
// dist[N][N] - Adjacency matrix
// steiner_tree() = min cost to connect first t nodes, 1-indexed
// dp[i][bit_mask] = min cost to connect nodes active in bitmask rooting in i
// min{dp[i][bit_mask]}, i <= n if root doesn't matter

int n, t, dp[N][(1 << T)], dist[N][N];

int steiner_tree() {
  for (int k = 1; k <= n; ++k)
    for (int i = 1; i <= n; ++i)
      for (int j = 1; j <= n; ++j)
        dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]);

  for(int i = 1; i <= n; i++)
    for(int j = 0; j < (1 << t); j++)
      dp[i][j] = INF;
  for(int i = 1; i <= t; i++) dp[i][1 << (i-1)] = 0;

  for(int msk = 0; msk < (1 << t); msk++) {
    for(int i = 1; i <= n; i++) {
      for(int ss = msk; ss > 0; ss = (ss - 1) & msk)
        dp[i][msk] = min(dp[i][msk], dp[i][ss] + dp[i][msk - ss]);
      if(dp[i][msk] != INF)
        for(int j = 1; j <= n; j++)
          dp[j][msk] = min(dp[j][msk], dp[i][msk] + dist[i][j]);
    }
  }

  int mn = INF;
  for(int i = 1; i <= n; i++) mn = min(mn, dp[i][(1 << t) - 1]);
  return mn;
}

4 Graphs

4.1 2-SAT Kosaraju

/*
**************************************************************************
* 2-SAT (TELL WHETHER A SERIES OF STATEMENTS CAN OR CANNOT BE FEASIBLE AT THE SAME TIME)
*
* Time complexity: O(V+E)
*
* Usage: n -> number of variables, 1-indexed
*
*        p = v(i) -> picks the "true" state for variable i
*        p = nv(i) -> picks the "false" state for variable i, i.e. ~i
*        add(p, q) -> add clause (p v q) (which also means ~p => q, which also means ~q => p)
*        run2sat() -> true if possible, false if impossible
*        val[i] -> tells if i has to be true or false for that solution
**************************************************************************
*/

int n, vis[2*N], ord[2*N], ordn, cnt, cmp[2*N], val[N];
vector<int> adj[2*N], adjt[2*N];

// for a variable u with idx i
// u is 2*i and !u is 2*i+1
// (a v b) == !a -> b ^ !b -> a
int v(int x) { return 2*x; }
int nv(int x) { return 2*x+1; }

// add clause (a v b)
void add(int a, int b){
  adj[a^1].push_back(b);
  adj[b^1].push_back(a);
  adjt[b].push_back(a^1);
  adjt[a].push_back(b^1);
}

void dfs(int x){
  vis[x] = 1;
  for(auto v : adj[x]) if(!vis[v]) dfs(v);
  ord[ordn++] = x;
}

void dfst(int x){
  cmp[x] = cnt, vis[x] = 0;
  for(auto v : adjt[x]) if(vis[v]) dfst(v);
}

bool run2sat(){
  for(int i = 1; i <= n; i++) {
    if(!vis[v(i)]) dfs(v(i));
    if(!vis[nv(i)]) dfs(nv(i));
  }
  for(int i = ordn-1; i >= 0; i--)
    if(vis[ord[i]]) cnt++, dfst(ord[i]);
  for(int i = 1; i <= n; i ++){
    if(cmp[v(i)] == cmp[nv(i)]) return false;
    val[i] = cmp[v(i)] > cmp[nv(i)];
  }
  return true;
}

int main () {
ll v = dp[k][j-1] + cost(k, i); dp[j][msk] = min(dp[j][msk], dp[i][msk] + dist[i][j]); for (int i = 1; i <= n; i++) {
} if (val[i]); // i-th variable is true
// store the minimum answer for d[i][k] } else // i-th variable is false
// in case of maximum, use v > dp[i][k] }
}
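// One way to fill in the cost() stub for Knuth form 1) above (a sketch, not from
// the notebook): splitting a segment where a cut costs the total weight of the
// segment. With non-negative weights this cost is monotone and satisfies the
// quadrangle inequality, so A[i][j-1] <= A[i][j] <= A[i+1][j] holds.
int pref[N]; // pref[i] = w[1] + ... + w[i], assumed precomputed
int cost(int i, int j) { return pref[j] - pref[i-1]; }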
*********************************************************************************
*/
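// Minimal usage sketch for the 2-SAT (Kosaraju) interface above; the clauses are
// illustrative only:
// n = 3;
// add(v(1), v(2)); // clause (x1 v x2)
// add(nv(1), v(3)); // clause (~x1 v x3)
// if (run2sat()) { /* val[i] gives one valid truth value for variable i */ }
// else { /* the formula is unsatisfiable */ }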
4.2 2-SAT Tarjan const int N = 1e5+10; // Maximum number of nodes
int dist[N], par[N];
// 2-SAT - O(V+E)
vector <int> adj[N];
queue <int> q;
4.6 Articulation points and bridges
// For each variable x, we create two nodes in the graph: u and
!u void bfs (int s) {
// If the variable has index i, the index of u and !u are: 2*i // Articulation points and Bridges O(V+E)
memset(dist, 63, sizeof(dist)); int par[N], art[N], low[N], num[N], ch[N], cnt;
and 2*i+1 dist[s] = 0;
// Adds a statement u => v q.push(s);
void add(int u, int v){ void articulation(int u) {
adj[u].pb(v); low[u] = num[u] = ++cnt;
while (!q.empty()) { for (int v : adj[u]) {
adj[vˆ1].pb(uˆ1); int u = q.front(); q.pop();
} if (!num[v]) {
for (auto v : adj[u]) if (dist[v] > dist[u] + 1) { par[v] = u; ch[u]++;
par[v] = u; articulation(v);
//0-indexed variables; starts from var_0 and goes to var_n-1 dist[v] = dist[u] + 1;
for(int i = 0; i < n; i++){ if (low[v] >= num[u]) art[u] = 1;
q.push(v); if (low[v] > num[u]) { /* u-v bridge */ }
tarjan(2*i), tarjan(2*i + 1); }
//cmp is a tarjan variable that says the component from a low[u] = min(low[u], low[v]);
} }
certain node }
if(cmp[2*i] == cmp[2*i + 1]) //Invalid else if (v != par[u]) low[u] = min(low[u], num[v]);
if(cmp[2*i] < cmp[2*i + 1]) //Var_i is true }
else //Var_i is false }
for (int i = 0; i < n; ++i) if (!num[i])
}
// it’s just a possible solution!
4.5 Block Cut articulation(i), art[i] = ch[i]>1;
// Tarjan for Block Cut Tree (Node Biconnected Components) - O(n + m)
4.3 Shortest Path (Bellman-Ford) #define pb push_back 4.7 DFS
#include <bits/stdc++.h>
using namespace std;
/*
/* const int N = 1e5+5; **************************************************************************
*********************************************************************************
// Regular Tarjan stuff * DFS (DEPTH-FIRST SEARCH)
* BELLMAN-FORD ALGORITHM (SHORTEST PATH TO A VERTEX - WITH int n, num[N], low[N], cnt, ch[N], art[N]; *
NEGATIVE COST) * vector<int> adj[N], st; * Time complexity: O(V+E)
* Time complexity: O(VE) *
* int lb[N]; // Last block that node is contained *
* Usage: dist[node] int bn; // Number of blocks * Notation: adj[x]: adjacency list for node x
vector<int> blc[N]; // List of nodes from block *
* * vis[i]: visited state for node i (0 or 1)
* Notation: m: number of edges void dfs(int u, int p) { *
* num[u] = low[u] = ++cnt; *******************************************************************************
* n: number of vertices ch[u] = adj[u].size(); */
* st.pb(u);
* (a, b, w): edge between a and b with weight w const int N = 1e5+10;
* if (adj[u].size() == 1) blc[++bn].pb(u); int vis[N];
* s: starting node vector<int> adj[N];
* for(int v : adj[u]) {
*********************************************************************************
if (!num[v]) { void dfs(int u) {
*/ dfs(v, u), low[u] = min(low[u], low[v]); vis[u] = 1;
const int N = 1e4+10; // Maximum number of nodes if (low[v] == num[u]) { for (int v : adj[u]){
vector<int> adj[N], adjw[N]; if (p != -1 or ch[u] > 1) art[u] = 1; if (!vis[v]) {
int dist[N], v, w; blc[++bn].pb(u); dfs(v);
while(blc[bn].back() != v) }
memset(dist, 63, sizeof(dist)); blc[bn].pb(st.back()), st.pop_back(); }
dist[0] = 0; } // vis[u] = 0;
for (int i = 0; i < n-1; ++i) } // Uncomment the line above if you need to
for (int u = 0; u < n; ++u) else if (v != p) low[u] = min(low[u], num[v]), ch[v]--; // traverse only one path at a time (backtracking)
for (int j = 0; j < adj[u].size(); ++j) } }
v = adj[u][j], w = adjw[u][j],
dist[v] = min(dist[v], dist[u]+w); if (low[u] == num[u]) st.pop_back();
}
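// Optional extra pass (a sketch, not part of the original code): if any edge can
// still be relaxed after the n-1 rounds above, a negative cycle is reachable.
bool has_negative_cycle() {
    for (int u = 0; u < n; ++u)
        for (int j = 0; j < (int)adj[u].size(); ++j) {
            int to = adj[u][j], wt = adjw[u][j];
            if (dist[u] < (int)1e9 and dist[u] + wt < dist[to]) return true;
        }
    return false;
}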
// Nodes from 1 .. n are blocks 4.8 Shortest Path (Dijkstra)
// Nodes from n+1 .. 2*n are articulations
4.4 BFS vector<int> bct[2*N]; // Adj list for Block Cut Tree
/*
void build_tree() { **************************************************************************
/* for(int u=1; u<=n; ++u) for(int v : adj[u]) if (num[u] > num[v
]) {
********************************************************************************* * DIJKSTRA’S ALGORITHM (SHORTEST PATH TO A VERTEX)
if (lb[u] == lb[v] or blc[lb[u]][0] == v) /* edge u-v *
* BFS (BREADTH-FIRST SEARCH) belongs to block lb[u] */; * Time complexity: O((V+E)logE)
* else { /* edge u-v belongs to block cut tree */; *
* Time complexity: O(V+E) int x = (art[u] ? u + n : lb[u]), y = (art[v] ? v + n : lb * Usage: dist[node]
* [v]);
* Usage: bfs(node) bct[x].pb(y), bct[y].pb(x); *
} * Notation: m: number of edges
* } *
* Notation: s: starting node } * (a, b, w): edge between a and b with weight w
* *
* adj[i]: adjacency list for node i void tarjan() { * s: starting node
* for(int u=1; u<=n; ++u) if (!num[u]) dfs(u, -1); *
* vis[i]: visited state for node i (0 or 1) for(int b=1; b<=bn; ++b) for(int u : blc[b]) lb[u] = b; * par[v]: parent node of u, used to rebuild the
* build_tree(); shortest path *
memset(h, 0, sizeof h);
********************************************************************************* *
*/ queue<int> q; * To find min flow that satisfies just do a binary search in the
h[src] = 1; (Old Sink -> Old Source) edge *
vector<int> adj[N], adjw[N]; q.push(src); * The cost of this edge represents all the flow from old network
int dist[N]; while(!q.empty()) { *
int u = q.front(); q.pop(); * Min flow = Sum(L) that arrives in Old Sink + flow that leaves
memset(dist, 63, sizeof(dist)); for(int i : g[u]) { (Old Sink -> Old Source) *
priority_queue<pii> pq; int v = edgs[i].v; *******************************************************************************
pq.push(mp(0,0)); if (!h[v] and edgs[i].f < edgs[i].c) */
q.push(v), h[v] = h[u] + 1;
while (!pq.empty()) { }
int u = pq.top().nd; } int main () {
int d = -pq.top().st; return h[snk]; clear();
pq.pop(); } return 0;
}
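// Construction sketch for the flow-with-demands scheme described above
// (illustrative; excess[] is a helper introduced here, not part of the notebook).
// For each original edge (u, v) with bounds [L, R]:
// add_edge(u, v, R - L); excess[v] += L; excess[u] -= L;
// then, for every vertex x of the old network:
// if (excess[x] > 0) add_edge(src, x, excess[x]);
// if (excess[x] < 0) add_edge(x, snk, -excess[x]);
// finally add_edge(old_snk, old_src, INF); a feasible flow exists iff dinic()
// saturates every edge leaving src (i.e. max flow == sum of all positive excess).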
if (d > dist[u]) continue; int dfs (int u, int flow) {
for (int i = 0; i < adj[u].size(); ++i) { if (!flow or u == snk) return flow;
int v = adj[u][i]; for (int &i = ptr[u]; i < g[u].size(); ++i) {
int w = adjw[u][i]; edge &dir = edgs[g[u][i]], &rev = edgs[g[u][i]ˆ1];
if (dist[u] + w < dist[v])
dist[v] = dist[u]+w, pq.push(mp(-dist[v], v));
int v = dir.v;
if (h[v] != h[u] + 1) continue;
4.10 Dominator Tree
} int inc = min(flow, dir.c - dir.f);
} inc = dfs(v, inc); // a node u is said to be dominating node v if, from every path
if (inc) { from the entry point to v you have to pass through u
dir.f += inc, rev.f -= inc; // so this code is able to find every dominator from a specific
return inc; entry point (usually 1)
} // for directed graphs obviously
4.9 Max Flow }
return 0; const int N = 1e5 + 7;
}
// Dinic - O(Vˆ2 * E) vector<int> adj[N], radj[N], tree[N], bucket[N];
// Bipartite graph or unit flow - O(sqrt(V) * E) int dinic() { int sdom[N], par[N], dom[N], dsu[N], label[N], arr[N], rev[N],
// Small flow - O(F * (V + E)) int flow = 0; cnt;
// USE INF = 1e9! while (bfs()) {
memset(ptr, 0, sizeof ptr); void dfs(int u) {
/* while (int inc = dfs(src, INF)) flow += inc; cnt++;
}
********************************************************************************* arr[u] = cnt;
return flow; rev[cnt] = u;
* DINIC (FIND MAX FLOW / BIPARTITE MATCHING) } label[cnt] = cnt;
* sdom[cnt] = cnt;
* Time complexity: O(EVˆ2) //Recover Dinic dsu[cnt] = cnt;
* void recover(){ for(auto e : adj[u]) {
* Usage: dinic() for(int i = 0; i < edgs.size(); i += 2){ if(!arr[e]) {
//edge (u -> v) is being used with flow f dfs(e);
* if(edgs[i].f > 0) { par[arr[e]] = arr[u];
* add_edge(from, to, capacity) int v = edgs[i].v; }
* int u = edgs[iˆ1].v; radj[arr[e]].push_back(arr[u]);
* Testcase: } }
} }
* }
* add_edge(src, 1, 1); add_edge(1, snk, 1); add_edge(2, 3, int find(int u, int x = 0) {
INF); * /* if(u == dsu[u]) {
* add_edge(src, 2, 1); add_edge(2, snk, 1); add_edge(3, 4, **********************************************************************************************
return (x ? -1 : u);
INF); * }
* add_edge(src, 2, 1); add_edge(3, snk, 1); * FLOW WITH DEMANDS int v = find(dsu[u], x + 1);
* if(v == -1) {
* add_edge(src, 2, 1); add_edge(4, snk, 1); => dinic() = 4 * return u;
* * }
********************************************************************************* if(sdom[label[dsu[u]]] < sdom[label[u]]) {
*/ * label[u] = label[dsu[u]];
* 1 - Finding an arbitrary flow }
#include <bits/stdc++.h> dsu[u] = v;
using namespace std; * return (x ? v : label[u]);
* Assume a network with [L, R] on edges (some may have L = 0), }
const int N = 1e5+1, INF = 1e9; let’s call it old network. *
struct edge {int v, c, f;}; * Create a New Source and New Sink (this will be the src and snk void unite(int u, int v) {
for Dinic). * dsu[v] = u;
int n, src, snk, h[N], ptr[N]; * Modelling Network: }
vector<edge> edgs;
vector<int> g[N]; * // in main
* 1) Every edge from the old network will have cost R - L
void add_edge (int u, int v, int c) { * dfs(1);
int k = edgs.size(); * 2) Add an edge from New Source to every vertex v with cost: for(int i = cnt; i >= 1; i--) {
edgs.push_back({v, c, 0}); * for(auto e : radj[i]) {
edgs.push_back({u, 0, 0}); * Sum(L) for every (u, v). (sum all L that LEAVES v) sdom[i] = min(sdom[i], sdom[find(e)]);
g[u].push_back(k); * }
g[v].push_back(k+1); * 3) Add an edge from every vertex v to New Sink with cost: if(i > 1) {
} * bucket[sdom[i]].push_back(i);
* Sum(L) for every (v, w). (sum all L that ARRIVES v) }
void clear() { * for(auto e : bucket[i]) {
memset(h, 0, sizeof h); * 4) Add an edge from Old Source to Old Sink with cost INF ( int v = find(e);
memset(ptr, 0, sizeof ptr); circulation problem) * if(sdom[e] == sdom[v]) {
edgs.clear(); * The Network will be valid if and only if the flow saturates dom[e] = sdom[e];
for (int i = 0; i < N; i++) g[i].clear(); the network (max flow == sum(L)) * } else {
src = 0; * dom[e] = v;
snk = N-1; }
} * }
* 2 - Finding Min Flow
if(i > 1) {
bool bfs() { unite(par[i], i);
} break; for(int y : lk[x]) for(int z:lk[y]) if(w[z]) {
} } ans=(ans+go[x].size()+go[y].size()+go[z].size() - 6);
for(int i = 2; i <= cnt; i++) { } if(ans) return true;
if(dom[i] != sdom[i]) { }
dom[i] = dom[dom[i]]; for(int y:lk[x]) w[y] = 0;
} }
tree[rev[i]].push_back(rev[dom[i]]); return false;
}
tree[rev[dom[i]]].push_back(rev[i]); 4.13 Fast Kuhn }
bool circle4() {
const int N = 1e5+5; for(int i = 1; i <= n; i++) w[i] = 0;
int ans = 0;
int x, marcB[N], matchB[N], matchA[N], ans, n, m, p; for(int x = 1; x <= n; x++) {
4.11 Erdos Gallai vector<int> adj[N]; for(int y:go[x]) for(int z:lk[y]) if(pos[z] > pos[x]) {
ans = (ans+w[z]);
bool dfs(int v){ w[z]++;
for(int i = 0; i < adj[v].size(); i++){ if(ans) return true;
// Erdos-Gallai - O(nlogn) }
// check if it’s possible to create a simple graph (undirected int viz = adj[v][i];
if(marcB[viz] == 1 ) continue; for(int y:go[x]) for(int z : lk[y]) w[z] = 0;
edges) from }
// a sequence of vertex degrees marcB[viz] = 1;
return false;
bool gallai(vector<int> v) { }
vector<ll> sum; if((matchB[viz] == -1) || dfs(matchB[viz])){
sum.resize(v.size()); matchB[viz] = v; inline bool cmp(const int &x, const int &y) {
matchA[v] = viz; return deg[x] < deg[y];
sort(v.begin(), v.end(), greater<int>()); return true; }
sum[0] = v[0]; }
for (int i = 1; i < v.size(); i++) sum[i] = sum[i-1] + v[i]; int main() {
if (sum.back() % 2) return 0; } cin.tie(nullptr)->sync_with_stdio(false);
return false; cin >> n >> m;
for (int k = 1; k < v.size(); k++) { }
int p = lower_bound(v.begin(), v.end(), k, greater<int>()) - int x, y;
v.begin(); int main(){ for(int i = 0; i < n; i++) {
if (p < k) p = k; //... cin >> x >> y;
if (sum[k-1] > 1ll*k*(p-1) + sum.back() - sum[p-1]) return for(int i = 0; i<=n; i++) matchA[i] = -1; }
0; for(int j = 0; j<=m; j++) matchB[j] = -1;
} for(int i = 1; i <= n; i++) {
return 1; bool aux = true; deg[i] = 0, go[i].clear(), lk[i].clear();
} while(aux){ }
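// Quick sanity check for gallai() above (illustrative): a triangle plus one
// disjoint edge has degree sequence {2, 2, 2, 1, 1}, so it must be accepted:
// assert(gallai({2, 2, 2, 1, 1}));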
for(int j=1; j<=m; j++) marcB[j] = 0; while (m--){
aux = false; int a, b;
for(int i=1; i<=n; i++){ cin >> a >> b;
if(matchA[i] != -1) continue; deg[a]++, deg[b]++;
4.12 Eulerian Path if(dfs(i)){
ans++;
go[a].push_back(b);
go[b].push_back(a);
aux = true; }
}
vector<int> ans, adj[N]; }
int in[N]; for(int i = 1; i <= n; i++) id[i]= i;
} sort(id+1, id+1+n, cmp);
//... for(int i = 1; i<= n; i++) pos[id[i]]=i;
void dfs(int v){ }
while(adj[v].size()){ for(int x = 1; x<= n; x++) {
int x = adj[v].back(); for(int y:go[x]) {
adj[v].pop_back(); if(pos[y]>pos[x]) lk[x].push_back(y);
dfs(x); }
};
}
ans.pb(v); 4.14 Find Cycle of size 3 and 4
} if(circle3()) {
cout << "3" << endl;
// Verify if there is an eulerian path or circuit #include <bits/stdc++.h> return 0;
vector<int> v; };
for(int i = 0; i < n; i++) if(adj[i].size() != in[i]){ using lint = int64_t;
if(abs((int)adj[i].size() - in[i]) != 1) //-> There is no if(circle4()) {
valid eulerian circuit/path constexpr int MOD = int(1e9) + 7; cout << "4" << endl;
v.pb(i); constexpr int INF = 0x3f3f3f3f; return 0;
} constexpr int NINF = 0xcfcfcfcf; };
constexpr lint LINF = 0x3f3f3f3f3f3f3f3f;
if(v.size()){ cout << "5" << endl;
if(v.size() != 2) //-> There is no valid eulerian path #define endl ’\n’ return 0;
if(in[v[0]] > adj[v[0]].size()) swap(v[0], v[1]); }
if(in[v[0]] > adj[v[0]].size()) //-> There is no valid const long double PI = acosl(-1.0);
eulerian path
adj[v[1]].pb(v[0]); // Turn the eulerian path into a eulerian int cmp_double(double a, double b = 0, double eps = 1e-9) {
circuit return a + eps > b ? b + eps > a ? 0 : 1 : -1;
} } 4.15 Floyd Warshall
dfs(0); using namespace std;
for(int i = 0; i < cnt; i++) /*
if(adj[i].size()) //-> There is no valid eulerian circuit/path #define P 1000000007 **************************************************************************
in this case because the graph is not connected #define N 330000
* FLOYD-WARSHALL ALGORITHM (SHORTEST PATH TO ANY VERTEX)
ans.pop_back(); // Since it’s a circuit, the first and the last int n, m; *
are repeated vector<int> go[N], lk[N]; * Time complexity: O(Vˆ3)
reverse(ans.begin(), ans.end()); int w[N], deg[N], pos[N], id[N]; *
* Usage: dist[from][to]
int bg = 0; // Is used to mark where the eulerian path begins bool circle3() { *
if(v.size()){ int ans = 0; * Notation: m: number of edges
for(int i = 0; i < ans.size(); i++) for(int i = 1; i <= n; i++) w[i] = 0; *
if(ans[i] == v[1] and ans[(i + 1)%ans.size()] == v[0]){ for(int x = 1; x <= n; x++) { * n: number of vertices
bg = i + 1; for(int y : lk[x]) w[y] = 1; *
* (a, b, w): edge between a and b with weight w cost.resize(lines);
for (auto& line : cost) line.assign(cols, 0);
4.18 Toposort
*
}
*********************************************************************************
*/ /*
void clear() { **************************************************************************
int adj[N][N]; // no-edge = INF pairV.assign(cols, 0);
way.assign(cols, 0); * KAHN’S ALGORITHM (TOPOLOGICAL SORTING)
for (int k = 0; k < n; ++k) pv.assign(cols, 0); *
for (int i = 0; i < n; ++i) pu.assign(lines, 0); *
for (int j = 0; j < n; ++j) } * Time complexity: O(V+E)
adj[i][j] = min(adj[i][j], adj[i][k]+adj[k][j]); *
void update(int i, int j, T val) { * Notation: adj[i]: adjacency matrix for node i
if (is_zero_indexed) i++, j++; *
if (is_max) val = -val; * n: number of vertices
if (swap_coord) swap(i, j); *
4.16 Hungarian assert(i < lines);
* e: number of edges
*
assert(j < cols); * a, b: edge between a and b
// Hungarian - O(m*nˆ2) *
cost[i][j] = val; * inc: number of incoming arcs/edges
// Assignment Problem } *
* q: queue with the independent vertices
int n, m; T run() {
int pu[N], pv[N], cost[N][M]; *
T _INF = numeric_limits<T>::max(); * tsort: final topo sort, i.e. possible order to
int pairV[N], way[M], minv[M], used[M]; for (int i = 1, j0 = 0; i < lines; i++) { traverse graph *
pairV[0] = i; *******************************************************************************
void hungarian() { minv.assign(cols, _INF);
for(int i = 1, j0 = 0; i <= n; i++) { */
used.assign(cols, 0);
pairV[0] = i; do {
memset(minv, 63, sizeof minv); used[j0] = 1; vector <int> adj[N];
memset(used, 0, sizeof used); int i0 = pairV[j0], j1; int inc[N]; // number of incoming arcs/edges
do { T delta = _INF;
used[j0] = 1; for (int j = 1; j < cols; j++) { // undirected graph: inc[v] <= 1
int i0 = pairV[j0], delta = INF, j1; if (used[j]) continue; // directed graph: inc[v] == 0
for(int j = 1; j <= m; j++) { T cur = cost[i0][j] - pu[i0] - pv[j];
if(used[j]) continue; if (cur < minv[j]) minv[j] = cur, way[j] = j0; queue<int> q;
int cur = cost[i0][j] - pu[i0] - pv[j]; if (minv[j] < delta) delta = minv[j], j1 = j; for (int i = 1; i <= n; ++i) if (inc[i] <= 1) q.push(i);
if(cur < minv[j]) minv[j] = cur, way[j] = j0; }
if(minv[j] < delta) delta = minv[j], j1 = j; while (!q.empty()) {
} for (int j = 0; j < cols; j++) { int u = q.front(); q.pop();
if (used[j]) pu[pairV[j]] += delta, pv[j] -= delta; for (int v : adj[u])
for(int j = 0; j <= m; j++) { else minv[j] -= delta; if (inc[v] > 1 and --inc[v] <= 1)
if(used[j]) pu[pairV[j]] += delta, pv[j] -= delta; } q.push(v);
else minv[j] -= delta; j0 = j1; }
} } while (pairV[j0]);
j0 = j1;
} while(pairV[j0]); do {
int j1 = way[j0];
do {
int j1 = way[j0];
pairV[j0] = pairV[j1];
j0 = j1;
4.19 Strongly Connected Components
pairV[j0] = pairV[j1]; } while (j0);
j0 = j1; }
} while(j0); /*
} **************************************************************************
ans = 0;
} for (int j = 1; j < cols; j++) if (pairV[j]) ans += cost[ * KOSARAJU’S ALGORITHM (GET EVERY STRONGLY CONNECTED COMPONENTS
pairV[j]][j]; (SCC)) *
// in main
// for(int j = 1; j <= m; j++) * Description: Given a directed graph, the algorithm generates a
if (is_max) ans = -ans; list of every *
// if(pairV[j]) ans += cost[pairV[j]][j]; if (is_zero_indexed) {
// * strongly connected components. A SCC is a set of points in
for (int j = 0; j + 1 < cols; j++) pairV[j] = pairV[j + which you can reach *
1], pairV[j]--; * every point regardless of where you start from. For instance,
pairV[cols - 1] = -1; cycles can be *
} * a SCC themselves or part of a greater SCC.
if (swap_coord) {
4.17 Hungarian Navarro vector<int> pairV_sub(lines, 0);
*
* This algorithm starts with a DFS and generates an array called
for (int j = 0; j < cols; j++) if (pairV[j] >= 0) "ord" which *
pairV_sub[pairV[j]] = j; * stores vertices according to the finish times (i.e. when it
// Hungarian - O(nˆ2 * m) swap(pairV, pairV_sub); reaches "return"). *
template<bool is_max = false, class T = int, bool } * Then, it makes a reversed DFS according to "ord" list. The set
is_zero_indexed = false> of points *
struct Hungarian { return ans; * visited by the reversed DFS defines a new SCC.
bool swap_coord = false; } *
int lines, cols; }; * One of the uses of getting all SCC is that you can generate a
T ans; new DAG (Directed *
template <bool is_max = false, bool is_zero_indexed = false> * Acyclic Graph), easier to work with, in which each SCC being a
vector<int> pairV, way; struct HungarianMult : public Hungarian<is_max, long double, "supernode" of *
vector<bool> used; is_zero_indexed> { * the DAG.
vector<T> pu, pv, minv; using super = Hungarian<is_max, long double, is_zero_indexed>;
vector<vector<T>> cost; *
HungarianMult(int _n, int _m) : super(_n, _m) {} * Time complexity: O(V+E)
Hungarian(int _n, int _m) { *
if (_n > _m) { void update(int i, int j, long double x) { * Notation: adj[i]: adjacency list for node i
swap(_n, _m); super::update(i, j, log2(x)); *
swap_coord = true; } * adjt[i]: reversed adjacency list for node i
} }; *
* ord: array of vertices according to their
lines = _n + 1, cols = _m + 1; finish time *
* ordn: ord counter
clear(); *
* scc[i]: supernode assigned to i // ...
* sort(edges.begin(), edges.end());
* scc_cnt:amount of supernodes in the graph
*
for (auto e : edges)
if (find(e.nd.st) != find(e.nd.nd))
4.23 Max Weight on Path
********************************************************************************* unite(e.nd.st, e.nd.nd), cost += e.st;
*/ // Using LCA to find max edge weight between (u, v)
const int N = 2e5 + 5; return 0;
} const int N = 1e5+5; // Max number of vertices
vector<int> adj[N], adjt[N]; const int K = 20; // Each 1e3 requires ˜ 10 K
int n, ordn, scc_cnt, vis[N], ord[N], scc[N]; const int M = K+5;
int n; // Number of vertices
//Directed Version vector <pii> adj[N];
void dfs(int u) {
vis[u] = 1; 4.21 Max Bipartite Cardinality Match- int vis[N], h[N], anc[N][M], mx[N][M];
for (auto v : adj[u]) if (!vis[v]) dfs(v);
ord[ordn++] = u; ing (Kuhn) void dfs (int u) {
vis[u] = 1;
} for (auto p : adj[u]) {
int v = p.st;
void dfst(int u) { /* int w = p.nd;
scc[u] = scc_cnt, vis[u] = 0; if (!vis[v]) {
for (auto v : adjt[u]) if (vis[v]) dfst(v); *********************************************************************************
h[v] = h[u]+1;
} KUHN’S ALGORITHM (FIND GREATEST NUMBER OF MATCHINGS - anc[v][0] = u;
*
BIPARTITE GRAPH) * mx[v][0] = w;
// add edge: u -> v Time complexity: O(VE) dfs(v);
void add_edge(int u, int v){ *
* }
adj[u].push_back(v); Notation: ans: number of matchings }
adjt[v].push_back(u); *
* }
} b[j]: matching edge b[j] <-> j
*
* void build () {
//Undirected version: adj[i]: adjacency list for node i // cl(mn, 63) -- Don’t forget to initialize with INF if min
/* *
* edge!
int par[N]; vis: visited nodes anc[1][0] = 1;
*
* dfs(1);
void dfs(int u) { x: counter to help reuse vis list for (int j = 1; j <= K; j++) for (int i = 1; i <= n; i++) {
vis[u] = 1; *
* anc[i][j] = anc[anc[i][j-1]][j-1];
for (auto v : adj[u]) if(!vis[v]) par[v] = u, dfs(v); mx[i][j] = max(mx[i][j-1], mx[anc[i][j-1]][j-1]);
ord[ordn++] = u; *********************************************************************************
*/ }
} }
// TIP: If too slow, shuffle nodes and try again.
void dfst(int u) { int x, vis[N], b[N], ans; int mxedge (int u, int v) {
scc[u] = scc_cnt, vis[u] = 0; int ans = 0;
for (auto v : adj[u]) if(vis[v] and u != par[v]) dfst(v); bool match(int u) {
} if (vis[u] == x) return 0; if (h[u] < h[v]) swap(u, v);
vis[u] = x; for (int j = K; j >= 0; j--) if (h[anc[u][j]] >= h[v]) {
// add edge: u -> v for (int v : adj[u]) ans = max(ans, mx[u][j]);
void add_edge(int u, int v){ if (!b[v] or match(b[v])) return b[v]=u; u = anc[u][j];
adj[u].push_back(v); return 0; }
adj[v].push_back(u); } if (u == v) return ans;
} for (int j = K; j >= 0; j--) if (anc[u][j] != anc[v][j]) {
for (int i = 1; i <= n; ++i) ++x, ans += match(i); ans = max(ans, mx[u][j]);
*/ ans = max(ans, mx[v][j]);
// Maximum Independent Set on bipartite graph u = anc[u][j];
// run kosaraju MIS + MCBM = V v = anc[v][j];
void kosaraju(){ }
for (int i = 1; i <= n; ++i) if (!vis[i]) dfs(i); // Minimum Vertex Cover on bipartite graph return max({ans, mx[u][0], mx[v][0]});
for (int i = ordn - 1; i >= 0; --i) if (vis[ord[i]]) scc_cnt MVC = MCBM }
++, dfst(ord[i]);
}
4.22 Lowest Common Ancestor 4.24 Min Cost Max Flow
4.20 MST (Kruskal) // USE INF = 1e9!
// Lowest Common Ancestor <O(nlogn), O(logn)>
const int N = 1e6, M = 25; /*
/* int anc[M][N], h[N], rt; **************************************************************************
*********************************************************************************
// TODO: Calculate h[u] and set anc[0][u] = parent of node u for * MIN COST MAX FLOW (MINIMUM COST TO ACHIEVE MAXIMUM FLOW)
* KRUSKAL’S ALGORITHM (MINIMAL SPANNING TREE - INCREASING EDGE each u *
SIZE) * * Description: Given a graph which represents a flow network
* Time complexity: O(ElogE) // build (sparse table) where every edge has *
* anc[0][rt] = rt; // set parent of the root to itself * a capacity and a cost per unit, find the minimum cost to
* Usage: cost, sz[find(node)] for (int i = 1; i < M; ++i) establish the maximum *
* for (int j = 1; j <= n; ++j) * possible flow from s to t.
* Notation: cost: sum of all edges which belong to such MST anc[i][j] = anc[i-1][anc[i-1][j]]; *
* * Note: When adding edge (a, b), it is a directed edge!
* sz: vector of subsets sizes, i.e. size of the // query *
subset a node is in * int lca(int u, int v) { * Usage: min_cost_max_flow()
if (h[u] < h[v]) swap(u, v);
********************************************************************************* *
*/ for (int i = M-1; i >= 0; --i) if (h[u]-(1<<i) >= h[v]) * add_edge(from, to, cost, capacity)
u = anc[i][u]; *
// + Union-find * Notation: flw: max flow
if (u == v) return u; *
int cost = 0; * cst: min cost to achieve flw
vector <pair<int, pair<int, int>>> edges; //mp(dist, mp(node1, for (int i = M-1; i >= 0; --i) if (anc[i][u] != anc[i][v]) *
node2)) u = anc[i][u], v = anc[i][v]; * Testcase:
return anc[0][u];
int main () { } *
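// One possible way to fill h[] and anc[0][] required above (a sketch; adj[] is
// assumed to hold the adjacency list of the tree):
void dfs_depth(int u, int p) {
    anc[0][u] = p;
    for (int v : adj[u]) if (v != p) {
        h[v] = h[u] + 1;
        dfs_depth(v, u);
    }
}
// in main: h[rt] = 0; dfs_depth(rt, rt); then build the sparse table as shown.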
* add_edge(src, 1, 0, 1); add_edge(1, snk, 0, 1); add_edge pq.push(mp(0, 0)); }
(2, 3, 1, INF); * // now here you can do what the query wants
* add_edge(src, 2, 0, 1); add_edge(2, snk, 0, 1); add_edge while (!pq.empty()) { // there are cnt[c] vertex in subtree v color with c
(3, 4, 1, INF); * int u = pq.top().nd; if (keep == 0) {
* add_edge(src, 2, 0, 1); add_edge(3, snk, 0, 1); pq.pop(); for (auto u : vec[v]) {
* if (vis[u]) continue; cnt[color[u]]--;
* add_edge(src, 2, 0, 1); add_edge(4, snk, 0, 1); => flw = vis[u]=1; }
4, cst = 3 * for (int i = 0; i < adj[u].size(); ++i) { }
int v = adj[u][i];
********************************************************************************* }
*/ int w = adjw[u][i];
if (!vis[v]) pq.push(mp(-w, v));
// w: weight or cost, c : capacity }
struct edge {int v, f, w, c; }; }
int n, flw_lmt=INF, src, snk, flw, cst, p[N], d[N], et[N];
4.28 Stoer Wagner (Stanford)
vector<edge> e;
vector<int> g[N];
void add_edge(int u, int v, int w, int c) {
4.26 Shortest Path (SPFA) // a is a N*N matrix storing the graph we use; a[i][j]=a[j][
i]
int k = e.size(); memset(use,0,sizeof(use));
g[u].push_back(k); ans=maxlongint;
g[v].push_back(k+1); // Shortest Path Faster Algorithm O(VE) for (int i=1;i<N;i++)
e.push_back({ v, 0, w, c }); int dist[N], inq[N]; {
e.push_back({ u, 0, -w, 0 }); memcpy(visit,use,505*sizeof(int));
} cl(dist,63); memset(reach,0,sizeof(reach));
queue<int> q; memset(last,0,sizeof(last));
void clear() { q.push(0); dist[0] = 0; inq[0] = 1; t=0;
flw_lmt = INF; for (int j=1;j<=N;j++)
for(int i=0; i<=n; ++i) g[i].clear(); while (!q.empty()) { if (use[j]==0) {t=j;break;}
e.clear(); int u = q.front(); q.pop(); inq[u]=0; for (int j=1;j<=N;j++)
} for (int i = 0; i < adj[u].size(); ++i) { if (use[j]==0) reach[j]=a[t][j],last[j]=t;
int v = adj[u][i], w = adjw[u][i]; visit[t]=1;
void min_cost_max_flow() { if (dist[v] > dist[u] + w) { for (int j=1;j<=N-i;j++)
flw = 0, cst = 0; dist[v] = dist[u] + w; {
while (flw < flw_lmt) { if (!inq[v]) q.push(v), inq[v] = 1; maxc=maxk=0;
memset(et, 0, (n+1) * sizeof(int)); } for (int k=1;k<=N;k++)
memset(d, 63, (n+1) * sizeof(int)); } if ((visit[k]==0)&&(reach[k]>maxc)) maxc=reach[k
deque<int> q; } ],maxk=k;
q.push_back(src), d[src] = 0; c2=maxk,visit[maxk]=1;
for (int k=1;k<=N;k++)
while (!q.empty()) { if (visit[k]==0) reach[k]+=a[maxk][k],last[k]=
int u = q.front(); q.pop_front(); maxk;
et[u] = 2; 4.27 Small to Large }
c1=last[c2];
for(int i : g[u]) { sum=0;
edge &dir = e[i]; for (int j=1;j<=N;j++)
// Imagine you have a tree with colored vertices, and you want if (use[j]==0) sum+=a[j][c2];
int v = dir.v; to do some type of query on every subtree about the colors
if (dir.f < dir.c and d[u] + dir.w < d[v]) { ans=min(ans,sum);
inside use[c2]=1;
d[v] = d[u] + dir.w; // complexity: O(nlogn)
if (et[v] == 0) q.push_back(v); for (int j=1;j<=N;j++)
else if (et[v] == 2) q.push_front(v); if ((c1!=j)&&(use[j]==0)) {a[j][c1]+=a[j][c2];a[c1][
vector<int> adj[N], vec[N]; j]=a[j][c1];}
et[v] = 1; int sz[N], color[N], cnt[N];
p[v] = i; }
} void dfs_size(int v = 1, int p = 0) {
} sz[v] = 1;
} for (auto u : adj[v]) {
if (d[snk] > INF) break;
if (u != p) {
dfs_size(u, v); 4.29 Tarjan
sz[v] += sz[u];
int inc = flw_lmt - flw; }
for (int u=snk; u != src; u = e[p[u]ˆ1].v) { } // Tarjan for SCC and Edge Biconnected Componentes - O(n + m)
edge &dir = e[p[u]]; } vector<int> adj[N];
inc = min(inc, dir.c - dir.f); stack<int> st;
} void dfs(int v = 1, int p = 0, bool keep = false) { bool inSt[N];
int Max = -1, bigchild = -1;
for (int u=snk; u != src; u = e[p[u]ˆ1].v) { for (auto u : adj[v]) { int id[N], cmp[N];
edge &dir = e[p[u]], &rev = e[p[u]ˆ1]; if (u != p && Max < sz[u]) { int cnt, cmpCnt;
dir.f += inc; Max = sz[u];
rev.f -= inc; bigchild = u; void clear(){
cst += inc * dir.w; } memset(id, 0, sizeof id);
} } cnt = cmpCnt = 0;
for (auto u : adj[v]) { }
if (!inc) break; if (u != p && u != bigchild) {
flw += inc; dfs(u, v, 0); int tarjan(int n){
} } int low;
} } id[n] = low = ++cnt;
if (bigchild != -1) { st.push(n), inSt[n] = true;
dfs(bigchild, v, 1);
swap(vec[v], vec[bigchild]); for(auto x : adj[n]){
} if(id[x] and inSt[x]) low = min(low, id[x]);
4.25 MST (Prim) vec[v].push_back(v);
cnt[color[v]]++;
else if(!id[x]) {
int lowx = tarjan(x);
for (auto u : adj[v]) { if(inSt[x])
if (u != p && u != bigchild) { low = min(low, lowx);
// Prim - MST O(ElogE) for (auto x : vec[u]) { }
vi adj[N], adjw[N]; cnt[color[x]]++; }
int vis[N]; vec[v].push_back(x);
} if(low == id[n]){
priority_queue<pii> pq; } while(st.size()){
int x = st.top(); }
inSt[x] = false; #include <bits/stdc++.h>
cmp[x] = cmpCnt; void add(string &p, int id = -1) { using namespace std;
int u = 0;
st.pop(); if (id == -1) id = cnt++; int remap(char c) {
if(x == n) break; if (islower(c)) return c - ’a’;
} for (char ch : p) { return c - ’A’ + 26;
cmpCnt++; int c = remap(ch); }
} if (nodes[u].nxt[c] == -1) {
return low; nodes[u].nxt[c] = (int)nodes.size(); const int K = 52;
} nodes.push_back(Node(u, c));
} struct Aho {
struct Node {
u = nodes[u].nxt[c]; int nxt[K];
} int par = -1;
4.30 Zero One BFS if (nodes[u].str_idx != -1) rep.push_back({ id, nodes[u].
int link = -1;
int go[K];
str_idx }); bitset<1005> ids;
// 0-1 BFS - O(V+E) else nodes[u].str_idx = id; char pch;
nodes[u].has_end = true;
const int N = 1e5 + 5; } Node(int p = -1, char ch = ’$’) : par { p }, pch { ch } {
fill(begin(nxt), end(nxt), -1);
int dist[N]; void build() { fill(begin(go), end(go), -1);
vector<pii> adj[N]; build_done = true; }
deque<pii> dq; queue<int> q; };
void zero_one_bfs (int x){ for (int i = 0; i < ALPHA_SIZE; i++) { vector<Node> nodes;
cl(dist, 63); if (nodes[0].nxt[i] != -1) q.push(nodes[0].nxt[i]);
dist[x] = 0; else nodes[0].nxt[i] = 0; Aho() : nodes (1) {}
dq.push_back({x, 0}); }
while(!dq.empty()){ void add_string(const string& s, int id) {
int u = dq.front().st; while(q.size()) { int u = 0;
int ud = dq.front().nd; int u = q.front(); for (char ch : s) {
dq.pop_front(); ord.push_back(u); int c = remap(ch);
if(dist[u] < ud) continue; q.pop(); if (nodes[u].nxt[c] == -1) {
for(auto x : adj[u]){ nodes[u].nxt[c] = nodes.size();
int v = x.st; int j = nodes[nodes[u].p].link; nodes.emplace_back(u, ch);
int w = x.nd; if (j == -1) nodes[u].link = 0; }
if(dist[u] + w < dist[v]){ else nodes[u].link = nodes[j].nxt[nodes[u].char_p];
dist[v] = dist[u] + w; u = nodes[u].nxt[c];
if(w) dq.push_back({v, dist[v]}); nodes[u].has_end |= nodes[nodes[u].link].has_end; }
else dq.push_front({v, dist[v]});
} for (int i = 0; i < ALPHA_SIZE; i++) { nodes[u].ids.set(id);
} if (nodes[u].nxt[i] != -1) q.push(nodes[u].nxt[i]); }
} else nodes[u].nxt[i] = nodes[nodes[u].link].nxt[i];
} } int get_link(int u) {
} if (nodes[u].link == -1) {
} if (u == 0 or nodes[u].par == 0) nodes[u].link = 0;
else nodes[u].link = go(get_link(nodes[u].par), nodes[u].
int match(string &s) { pch);
if (!cnt) return 0; }
5 Strings if (!build_done) build();
}
return nodes[u].link;
ans = 0;
occur = vector<int>(cnt); int go(int u, char ch) {
5.1 Aho-Corasick occur_aux = vector<int>(nodes.size()); int c = remap(ch);
if (nodes[u].go[c] == -1) {
int u = 0; if (nodes[u].nxt[c] != -1) nodes[u].go[c] = nodes[u].nxt[c
// Aho-Corasick for (char ch : s) { ];
int c = remap(ch); else nodes[u].go[c] = (u == 0) ? 0 : go(get_link(u), ch);
// Build: O(sum size of patterns) u = nodes[u].nxt[c]; nodes[u].ids |= nodes[nodes[u].go[c]].ids;
// Find total number of matches: O(size of input string) occur_aux[u]++; }
// Find number of matches for each pattern: O(num of patterns + } return nodes[u].go[c];
size of input string) }
for (int i = (int)ord.size() - 1; i >= 0; i--) {
// ids start from 0 by default! int v = ord[i]; bitset<1005> run(const string& s) {
int fv = nodes[v].link; bitset<1005> bs;
template <int ALPHA_SIZE = 62> occur_aux[fv] += occur_aux[v]; int u = 0;
struct Aho { if (nodes[v].str_idx != -1) { for (char ch : s) {
struct Node { occur[nodes[v].str_idx] = occur_aux[v]; int c = remap(ch);
int p, char_p, link = -1, str_idx = -1, nxt[ALPHA_SIZE]; ans += occur_aux[v]; if (go(u, ch) == -1) assert(0);
bool has_end = false; } bs |= nodes[u].ids;
Node(int _p = -1, int _char_p = -1) : p(_p), char_p(_char_p) } u = nodes[u].nxt[c];
{ if (u == -1) u = 0;
fill(nxt, nxt + ALPHA_SIZE, -1); for (pair<int, int> x : rep) occur[x.first] = occur[x.second }
} ]; bs |= nodes[u].ids;
}; return ans; return bs;
} }
vector<Node> nodes = { Node() }; }; };
int ans, cnt = 0;
bool build_done = false;
vector<pair<int, int>> rep;
vector<int> ord, occur, occur_aux;
// change this if different alphabet
5.2 Aho-Corasick (emaxx) 5.3 Booths Algorithm
int remap(char c) {
if (islower(c)) return c - ’a’;
if (isalpha(c)) return c - ’A’ + 26; // Aho Corasick - <O(sum(m)), O(n + #matches)> // Booth’s Algorithm - Find the lexicographically least rotation
return c - ’0’ + 52; // Multiple string matching of a string in O(n)
hs = ((hs*B)%MOD + s[i])%MOD;
string least_rotation(string s) { int manacher() { hhs = (hs - s[i-m]*E%MOD + MOD)%MOD;
s += s; int n = strlen(s); if (hs == hp) { /* matching position i-m+1 */ }
vector<int> f((int)s.size(), -1); }
int k = 0; string p (2*n+3, ’#’); }
for (int j = 1; j < (int)s.size(); j++) { p[0] = ’ˆ’;
int i = f[j - k - 1]; for (int i = 0; i < n; i++) p[2*(i+1)] = s[i];
while (i != -1 and s[j] != s[k + i + 1]) { p[2*n+2] = ’$’;
if (s[j] < s[k + i + 1]) k = j - i - 1;
}
i = f[i]; int k = 0, r = 0, m = 0;
int l = p.length(); 5.9 Recursive-String Matching
for (int i = 1; i < l; i++) {
if (s[j] != s[k + i + 1]) { int o = 2*k - i;
if (s[j] < s[k]) k = j; lps[i] = (r > i) ? min(r-i, lps[o]) : 0; void p_f(char *s, int *pi) {
f[j - k] = -1; while (p[i + 1 + lps[i]] == p[i - 1 - lps[i]]) lps[i]++; int n = strlen(s);
} else f[j - k] = i + 1; if (i + lps[i] > r) k = i, r = i + lps[i]; pi[0]=pi[1]=0;
} m = max(m, lps[i]); for(int i = 2; i <= n; i++) {
} pi[i] = pi[i-1];
return s.substr(k, (int)s.size() / 2); return m; while(pi[i]>0 and s[pi[i]]!=s[i])
} } pi[i]=pi[pi[i]];
if(s[pi[i]]==s[i-1])
pi[i]++;
}
}
5.4 Knuth-Morris-Pratt (Automaton) 5.7 Manacher 2 int main() {
//...
// KMP Automaton - <O(26*pattern), O(text)> // Manacher O(n) //Initialize prefix function
char p[N]; //Pattern
// max size pattern vector<int> d1, d2; int len = strlen(p); //Pattern size
const int N = 1e5 + 5; int pi[N]; //Prefix function
// d1 -> odd : size = 2 * d1[i] - 1, palindrome from i - d1[i] + p_f(p, pi);
int cnt, nxt[N+1][26]; 1 to i + d1[i] - 1
// d2 -> even : size = 2 * d2[i], palindrome from i - d2[i] to i // Create KMP automaton
void prekmp(string &p) { + d2[i] - 1 int A[N][128]; //A[i][j]: from state i (size of largest
nxt[0][p[0] - ’a’] = 1; suffix of text which is prefix of pattern), append
for(int i = 1, j = 0; i <= p.size(); i++) { void manacher(string &s) { character j -> new state A[i][j]
for(int c = 0; c < 26; c++) nxt[i][c] = nxt[j][c]; int n = s.size(); for( char c : ALPHABET )
if(i == p.size()) continue; d1.resize(n), d2.resize(n); A[0][c] = (p[0] == c);
nxt[i][p[i] - ’a’] = i+1; for(int i = 0, l1 = 0, l2 = 0, r1 = -1, r2 = -1; i < n; i++) for( int i = 1; p[i]; i++ ) {
j = nxt[j][p[i] - ’a’]; { for( char c : ALPHABET ) {
} if(i <= r1) { if(c==p[i])
} d1[i] = min(d1[r1 + l1 - i], r1 - i + 1); A[i][c]=i+1; //match
} else
void kmp(string &s, string &p) { if(i <= r2) { A[i][c]=A[pi[i]][c]; //try second largest suffix
for(int i = 0, j = 0; i < s.size(); i++) { d2[i] = min(d2[r2 + l2 - i + 1], r2 - i + 1); }
j = nxt[j][s[i] - ’a’]; } }
if(j == p.size()) cnt++; //match i - j + 1 while(i - d1[i] >= 0 and i + d1[i] < n and s[i - d1[i]]
} == s[i + d1[i]]) { //Create KMP "string appending" automaton
} d1[i]++; // g_n = g_(n-1) + char(n) + g_(n-1)
} // g_0 = "", g_1 = "a", g_2 = "aba", g_3 = "abacaba", ...
while(i - d2[i] - 1 >= 0 and i + d2[i] < n and s[i - d2[ int F[M][N]; //F[i][j]: from state j (size of largest suffix
i] - 1] == s[i + d2[i]]) { of text which is prefix of pattern), append string g_i
d2[i]++; -> new state F[i][j]
5.5 Knuth-Morris-Pratt }
if(i + d1[i] - 1 > r1) {
for(int i = 0; i < m; i++) {
for(int j = 0; j <= len; j++) {
l1 = i - d1[i] + 1; if(i==0)
r1 = i + d1[i] - 1; F[i][j] = j; //append empty string
// Knuth-Morris-Pratt - String Matching O(n+m) else {
char s[N], p[N]; }
if(i + d2[i] - 1 > r2) { int x = F[i-1][j]; //append g_(i-1)
int b[N], n, m; // n = strlen(s), m = strlen(p); x = A[x][j]; //append character j
l2 = i - d2[i];
r2 = i + d2[i] - 1; x = F[i-1][x]; //append g_(i-1)
void kmppre() { F[i][j] = x;
b[0] = -1; }
} }
for (int i = 0, j = -1; i < m; b[++i] = ++j) }
while (j >= 0 and p[i] != p[j]) }
}
j = b[j];
} //Create number of matches matrix
int K[M][N]; //K[i][j]: from state j (size of largest suffix
void kmp() {
for (int i = 0, j = 0; i < n;) { 5.8 Rabin-Karp of text which is prefix of pattern), append string g_i
-> K[i][j] matches
while (j >= 0 and s[i] != p[j]) j=b[j]; for(int i = 0; i < m; i++) {
i++, j++; for(int j = 0; j <= len; j++) {
if (j == m) { // Rabin-Karp - String Matching + Hashing O(n+m) if(i==0)
// match position i-j const int B = 31; K[i][j] = (j==len); //append empty string
j = b[j]; char s[N], p[N]; else {
} int n, m; // n = strlen(s), m = strlen(p) int x = F[i-1][j]; //append g_(i-1)
} x = A[x][j]; //append character j
} void rabin() {
if (n<m) return; K[i][j] = K[i-1][j] /*append g_(i-1)*/ + (x==len
) /*append character j*/ + K[i-1][x]; /*
ull hp = 0, hs = 0, E = 1; append g_(i-1)*/
for (int i = 0; i < m; ++i) }
5.6 Manacher hp = ((hp*B)%MOD + p[i])%MOD, }
hs = ((hs*B)%MOD + s[i])%MOD, }
E = (E*B)%MOD; //number of matches in g_k
// Manacher (Longest Palindromic String) - O(n) int answer = K[0][k];
int lps[2*N+5]; if (hs == hp) { /* matching position 0 */ } //...
char s[N]; for (int i = m; i < n; ++i) { }
return ans; }
}
5.10 String Hashing Hash operator-(const Hash& b) const {
Hash<N> fhash(int l, int r) {
if (!l) return h[r];
Hash ans; return h[r] - h[l - 1] * p[r - l + 1];
for (int i = 0; i < N; i++) ans.hs[i] = sub(hs[i], b.hs[i], }
// String Hashing mods[i]);
// Rabin Karp - O(n + m) return ans; static Hash<N> shash(string& s, int pr = 313) {
} Hash<N> ans;
// max size txt + 1 for (int i = 0; i < (int)s.size(); i++) ans = pr * ans + s[i
const int N = 1e6 + 5; Hash operator*(const Hash& b) const { ];
Hash ans; return ans;
// lowercase letters p = 31 (remember to do s[i] - ’a’ + 1) for (int i = 0; i < N; i++) ans.hs[i] = mul(hs[i], b.hs[i], }
// uppercase and lowercase letters p = 53 (remember to do s[i] - mods[i]);
’a’ + 1) return ans; friend int rabin_karp(string& s, string& pt) {
// any character p = 313 } PolyHash hs = PolyHash(s);
Hash<N> hp = hs.shash(pt);
const int MOD = 1e9+9; Hash operator+(int b) const { int cnt = 0;
ull h[N], p[N]; Hash ans; for (int i = 0, m = (int)pt.size(); i + m <= (int)s.size();
ull pr = 313; for (int i = 0; i < N; i++) ans.hs[i] = add(hs[i], b, mods[i i++) {
]); if (hs.fhash(i, i + m - 1) == hp) {
int cnt; return ans; // match at i
} cnt++;
}
void build(string &s) { Hash operator*(int b) const { }
p[0] = 1, p[1] = pr; Hash ans;
for(int i = 1; i <= s.size(); i++) { for (int i = 0; i < N; i++) ans.hs[i] = mul(hs[i], b, mods[i return cnt;
h[i] = ((p[1]*h[i-1]) % MOD + s[i-1]) % MOD; ]); }
p[i] = (p[1]*p[i-1]) % MOD; return ans; };
} }
}
friend Hash operator*(int a, const Hash& b) {
// 1-indexed Hash ans;
ull fhash(int l, int r) {
}
return (h[r] - ((h[l-1]*p[r-l+1]) % MOD) + MOD) % MOD;
for (int i = 0; i < N; i++) ans.hs[i] = mul(b.hs[i], a, b.
mods[i]); 5.12 Suffix Array
return ans;
}
ull shash(string &pt) {
ull h = 0; // Suffix Array O(nlogn)
friend ostream& operator<<(ostream& os, const Hash& b) { // s.push(’$’);
for(int i = 0; i < pt.size(); i++) for (int i = 0; i < N; i++) os << b.hs[i] << " \n"[i == N -
h = ((h*pr) % MOD + pt[i]) % MOD; vector<int> suffix_array(string &s){
1]; int n = s.size(), alph = 256;
return h; return os;
} vector<int> cnt(max(n, alph)), p(n), c(n);
}
}; for(auto c : s) cnt[c]++;
void rabin_karp(string &s, string &pt) {
build(s); for(int i = 1; i < alph; i++) cnt[i] += cnt[i - 1];
template <int N> vector<int> Hash<N>::mods = { (int) 1e9 + 9, ( for(int i = 0; i < n; i++) p[--cnt[s[i]]] = i;
ull hp = shash(pt); int) 1e9 + 33, (int) 1e9 + 87 };
for(int i = 0, m = pt.size(); i + m <= s.size(); i++) { for(int i = 1; i < n; i++)
if(fhash(i+1, i+m) == hp) { c[p[i]] = c[p[i - 1]] + (s[p[i]] != s[p[i - 1]]);
// In case you need to generate the MODs, uncomment this:
// match at i // Obs: you may need this on your template
cnt++; vector<int> c2(n), p2(n);
// mt19937_64 llrand((int) chrono::steady_clock::now().
} time_since_epoch().count());
} for(int k = 0; (1 << k) < n; k++){
// In main: gen<>(); int classes = c[p[n - 1]] + 1;
} /* fill(cnt.begin(), cnt.begin() + classes, 0);
template <int N> vector<int> Hash<N>::mods;
template<int N = 3> for(int i = 0; i < n; i++) p2[i] = (p[i] - (1 << k) + n)%n;
void gen() { for(int i = 0; i < n; i++) cnt[c[i]]++;
while (Hash<N>::mods.size() < N) {
5.11 String Multihashing int mod;
for(int
for(int
i
i
=
=
1; i <
n - 1;
classes; i++) cnt[i] += cnt[i - 1];
i >= 0; i--) p[--cnt[c[p2[i]]]] = p2[i];
bool is_prime;
do { c2[p[0]] = 0;
// String Hashing mod = (int) 1e8 + (int) (llrand() % (int) 9e8); for(int i = 1; i < n; i++){
// Rabin Karp - O(n + m) is_prime = true; pair<int, int> b1 = {c[p[i]], c[(p[i] + (1 << k))%n]};
template <int N = 3> for (int i = 2; i * i <= mod; i++) { pair<int, int> b2 = {c[p[i - 1]], c[(p[i - 1] + (1 << k))%
struct Hash { if (mod % i == 0) { n]};
int hs[N]; is_prime = false; c2[p[i]] = c2[p[i - 1]] + (b1 != b2);
static vector<int> mods; break; }
}
static int add(int a, int b, int mod) { return a >= mod - b ? } c.swap(c2);
a + b - mod : a + b; } } while (!is_prime); }
static int sub(int a, int b, int mod) { return a - b < 0 ? a - Hash<N>::mods.push_back(mod); return p;
b + mod : a - b; } } }
static int mul(int a, int b, int mod) { return 1ll * a * b % }
mod; } */ // Longest Common Prefix with SA O(n)
vector<int> lcp(string &s, vector<int> &p){
Hash(int x = 0) { fill(hs, hs + N, x); } template <int N = 3> int n = s.size();
struct PolyHash { vector<int> ans(n - 1), pi(n);
bool operator<(const Hash& b) const { vector<Hash<N>> h, p; for(int i = 0; i < n; i++) pi[p[i]] = i;
for (int i = 0; i < N; i++) {
if (hs[i] < b.hs[i]) return true; PolyHash(string& s, int pr = 313) { int lst = 0;
if (hs[i] > b.hs[i]) return false; int sz = (int)s.size(); for(int i = 0; i < n - 1; i++){
} p.resize(sz + 1); if(pi[i] == n - 1) continue;
return false; h.resize(sz + 1); while(s[i + lst] == s[p[pi[i] + 1] + lst]) lst++;
}
p[0] = 1, h[0] = s[0]; ans[pi[i]] = lst;
Hash operator+(const Hash& b) const { for (int i = 1; i < sz; i++) { lst = max(0, lst - 1);
Hash ans; h[i] = pr * h[i - 1] + s[i]; }
p[i] = pr * p[i - 1];
for (int i = 0; i < N; i++) ans.hs[i] = add(hs[i], b.hs[i],
mods[i]); } return ans;
} if (!d[v]) substr_cnt(v);
d[u] += d[v];
// Longest Repeated Substring O(n)
int lrs = 0; }
} 5.14 Suffix Tree
for (int i = 0; i < n; ++i) lrs = max(lrs, lcp[i]);
ll substr_cnt() {
// Longest Common Substring O(n) memset(d, 0, sizeof d); // Suffix Tree
// m = strlen(s); substr_cnt(0); // Build: O(|s|)
// strcat(s, "$"); strcat(s, p); strcat(s, "#"); return d[0] - 1; // Match: O(|p|)
// n = strlen(s); }
int lcs = 0; template<int ALPHA_SIZE = 62>
for (int i = 1; i < n; ++i) if ((sa[i] < m) != (sa[i-1] < m)) // k-th Substring - O(|s|) struct SuffixTree {
lcs = max(lcs, lcp[i]); // Just find the k-th path in the automaton. struct Node {
// Can be done with the value d calculated in previous problem. int p, link = -1, l, r, nch = 0;
// To calc LCS for multiple texts use a slide window with vector<int> nxt;
minqueue // Smallest cyclic shift - O(|s|) Node(int _l = 0, int _r = -1, int _p = -1) : p(_p), l(_l), r
// The numver of different substrings of a string is n*(n + 1)/2 // Build the automaton for string s + s. And adapt previous dp (_r), nxt(ALPHA_SIZE, -1) {}
- sum(lcs[i])
// to only count paths with size |s|.
int len() { return r - l + 1; }
int next(char ch) { return nxt[remap(ch)]; }
// Number of occurences - O(|p|) // change this if different alphabet
5.13 Suffix Automaton vector<int> t[2*N]; int remap(char c) {
if (islower(c)) return c - ’a’;
void occur_count(int u) { if (isalpha(c)) return c - ’A’ + 26;
for(int v : t[u]) occur_count(v), cnt[u] += cnt[v]; return c - ’0’ + 52;
// Suffix Automaton Construction - O(n) } }
const int N = 1e6+1, K = 26; void build_tree() { void setEdge(char ch, int nx) {
int sl[2*N], len[2*N], sz, last; for(int i=1; i<=sz; ++i) int c = remap(ch);
ll cnt[2*N]; t[sl[i]].push_back(i); if (nxt[c] != -1 and nx == -1) nch--;
map<int, int> adj[2*N]; occur_count(0); else if (nxt[c] == -1 and nx != -1) nch++;
} nxt[c] = nx;
void add(int c) { }
int u = sz++; ll occur_count(char *p) { };
len[u] = len[last] + 1; // Call build tree once per automaton
cnt[u] = 1; int u = 0; string s;
for(int i=0; p[i]; ++i) { long long num_diff_substr = 0;
int p = last; u = adj[u][p[i]]; vector<Node> nodes;
while(p != -1 and !adj[p][c]) if (!u) break; queue<int> leaves;
adj[p][c] = u, p = sl[p]; } pair<int, int> st = { 0, 0 };
return !u ? 0 : cnt[u]; int ls = 0, rs = -1, n;
if (p == -1) sl[u] = 0; }
else { int size() { return rs - ls + 1; }
int q = adj[p][c]; // First occurence - (|p|)
if (len[p] + 1 == len[q]) sl[u] = q; // Store the first position of occurence fp. SuffixTree(string &_s) {
else { // Add the the code to add function: s = _s;
int r = sz++; // fp[u] = len[u] - 1; // Add this if you want every suffix to be a node
len[r] = len[p] + 1; // fp[r] = fp[q]; // s += ’$’;
sl[r] = sl[q]; n = (int)s.size();
adj[r] = adj[q]; // To answer a query, just output fp[u] - strlen(p) + 1 nodes.reserve(2 * n + 1);
while(p != -1 and adj[p][c] == q) // where u is the state corresponding to string p nodes.push_back(Node());
adj[p][c] = r, p = sl[p]; //for (int i = 0; i < n; i++) extend();
sl[q] = sl[u] = r; // All occurences - O(|p| + |ans|) }
} // All the occurences can reach the first occurence via suffix
} links. pair<int, int> walk(pair<int, int> _st, int l, int r) {
// So every state that contains a occreunce is reacheable by the int u = _st.first;
last = u; // first occurence state in the suffix link tree. Just do a DFS int d = _st.second;
} in this
// tree, starting from the first occurence. while (l <= r) {
void clear() { // OBS: cloned nodes will output same answer twice. if (d == nodes[u].len()) {
for(int i=0; i<=sz; ++i) adj[i].clear(); u = nodes[u].next(s[l]), d = 0;
last = 0; if (u == -1) return { u, d };
sz = 1; } else {
sl[0] = -1; // Smallest substring not contained in the string - O(|s| * K)
// Just do a dynamic programming: if (s[nodes[u].l + d] != s[l]) return { -1, -1 };
} if (r - l + 1 + d < nodes[u].len()) return { u, r - l +
// d[u] = 1 // if d does not have 1 transition
// d[u] = 1 + min d[v] // otherwise 1 + d };
void build(char *s) { l += nodes[u].len() - d;
clear(); d = nodes[u].len();
for(int i=0; s[i]; ++i) add(s[i]); }
} // LCS of 2 Strings - O(|s| + |t|)
// Build automaton of s and traverse the automaton wih string t }
// Pattern matching - O(|p|) // mantaining the current state and the current lenght.
// When we have a transition: update state, increase lenght by return { u, d };
bool check(char *p) { }
int u = 0, ok = 1; one.
for(int i=0; p[i]; ++i) { // If we don’t update state by suffix link and the new lenght
will int split(pair<int, int> _st) {
u = adj[u][p[i]]; int u = _st.first;
if (!u) ok = 0; // should be reduced (if bigger) to the new state length.
// Answer will be the maximum length of the whole traversal. int d = _st.second;
}
return ok; if (d == nodes[u].len()) return u;
} // LCS of n Strings - O(n*|s|*K) if (!d) return nodes[u].p;
// Create a new string S = s_1 + d1 + ... + s_n + d_n,
// Substring count - O(|p|) // where d_i are delimiters that are unique (d_i != d_j). Node& nu = nodes[u];
ll d[2*N]; // For each state use DP + bitmask to calculate if it can int mid = (int)nodes.size();
// reach a d_i transition without going through other d_j. nodes.push_back(Node(nu.l, nu.l + d - 1, nu.p));
void substr_cnt(int u) { // The answer will be the biggest len[u] that can reach all nodes[nu.p].setEdge(s[nu.l], mid);
d[u] = 1; // d_i’s. nodes[mid].setEdge(s[nu.l + d], u);
for(auto p : adj[u]) { nu.p = mid;
int v = p.second; nu.l += d;
return mid; r = n / (n / l);
} vector<int> zfunction(const string& s){ // floor(n / i) has the same value for l <= i <= r
vector<int> z (s.size()); }
int getLink(int u) { for (int i = 1, l = 0, r = 0, n = s.size(); i < n; i++){
if (nodes[u].link != -1) return nodes[u].link; if (i <= r) z[i] = min(z[i-l], r - i + 1); /* Recurrence using matriz
if (nodes[u].p == -1) return 0; while (i + z[i] < n and s[z[i]] == s[z[i] + i]) z[i]++; h[i + 2] = a1 * h[i + 1] + a0 * h[i]
int to = getLink(nodes[u].p); if (i + z[i] - 1 > r) l = i, r = i + z[i] - 1; [h[i] h[i-1]] = [h[1] h[0]] * [a1 1] ˆ (i - 1)
pair<int, int> nst = { to, nodes[to].len() }; } [a0 0] */
return nodes[u].link = split(walk(nst, nodes[u].l + (nodes[u return z;
].p == 0), nodes[u].r)); } /* Fibonacci in O(log(N)) with memoization
} f(0) = f(1) = 1
f(2*k) = f(k)ˆ2 + f(k - 1)ˆ2
bool match(string &p) { f(2*k + 1) = f(k)*[f(k) + 2*f(k - 1)] */
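// Sketch of the doubling recurrence above with memoization (f(0) = f(1) = 1,
// answers taken mod MOD; fib_memo is a helper introduced here):
map<ll, ll> fib_memo;
ll fib(ll n) {
    if (n <= 1) return 1;
    if (fib_memo.count(n)) return fib_memo[n];
    ll a = fib(n / 2), b = fib(n / 2 - 1), r;
    if (n & 1) r = a * ((a + 2 * b) % MOD) % MOD;
    else r = (a * a % MOD + b * b % MOD) % MOD;
    return fib_memo[n] = r;
}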
int u = 0, d = 0;
for (char ch : p) {
if (d == min(nodes[u].r, rs) - nodes[u].l + 1) {
u = nodes[u].next(ch), d = 1;
6 Mathematics /* Wilson’s Theorem Extension
B = b1 * b2 * ... * bm (mod n) = +-1, all bi <= n such that gcd
(bi, n) = 1
if (u == -1) return false; if(n <= 4 or n = (odd prime)ˆk or n = 2 * (odd prime)ˆk) B =
} else { -1; for any k
if (ch != s[nodes[u].l + d]) return false; 6.1 Basics else B = 1; */
d++;
} /* Stirling numbers of the second kind
} // Greatest Common Divisor & Lowest Common Multiple S(n, k) = Number of ways to split n numbers into k non-empty
return true; ll gcd(ll a, ll b) { return b ? gcd(b, a%b) : a; } sets
} ll lcm(ll a, ll b) { return a/gcd(a, b)*b; } S(n, 1) = S(n, n) = 1
S(n, k) = k * S(n - 1, k) + S(n - 1, k - 1)
void extend() { // Multiply caring overflow Sr(n, k) = S(n, k) with at least r numbers in each set
int mid; ll mulmod(ll a, ll b, ll m = MOD) { Sr(n, k) = k * Sr(n - 1, k) + (n - 1) * Sr(n - r, k - 1)
assert(rs != n - 1); ll r=0; (r - 1)
rs++; for (a %= m; b; b>>=1, a=(a*2)%m) if (b&1) r=(r+a)%m; S(n - d + 1, k - d + 1) = S(n, k) where if indexes i, j belong
num_diff_substr += (int)leaves.size(); return r; to the same set, then |i - j| >= d */
do { }
pair<int, int> nst = walk(st, rs, rs); /* Burnside’s Lemma
if (nst.first != -1) { st = nst; return; } // Another option for mulmod is using long double |Classes| = 1 / |G| * sum(K ˆ C(g)) for each g in G
mid = split(st); ull mulmod(ull a, ull b, ull m = MOD) { G = Different permutations possible
int leaf = (int)nodes.size(); ull q = (ld) a * (ld) b / (ld) m; C(g) = Number of cycles on the permutation g
num_diff_substr++; ull r = a * b - q * m; K = Number of states for each element
leaves.push(leaf); return (r + m) % m;
nodes.push_back(Node(rs, n - 1, mid)); } Different ways to paint a necklace with N beads and K colors:
nodes[mid].setEdge(s[rs], leaf); G = {(1, 2, ... N), (2, 3, ... N, 1), ... (N, 1, ... N - 1)}
int to = getLink(mid); // Fast exponential gi = (i, i + 1, ... i + N), (taking mod N to get it right) i =
st = { to, nodes[to].len() }; ll fexp(ll a, ll b, ll m = MOD) { 1 ... N
} while (mid); ll r=1; i -> 2i -> 3i ..., Cycles in gi all have size n / gcd(i, n), so
} for (a %= m; b; b>>=1, a=(a*a)%m) if (b&1) r=(r*a)%m; C(gi) = gcd(i, n)
return r; Ans = 1 / N * sum(K ˆ gcd(i, n)), i = 1 ... N
void pop() { } (For the brave, you can get to Ans = 1 / N * sum(euler_phi(N /
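// Sketch of the necklace count above, taken mod MOD (uses gcd and fexp from 6.1;
// assumes MOD is prime, so the division by N is a multiplication by an inverse):
ll necklaces(ll n, ll k) {
    ll s = 0;
    for (ll i = 1; i <= n; i++) s = (s + fexp(k, gcd(i, n))) % MOD;
    return s * fexp(n % MOD, MOD - 2) % MOD;
}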
assert(ls <= rs);
ls++; d) * K ˆ d), d | N) */
int leaf = leaves.front();
leaves.pop(); /* Mobius Inversion
Node* nlf = &nodes[leaf];
while (!nlf->nch) {
6.2 Advanced Sum of gcd(i, j), 1 <= i, j <= N?
sum(k->N) k * sum(i->N) sum(j->N) [gcd(i, j) == k], i = a * k,
if (st.first != leaf) { j = b * k
nodes[nlf->p].setEdge(s[nlf->l], -1); = sum(k->N) k * sum(a->N/k) sum(b->N/k) [gcd(a, b) == 1]
/* Line integral = integral(sqrt(1 + (dy/dx)ˆ2)) dx */
num_diff_substr -= min(nlf->r, rs) - nlf->l + 1; = sum(k->N) k * sum(a->N/k) sum(b->N/k) sum(d->N/k) [d | a] * [
leaf = nlf->p; /* Multiplicative Inverse over MOD for all 1..N - 1 < MOD in O(N d | b] * mi(d)
nlf = &nodes[leaf]; ) = sum(k->N) k * sum(d->N/k) mi(d) * floor(N / kd)ˆ2, l = kd, l
} else { Only works for prime MOD. If all 1..MOD - 1 needed, use N = MOD <= N, k | l, d = l / k
if (st.second != min(nlf->r, rs) - nlf->l + 1) { */
int mid = split(st); = sum(l->N) floor(N / l)ˆ2 * sum(k|l) k * mi(l / k)
ll inv[N]; If f(n) = sum(x|n)(g(x) * h(x)) with g(x) and h(x)
st.first = mid; inv[1] = 1;
num_diff_substr -= min(nlf->r, rs) - nlf->l + 1; multiplicative, then f(n) is multiplicative
for(int i = 2; i < N; ++i)
nodes[mid].setEdge(s[nlf->l], -1); inv[i] = MOD - (MOD / i) * inv[MOD % i] % MOD; Hence, g(l) = sum(k|l) k * mi(l / k) is multiplicative
*nlf = nodes[mid]; = sum(l->N) floor(N / l)ˆ2 * g(l) */
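// Not in the original notebook: a sketch of the final identity, ans = sum_l floor(N/l)^2 * g(l)
// with g = Id * mu. Assumes mob[] was filled by the Mobius sieve in 6.17; runs in O(N log N),
// for N small enough that the answer fits in ll.
ll sum_of_gcd(ll n) {
    vector<ll> g(n + 1, 0);
    for (ll k = 1; k <= n; k++)        // g(l) = sum over k | l of k * mob(l / k)
        for (ll l = k; l <= n; l += k)
            g[l] += k * mob[l / k];
    ll ans = 0;
    for (ll l = 1; l <= n; l++)
        ans += (n / l) * (n / l) * g[l];
    return ans;
}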
nodes[nlf->p].setEdge(s[nlf->l], leaf); /* Catalan
nodes.pop_back(); f(n) = sum(f(i) * f(n - i - 1)), i in [0, n - 1] = (2n)! / ((n /* Frobenius / Chicken McNugget
} +1)! * n!) = ... n, m given, gcd(n, m) = 1, we want to know if it’s possible to
break; If you have any function f(n) (there are many) that follows create N = a * n + b * m
} this sequence (0-indexed): N, a, b >= 0
} 1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, 208012, The greatest number NOT possible is n * m - n - m
742900, 2674440 We can NOT create (n - 1) * (m - 1) / 2 numbers */
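// Not in the original notebook: a tiny check of the statement above (the name representable
// is ours). Any N > n*m - n - m is representable when gcd(n, m) = 1; this verifies one N directly.
bool representable(ll N, ll n, ll m) { // can N = a*n + b*m with a, b >= 0?
    for (ll a = 0; a * n <= N; a++)
        if ((N - a * n) % m == 0) return true;
    return false;
}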
if (leaf and !nlf->nch) { then it’s the Catalan function */
leaves.push(leaf); ll cat[N];
int to = getLink(nlf->p); cat[0] = 1;
pair<int, int> nst = { to, nodes[to].len() }; for(int i = 1; i + 1 < N; i++) // needs inv[i + 1] till inv[N -
st = walk(nst, nlf->l + (nlf->p == 0), nlf->r);
nlf->l = rs - nlf->len() + 1;
1] 6.3 Discrete Log (Baby-step Giant-step)
cat[i] = 2ll * (2ll * i - 1) * inv[i + 1] % MOD * cat[i
nlf->r = n - 1; - 1] % MOD;
} // O(sqrt(m))
} /* Floor(n / i), i = [1, n], has <= 2 * sqrt(n) diff values. // Solve c * aˆx = b mod(m) for integer x >= 0.
}; Proof: i = [1, sqrt(n)] has sqrt(n) diff values. // Return the smallest x possible, or -1 if there is no solution
For i = [sqrt(n), n] we have that 1 <= n / i <= sqrt(n) // If all solutions needed, solve c * aˆx = b mod(m) and (a*b) *
and thus has <= sqrt(n) diff values. aˆy = b mod(m)
*/ // x + k * (y + 1) for k >= 0 are all solutions
/* l = first number that has floor(N / l) = x // Works for any integer values of c, a, b and positive m
5.15 Z Function r = last number that has floor(N / r) = x
N / r >= floor(N / l) // Corner Cases:
r <= N / floor(N / l)*/ // 0ˆx = 1 mod(m) returns x = 0, so you may want to change it to
// Z-Function - O(n) for(int l = 1, r; l <= n; l = r + 1){ -1
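// Not in the original notebook: the loop above completed, visiting every block [l, r] on which
// floor(n / i) is constant; summing floor(n / i) is used here only as an example.
ll sum_of_floors(ll n) { // O(sqrt(n))
    ll total = 0;
    for (ll l = 1, r; l <= n; l = r + 1) {
        ll v = n / l;
        r = n / v;                   // last index i with floor(n / i) == v
        total += (r - l + 1) * v;
    }
    return total;
}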
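// Not in the original notebook: the body of 5.15 is not visible in this extraction, so this is a
// standard O(n) Z-function sketch (z[i] = longest common prefix of s and s.substr(i)).
vector<int> z_function(const string &s) {
    int n = s.size();
    vector<int> z(n, 0);
    for (int i = 1, l = 0, r = 0; i < n; i++) {
        if (i < r) z[i] = min(r - i, z[i - l]);
        while (i + z[i] < n && s[z[i]] == s[i + z[i]]) z[i]++;
        if (i + z[i] > r) l = i, r = i + z[i];
    }
    return z;
}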
// You also may want to change for 0ˆx = 0 mod(1) to return x = if(c % g) return false; //
1 instead x *= c / g; // FFT made by tourist. It is faster and supports more operations,
// We leave it like it is because you might be actually checking y *= c / g; although it requires more lines of code.
for mˆx = 0ˆx mod(m) if(a < 0) x = -x; // Also, it allows operations with MOD, which is usually an
// which would have x = 0 as the actual solution. if(b < 0) y = -y; issue in FFT problems.
ll discrete_log(ll c, ll a, ll b, ll m){ return true; //
c = ((c % m) + m) % m, a = ((a % m) + m) % m, b = ((b % } namespace fft {
m) + m) % m; typedef double dbl;
if(c == b) // auxiliar to find_all_solutions
return 0; void shift_solution (ll &x, ll &y, ll a, ll b, ll cnt) { struct num {
x += cnt * b; dbl x, y;
ll g = __gcd(a, m); y -= cnt * a; num() { x = y = 0; }
if(b % g) return -1; } num(dbl x, dbl y) : x(x), y(y) {}
};
if(g > 1){ // Find the amount of solutions of
ll r = discrete_log(c * a / g, a, b / g, m / g); // ax + by = c inline num operator+ (num a, num b) { return num(a.x + b.x, a.
return r + (r >= 0); // in given intervals for x and y y + b.y); }
} ll find_all_solutions (ll a, ll b, ll c, ll minx, ll maxx, ll inline num operator- (num a, num b) { return num(a.x - b.x, a.
miny, ll maxy) { y - b.y); }
unordered_map<ll, ll> babystep; ll x, y, g = __gcd(a, b); inline num operator* (num a, num b) { return num(a.x * b.x - a
ll n = 1, an = a % m; if(!diof(a, b, c, x, y)) return 0; .y * b.y, a.x * b.y + a.y * b.x); }
a /= g; b /= g; inline num conj(num a) { return num(a.x, -a.y); }
// set n to the ceil of sqrt(m):
while(n * n < m) n++, an = (an * a) % m; int sign_a = a>0 ? +1 : -1; int base = 1;
int sign_b = b>0 ? +1 : -1; vector<num> roots = {{0, 0}, {1, 0}};
// babysteps: vector<int> rev = {0, 1};
ll bstep = b; shift_solution (x, y, a, b, (minx - x) / b);
for(ll i = 0; i <= n; i++){ if (x < minx) const dbl PI = acosl(-1.0);
babystep[bstep] = i; shift_solution (x, y, a, b, sign_b);
bstep = (bstep * a) % m; if (x > maxx) void ensure_base(int nbase) {
} return 0; if(nbase <= base) return;
int lx1 = x;
// giantsteps: rev.resize(1 << nbase);
ll gstep = c * an % m; shift_solution (x, y, a, b, (maxx - x) / b); for(int i=0; i < (1 << nbase); i++) {
for(ll i = 1; i <= n; i++){ if (x > maxx) rev[i] = (rev[i >> 1] >> 1) + ((i & 1) << (nbase - 1));
if(babystep.find(gstep) != babystep.end()) shift_solution (x, y, a, b, -sign_b); }
return n * i - babystep[gstep]; int rx1 = x; roots.resize(1 << nbase);
gstep = (gstep * an) % m;
} shift_solution (x, y, a, b, - (miny - y) / a); while(base < nbase) {
return -1; if (y < miny) dbl angle = 2*PI / (1 << (base + 1));
} shift_solution (x, y, a, b, -sign_a); for(int i = 1 << (base - 1); i < (1 << base); i++) {
if (y > maxy) roots[i << 1] = roots[i];
return 0; dbl angle_i = angle * (2 * i + 1 - (1 << base));
int lx2 = x; roots[(i << 1) + 1] = num(cos(angle_i), sin(angle_i));
}
6.4 Euler Phi shift_solution (x, y, a, b, - (maxy - y) / a);
if (y > maxy) }
base++;
shift_solution (x, y, a, b, sign_a); }
// Euler phi (totient) int rx2 = x;
int ind = 0, pf = primes[0], ans = n; void fft(vector<num> &a, int n = -1) {
while (1ll*pf*pf <= n) { if (lx2 > rx2) if(n == -1) {
if (n%pf==0) ans -= ans/pf; swap (lx2, rx2); n = a.size();
while (n%pf==0) n /= pf; int lx = max (lx1, lx2); }
pf = primes[++ind]; int rx = min (rx1, rx2); assert((n & (n-1)) == 0);
} int zeros = __builtin_ctz(n);
if (n != 1) ans -= ans/n; if (lx > rx) return 0; ensure_base(zeros);
return (rx - lx) / abs(b) + 1; int shift = base - zeros;
// IME2014 } for(int i = 0; i < n; i++) {
int phi[N]; if(i < (rev[i] >> shift)) {
void totient() { bool crt_auxiliar(ll a, ll b, ll m1, ll m2, ll &ans){ swap(a[i], a[rev[i] >> shift]);
for (int i = 1; i < N; ++i) phi[i]=i; ll x, y; }
for (int i = 2; i < N; i+=2) phi[i]>>=1; if(!diof(m1, m2, b - a, x, y)) return false; }
for (int j = 3; j < N; j+=2) if (phi[j]==j) { ll lcm = m1 / __gcd(m1, m2) * m2; for(int k = 1; k < n; k <<= 1) {
phi[j]--; ans = ((a + x % (lcm / m1) * m1) % lcm + lcm) % lcm; for(int i = 0; i < n; i += 2 * k) {
for (int i = 2*j; i < N; i+=j) phi[i]=phi[i]/j*(j-1); return true; for(int j = 0; j < k; j++) {
} } num z = a[i+j+k] * roots[j+k];
} a[i+j+k] = a[i+j] - z;
// find ans such that ans = a[i] mod b[i] for all 0 <= i < n or a[i+j] = a[i+j] + z;
return false if not possible }
// ans + k * lcm(b[i]) are also solutions }
bool crt(int n, ll a[], ll b[], ll &ans){ }
if(!b[0]) return false; }
6.5 Extended Euclidean and Chinese Re- ans = a[0] % b[0];
ll l = b[0]; vector<num> fa, fb;
mainder for(int i = 1; i < n; i++){
if(!b[i]) return false;
vector<int> multiply(vector<int> &a, vector<int> &b) {
int need = a.size() + b.size() - 1;
if(!crt_auxiliar(ans, a[i] % b[i], l, b[i], ans)) return int nbase = 0;
// Extended Euclid: false; while((1 << nbase) < need) nbase++;
void euclid(ll a, ll b, ll &x, ll &y) { l *= (b[i] / __gcd(b[i], l)); ensure_base(nbase);
if (b) euclid(b, a%b, y, x), y -= x*(a/b); } int sz = 1 << nbase;
else x = 1, y = 0; return true; if(sz > (int) fa.size()) {
} } fa.resize(sz);
}
// find (x, y) such that a*x + b*y = c or return false if it’s for(int i = 0; i < sz; i++) {
not possible int x = (i < (int) a.size() ? a[i] : 0);
// [x + k*b/gcd(a, b), y - k*a/gcd(a, b)] are also solutions int y = (i < (int) b.size() ? b[i] : 0);
bool diof(ll a, ll b, ll c, ll &x, ll &y){ 6.6 Fast Fourier Transform(Tourist) }
fa[i] = num(x, y);
euclid(abs(a), abs(b), x, y);
ll g = abs(__gcd(a, b)); fft(fa, sz);
num r(0, -0.25 / sz);
for(int i = 0; i <= (sz >> 1); i++) {
6.7 Fast Fourier Transform // WARNING: assert n is a power of two!
void fwht(ll* a, int n, bool inv) {
int j = (sz - i) & (sz - 1); for(int l=1; 2*l <= n; l<<=1) {
num z = (fa[j] * fa[j] - conj(fa[i] * fa[i])) * r; // Fast Fourier Transform - O(nlogn) for(int i=0; i < n; i+=2*l) {
if(i != j) { for(int j=0; j<l; j++) {
fa[j] = (fa[i] * fa[i] - conj(fa[j] * fa[j])) * r; /* ll u = a[i+j], v = a[i+l+j];
} // Use struct instead. Performance will be way better!
fa[i] = z; typedef complex<ld> T; a[i+j] = (u+v) % MOD;
} T a[N], b[N]; a[i+l+j] = (u-v+MOD) % MOD;
fft(fa, sz); */ // % is kinda slow, you can use add() macro instead
vector<int> res(need); // #define add(x,y) (x+y >= MOD ? x+y-MOD : x+y)
for(int i = 0; i < need; i++) { struct T { }
res[i] = fa[i].x + 0.5; ld x, y; }
} T() : x(0), y(0) {} }
return res; T(ld a, ld b=0) : x(a), y(b) {}
} if(inv) {
T operator/=(ld k) { x/=k; y/=k; return (*this); } for(int i=0; i<n; i++) {
vector<int> multiply_mod(vector<int> &a, vector<int> &b, int m T operator*(T a) const { return T(x*a.x - y*a.y, x*a.y + y*a.x a[i] = a[i] / n;
, int eq = 0) { ); } }
int need = a.size() + b.size() - 1; T operator+(T a) const { return T(x+a.x, y+a.y); } }
int nbase = 0; T operator-(T a) const { return T(x-a.x, y-a.y); } }
while ((1 << nbase) < need) nbase++; } a[N], b[N];
ensure_base(nbase);
int sz = 1 << nbase; // a: vector containing polynomial /* FWHT AND
if (sz > (int) fa.size()) { // n: power of two greater or equal product size Matrix : Inverse
fa.resize(sz); /* 0 1 -1 1
} // Use iterative version! 1 1 1 0
for (int i = 0; i < (int) a.size(); i++) { void fft_recursive(T* a, int n, int s) { */
int x = (a[i] % m + m) % m; if (n == 1) return; void fwht_and(vi &a, bool inv) {
fa[i] = num(x & ((1 << 15) - 1), x >> 15); T tmp[n]; vi ret = a;
} for (int i = 0; i < n/2; ++i) ll u, v;
fill(fa.begin() + a.size(), fa.begin() + sz, num {0, 0}); tmp[i] = a[2*i], tmp[i+n/2] = a[2*i+1]; int tam = a.size() / 2;
fft(fa, sz); for(int len = 1; 2 * len <= tam; len <<= 1) {
if (sz > (int) fb.size()) { fft_recursive(&tmp[0], n/2, s); for(int i = 0; i < tam; i += 2 * len) {
fb.resize(sz); fft_recursive(&tmp[n/2], n/2, s); for(int j = 0; j < len; j++) {
} u = ret[i + j];
if (eq) { T wn = T(cos(s*2*PI/n), sin(s*2*PI/n)), w(1,0); v = ret[i + len + j];
copy(fa.begin(), fa.begin() + sz, fb.begin()); for (int i = 0; i < n/2; i++, w=w*wn) if(!inv) {
} else { a[i] = tmp[i] + w*tmp[i+n/2], ret[i + j] = v;
for (int i = 0; i < (int) b.size(); i++) { a[i+n/2] = tmp[i] - w*tmp[i+n/2]; ret[i + len + j] = u + v;
int x = (b[i] % m + m) % m; } }
fb[i] = num(x & ((1 << 15) - 1), x >> 15); */ else {
} ret[i + j] = -u + v;
fill(fb.begin() + b.size(), fb.begin() + sz, num {0, 0}); void fft(T* a, int n, int s) { ret[i + len + j] = u;
fft(fb, sz); for (int i=0, j=0; i<n; i++) { }
} if (i>j) swap(a[i], a[j]); }
dbl ratio = 0.25 / sz; for (int l=n/2; (jˆ=l) < l; l>>=1); }
num r2(0, -1); } }
num r3(ratio, 0); a = ret;
num r4(0, -ratio); for(int i = 1; (1<<i) <= n; i++){ }
num r5(0, 1); int M = 1 << i;
for (int i = 0; i <= (sz >> 1); i++) { int K = M >> 1;
int j = (sz - i) & (sz - 1); T wn = T(cos(s*2*PI/M), sin(s*2*PI/M)); /* FWHT OR
num a1 = (fa[i] + conj(fa[j])); for(int j = 0; j < n; j += M) { Matrix : Inverse
num a2 = (fa[i] - conj(fa[j])) * r2; T w = T(1, 0); 1 1 0 1
num b1 = (fb[i] + conj(fb[j])) * r3; for(int l = j; l < K + j; ++l){ 1 0 1 -1
num b2 = (fb[i] - conj(fb[j])) * r4; T t = w*a[l + K]; */
if (i != j) { a[l + K] = a[l]-t; void fft_or(vi &a, bool inv) {
num c1 = (fa[j] + conj(fa[i])); a[l] = a[l] + t; vi ret = a;
num c2 = (fa[j] - conj(fa[i])) * r2; w = wn*w; ll u, v;
num d1 = (fb[j] + conj(fb[i])) * r3; } int tam = a.size() / 2;
num d2 = (fb[j] - conj(fb[i])) * r4; } for(int len = 1; 2 * len <= tam; len <<= 1) {
fa[i] = c1 * d1 + c2 * d2 * r5; } for(int i = 0; i < tam; i += 2 * len) {
fb[i] = c1 * d2 + c2 * d1; } for(int j = 0; j < len; j++) {
} u = ret[i + j];
fa[j] = a1 * b1 + a2 * b2 * r5; // assert n is a power of two greater of equal product size v = ret[i + len + j];
fb[j] = a1 * b2 + a2 * b1; // n = na + nb; while (n&(n-1)) n++; if(!inv) {
} void multiply(T* a, T* b, int n) { ret[i + j] = u + v;
fft(fa, sz); fft(a,n,1); ret[i + len + j] = u;
fft(fb, sz); fft(b,n,1); }
vector<int> res(need); for (int i = 0; i < n; i++) a[i] = a[i]*b[i]; else {
for (int i = 0; i < need; i++) { fft(a,n,-1); ret[i + j] = v;
long long aa = fa[i].x + 0.5; for (int i = 0; i < n; i++) a[i] /= n; ret[i + len + j] = u - v;
long long bb = fb[i].x + 0.5; } }
long long cc = fa[i].y + 0.5; }
res[i] = (aa + ((bb % m) << 15) + ((cc % m) << 30)) % m; // Convert to integers after multiplying: }
} // (int)(a[i].x + 0.5); }
return res; a = ret;
} }
vector<int> square_mod(vector<int> &a, int m) {
}
return multiply_mod(a, a, m, 1); 6.8 Fast Walsh-Hadamard Transform
}
// Fast Walsh-Hadamard Transform - O(nlogn) 6.9 Gaussian Elimination (extended in-
//
// Multiply two polynomials, but instead of xˆa * xˆb = xˆ(a+b) verse)
// we have xˆa * xˆb = xˆ(a XOR b).
//
// Gauss-Jordan Elimination with Scaled Partial Pivoting for(int i = m-1; i >= 0; i--) { //solve triangular system if(abs(A[i][j]) < EPS) continue;
// Extended to Calculate Inverses - O(nˆ3) for(int j = m-1; j > i; j--) for(int k = 0; k < m+1; k++) { //Swap lines
// To get more precision choose m[j][i] as pivot the element A[i][m] = (A[i][m] - mulmod(A[i][j],X[j],p)+p)%p; swap(A[l][k],A[j][k]);
such that m[j][i] / mx[j] is maximized. X[i] = mulmod(A[i][m],inv(A[i][i],p),p); }
// mx[j] is the element with biggest absolute value of row j. } for(int i = j+1; i < n; i++) { //eliminate column
double t=A[i][j]/A[j][j];
ld C[N][M]; // N = 1000, M = 2*N+1; for(int k = j; k < m+1; k++)
int row, col; A[i][k]-=t*A[j][k];
}
bool elim() {
for(int i=0; i<row; ++i) {
6.11 Gaussian Elimination (xor) }
int p = i; // Choose the biggest pivot for(int i = m-1; i >= 0; i--) { //solve triangular system
for(int j=i; j<row; ++j) if (abs(C[j][i]) > abs(C[p][i])) p for(int j = m-1; j > i; j--)
= j; // Gauss Elimination for xor boolean operations A[i][m] -= A[i][j]*X[j];
for(int j=i; j<col; ++j) swap(C[i][j], C[p][j]); // Return false if not possible to solve X[i]=A[i][m]/A[i][i];
// Use boolean matrixes 0-indexed }
if (!C[i][i]) return 0; // n equations, m variables, O(n * m * m)
// eq[i][j] = coefficient of j-th element in i-th equation
ld c = 1/C[i][i]; // Normalize pivot line // r[i] = result of i-th equation
for(int j=0; j<col; ++j) C[i][j] *= c; // Return ans[j] = xj that gives the lexicographically greatest
for(int k=i+1; k<col; ++k) {
solution (if possible)
// (Can be changed to lexicographically least, follow the
6.13 Golden Section Search (Ternary
ld c = -C[k][i]; // Remove pivot variable from other lines
for(int j=0; j<col; ++j) C[k][j] += c*C[i][j];
comments in the code)
// WARNING!! The arrays get changed during de algorithm Search)
}
} bool eq[N][M], r[N], ans[M];
double gss(double l, double r) {
// Make triangular system a diagonal one bool gauss_xor(int n, int m){ double m1 = r-(r-l)/gr, m2 = l+(r-l)/gr;
for(int i=row-1; i>=0; --i) for(int j=i-1; j>=0; --j) { for(int i = 0; i < m; i++) double f1 = f(m1), f2 = f(m2);
ld c = -C[j][i]; ans[i] = true; while(fabs(l-r)>EPS) {
for(int k=i; k<col; ++k) C[j][k] += c*C[i][k]; int lid[N] = {0}; // id + 1 of last element present in i if(f1>f2) l=m1, f1=f2, m1=m2, m2=l+(r-l)/gr, f2=f(m2);
} -th line of final matrix else r=m2, f2=f1, m2=m1, m1=r-(r-l)/gr, f1=f(m1);
int l = 0; }
return 1; for(int i = m - 1; i >= 0; i--){ return l;
} for(int j = l; j < n; j++) }
if(eq[j][i]){ // pivot
// Finds inv, the inverse of matrix m of size n x n. swap(eq[l], eq[j]);
// Returns true if procedure was successful. swap(r[l], r[j]);
bool inverse(int n, ld m[N][N], ld inv[N][N]) { }
for(int i=0; i<n; ++i) for(int j=0; j<n; ++j)
C[i][j] = m[i][j], C[i][j+n] = (i == j);
if(l == n || !eq[l][i])
continue;
6.14 Josephus
lid[l] = i + 1;
row = n, col = 2*n; for(int j = l + 1; j < n; j++){ // eliminate
bool ok = elim(); // UFMG
column /* Josephus Problem - It returns the position to be, in order to
if(!eq[j][i]) not die. O(n)*/
for(int i=0; i<n; ++i) for(int j=0; j<n; ++j) inv[i][j] = C[i continue;
][j+n]; /* With k=2, for instance, the game begins with 2 being killed
for(int k = 0; k <= i; k++) and then n+2, n+4, ... */
return ok; eq[j][k] ˆ= eq[l][k];
} ll josephus(ll n, ll k) {
r[j] ˆ= r[l]; if(n==1) return 1;
} else return (josephus(n-1, k)+k-1)%n+1;
// Solves linear system m*x = y, of size n x n l++;
bool linear_system(int n, ld m[N][N], ld *x, ld *y) { }
}
for(int i = 0; i < n; ++i) for(int j = 0; j < n; ++j) C[i][j] for(int i = n - 1; i >= 0; i--){ // solve triangular
= m[i][j]; /* Another Way to compute the last position to be killed - O(d *
matrix log n) */
for(int j = 0; j < n; ++j) C[j][n] = x[j]; for(int j = 0; j < lid[i + 1]; j++) ll josephus(ll n, ll d) {
r[i] ˆ= (eq[i][j] && ans[j]); ll K = 1;
row = n, col = n+1; // for lexicographically least just delete the
bool ok = elim(); while (K <= (d - 1)*n) K = (d * K + d - 2) / (d - 1);
for bellow return d * n + 1 - K;
for(int j = lid[i + 1]; j + 1 < lid[i]; j++){ }
for(int j=0; j<n; ++j) y[j] = C[j][n]; ans[j] = true;
return ok; r[i] ˆ= eq[i][j];
} }
if(lid[i])
ans[lid[i] - 1] = r[i];
else if(r[i]) 6.15 Matrix Exponentiation
return false;
6.10 Gaussian Elimination (modulo }
return true; /*
prime) } This code assumes you are multiplying two matrices that can
be multiplied: (A nxp * B pxm)
Matrix fexp assumes square matrices
*/
//ll A[N][M+1], X[M]
for(int j=0; j<m; j++) { //collumn to eliminate
6.12 Gaussian Elimination (double) const int MOD = 1e9 + 7;
typedef long long ll;
int l = j; typedef long long type;
for(int i=j+1; i<n; i++) //find nonzero pivot
if(A[i][j]%p) //Gaussian Elimination struct matrix{
l=i; //double A[N][M+1], X[M] //matrix n x m
for(int k = 0; k < m+1; k++) { //Swap lines vector<vector<type>> a;
swap(A[l][k],A[j][k]); // if n < m, there’s no solution int n, m;
} // column m holds the right side of the equation matrix() = default;
for(int i = j+1; i < n; i++) { //eliminate column // X holds the solutions
ll t=mulmod(A[i][j],inv(A[j][j],p),p); matrix(int _n, int _m) : n(_n), m(_m){
for(int k = j; k < m+1; k++) for(int j=0; j<m; j++) { //collumn to eliminate a.resize(n, vector<type>(m));
A[i][k]=(A[i][k]-mulmod(t,A[j][k],p)+p)%p; int l = j; }
} for(int i=j+1; i<n; i++) //find largest pivot
} if(abs(A[i][j])>abs(A[l][j])) matrix operator *(matrix other){
l=i; matrix result(this->n, other.m);
for(int i = 0; i < result.n; i++){ for(ll i = 2; i < N; i++) if(!sieve[i]){ while(b){
for(int j = 0; j < result.m; j++){ for(ll j = i; j < N; j += i) sieve[j] = i, mob[j] *= -1; if(b & 1) ans = addmod(ans, a, m);
for(int k = 0; k < this->m; k++){ for(ll j = i*i; j < N; j += i*i) mob[j] = 0; a = addmod(a, a, m);
result.a[i][j] = (result.a[i][j] + a[i][k] * } b >>= 1;
other.a[k][j]); } }
//result.a[i][j] = (result.a[i][j] + (a[i][k return ans;
] * other.a[k][j]) % MOD) % MOD; /* }
} //Calculate Mobius for 1 integer
} //O(sqrt(n)) ll fexp(ll a, ll b, ll n){
} int mobius(int n){ ll r = 1;
return result; if(n == 1) return 1; while(b){
} int p = 0; if(b & 1) r = mulmod(r, a, n);
}; for(int i = 2; i*i <= n; i++) a = mulmod(a, a, n);
if(n%i == 0){ b >>= 1;
matrix identity(int n){ n /= i; }
matrix id(n, n); p++; return r;
for(int i = 0; i < n; i++) id.a[i][i] = 1; if(n%i == 0) return 0; }
return id; }
} if(n > 1) p++; bool miller(ll a, ll n){
return p&1 ? -1 : 1; if (a >= n) return true;
matrix fexp(matrix b, ll e){ } ll s = 0, d = n - 1;
matrix ans = identity(b.n); */ while(d % 2 == 0) d >>= 1, s++;
while(e){ ll x = fexp(a, d, n);
if(e & 1) ans = (ans * b); if (x == 1 || x == n - 1) return true;
b = b * b; for (int r = 0; r < s; r++, x = mulmod(x,x,n)){
e >>= 1; if (x == 1) return false;
}
return ans;
6.18 Number Theoretic Transform }
if (x == n - 1) return true;
} return false;
// Number Theoretic Transform - O(nlogn) }

// if long long is not necessary, use int instead to improve bool isprime(ll n){
if(n == 1) return false;
6.16 Mobius Inversion performance
const int mod = 20*(1<<23)+1; int base[] = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31,
37};
const int root = 3;
for (int i = 0; i < 12; ++i) if (!miller(base[i], n))
// multiplicative function calculator ll w[N]; return false;
// euler_phi and mobius are multiplicative return true;
// if another f[N] needed just remove comments // a: vector containing polynomial }
// O(N) // n: power of two greater or equal product size
void ntt(ll* a, int n, bool inv) { ll pollard(ll n){
bool p[N]; for (int i=0, j=0; i<n; i++) { ll x, y, d, c = 1;
vector<ll> primes; if (i>j) swap(a[i], a[j]); if (n % 2 == 0) return 2;
ll g[N]; for (int l=n/2; (jˆ=l) < l; l>>=1); while(true){
// ll f[N]; } y = x = 2;
while(true){
void mfc(){ // TODO: Rewrite this loop using FFT version x = addmod(mulmod(x,x,n), c, n);
// if g(1) != 1 than it’s not multiplicative ll k, t, nrev; y = addmod(mulmod(y,y,n), c, n);
g[1] = 1; w[0] = 1; y = addmod(mulmod(y,y,n), c, n);
// f[1] = 1; k = exp(root, (mod-1) / n, mod); if (x == y) break;
primes.clear(); for (int i=1;i<=n;i++) w[i] = w[i-1] * k % mod; d = __gcd(abs(x-y), n);
primes.reserve(N / 10); for(int i=2; i<=n; i<<=1) for(int j=0; j<n; j+=i) for(int l=0; if (d > 1) return d;
for(ll i = 2; i < N; i++){ l<(i/2); l++) { }
if(!p[i]){ int x = j+l, y = j+l+(i/2), z = (n/i)*l; c++;
primes.push_back(i); t = a[y] * w[inv ? (n-z) : z] % mod; }
for(ll j = i; j < N; j *= i){ a[y] = (a[x] - t + mod) % mod; }
g[j] = // g(pˆk) you found a[x] = (a[j+l] + t) % mod;
// f[j] = f(pˆk) you found } vector<ll> factor(ll n){
p[j] = (j != i); if (n == 1 || isprime(n)) return {n};
} nrev = exp(n, mod-2, mod); ll f = pollard(n);
} if (inv) for(int i=0; i<n; ++i) a[i] = a[i] * nrev % mod; vector<ll> l = factor(f), r = factor(n / f);
for(ll j : primes){ } l.insert(l.end(), r.begin(), r.end());
if(i * j >= N || i % j == 0) sort(l.begin(), l.end());
break; // assert n is a power of two greater of equal product size return l;
for(ll k = j; i * k < N; k *= j){ // n = na + nb; while (n&(n-1)) n++; }
g[i * k] = g[i] * g[k]; void multiply(ll* a, ll* b, int n) {
// f[i * k] = f[i] * f[k]; ntt(a, n, 0); //n <
2,047 base = {2};
p[i * k] = true; ntt(b, n, 0); //n <
9,080,191 base = {31, 73};
} for (int i = 0; i < n; i++) a[i] = a[i]*b[i] % mod; //n <
2,152,302,898,747 base = {2, 3, 5, 7, 11};
} ntt(a, n, 1); //n <
318,665,857,834,031,151,167,461 base = {2, 3, 5, 7, 11,
} } 13, 17, 19, 23, 29, 31, 37};
} //n < 3,317,044,064,679,887,385,961,981 base = {2, 3, 5, 7, 11,
13, 17, 19, 23, 29, 31, 37, 41};

6.17 Mobius Function 6.19 Pollard-Rho


// factor(N, v) to get N factorized in vector v
6.20 Pollard-Rho Optimization
// 1 if n == 1 // O(N ˆ (1 / 4)) on average
// 0 if exists x | n%(xˆ2) == 0 // Miller-Rabin - Primarily Test O(|base|*(logn)ˆ2)
// else (-1)ˆk, k = #(p) | p is prime and n%p == 0 ll addmod(ll a, ll b, ll m){ // We recomend you to use pollard-rho.cpp! I’ve never needed
if(a >= m - b) return a + b - m; this code, but here it is.
//Calculate Mobius for all integers using sieve return a + b; // This uses Brent’s algorithm for cycle detection
//O(n*log(log(n))) } //
void mobius() { std::mt19937 rng((int) std::chrono::steady_clock::now().
for(int i = 1; i < N; i++) mob[i] = 1; ll mulmod(ll a, ll b, ll m){ time_since_epoch().count());
ll ans = 0;
ull func(ull x, ull n, ull c) { return (mulmod(x, x, n) + c) % n } // Two-phase simplex algorithm for solving linear programs of
; // f(x) = (xˆ2 + c) % n; } the form
//
ull pollard(ull n) { // maximize cˆT x
// Finds a positive divisor of n // subject to Ax <= b
ull x, y, d, c;
ull pot, lam;
6.21 Prime Factors //
//
x >= 0
if(n % 2 == 0) return 2; // INPUT: A -- an m x n matrix
if(isprime(n)) return n; // Prime factors (up to 9*10ˆ13. For greater see Pollard Rho) // b -- an m-dimensional vector
vi factors; // c -- an n-dimensional vector
while(1) { int ind=0, pf = primes[0]; // x -- a vector where the optimal solution will be
y = x = 2; d = 1; while (pf*pf <= n) { stored
pot = lam = 1; while (n%pf == 0) n /= pf, factors.pb(pf); //
while(1) { pf = primes[++ind]; // OUTPUT: value of the optimal solution (infinity if unbounded
c = rng() % n; } // above, nan if infeasible)
if(c != 0 and (c+2)%n != 0) break; if (n != 1) factors.pb(n); //
} // To use this code, create an LPSolver object with A, b, and c
while(1) { as
if(pot == lam) { // arguments. Then, call Solve(x).
x = y;
pot <<= 1;
lam = 0; 6.22 Primitive Root #include
#include
<iostream>
<iomanip>
} #include <vector>
y = func(y, n, c); #include <cmath>
lam++; // Finds a primitive root modulo p #include <limits>
d = gcd(x >= y ? x-y : y-x, n); // To make it works for any value of p, we must add calculation
if (d > 1) { of phi(p) using namespace std;
if(d == n) break; // n is 1, 2, 4 or pˆk or 2*pˆk (p odd in both cases) typedef long double DOUBLE;
else return d; ll root(ll p) { typedef vector<DOUBLE> VD;
} ll n = p-1; typedef vector<VD> VVD;
} vector<ll> fact; typedef vector<int> VI;
} const DOUBLE EPS = 1e-9;
} for (int i=2; i*i<=n; ++i) if (n % i == 0) {
fact.push_back (i); struct LPSolver {
void fator(ull n, vector<ull> &v) { while (n % i == 0) n /= i; int m, n;
// prime factorization of n, put into a vector v. } VI B, N;
// VVD D;
// for each prime factor of n, it is repeated the amount of if (n > 1) fact.push_back (n);
times LPSolver(const VVD &A, const VD &b, const VD &c) :
// that it divides n for (int res=2; res<=p; ++res) { m(b.size()), n(c.size()), N(n + 1), B(m), D(m + 2, VD(n + 2)
// bool ok = true; ) {
// ex : n == 120, v = {2, 2, 2, 3, 5}; for (size_t i=0; i<fact.size() && ok; ++i) for (int i = 0; i < m; i++) for (int j = 0; j < n; j++) D[i
// ok &= exp(res, (p-1) / fact[i], p) != 1; ][j] = A[i][j];
// if (ok) return res; for (int i = 0; i < m; i++) { B[i] = n + i; D[i][n] = -1; D[
if(isprime(n)) { v.pb(n); return; } } i][n + 1] = b[i]; }
vector<ull> w, t; w.pb(n); t.pb(1); for (int j = 0; j < n; j++) { N[j] = j; D[m][j] = -c[j]; }
return -1; N[n] = -1; D[m + 1][n] = 1;
while(!w.empty()) { } }
ull bck = w.back();
ull div = pollard(bck); void Pivot(int r, int s) {
for (int i = 0; i < m + 2; i++) if (i != r)
if(div == w.back()) { for (int j = 0; j < n + 2; j++) if (j != s)
int amt = 0; 6.23 Sieve of Eratosthenes D[i][j] -= D[r][j] * D[i][s] / D[r][s];
for(int i=0; i < (int) w.size(); i++) { for (int j = 0; j < n + 2; j++) if (j != s) D[r][j] /= D[r][
int cur = 0; s];
while(w[i] % div == 0) { // Sieve of Erasthotenes for (int i = 0; i < m + 2; i++) if (i != r) D[i][s] /= -D[r
w[i] /= div; int p[N]; vi primes; ][s];
cur++; D[r][s] = 1.0 / D[r][s];
} for (ll i = 2; i < N; ++i) if (!p[i]) { swap(B[r], N[s]);
amt += cur * t[i]; for (ll j = i*i; j < N; j+=i) p[j]=1; }
if(w[i] == 1) { primes.pb(i);
swap(w[i], w.back()); } bool Simplex(int phase) {
swap(t[i], t.back()); int x = phase == 1 ? m + 1 : m;
w.pop_back(); while (true) {
t.pop_back(); int s = -1;
} for (int j = 0; j <= n; j++) {
}
while(amt--) v.pb(div);
6.24 Simpson Rule if (phase == 2 && N[j] == -1) continue;
if (s == -1 || D[x][j] < D[x][s] || D[x][j] == D[x][s]
} && N[j] < N[s]) s = j;
else { // Simpson Integration Rule }
int amt = 0; // define the function f if (D[x][s] > -EPS) return true;
while(w.back() % div == 0) { double f(double x) { int r = -1;
w.back() /= div; // ... for (int i = 0; i < m; i++) {
amt++; } if (D[i][s] < EPS) continue;
} if (r == -1 || D[i][n + 1] / D[i][s] < D[r][n + 1] / D[r
amt *= t.back(); double simpson(double a, double b, int n = 1e6) {
if(w.back() == 1) { ][s] ||
double h = (b - a) / n; (D[i][n + 1] / D[i][s]) == (D[r][n + 1] / D[r][s]) &&
w.pop_back(); double s = f(a) + f(b);
t.pop_back(); B[i] < B[r]) r = i;
for (int i = 1; i < n; i += 2) s += 4 * f(a + h*i); }
} for (int i = 2; i < n; i += 2) s += 2 * f(a + h*i); if (r == -1) return false;
return s*h/3; Pivot(r, s);
w.pb(div); }
t.pb(amt); }
} }
}
DOUBLE Solve(VD &x) {
int r = 0;
// the divisors will not be sorted, so you need to sort it
afterwards 6.25 Simplex (Stanford) for (int i = 1; i < m; i++) if (D[i][n + 1] < D[r][n + 1]) r
sort(v.begin(), v.end()); = i;
if (D[r][n + 1] < -EPS) { // Randolph Franklin); returns 1 for strictly interior points, 0 bool operator <(const point &p) const { return (x < p.x) or
Pivot(r, n); for (x == p.x and y < p.y); }
if (!Simplex(1) || D[m + 1][n + 1] < -EPS) return - // strictly exterior points, and 0 or 1 for the remaining points
numeric_limits<DOUBLE>::infinity(); . // 0 => same direction
for (int i = 0; i < m; i++) if (B[i] == -1) { // Note that it is possible to convert this into an *exact* test // 1 => p is on the left
int s = -1; using //-1 => p is on the right
for (int j = 0; j <= n; j++) // integer arithmetic by taking care of the division int dir(point o, point p) {
if (s == -1 || D[i][j] < D[i][s] || D[i][j] == D[i][s] appropriately type x = (*this - o) % (p - o);
&& N[j] < N[s]) s = j; // (making sure to deal with signs properly) and then by writing return ge(x,0) - le(x,0);
Pivot(i, s); exact }
} // tests for checking point on polygon boundary
} bool PointInPolygon(const vector<PT> &p, PT q) { bool on_seg(point p, point q) {
if (!Simplex(2)) return numeric_limits<DOUBLE>::infinity(); bool c = 0; if (this->dir(p, q)) return 0;
x = VD(n); for (int i = 0; i < p.size(); i++){ return ge(x, min(p.x, q.x)) and le(x, max(p.x, q.x)) and
for (int i = 0; i < m; i++) if (B[i] < n) x[B[i]] = D[i][n + int j = (i+1)%p.size(); ge(y, min(p.y, q.y)) and le(y, max(p.y, q.y));
1]; if ((p[i].y <= q.y && q.y < p[j].y || }
return D[m][n + 1]; p[j].y <= q.y && q.y < p[i].y) &&
} q.x < p[i].x + (p[j].x - p[i].x) * (q.y - p[i].y) / ld abs() { return sqrt(x*x + y*y); }
}; (p[j].y - p[i].y)) type abs2() { return x*x + y*y; }
c = !c; ld dist(point q) { return (*this - q).abs(); }
int main() { } type dist2(point q) { return (*this - q).abs2(); }
return c;
const int m = 4; } ld arg() { return atan2l(y, x); }
const int n = 3;
DOUBLE _A[m][n] = { // Project point on vector y
{ 6, -1, 0 }, point project(point y) { return y * ((*this * y) / (y * y));
{ -1, -5, 0 }, }
{ 1, 5, 1 },
{ -1, -5, -1 }
7.2 Basics (Point) // Project point on line generated by points x and y
}; point project(point x, point y) { return x + (*this - x).
DOUBLE _b[m] = { 10, -4, 5, -5 }; project(y-x); }
DOUBLE _c[n] = { 1, -1, 0 }; #include <bits/stdc++.h>
ld dist_line(point x, point y) { return dist(project(x, y));
VVD A(m); using namespace std; }
VD b(_b, _b + m);
VD c(_c, _c + n); #define st first ld dist_seg(point x, point y) {
for (int i = 0; i < m; i++) A[i] = VD(_A[i], _A[i] + n); #define nd second return project(x, y).on_seg(x, y) ? dist_line(x, y) :
#define pb push_back min(dist(x), dist(y));
LPSolver solver(A, b, c); #define cl(x,v) memset((x), (v), sizeof(x)) }
VD x; #define db(x) cerr << #x << " == " << x << endl
DOUBLE value = solver.Solve(x); #define dbs(x) cerr << x << endl point rotate(ld sin, ld cos) { return point(cos*x - sin*y,
#define _ << ", " << sin*x + cos*y); }
cerr << "VALUE: " << value << endl; // VALUE: 1.29032 point rotate(ld a) { return rotate(sin(a), cos(a)); }
cerr << "SOLUTION:"; // SOLUTION: 1.74194 0.451613 1 typedef long long ll;
for (size_t i = 0; i < x.size(); i++) cerr << " " << x[i]; typedef long double ld; // rotate around the argument of vector p
cerr << endl; typedef pair<int,int> pii; point rotate(point p) { return rotate(p.y / p.abs(), p.x / p
return 0; typedef pair<int, pii> piii; .abs()); }
} typedef pair<ll,ll> pll;
typedef pair<ll, pll> plll; };
typedef vector<int> vi;
typedef vector <vi> vii; int direction(point o, point p, point q) { return p.dir(o, q); }

const ld EPS = 1e-9, PI = acos(-1.); point rotate_ccw90(point p) { return point(-p.y,p.x); }


7 Geometry const
const
ll LINF = 0x3f3f3f3f3f3f3f3f;
int INF = 0x3f3f3f3f, MOD = 1e9+7;
point rotate_cw90(point p) { return point(p.y,-p.x); }

const int N = 1e5+5; //for reading purposes avoid using * and % operators, use the
functions below:
type dot(point p, point q) { return p.x*q.x + p.y*q.y; }
7.1 Miscellaneous typedef long double type;
//for big coordinates change to long long type cross(point p, point q) { return p.x*q.y - p.y*q.x; }

bool ge(type x, type y) { return x + EPS > y; } //double area


/* bool le(type x, type y) { return x - EPS < y; } type area_2(point a, point b, point c) { return cross(a,b) +
1) Square (n = 4) is the only regular polygon with integer bool eq(type x, type y) { return ge(x, y) and le(x, y); } cross(b,c) + cross(c,a); }
coordinates int sign(type x) { return ge(x, 0) - le(x, 0); }
//angle between (a1 and b1) vs angle between (a2 and b2)
2) Pick’s theorem: A = i + b/2 - 1 struct point { //1 : bigger
A: area of the polygon type x, y; //-1 : smaller
i: number of interior points //0 : equal
b: number of points on the border point() : x(0), y(0) {} int angle_less(const point& a1, const point& b1, const point& a2
point(type _x, type _y) : x(_x), y(_y) {} , const point& b2) {
3) Conic Rotations point p1(dot( a1, b1), abs(cross( a1, b1)));
Given elipse: Axˆ2 + Bxy + Cyˆ2 + Dx + Ey + F = 0 point operator -() { return point(-x, -y); } point p2(dot( a2, b2), abs(cross( a2, b2)));
Convert it to: Axˆ2 + Bxy + Cyˆ2 + Dx + Ey = 1 (this formula point operator +(point p) { return point(x + p.x, y + p.y); if(cross(p1, p2) < 0) return 1;
suits better for elipse, before doing this verify F = } if(cross(p1, p2) > 0) return -1;
0) point operator -(point p) { return point(x - p.x, y - p.y); return 0;
Final conversion: A(x + D/2A)ˆ2 + C(y + E/2C)ˆ2 = 1 + Dˆ2/4A } }
+ Eˆ2/4C
B != 0 (Rotate): point operator *(type k) { return point(x*k, y*k); } ostream &operator<<(ostream &os, const point &p) {
theta = atan2(b, c-a)/2.0; point operator /(type k) { return point(x/k, y/k); } os << "(" << p.x << "," << p.y << ")";
A’ = (a + c + b/sin(2.0*theta))/2.0; // A return os;
C’ = (a + c - b/sin(2.0*theta))/2.0; // C //inner product }
D’ = d*sin(theta) + e*cos(theta); // D type operator *(point p) { return x*p.x + y*p.y; }
E’ = d*cos(theta) - e*sin(theta); // E //cross product
Remember to rotate again after! type operator %(point p) { return x*p.y - y*p.x; }
*/
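// Not in the original notebook: Pick's theorem in use (the helper name interior_points is ours).
// 2A comes from the shoelace sum, b from gcds along the edges, then i = A - b/2 + 1.
ll interior_points(vector<pair<ll,ll>> &p) { // interior lattice points of a simple lattice polygon
    ll n = p.size(), A2 = 0, b = 0;
    for (ll i = 0; i < n; i++) {
        ll j = (i + 1) % n;
        A2 += p[i].first * p[j].second - p[j].first * p[i].second;
        b  += __gcd(llabs(p[i].first - p[j].first), llabs(p[i].second - p[j].second));
    }
    A2 = llabs(A2);          // twice the area
    return (A2 - b + 2) / 2; // i = A - b/2 + 1
}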
bool operator ==(const point &p) const{ return x == p.x and 7.3 Radial Sort
y == p.y; }
// determine if point is in a possibly non-convex polygon (by bool operator !=(const point &p) const{ return x != p.x or
William y != p.y; } #include "basics.cpp"
point origin; if (C.contains(p[i])) continue; pair<point, point> tan1 = C.getTangentPoint(A);
C = circle(p[i], 0.0); pair<point, point> tan2 = C.getTangentPoint(B);
/* for(int j = 0; j < i; j++) { ans = 1e+30;
below < above if (C.contains(p[j])) continue; ans = min(ans, getd2(tan1.first, tan2.first));
order: [pi, 2 * pi) C = circle((p[j] + p[i])*0.5, 0.5*p[j].dist(p[i])); ans = min(ans, getd2(tan1.first, tan2.second));
*/ for(int k = 0; k < j; k++) { ans = min(ans, getd2(tan1.second, tan2.first));
if (C.contains(p[k])) continue; ans = min(ans, getd2(tan1.second, tan2.second));
int above(point p){ C = circumcircle(p[j], p[i], p[k]); }
if(p.y == origin.y) return p.x > origin.x; } printf("%.18f\n", ans);
return p.y > origin.y; } return 0;
} } }*/
return C;
bool cmp(point p, point q){ }
int tmp = above(q) - above(p);
if(tmp) return tmp > 0; // compute intersection of line through points a and b with
return p.dir(origin,q) > 0;
//Be Careful: p.dir(origin,q) == 0
// circle centered at c with radius r > 0
vector<point> circle_line_intersection(point a, point b, point c
7.5 Closest Pair of Points
} , ld r) {
vector<point> ret; #include "basics.cpp"
b = b - a; //DIVIDE AND CONQUER METHOD
a = a - c; //Warning: include variable id into the struct point
ld A = dot(b, b);
7.4 Circle ld B = dot(a, b);
ld C = dot(a, a) - r*r;
struct cmp_y {
bool operator()(const point & a, const point & b) const {
ld D = B*B - A*C; return a.y < b.y;
#include "basics.cpp" if (D < -EPS) return ret; }
#include "lines.cpp" ret.push_back(c + a + b*(sqrt(D + EPS) - B)/A); };
if (D > EPS)
struct circle { ret.push_back(c + a + b*(-B - sqrt(D))/A); ld min_dist = LINF;
point c; return ret; pair<int, int> best_pair;
ld r; } vector<point> pts, stripe;
circle() { c = point(); r = 0; } int n;
circle(point _c, ld _r) : c(_c), r(_r) {} vector<point> circle_circle_intersection(point a, point b, ld r,
ld area() { return acos(-1.0)*r*r; } ld R) { void upd_ans(const point & a, const point & b) {
ld chord(ld rad) { return 2*r*sin(rad/2.0); } vector<point> ret; ld dist = sqrt((a.x - b.x)*(a.x - b.x) + (a.y - b.y)*(a.y -
ld sector(ld rad) { return 0.5*rad*area()/acos(-1.0); } ld d = sqrt(a.dist2(b)); b.y));
bool intersects(circle other) { if (d > r + R || d + min(r, R) < max(r, R)) return ret; if (dist < min_dist) {
return le(c.dist(other.c), r + other.r); ld x = (d*d - R*R + r*r)/(2*d); min_dist = dist;
} ld y = sqrt(r*r - x*x); // best_pair = {a.id, b.id};
bool contains(point p) { return le(c.dist(p), r); } point v = (b - a)/d; }
pair<point, point> getTangentPoint(point p) { ret.push_back(a + v*x + rotate_ccw90(v)*y); }
ld d1 = c.dist(p), theta = asin(r/d1); if (y > 0)
point p1 = (c - p).rotate(-theta); ret.push_back(a + v*x - rotate_ccw90(v)*y); void closest_pair(int l, int r) {
point p2 = (c - p).rotate(theta); return ret; if (r - l <= 3) {
p1 = p1*(sqrt(d1*d1 - r*r)/d1) + p; } for (int i = l; i < r; ++i) {
p2 = p2*(sqrt(d1*d1 - r*r)/d1) + p; for (int j = i + 1; j < r; ++j) {
return make_pair(p1,p2); //GREAT CIRCLE upd_ans(pts[i], pts[j]);
} }
}; double gcTheta(double pLat, double pLong, double qLat, double }
qLong) { sort(pts.begin() + l, pts.begin() + r, cmp_y());
circle circumcircle(point a, point b, point c) { pLat *= acos(-1.0) / 180.0; pLong *= acos(-1.0) / 180.0; return;
circle ans; // convert degree to radian }
point u = point((b - a).y, -(b - a).x); qLat *= acos(-1.0) / 180.0; qLong *= acos(-1.0) / 180.0;
point v = point((c - a).y, -(c - a).x); return acos(cos(pLat)*cos(pLong)*cos(qLat)*cos(qLong) + int m = (l + r) >> 1;
point n = (c - b)*0.5; cos(pLat)*sin(pLong)*cos(qLat)*sin(qLong) + type midx = pts[m].x;
ld t = cross(u,n)/cross(v,u); sin(pLat)*sin(qLat)); closest_pair(l, m);
ans.c = ((a + c)*0.5) + (v*t); } closest_pair(m, r);
ans.r = ans.c.dist(a);
return ans; double gcDistance(double pLat, double pLong, double qLat, double merge(pts.begin() + l, pts.begin() + m, pts.begin() + m, pts
} qLong, double radius) { .begin() + r, stripe.begin(), cmp_y());
return radius*gcTheta(pLat, pLong, qLat, qLong); copy(stripe.begin(), stripe.begin() + r - l, pts.begin() + l
point compute_circle_center(point a, point b, point c) { } );
//circumcenter
b = (a + b)/2; int stripe_sz = 0;
c = (a + c)/2; /* for (int i = l; i < r; ++i) {
return compute_line_intersection(b, b + rotate_cw90(a - b), * Codeforces 101707B if (abs(pts[i].x - midx) < min_dist) {
c, c + rotate_cw90(a - c)); */ for (int j = stripe_sz - 1; j >= 0 && pts[i].y -
} /* stripe[j].y < min_dist; --j)
point A, B; upd_ans(pts[i], stripe[j]);
int inside_circle(point p, circle c) { circle C; stripe[stripe_sz++] = pts[i];
if (fabs(p.dist(c.c) - c.r)<EPS) return 1; }
else if (p.dist(c.c) < c.r) return 0; double getd2(point a, point b) { }
else return 2; double h = dist(a, b); }
} //0 = inside/1 = border/2 = outside double r = C.r;
double alpha = asin(h/(2*r)); int main(){
circle incircle( point p1, point p2, point p3 ) { while (alpha < 0) alpha += 2*acos(-1.0); //read and save in vector pts
ld m1 = p2.dist(p3); return dist(a, A) + dist(b, B) + r*2*min(alpha, 2*acos min_dist = LINF;
ld m2 = p1.dist(p3); (-1.0) - alpha); stripe.resize(n);
ld m3 = p1.dist(p2); } sort(pts.begin(), pts.end());
point c = (p1*m1 + p2*m2 + p3*m3)*(1/(m1 + m2 + m3)); closest_pair(0, n);
ld s = 0.5*(m1 + m2 + m3); int main() { }
ld r = sqrt(s*(s - m1)*(s - m2)*(s - m3))/s; scanf("%lf %lf", &A.x, &A.y);
return circle(c, r); scanf("%lf %lf", &B.x, &B.y);
} scanf("%lf %lf %lf", &C.c.x, &C.c.y, &C.r); //LINE SWEEP
double ans; int n; //amount of points
circle minimum_circle(vector<point> p) { if (distToLineSegment(C.c, A, B) >= C.r) { point pnt[N];
random_shuffle(p.begin(), p.end()); ans = dist(A, B);
}
circle C = circle(p[0], 0.0); struct cmp_y {
for(int i = 0; i < (int)p.size(); i++) { else { bool operator()(const point & a, const point & b) const {
if(a.y == b.y) return a.x < b.x; if(q.size()>1) return a + (b - a)*r;
return a.y < b.y; p.back()=pos(q.back(),q[q.size()-2]); }
} }
}; while(q.size()>1 && !left(p.back(),q.front())) point project_point_segment(point c, point a, point b) {
q.pop_back(), p.pop_back(); ld r = dot(b - a, b - a);
ld closest_pair() { if(q.size() <= 2) return polygon(); //Nao forma poligono ( if (fabs(r) < EPS) return a;
sort(pnt, pnt+n); pode nao ter intersecao) r = dot(c - a, b - a)/r;
ld best = numeric_limits<double>::infinity(); if(!cmp(q.back().v % q.front().v)) return polygon(); //Lados if (le(r, 0)) return a;
set<point, cmp_y> box; paralelos -> area infinita if (ge(r, 1)) return b;
point ult = pos(q.back(),q.front()); return a + (b - a)*r;
box.insert(pnt[0]); }
int l = 0; bool ok = 1;
for(int i=0; i < (int) line.size(); i++) ld distance_point_line(point c, point a, point b) {
for (int i = 1; i < n; i++){ if(!left_equal(ult,line[i])){ ok=0; break; } return c.dist2(project_point_line(c, a, b));
while(l < i and pnt[i].x - pnt[l].x > best) }
box.erase(pnt[l++]); if(ok) p.push_back(ult); //Se formar um poligono fechado
for(auto it = box.lower_bound({0, pnt[i].y - best}); it != polygon ret; ld distance_point_ray(point c, point a, point b) {
box.end() and pnt[i].y + best >= it->y; it++) for(int i=0; i < (int) p.size(); i++) return c.dist2(project_point_ray(c, a, b));
best = min(best, hypot(pnt[i].x - it->x, pnt[i].y - it->y) ret.pb(p[i]); }
); return ret;
box.insert(pnt[i]); } ld distance_point_segment(point c, point a, point b) {
} }; return c.dist2(project_point_segment(c, a, b));
return best; }
} //
// Detect whether there is a non-empty intersection in a set of //not tested
halfplanes ld distance_point_plane(ld x, ld y, ld z,
// Complexity O(n) ld a, ld b, ld c, ld d)
// {
7.6 Half Plane Intersection // By Agnez
// }
return fabs(a*x + b*y + c*z - d)/sqrt(a*a + b*b + c*c);
pair<char, point> half_inter(vector<pair<point,point> > &vet){
// Intersection of halfplanes - O(nlogn) random_shuffle(all(vet)); bool lines_parallel(point a, point b, point c, point d) {
// Points are given in counterclockwise order point p; return fabs(cross(b - a, d - c)) < EPS;
// rep(i,0,sz(vet)) if(ccw(vet[i].x,vet[i].y,p) != 1){ }
// by Agnez point dir = (vet[i].y-vet[i].x)/abs(vet[i].y-vet
[i].x); bool lines_collinear(point a, point b, point c, point d) {
typedef vector<point> polygon; point l = vet[i].x - dir*1e15; return lines_parallel(a, b, c, d)
point r = vet[i].x + dir*1e15; && fabs(cross(a-b, a-c)) < EPS
int cmp(ld x, ld y = 0, ld tol = EPS) { if(r<l) swap(l,r); && fabs(cross(c-d, c-a)) < EPS;
return (x <= y + tol) ? (x + tol < y) ? -1 : 0 : 1; } rep(j,0,i){ }
if(ccw(point(),vet[i].x-vet[i].y,vet[j].
bool comp(point a, point b){ x-vet[j].y)==0){ point lines_intersect(point p, point q, point a, point b) {
if(ccw(vet[j].x, vet[j].y, p) == point r = q - p, s = b - a, c(p%q, a%b);
if((cmp(a.x) > 0 || (cmp(a.x) == 0 && cmp(a.y) > 0) ) && ( 1) if (eq(r%s,0)) return point(LINF, LINF);
cmp(b.x) < 0 || (cmp(b.x) == 0 && cmp(b.y) < 0))) continue; return point(point(r.x, s.x) % c, point(r.y, s.y) % c) / (r%
return 1; return mp(0,point()); s);
if((cmp(b.x) > 0 || (cmp(b.x) == 0 && cmp(b.y) > 0) ) && ( } }
cmp(a.x) < 0 || (cmp(a.x) == 0 && cmp(a.y) < 0))) if(ccw(vet[j].x, vet[j].y, l) != 1)
return 0; l = max(l, line_intersect(vet[i //be careful: test line_line_intersection before using this
ll R = a%b; ].x,vet[i].y,vet[j].x,vet[j function
if(R) return R > 0; ].y)); point compute_line_intersection(point a, point b, point c, point
return false; if(ccw(vet[j].x, vet[j].y, r) != 1) d) {
} r = min(r, line_intersect(vet[i b = b - a; d = c - d; c = c - a;
].x,vet[i].y,vet[j].x,vet[j assert(dot(b, b) > EPS && dot(d, d) > EPS);
namespace halfplane{ ].y)); return a + b*cross(c, d)/cross(b, d);
struct L{ if(!(l<r)) return mp(0,point()); }
point p,v; }
L(){} p=r; bool line_line_intersect(point a, point b, point c, point d) {
L(point P, point V):p(P),v(V){} } if(!lines_parallel(a, b, c, d)) return true;
bool operator<(const L &b)const{ return comp(v, b.v); } return mp(1, p); if(lines_collinear(a, b, c, d)) return true;
}; } return false;
vector<L> line; }
void addL(point a, point b){line.pb(L(a,b-a));}
bool left(point &p, L &l){ return cmp(l.v % (p-l.p))>0; } //rays in direction a -> b, c -> d
bool left_equal(point &p, L &l){ return cmp(l.v % (p-l.p))>=0; bool ray_ray_intersect(point a, point b, point c, point d){
} 7.7 Lines if (a.dist2(c) < EPS || a.dist2(d) < EPS ||
void init(){ line.clear(); } b.dist2(c) < EPS || b.dist2(d) < EPS) return true;
if (lines_collinear(a, b, c, d)) {
point pos(L &a, L &b){ #include "basics.cpp" if(ge(dot(b - a, d - c), 0)) return true;
point x=a.p-b.p; //functions tested at: https://codeforces.com/group/3qadGzUdR4/ if(ge(dot(a - c, d - c), 0)) return true;
ld t = (b.v % x)/(a.v % b.v); contest/101706/problem/B return false;
return a.p+a.v*t; }
} //WARNING: all distance functions are not realizing sqrt if(!line_line_intersect(a, b, c, d)) return false;
operation point inters = lines_intersect(a, b, c, d);
polygon intersect(){ //Suggestion: for line intersections check if(ge(dot(inters - c, d - c), 0) && ge(dot(inters - a, b - a
sort(line.begin(), line.end()); line_line_intersection and then use ), 0)) return true;
deque<L> q; //linhas da intersecao compute_line_intersection return false;
deque<point> p; //pontos de intersecao entre elas }
q.push_back(line[0]); point project_point_line(point c, point a, point b) {
for(int i=1; i < (int) line.size(); i++){ ld r = dot(b - a,b - a); bool segment_segment_intersect(point a, point b, point c, point
while(q.size()>1 && !left(p.back(), line[i])) if (fabs(r) < EPS) return a; d) {
q.pop_back(), p.pop_back(); return a + (b - a)*dot(c - a, b - a)/dot(b - a, b - a); if (a.dist2(c) < EPS || a.dist2(d) < EPS ||
while(q.size()>1 && !left(p.front(), line[i])) }
q.pop_front(), p.pop_front(); b.dist2(c) < EPS || b.dist2(d) < EPS) return true;
if(!cmp(q.back().v % line[i].v) && !left(q.back().p,line[i point project_point_ray(point c, point a, point b) { int d1, d2, d3, d4;
])) ld r = dot(b - a, b - a); d1 = direction(a, b, c);
q.back() = line[i]; if (fabs(r) < EPS) return a; d2 = direction(a, b, d);
d3 = direction(c, d, a);
else if(cmp(q.back().v % line[i].v)) r = dot(c - a, b - a) / r;
q.push_back(line[i]), p.push_back(point()); if (le(r, 0)) return a; d4 = direction(c, d, b);
if (d1*d2 < 0 and d3*d4 < 0) return 1;
return a.on_seg(c, d) or b.on_seg(c, d) or //ITA MINKOWSKI
}
c.on_seg(a, b) or d.on_seg(a, b);
typedef vector<point> polygon;
7.9 Nearest Neighbour
bool segment_line_intersect(point a, point b, point c, point d){ /*
if(!line_line_intersect(a, b, c, d)) return false; // Closest Neighbor - O(n * log(n))
* Minkowski sum const ll N = 1e6+3, INF = 1e18;
point inters = lines_intersect(a, b, c, d); Distance between two polygons P and Q:
if(inters.on_seg(a, b)) return true; Do Minkowski(P, Q) ll n, cn[N], x[N], y[N]; // number of points, closes neighbor, x
return false; Ans = min(ans, dist((0, 0), edge)) coordinates, y coordinates
} */ ll sqr(ll i) { return i*i; }
//ray in direction c -> d polygon minkowski(polygon & A, polygon & B) { ll dist(int i, int j) { return sqr(x[i]-x[j]) + sqr(y[i]-y[j]);
bool segment_ray_intersect(point a, point b, point c, point d){ polygon P; point v1, v2; }
sort_lex_hull(A), sort_lex_hull(B); ll dist(int i) { return i == cn[i] ? INF : dist(i, cn[i]); }
if (a.dist2(c) < EPS || a.dist2(d) < EPS ||
b.dist2(c) < EPS || b.dist2(d) < EPS) return true; int n1 = A.size(), n2 = B.size();
P.push_back(A[0] + B[0]); bool cpx(int i, int j) { return x[i] < x[j] or (x[i] == x[j] and
if (lines_collinear(a, b, c, d)) { y[i] < y[j]); }
if(c.on_seg(a, b)) return true; for(int i = 0, j = 0; i < n1 || j < n2;) { bool cpy(int i, int j) { return y[i] < y[j] or (y[i] == y[j] and
if(ge(dot(d - c, a - c), 0)) return true; v1 = A[(i + 1)%n1] - A[i%n1]; x[i] < x[j]); }
return false; v2 = B[(j + 1)%n2] - B[j%n2];
} if (j == n2 || cross(v1, v2) > EPS) { ll calc(int i, ll x0) {
if(!line_line_intersect(a, b, c, d)) return false; P.push_back(P.back() + v1); i++; ll dlt = dist(i) - sqr(x[i]-x0);
point inters = lines_intersect(a, b, c, d); } return dlt >= 0 ? ceil(sqrt(dlt)) : -1;
if(!inters.on_seg(a, b)) return false; else if (i == n1 || cross(v1, v2) < -EPS) { }
if(ge(dot(inters - c, d - c), 0)) return true; P.push_back(P.back() + v2); j++;
return false; } void updt(int i, int j, ll x0, ll &dlt) {
} else { if (dist(i) > dist(i, j)) cn[i] = j, dlt = calc(i, x0);
P.push_back(P.back() + (v1 + v2)); }
//ray in direction a -> b i++; j++;
bool ray_line_intersect(point a, point b, point c, point d){ } void cmp(vi &u, vi &v, ll x0) {
if (a.dist2(c) < EPS || a.dist2(d) < EPS || } for(int a=0, b=0; a<u.size(); ++a) {
b.dist2(c) < EPS || b.dist2(d) < EPS) return true; P.pop_back(); ll i = u[a], dlt = calc(i, x0);
if (!line_line_intersect(a, b, c, d)) return false; sort_lex_hull(P); while(b < v.size() and y[i] > y[v[b]]) b++;
point inters = lines_intersect(a, b, c, d); return P; for(int j = b-1; j >= 0 and y[i] - dlt <= y[v[j]]; j--)
if(!line_line_intersect(a, b, c, d)) return false; } updt(i, v[j], x0, dlt);
if(ge(dot(inters - a, b - a), 0)) return true; for(int j = b; j < v.size() and y[i] + dlt >= y[v[j]]; j++)
return false; // Given two polygons, returns the minkowski sum of them. updt(i, v[j], x0, dlt);
} // }
// By Agnez }
ld distance_segment_line(point a, point b, point c, point d){ bool comp(point a, point b){
if(segment_line_intersect(a, b, c, d)) return 0; if((a.x > 0 || (a.x==0 && a.y>0) ) && (b.x < 0 || (b.x void slv(vi &ix, vi &iy) {
return min(distance_point_line(a, c, d), distance_point_line ==0 && b.y<0))) return 1; int n = ix.size();
(b, c, d)); if((b.x > 0 || (b.x==0 && b.y>0) ) && (a.x < 0 || (a.x if (n == 1) { cn[ix[0]] = ix[0]; return; }
} ==0 && a.y<0))) return 0;
ll R = a%b; int m = ix[n/2];
ld distance_segment_ray(point a, point b, point c, point d){ if(R) return R > 0;
if(segment_ray_intersect(a, b, c, d)) return 0; return a*a < b*b; vi ix1, ix2, iy1, iy2;
ld min1 = distance_point_segment(c, a, b); } for(int i=0; i<n; ++i) {
ld min2 = min(distance_point_ray(a, c, d), if (cpx(ix[i], m)) ix1.push_back(ix[i]);
distance_point_ray(b, c, d)); polygon poly_sum(polygon a, polygon b){ else ix2.push_back(ix[i]);
return min(min1, min2); //Lembre de nao ter pontos repetidos
} // passar poligonos ordenados if (cpx(iy[i], m)) iy1.push_back(iy[i]);
// se nao tiver pontos colineares, pode usar: else iy2.push_back(iy[i]);
ld distance_segment_segment(point a, point b, point c, point d){ //pivot = *min_element(all(a)); }
if(segment_segment_intersect(a, b, c, d)) return 0; //sort(all(a),radialcomp);
ld min1 = min(distance_point_segment(c, a, b), //a.resize(unique(all(a))-a.begin()); slv(ix1, iy1);
distance_point_segment(d, a, b)); //pivot = *min_element(all(b)); slv(ix2, iy2);
ld min2 = min(distance_point_segment(a, c, d), //sort(all(b),radialcomp);
distance_point_segment(b, c, d)); //b.resize(unique(all(b))-b.begin()); cmp(iy1, iy2, x[m]);
return min(min1, min2); cmp(iy2, iy1, x[m]);
if(!sz(a) || !sz(b)) return polygon(0); }
} if(min(sz(a),sz(b)) < 2){
polygon ret(0); void slv(int n) {
ld distance_ray_line(point a, point b, point c, point d){ rep(i,0,sz(a)) rep(j,0,sz(b)) ret.pb(a[i]+b[j]);
if(ray_line_intersect(a, b, c, d)) return 0; vi ix, iy;
return ret; ix.resize(n);
ld min1 = distance_point_line(a, c, d); }
return min1; iy.resize(n);
polygon ret; for(int i=0; i<n; ++i) ix[i] = iy[i] = i;
} ret.pb(a[0]+b[0]); sort(ix.begin(), ix.end(), cpx);
int pa = 0, pb = 0; sort(iy.begin(), iy.end(), cpy);
ld distance_ray_ray(point a, point b, point c, point d){
if(ray_ray_intersect(a, b, c, d)) return 0; while(pa < sz(a) || pb < sz(b)){ slv(ix, iy);
ld min1 = min(distance_point_ray(c, a, b), point p = ret.back(); }
distance_point_ray(a, c, d)); if(pb == sz(b) || (pa < sz(a) && comp((a[(pa+1)%
return min1; sz(a)]-a[pa]),(b[(pb+1)%sz(b)]-b[pb]))))
} p = p + (a[(pa+1)%sz(a)]-a[pa]), pa++;
else p = p + (b[(pb+1)%sz(b)]-b[pb]), pb++;
ld distance_line_line(point a, point b, point c, point d){
if(line_line_intersect(a, b, c, d)) return 0;
//descomentar para tirar pontos colineares (o
poligono nao pode ser degenerado)
7.10 Polygons
return distance_point_line(a, c, d); // while(sz(ret) > 1 && !ccw(ret[sz(ret)-2], ret[sz
} (ret)-1], p))
// ret.pop_back(); #include "basics.cpp"
ret.pb(p); #include "lines.cpp"
}
assert(ret.back() == ret[0]); //Monotone chain O(nlog(n))
#define REMOVE_REDUNDANT
7.8 Minkowski Sum ret.pop_back();
return ret; #ifdef REMOVE_REDUNDANT
} bool between(const point &a, const point &b, const point &c) {
return (fabs(area_2(a,b,c)) < EPS && (a.x-b.x)*(c.x-b.x) <=
#include "basics.cpp" 0 && (a.y-b.y)*(c.y-b.y) <= 0);
#include "polygons.cpp" }
#endif 0;
for(int i=0; i<n; ++i) { if (a.ini.x < b.ini.x) return direction(a.ini, a.fim, b.ini)
//new change: <= 0 / >= 0 became < 0 / > 0 (yet to be tested) point p = v[i] - m, q = v[(i+1)%n] - m; < 0;
type x = p % q; return direction(a.ini, b.fim, b.ini) < 0;
void convex_hull(vector<point> &pts) { c = c + (p + q) * x; }
sort(pts.begin(), pts.end()); da += x;
pts.erase(unique(pts.begin(), pts.end()), pts.end()); } bool is_simple_polygon(const vector<point> &pts){
vector<point> up, dn; vector <pair<point, pii>> eve;
for (int i = 0; i < pts.size(); i++) { return c / (3 * da); vector <pair<edge, int>> edgs;
while (up.size() > 1 && area_2(up[up.size()-2], up.back } set <pair<edge, int>> sweep;
(), pts[i]) > 0) up.pop_back(); int n = (int)pts.size();
while (dn.size() > 1 && area_2(dn[dn.size()-2], dn.back //O(nˆ2) for(int i = 0; i < n; i++){
(), pts[i]) < 0) dn.pop_back(); bool is_simple(const vector<point> &p) { point l = min(pts[i], pts[(i + 1)%n]);
up.push_back(pts[i]); for (int i = 0; i < p.size(); i++) { point r = max(pts[i], pts[(i + 1)%n]);
dn.push_back(pts[i]); for (int k = i+1; k < p.size(); k++) { eve.pb({l, {0, i}});
} int j = (i+1) % p.size(); eve.pb({r, {1, i}});
pts = dn; int l = (k+1) % p.size(); edgs.pb(make_pair(edge(l, r), i));
for (int i = (int) up.size() - 2; i >= 1; i--) pts.push_back if (i == l || j == k) continue; }
(up[i]); if (segment_segment_intersect(p[i], p[j], p[k], p[l sort(eve.begin(), eve.end());
])) for(auto e : eve){
#ifdef REMOVE_REDUNDANT return false; if(!e.nd.st){
if (pts.size() <= 2) return; } auto cur = sweep.lower_bound(edgs[e.nd.nd]);
dn.clear(); } pair<edge, int> above, below;
dn.push_back(pts[0]); return true; if(cur != sweep.end()){
dn.push_back(pts[1]); } below = *cur;
for (int i = 2; i < pts.size(); i++) { if(!adj(below.nd, e.nd.nd, n) and
if (between(dn[dn.size()-2], dn[dn.size()-1], pts[i])) bool point_in_triangle(point a, point b, point c, point cur){ segment_segment_intersect(pts[below.nd],
dn.pop_back(); ll s1 = abs(cross(b - a, c - a)); pts[(below.nd + 1)%n], pts[e.nd.nd], pts[(e
dn.push_back(pts[i]); ll s2 = abs(cross(a - cur, b - cur)) + abs(cross(b - cur, c .nd.nd + 1)%n]))
} - cur)) + abs(cross(c - cur, a - cur)); return false;
if (dn.size() >= 3 && between(dn.back(), dn[0], dn[1])) { return s1 == s2; }
dn[0] = dn.back(); } if(cur != sweep.begin()){
dn.pop_back(); above = *(--cur);
} void sort_lex_hull(vector<point> &hull){ if(!adj(above.nd, e.nd.nd, n) and
pts = dn; if(compute_signed_area(hull) < 0) reverse(hull.begin(), hull segment_segment_intersect(pts[above.nd],
#endif .end()); pts[(above.nd + 1)%n], pts[e.nd.nd], pts[(e
} int n = hull.size(); .nd.nd + 1)%n]))
return false;
//avoid using long double for comparisons, change type and //Sort hull by x }
remove division by 2 int pos = 0; sweep.insert(edgs[e.nd.nd]);
type compute_signed_area(const vector<point> &p) { for(int i = 1; i < n; i++) if(hull[i] < hull[pos]) pos = i; }
type area = 0; rotate(hull.begin(), hull.begin() + pos, hull.end()); else{
for(int i = 0; i < p.size(); i++) { } auto below = sweep.upper_bound(edgs[e.nd.nd]);
int j = (i+1) % p.size(); auto cur = below, above = --cur;
area += p[i].x*p[j].y - p[j].x*p[i].y; //determine if point is inside or on the boundary of a polygon ( if(below != sweep.end() and above != sweep.begin()){
} O(logn)) --above;
return area; bool point_in_convex_polygon(vector<point> &hull, point cur){ if(!adj(below->nd, above->nd, n) and
} int n = hull.size(); segment_segment_intersect(pts[below->nd],
//Corner cases: point outside most left and most right pts[(below->nd + 1)%n], pts[above->nd], pts
ld compute_area(const vector<point> &p) { wedges [(above->nd + 1)%n]))
return fabs(compute_signed_area(p) / 2.0); if(cur.dir(hull[0], hull[1]) != 0 && cur.dir(hull[0], hull return false;
} [1]) != hull[n - 1].dir(hull[0], hull[1])) }
return false; sweep.erase(cur);
ld compute_perimeter(vector<point> &p) { if(cur.dir(hull[0], hull[n - 1]) != 0 && cur.dir(hull[0], }
ld per = 0; hull[n - 1]) != hull[1].dir(hull[0], hull[n - 1])) }
for(int i = 0; i < p.size(); i++) { return false; return true;
int j = (i+1) % p.size(); }
per += p[i].dist(p[j]); //Binary search to find which wedges it is between
} int l = 1, r = n - 1; //code copied from https://github.com/tfg50/Competitive-
return per; while(r - l > 1){ Programming/blob/master/Biblioteca/Math/2D%20Geometry/
} int mid = (l + r)/2; ConvexHull.cpp
if(cur.dir(hull[0], hull[mid]) <= 0)l = mid; int maximize_scalar_product(vector<point> &hull, point vec) {
//not tested else r = mid; // this code assumes that there are no 3 colinear points
// TODO: test this code. This code has not been tested, please } int ans = 0;
do it before proper use. return point_in_triangle(hull[l], hull[l + 1], hull[0], cur) int n = hull.size();
// http://codeforces.com/problemset/problem/975/E is a good ; if(n < 20) {
problem for testing. } for(int i = 0; i < n; i++) {
point compute_centroid(vector<point> &p) { if(hull[i] * vec > hull[ans] * vec) {
point c(0,0); // determine if point is on the boundary of a polygon (O(N)) ans = i;
ld scale = 6.0 * compute_signed_area(p); bool point_on_polygon(vector<point> &p, point q) { }
for (int i = 0; i < p.size(); i++){ for (int i = 0; i < p.size(); i++) }
int j = (i+1) % p.size(); if (q.dist2(project_point_segment(p[i], p[(i+1)%p.size()], q } else {
c = c + (p[i]+p[j])*(p[i].x*p[j].y - p[j].x*p[i].y); )) < EPS) return true; if(hull[1] * vec > hull[ans] * vec) {
} return false; ans = 1;
return c / scale; } }
for(int rep = 0; rep < 2; rep++) {
} //Shamos - Hoey for test polygon simple in O(nlog(n)) int l = 2, r = n - 1;
inline bool adj(int a, int b, int n) {return (b == (a + 1)%n or while(l != r) {
// TODO: test this code. This code has not been tested, please a == (b + 1)%n);} int mid = (l + r + 1) / 2;
do it before proper use. bool flag = hull[mid] * vec >=
// http://codeforces.com/problemset/problem/975/E is a good struct edge{ hull[mid-1] * vec;
problem for testing. point ini, fim; if(rep == 0) { flag = flag &&
point centroid(vector<point> &v) { edge(point ini = point(0,0), point fim = point(0,0)) : ini( hull[mid] * vec >= hull[0]
int n = v.size(); ini), fim(fim) {} * vec; }
type da = 0; }; else { flag = flag || hull[mid
point m, c; -1] * vec < hull[0] * vec;
//< here means the edge on the top will be at the begin }
for(point p : v) m = m + p; bool operator < (const edge& a, const edge& b) { if(flag) {
m = m / n; if (a.ini == b.ini) return direction(a.ini, a.fim, b.fim) < l = mid;
} else {
r = mid - 1;
int n = x.size();
vector<T> z(n);
7.13 Delaunay Triangulation
} vector<triple> ret;
} /*
if(hull[ans] * vec < hull[l] * vec) { for (int i = 0; i < n; i++) Complexity: O(nlogn)
ans = l; z[i] = x[i] * x[i] + y[i] * y[i]; Code by Monogon: https://codeforces.com/blog/entry/85638
} This code doesn’t work when two points have the same x
} for (int i = 0; i < n-2; i++) { coordinate.
} for (int j = i+1; j < n; j++) { This is handled simply by rotating all input points by 1 radian
return ans; for (int k = i+1; k < n; k++) { and praying to the geometry gods.
} if (j == k) continue;
double xn = (y[j]-y[i])*(z[k]-z[i]) - (y[k]- The definition of the Voronoi diagram immediately shows signs of
//find tangents related to a point outside the polygon, y[i])*(z[j]-z[i]); applications.
essentially the same for maximizing scalar product double yn = (x[k]-x[i])*(z[j]-z[i]) - (x[j]- * Given a set S of n points and m query points p1,...,pm, we
int tangent(vector<point> &hull, point vec, int dir_flag) { x[i])*(z[k]-z[i]); can answer for each query point, its nearest neighbor in S.
// this code assumes that there are no 3 colinear points double zn = (x[j]-x[i])*(y[k]-y[i]) - (x[k]- This can be done in O((n+q)log(n+q)) offline by sweeping the
// dir_flag = -1 for right tangent x[i])*(y[j]-y[i]); Voronoi diagram and query points.
// dir_flag = 1 for left taangent bool flag = zn < 0; Or it can be done online with persistent data structures.
int ans = 0; for (int m = 0; flag && m < n; m++)
int n = hull.size(); flag = flag && ((x[m]-x[i])*xn + * For each Delaunay triangle, its circumcircle does not
if(n < 20) { (y[m]-y[i])*yn + strictly contain any points in S. (In fact, you can also
for(int i = 0; i < n; i++) { (z[m]-z[i])*zn <= 0); consider this the defining property of Delaunay
if(hull[ans].dir(vec, hull[i]) == if (flag) ret.push_back(triple(i, j, k)); triangulation)
dir_flag) { }
ans = i; } * The number of Delaunay edges is at most 3n - 6, so there is
} } hope for an efficient construction.
} return ret;
} else { } * Each point p belongs to S is adjacent to its nearest
if(hull[ans].dir(vec, hull[1]) == dir_flag) { neighbor with a Delaunay edge.
ans = 1; int main()
} { * The Delaunay triangulation maximizes the minimum angle in
for(int rep = 0; rep < 2; rep++) { T xs[]={0, 0, 1, 0.9}; the triangles among all possible triangulations.
int l = 2, r = n - 1; T ys[]={0, 1, 0, 0.9};
while(l != r) { vector<T> x(&xs[0], &xs[4]), y(&ys[0], &ys[4]); * The Euclidean minimum spanning tree is a subset of Delaunay
int mid = (l + r + 1) / 2; vector<triple> tri = delaunayTriangulation(x, y); edges.
bool flag = hull[mid - 1].dir(
vec, hull[mid]) == dir_flag //expected: 0 1 3 */
; // 0 3 2
if(rep == 0) { flag = flag && ( #include <bits/stdc++.h>
hull[0].dir(vec, hull[mid]) int i;
== dir_flag); } for(i = 0; i < tri.size(); i++) #define ll long long
else { flag = flag || (hull[0]. printf("%d %d %d\n", tri[i].i, tri[i].j, tri[i].k); #define sz(x) ((int) (x).size())
dir(vec, hull[mid - 1]) != return 0; #define all(x) (x).begin(), (x).end()
dir_flag); } } #define vi vector<int>
if(flag) { #define pii pair<int, int>
l = mid; #define rep(i, a, b) for(int i = (a); i < (b); i++)
} else { using namespace std;
r = mid - 1; template<typename T>
} 7.12 Ternary Search using minpq = priority_queue<T, vector<T>, greater<T>>;
}
if(hull[ans].dir(vec, hull[l]) == using ftype = long double;
dir_flag) { //Ternary Search - O(log(n)) const ftype EPS = 1e-12, INF = 1e100;
ans = l; //Max version, for minimum version just change signals
} struct pt {
} ll ternary_search(ll l, ll r){ ftype x, y;
} while(r - l > 3) { pt(ftype x = 0, ftype y = 0) : x(x), y(y) {}
return ans; ll m1 = (l+r)/2;
} ll m2 = (l+r)/2 + 1; // vector addition, subtraction, scalar multiplication
ll f1 = f(m1), f2 = f(m2); pt operator+(const pt &o) const {
//if(f1 > f2) l = m1; return pt(x + o.x, y + o.y);
if (f1 < f2) l = m1; }
else r = m2; pt operator-(const pt &o) const {
7.11 Stanford Delaunay } return pt(x - o.x, y - o.y);
ll ans = 0; }
for(int i = l; i <= r; i++){ pt operator*(const ftype &f) const {
// Slow but simple Delaunay triangulation. Does not handle ll tmp = f(i); return pt(x * f, y * f);
// degenerate cases (from O’Rourke, Computational Geometry in C) //ans = min(ans, tmp); }
// ans = max(ans, tmp);
// Running time: O(nˆ4) } // rotate 90 degrees counter-clockwise
// return ans; pt rot() const {
// INPUT: x[] = x-coordinates } return pt(-y, x);
// y[] = y-coordinates }
// //Faster version - 300 iteratons up to 1e-6 precision
// OUTPUT: triples = a vector containing m triples of indices double ternary_search(double l, double r, int No = 300){ // dot and cross products
// corresponding to triangle vertices // for(int i = 0; i < No; i++){ ftype dot(const pt &o) const {
while(r - l > EPS){ return x * o.x + y * o.y;
#include<vector> double m1 = l + (r - l) / 3; }
using namespace std; double m2 = r - (r - l) / 3; ftype cross(const pt &o) const {
// if (f(m1) > f(m2)) return x * o.y - y * o.x;
typedef double T; if (f(m1) < f(m2)) }
l = m1;
struct triple { else // length
int i, j, k; r = m2; ftype len() const {
triple() {} } return hypotl(x, y);
triple(int i, int j, int k) : i(i), j(j), k(k) {} return f(l); }
}; }
// compare points lexicographically
vector<triple> delaunayTriangulation(vector<T>& x, vector<T>& y) bool operator<(const pt &o) const {
{ return make_pair(x, y) < make_pair(o.x, o.y);
} rep(i, 0, n) v[i] = {p[i], i}; #include <random>
}; sort(all(v)); // sort points by coordinate, remember
original indices for the delaunay edges std::mt19937 rng((int) std::chrono::steady_clock::now().
// check if two vectors are collinear. It might make sense to } time_since_epoch().count());
use a // update the remove event for the arc at position it
// different EPS here, especially if points have integer void upd(beach::iterator it) { struct PT {
coordinates if(it->i == -1) return; // doesn’t correspond to a real typedef long long T;
bool collinear(pt a, pt b) { point T x, y;
return abs(a.cross(b)) < EPS; valid[-it->id] = false; // mark existing remove event as PT(T _x = 0, T _y = 0) : x(_x), y(_y){}
} invalid PT operator +(const PT &p) const { return PT(x+p.x,y+p.y
auto a = prev(it); ); }
if(collinear(it->q - it->p, a->p - it->p)) return; // PT operator -(const PT &p) const { return PT(x-p.x,y-p.y
// intersection point of lines ab and cd. Precondition is that doesn’t generate a vertex event ); }
they aren’t collinear it->id = --ti; // new vertex event ID PT operator *(T c) const { return PT(x*c,y*c);
pt lineline(pt a, pt b, pt c, pt d) { valid.push_back(true); // label this ID true }
return a + (b - a) * ((c - a).cross(d - c) / (b - a).cross(d pt c = circumcenter(it->p, it->q, a->p); //PT operator /(double c) const { return PT(x/c,y/c);
- c)); ftype x = c.x + (c - it->p).len(); }
} // event is generated at time x. T operator *(const PT &p) const { return x*p.x+y*p.y;
// make sure it passes the sweep-line, and that the arc }
// circumcircle of points a, b, c. Precondition is that abc is a truly shrinks to 0 T operator %(const PT &p) const { return x*p.y-y*p.x;
non-degenerate triangle. if(x > sweepx - EPS && a->gety(x) + EPS > it->gety(x)) { }
pt circumcenter(pt a, pt b, pt c) { Q.push(event(x, it->id, it)); //double operator !() const { return sqrt(x*x+y*y
b = (a + b) * 0.5; } ); }
c = (a + c) * 0.5; } //double operator ˆ(const PT &p) const { return atan2(*
return lineline(b, b + (b - a).rot(), c, c + (c - a).rot()); // add Delaunay edge this%p, *this*p);}
} void add_edge(int i, int j) { bool operator < (const PT &p) const { return x != p.x ?
if(i == -1 || j == -1) return; x < p.x : y < p.y; }
// x coordinate of sweep-line edges.push_back({v[i].second, v[j].second}); bool operator == (const PT &p)const { return x == p.x &&
ftype sweepx; } y == p.y; }
// handle a point event
// an arc on the beacah line is given implicitly by the focus p, void add(int i) { friend std::ostream& operator << (std::ostream &os,
// the focus q of the following arc, and the position of the pt p = v[i].first; const PT &p) {
sweep-line. // find arc to split return os << p.x << ’ ’ << p.y;
struct arc { auto c = line.lower_bound(p.y); }
mutable pt p, q; // insert new arcs. passing the following iterator gives friend std::istream& operator >> (std::istream &is, PT &
mutable int id = 0, i; a slight speed-up p) {
arc(pt p, pt q, int i) : p(p), q(q), i(i) {} auto b = line.insert(c, arc(p, c->p, i)); return is >> p.x >> p.y;
auto a = line.insert(b, arc(c->p, p, c->i)); }
// get y coordinate of intersection with following arc. add_edge(i, c->i); };
// don’t question my magic formulas upd(a); upd(b); upd(c);
ftype gety(ftype x) const { } struct Segment {
if(q.y == INF) return INF; // handle a vertex event typedef long double T;
x += EPS; void remove(beach::iterator it) { PT p1, p2;
pt med = (p + q) * 0.5; auto a = prev(it); T a, b, c;
pt dir = (p - med).rot(); auto b = next(it);
ftype D = (x - p.x) * (x - q.x); line.erase(it); Segment() {}
return med.y + ((med.x - x) * dir.x + sqrtl(D) * dir.len a->q = b->p;
()) / dir.y; add_edge(a->i, b->i); Segment(PT st, PT en) {
} upd(a); upd(b); p1 = st, p2 = en;
bool operator<(const ftype &y) const { } a = -(st.y - en.y);
return gety(sweepx) < y; // X is a value exceeding all coordinates b = st.x - en.x;
} void solve(ftype X = 1e9) { c = a * en.x + b * en.y;
bool operator<(const arc &o) const { // insert two points that will always be in the beach }
return gety(sweepx) < o.gety(sweepx); line,
} // to avoid handling edge cases of an arc being first or T plug(T x, T y) {
}; last // plug >= 0 is to the right
X *= 3; return a * x + b * y - c;
// the beach line will be stored as a multiset of arc objects line.insert(arc(pt(-X, -X), pt(-X, X), -1)); }
using beach = multiset<arc, less<>>; line.insert(arc(pt(-X, X), pt(INF, INF), -1));
// create all point events T plug(PT p) {
// an event is given by rep(i, 0, n) { return plug(p.x, p.y);
// x: the time of the event Q.push(event(v[i].first.x, i, line.end())); }
// id: If >= 0, it’s a point event for index id. }
// If < 0, it’s an ID for a vertex event ti = 0; bool inLine(PT p) { return (p - p1) % (p2 - p1) == 0; }
// it: if a vertex event, the iterator for the arc to be valid.assign(1, false); bool inSegment(PT p) {
deleted while(!Q.empty()) { return inLine(p) && (p1 - p2) * (p - p2) >= 0 &&
struct event { event e = Q.top(); Q.pop(); (p2 - p1) * (p - p1) >= 0;
ftype x; sweepx = e.x; }
int id; if(e.id >= 0) {
beach::iterator it; add(e.id); PT lineIntersection(Segment s) {
event(ftype x, int id, beach::iterator it) : x(x), id(id), }else if(valid[-e.id]) { long double A = a, B = b, C = c;
it(it) {} remove(e.it); long double D = s.a, E = s.b, F = s.c;
bool operator<(const event &e) const { } long double x = (long double) C * E - (long
return x > e.x; } double) B * F;
} } long double y = (long double) A * F - (long
}; }; double) C * D;
long double tmp = (long double) A * E - (long
struct fortune { double) B * D;
beach line; // self explanatory x /= tmp;
vector<pair<pt, int>> v; // (point, original index) y /= tmp;
priority_queue<event> Q; // priority queue of point and return PT(x, y);
vertex events 7.14 Voronoi Diagram }
vector<pii> edges; // delaunay edges
vector<bool> valid; // valid[-id] == true if the vertex bool polygonIntersection(const std::vector<PT> &poly) {
event with corresponding id is valid //TFG50 Voronoi - source code: https://github.com/tfg50/ long double l = -1e18, r = 1e18;
int n, ti; // number of points, next available vertex ID Competitive-Programming/tree/master/Biblioteca/Math/2D%20 for(auto p : poly) {
fortune(vector<pt> p) { Geometry long double z = plug(p);
n = sz(p); l = std::max(l, z);
#include<bits/stdc++.h>
v.resize(n); #include <chrono> r = std::min(r, z);
}
return l - r > eps;
7.15 Delaunay Triangulation (emaxx) delete e->rev();
delete e->rot;
} delete e;
}; #include <bits/stdc++.h> }
typedef long long ll;
std::vector<PT> cutPolygon(std::vector<PT> poly, Segment seg) { QuadEdge* connect(QuadEdge* a, QuadEdge* b) {
int n = (int) poly.size(); bool ge(const ll& a, const ll& b) { return a >= b; } QuadEdge* e = make_edge(a->dest(), b->origin);
std::vector<PT> ans; bool le(const ll& a, const ll& b) { return a <= b; } splice(e, a->lnext());
for(int i = 0; i < n; i++) { bool eq(const ll& a, const ll& b) { return a == b; } splice(e->rev(), b);
double z = seg.plug(poly[i]); bool gt(const ll& a, const ll& b) { return a > b; } return e;
if(z > -eps) { bool lt(const ll& a, const ll& b) { return a < b; } }
ans.push_back(poly[i]); int sgn(const ll& a) { return a >= 0 ? a ? 1 : 0 : -1; }
} bool left_of(pt p, QuadEdge* e) {
double z2 = seg.plug(poly[(i + 1) % n]); struct pt { return gt(p.cross(e->origin, e->dest()), 0);
if((z > eps && z2 < -eps) || (z < -eps && z2 > ll x, y; }
eps)) { pt() { }
ans.push_back(seg.lineIntersection( pt(ll _x, ll _y) : x(_x), y(_y) { } bool right_of(pt p, QuadEdge* e) {
Segment(poly[i], poly[(i + 1) % n]) pt operator-(const pt& p) const { return lt(p.cross(e->origin, e->dest()), 0);
)); return pt(x - p.x, y - p.y); }
} }
} ll cross(const pt& p) const { template <class T>
return ans; return x * p.y - y * p.x; T det3(T a1, T a2, T a3, T b1, T b2, T b3, T c1, T c2, T c3) {
} } return a1 * (b2 * c3 - c2 * b3) - a2 * (b1 * c3 - c1 * b3) +
ll cross(const pt& a, const pt& b) const { a3 * (b1 * c2 - c1 * b2);
Segment getBisector(PT a, PT b) { return (a - *this).cross(b - *this); }
Segment ans(a, b); }
std::swap(ans.a, ans.b); ll dot(const pt& p) const { bool in_circle(pt a, pt b, pt c, pt d) {
ans.b *= -1; return x * p.x + y * p.y; // If there is __int128, calculate directly.
ans.c = ans.a * (a.x + b.x) * 0.5 + ans.b * (a.y + b.y) } // Otherwise, calculate angles.
* 0.5; ll dot(const pt& a, const pt& b) const { #if defined(__LP64__) || defined(_WIN64)
return ans; return (a - *this).dot(b - *this); __int128 det = -det3<__int128>(b.x, b.y, b.sqrLength(), c.x,
} } c.y,
ll sqrLength() const { c.sqrLength(), d.x, d.y, d.
return this->dot(*this); sqrLength());
} det += det3<__int128>(a.x, a.y, a.sqrLength(), c.x, c.y, c.
bool operator==(const pt& p) const { sqrLength(), d.x,
// BE CAREFUL! return eq(x, p.x) && eq(y, p.y); d.y, d.sqrLength());
// the first point may be any point } det -= det3<__int128>(a.x, a.y, a.sqrLength(), b.x, b.y, b.
// O(Nˆ3) }; sqrLength(), d.x,
std::vector<PT> getCell(std::vector<PT> pts, int i) { d.y, d.sqrLength());
std::vector<PT> ans; const pt inf_pt = pt(1e18, 1e18); det += det3<__int128>(a.x, a.y, a.sqrLength(), b.x, b.y, b.
ans.emplace_back(0, 0); sqrLength(), c.x,
ans.emplace_back(1e6, 0); struct QuadEdge { c.y, c.sqrLength());
ans.emplace_back(1e6, 1e6); pt origin; return det > 0;
ans.emplace_back(0, 1e6); QuadEdge* rot = nullptr; #else
for(int j = 0; j < (int) pts.size(); j++) { QuadEdge* onext = nullptr; auto ang = [](pt l, pt mid, pt r) {
if(j != i) { bool used = false; ll x = mid.dot(l, r);
ans = cutPolygon(ans, getBisector(pts[i QuadEdge* rev() const { ll y = mid.cross(l, r);
], pts[j])); return rot->rot; long double res = atan2((long double)x, (long double)y);
} } return res;
} QuadEdge* lnext() const { };
return ans; return rot->rev()->onext->rot; long double kek = ang(a, b, c) + ang(c, d, a) - ang(b, c, d)
} } - ang(d, a, b);
QuadEdge* oprev() const { if (kek > 1e-8)
// O(Nˆ2) expected time return rot->onext->rot; return true;
std::vector<std::vector<PT>> getVoronoi(std::vector<PT> pts) { } else
// assert(pts.size() > 0); pt dest() const { return false;
int n = (int) pts.size(); return rev()->origin; #endif
std::vector<int> p(n, 0); } }
for(int i = 0; i < n; i++) { };
p[i] = i; pair<QuadEdge*, QuadEdge*> build_tr(int l, int r, vector<pt>& p)
} QuadEdge* make_edge(pt from, pt to) { {
shuffle(p.begin(), p.end(), rng); QuadEdge* e1 = new QuadEdge; if (r - l + 1 == 2) {
std::vector<std::vector<PT>> ans(n); QuadEdge* e2 = new QuadEdge; QuadEdge* res = make_edge(p[l], p[r]);
ans[0].emplace_back(0, 0); QuadEdge* e3 = new QuadEdge; return make_pair(res, res->rev());
ans[0].emplace_back(w, 0); QuadEdge* e4 = new QuadEdge; }
ans[0].emplace_back(w, h); e1->origin = from; if (r - l + 1 == 3) {
ans[0].emplace_back(0, h); e2->origin = to; QuadEdge *a = make_edge(p[l], p[l + 1]), *b = make_edge(
for(int i = 1; i < n; i++) { e3->origin = e4->origin = inf_pt; p[l + 1], p[r]);
ans[i] = ans[0]; e1->rot = e3; splice(a->rev(), b);
} e2->rot = e4; int sg = sgn(p[l].cross(p[l + 1], p[r]));
for(auto i : p) { e3->rot = e2; if (sg == 0)
for(auto j : p) { e4->rot = e1; return make_pair(a, b->rev());
if(j == i) break; e1->onext = e1; QuadEdge* c = connect(b, a);
auto bi = getBisector(pts[j], pts[i]); e2->onext = e2; if (sg == 1)
if(!bi.polygonIntersection(ans[j])) e3->onext = e4; return make_pair(a, b->rev());
continue; e4->onext = e3; else
ans[j] = cutPolygon(ans[j], getBisector( return e1; return make_pair(c->rev(), c);
pts[j], pts[i])); } }
ans[i] = cutPolygon(ans[i], getBisector( int mid = (l + r) / 2;
pts[i], pts[j])); void splice(QuadEdge* a, QuadEdge* b) { QuadEdge *ldo, *ldi, *rdo, *rdi;
} swap(a->onext->rot->onext, b->onext->rot->onext); tie(ldo, ldi) = build_tr(l, mid, p);
} swap(a->onext, b->onext); tie(rdi, rdo) = build_tr(mid + 1, r, p);
return ans; } while (true) {
} if (left_of(rdi->origin, ldi)) {
void delete_edge(QuadEdge* e) { ldi = ldi->lnext();
splice(e, e->oprev()); continue;
}
splice(e->rev(), e->rev()->oprev());
delete e->rev()->rot; if (right_of(ldi->origin, rdi)) {
rdi = rdi->rev()->onext;
continue; #define st first pll cur = {cfloor(pts[i].x, min_dist), cfloor(pts[i].y,
} #define nd second min_dist)};
break; for(int dx = -1; dx <= 1; dx++)
} typedef long long ll; for(int dy = -1; dy <= 1; dy++)
QuadEdge* basel = connect(rdi->rev(), ldi); typedef long double ld; for(auto p : f[{cur.st + dx, cur.nd + dy}])
auto valid = [&basel](QuadEdge* e) { return right_of(e->dest typedef pair<ll,ll> pll; min_dist = min(min_dist, pts[i].dist2(pts[p
(), basel); }; ]));
if (ldi->origin == ldo->origin) const ld EPS = 1e-9, PI = acos(-1.); }
ldo = basel->rev(); const ll LINF = 0x3f3f3f3f3f3f3f3f; }
if (rdi->origin == rdo->origin) const int N = 1e5+5;
rdo = basel; int main(){
while (true) { typedef long long type; ios_base::sync_with_stdio(false);
QuadEdge* lcand = basel->rev()->onext; cin.tie(NULL);
if (valid(lcand)) { struct point { cin >> n;
while (in_circle(basel->dest(), basel->origin, lcand type x, y, z; pts.resize(n);
->dest(), for(int i = 0; i < n; i++) cin >> pts[i].x >> pts[i].y >>
lcand->onext->dest())) { point() : x(0), y(0), z(0) {} pts[i].z;
QuadEdge* t = lcand->onext; point(type _x, type _y, type _z) : x(_x), y(_y) , z(_z) {} sort(pts.begin(), pts.end());
delete_edge(lcand); closest_pair(0, n);
lcand = t; point operator -() { return point(-x, -y, -z); } cout << setprecision(15) << fixed << sqrt((ld)min_dist) << "
} point operator +(point p) { return point(x + p.x, y + p.y, z \n";
} + p.z); } return 0;
QuadEdge* rcand = basel->oprev(); point operator -(point p) { return point(x - p.x, y - p.y, z }
if (valid(rcand)) { - p.z); }
while (in_circle(basel->dest(), basel->origin, rcand
->dest(), point operator *(type k) { return point(x*k, y*k, z*k); }
rcand->oprev()->dest())) { point operator /(type k) { return point(x/k, y/k, z/k); }
QuadEdge* t = rcand->oprev();
delete_edge(rcand);
rcand = t;
bool operator ==(const point &p) const{ return x == p.x and
y == p.y and z == p.z; }
8 Miscellaneous
} bool operator !=(const point &p) const{ return x != p.x or
} y != p.y or z != p.z; }
if (!valid(lcand) && !valid(rcand))
break;
bool operator <(const point &p) const { return (z < p.z) or
(z == p.z and y < p.y) or (z == p.z and y == p.y and x
8.1 Bitset
if (!valid(lcand) || < p.x); }
(valid(rcand) && in_circle(lcand->dest(), lcand-> //Goes through the subsets of a set x :
origin, type abs2() { return x*x + y*y + z*z; } int b = 0;
rcand->origin, rcand-> type dist2(point q) { return (*this - q).abs2(); } do {
dest()))) // process subset b
basel = connect(rcand, basel->rev()); }; } while (b=(b-x)&x);
else
basel = connect(basel->rev(), lcand->rev()); ll cfloor(ll a, ll b) {
} ll c = abs(a);
return make_pair(ldo, rdo); ll d = abs(b);
} if (a * b > 0) return c/d;
return -(c + d - 1)/d; 8.2 builtin
vector<tuple<pt, pt, pt>> delaunay(vector<pt> p) { }
sort(p.begin(), p.end(), [](const pt& a, const pt& b) {
ll min_dist = LINF; __builtin_ctz(x) // trailing zeroes
return lt(a.x, b.x) || (eq(a.x, b.x) && lt(a.y, b.y)); __builtin_clz(x) // leading zeroes
}); pair<int, int> best_pair;
vector<point> pts; __builtin_popcount(x) // # bits set
auto res = build_tr(0, (int)p.size() - 1, p); __builtin_ffs(x) // index(LSB) + 1 [0 if x==0]
QuadEdge* e = res.first; int n;
vector<QuadEdge*> edges = {e}; // Add ll to the end for long long [__builtin_clzll(x)]
while (lt(e->onext->dest().cross(e->dest(), e->origin), 0)) //Warning: include variable id into the struct point
e = e->onext; void upd_ans(const point & a, const point & b) {
auto add = [&p, &e, &edges]() { ll dist = (a.x - b.x)*(a.x - b.x) + (a.y - b.y)*(a.y - b.y)
QuadEdge* curr = e; + (a.z - b.z)*(a.z - b.z);
if (dist < min_dist) {
do {
curr->used = true; min_dist = dist;
// best_pair = {a.id, b.id};
8.3 Date
p.push_back(curr->origin);
edges.push_back(curr->rev()); }
curr = curr->lnext(); } struct Date {
} while (curr != e); int d, m, y;
}; void closest_pair(int l, int r) { static int mnt[], mntsum[];
add(); if (r - l <= 3) {
p.clear(); for (int i = l; i < r; ++i) { Date() : d(1), m(1), y(1) {}
int kek = 0; for (int j = i + 1; j < r; ++j) { Date(int d, int m, int y) : d(d), m(m), y(y) {}
while (kek < (int)edges.size()) { upd_ans(pts[i], pts[j]); Date(int days) : d(1), m(1), y(1) { advance(days); }
if (!(e = edges[kek++])->used) }
add(); } bool bissexto() { return (y%4 == 0 and y%100) or (y%400 == 0);
} return; }
vector<tuple<pt, pt, pt>> ans; }
for (int i = 0; i < (int)p.size(); i += 3) { int mdays() { return mnt[m] + (m == 2)*bissexto(); }
ans.push_back(make_tuple(p[i], p[i + 1], p[i + 2])); int m = (l + r) >> 1; int ydays() { return 365+bissexto(); }
} type midz = pts[m].z;
return ans; closest_pair(l, m); int msum() { return mntsum[m-1] + (m > 2)*bissexto(); }
} closest_pair(m, r); int ysum() { return 365*(y-1) + (y-1)/4 - (y-1)/100 + (y-1)
/400; }
//map opposite side
map<pll, vector<int>> f; int count() { return (d-1) + msum() + ysum(); }
for(int i = m; i < r; i++){
f[{cfloor(pts[i].x, min_dist), cfloor(pts[i].y, min_dist
7.16 Closest Pair of Points 3D }
)}].push_back(i);
int day() {
int x = y - (m<3);
return (x + x/4 - x/100 + x/400 + mntsum[m-1] + d + 6)%7;
//find }
#include <bits/stdc++.h> for(int i = l; i < m; i++){
if((midz - pts[i].z) * (midz - pts[i].z) >= min_dist)
void advance(int days) {
using namespace std; continue; days += count();
d = m = 1, y = 1 + days/366;
days -= count();
8.5 Merge Sort (Inversion Count) Modular& operator*=(const Modular& b) { return *this = *this *
b; }
while(days >= ydays()) days -= ydays(), y++; Modular& operator/=(const Modular& b) { return *this = *this /
while(days >= mdays()) days -= mdays(), m++; b; }
d += days; // Merge-sort with inversion count - O(nlog n) };
}
}; int n, inv; using Mint = Modular<MOD>;
vector<int> v, ans;
int Date::mnt[13] = {0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31,
30, 31}; void mergesort(int l, int r, vector<int> &v){
int Date::mntsum[13] = {}; if(l == r) return;
for(int i=1; i<13; ++i) Date::mntsum[i] = Date::mntsum[i-1] +
Date::mnt[i];
int mid = (l+r)/2;
mergesort(l, mid, v), mergesort(mid+1, r, v); 8.7 Parallel Binary Search
int i = l, j = mid + 1, k = l;
while(i <= mid or j <= r){
if(i <= mid and (j > r or v[i] <= v[j])) ans[k++] = v[i // Parallel Binary Search - O(nlog n * cost to update data
++]; structure + qlog n * cost for binary search condition)
8.4 Parentesis to Poslish (ITA) }
else ans[k++] = v[j++], inv += j-k;
struct Query { int i, ans; /*+ query related info*/ };
for(int i = l; i <= r; i++) v[i] = ans[i]; vector<Query> req;
}
#include <cstdio> void pbs(vector<Query>& qs, int l /* = min value*/, int r /* =
#include <map> //in main max value*/) {
#include <stack> ans.resize(v.size()); if (qs.empty()) return;
using namespace std;
if (l == r) {
/* for (auto& q : qs) req[q.i].ans = l;
* Parenthetic to polish expression conversion return;
}
*/
8.6 Modular Int (Struct)
inline bool isOp(char c) { int mid = (l + r) / 2;
return c==’+’ || c==’-’ || c==’*’ || c==’/’ || c==’ˆ’; // mid = (l + r + 1) / 2 if different from simple upper/lower
} bound
// Struct to do basic modular arithmetic
inline bool isCarac(char c) { for (int i = l; i <= mid; i++) {
template <int MOD> // add value to data structure
return (c>=’a’ && c<=’z’) || (c>=’A’ && c<=’Z’) || (c>=’ struct Modular { }
0’ && c<=’9’); int v;
} vector<Query> vl, vr;
static int minv(int a, int m) { for (auto& q : qs) {
int paren2polish(char* paren, char* polish) { a %= m; if (/* cond */) vl.push_back(q);
map<char, int> prec; assert(a); else vr.push_back(q);
prec[’(’] = 0; return a == 1 ? 1 : int(m - ll(minv(m, a)) * ll(m) / a); }
prec[’+’] = prec[’-’] = 1; }
prec[’*’] = prec[’/’] = 2; pbs(vr, mid + 1, r);
prec[’ˆ’] = 3; Modular(ll _v = 0) : v(int(_v % MOD)) {
int len = 0; if (v < 0) v += MOD; for (int i = l; i <= mid; i++) {
stack<char> op; } // remove value from data structure
for (int i = 0; paren[i]; i++) { }
if (isOp(paren[i])) { bool operator==(const Modular& b) const { return v == b.v; }
while (!op.empty() && prec[op.top()] >= bool operator!=(const Modular& b) const { return v != b.v; } pbs(vl, l, mid);
prec[paren[i]]) { }
polish[len++] = op.top(); op.pop friend Modular inv(const Modular& b) { return Modular(minv(b.v
(); , MOD)); }
}
op.push(paren[i]); friend ostream& operator<<(ostream& os, const Modular& b) {
}
else if (paren[i]==’(’) op.push(’(’);
return os << b.v; }
friend istream& operator>>(istream& is, Modular& b) { 8.8 prime numbers
else if (paren[i]==’)’) { ll _v;
for (; op.top()!=’(’; op.pop()) is >> _v;
polish[len++] = op.top(); b = Modular(_v); 2 3 5 7 11 13 17 19 23 29
op.pop(); return is; 31 37 41 43 47 53 59 61 67 71
} } 73 79 83 89 97 101 103 107 109 113
else if (isCarac(paren[i])) 127 131 137 139 149 151 157 163 167 173
polish[len++] = paren[i]; Modular operator+(const Modular& b) const { 179 181 191 193 197 199 211 223 227 229
} Modular ans; 233 239 241 251 257 263 269 271 277 281
for(; !op.empty(); op.pop()) ans.v = v >= MOD - b.v ? v + b.v - MOD : v + b.v; 283 293 307 311 313 317 331 337 347 349
polish[len++] = op.top(); return ans; 353 359 367 373 379 383 389 397 401 409
polish[len] = 0; } 419 421 431 433 439 443 449 457 461 463
return len; 467 479 487 491 499 503 509 521 523 541
} Modular operator-(const Modular& b) const { 547 557 563 569 571 577 587 593 599 601
Modular ans; 607 613 617 619 631 641 643 647 653 659
/* ans.v = v < b.v ? v - b.v + MOD : v - b.v; 661 673 677 683 691 701 709 719 727 733
* TEST MATRIX return ans; 739 743 751 757 761 769 773 787 797 809
*/ } 811 821 823 827 829 839 853 857 859 863
877 881 883 887 907 911 919 929 937 941
int main() { Modular operator*(const Modular& b) const { 947 953 967 971 977 983 991 997 1009 1013
int N, len; Modular ans; 1019 1021 1031 1033 1039 1049 1051 1061 1063 1069
char polish[400], paren[400]; ans.v = int(ll(v) * ll(b.v) % MOD); 1087 1091 1093 1097 1103 1109 1117 1123 1129 1151
scanf("%d", &N); return ans; 1153 1163 1171 1181 1187 1193 1201 1213 1217 1223
for (int j=0; j<N; j++) { } 1229 1231 1237 1249 1259 1277 1279 1283 1289 1291
scanf(" %s", paren); 1297 1301 1303 1307 1319 1321 1327 1361 1367 1373
paren2polish(paren, polish); Modular operator/(const Modular& b) const { 1381 1399 1409 1423 1427 1429 1433 1439 1447 1451
printf("%s\n", polish); return (*this) * inv(b); 1453 1459 1471 1481 1483 1487 1489 1493 1499 1511
} } 1523 1531 1543 1549 1553 1559 1567 1571 1579 1583
return 0; 1597 1601 1607 1609 1613 1619 1621 1627 1637 1657
} Modular& operator+=(const Modular& b) { return *this = *this + 1663 1667 1669 1693 1697 1699 1709 1721 1723 1733
b; } 1741 1747 1753 1759 1777 1783 1787 1789 1801 1811
Modular& operator-=(const Modular& b) { return *this = *this - 1823 1831 1847 1861 1867 1871 1873 1877 1879 1889
b; } 1901 1907 1913 1931 1933 1949 1951 1973 1979 1987
970997 971483 921281269 999279733
1000000009 1000000021 1000000409 1005012527


8.9 Python

# reopen
import sys
sys.stdout = open('out', 'w')
sys.stdin = open('in', 'r')

# Dummy example
R = lambda: map(int, input().split())
n, k = R()
v, t = [], [0]*n
for p, c, i in sorted(zip(R(), R(), range(n))):
    t[i] = sum(v)+c
    v += [c]
    v = sorted(v)[::-1]
    if len(v) > k:
        v.pop()
print(' '.join(map(str, t)))


8.10 Sqrt Decomposition

// Square Root Decomposition (Mo's Algorithm) - O(n^(3/2))
const int N = 1e5+1, SQ = 500;
int n, m, v[N];

void add(int p) { /* add value to aggregated data structure */ }
void rem(int p) { /* remove value from aggregated data structure */ }

struct query { int i, l, r, ans; } qs[N];

bool c1(query a, query b) {
    if(a.l/SQ != b.l/SQ) return a.l < b.l;
    return a.l/SQ&1 ? a.r > b.r : a.r < b.r;
}
bool c2(query a, query b) { return a.i < b.i; }

/* inside main */
int l = 0, r = -1;
sort(qs, qs+m, c1);
for (int i = 0; i < m; ++i) {
    query &q = qs[i];
    while (r < q.r) add(v[++r]);
    while (r > q.r) rem(v[r--]);
    while (l < q.l) rem(v[l++]);
    while (l > q.l) add(v[--l]);
    q.ans = /* calculate answer */;
}
sort(qs, qs+m, c2); // sort to original order
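A common way to instantiate add/rem with this template is counting distinct values in a range. The sketch below is an illustration, not part of the original notebook; cnt[] and distinct are hypothetical names, and values are assumed to lie in [0, N).

// Hypothetical example: q.ans = number of distinct values in [q.l, q.r].
int cnt[N], distinct = 0;
void add(int x) { if (cnt[x]++ == 0) distinct++; }
void rem(int x) { if (--cnt[x] == 0) distinct--; }
// inside the query loop: q.ans = distinct;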
8.11 Latitude Longitude (Stanford)

/*
Converts from rectangular coordinates to latitude/longitude and
vice versa. Uses degrees (not radians).
*/

#include <iostream>
#include <cmath>

using namespace std;

struct ll
{
    double r, lat, lon;
};

struct rect
{
    double x, y, z;
};

ll convert(rect& P)
{
    ll Q;
    Q.r = sqrt(P.x*P.x+P.y*P.y+P.z*P.z);
    Q.lat = 180/M_PI*asin(P.z/Q.r);
    Q.lon = 180/M_PI*acos(P.x/sqrt(P.x*P.x+P.y*P.y));
    return Q;
}

rect convert(ll& Q)
{
    rect P;
    P.x = Q.r*cos(Q.lon*M_PI/180)*cos(Q.lat*M_PI/180);
    P.y = Q.r*sin(Q.lon*M_PI/180)*cos(Q.lat*M_PI/180);
    P.z = Q.r*sin(Q.lat*M_PI/180);
    return P;
}

int main()
{
    rect A;
    ll B;

    A.x = -1.0; A.y = 2.0; A.z = -3.0;

    B = convert(A);
    cout << B.r << " " << B.lat << " " << B.lon << endl;

    A = convert(B);
    cout << A.x << " " << A.y << " " << A.z << endl;
}
8.12 Week day

int v[] = { 0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4 };
int day(int d, int m, int y) {
    y -= m<3;
    return (y + y/4 - y/100 + y/400 + v[m-1] + d)%7;
}
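A quick sanity check, assuming this follows the usual Sakamoto convention of 0 = Sunday (not stated in the notebook): day(1, 1, 2024) evaluates to 1, i.e. Monday, which matches the calendar.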
9 Math Extra

9.1 Combinatorial formulas

$\sum_{k=0}^{n} k^2 = n(n+1)(2n+1)/6$
$\sum_{k=0}^{n} k^3 = n^2(n+1)^2/4$
$\sum_{k=0}^{n} k^4 = (6n^5 + 15n^4 + 10n^3 - n)/30$
$\sum_{k=0}^{n} k^5 = (2n^6 + 6n^5 + 5n^4 - n^2)/12$
$\sum_{k=0}^{n} x^k = (x^{n+1} - 1)/(x - 1)$
$\sum_{k=0}^{n} k x^k = (x - (n+1)x^{n+1} + n x^{n+2})/(x - 1)^2$
$\binom{n}{k} = \frac{n!}{(n-k)!\,k!}$, with $\binom{n}{0} = \binom{n}{n} = 1$
$\binom{n}{k} = \binom{n-1}{k} + \binom{n-1}{k-1}$
$\binom{n}{k} = \frac{n}{k}\binom{n-1}{k-1} = \frac{n-k+1}{k}\binom{n}{k-1} = \frac{n}{n-k}\binom{n-1}{k}$
$\binom{n}{k+1} = \frac{n-k}{k+1}\binom{n}{k}$, $\binom{n+1}{k} = \frac{n+1}{n-k+1}\binom{n}{k}$
$\sum_{k=1}^{n} k \binom{n}{k} = n 2^{n-1}$
$\sum_{k=1}^{n} k^2 \binom{n}{k} = (n + n^2) 2^{n-2}$
$\binom{m+n}{r} = \sum_{k=0}^{r} \binom{m}{k}\binom{n}{r-k}$ (Vandermonde)
$\binom{r}{k} = \prod_{i=1}^{k} \frac{r-k+i}{i}$
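In practice these identities are usually consumed through a precomputed Pascal triangle. The sketch below is illustrative and not from the original notebook; the bound MAXC and modulus MODC are arbitrary choices.

// Pascal's rule C(n,k) = C(n-1,k) + C(n-1,k-1), tabulated mod a prime.
const int MAXC = 2005;               // illustrative bound
const long long MODC = 1e9 + 7;      // illustrative modulus
long long C[MAXC][MAXC];             // entries with k > n stay 0

void build_binomials() {
    for (int n = 0; n < MAXC; n++) {
        C[n][0] = 1;
        for (int k = 1; k <= n; k++)
            C[n][k] = (C[n-1][k] + C[n-1][k-1]) % MODC;
    }
}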
9.2 Number theory identities

Lucas' Theorem: For non-negative integers m and n and a prime p,

$\binom{m}{n} \equiv \prod_{i=0}^{k} \binom{m_i}{n_i} \pmod{p}$,

where $m = m_k p^k + m_{k-1} p^{k-1} + \cdots + m_1 p + m_0$ is the base p
representation of m, and similarly for n.
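A direct translation of the theorem into code could look like the sketch below. It is not from the original notebook; pw, C_small and lucas are illustrative names, p is assumed small enough to precompute factorials mod p, and the usual bits/stdc++.h / using namespace std preamble from the template is assumed.

long long pw(long long b, long long e, long long p) {      // fast b^e mod p
    long long r = 1; b %= p;
    for (; e; e >>= 1, b = b * b % p) if (e & 1) r = r * b % p;
    return r;
}

// C(a, b) mod p for 0 <= a, b < p, factorials precomputed in fat[]
long long C_small(long long a, long long b, long long p, const vector<long long> &fat) {
    if (b < 0 || b > a) return 0;
    return fat[a] * pw(fat[b] * fat[a - b] % p, p - 2, p) % p;
}

long long lucas(long long m, long long n, long long p) {
    vector<long long> fat(p, 1);
    for (long long i = 1; i < p; i++) fat[i] = fat[i - 1] * i % p;
    long long res = 1;
    for (; m > 0 || n > 0; m /= p, n /= p)                  // one base-p digit at a time
        res = res * C_small(m % p, n % p, p, fat) % p;
    return res;
}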
9.3 Stirling Numbers of the second kind

Number of ways to partition a set of n numbers into k non-empty subsets.

$\left\{ {n \atop k} \right\} = \frac{1}{k!} \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} j^n$

Recurrence relation:

$\left\{ {0 \atop 0} \right\} = 1$, $\left\{ {n \atop 0} \right\} = \left\{ {0 \atop n} \right\} = 0$ for $n > 0$,
$\left\{ {n+1 \atop k} \right\} = k \left\{ {n \atop k} \right\} + \left\{ {n \atop k-1} \right\}$
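The recurrence translates directly into a DP table; a minimal sketch (not from the notebook, with an arbitrary illustrative bound and modulus):

const int MAXS = 1005;                 // illustrative bound
const long long MODS = 1e9 + 7;        // illustrative modulus
long long S[MAXS][MAXS];               // S[n][k] = Stirling2(n, k) mod MODS

void build_stirling() {
    S[0][0] = 1;
    for (int n = 1; n < MAXS; n++)
        for (int k = 1; k <= n; k++)
            S[n][k] = (k * S[n-1][k] + S[n-1][k-1]) % MODS;
}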
9.4 Burnside's Lemma

Let G be a finite group that acts on a set X. For each g in G let X^g denote
the set of elements in X that are fixed by g, which means
$X^g = \{x \in X \mid g(x) = x\}$. Burnside's lemma asserts the following
formula for the number of orbits, denoted |X/G|:

$|X/G| = \frac{1}{|G|} \sum_{g \in G} |X^g|$
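As an illustration (not from the notebook): counting necklaces of n beads with k colors up to rotation. The group is the n cyclic rotations, and the rotation by i fixes k^gcd(i, n) colorings. ipow and necklaces are hypothetical names, overflow is ignored for clarity, and __gcd from <algorithm> is assumed available.

long long ipow(long long b, long long e) {     // naive power, assumes no overflow
    long long r = 1;
    while (e--) r *= b;
    return r;
}

long long necklaces(long long n, long long k) {
    long long total = 0;
    for (long long i = 0; i < n; i++)
        total += ipow(k, __gcd(i, n));         // |X^g| for rotation by i
    return total / n;                          // Burnside: average over |G| = n
}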
9.5 Numerical integration

RK4: to integrate $\dot{y} = f(t, y)$ with $y_0 = y(t_0)$, compute

$k_1 = f(t_n, y_n)$
$k_2 = f(t_n + \tfrac{h}{2},\ y_n + \tfrac{h}{2} k_1)$
$k_3 = f(t_n + \tfrac{h}{2},\ y_n + \tfrac{h}{2} k_2)$
$k_4 = f(t_n + h,\ y_n + h k_3)$
$y_{n+1} = y_n + \tfrac{h}{6} (k_1 + 2k_2 + 2k_3 + k_4)$
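One way to turn these formulas into code is the sketch below (not from the notebook); f, the step size h and the step count are caller-supplied, and all names are illustrative.

double rk4(double (*f)(double, double), double t0, double y0, double h, int steps) {
    double t = t0, y = y0;
    for (int i = 0; i < steps; i++) {
        double k1 = f(t, y);
        double k2 = f(t + h/2, y + h/2 * k1);
        double k3 = f(t + h/2, y + h/2 * k2);
        double k4 = f(t + h,   y + h   * k3);
        y += h/6 * (k1 + 2*k2 + 2*k3 + k4);    // RK4 update
        t += h;
    }
    return y;                                  // approximation of y(t0 + steps*h)
}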
S R X Topic Description Difficulty