
Editorial for the Goodbye 2022 Round

Nanako & m_99 & pornjoke & SteamTurbine & triple__a & Nezzar

Since the spoiler tag is only supported on Codeforces, the hints in the Chinese version of this editorial are not foldable; we apologize for the inconvenience.

Problem A. Koxia and Whiteboards


Hint

Hint 1

The final answer is the sum of the n values that remain on the whiteboards.

Hint 2

After all operations are completed, the value written by the last operation will always be among the final numbers.

Hint 3

Consider the moment the last operation is performed. According to Hint 2, its value is always among the final numbers; what about the other n - 1 final numbers?

Solution

This problem can be solved naturally with a greedy algorithm: for each operation, we overwrite the current smallest element. With a heap or a multiset, the time complexity per test case is O((n + m) log n).

Another approach is to add the value of the last operation to the final sum first; among the remaining n + m - 1 numbers (the initial values and the values of the earlier operations) we are free to choose which n - 1 to keep, and we simply keep the largest ones. Therefore sorting, or a similar technique, solves this problem in O((n + m) log (n + m)) time, which is also the intended solution.
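For reference, here is a minimal sketch of the greedy described above (our own code, not the setters' solution below), using a min-heap to overwrite the current smallest value; it assumes the same input format as the official code.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, m;
        scanf("%d %d", &n, &m);
        // min-heap of the values currently written on the whiteboards
        priority_queue<long long, vector<long long>, greater<long long>> pq;
        for (int i = 0; i < n; ++i) {
            long long a;
            scanf("%lld", &a);
            pq.push(a);
        }
        for (int j = 0; j < m; ++j) {
            long long c;
            scanf("%lld", &c);
            pq.pop();     // erase the current minimum...
            pq.push(c);   // ...and write c in its place
        }
        long long sum = 0;
        while (!pq.empty()) {
            sum += pq.top();
            pq.pop();
        }
        printf("%lld\n", sum);
    }
    return 0;
}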

Code (m_99)

#include <stdio.h>
#include <bits/stdc++.h>
using namespace std;
#define rep(i,n) for (int i = 0; i < (n); ++i)
#define Inf32 1000000001
#define Inf64 4000000000000000001

int main(){

int _t;
cin>>_t;

rep(_,_t){
int n,m;
cin>>n>>m;
vector<long long> a(n+m);
rep(i,n+m)scanf("%lld",&a[i]);

sort(a.begin(),a.end()-1);
reverse(a.begin(),a.end());

long long ans = 0;


rep(i,n)ans += a[i];

cout<<ans<<endl;
}

return 0;
}

Problem B. Koxia and Permutation


Hint
Hint 1

When k = 1, all permutations have the same cost.

Hint 2

When k ≥ 2, the minimum cost is always n + 1.

Solution
When k = 1, each permutation has the same cost.

When k ≥ 2, the minimum cost is at least n + 1. This is because at least one window contains n, which contributes n to the maximum part, and the minimum part of that window contributes at least 1.

The minimum cost does indeed always reach this lower bound: we can construct permutations of the form [n, 1, n - 1, 2, n - 2, 3, ...] such that the cost is n + 1, regardless of the value of k.

The time complexity per test case is O(n). With careful implementation, other constructions may also be accepted.

Code (Time)
#include <iostream>

#define MULTI int _T; cin >> _T; while(_T--)
using namespace std;

typedef long long ll;

int n, k;

int main () {

ios::sync_with_stdio(0); cin.tie(0);

MULTI {
cin >> n >> k;
int l = 1, r = n, _ = 1;

while (l <= r) cout << ((_ ^= 1) ? l++ : r--) << ' ';
cout << endl;
}

}
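As a quick check of the construction: for n = 6 the program above prints 6 1 5 2 4 3, i.e. it alternates the largest and the smallest of the values that have not been placed yet.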

Problem C. Koxia and Number Theory


Hint
Hint 1

If the elements are not pairwise distinct, then this is a trivial NO case.

Hint 2

If the elements are pairwise distinct, can you still construct an example whose answer is NO?

Hint 3

Maybe you're thinking about properties like parity? Try to generalize your ideas.

Hint 3.5

Consider every prime number. Consider the Chinese Remainder Theorem.

Hint 4

How many prime numbers should we check? Consider the pigeonhole principle.

Solution
First of all, the elements should be pairwise distinct: if a_i = a_j for some i != j, then gcd(a_i + x, a_j + x) = a_i + x > 1 for every x, so this is the trivial NO case.

For a given x, let b_i = a_i + x. The condition "gcd(b_i, b_j) = 1 holds for every pair" is equivalent to "every prime divides at most one of the b_i". So, for a prime p, can we determine whether p always divides at least two of the b_i, regardless of the value of x?

A small but instructive example is an array with at least two odd and at least two even elements, say [1, 2, 3, 4]. The answer for such an example is NO because:

when x is odd, both 1 + x and 3 + x are divisible by 2;

when x is even, both 2 + x and 4 + x are divisible by 2.

That is, if we consider the array modulo 2, each of the residues 0 and 1 appears twice, so any value of x results in two elements of b being divisible by 2.
This idea generalizes to larger primes. For a prime p, let cnt_r be the number of elements of a that are congruent to r modulo p. If cnt_r >= 2 for every residue r, then we immediately output NO.

Although there are many primes, we only need to check the primes not exceeding n / 2. This is because, by the pigeonhole principle, the condition cannot be satisfied for larger primes: it would require at least 2p > n elements. Since there are at most O(n / log n) primes up to n / 2 and each of them is checked in O(n) time, this part easily fits in the time limit.

It remains to argue that the condition is also sufficient. The reason is that, for a prime p, if some residue class modulo p contains at most one element of a, we can choose x so that -x falls into that class modulo p, and then at most one of the b_i is a multiple of p. In fact, the condition above says exactly that such a residue class exists for every prime p. If it holds for all primes, we can list a system of congruences and use the Chinese Remainder Theorem to find a suitable x; if there is at least one prime p for which every residue class contains at least two elements, then any value of x results in at least two of the b_i being divisible by p.

Code (Time)
#include <iostream>
#include <algorithm>
#define MULTI int _T; cin >> _T; while(_T--)
using namespace std;
typedef long long ll;

const int N = 105;


const int INF = 0x3f3f3f3f;
template <typename T> bool chkmin (T &x, T y) {return y < x ? x = y, 1 : 0;}
template <typename T> bool chkmax (T &x, T y) {return y > x ? x = y, 1 : 0;}

int n;
ll a[N];

int cnt[N];

int main () {

ios::sync_with_stdio(0);
cin.tie(0);

MULTI {
cin >> n;
for (int i = 1;i <= n;++i) {
cin >> a[i];
}

int isDistinct = 1;
sort(a + 1, a + n + 1);
for (int i = 1;i <= n - 1;++i) {
if (a[i] == a[i + 1]) isDistinct = 0;
}

if (isDistinct == 0) {
cout << "NO" << endl;
continue;
}

int CRT_able = 1;
for (int mod = 2;mod <= n / 2;++mod) {
fill(cnt, cnt + mod, 0);
for (int i = 1;i <= n;++i) {
cnt[a[i] % mod]++;
}

if (*min_element(cnt, cnt + mod) >= 2) CRT_able = 0;
}

cout << (CRT_able ? "YES" : "NO") << endl;
}
}

Problem D. Koxia and Game


Hint
Hint 1

If everything, including c, is already determined, how do we determine who wins?

Hint 2

If c is not determined yet, try to design an algorithm to determine whether there exists a c such that Koxia wins.

Hint 2.5

If you can't solve the problem in Hint 2, try thinking about how it relates to graph theory.

Hint 3

Try to analyze the structure of each connected component of the graph in order to compute the count.

Solution
First, let's consider what kind of array c allows Koxia to win.

Lemma 1. In each round of the game, Koxia should remove an element so that the two remaining elements are equal; that is, Mahiru's choice does not actually affect the outcome.

In the first round, if Koxia leaves two different elements, Mahiru can always pick the one that blocks the formation of a permutation. This means that Koxia can only win if Mahiru effectively has only one choice, i.e. the two remaining elements are equal, and their common value becomes the corresponding entry of the final permutation. After the first round is decided, a similar discussion applies to the following rounds, and we conclude that Koxia can only win if Mahiru has only one choice in every round of the game.

Lemma 2. Let d be an array of length n, where d_i is one of a_i and b_i. Koxia wins if and only if d can be chosen to be a permutation.

According to Lemma 1, when there is a choice of d that is a permutation, Koxia sets c_i = d_i; then in every round Koxia can force Mahiru to leave d_i, so Koxia wins. If no choice of d is a permutation, a reduction similar to Lemma 1 shows that there is no array c with which Koxia wins.

First, we need an algorithm to determine whether d can be a permutation.

Treating each input pair (a_i, b_i) as an edge, we transform this into a graph problem on a graph with n vertices and n edges. d can be a permutation if and only if the edges can be oriented so that every vertex is pointed to by exactly one edge. It is easy to see that this is equivalent to the following: in every connected component, the number of edges equals the number of vertices. This can be checked with a disjoint set union or a graph traversal in O(n α(n)) or O(n) time.

To solve the counting problem, let us consider the structure of each connected component. Every component satisfying #edges = #vertices can be viewed as a tree plus one additional edge, and this additional edge falls into one of two categories.

The additional edge forms a cycle together with tree edges. On the cycle there are two options for orienting the edges (clockwise and counterclockwise), and after that the orientation of every other edge is fixed (each remaining edge points away from the cycle).

The additional edge is a self-loop. In this case the corresponding value of c does not affect the structure of the graph, so any of the n values is legal, while the orientation of all other edges is fixed.

Therefore, when the answer is non-zero, the final answer is 2^x * n^y, where x is the number of connected components whose additional edge forms a proper (non-self-loop) cycle and y is the number of components whose additional edge is a self-loop. The time complexity is O(n α(n)) or O(n).

Code (Nanako, DSU)

#include <iostream>
#include <numeric>

#define MULTI int _T; cin >> _T; while(_T--)
using namespace std;

typedef long long ll;

const int N = 1e5 + 5;


const int mod = 998244353;

int n;
int a[N], b[N];

int fa[N], cnt_v[N], cnt_e[N], selfloop[N];
int vis[N];

void init () {
    iota(fa + 1, fa + n + 1, 1);
    fill(cnt_v + 1, cnt_v + n + 1, 1);
    fill(cnt_e + 1, cnt_e + n + 1, 0);
    fill(selfloop + 1, selfloop + n + 1, 0);
    fill(vis + 1, vis + n + 1, 0);
}

int getfa (int x) {


return fa[x] == x ? x : fa[x] = getfa(fa[x]);
}

void merge (int u, int v) {
    u = getfa(u);
    v = getfa(v);
    cnt_v[u] += cnt_v[v];
    cnt_e[u] += cnt_e[v];
    selfloop[u] |= selfloop[v];
    fa[v] = u;
}

int main () {
    ios::sync_with_stdio(0); cin.tie(0);

    MULTI {
        cin >> n;
        for (int i = 1;i <= n;++i) {
            cin >> a[i];
        }
        for (int i = 1;i <= n;++i) {
            cin >> b[i];
        }
        init();

        for (int i = 1;i <= n;++i) {
            if (getfa(a[i]) != getfa(b[i])) merge(a[i], b[i]);
            cnt_e[getfa(a[i])]++;
            if (a[i] == b[i]) selfloop[getfa(a[i])] = 1;
        }

        ll ans = 1;
        for (int i = 1;i <= n;++i) if (vis[getfa(i)] == 0) {
            if (cnt_v[getfa(i)] != cnt_e[getfa(i)]) ans = 0;
            else ans = ans * (selfloop[getfa(i)] ? n : 2) % mod;
            vis[getfa(i)] = 1;
        }

        cout << ans << endl;
    }
}

Code (zengminghao, DFS)

#include <bits/stdc++.h>

using namespace std;

const int N = 1e5 + 5;

const int P = 998244353;

int n, a[N], b[N];

vector <int> G[N];

bool vis[N];

int vertex, edge, self_loop;

void dfs(int x) {

if (vis[x]) return ;

vis[x] = true;

vertex++;

for (auto y : G[x]) {

edge++;

dfs(y);

        if (y == x) {
            self_loop++;
        }
    }
}

void solve() {

scanf("%d", &n);

for (int i = 1; i <= n; i++) scanf("%d", &a[i]);

for (int i = 1; i <= n; i++) scanf("%d", &b[i]);

for (int i = 1; i <= n; i++) G[i].clear();

for (int i = 1; i <= n; i++) {
    G[a[i]].push_back(b[i]);
    G[b[i]].push_back(a[i]);
}

int ans = 1;

for (int i = 1; i <= n; i++) vis[i] = false;

for (int i = 1; i <= n; i++) {

if (vis[i]) continue ;

vertex = 0;

edge = 0;

self_loop = 0; dfs(i);

    if (edge != 2 * vertex) {
        ans = 0;
    } else if (self_loop) {
        ans = 1ll * ans * n % P;
    } else {
        ans = 1ll * ans * 2 % P;
    }
}

printf("%d\n", ans);
}

int main() {

int t;

scanf("%d", &t);

while (t--) {
    solve();
}

return 0;
}

Problem E. Koxia and Tree


Hint
Hint 1

Try to solve a classic problem - find the sum of the pairwise distances of specified nodes on the tree.

Hint 2

A move operation is added, but the direction of the edges is fixed. Find the sum of the pairwise distances of specified nodes on the tree.

Hint 2.5

If you can't solve the problem in Hint 2, try thinking about why the problem setter made each edge be traversed only once.

Hint 3

How does maintaining the probability that a butterfly is present at each node help you calculate the answer when the edges have random directions?

Solution
At first glance, we can easily think of a classic problem - finding the sum of the pairwise distances of specified nodes on a tree. For any edge, if there are cnt specified nodes on one side and k - cnt specified nodes on the other side, then cnt * (k - cnt) pairs of nodes pass through this edge. Without loss of generality, take node 1 as the root and define cnt_v as the number of specified nodes in the subtree of v. By summing the contribution of every edge we get the answer to this classic problem; after dividing by the number of pairs it also equals the expected distance between two specified nodes chosen uniformly at random.
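For reference, here is a minimal sketch of this classic subproblem (our own code and naming, independent of the official solution further below): root the tree at node 1, let cnt[v] be the number of specified nodes in the subtree of v, and sum cnt[v] * (k - cnt[v]) over all edges. The input format (n, k, the k specified nodes, then the n - 1 edges) mirrors the one used by the official code.

#include <bits/stdc++.h>
using namespace std;

const int N = 3e5 + 5;

vector<int> g[N];
long long cnt[N];          // specified (marked) nodes inside each subtree
long long pairSum = 0;     // sum of pairwise distances of the marked nodes
long long k;

void dfs(int u, int parent) {
    for (int v : g[u]) if (v != parent) {
        dfs(v, u);
        cnt[u] += cnt[v];
        pairSum += cnt[v] * (k - cnt[v]);   // pairs separated by the edge (u, v)
    }
}

int main() {
    int n;
    scanf("%d %lld", &n, &k);
    for (int i = 0; i < k; ++i) {
        int a;
        scanf("%d", &a);
        cnt[a] = 1;                          // mark the specified node
    }
    for (int i = 0; i < n - 1; ++i) {
        int u, v;
        scanf("%d %d", &u, &v);
        g[u].push_back(v);
        g[v].push_back(u);
    }
    dfs(1, 0);
    printf("%lld\n", pairSum);
    return 0;
}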

Next let's consider Hint 2 - a move operation is introduced, but the direction of each edge is fixed. Define cnt_v and cnt'_v as the number of butterflies in the subtree of v, where cnt_v is the initial value and cnt'_v is the real-time value. A very important observation is that, although the butterflies move, each edge is passed by a butterfly at most once, so the real-time value always differs from the initial value by at most 1. This property allows us to enumerate the few possible real-time values of each edge and add up their contributions, arriving at the answer with a constant amount of work per edge, provided that we maintain the butterflies' positions correctly.

When we further introduce the random directions, define p_u as "the probability that there is a butterfly at node u". Then processing an edge (u, v) amounts to replacing both p_u and p_v by their average, which allows us to easily maintain the real-time values. Similarly, by summing the same case analysis as above (but weighting each case by its probability instead of performing a concrete move), we arrive at the final answer. The total time complexity is O(n).
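Concretely, after an edge (u, v) with a uniformly random direction has been processed, the existence probabilities are updated exactly as in the line p[u[i]] = p[v[i]] = (p[u[i]] + p[v[i]]) * inv2 % mod of the code below:

$$p_u \leftarrow \frac{p_u + p_v}{2}, \qquad p_v \leftarrow \frac{p_u + p_v}{2},$$

computed modulo 998244353, where inv2 is the modular inverse of 2. Under the movement rule of the problem (a butterfly crosses the edge only if the other endpoint is empty), one can check that P(butterfly at u afterwards) = P(both occupied) + P(only u occupied) / 2 + P(only v occupied) / 2 = (p_u + p_v) / 2, and symmetrically for v.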

Code (Time)
#include <iostream>
#include <vector>
using namespace std;
typedef long long ll;

const int N = 3e5 + 5;


const int mod = 998244353;
const int inv2 = 499122177;

ll qpow (ll n, ll m) {

ll ret = 1;
while (m) {
    if (m & 1) ret = ret * n % mod;
    n = n * n % mod;
    m >>= 1;
}
return ret;
}

ll getinv (ll a) {

return qpow(a, mod - 2);


}

int n, k;

int a[N];

int u[N], v[N];

vector <int> e[N];

int fa[N];

ll p[N], sum[N];

void dfs (int u, int f) {

sum[u] = p[u];
for (int v : e[u]) if (v != f) {

dfs(v, u);

fa[v] = u;

        sum[u] += sum[v];
    }
}

int main () {

ios::sync_with_stdio(0); cin.tie(0);

cin >> n >> k;

for (int i = 1;i <= k;++i) {

cin >> a[i];

p[a[i]] = 1;
}

for (int i = 1;i <= n - 1;++i) {

cin >> u[i] >> v[i];

    e[u[i]].push_back(v[i]);
    e[v[i]].push_back(u[i]);
}

dfs(1, -1);

ll ans = 0;

for (int i = 1;i <= n - 1;++i) {
    if (fa[u[i]] == v[i]) swap(u[i], v[i]);

    ll puv = p[u[i]] * (1 - p[v[i]] + mod) % mod;
    ll pvu = p[v[i]] * (1 - p[u[i]] + mod) % mod;

    ll delta = 0;
    delta -= puv * sum[v[i]] % mod * (k - sum[v[i]]) % mod;
    delta -= pvu * sum[v[i]] % mod * (k - sum[v[i]]) % mod;
    delta += puv * (sum[v[i]] + 1) % mod * (k - sum[v[i]] - 1) % mod;
    delta += pvu * (sum[v[i]] - 1) % mod * (k - sum[v[i]] + 1) % mod;

    ans = (ans + sum[v[i]] * (k - sum[v[i]]) + delta * inv2) % mod;
    ans = (ans % mod + mod) % mod;

    p[u[i]] = p[v[i]] = 1ll * (p[u[i]] + p[v[i]]) * inv2 % mod;
}

cout << ans * getinv(1ll * k * (k - 1) / 2 % mod) % mod << endl;
}

Problem F. Koxia and Sequence


Hint
Hint 1

By symmetry, for any non-negative integer v, the number of good sequences whose first element equals v, the number whose second element equals v, ..., are all the same.

Hint 2

Try to consider the contribution of each bit to the answer independently.

Hint 3

Due to the nature of bitwise XOR, we can do the counting modulo 2.

Hint 4

Counting the choices whose bitwise OR is exactly a given mask is difficult, but counting the choices whose bitwise OR is a submask of it is relatively simple. Therefore, consider inclusion-exclusion.

Hint 5

Consider Lucas ' theorem /Kummer's theorem.

Hint 6

"There are m + n balls in total. Count the pairs of non-negative integers (i, j) with i + j = p such that i balls are chosen from the first m balls and j balls from the last n balls." Since this is equivalent to simply choosing p balls out of all m + n balls, the answer to this question is C(m + n, p). This identity is the Vandermonde identity.
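In symbols, with the same letters as above, the identity reads:

$$\sum_{i + j = p} \binom{m}{i} \binom{n}{j} = \binom{m + n}{p}.$$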

Solution
Let f(v) denote the number of good sequences in which a fixed position equals v. Because of the symmetry from Hint 1, every position behaves in the same way, so if n is even every value contributes an even number of times and the answer is 0. Otherwise, the answer is the XOR of all values v for which this count is odd. Considering each bit independently, we transform the problem into: for each bit, compute the parity of the number of good sequences whose fixed element has that bit set.

Counting the sequences whose bitwise OR is exactly b is difficult, so let us instead count the sequences whose bitwise OR is a submask of a given mask. Using inclusion-exclusion, and since we only care about parity, one can prove by induction that the answer to the original question is the XOR of the answers over all submasks of b. Therefore, the new goal is: for each submask of b and each bit, compute the parity of the corresponding count.

According to a corollary of Lucas' theorem (or of Kummer's theorem), a binomial coefficient C(x, y) is odd if and only if y is a submask of x in binary. Combining this with the Vandermonde identity, the count for a fixed submask collapses, modulo 2, into a single subset check on binomial coefficients - this is exactly the isSub test in the code below. Similar to before, the problem becomes: for each submask sub of b and each bit contained in sub, evaluate this subset condition.

To sum up, by enumerating the submasks of b and the bits, we obtain the time complexity of the short code below (roughly the number of submasks of b times the number of bits).



Code (errorgorn)

#include <bits/stdc++.h>

using namespace std;

#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>

#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl

#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound

#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()

mt19937 rng(chrono::system_clock::now().time_since_epoch().count());

int n,a,b;

bool isSub(int i,int j){
    if (i<0 || j<0) return false;
    return (j&i)==i;
}

signed main(){
    ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);

cin>>n>>a>>b;

int ans=0;
for (int sub=b;sub;sub=(sub-1)&b) rep(bit,0,20) if (sub&(1<<bit)){
if (isSub(a-(1<<bit),n*sub-(1<<bit))){
ans^=(1<<bit);
}

}

cout<<ans*(n%2)<<endl;
}

Problem G. Koxia and Bracket


Hint
Hint 1

What special properties does the deleted bracket sequence have?

Hint 2

Try using DP solves this problem.

Hint 3

If there were no regularity requirement, could many brackets be processed quickly in a single batch?

Hint 4

Can you combine the previous idea with divide and conquer?

Solution

Solution 1
Let us consider what properties the deleted subsequence of parentheses has.

First, it must be a subsequence of the form ) ) ... ) ( ( ... ( , i.e. some number of ) followed by some number of ( . The proof is simple: if a deleted ) lies to the right of a deleted ( , then we can keep this () pair without destroying the regularity of the remaining sequence.

This property means that we can divide the original sequence into two parts, deleting only ) in the left part and only ( in the right part. Now let us find the dividing point between the two parts: consider the prefix sums of the bracket sequence, where each ( is replaced by 1 and each ) by -1.

We define a position as Special if and only if the prefix sum at this position is smaller than the minimum of all earlier prefix sums. It is not difficult to see that whenever a Special position appears, we must delete one additional ) at or before this position so that the bracket sequence can satisfy the condition again.

Considering the above ideas, we find that ) only ever needs to be deleted up to the farthest Special position, so we can use this position as the dividing point.
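Here is a minimal sketch (our own code and naming; the complete solution is the code at the end of this section) of the bookkeeping described above: it marks the Special positions and computes the dividing point as the position where the prefix sum attains its global minimum.

#include <bits/stdc++.h>
using namespace std;

int main() {
    string s;
    cin >> s;
    int n = s.size();

    // pref[i] = prefix sum over the first i brackets, with '(' = +1 and ')' = -1
    vector<int> pref(n + 1, 0);
    for (int i = 0; i < n; ++i)
        pref[i + 1] = pref[i] + (s[i] == '(' ? 1 : -1);

    // a position is Special iff its prefix sum is smaller than every earlier one
    vector<int> special(n, 0);
    int mn = 0;
    for (int i = 0; i < n; ++i)
        if (pref[i + 1] < mn) { mn = pref[i + 1]; special[i] = 1; }

    // dividing point: where the prefix sum attains its global minimum;
    // only ')' are deleted in s[0, pos) and only '(' in s[pos, n)
    int pos = min_element(pref.begin(), pref.end()) - pref.begin();
    cout << "dividing point: " << pos << "\n";
    return 0;
}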

We now handle the two sides separately. It is worth pointing out that the two subproblems are essentially the same, since the problem of deleting only ( can be transformed into the problem of deleting only ) by reversing the string and swapping the two bracket types (this is exactly what the code at the end of this section does).

For the "delete only )" subproblem, a sufficient condition for the remaining sequence to be regular is that, after the deletions, every value of the prefix sum is non-negative. Following the above ideas, we design the DP state f(i, j): the number of plans in which, after considering the i-th ) and while satisfying the prefix-sum restriction, j additional ) have been deleted.


Multiplying the number of plans obtained for the "delete )" part by the number of plans for the "delete (" part gives the answer. The time complexity is quadratic; even a well-optimized implementation needs about 9 seconds, which is not enough to pass this problem.
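Below is our sketch of this quadratic DP, reconstructed from the leaf step of the divide-and-conquer code at the end of the section (a hypothetical helper, not the setters' code). It processes the ')' characters of the left part one by one; v holds one flag per ')', telling whether that ')' sits at a Special position (as computed in the previous sketch).

#include <bits/stdc++.h>
using namespace std;

// f[j] = number of ways to have deleted j of the ')' scanned so far,
// subject to: by every Special position the mandatory deletions must have been made.
long long solveLeftQuadratic(const vector<int> &v) {
    const long long MOD = 998244353;
    vector<long long> f = {1};
    int need = 0;                            // Special positions seen so far
    for (int special : v) {
        vector<long long> g(f.size() + 1, 0);
        for (size_t j = 0; j < f.size(); ++j) {
            g[j] = (g[j] + f[j]) % MOD;      // keep this ')'
            g[j + 1] = (g[j + 1] + f[j]) % MOD;  // delete this ')'
        }
        f.swap(g);
        if (special) {
            ++need;
            f[need - 1] = 0;                 // too few deletions so far: invalid
        }
    }
    return f[need];                          // delete exactly the required minimum
}

int main() {
    // v for the string "))((": both leading ')' reach new prefix minima
    vector<int> v = {1, 1};
    cout << solveLeftQuadratic(v) << "\n";   // prints 1
    return 0;
}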

Solution 2

Let us consider how the transfer can be optimized further in the absence of Special positions. For a state, after processing a batch of ) the transfer takes the shape of a convolution of the current DP array with a binomial-coefficient sequence. Therefore we can batch the transfer and evaluate the convolution with NTT; a single batched operation is cheap, but because of the Special positions the worst-case overall complexity of this approach is still essentially quadratic.

Consider how this can be combined with the DP in practice. For a state and a batch of ) characters, consider their contribution in pairs. We find that if a state already has enough surplus deletions compared with the number of Special positions in the batch, then its transition is not affected by the Special positions at all.

Based on this idea, we can adopt a blocking method with periodic rebuilding: fix a rebuilding period; within one period, the states that may still be affected by Special positions are processed with the plain DP, while the remaining states are batched, and at the end of the period NTT is used to apply the batched convolution to them.

By choosing the period appropriately, the time complexity is reduced considerably; although it is still high, together with the low constant factor of NTT it is enough to pass this problem.

Solution 3
Consider combining the idea of extracting part of the states for NTT with divide and conquer. Assume that the interval to be processed is [l, r] and that a DP polynomial is passed in together with it; we perform the following operations:

Count the number of Special positions in the interval; extract that many low-order states from the polynomial, and convolve the remaining part of the polynomial with the binomial polynomial of the whole current interval in one batch.

Pass the extracted low-order part of the polynomial into the left half of the interval and continue the procedure there; then pass the result into the right half and continue.

Add the polynomials obtained in the two steps above and return the resulting polynomial.

How can we bound the time complexity of the above procedure? Let us analyze the polynomials passed into the left half and the right half separately:

When recursing into the left half, the size of the polynomial that is convolved there with NTT equals the number of Special positions of the whole interval minus the number in the left half, i.e. the number of Special positions contained in the right half. This number does not exceed the length of the right half.

When recursing into the right half, the size of the polynomial does not exceed the length of the left half.

At the same time, the length of the binomial polynomial that is multiplied in is the interval length plus 1.

To sum up, the sizes of the two polynomials involved in NTT operations within an interval do not exceed the interval length plus 1. Therefore the time complexity of this approach is that of a divide-and-conquer NTT, namely O(n log^2 n).

Code (errorgorn)

#include <bits/stdc++.h>

#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/rope>

using namespace std;


using namespace __gnu_pbds;
using namespace __gnu_cxx;

#define int long long
#define ll long long
#define ii pair<ll,ll>
#define iii pair<ii,ll>
#define fi first

#define se second
#define endl '\n'

#define debug(x) cout << #x << ": " << x << endl

#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound

#define rep(x,start,end) for(auto x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()

#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>

//change less to less_equal for non distinct pbds, but erase will bug

mt19937 rng(chrono::system_clock::now().time_since_epoch().count());

const int MOD=998244353;

ll qexp(ll b,ll p,int m){


ll res=1;
while (p){
if (p&1) res=(res*b)%m;
b=(b*b)%m;
        p>>=1;
    }

    return res;
}

ll inv(ll i){
return qexp(i,MOD-2,MOD);
}

ll fix(ll i){
i%=MOD;
if (i<0) i+=MOD;
return i;
}

ll fac[1000005];
ll ifac[1000005];

ll nCk(int i,int j){


if (i<j) return 0;
return fac[i]*ifac[j]%MOD*ifac[i-j]%MOD;
}

// https://github.com/kth-competitive-programming/kactl/blob/main/content/numerical/NumberTheoreticTransform.h
const ll mod = (119 << 23) + 1, root = 62; // = 998244353
// For p < 2^30 there is also e.g. 5 << 25, 7 << 26, 479 << 21
// and 483 << 21 (same root). The last two are > 10^9.
typedef vector<int> vi;
typedef vector<ll> vl;
void ntt(vl &a) {
    int n = sz(a), L = 31 - __builtin_clz(n);
    static vl rt(2, 1);
    for (static int k = 2, s = 2; k < n; k *= 2, s++) {
        rt.resize(n);
        ll z[] = {1, qexp(root, mod >> s, mod)};
        rep(i,k,2*k) rt[i] = rt[i / 2] * z[i & 1] % mod;
    }
    vi rev(n);
    rep(i,0,n) rev[i] = (rev[i / 2] | (i & 1) << L) / 2;
    rep(i,0,n) if (i < rev[i]) swap(a[i], a[rev[i]]);
    for (int k = 1; k < n; k *= 2)
        for (int i = 0; i < n; i += 2 * k) rep(j,0,k) {
            ll z = rt[j + k] * a[i + j + k] % mod, &ai = a[i + j];
            a[i + j + k] = ai - z + (z > ai ? mod : 0);
            ai += (ai + z >= mod ? z - mod : z);
        }
}

vl conv(const vl &a, const vl &b) {


if (a.empty() || b.empty()) return {};

int s = sz(a) + sz(b) - 1, B = 32 - __builtin_clz(s), n = 1 << B; int inv = qexp(n, mod - 2, mod);

vl L(a), R(b), out(n);


L.resize(n), R.resize(n);
ntt(L), ntt(R);
rep(i,0,n) out[-i & (n - 1)] = (ll)L[i] * R[i] % mod * inv % mod;
ntt(out);
return {out.begin(), out.begin() + s};
}

vector<int> v;

vector<int> solve(int l,int r,vector<int> poly){


if (poly.empty()) return poly;

if (l==r){
poly=conv(poly,{1,1});
        poly.erase(poly.begin(),poly.begin()+v[l]);
        return poly;
    }

    int m=l+r>>1;
int num=0;
rep(x,l,r+1) num+=v[x];
num=min(num,sz(poly));

vector<int> small(poly.begin(),poly.begin()+num);
poly.erase(poly.begin(),poly.begin()+num);

vector<int> mul;
rep(x,0,r-l+2) mul.pub(nCk(r-l+1,x)); poly=conv(poly,mul);

small=solve(m+1,r,solve(l,m,small));
poly.resize(max(sz(poly),sz(small)));
rep(x,0,sz(small)) poly[x]=(poly[x]+small[x])%MOD;

return poly;
}

int solve(string s){


if (s=="") return 1;
v.clear();

int mn=0,curr=0;
for (auto it:s){
if (it=='(') curr++;
else{
curr--;

if (curr<mn){
mn=curr;
v.pub(1);
}

else{
v.pub(0);
            }
        }
    }

    return solve(0,sz(v)-1,{1})[0];
}

int n;
string s;
int pref[500005];

signed main()
{ ios::sync_with_stdio(0); cin.tie(0);

cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);

fac[0]=1;
rep(x,1,1000005) fac[x]=fac[x-1]*x%MOD;
ifac[1000004]=inv(fac[1000004]);
rep(x,1000005,1) ifac[x-1]=ifac[x]*x%MOD;

cin>>s;
n=sz(s);
pref[0]=0;
rep(x,0,n) pref[x+1]=pref[x]+(s[x]=='('?1:-1);

int pos=min_element(pref,pref+n+1)-pref;
string a=s.substr(0,pos),b=s.substr(pos,n-pos);
reverse(all(b)); for (auto &it:b) it^=1;
cout<<solve(a)*solve(b)%MOD<<endl;
}

Problem H. Koxia, Mahiru and Winter Festival


Hint
Hint 1

Under what circumstances can the minimum congestion be 1?

Hint 2

Suppose you have a black box that gives you a solution for a grid of size n - 2; try to give a solution for a grid of size n.

Preface

This is a special case of congestion minimization. The general version of this problem is NP-hard, but the special structure of this problem makes it solvable.

Apart from one special situation, the minimum congestion is 2: that it cannot be 1 in the remaining cases can be proven with the pigeonhole principle, since the sum of the minimum lengths of all routes would exceed the total number of edges, so some edge would have to be traveled more than once.

Our goal now is to construct a routing plan whose congestion is at most 2. We will show that this is possible for arbitrary input data. We first show some pictures as a rough sketch of the idea, and refine the details afterwards.

Solution (sketch)
[The sketch figures are not reproduced in this transcription.]

Solution (details)
This approach is based on induction. The base cases n = 0 and n = 1 are trivial. We assume that we can solve every case where the grid size is n - 2, and treat that as a black box when solving the case where the grid size is n.

For the case where the grid size is n, we first connect the following routes using only the outermost edges: using the left and bottom edges, connect the top-to-bottom route that starts at the top-left corner; using the right and bottom edges, connect the top-to-bottom route that starts at the top-right corner; using the top and right edges, connect the left-to-right route that departs from the top-left corner; using the left and top edges, connect one more left-to-right route (in the implementation below, the one with the smallest destination). If this route connects the same pair of points as the previous route, we use the left, top, and right edges to connect any left-to-right route.

So far, two top-to-bottom routes and two left-to-right routes have been connected. For the remaining routes, we only need to move their starting and ending points one step closer to the center, maintaining their relative order. In this way we reduce the original problem to a problem of size n - 2, which we already know how to solve.

Code (SteamTurbine)

#include <bits/stdc++.h>
#define FOR(i,s,e) for (int i=(s); i<(e); i++)
#define FOE(i,s,e) for (int i=(s); i<=(e); i++)
#define FOD(i,s,e) for (int i=(s)-1; i>=(e); i--)
#define PB push_back
using namespace std;

struct Paths{
/* store paths in order */

vector<vector<pair<int, int>>> NS, EW;

Paths(){

NS.clear();

        EW.clear();
    }
};

Paths solve(vector<int> p, vector<int> q){

int n = p.size();
Paths Ret;

Ret.NS.resize(n);

Ret.EW.resize(n);

// Base case

if (n == 0) return Ret;

if (n == 1){

Ret.NS[0].PB({1, 1});

Ret.EW[0].PB({1, 1});

        return Ret;
    }

    // Route NS flow originating from (1, 1) and (1, n) using leftmost and rightmost edges

FOE(i,1,n){

Ret.NS[0].PB({i, 1});

        Ret.NS[n-1].PB({i, n});
    }

    // Routing to final destination using bottom edges
    FOE(i,2,p[0]) Ret.NS[0].PB({n, i});
    FOD(i,n,p[n-1]) Ret.NS[n-1].PB({n, i});

    // Create p'[] for n-2 instance
    vector<int> p_new(0);
    FOE(i,1,n-2) p_new.PB(p[i] - (p[i]>p[0]) - (p[i]>p[n-1]));

    // Route EW flow originating from (1, 1) using topmost and rightmost edges

FOE(i,1,n) Ret.EW[0].PB({1, i});

FOE(i,2,q[0]) Ret.EW[0].PB({i, n});

// Route EW flow originating in (m, 1) with q[m] as small as possible

int m = 1;

    // special handle so congestion is 1 if possible
    if (p[0] == 1 && p[n-1] == n && q[0] == 1 && q[n-1] == n){


m = n - 1;

FOE(i,1,n) Ret.EW[n-1].PB({n, i});

}

else{

FOR(i,1,n) if (q[i] < q[m]) m = i;

// Route(m+1, 1) --> (1, 1) --> (1, n) --> (q[m], n)

FOD(i,m+2,2) Ret.EW[m].PB({i, 1});

FOR(i,1,n) Ret.EW[m].PB({1, i});

FOE(i,1,q[m]) Ret.EW[m].PB({i, n});


}

    // Create q'[] for n-2 instance
    vector<int> q_new(0);
    FOR(i,1,n) if (i != m) q_new.PB(q[i] - (q[i]>q[0]) - (q[i]>q[m]));

if (n > 1){

Paths S = solve(p_new, q_new); int t;

// connect NS paths

FOR(i,1,n-1){

Ret.NS[i].PB({1, i+1});

            for (auto [x, y]: S.NS[i-1]){
                Ret.NS[i].PB({x+1, y+1});
                t = y + 1;
            }

            Ret.NS[i].PB({n, t});

            if (p[i] != t) Ret.NS[i].PB({n, p[i]});
        }

// connect EW paths
int l = 0;

FOR(i,1,n) if (i != m){

Ret.EW[i].PB({i+1, 1});

if (i > m) Ret.EW[i].PB({i, 1});

            for (auto [x, y]: S.EW[l]){
                Ret.EW[i].PB({x+1, y+1});
                t = x + 1;
            }

            Ret.EW[i].PB({t, n});

            if (q[i] != t) Ret.EW[i].PB({q[i], n});

            ++l;
        }
    }

    return Ret;
}

int main(){

int n;
vector<int> p, q;

scanf("%d", &n);
    p.resize(n), q.resize(n);
    FOR(i,0,n) scanf("%d", &p[i]);
    FOR(i,0,n) scanf("%d", &q[i]);

Paths Solution = solve(p, q);

for (auto path: Solution.NS){


        printf("%d", (int)path.size());
        for (auto [x, y]: path) printf(" %d %d", x, y);
        puts("");
    }

    for (auto path: Solution.EW){


        printf("%d", (int)path.size());
        for (auto [x, y]: path) printf(" %d %d", x, y);
        puts("");
    }

return 0;
}
