Editorial
Machine Translated by Google
Hint 1
Hint 2
After all the operations are completed, it will always be among the final numbers.
Hint 3
Consider the situation at that moment. According to Hint 2, it is always among the final numbers; what about the others?
Solution
This problem can be solved naturally with a greedy algorithm: for each operation, we replace the current smallest element. Using a priority queue, each operation takes logarithmic time.
Another approach is to first add the value of the last operation to the final sum; the other summands can then be chosen freely, and we maximize their sum. Therefore sorting, or a similar technique, solves this problem within the intended time complexity, which is also the solution expected by the setter.
Code (m_99)
#include <bits/stdc++.h>
using namespace std;
#define rep(i,n) for (int i = 0; i < (n); ++i)
#define Inf32 1000000001
#define Inf64 4000000000000000001

int main(){
	int _t;
	cin>>_t;
	rep(_,_t){
		int n,m;
		cin>>n>>m;
		vector<long long> a(n+m);
		rep(i,n+m)scanf("%lld",&a[i]);
		// keep the value of the last operation in place and sort the rest
		sort(a.begin(),a.end()-1);
		reverse(a.begin(),a.end());
		// after the reverse, a[0] is the last operation's value and a[1..]
		// holds the remaining numbers in decreasing order, so the answer
		// is the sum of the first n elements
		long long ans = 0;
		rep(i,n) ans += a[i];
		cout<<ans<<endl;
	}
	return 0;
}
Hint 2
Solution
, each permutation will have the same cost.
When , the minimum cost will be at least . This is because there will be at least one interval that contains and contributes the cost to
the part, and the part at this time will contribute at least the cost.
The minimum cost does indeed always reach this lower bound, and we can construct
permutations of the form such that the cost is , regardless of the value of .
The time complexity required for each set of test data is. With careful implementation, other constructions may yield satisfactory answers.
Code (Time)
#include <iostream>
#define MULTI int _T; cin >> _T; while(_T--)
using namespace std;

int n, k;

int main () {
	ios::sync_with_stdio(0); cin.tie(0);
	MULTI {
		cin >> n >> k;
		int l = 1, r = n, _ = 1;
		while (l <= r) cout << ((_ ^= 1) ? l++ : r--) << ' ';
		cout << endl;
	}
}
Hint 2
If they are pairwise different, can you construct an example so that the answer is NO ?
Hint 3
Maybe you're thinking about properties like parity? Try to generalize your ideas.
Hint 3.5
Hint 4
How many prime numbers should we check? Consider the pigeonhole principle.
Solution
First of all, the numbers should be pairwise distinct; equal elements remain equal after the operation, so that situation is always NO.
The required condition is equivalent to "each prime divides at most one of the numbers". So, for a single prime, what can we determine? Consider the array in the modulo-2 sense: if both residues 0 and 1 each occur at least twice, then whatever value we add, at least two of the numbers will be even and therefore share the factor 2.
This idea can also be generalized to larger primes: for a prime, consider the array modulo that prime. If, for every prime, some residue class occurs at most once, then we can write down a system of congruences and use the Chinese remainder theorem to find a suitable value; if there is at least one prime for which every residue class occurs at least twice, then any value will result in at least two numbers divisible by that prime. Although there are many primes, we only need to check the primes up to roughly half the array length: by the pigeonhole principle, the bad condition cannot be satisfied for larger primes, since some residue class would have to occur at most once. As there are only a few such primes, this problem is solved within the intended time complexity.
Code (Time)
#include <iostream>
#include <algorithm>
#define MULTI int _T; cin >> _T; while(_T--)
using namespace std;
typedef long long ll;
const int N = 2000006; // array bound (assumed; not preserved in the text)
int n;
ll a[N];
int cnt[N];
int main () {
ios::sync_with_stdio(0);
cin.tie(0);
MULTI {
cin >> n;
for (int i = 1;i <= n;++i) {
cin >> a[i];
}
int isDistinct = 1;
sort(a + 1, a + n + 1);
for (int i = 1;i <= n - 1;++i) {
if (a[i] == a[i + 1]) isDistinct = 0;
}
if (isDistinct == 0) {
cout << "NO" << endl;
continue;
}
}
Hint 2
Supposing it is already determined, try to design an algorithm that checks whether a choice exists so that Koxia wins.
Hint 2.5
If you can't solve the problem in Hint 2, try thinking about how it relates to graph theory.
Hint 3
Try to discuss the structure of each connected component in the graph to calculate the quantity.
Solution
First, let's consider how a proper array can make Koxia win.
Lemma 1: In each round of the game, Koxia should remove an element so that the two remaining elements are
the same; that is, Mahiru's choice does not actually affect the outcome.
In the first round, if Koxia leaves two different elements, Mahiru can always block the formation of a permutation with one of her choices, since the final sequence must be a permutation. After the first element is decided, a similar discussion applies to the following rounds, and we conclude that Koxia can only win if there is exactly one effective choice for Mahiru in each round of the game.
Lemma 2 Let be an array of length , where is one of and . Koxia wins if and only if a permutation is possible.
According to Lemma 1, when a valid permutation is possible, Koxia can force Mahiru's choice in every round, so Koxia wins. If no permutation is possible, we can use a reduction similar to Lemma 1 to prove that there is no array such that Koxia wins.
Treating the input pairs as edges, we transform this problem into a graph theory problem. A valid assignment can form a permutation if and only if the edges can be directed so that each node is pointed to by exactly one edge. It is easy to see that this is equivalent to requiring that, in each connected component, the number of edges equals the number of vertices.
To solve the counting problem, consider the structure of each connected component: each such component can be viewed as a tree plus one additional edge, and this additional edge falls into one of two categories.
The additional edge forms a cycle with tree edges. On the cycle, there are two options for orienting the edges (clockwise and counterclockwise); after that, the orientation of every other edge is fixed (each points away from the cycle).
The additional edge is a self-loop. Its value does not affect the structure of the graph, so any of the possible values is legal, while the orientation of all other edges is fixed.
Therefore, when the answer is non-zero, the final answer is a product over the connected components: a factor of 2 for each component whose extra edge is not a self-loop, and a factor of n for each component whose extra edge is a self-loop.
#include <iostream>
#include <numeric>
#define MULTI int _T; cin >> _T; while(_T--)
using namespace std;
int n;
int a[N], b[N];
void init () {
iota(fa + 1, fa + n + 1, 1);
selfloop[u] |= selfloop[v];
fa[v] = u;
}
int main () {
ios::sync_with_stdio(0); cin.tie(0);
MULTI {
cin >> n;
for (int i = 1;i <= n;++i) {
init();
ll ans = 1;
selfloop[getfa(i)] = 1;
ans = ans * (selfloop[getfa(i)] ? n : 2) % mod;
#include <bits/stdc++.h>
using namespace std;
const int N = 100005;    // array bound (assumed)
const int P = 998244353; // modulus (assumed)

int n;
int a[N], b[N];
vector<int> G[N];
bool vis[N];
int vertex, edge, self_loop;

void dfs(int x) {
	if (vis[x]) return ;
	vis[x] = true;
	vertex++;
	for (int y : G[x]) {
		edge++;
		if (y == x) {
			self_loop++;
		}
		dfs(y);
	}
}

void solve() {
	scanf("%d", &n);
	for (int i = 1; i <= n; ++i) {
		scanf("%d%d", &a[i], &b[i]);
		G[a[i]].push_back(b[i]); G[b[i]].push_back(a[i]);
	}
	int ans = 1;
	for (int i = 1; i <= n; ++i) {
		if (vis[i]) continue ;
		vertex = 0;
		edge = 0;
		self_loop = 0; dfs(i);
		// edge counts every adjacency entry, i.e. twice the number of edges
		if (edge != 2 * vertex) {
			ans = 0;
		} else if (self_loop) {
			ans = 1ll * ans * n % P;
		} else {
			ans = ans * 2 % P;
		}
	}
	printf("%d\n", ans);
	for (int i = 1; i <= n; ++i) { vis[i] = false; G[i].clear(); } // reset between test cases
}

int main() {
	int t;
	scanf("%d", &t);
	while (t--) {
		solve();
	}
	return 0;
}
Try to solve a classic problem - find the sum of the pairwise distances of specified nodes on the tree.
Hint 2
A move operation is added, but the direction of the edges is fixed. Find the sum of the pairwise distances of specified nodes on the tree.
Hint 2.5
If you can't solve the problem in Hint 2, try thinking about why the setter required each edge to be traversed at most once.
Hint 3
How does maintaining the probability that a butterfly is present at each node help you calculate the answer when the edges have random directions?
Solution
At first glance, we can easily think of a classic problem: finding the sum of pairwise distances between specified nodes on a tree. For any edge, the number of pairs whose path passes through it is the product of the numbers of specified nodes on its two sides. Without loss of generality, we take a node as the root and define, for each vertex, the number of specified nodes in its subtree. Summing the contribution of each edge gives the answer to this question, which after division by the number of pairs also equals the expected distance between two specified nodes.
Next, let us consider Hint 2: there is a move operation, but the directions of the edges are fixed. For each subtree, define both the initial and the real-time number of butterflies it contains. A very important observation is that, although butterflies move, each edge is crossed by a butterfly at most once, so the real-time count of a subtree differs from its initial count by at most one. This property lets us enumerate the possible real-time values and combine them into the answer in constant time per edge, provided we maintain the butterflies' positions correctly.
When we further introduce random directions, if we define, for each node, the probability that a butterfly is currently there, then a move from one node to another becomes an update of these probabilities, which lets us easily maintain the real-time expectations. Similarly, by enumerating the same cases, but weighting each case by its probability rather than performing a concrete move, we arrive at the final answer within the intended time complexity.
Code (Time)
#include <iostream>
#include <vector>
using namespace std;
typedef long long ll;
ll qpow (ll n, ll m) {
	ll ret = 1;
	while (m) {
		if (m & 1) ret = ret * n % mod;
		n = n * n % mod;
		m >>= 1;
	}
	return ret;
}
ll getinv (ll a) { return qpow(a, mod - 2); }
int n, k;
int a[N];
int fa[N];
ll p[N], sum[N];
void dfs (int u, int f) {
	sum[u] = p[u];
for (int v : e[u]) if (v != f) {
dfs(v, u);
fa[v] = u;
		sum[u] += sum[v];
	}
}
int main () {
ios::sync_with_stdio(0); cin.tie(0);
p[a[i]] = 1;
}
e[u[i]].push_back(v[i]); e[v[i]].push_back(u[i]);
dfs(1, -1);
ll ans = 0;
ll delta = 0;
}
By symmetry, for any non-negative integer, the number of good sequences in the two symmetric cases is the same.
Hint 2
Hint 3
Due to the nature of bitwise XOR, we can compute the answer bit by bit, working modulo 2.
Hint 4
Counting the number of alternatives for which bitwise OR is exactly equal is difficult, but counting the number of alternatives for which bitwise OR is a subset is relatively simple.
Hint 5
Hint 6
"There are two piles of balls in total. Consider, summed over pairs of non-negative integers, the number of ways of choosing some balls from the first pile and the remaining required balls from the second pile." Since this is equivalent to choosing the required number of balls directly from all the balls together, the two counts are equal. This identity is the Vandermonde identity.
Solution
Let us count, for each value, the number of good sequences. If n is even, the answer is 0. Otherwise, the answer is the XOR sum of those values for which the number of good sequences is odd. Considering each bit independently, we transform the problem into computing, for each bit, the count modulo 2.
Counting the sequences whose bitwise OR is exactly a given value is difficult, so consider instead the count when the bitwise OR is a subset of a given mask; by induction over subsets we can recover the answer to the original question, since each case is the XOR sum over the subsets that satisfy the condition. The new goal is therefore to compute, for each mask, this subset count modulo 2.
According to a corollary of Lucas's theorem (or Kummer's theorem), a binomial coefficient is odd exactly when the lower index is a bitwise subset of the upper index; if some factor fails the subset condition then, considering the Vandermonde identity in the modulo-2 sense, the product is even as well. Similar to before, this transforms the problem into: for each bit, check a single subset condition.
Code (errorgorn)
#include <bits/stdc++.h>
#define fi first
#define se second
#define endl '\n'
#define int long long
#define rep(x,s,e) for (int x = (s); x < (e); ++x)
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());

int n,a,b;

// C(y, x) is odd iff x is a bitwise subset of y (corollary of Lucas's theorem)
bool isSub(int x, int y){ return (x & y) == x; }
signed main()
{ ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
cin>>n>>a>>b;
int ans=0;
for (int sub=b;sub;sub=(sub-1)&b) rep(bit,0,20) if (sub&(1<<bit)){
if (isSub(a-(1<<bit),n*sub-(1<<bit))){
ans^=(1<<bit);
}
}
cout<<ans*(n%2)<<endl;
}
Hint 2
Hint 3
If there were no regularity requirement, could multiple brackets be processed quickly with one operation?
Hint 4
Can you combine the previous idea with divide and conquer?
Solution
Let us consider what properties the deleted subsequence of parentheses has.
First, it must be a bracket sequence of the form ))...)((...( , i.e., some number of ) followed by some number of ( . The proof is simple: if a deleted ) lies to the right of a deleted ( , then we can keep this pair of () instead.
This property means that we can divide the original sequence into two parts, deleting only ) in the left part and only ( in the right part. Now let us try to find the dividing point between the two parts: consider the prefix sums of the bracket sequence, where each ( is replaced by 1 and each ) is replaced by -1.
We define a position as Special if and only if the prefix sum at this position is smaller than every earlier prefix sum. It is not difficult to see that whenever a Special position appears, we must delete one additional ) at or before this position so that the bracket sequence meets the conditions again.
Considering the above ideas, we can see that ) may only be deleted up to the farthest Special position, so we can use this position as the dividing point.
We now address the two sides separately. It is worth pointing out that they are essentially the same problem, since we can transform the problem of only deleting ( into a problem of only deleting ) . For example, removing only ( from (()((()())) is equivalent to removing only ) from (()()))()) .
For the deletion-only-) problem, a sufficient condition for the sequence to be regular is that, after the operation is completed, every prefix sum stays non-negative.
Following the above ideas, we design a DP state describing, when considering the i-th ) , the number of ways subject to the prefix-sum restrictions. The counts obtained for deleting ) in the left part and deleting ( in the right part are multiplied together to get the answer. Even a well-optimized implementation of this DP takes about 9 seconds, which is not enough to pass this problem.
Solution
Let us consider how the transition can be further optimized when no Special position is involved. We find that the transition equation has the form of a convolution, so we can optimize it with NTT; however, because of the Special positions, the worst-case global complexity of applying this directly is still too high.
Consider how this can be combined with the previous approach. For the states and the individual ) s, we consider their contributions: if a suitable gap condition is satisfied, the state transition is not affected by any Special position. Based on this, we can adopt a blocking method with periodic reconstruction: fix a rebuild period; within one period, handle one part with the direct DP and batch the other part with NTT. Although the time complexity is still high, the constant factors are low.
Solution
Consider combining the NTT batching idea with divide and conquer. Suppose the interval that currently needs to be processed, together with the DP polynomial passed into it, is given; we perform the following operations:
Count the number of Special positions in the interval, extract the corresponding state coefficients from the polynomial, and convolve them with the combinatorial part.
Pass the corresponding state part of the polynomial into the sub-interval, continue the operation there, and collect the result.
Directly add the polynomials obtained in the above two steps and return the resulting polynomial.
How do we bound the cost of the above operations? Let us analyze the polynomials passed into the left and right sub-intervals separately:
When entering the left interval, the size of the polynomial used for the NTT is bounded by the number of Special positions contained in the interval.
The part passed into the right interval corresponds to the Special positions of the whole interval minus those of the left interval, that is, the Special positions contained in the right interval; this number does not exceed the length of the right interval. Moreover, the size of the polynomial passed into the right interval does not exceed the length of the left interval.
Meanwhile, the length of the combinatorial polynomial being multiplied in is the interval length + 1.
To sum up, the sizes of the two polynomials entering NTT within an interval do not exceed the interval length + 1. Therefore, the time complexity of this approach matches that of standard divide and conquer with NTT.
Code (errorgorn)
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;
typedef long long ll;
const ll MOD = 998244353; // assumed; matches the NTT modulus below
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define rep(x,s,e) for (int x = (s); x < (e); ++x)
#define sz(x) ((int)(x).size())
#define pub push_back
#define indexed_set tree<ll,null_type,less<ll>,rb_tree_tag,tree_order_statistics_node_update>
//change less to less_equal for non distinct pbds, but erase will bug
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
ll qexp (ll b, ll e, ll m){
	ll res = 1;
	while (e){
		if (e & 1) res = res * b % m;
		b = b * b % m;
		e >>= 1;
	}
	return res;
}
ll inv(ll i){
return qexp(i,MOD-2,MOD);
}
ll fix(ll i){
i%=MOD;
if (i<0) i+=MOD;
return i;
}
ll fac[1000005];
ll ifac[1000005];
ll nCk (ll n, ll k){
	if (k < 0 || k > n) return 0;
	return fac[n] * ifac[k] % MOD * ifac[n - k] % MOD;
}
//https://github.com/kth-competitive-programming/kactl/blob/main/content/numerical/NumberTheoreticTransform.h
const ll mod = (119 << 23) + 1, root = 62; // = 998244353
// For p < 2^30 there is also e.g. 5 << 25, 7 << 26, 479 << 21
// and 483 << 21 (same root). The last two are > 10^9.
typedef vector<int> vi;
typedef vector<ll> vl;
void ntt(vl &a) {
	int n = sz(a), L = 31 - __builtin_clz(n);
	vi rev(n);
rep(i,0,n) rev[i] = (rev[i / 2] | (i & 1) << L) / 2; rep(i,0,n) if (i < rev[i]) swap(a[i], a[rev[i]]);
int s = sz(a) + sz(b) - 1, B = 32 - __builtin_clz(s), n = 1 << B; int inv = qexp(n, mod - 2, mod);
vector<int> v;
if (l==r){
poly=conv(poly,{1,1});
		poly.erase(poly.begin(),poly.begin()+v[l]);
		return poly;
	}
int m=l+r>>1;
int num=0;
rep(x,l,r+1) num+=v[x];
num=min(num,w(poly));
vector<int> small(poly.begin(),poly.begin()+num);
poly.erase(poly.begin(),poly.begin()+num);
vector<int> mul;
	rep(x,0,r-l+2) mul.pub(nCk(r-l+1,x));
	poly=conv(poly,mul);
small=solve(m+1,r,solve(l,m,small));
	poly.resize(max(sz(poly),sz(small)));
	rep(x,0,sz(small)) poly[x]=(poly[x]+small[x])%MOD;
return poly;
}
int mn=0,curr=0;
for (auto it:s){
if (it=='(') curr++;
else{
curr--;
if (curr<mn){
mn=curr;
v.pub(1);
}
		else{
			v.pub(0);
		}
		}
	}
	return solve(0,sz(v)-1,{1})[0];
}
int n;
string s;
int pref[500005];
signed main()
{ ios::sync_with_stdio(0); cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
fac[0]=1;
rep(x,1,1000005) fac[x]=fac[x-1]*x%MOD;
ifac[1000004]=inv(fac[1000004]);
for (int x = 1000004; x >= 1; --x) ifac[x-1]=ifac[x]*x%MOD;
cin>>s;
n=sz(s);
pref[0]=0;
rep(x,0,n) pref[x+1]=pref[x]+(s[x]=='('?1:-1);
Hint 2
Suppose you have a black box that gives you a solution for a smaller grid; try to use it to give a solution for a grid one size larger.
Preface
This is a special case of congestion minimization. The general version of this problem is NP-hard, but it can be solved efficiently thanks to the special structure of this problem.
The lower bound on the maximum congestion can be proven using the pigeonhole principle: in the bad cases, the sum of the minimum lengths of all routes exceeds the total number of edges, so there is always an edge that is traveled more than once.
Our current goal is to construct a routing plan attaining this bound on the maximum congestion. We will show that this is possible for arbitrary input data. Let us first show some pictures as a draft to express the general idea, and refine the details later.
Solution (sketch)
Solution (details)
This approach is based on induction. The base cases are trivial. We assume that all cases with a smaller grid size can be solved, and now treat that as a black box for solving the current grid size.
For the case where the grid size is , first, we connect the following routes using only the outermost edges:
Using the left and bottom edges, connect one top-to-bottom route; using the right and bottom edges, connect another top-to-bottom route; using the top and right edges, connect the departing left-to-right route; using the left and top edges, connect the arriving left-to-right route. If this route connects the same pair of points as the previous route, we use the left, top, and right edges to connect any left-to-right route instead.
So far, there are two top-to-bottom routes and one left-to-right route still to be connected. We only need to move their starting and ending points one step closer to the center, maintaining their relative order. In this way, we reduce the original problem to the smaller case handled by the black box.
Code (SteamTurbine)
#include <bits/stdc++.h>
#define FOR(i,s,e) for (int i=(s); i<(e); i++)
#define FOE(i,s,e) for (int i=(s); i<=(e); i++)
#define FOD(i,s,e) for (int i=(s)-1; i>=(e); i--)
#define PB push_back
using namespace std;
struct Paths{
/* store paths in order */
Paths(){
NS.clear();
EW.clear();
};
Paths solve (vector<int> p, vector<int> q){ // signature assumed from the usage of p and q below
	int n = p.size();
Paths Ret;
Ret.NS.resize(n);
Ret.EW.resize(n);
// Base case
if (n == 0) return Ret;
if (n == 1){
Ret.NS[0].PB({1, 1});
Ret.EW[0].PB({1, 1});
		return Ret;
	}
FOE(i,1,n){
Ret.NS[0].PB({i, 1});
Ret.NS[n-1].PB({i, n});
- (p[i]>p[n-1]));
int m = 1;
}
else{
FOR(i,1,n) if (i != m) q_new.PB(q[i] -
(q[i]>q[0]) - (q[i]>q[m]));
if (n > 1){
// connect NS paths
FOR(i,1,n-1){
Ret.NS[i].PB({1, i+1});
int t = y + 1;
Ret.NS[i].PB({n, t});
// connect EW paths
int l = 0;
FOR(i,1,n) if (i != m){
Ret.EW[i].PB({i+1, 1});
Ret.EW[i].PB({x+1, y+1});
t = x + 1;
Ret.EW[i].PB({t, n});
	return Ret;
}
int main(){
int n;
vector<int> p, q;
scanf("%d", &n);
	p.resize(n), q.resize(n);
	FOR(i,0,n) scanf("%d", &p[i]);
	FOR(i,0,n) scanf("%d", &q[i]);
return 0;
}