Editorial - AUST 2022
Editorial: You have to find the summation of the last M integers of the Kth permutation of the sequence consisting of N integers numbered 1, 2, 3, …, N. But K ≤ 10.
Since K ≤ 10 < 4!, for any sequence 1, 2, 3, 4, 5, …, N-4, N-3, N-2, N-1, N, all the changes between the first and the Kth permutation are confined to the last 4 integers (at most).
So you can always take the last min(4, N) numbers and perform next permutation until you reach the Kth permutation. The rest of the sum can be computed with the simple arithmetic-series formula.
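A minimal C++ sketch of this approach (the I/O loop and the variable names N, M, K are assumptions for illustration, not taken from the original statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long N, M, K;
    while (cin >> N >> M >> K) {                               // assumed input format
        int t = (int)min(N, 4LL);                              // only the last min(4, N) values can move, since K <= 10 < 4!
        vector<long long> tail(t);
        for (int i = 0; i < t; i++) tail[i] = N - t + 1 + i;   // initially N-t+1, ..., N
        for (long long step = 1; step < K; step++)             // advance to the K-th permutation
            next_permutation(tail.begin(), tail.end());
        long long sum = 0;
        long long fromTail = min(M, (long long)t);             // how many of the last M positions fall inside the tail
        for (int i = t - (int)fromTail; i < t; i++) sum += tail[i];
        long long rest = M - fromTail;                         // the remaining positions hold N-M+1, ..., N-t unchanged
        if (rest > 0) {
            long long hi = N - t, lo = hi - rest + 1;          // arithmetic series lo + (lo+1) + ... + hi
            sum += (lo + hi) * rest / 2;
        }
        cout << sum << '\n';
    }
    return 0;
}
```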
Editorial: When we sort the subarrays lexicographically, the subarrays that start with 1 come first, then those that start with 2, then 3, and so on. When two subarrays have the same left endpoint, the one with the smaller length is lexicographically smaller than the other (it is a prefix of it).
For each value we know the number of subarrays that have this value's position as their left endpoint. Calculate the cumulative sums of these counts. Now we can binary search on this array to find which value sits at the left endpoint of the desired subarray. After that, we can easily find the right endpoint too.
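A possible implementation of this counting-plus-binary-search idea, assuming the array is a permutation of 1..n and the task is to report the endpoints of the K-th lexicographically smallest subarray (both of these are assumptions, since the statement is not restated here):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n; long long K;
    cin >> n >> K;                                      // assumed input format
    vector<int> pos(n + 1);
    for (int i = 1; i <= n; i++) { int x; cin >> x; pos[x] = i; }

    // pref[v] = number of subarrays whose first element is one of the values 1..v
    vector<long long> pref(n + 1, 0);
    for (int v = 1; v <= n; v++) pref[v] = pref[v - 1] + (n - pos[v] + 1);

    // smallest value v with pref[v] >= K: its position is the left endpoint
    int v = (int)(lower_bound(pref.begin() + 1, pref.end(), K) - pref.begin());
    int l = pos[v];
    int r = l + (int)(K - pref[v - 1]) - 1;             // K-th among the subarrays starting at l, ordered by length
    cout << l << ' ' << r << '\n';
    return 0;
}
```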
Editorial: If we observe the structure of a tree that satisfies the two conditions, we will find it to be a star. So we just need to find a node which has all N-1 other nodes adjacent to it, take the sum of those N-1 edges, and report the minimum such cost.
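Since the problem statement is not restated in the editorial, the following is only a hedged sketch: it assumes the input is an undirected weighted graph given as an edge list, and that the answer is the cheapest "star", i.e. a centre adjacent to every other node plus the cheapest edge from the centre to each of them.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;                                      // assumed input format: n nodes, m weighted edges
    vector<map<int, long long>> best(n + 1);            // best[c][v] = cheapest edge between c and v
    for (int i = 0; i < m; i++) {
        int u, v; long long w;
        cin >> u >> v >> w;
        auto upd = [&](int a, int b) {
            auto it = best[a].find(b);
            if (it == best[a].end() || it->second > w) best[a][b] = w;
        };
        upd(u, v);
        upd(v, u);
    }
    const long long INF = LLONG_MAX / 4;
    long long ans = INF;
    for (int c = 1; c <= n; c++) {
        if ((int)best[c].size() != n - 1) continue;     // c must be adjacent to all other n-1 nodes
        long long cost = 0;
        for (auto &e : best[c]) cost += e.second;       // cheapest edge to each neighbour
        ans = min(ans, cost);
    }
    cout << (ans == INF ? -1 : ans) << '\n';
    return 0;
}
```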
Problem D (Nim Heaps)
Setter: Pritom Kundu
Tester: Anik Sarker
Alter: Rafid Bin Mostofa
Category: Math
First let us solve the problem ignoring the condition that the array must be non-decreasing. Suppose we have fixed the first i elements; let us count the number of ways we can choose the (i+1)th element. There are 2^i subsets consisting of the first i elements. No two of them can have the same xor sum. The (i+1)th element cannot be one of these 2^i xor values. Thus there are 2^k − 2^i possible values for the (i+1)th element.
Now note that no two elements can be equal (otherwise those two elements would have xor sum 0). Thus every valid choice consists of n distinct values, and each unordered choice corresponds to exactly n! ordered sequences, only one of which is non-decreasing (in fact strictly increasing). Thus we can simply divide our previous count by n!, giving

f(n, k) = (2^k − 1)(2^k − 2) ··· (2^k − 2^{n−1}) / n!

Writing each factor as 2^k − 2^i = 2^i (2^{k−i} − 1), the product equals F(n) · G(k) / G(k − n), where F(k) = 2^{k(k−1)/2} and G(k) = ∏_{i=1}^{k} (2^i − 1). (If n > k the product contains a zero factor and the answer is 0.)
Both F and G can be precalculated in O(k) time beforehand. Then queries can be
answered in O(1).
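A sketch of this precomputation in C++; the modulus 10^9 + 7, the bound MAXK, and the query format are assumptions. The per-query modular inverse below costs an extra O(log MOD); it could be removed by also precomputing prefix inverses of G if true O(1) queries are needed.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;   // assumed modulus
const int MAXK = 1000000;        // assumed upper bound on k (and n)

long long pw(long long b, long long e) {                  // modular exponentiation
    long long r = 1; b %= MOD;
    for (; e > 0; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    // F[k] = 2^{k(k-1)/2}, G[k] = prod_{i=1..k} (2^i - 1), fact[k] = k!, all modulo MOD
    vector<long long> F(MAXK + 1), G(MAXK + 1), fact(MAXK + 1);
    F[0] = G[0] = fact[0] = 1;
    long long p2 = 1;                                     // 2^{k-1} at the start of iteration k
    for (int k = 1; k <= MAXK; k++) {
        F[k] = F[k - 1] * p2 % MOD;
        p2 = p2 * 2 % MOD;                                // now p2 = 2^k
        G[k] = G[k - 1] * ((p2 - 1 + MOD) % MOD) % MOD;
        fact[k] = fact[k - 1] * k % MOD;
    }

    int q;
    cin >> q;
    while (q--) {
        long long n, k;
        cin >> n >> k;                                    // assumed: 0 <= n <= k <= MAXK
        // f(n, k) = F(n) * G(k) / (G(k - n) * n!)  (mod MOD)
        long long ans = F[n] * G[k] % MOD;
        ans = ans * pw(G[k - n] * fact[n] % MOD, MOD - 2) % MOD;
        cout << ans << '\n';
    }
    return 0;
}
```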
The key to solving this problem is to think in terms of permutation cycles. For the cycle containing 1, it is always better to perform every swap with 1. For any other cycle, either merge it with 1's cycle first, or make all of its swaps with the smallest element of that cycle. Do whichever of the two gives the smaller cost.
https://ideone.com/XRBZim
Proof: We’ll think about everything in terms of the permutation cycle. Any swap
either breaks one permutation cycle into two or merges two permutation
cycles into one.
So we are given some cycles, our goal is to do some operations to reduce each
permutation cycle to size 1.
Let's think about the optimal way to break a cycle of length k into k cycles of length 1 (ignore everything outside this cycle for now). Assume the elements are a1, a2, …, ak.
We need at least k-1 operations, so the total cost is a sum of at least 2*(k-1) numbers, and each of a1, a2, …, ak appears in that sum at least once. The remaining k-2 numbers are each no smaller than min(a1, a2, …, ak). So the sum cannot be smaller than
a1 + a2 + … + ak + (k-2)*min(a1, a2, …, ak),
and this is achievable: repeatedly swap the cycle's minimum element with the value that belongs at the minimum's current position. This bound is important and the whole problem essentially depends on this idea.
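For example, a cycle on the elements {2, 4, 7} (so k = 3) gives the bound 2 + 4 + 7 + (3-2)*2 = 15, and it is met by two swaps that both involve the minimum element 2: one with 7 (cost 2 + 7 = 9) and one with 4 (cost 2 + 4 = 6).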
We use the set si to denote the i'th initial cycle, to avoid confusion as cycles keep merging and breaking.
Now fix an optimal sequence of operations and build a graph whose nodes are the initial cycles, joining two cycles whenever some operation ever puts their elements into a common cycle. This graph has some connected components; different connected components don't interfere with each other, so we can think about them independently.
Let's say there is a connected component with c nodes and we try to calculate the lowest cost for sorting the elements within this component's sets. Assume the sets associated with the nodes in this component are s1, s2, …, sc; let mi denote the smallest element of si, let M = min(m1, m2, …, mc), and let S be the set containing M.
As there are c nodes, there must have been at least c-1 merge operations, and at least one element from each set participates in some merge operation, so the minimum cost of the merge operations is
>= m1 + m2 + … + mc + (c-2)*min(m1, m2, …, mc).
There will be at least |s1| + … + |sc| - 1 break operations, and again every element of every set must participate in a break operation at least once, so the minimum cost of the breaks is
>= sum(s1) + … + sum(sc) + (|s1| + … + |sc| - 2)*min(m1, …, mc).
What we can do is merge all the other sets into S. Then we just start breaking this big cycle; every break operation is between M and some other element x, separating x out of the cycle.
So the big picture turns out to be that we can achieve the lowest cost by merging all the sets in this connected component (except S) into the set containing the smallest element (which is S here), and then separating out each element one by one. So from the standpoint of a specific set si (other than S), we can view it like this: first we merge it into S with cost M + mi, and then we perform |si| break operations with total cost |si|*M + sum(si), incurring a total cost of
M*(|si|+1) + sum(si) + mi.
And for S itself we get cost
sum(S) + (|S|-2)*M.
This point of view lets us get rid of mathematical equations in the rest of the proof.
So what we have established so far is that there exists an optimal sequence of operations in which we group the permutation cycles at the start, then for each group merge all of its cycles into the cycle containing the group's smallest number, and finally break that big cycle one element at a time. Different groups don't interfere with each other, and the cost of these operations can be attributed independently to each cycle.
Now it is easy to see that for any cycle it is better to merge it into 1's cycle than into any other cycle. So within each group we can simply break the cycle containing the group's smallest number by itself, and merge the rest of the cycles into 1's cycle and break them later. So how do we check whether a cycle si should be merged into 1's cycle or broken within itself?
Breaking it within itself costs (|si|-2)*mi + sum(si).
Merging it into 1's cycle and breaking it later costs 1 + mi + |si| + sum(si).
Just do whichever gives the smaller cost.
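Putting this final rule into code, here is a hedged sketch (separate from the linked ideone solutions below); it assumes the input is a permutation p of 1..n and that swapping the values x and y costs x + y:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;                                            // assumed input format
    vector<int> p(n + 1);
    for (int i = 1; i <= n; i++) cin >> p[i];

    vector<bool> seen(n + 1, false);
    long long ans = 0;
    for (int i = 1; i <= n; i++) {
        if (seen[i] || p[i] == i) { seen[i] = true; continue; }
        long long sum = 0, mn = LLONG_MAX, len = 0;      // walk the cycle containing position i
        for (int j = i; !seen[j]; j = p[j]) {
            seen[j] = true;
            sum += j;                                    // the values on a cycle are exactly its positions
            mn = min(mn, (long long)j);
            len++;
        }
        long long breakAlone = sum + (len - 2) * mn;     // use the cycle's own minimum for every swap
        long long viaOne     = sum + mn + len + 1;       // merge with 1's cycle first, then break
        ans += (mn == 1) ? breakAlone                    // 1's own cycle: only the first option applies
                         : min(breakAlone, viaOne);
    }
    cout << ans << '\n';
    return 0;
}
```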
https://ideone.com/6CbfDh
https://ideone.com/3ACBYU
Problem I (Beautiful Blocks (Easy))
Setter: Ashiqul Islam
Tester: Arghya Pal
Alter: Pritom Kundu
Tags: Observation, DP
Solution:
Let's say a block is an X-block if the path has X cells in that block.
A DP that keeps both the count of 3-blocks and the count of 1-blocks in its state is too slow. Instead of keeping both counts, we can keep only their difference:
(block_index, how many blocks have been taken, difference between 3-block count and 1-block count)
Instead of considering all the blocks in the DP, we can consider only O(N) blocks. The relevant blocks can be found as follows:
For each block, let a1 = its max cell, a2 = 2nd max, a3 = 3rd max, a4 = 4th max.
Sort the blocks three times, in decreasing order of a1, of (a1+a2), and of (a1+a2+a3).
The relevant blocks are the union of the first (n-1) blocks from the 3 sorted orders.
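A small sketch of just this filtering step (the DP itself is omitted); representing a block as the vector of its cell values is an assumption about the input format:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns the indices of the O(N) relevant blocks: the union of the first n-1 blocks
// when sorted in decreasing order of a1, of a1+a2 and of a1+a2+a3.
vector<int> relevantBlocks(const vector<vector<long long>>& blocks) {
    int n = blocks.size();
    vector<array<long long, 3>> key(n);                  // key[b][t] = sum of the (t+1) largest cells of block b
    for (int b = 0; b < n; b++) {
        vector<long long> c = blocks[b];
        sort(c.rbegin(), c.rend());                      // largest cell first
        long long s = 0;
        for (int t = 0; t < 3; t++) {
            if (t < (int)c.size()) s += c[t];
            key[b][t] = s;
        }
    }
    set<int> keep;
    for (int t = 0; t < 3; t++) {
        vector<int> order(n);
        iota(order.begin(), order.end(), 0);
        sort(order.begin(), order.end(),
             [&](int x, int y) { return key[x][t] > key[y][t]; });
        for (int i = 0; i < n - 1; i++) keep.insert(order[i]);
    }
    return vector<int>(keep.begin(), keep.end());
}
```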
Github Link:
https://github.com/Contest-Problems/beautiful_blocks
The solution for the easy version is too slow for the harder version. Let's solve it for K = 1; you can easily make it work for the given Ks.
Let's sort the blocks in descending order of the sum of their best and 2nd-best cells (a1 + a2). Now we can observe that for any two indices i, j such that 1 <= i < j <= n, it is always better to take the i'th block as a 2-block rather than the j'th block. So there exists a prefix of the blocks where we take all the blocks (as 2-blocks), and from the rest we only take 1-blocks and 3-blocks.
We then need to know the maximum we can achieve if we take a fixed number (N-1 minus the length of the prefix) of blocks from the rest of the blocks, and we need to know it for all different values of (3_blocks - 1_blocks). It turns out that for a suffix we can calculate this value in O(n log n), so over all n suffixes the total complexity is O(n^2 log n).
If we have to take k 1-blocks and no 3-blocks, it is obvious that we will take the k blocks which have the highest a1 (a1 denotes the maximum cell in the block).
Now if we need to take k-1 1-blocks and one 3-block, we can either turn one 1-block into a 3-block, or remove a 1-block and take a previously skipped block as a 3-block.
In general, if we have a set of 1-blocks and 3-blocks which gives us the best result for the (x, k-x) configuration (x 3-blocks and k-x 1-blocks), then to obtain the best result for the (x+1, k-x-1) configuration we can just turn one 1-block into a 3-block, or remove a 1-block and take a previously skipped block as a 3-block. This idea seems very intuitive, but the life of a problem-setter is not so easy: you have to prove it as well :(
(Un)fortunately, the proof turned out to be hard in this case (I changed my mind while writing the proof: not easy to write, and definitely not easy to read).
Essentially, while going from the (x, k-x) configuration to the (x+1, k-x-1) configuration, we keep the set of 3-blocks intact, add one more 3-block, and change exactly one 1-block (either turning it into a 3-block or removing it).
Let's assume:
A is the set of skipped blocks, B is the set of 1-blocks, and C is the set of 3-blocks in a configuration that gives the best result for (x, k-x). ….. (assumption 1)
One thing that we can observe is that going from (x,k-x) to (x+1,k-x-1) can be done
without any “direct exchange” between sets.
Direct exchange between two sets X, Y takes place if at least one block from X goes
to Y and one block from Y goes to X.
Let's say A1, B1 and C1 are the corresponding sets of skipped (0-)blocks, 1-blocks and 3-blocks that give us the maximum result for the (x+1, k-x-1) configuration, and among all such choices pick one where |C1 intersection C| is maximum. If C is not a subset of C1, that means there is at least one block c in C that is now in some other set.
i) If c is now in B1, then
1) either there exists a block b in B which is now in C1, or
2) there exists a block b in B which is now in A1 and a block a in A which is now in C1.
ii) If c is now in A1, then
1) either there exists a block a in A which is now in C1, or
2) there exists a block a in A which is now in B1 and a block b in B which is now in C1.
In all these cases we can cyclically reverse the positions of the blocks and we will achieve a better or equal result (otherwise assumption 1 could not be true).
We can keep doing this as long as C is not a subset of C1; every time, |C intersection C1| increases.
In the same way, we can see that if some block goes from B to A1 and some block goes from A to B1, we can reverse this as well.
Github Link:
https://github.com/Contest-Problems/beautiful_blocks