Editorial for Removal Sorting
Submitting an official solution before solving the problem yourself is a bannable offence.
Now we get to the harder problems. The goal is to find the earliest point in time at which the array \(A\) becomes sorted as we delete elements from it in the order given by \(D\).
Something to keep in mind here is that any algorithm will take at least \(O(n)\) time, as it takes \(n\) reads just to tell whether \(A\) is sorted.
There's a trivial \(O(n^2)\) solution: simply iterate through \(D\), deleting elements from \(A\) as you go and checking whether it's sorted after every deletion. There were a few incorrectly coded attempts at this method, which is just as well, because it would not run in time.
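A minimal sketch of this brute force (hypothetical names; assuming \(D\) holds 0-based indices into \(A\), and that the array is guaranteed to become sorted at some point):

```python
def removal_sort_time_naive(A, D):
    """Quadratic sketch: apply deletions one at a time, re-checking
    sortedness (an O(n) scan) after each deletion.

    Assumes A is guaranteed to become sorted eventually.
    """
    deleted = [False] * len(A)

    def is_sorted():
        rest = [v for k, v in enumerate(A) if not deleted[k]]
        return all(rest[k] <= rest[k + 1] for k in range(len(rest) - 1))

    if is_sorted():
        return 0
    for t, d in enumerate(D, start=1):
        deleted[d] = True
        if is_sorted():
            return t
```

Each of up to \(n\) deletions triggers an \(O(n)\) scan, hence \(O(n^2)\) overall.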
If we instead do a binary search on \(D\) to find the crossover point, rather than a linear search, our algorithm goes from \(O(n^2)\) to \(O(n \log n)\), good enough to fit in the time restriction. (The binary search is valid because deleting an element from a sorted array leaves it sorted, so once the array becomes sorted it stays sorted for all remaining deletions.)
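A sketch of the binary search approach (hypothetical names; assuming \(D\) holds 0-based indices into \(A\), and the answer is the number of deletions performed):

```python
def first_sorted_time(A, D):
    """Binary search on the number of deletions after which A is sorted.

    Monotonicity holds because deleting from a sorted array keeps it
    sorted. Returns the smallest k such that A is sorted after the
    first k deletions (k = 0 if A starts sorted). Each check is O(n),
    and there are O(log n) checks.
    """
    def sorted_after(k):
        deleted = set(D[:k])
        rest = [v for i, v in enumerate(A) if i not in deleted]
        return all(rest[i] <= rest[i + 1] for i in range(len(rest) - 1))

    lo, hi = 0, len(D)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_after(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```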
Comments
There is an \(O(n)\) solution to this question, which is worth mentioning because it's interesting (and the solution method we used in contest.)
Let's define the partial inversion count of an array as the number of pairs of adjacent elements that are in the wrong order for it to be sorted. For instance, the partial inversion count of [2, 1, 4, 3] would be 2, since the adjacent pairs (2, 1) and (4, 3) are in the wrong order. Clearly we can compute the partial inversion count of any array in \(O(n)\), and we can compute it for the array \(A\) at the start.

We make the following observation: an array is sorted iff its partial inversion count is zero, since this means that all pairs of adjacent elements are in order, and therefore the overall array is sorted.
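The initial computation can be sketched in a line (hypothetical helper name):

```python
def partial_inversion_count(A):
    """Number of adjacent pairs (A[i], A[i+1]) that are out of order."""
    return sum(1 for i in range(len(A) - 1) if A[i] > A[i + 1])
```

On the example above, partial_inversion_count([2, 1, 4, 3]) gives 2.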
Now, we process deletions as they come. For each index that we wish to delete, we want to maintain any changes in the partial inversion count as a result of this deletion. For instance, suppose that the array is currently [5, 1, 2, _, 4, 3, 6, 7] and we wish to delete the element at index 4. Our partial inversion count is currently 2.

We note that deleting an element will only change the pairs of adjacent elements that contain this element, i.e. all of the other pairs will be unaffected. For instance, the deletion of index 4 from the above array will only have the following effects:

- the pair (2, 4) of adjacent elements no longer exists (since the element 4 is no longer in the array);
- the pair (4, 3) of adjacent elements no longer exists;
- (2, 3) is now a pair of adjacent elements.

So, the first two pairs will no longer contribute to the partial inversion count, and the last pair will begin to contribute. Thus, we decrement the partial inversion count if the first pair was out of order, decrement again if the second pair was out of order, and then increment if the third pair was out of order. Therefore we can handle each deletion and the resulting change to the partial inversion count in \(O(1)\), and check after each deletion whether the partial inversion count is 0 (at which point the array is sorted, and we are done).
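The per-deletion update can be sketched as follows (hypothetical helper; here l and r are the indices of the nearest undeleted neighbours of i, whose lookup is described next):

```python
def update_count_on_delete(A, l, i, r, count):
    """Update the partial inversion count when A[i] is deleted.

    l and r are the nearest undeleted indices to the left and right
    of i (None at a boundary); count is the current partial inversion
    count. Returns the new count.
    """
    if l is not None and A[l] > A[i]:
        count -= 1  # pair (A[l], A[i]) disappears
    if r is not None and A[i] > A[r]:
        count -= 1  # pair (A[i], A[r]) disappears
    if l is not None and r is not None and A[l] > A[r]:
        count += 1  # (A[l], A[r]) becomes a new adjacent pair
    return count
```

On the example above (index 3 already deleted), deleting index 4 with neighbours at indices 2 and 5 takes the count from 2 to 1.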
Note that we have not yet described the method with which we determine the first undeleted element before and after the element which we wish to delete. For instance, in the above example, we need to be able to determine the first element still in the array before and after index 4. We can do this by maintaining a doubly linked list, in which we store, for each element, the closest undeleted element before it and the closest undeleted element after it. Let's suppose that we store these in arrays named prev and next, where prev[i] is the index of the first undeleted element before index i, and similarly for next[i]. Deleting index i can then be performed by rewiring prev and next so that they skip over index i. (If you look over that a few times, you should be able to understand intuitively why it works.) Note that special considerations must be made for the first and last undeleted elements in the array.
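A sketch of that rewiring (hypothetical Python, with None marking the boundaries, which covers the first/last special cases):

```python
def delete_index(prev, next, i):
    """Splice index i out of the doubly linked list of undeleted indices.

    prev[i] / next[i] hold the nearest undeleted index on each side,
    or None at a boundary.
    """
    l, r = prev[i], next[i]
    if l is not None:
        next[l] = r  # the element before i now points past i
    if r is not None:
        prev[r] = l  # the element after i now points back past i
```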
Maintaining the doubly linked list clearly takes only \(O(1)\) operations per deletion, and we can look up the previous and next elements still in the array in \(O(1)\) by a simple lookup in prev and next, so each deletion takes \(O(1) + O(1) + O(1) = O(1)\) time overall. We process at most \(n\) deletions, so our complexity for handling all deletions is \(O(1) \cdot O(n) = O(n)\). Adding the \(O(n)\) initial calculation of the partial inversion count gives an overall complexity of \(O(n) + O(n) = O(n)\).

Nice solution, and thanks for sharing!
I came up with something very similar when writing this question. However, it is a little different in some ways. I think yours is a bit more elegant, but I will share mine anyway.
We can keep two monotonically increasing pointers, one into \(A\) and one into \(D\). Say that the pointer into \(A\) points to \(A_i\), and that \(A_{i+1}\) denotes the first undeleted element to its right. Also, say that the current pointer into \(D\) points to \(D_j\). While \(A_i\) is in order (\(A_{i} \leq A_{i+1}\)), increment \(i\). If it is not in order, delete the element at index \(D_j\), then increment \(j\); keep doing this until \(A_i\) is in order again. Once \(i = |A|-1\), we've found the solution: it is just \(j\). We can do all of this using the same doubly linked list method explained well above.
This is also \(O(n)\) using a similar complexity analysis as above.
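A sketch of this two-pointer method (hypothetical names; assuming \(D\) holds 0-based indices and the array is guaranteed to end up sorted; stepping the pointer back when the element under it is deleted keeps the invariant that everything before it is in order):

```python
def removal_sort_time(A, D):
    """Two-pointer O(n) sketch: advance through A while adjacent
    undeleted pairs are in order; on an out-of-order pair, apply the
    next deletion from D. Returns the number of deletions performed."""
    n = len(A)
    prev = [k - 1 for k in range(n)]
    nxt = [k + 1 for k in range(n)]  # n acts as a past-the-end sentinel

    def delete(d):
        l, r = prev[d], nxt[d]
        if l >= 0:
            nxt[l] = r
        if r < n:
            prev[r] = l

    i, j = 0, 0
    while i < n and nxt[i] < n:
        if A[i] <= A[nxt[i]]:
            i = nxt[i]  # pair in order: advance the pointer into A
            continue
        d = D[j]        # pair out of order: perform the next deletion
        j += 1
        delete(d)
        if d == i:
            # i itself was deleted; step back so the new junction pair
            # gets checked (prev/nxt of d still hold its old neighbours,
            # since delete() only rewires the neighbours themselves)
            i = prev[d] if prev[d] >= 0 else nxt[d]
    return j
```

The pointer into \(A\) moves forward at most \(n\) times and steps back at most once per deletion, so the whole thing is \(O(n)\).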