| Task | #15 |
|---|---|
| Position | 2 |
| Score | 4,915 |
| Best Score | 3,347 |
| Points | 0.681 |
## Problem Statement
You decided to buy 1000 shares of the company HLFUN, and subscribed to receive offers from the exchange.
Order updates come to stdin, one update per line:
- `+ <price> <size>` (a new sell order is added)
- `- <position>` (the order at the specified position is deleted)
- `= <size>` (someone buys `<size>` shares from the top of the orderbook)

where `<size>` is the number of shares, `<price>` is the offered price, and `<position>` is the order's position in the orderbook.
You need to keep the orders in your orderbook sorted to process 'remove' and 'buy' updates correctly. Orders with a better (lower) price take priority over orders with a higher price; within the same price, orders that came earlier take priority over orders that came later. Position 0 refers to the current best offer.
| Example input | Orderbook state |
|---|---|
| `+ 1137 100` | (1137,100) |
| `+ 1130 10` | (1130,10), (1137,100) |
| `+ 1130 50` | (1130,10), (1130,50), (1137,100) |
| `- 0` | (1130,50), (1137,100) |
| `+ 1150 200` | (1130,50), (1137,100), (1150,200) |
| `= 200` | (1150,150) |
The total cost of the last buy is 50 * 1130 + 100 * 1137 + 50 * 1150 = 227,700.
Once 1,000,000 updates are processed, you buy your 1000 shares from the top of the orderbook, starting from the best offer. Calculate the total cost of your purchase, and print it to stdout.
Generator source: https://gist.github.com/obender12/12fc47b7285029fbb95991a7209aa239
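For concreteness, here is a minimal reference orderbook, a sketch of my own for illustration rather than the competition solution. A flat sorted vector makes every update O(n), hopeless at 1,000,000 updates, but it pins down the semantics of the three update types and of the final purchase:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// One resting sell order: price and remaining size.
struct Order { long long price, size; };

int main() {
    std::vector<Order> book; // sorted by price; FIFO within a price level

    char op;
    while (std::scanf(" %c", &op) == 1) {
        if (op == '+') {
            Order o;
            std::scanf("%lld %lld", &o.price, &o.size);
            // Insert after all orders with price <= o.price: price priority
            // first, arrival order within the same price.
            auto it = std::upper_bound(
                book.begin(), book.end(), o,
                [](const Order& a, const Order& b) { return a.price < b.price; });
            book.insert(it, o);
        } else if (op == '-') {
            int pos;
            std::scanf("%d", &pos);
            book.erase(book.begin() + pos); // position 0 = current best offer
        } else if (op == '=') {
            long long size; // lift <size> shares off the top of the book
            std::scanf("%lld", &size);
            while (size > 0) { // assumes the input never overruns the book
                long long take = std::min(size, book.front().size);
                size -= take;
                if ((book.front().size -= take) == 0) book.erase(book.begin());
            }
        }
    }

    // After all updates: buy 1000 shares from the top, report the total cost.
    long long remaining = 1000, cost = 0;
    for (const Order& o : book) {
        long long take = std::min(remaining, o.size);
        cost += take * o.price;
        if ((remaining -= take) == 0) break;
    }
    std::printf("%lld\n", cost);
}
```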
## Input Generator
Port the provided generator to my input generation suite.
## Solution Sketch
The same look-back trick as in (2025-03-27) Breaking Highload Orderbook applies, just over a longer time horizon: I had to look back 100 pages instead of a couple. I'm sure there's some fancy math on bounded binary random walks with Gaussian noise that explains why, but my pass rates would drop abysmally if I looked back only 70 or 60 pages. So 100 pages it was.
(Un)fortunately Josu/yenw0d also caught on to this trick, and I had to grind and make an orderbook that was actually quick. I used an optimized Fenwick tree (inspiration from Algorithmica) for the delete-by-position functionality, and added a fast path for lifts that take out multiple levels, like the last one that takes out almost half the book.[^1]
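I'm not reproducing the actual solution here, but the Fenwick primitive behind O(log n) delete-by-position is standard: point updates over live counts, plus a binary-lifting descent that finds the slot holding the k-th live item in a single top-down pass. A minimal sketch (the struct and its names are mine):

```cpp
#include <cstdint>
#include <vector>

// Fenwick (binary indexed) tree over counts, with find-kth via binary
// lifting: the usual building block for O(log n) delete-by-position.
// Give every order (or every price level) a slot holding its live count;
// prefix sums then map book positions onto slots.
struct Fenwick {
    int n, log2n;
    std::vector<int64_t> t;

    explicit Fenwick(int n_) : n(n_), log2n(0), t(n_ + 1, 0) {
        while ((1 << (log2n + 1)) <= n) ++log2n;
    }

    void add(int i, int64_t d) { // 1-based slot index
        for (; i <= n; i += i & -i) t[i] += d;
    }

    // Smallest 1-based index i with prefix_sum(i) >= k, found in one
    // top-down descent instead of a binary search over prefix queries.
    int kth(int64_t k) const {
        int pos = 0;
        for (int step = 1 << log2n; step > 0; step >>= 1) {
            if (pos + step <= n && t[pos + step] < k) {
                pos += step;
                k -= t[pos];
            }
        }
        return pos + 1;
    }
};

// Deleting book position p (0-based) then reduces to two O(log n) calls:
//   int slot = fw.kth(p + 1); // slot holding the (p+1)-th live order
//   fw.add(slot, -1);         // drop it, plus bookkeeping within the slot
```

The multi-level lift presumably plays the same game over cumulative share sizes, so a `=` that clears several price levels costs one descent instead of one pop per level.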
## Thoughts
I think this is a 'party trick' problem. Very few order books have index-based deletes: most assign a UID to each order and delete by those IDs, and real order flow looks very different (lots of contention for queue position, activity in microbursts, and so on). I think there was a lot of potential for this to be better. I've wanted to come up with a more 'real' orderbook problem; since I'm not competing anymore, maybe I can finally get to it now :)
## Footnotes

[^1]: I want my circuit breaker