Question:

As part of a programming assignment I saw recently, students were asked to find the big O value of their function for solving a puzzle. I was bored, and decided to write the program myself. However, my solution uses a pattern I saw in the problem to skip large portions of the calculations.

Big O shows how the time increases based on a scaling `n`, but as `n` scales, once it reaches the reset point of the pattern, the time it takes drops back to low values as well. My first thought was that it is `O(nlogn % k)`, where `k+1` is the point at which it resets. Another thought is that, since it has a hard upper limit, the value is `O(1)`, as that is the big O of any constant. Is one of those right, and if not, how should the limit be represented?

As an example of the reset, the `k` value is 31336: at n = 31336 it takes 31336 steps, but at n = 31337 it takes 1.

The code is:

<pre class="lang-py prettyprint-override">```
def Entry(a1, q):
    F = [a1]
    lastnum = a1
    q1 = q % 31336
    rows = q // 31336  # integer division (the original q / 31336 is Python 2)
    for i in range(1, q1):
        lastnum = (lastnum * 31334) % 31337
        F.append(lastnum)
    F = MergeSort(F)
    print(lastnum * rows + F.index(lastnum) + 1)
```
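For reference, here is one possible `MergeSort` in the same style, so the snippet above is self-contained. This is an assumption on my part — the question only says it is a standard merge sort:

```
def MergeSort(a):
    # Standard top-down merge sort, O(n log n) comparisons.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = MergeSort(a[:mid]), MergeSort(a[mid:])
    merged, i, j = [], 0, 0
    # Merge the two sorted halves, taking the smaller head element each time.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the two slices is empty; append whatever remains of the other.
    return merged + left[i:] + right[j:]
```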

`MergeSort` is a standard merge sort with `O(nlogn)` complexity.

It's `O(1)`, and you can derive this from big O's <a href="http://en.wikipedia.org/wiki/Big_O_notation" rel="nofollow">definition</a>. If `f(x)` is the complexity of your solution, then:

`f(x) ≤ M·g(x)`

with

`g(x) = 1`

and with any `M > 470040` (that's `nlogn` for `n = 31336`) and `x > 0`. And this implies from the definition that:

`f(x) = O(1)`
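A quick numeric check of that constant (assuming the logarithm is base 2, as is usual for comparison-sort bounds):

```
import math

n = 31336
bound = n * math.log2(n)  # n*log(n) at the largest possible array size
# bound is roughly 468,000, so any M > 470040 is indeed large enough.
print(bound)
```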

Answer2:

Well, an easy way I use to think about big-O problems is to imagine `n` as so big it may as well be infinity. If you don't get particular about byte-level operations on very big numbers (because `q % 31336` would scale up as `q` goes to infinity and is not actually constant-cost), then your intuition is right about it being `O(1)`.

Imagining `q` as close to infinity, you can see that `q % 31336` is obviously between 0 and 31335, as you noted. That fact bounds the number of array elements, which in turn bounds the sort time by a constant amount (`n * log(n)` becomes at most `31335 * log(31335) * C`, for some constant `C`). So the whole algorithm runs in constant time.

But, in the real world, multiplication, division, and modulus all scale with input size. You can look up the Karatsuba algorithm if you are interested in figuring that out. I'll leave it as an exercise.

Answer3:

If there are a few different instances of this problem, each with its own <em>k</em> value, then the complexity of the method is not O(1), but instead `O(k·ln k)`.
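To make that concrete, here is the `k·ln k` cost for the question's `k` plus a couple of hypothetical larger values (the larger `k`s are made up for illustration):

```
import math

# k from the question, then two hypothetical larger instances.
for k in (31336, 10**6, 10**9):
    print(k, k * math.log(k))
```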