Retrieval of an arbitrary item from a linked list is O(n). "Arbitrary" means any of the elements is eligible for selection, without loss of generality.
However, retrieval of a specific item from a linked list has nothing to do with how many items are in the list. It is therefore O(1). Retrieving the 50th item from a linked list takes the same amount of time, no matter whether the list has 50 elements, 50M elements, or 50^50! elements.
The original claim was
To my eye it's trivial to see where you go wrong i.e.
Retrieving the nth item in a linked list is "O(n)"
But I expect you to try and escape on a technicality.
Further claimed was
This is (one place) where you go wrong. The number of items in a data structure is not the only consideration in big-O analysis of an algorithm.
In the case of retrieving an item from a linked list, another factor that must be considered is the index of the item you are retrieving. And you quickly find that the higher the index, the higher the cost of retrieval. Classic O(n), easy to see when you set k == n. That's the worst-case big-O analysis of linked list retrieval. The average case is also O(n), because on average it takes n / 2 operations to retrieve an element from the list, and the 1 / 2 is a constant factor we disregard, since the total cost still scales linearly in proportion to n.
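To make that concrete, here is a minimal sketch (the `Node` class and `get` helper are my own illustration, not from any particular library) that counts the pointer hops directly, including the ~n/2 average:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def get(head, index):
    """Walk from the head, one pointer hop per position.
    Returns (value, hops) so the cost of the walk is visible."""
    node, hops = head, 0
    for _ in range(index):
        node = node.next
        hops += 1
    return node.value, hops

# Build the list 0 -> 1 -> ... -> 999
head = None
for v in reversed(range(1000)):
    head = Node(v, head)

print(get(head, 0))    # the front of the list costs 0 hops
print(get(head, 999))  # the worst case costs n - 1 hops

# Averaged over every index the cost is ~n/2 hops -- still O(n)
average = sum(get(head, k)[1] for k in range(1000)) / 1000
print(average)
```

The hop count is exactly the index you ask for, and the average over all indices lands at (n - 1) / 2, which is the n / 2 figure above up to a constant.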
Things that are O(1) take the same amount of time regardless of the input. Critically, you must allow the input to vary to make an O(1) claim. Retrieving an item from an array is a classic example of this. No matter what the index of the item you wish to retrieve is, you perform the same number of steps to get it: multiply the size of the type by the index and add the result to the address of the first element. It works the same for any index.
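That address arithmetic can be spelled out in a few lines (the base address and element size below are made-up numbers, purely for illustration):

```python
def element_address(base_address, type_size, index):
    """One multiply and one add, regardless of the index --
    the same two steps for element 0 or element 50,000,000."""
    return base_address + type_size * index

# Hypothetical array of 8-byte elements starting at address 0x1000:
print(hex(element_address(0x1000, 8, 0)))
print(hex(element_address(0x1000, 8, 50_000_000)))
```

Vary the index however you like; the work performed never changes, which is what licenses the O(1) claim for arrays.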
If you can change the input and get a different runtime, you're not dealing with an O(1) algo. I think this is clearly true for retrieving an item from a linked list.
You can't use big-O analysis the way you're trying to do it. You don't get to cherry-pick the input and say "Because I always choose 10, and because that always takes the same amount of time, the operation is O(1)."
You can say "This input always takes the same amount of time," but that doesn't tell you anything about the characteristics of the underlying algorithm. You certainly cannot make a claim about big-O runtime based on only one data point.
Fundamentally, you are using big-O analysis wrong.
In order to make a big-O claim you have to allow the input to vary.
Otherwise you're just saying "Doing x takes y long," and you certainly cannot use big-O notation to describe that. Not correctly, at least.
Things that are O(n) scale in direct proportion to n. To retrieve the nth item from a linked list you have to visit all n elements. There are no shortcuts, and the critical part to understand is that the input you choose dictates the runtime of the algorithm.
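You can watch that proportionality directly by counting hops at increasing indices (a throwaway sketch of mine using (value, next) tuples as nodes):

```python
def make_list(n):
    """Build a singly linked list of n (value, next) pairs."""
    head = None
    for v in reversed(range(n)):
        head = (v, head)
    return head

def hops_to(head, index):
    """Follow next-pointers until the index is reached; there is
    no shortcut, so the hop count equals the index itself."""
    node, hops = head, 0
    while hops < index:
        node = node[1]
        hops += 1
    return hops

lst = make_list(10_000)
for k in (10, 100, 1_000, 9_999):
    # Ten times the index means ten times the work
    print(k, hops_to(lst, k))
```

The input you pick dictates the cost, and it scales linearly: that relationship between input and work is what the O(n) label records.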
I think the other main place you go crooked is by thinking that knowing n (or k as you call it) changes anything.
Being able to predict the cost of an operation does not make it O(1).
Getting the same cost when performing the same operation on the same type of data structure does not make the algorithm O(1). You don't get to consider only the size of the data structure and disregard all other factors when doing big-O analysis.
You're not wrong to say
But that doesn't mean
It's still O(n), where n is the index you want to retrieve. The cost of the operation can be described in general terms, in proportion to the input. The cost of retrieving an item from a linked list varies according to the index of the item. Even if you know the index, even if you can calculate the cost before running the algorithm, that doesn't change the big-O analysis of the algorithm to O(1). It's still O(n); you just happen to know the cost of this specific O(n) run, instead of talking about the characteristics of the algo generally.