Hey everyone!
So, here is my take on this question:
If we take a look at Wikipedia’s article on sorting algorithms, there are quite a few algorithms that achieve O(n) operations in the best-case scenario. We could be lucky enough that many of the words we check fall into this scenario. This is by no means a proof, though, and actually proving that this is the case is probably really hard to do.
However, when we sort the string, we are putting a lot of effort into, well, sorting. A sorted string has a lot of structure, which does make the task of checking for repeated letters easier, but we don’t really use that structure to its full extent. What I’m trying to say is that sorting our string feels like solving a harder problem than the one we set out to solve (namely, checking for repeated characters).
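To make this concrete, here’s a minimal sketch of the sorting-based approach (the function name is my own, and I’m assuming we only care about the letters themselves, ignoring case): after sorting, any repeated letter must end up next to its twin, so one scan of adjacent pairs finds a duplicate. The sort itself is what dominates the cost, typically O(n log n).

```python
def is_isogram_sorted(word: str) -> bool:
    # Sort the letters so that duplicates become neighbors.
    letters = sorted(word.lower())
    # Scan adjacent pairs: a repeat shows up as two equal neighbors.
    return all(a != b for a, b in zip(letters, letters[1:]))
```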
So this made a light go on in my head: perhaps we shouldn’t sort the string. Perhaps the best performance is achieved by some other kind of reasoning.
Well, it turns out that this guy found a pretty awesome method for dealing with this, which I’ll summarize below. It is actually pretty intuitive!
So, what you do when the word-to-be-checked comes to you is:
- grab a sheet of paper
- write down the alphabet (or, if you are feeling lazy, just get a page with it printed)
- go through each character of the word, marking the corresponding letter on your sheet as you find it
- if at some point you find yourself marking a letter that has already been marked, the word is not an isogram
- otherwise, if you reach the end of the word without making a double-mark, then the word is indeed an isogram.
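The steps above can be sketched in Python like this. The “sheet of paper” becomes an array with one slot per letter of the alphabet; I’m assuming lowercase ASCII letters and choosing to skip non-letter characters like hyphens or spaces, which the original description doesn’t pin down:

```python
def is_isogram(word: str) -> bool:
    # The "sheet of paper": one slot per letter, all unmarked at first.
    seen = [False] * 26
    for ch in word.lower():
        if not ch.isalpha():
            continue  # assumption: ignore hyphens, spaces, etc.
        idx = ord(ch) - ord("a")
        if seen[idx]:
            return False  # double-mark: the word is not an isogram
        seen[idx] = True  # mark this letter on the sheet
    return True  # reached the end with no double-mark
```

The early `return False` mirrors the “stop as soon as you double-mark a letter” step, so in the best case we don’t even read the whole word.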
Without getting into technical details, this algorithm is O(n) because we check each character of the string only once. The up-front work of writing the alphabet on our sheet is constant and does not grow with the word length.
Finally, I’d like to compliment @sillypog on his elegant solution. That was awesome thinking, dude!