For our pseudocode we are using TMap to apply Big O Notation and find the words as fast as possible within the map, right?
// Create a map of letters
// For each letter of the guess, test whether it is already in the map
// If no letter repeats, return true
// If a letter repeats, return false
From a higher level, aren’t we just creating a table of letters and testing each guess against it as fast as possible to check whether the guess is an isogram or not?
No, that’s not right. Ben is using Big O Notation to choose between algorithms that determine whether a word is an isogram, based on each algorithm’s complexity.
Big O Notation is used to describe an algorithm’s complexity; it has little to do with raw speed and everything to do with how the algorithm scales.
For example, if an algorithm is O(1), it has constant time complexity: it takes the same amount of time regardless of the input size. That time could be 1 second or 1 year, but it will be the same for any size n.
If an algorithm is O(n), it scales linearly. Say an algorithm takes 10 seconds on 1 word; with O(n) you would expect it to take roughly 100 seconds on 10 words (roughly, because Big O keeps only the dominant term, not the exact running time). Similarly, if it were O(n^2), with 10 words you would expect something in the range of 1,000 seconds.
What the algorithm does is loop over the letters of the guess and, for each letter, test whether it is already in the map. If it isn’t, add it to the map; otherwise it is already in the map, so that letter is not unique and the word is not an isogram.
This actually has worse performance than other methods; see this thread for other ways to solve it.