If you were to implement a system using this theoretical model then it would work, but not particularly efficiently. Both operating system and CPU designers try hard to extract more performance from the system. Apart from making the processors, memory and so on faster, the best approach is to maintain caches of useful information and data that speed up some operations. Linux uses a number of memory management related caches:
One commonly implemented hardware cache is in the CPU; a cache of Page Table Entries. In this case, the CPU does not read the page table directly but instead caches translations for pages as it needs them. These are the Translation Look-aside Buffers and contain copies of the information kept in the operating system's page table. When a reference to a virtual address is made, the CPU will attempt to find a matching TLB entry. If it finds one, it can directly translate the virtual address into a physical one and perform the correct operation on the data. If the CPU cannot find a matching TLB entry then it must get the operating system to help. It does this by raising an exception. In essence this means signalling to the operating system that a TLB miss has occurred. A system specific mechanism is used to deliver that exception to the operating system code that can fix things up. That code fixes things up by generating a TLB entry for the address mapping. When the exception has been cleared, the CPU will make another attempt to translate the virtual address. This time it will work because there is now a valid entry in the TLB for that address.
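The lookup, miss, fix-up and retry sequence can be illustrated with a small software model of a TLB. The sketch below is not Linux code; the TLB size, the direct-mapped organisation and the os_handle_tlb_miss() hook standing in for the operating system's page table walk are all assumptions made purely for illustration.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64          /* hypothetical TLB size        */
    #define PAGE_SHIFT  12          /* 4 KiB pages                  */

    struct tlb_entry {
        bool     valid;
        uint64_t vpn;               /* virtual page number          */
        uint64_t pfn;               /* physical frame number        */
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Hypothetical hook: stands in for the operating system walking
     * its page tables on a TLB miss and returning the physical
     * frame that backs the given virtual page.                     */
    extern uint64_t os_handle_tlb_miss(uint64_t vpn);

    static uint64_t translate(uint64_t vaddr)
    {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;
        uint64_t offset = vaddr & ((1ULL << PAGE_SHIFT) - 1);
        size_t   slot   = vpn % TLB_ENTRIES;   /* direct-mapped for simplicity */

        /* Fast path: a valid TLB entry already maps this page.     */
        if (tlb[slot].valid && tlb[slot].vpn == vpn)
            return (tlb[slot].pfn << PAGE_SHIFT) | offset;

        /* TLB miss: "raise the exception" by asking the operating
         * system to fix things up, cache the translation it
         * supplies, then complete the access.                      */
        uint64_t pfn = os_handle_tlb_miss(vpn);
        tlb[slot] = (struct tlb_entry){ .valid = true, .vpn = vpn, .pfn = pfn };

        return (pfn << PAGE_SHIFT) | offset;
    }

In real hardware the fast path is done entirely by the CPU and only the miss path involves the operating system; the point of the sketch is simply the order of events: look up, miss, fill the entry, retry.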
The drawback of using caches, hardware or otherwise, is that in order to save effort Linux must spend more time and space maintaining these caches, and if the caches become corrupted then the system will crash.