So what is "real time" anyway? I spent a lot of time as an electronics engineer using microprocessors (Z80s) in industrial systems that processed data in real time and were part of a production control system. These simple systems were interrupt-driven and often contained more than one processor, so the workload could be divided up.
Windows NT is not designed for real-time work. It schedules time between many competing tasks, and although the programmer can change the priority of these tasks, there is still no absolute way to guarantee that all the time is spent running your application. This is reasonable if you think about it for a moment. Consider memory management, for example. Some part of the CPU time must be spent keeping track of memory pages. Some part of the CPU time must be spent dealing with I/O requests. While the CPU is doing these other tasks, it's not running your application. Of course, it may well be that your application is the one causing the memory pages to be swapped, the I/O system to be used, and so on. The bottom line is that you just can't expect to get all the CPU time.
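As a minimal sketch of what "changing the priority of these tasks" looks like, here is how a Win32 program can raise its process priority class and its thread priority using the standard SetPriorityClass and SetThreadPriority calls. Even at these elevated priorities, the point above still holds: the kernel's own work continues, so this buys you a bigger share of the CPU, not a guarantee.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Raise the whole process into the high-priority class.
       REALTIME_PRIORITY_CLASS also exists, but it can starve
       system threads (disk flushing, even the mouse), so
       HIGH_PRIORITY_CLASS is usually the sensible ceiling. */
    if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
        fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());

    /* Within that class, boost this thread's relative priority. */
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST))
        fprintf(stderr, "SetThreadPriority failed: %lu\n", GetLastError());

    /* ... time-critical work goes here ... */

    return 0;
}
```

Both calls return FALSE on failure, which is worth checking: raising priority can require privileges the process does not have.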
But does that matter? In many cases, the overhead required to run the system is tiny compared to the CPU power available on a modern machine, so for many practical purposes, your application does have all the CPU to play with. You still have to deal with issues such as variable interrupt latency, but on the whole you have a lot of power at your disposal.
So if you feel that a 66 MHz Pentium box can do what you want, there's a very real possibility that the same box can run your task under Windows NT with little or no overhead.