Measuring disk use by individual applications is important, but it is not easy. To see how efficiently your application uses the disks, chart the memory and cache counters.
Applications rarely read from or write to the disk directly. Instead, the file system first maps application code and data into the file system cache and copies it from the cache into the application's working set. When the application creates or changes data, the data is mapped into the cache and then written back to disk in batches. The exceptions occur when an application requests a write-through to disk or tells the file system not to use the cache at all for a file, usually because the application is doing its own buffering.
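For example, a Win32 application requests these behaviors through flags when it opens the file. The following is a minimal sketch, not a complete program pattern; the file name and data are illustrative. FILE_FLAG_WRITE_THROUGH forces each write to reach the disk before the call returns, and FILE_FLAG_NO_BUFFERING (mentioned in the comments but not used here) disables caching of the file entirely and obliges the application to do sector-aligned I/O and its own buffering.

```c
/* Minimal sketch: opening a file so writes bypass normal cache batching.
 * FILE_FLAG_WRITE_THROUGH: data still passes through the cache, but each
 *   write is committed to disk before WriteFile returns.
 * FILE_FLAG_NO_BUFFERING (not used below): the file system does not cache
 *   the file at all; the application must buffer and align I/O itself.
 * The file name is illustrative. Build as a Win32 console program.
 */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("example.dat",
                           GENERIC_WRITE,
                           0,                       /* no sharing */
                           NULL,
                           CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                           NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    const char msg[] = "logged record\n";
    DWORD written = 0;
    if (!WriteFile(h, msg, sizeof msg - 1, &written, NULL))
        fprintf(stderr, "WriteFile failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}
```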
Fortunately, the same design characteristic that improves an application's use of the cache and memory also reduces its transfers from disk. That characteristic is locality of reference: keeping a program's references to the same data close together in sequence or in time. When references are localized, the data the program needs is more likely to be in its working set, or at least in the cache, and is less likely to have been paged out when it is needed. Also, when a program reads sequentially, the Cache Manager performs read-ahead: it recognizes the request pattern and reads larger blocks of data on each transfer.
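The effect of locality is easy to see even without touching a file. In the sketch below (the array dimensions are illustrative), both loops read the same data, but the row-major loop walks memory sequentially while the column-major loop strides across it, touching many more pages per useful byte and so enlarging the working set.

```c
/* Minimal sketch of locality of reference. Both functions sum every element
 * once. The row-major loop uses each cache line and page fully before moving
 * on; the column-major loop strides across rows, so it pulls in far more
 * pages per unit of useful work and is more likely to cause paging.
 */
#include <stdlib.h>

#define ROWS 4096
#define COLS 4096

static long sum_row_major(int (*a)[COLS])
{
    long sum = 0;
    for (int r = 0; r < ROWS; r++)        /* sequential: good locality */
        for (int c = 0; c < COLS; c++)
            sum += a[r][c];
    return sum;
}

static long sum_col_major(int (*a)[COLS])
{
    long sum = 0;
    for (int c = 0; c < COLS; c++)        /* strided: poor locality */
        for (int r = 0; r < ROWS; r++)
            sum += a[r][c];
    return sum;
}

int main(void)
{
    int (*a)[COLS] = calloc(ROWS, sizeof *a);   /* about 64 MB of data */
    if (!a)
        return 1;
    long s = sum_row_major(a) + sum_col_major(a);
    free(a);
    return (int)(s & 1);    /* keep the sums from being optimized away */
}
```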
If your application is causing a lot of paging, run it under controlled circumstances while logging Cache: Copy Read Hits %, Cache: Read Ahead/sec, Memory: Pages Input/sec, and Memory: Pages Output/sec. Then try reorganizing or redesigning your data structures and repeat the test.
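One way to collect these counters while you drive the application is programmatically, with the Windows Performance Data Helper (PDH) API. The sketch below is an assumption-laden illustration rather than a prescribed procedure: the one-second interval and 30-sample run are arbitrary, PdhAddEnglishCounterA requires Windows Vista or later, and on current systems the cache read-ahead counter is named Read Aheads/sec.

```c
/* Minimal sketch: sample the counters named above once per second while the
 * application under test runs. Link with pdh.lib. Counter path spellings can
 * vary slightly between Windows versions.
 */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

#pragma comment(lib, "pdh.lib")

int main(void)
{
    static const char *paths[] = {
        "\\Cache\\Copy Read Hits %",
        "\\Cache\\Read Aheads/sec",
        "\\Memory\\Pages Input/sec",
        "\\Memory\\Pages Output/sec",
    };
    enum { NPATHS = sizeof paths / sizeof paths[0] };

    PDH_HQUERY query;
    PDH_HCOUNTER counters[NPATHS];

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;
    for (int i = 0; i < NPATHS; i++)
        if (PdhAddEnglishCounterA(query, paths[i], 0, &counters[i]) != ERROR_SUCCESS)
            return 1;

    PdhCollectQueryData(query);            /* prime the rate counters */
    for (int sample = 0; sample < 30; sample++) {
        Sleep(1000);
        PdhCollectQueryData(query);
        for (int i = 0; i < NPATHS; i++) {
            PDH_FMT_COUNTERVALUE v;
            if (PdhGetFormattedCounterValue(counters[i], PDH_FMT_DOUBLE,
                                            NULL, &v) == ERROR_SUCCESS)
                printf("%-28s %10.2f\n", paths[i], v.doubleValue);
        }
        printf("\n");
    }

    PdhCloseQuery(query);
    return 0;
}
```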
Also, use the file operation counters on the System object.
The relevant System counters are File Data Operations/sec, File Read Operations/sec, File Write Operations/sec, File Read Bytes/sec, File Write Bytes/sec, File Control Operations/sec, and File Control Bytes/sec.
These count file control and data operations for the whole system. Unlike the disk counters, they count read and write requests from the file system to devices and include time in the cache.
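If you collect these counters programmatically, the same PDH query sketch shown earlier applies; only the counter paths change. The paths below are illustrative and assume the standard System object counter names.

```c
/* Illustrative counter paths for the System object's file operation counters;
 * substitute this array for the paths[] array in the earlier PDH sketch.
 */
static const char *system_paths[] = {
    "\\System\\File Data Operations/sec",
    "\\System\\File Read Operations/sec",
    "\\System\\File Write Operations/sec",
    "\\System\\File Read Bytes/sec",
    "\\System\\File Write Bytes/sec",
    "\\System\\File Control Operations/sec",
    "\\System\\File Control Bytes/sec",
};
```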