Java I/O theory in system level

Date: 2023-03-09 18:43:00

References:

  • JAVA NIO: A Brief Look at Memory-Mapped Files and DirectMemory (JAVA NIO之浅谈内存映射文件原理与DirectMemory)

  • Java NIO 2.0 : Memory-Mapped Files | MappedByteBuffer Tutorial

How Java I/O Works Internally at Lower Level?

1. Java I/O theory at the lower system level

Before reading this post, we assume you are familiar with basic Java I/O operations.

Here is the content:

  • Buffer Handling and Kernel vs User Space
  • Virtual Memory
  • Memory Paging
  • File I/O
  • File Locking
  • Stream I/O

1.1 Buffer Handling and Kernel vs User Space

Buffers, and how buffers are handled, are the basis of all I/O. "Input/Output" means nothing more than moving data out of a buffer in user space to somewhere else, or moving data from somewhere else into a buffer in user space.

Commonly, a process sends an I/O request to the OS asking either that the data in a user-space buffer be drained into a kernel-space buffer (a write operation) or that a user-space buffer be filled from kernel space (a read operation), and the OS performs this surprisingly complex transfer on the process's behalf. Here is the data flow diagram:

[Figure: block data moving from an external device (hard disk) through a kernel-space buffer to a user-space buffer]

The image above shows a simplified "logical" diagram of how block data moves from an external device, such as a hard disk, to memory in user space. First, the process issues the read() system call; the kernel catches the call and issues a command to the disk controller to fetch the data from disk. The disk controller writes the data directly into a kernel memory buffer by DMA. The kernel then copies the data from the temporary buffer in kernel space to the buffer in user space. A write operation is similar, with the copies going in the opposite direction.

After the first read operation, the kernel will cache and/or prefetch data, so the data you request may already be in kernel space. If you read a big file 3 times, you will find that the second and third reads are far faster than the first. Here is an example:

    static void cpFileStreamIO() throws IOException {
        String inFileStr = "***kimchi_v2.pdf";
        String outFileStr = "./kimchi_v2.pdf";
        int repeatedTimes = 5;
        System.out.println("Using Buffered Stream");
        try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(inFileStr));
             BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(outFileStr))) {
            for (int i = 0; i < repeatedTimes; i++) {
                copyFile(in, out);
            }
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    private static void copyFile(BufferedInputStream in, BufferedOutputStream out) throws IOException {
        long startTime = System.nanoTime();
        int byteRead;
        while ((byteRead = in.read()) != -1) {
            out.write(byteRead);
        }
        long elapsedTime = System.nanoTime() - startTime;
        System.out.println("Elapsed time is " + (elapsedTime / 1000000.0) + " msec");
    }

First we create a BufferedInputStream and a BufferedOutputStream instance, then we copy the file contents from the BufferedInputStream to the BufferedOutputStream 5 times. Here is the console output:

Using Buffered Stream
Elapsed time is 85.175 msec
Elapsed time is 0.005 msec
Elapsed time is 0.003 msec
Elapsed time is 0.003 msec
Elapsed time is 0.004 msec

In this example, the first copy takes far longer because the kernel needs to read the data from the hard disk; from the second time on, the requested data can be served from buffers already in kernel space.
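The user-space buffer in this flow is explicit in the NIO API. Here is a minimal sketch (the class name and temp-file setup are ours, for illustration) of a read through a FileChannel into a user-space ByteBuffer:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelReadDemo {
    // read() asks the kernel to fill our user-space buffer: the kernel copies
    // the bytes from its own buffer (the page cache) into ours.
    static String readWithChannel(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(4096); // user-space buffer
            int n = ch.read(buf);                       // one read request to the OS
            buf.flip();
            byte[] bytes = new byte[n];
            buf.get(bytes);
            return new String(bytes);
        }
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo: write a small temp file, then read it back.
        Path tmp = Files.createTempFile("io-demo", ".txt");
        Files.write(tmp, "hello kernel buffers".getBytes());
        System.out.println(readWithChannel(tmp)); // prints: hello kernel buffers
        Files.delete(tmp);
    }
}
```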

1.2 Virtual Memory

Virtual memory means that artificial (virtual) addresses are used in place of physical memory addresses (RAM or other internal storage). Virtual memory brings 2 advantages:

1. More than one virtual address can be mapped to the same physical memory location, which can reduce redundancy of data in memory.

2. The virtual memory space can be larger than the physical memory actually available. For example, a user process can allocate 4 GB of memory even when only 1 GB of RAM is installed.

So to transfer data between user space and kernel space, we can simply map a virtual address in kernel space and a virtual address in user space to the same physical address. DMA (which can access only physical memory addresses) can then fill a buffer that is simultaneously visible to both the kernel and a user-space process. This eliminates copies between kernel and user space, but the kernel and user buffers must share the same page alignment. Buffers must also be a multiple of the block size used by the disk controller (usually 512 bytes). Both virtual and physical memory are divided into pages, and the virtual and physical page sizes are always the same.
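Java exposes this idea of memory the OS can address directly through direct buffers: ByteBuffer.allocateDirect reserves memory outside the JVM heap (this is the "DirectMemory" mentioned in the references), so channel I/O can avoid one extra copy between the JVM heap and native memory. A minimal sketch:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // A direct buffer lives outside the JVM heap, in native memory the
        // OS can address directly, so I/O through it can skip one copy.
        ByteBuffer direct = ByteBuffer.allocateDirect(4096); // page-sized
        ByteBuffer heap   = ByteBuffer.allocate(4096);       // ordinary heap buffer

        System.out.println(direct.isDirect()); // true
        System.out.println(heap.isDirect());   // false
    }
}
```

The 4096-byte size here is deliberate: allocating in multiples of the page size keeps the buffer friendly to the page-alignment requirement described above.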

1.3 Memory Paging

Aligning memory page sizes as multiples of the disk block size allows the kernel to directly command the disk controller to write memory pages out to disk and reload them later, and all disk I/O is done at the page level. Modern CPUs contain a subsystem known as the Memory Management Unit (MMU). This device logically sits between the CPU and physical memory, and holds the mapping information the CPU needs to translate virtual addresses to physical memory addresses.
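The translation the MMU performs is simple arithmetic: a virtual address splits into a page number (looked up in the page table) and an offset within that page. A sketch, assuming a common 4 KB page size:

```java
public class PageTranslation {
    static final int PAGE_SIZE = 4096; // assume 4 KB pages, a common default

    // Split a virtual address the way an MMU does: the page number is looked
    // up in the page table; the offset is carried over unchanged.
    static long pageNumber(long vaddr) { return vaddr / PAGE_SIZE; }
    static long pageOffset(long vaddr) { return vaddr % PAGE_SIZE; }

    public static void main(String[] args) {
        long vaddr = 0x12345L;                 // 74565 in decimal
        System.out.println(pageNumber(vaddr)); // 18  (74565 / 4096)
        System.out.println(pageOffset(vaddr)); // 837 (74565 % 4096)
    }
}
```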

1.4 File I/O

File I/O always occurs within the context of a filesystem. A filesystem is quite a different concept from a disk: it is a higher level of abstraction, a particular method of arranging and interpreting data. Our processes always interact with the filesystem, not with the disk directly. The filesystem defines the concepts of names, paths, files, directories, and other abstract objects.

A filesystem organizes the disk as a sequence of uniformly sized data blocks. Some blocks store inodes, which hold the metadata describing where a file's data lives; other blocks store the file data itself. Filesystem page sizes range from 2 KB to 8 KB, as multiples of the memory page size.

Here is the process of reading a file through the filesystem:

1. Determine which filesystem pages are needed (according to the path of the file; e.g. if a file path has "/root" as a prefix, the file is looked up on the disk mounted at the "/root" mountpoint).

2. Allocate enough memory pages in kernel space to hold the identified filesystem pages.

3. Establish mappings between those memory pages and the filesystem pages stored on disk.

4. When an instruction running on the CPU touches a virtual address whose page is not yet in memory (the MMU finds no valid mapping for the desired page), the CPU raises a page fault for that memory page.

5. The operating system then allocates the pages to the process, fills them with data from disk, configures the MMU, and the CPU continues its work.

The filesystem data is cached like other memory pages. On subsequent I/O requests, some or all of the file data may still be present in physical memory and can be reused without rereading from disk, just like in the file-copying example in 1.1.
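The mapping steps above are exactly what FileChannel.map exposes in Java: the returned MappedByteBuffer is a window onto the page cache, and touching it may page-fault the data in. A minimal sketch (the class name and temp-file setup are ours, for illustration):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    // Map a file into the process address space; reads then go through the
    // page cache directly, without an explicit copy into a user-space buffer.
    static String readMapped(Path file, int length) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, length);
            byte[] bytes = new byte[length];
            map.get(bytes); // touching the mapping may page-fault data in
            return new String(bytes);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("mmap-demo", ".txt");
        Files.write(tmp, "mapped".getBytes());
        System.out.println(readMapped(tmp, 6)); // prints: mapped
        Files.delete(tmp);
    }
}
```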

1.5 File Locking

File locking is a scheme in which a process can prevent other processes from accessing a file, or a region of it, while it is working with that data.
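In Java, file locks are obtained from a FileChannel. A minimal sketch acquiring and releasing an exclusive lock on a temporary file (note that whether the lock actually blocks other processes, i.e. advisory vs mandatory locking, depends on the OS):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileLockDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("lock-demo", ".dat");
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // lock() blocks until an exclusive lock on the whole file is granted.
            try (FileLock lock = ch.lock()) {
                System.out.println(lock.isValid());  // true while held
                System.out.println(lock.isShared()); // false: exclusive lock
            } // lock released here
        }
        Files.delete(tmp);
    }
}
```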

1.6 Stream I/O

Not all I/O is block-oriented; there is also stream I/O, modeled on a pipeline. The bytes of an I/O stream must be accessed sequentially. TTY (console) devices, printer ports, and network connections are common examples of streams.

Streams are generally, but not necessarily, slower than block devices and are often the source of intermittent input. Most operating systems allow streams to be placed into non-blocking mode, which permits a process to check whether input is available on the stream without getting stuck waiting for it.

Another capability of streams is readiness selection. This is similar to non-blocking mode, but offloads the check for whether a stream is ready to the operating system. The operating system can be told to watch a collection of streams and return an indication to the process of which streams are ready. This permits a process to multiplex many active streams using common code and a single thread by leveraging the readiness information returned by the operating system. Readiness selection is widely used in network servers to handle large numbers of connections and is essential for high-volume scaling.
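Java's readiness selection lives in the Selector API. A self-contained sketch using a Pipe in place of real network connections (a Pipe's source channel is selectable, so the same pattern applies to sockets): nothing is ready before data is written, and the selector reports one ready channel after.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class ReadinessDemo {
    public static void main(String[] args) throws IOException {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false); // non-blocking mode is required for registration

        try (Selector selector = Selector.open()) {
            // Tell the OS to watch this channel for readability.
            pipe.source().register(selector, SelectionKey.OP_READ);

            // Nothing written yet: selectNow() reports no ready channels.
            System.out.println(selector.selectNow()); // 0

            // After writing to the sink, the source channel becomes readable.
            pipe.sink().write(ByteBuffer.wrap(new byte[] {1, 2, 3}));
            System.out.println(selector.select());    // 1
        }
    }
}
```

A real server would loop over selector.selectedKeys() and service each ready connection in turn, which is how one thread multiplexes many streams.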