I'm running into an issue with GDB and some buffers allocated in kernel space. The buffers are allocated by a kernel module that is supposed to allocate contiguous blocks of memory, which are then memory-mapped into userspace via an mmap() call. GDB, however, can never seem to access these blocks. For example, after hitting a breakpoint in GDB:
(gdb) x /10xb 0x4567e000
0x4567e000: Cannot access memory at address 0x4567e000
However, the application's currently mapped memory regions in /proc//smaps show:
4567e000-456d3000 rwxs 8913f000 00:0d 883 /dev/cmem
Size: 340 kB
Rss: 340 kB
Pss: 0 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 0 kB
Referenced: 0 kB
Swap: 0 kB
The reason I'm even looking into this is that at some point during the run, this buffer address (or another allocated in a similar manner) causes a SIGSEGV.
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x49aea490 (LWP 652)]
0x402e4ea8 in fwrite () from /lib/libc.so.6
(gdb) where
#0 0x402e4ea8 in fwrite () from /lib/libc.so.6
#1 0x000eb394 in EncryptedWriter::Write (this=0x198600, buffRaw=0x4567e000 <Address 0x4567e000 out of bounds>, iLenRaw=719) at encrypted_writer.cpp:397
#2 0x0006b0f4 in EncryptionWrapper::Write (this=0x3ab2698, buffer=0x4567e000, size=719) at encryption.cpp:54
This segfault occurs despite the fact that the buffer had been used heavily right up until the crash, and /proc//smaps still shows the buffer mapped as above.
I am completely at a loss as to why this might be happening, and why the mapping seems valid in /proc but never in GDB.
2 Answers
#1
5
About why gdb cannot access the memory you want, I believe Linux does not make I/O memory accessible via ptrace().
According to cmemk.c (which I found in linuxutils_2_25.tar.gz), mmap() does indeed set the VM_IO flag on the memory in question.
To access this memory from gdb, add a function to your program that reads this memory and have gdb call this function.
#2
0
See the examining-mmaped-addresses-using-gdb discussion in another thread, and especially the answer there. You should be able to add a custom vm_operations_struct to your VMA in the module's mmap implementation.
Also see mm/memory.c in the Linux kernel. When get_user_pages() fails, the code will try to call the custom vma->vm_ops->access implementation in your driver to access the memory.
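A minimal kernel-side sketch of that idea is below. It is not a drop-in patch for cmemk.c; the function names (cmem_mmap, cmem_vma_access) are illustrative, and it assumes the module maps the memory with remap_pfn_range(), since generic_access_phys() in mm/memory.c is written for exactly such mappings.

```c
#include <linux/fs.h>
#include <linux/mm.h>

/* .access hook: lets ptrace()/GDB read and write the VM_IO mapping
 * by going through the driver instead of get_user_pages(). */
static int cmem_vma_access(struct vm_area_struct *vma, unsigned long addr,
                           void *buf, int len, int write)
{
    return generic_access_phys(vma, addr, buf, len, write);
}

static const struct vm_operations_struct cmem_vm_ops = {
    .access = cmem_vma_access,
};

static int cmem_mmap(struct file *file, struct vm_area_struct *vma)
{
    /* ... existing remap_pfn_range() setup for the buffer ... */
    vma->vm_ops = &cmem_vm_ops;
    return 0;
}
```

With the hook installed, `x /10xb 0x4567e000` in GDB should succeed, because ptrace() falls back to vma->vm_ops->access when direct page access fails.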