How do I create a file larger than 2GB on Linux/Unix?

Posted: 2021-10-06 01:55:28

I have a homework assignment in which I have to transfer a very big file from one source to multiple machines using a BitTorrent-like algorithm. Initially I cut the file into chunks and transfer the chunks to all the targets. The targets are smart enough to share the chunks they hold with the other targets. It works fine. I wanted to transfer a 4GB file, so I tarred together four 1GB files. Creating the 4GB tar file didn't produce an error, but at the other end, while reassembling all the chunks back into the original file, it errors out saying the file size limit was exceeded. How can I go about solving this 2GB limitation?

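The reassembly step described above is where such a 2GB limit typically bites. As a minimal sketch of that step (the chunk names and count are hypothetical, not from the question): without large file support compiled in, the writes below start failing with EFBIG once the output crosses 2GB on a 32-bit system.

#define _FILE_OFFSET_BITS 64   /* without this, a 32-bit build hits the 2GB wall */
#include <stdio.h>

int main(void) {
    FILE *out = fopen("assembled.tar", "wb");   /* hypothetical output name */
    if (!out) return 1;
    char buf[1 << 16];
    for (int i = 0; i < 4; i++) {               /* chunk.0 .. chunk.3, 1GB each */
        char name[32];
        snprintf(name, sizeof name, "chunk.%d", i);
        FILE *in = fopen(name, "rb");
        if (!in) return 1;
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
            fwrite(buf, n, 1, out);             /* append chunk to the output */
        fclose(in);
    }
    fclose(out);
    return 0;
}
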
5 Answers

#1 (11 votes)

I can think of two possible reasons:

  • You don't have Large File Support enabled in your Linux kernel
  • Your application isn't compiled with large file support (you might need to pass gcc extra flags to tell it to use 64-bit versions of certain file I/O functions, e.g. gcc -D_FILE_OFFSET_BITS=64; see the probe sketch after this list)
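
A quick way to check the second point (my own sketch, not part of the original answer) is to print sizeof(off_t). A 4-byte off_t caps file offsets at 2GB-1; an 8-byte one does not:

/* probe.c (hypothetical file name)
   gcc probe.c                        -> typically prints 4 on 32-bit systems
   gcc -D_FILE_OFFSET_BITS=64 probe.c -> prints 8, i.e. large file support is on */
#include <stdio.h>
#include <sys/types.h>

int main(void) {
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));
    return 0;
}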

#2 (4 votes)

This depends on the filesystem type. With ext3, I have no such problems with files significantly larger than that.

If the underlying disk is FAT, NTFS or CIFS (SMB), you must also make sure you use the latest version of the appropriate driver. Some older drivers have file-size limits like the one you are experiencing.

#3 (3 votes)

Could this be related to a system limit configuration?

$ ulimit -a                        # look for the "file size" entry; "unlimited" means no cap
$ vi /etc/security/limits.conf     # per-user limits are configured here
vivek       hard  fsize  1024000   # example: hard file-size cap of 1024000 KB (~1GB) for user vivek

If you do not want any limit, remove the fsize entry from /etc/security/limits.conf.

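To check the same limit from inside a program (my own sketch; the answer itself only shows the shell side), the "file size" value that ulimit reports corresponds to RLIMIT_FSIZE:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("file size limit: unlimited\n");      /* no per-process cap */
    else
        printf("file size limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);     /* writes past this raise SIGXFSZ */
    return 0;
}
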
#4 (1 vote)

If your system supports it, you can get hints with: man largefile.

#5 (1 vote)

You should use fseeko and ftello; see fseeko(3). Note that you should define _FILE_OFFSET_BITS 64 before your includes:

#define _FILE_OFFSET_BITS 64   /* must appear before any #include */
#include <stdio.h>             /* fseeko/ftello now use a 64-bit off_t */
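
A minimal end-to-end sketch building on those two lines (the file name is my own example, not from the answer): seek past the 2GB mark and confirm the position with ftello. Without _FILE_OFFSET_BITS 64, a 32-bit build fails here because a 3GB offset doesn't fit in a 32-bit off_t:

#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <sys/types.h>

int main(void) {
    FILE *f = fopen("big.bin", "wb");
    if (!f) return 1;
    /* a 3GB offset is representable only with a 64-bit off_t */
    if (fseeko(f, (off_t)3 * 1024 * 1024 * 1024, SEEK_SET) != 0) {
        perror("fseeko");
        return 1;
    }
    fputc('\0', f);                            /* one byte at the 3GB mark */
    printf("position: %lld\n", (long long)ftello(f));
    fclose(f);                                 /* leaves a ~3GB sparse file */
    return 0;
}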
