This question already has an answer here:
- Downloading large files reliably in PHP (13 answers)
Using PHP, I am trying to serve large files (possibly up to 200MB) which aren't in a web-accessible directory due to authorization issues. Currently, I use a readfile() call along with some headers to serve the file, but it seems that PHP loads the whole file into memory before sending it. I intend to deploy on a shared hosting server, which won't allow me to use much memory or add my own Apache modules such as X-Sendfile.
I can't let my files be in a web-accessible directory for security reasons. Does anybody know a method that is less memory-intensive and that I could deploy on a shared hosting server?
EDIT:
if (/* My authorization here */) {
    $path = "/uploads/";
    $name = $row[0];           // Filename fetched from MySQL
    $fullname = $path . $name; // Build the full path
    $fd = fopen($fullname, "rb");
    if ($fd) {
        $fsize = filesize($fullname);
        $path_parts = pathinfo($fullname);
        $ext = strtolower($path_parts["extension"]);
        switch ($ext) {
            case "pdf":
                header("Content-Type: application/pdf");
                break;
            case "zip":
                header("Content-Type: application/zip");
                break;
            default:
                header("Content-Type: application/octet-stream");
                break;
        }
        header("Content-Disposition: attachment; filename=\"" . $path_parts["basename"] . "\"");
        header("Content-Length: $fsize");
        header("Cache-Control: private"); // Use this to open files directly
        while (!feof($fd)) {
            $buffer = fread($fd, 1024 * 1024); // Read in 1MB chunks
            echo $buffer;
            ob_flush();
            flush(); // These two flush calls seem to have helped with performance
        }
        fclose($fd); // Only close a handle that was actually opened
    } else {
        echo "Error opening file";
    }
}
4 Answers
#1
11
If you use fopen and fread instead of readfile, that should solve your problem.
There's a solution in the PHP readfile documentation showing how to use fread to do what you want.
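A minimal sketch of that fread-based approach, in the spirit of the readfile_chunked pattern from the docs (the function name, chunk size, and return value here are illustrative choices, not the exact code from the manual):

```php
<?php
// Stream a file to the client in fixed-size pieces so memory use stays
// at roughly one buffer's worth, regardless of file size.
function readfile_chunked($filename, $chunkSize = 1024 * 1024)
{
    $handle = fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    $bytesSent = 0;
    while (!feof($handle)) {
        $buffer = fread($handle, $chunkSize);
        echo $buffer;
        $bytesSent += strlen($buffer);
        // Push the buffer out to the client before reading the next chunk.
        if (ob_get_level() > 0) {
            ob_flush();
        }
        flush();
    }
    fclose($handle);
    return $bytesSent;
}
```

Called after the usual Content-Type/Content-Disposition headers, this keeps peak memory near the chunk size instead of the full file size.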
#2
4
To download large files from the server, I changed the following settings in the php.ini file:

upload_max_filesize = 1500M
max_input_time = 1000
memory_limit = 640M
max_execution_time = 1800
post_max_size = 2000M

Now I am able to upload and download a 175MB video on the server. Since I have a dedicated server, making these changes was easy.
Below is the PHP script to download the file. I have not made any changes to this code snippet for large file sizes.
// Begin writing headers
ob_clean(); // Clear any previously written output in the buffer
if ($filetype == 'application/zip') {
    if (ini_get('zlib.output_compression'))
        ini_set('zlib.output_compression', 'Off');
    $fp = @fopen($filepath, 'rb');
    if (strstr($_SERVER['HTTP_USER_AGENT'], "MSIE")) {
        header("Content-Type: $content_type");
        header('Content-Disposition: attachment; filename="' . $filename . '"');
        header('Expires: 0');
        header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
        header("Content-Transfer-Encoding: binary");
        header('Pragma: public');
        header("Content-Length: " . filesize(trim($filepath)));
    } else {
        header("Content-Type: $content_type");
        header('Content-Disposition: attachment; filename="' . $filename . '"');
        header("Content-Transfer-Encoding: binary");
        header('Expires: 0');
        header('Pragma: no-cache');
        header("Content-Length: " . filesize(trim($filepath)));
    }
    fpassthru($fp);
    fclose($fp);
} elseif ($filetype == 'audio' || $filetype == 'video') {
    global $mosConfig_absolute_path, $my;
    ob_clean();
    header("Pragma: public");
    header('Expires: 0');
    header('Cache-Control: no-store, no-cache, must-revalidate');
    header('Cache-Control: pre-check=0, post-check=0, max-age=0');
    header("Cache-Control: public");
    header("Content-Description: File Transfer");
    header("Content-Type: application/force-download");
    header("Content-Type: $content_type");
    header("Content-Length: " . filesize(trim($filepath)));
    header("Content-Disposition: attachment; filename=\"$filename\"");
    // Force the download
    header("Content-Transfer-Encoding: binary");
    @readfile($filepath);
} else { // For all other file types except zip and audio/video
    ob_clean();
    header("Pragma: public");
    header('Expires: 0');
    header('Cache-Control: no-store, no-cache, must-revalidate');
    header('Cache-Control: pre-check=0, post-check=0, max-age=0');
    header("Cache-Control: public");
    header("Content-Description: File Transfer");
    header("Content-Type: $content_type");
    header("Content-Length: " . filesize(trim($filepath)));
    header("Content-Disposition: attachment; filename=\"$filename\"");
    // Force the download
    header("Content-Transfer-Encoding: binary");
    @readfile($filepath);
}
exit;
#3
3
If you care about performance, there is X-Sendfile, available as a module for Apache, nginx, and lighttpd. Check the user comments in the readfile() documentation.
There are also modules for these web servers which accept a URL with an additional hash value that allows downloading the file for a short time period. This can also be used to solve authorization issues.
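One common shape for such time-limited hashed URLs can also be done in plain PHP. This is only a sketch of the general idea, not the API of any particular server module; the query-parameter names and the shared $secret key are illustrative assumptions:

```php
<?php
// Issue a download URL that is only valid until $expires, signed with an
// HMAC so the client cannot forge links to other files.
function make_download_url($file, $secret, $ttl = 600)
{
    $expires = time() + $ttl;
    $token = hash_hmac('sha256', $file . '|' . $expires, $secret);
    return '/download.php?file=' . rawurlencode($file)
         . "&expires=$expires&token=$token";
}

// In download.php: recompute the HMAC and reject expired or forged links
// before streaming the file.
function verify_download_token($file, $expires, $token, $secret)
{
    if (time() > (int)$expires) {
        return false; // link has expired
    }
    $expected = hash_hmac('sha256', $file . '|' . $expires, $secret);
    return hash_equals($expected, $token); // constant-time comparison
}
```

On shared hosting this gives you short-lived links without any server modules, at the cost of still streaming the bytes through PHP.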
#4
2
You could also handle this in the style of the Gordian Knot - that is to say, sidestep the problem entirely. Keep the files in a non-accessible directory, and when a download is initiated you can simply

$tempstring = rand();
symlink('/filestore/filename.extension', '/www/downloads/' . $tempstring . '-filename.extension');
echo "Your download is available here: <a href='/downloads/" . $tempstring . "-filename.extension'>Download</a>";

and set up a cron job to unlink() any download links older than 10 minutes. Virtually no processing of your data is required, no massaging of HTTP headers, etc.
There are even a couple of libraries out there for just this purpose.
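The cron cleanup step could look something like the sketch below; the directory path and ten-minute cutoff mirror the answer above, and the function name is just for illustration:

```php
<?php
// Remove download symlinks older than $maxAge seconds. Run from cron,
// e.g. every few minutes, against the web-accessible downloads directory.
function prune_old_links($dir, $maxAge = 600)
{
    $removed = 0;
    foreach (glob($dir . '/*') as $entry) {
        // lstat() reports the symlink's own mtime, not the target's,
        // so the age reflects when the download link was created.
        if (is_link($entry) && (time() - lstat($entry)['mtime']) > $maxAge) {
            unlink($entry);
            $removed++;
        }
    }
    return $removed;
}

prune_old_links('/www/downloads');
```

Checking is_link() before unlinking means the job can never delete the real files, only the expired links pointing at them.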