Importing a large file into a MySQL DB

Date: 2022-08-05 02:48:07

I want to run about 50,000 'insert' MySQL queries against a MySQL DB. For this I have 2 options:

1- Directly import the (.sql) file: the following error occurs: "You probably tried to upload too large file. Please refer to documentation for ways to workaround this limit."

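For option 1, that upload error usually comes from PHP's own limits rather than from MySQL. A sketch of the relevant php.ini directives (the values shown are illustrative; pick them to fit your dump's size):

```ini
; php.ini - raise PHP's upload limits for the import tool (illustrative values)
upload_max_filesize = 64M   ; maximum size of a single uploaded file
post_max_size = 64M         ; must be >= upload_max_filesize
max_execution_time = 300    ; allow a long-running import
memory_limit = 256M         ; headroom if the tool buffers the file
```

The web server (or PHP-FPM) has to be restarted for changes to take effect.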
2- Use PHP code to insert these queries in chunks read from the (.sql) file. Here is my code:

<?php

// Configure DB
include "config.php";

// Get file data
$file = file('country.txt'); 

// Set pointers & position variables
$position = 0;
$eof = 0; 

while ($eof < sizeof($file))
{
    for ($i = $position; $i < ($position + 2); $i++)
    {
        if ($i < sizeof($file))
        {
            $flag = mysql_query($file[$i]);

            // mysql_query() returns false on failure; isset($flag) is
            // always true here, so test the return value itself
            if ($flag !== false)
            {
                echo "Insert Successfully<br />";
                $position++;
            }
            else
            {
                echo mysql_error() . "<br>\n";
            }
        }

        else
        {
            echo "<br />End of File";
            break;
        }
    }

    $eof++;
}

?>

But a memory-size error occurs even though I have extended the memory limit from 128M to 256M and even 512M.

Then I thought that if I could load a limited number of rows from the (.sql) file, say 1000 at a time, and execute the MySQL queries, it might import all the records from the file into the DB. But I have no idea how to track the file position from start to end, or how to update the start and end positions so that it does not fetch rows already fetched from the .sql file.

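One way to do the batching described above is to remember the byte offset with ftell() and resume with fseek(), so each run picks up exactly where the last one stopped. A minimal sketch, assuming the same config.php and country.txt as above, one statement per line, and a live MySQL connection (the import.offset file name is a hypothetical choice):

```php
<?php
// Sketch: import the dump 1000 lines per run, resuming from a saved
// byte offset so previously imported rows are never re-read.
include 'config.php';

$batchSize  = 1000;
$offsetFile = 'import.offset';   // stores where the last run stopped
$offset     = file_exists($offsetFile)
            ? (int) file_get_contents($offsetFile)
            : 0;

$fh = fopen('country.txt', 'r');
fseek($fh, $offset);             // skip everything already imported

for ($n = 0; $n < $batchSize && ($line = fgets($fh)) !== false; $n++)
{
    $line = trim($line);

    if ($line !== '' && mysql_query($line) === false)
    {
        echo mysql_error() . "<br>\n";
    }
}

// remember how far we got, ready for the next batch/request
file_put_contents($offsetFile, (string) ftell($fh));
fclose($fh);
?>
```

Reloading the page (or calling the script from a cron job) then imports the next 1000 lines each time, with only one line in memory at once.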
4 Answers

#1


4  

Here is the code you need, now prettified! =D

<?php

include('config.php');

$file = @fopen('country.txt', 'r');

if ($file)
{
    while (!feof($file))
    {
        $line = fgets($file);

        if ($line === false) // fgets() can return false at EOF
        {
            break;
        }

        $flag = mysql_query(trim($line));

        // mysql_query() returns false on error, and isset($flag) would
        // always be true, so test the return value directly
        if ($flag !== false)
        {
            echo 'Insert Successfully<br />';
        }
        else
        {
            echo mysql_error() . '<br/>';
        }

        flush();
    }

    fclose($file);
}

echo '<br />End of File';

?>

Basically it's a less greedy version of your code: instead of loading the whole file into memory, it reads and executes small chunks (one-liners) of SQL statements.

#2


2  

Instead of loading the entire file into memory, which is what happens when using the file function, a possible solution would be to read it line by line, using a combination of fopen, fgets, and fclose -- the idea being to read only what you need, deal with the lines you have, and only then read the next couple.

Additionally, you might want to take a look at this answer: Best practice: Import mySQL file in PHP; split queries

There is no accepted answer yet, but some of the given answers might already help you...

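If the dump contains statements that span multiple lines, the line-by-line approach above needs to buffer lines until a full statement ends. A hypothetical helper sketching that split (the simple handling of `--` comments is an assumption about the dump's format; it would not cope with semicolons inside string literals):

```php
<?php
// Buffer physical lines into complete SQL statements, splitting on a
// trailing semicolon. Blank lines and '--' comments are skipped.
function splitStatements(array $lines)
{
    $statements = array();
    $buffer = '';

    foreach ($lines as $line)
    {
        $line = trim($line);

        if ($line === '' || substr($line, 0, 2) === '--')
        {
            continue;
        }

        $buffer .= ($buffer === '' ? '' : ' ') . $line;

        if (substr($line, -1) === ';')
        {
            $statements[] = $buffer;
            $buffer = '';
        }
    }

    return $statements;
}
?>
```

Each returned statement can then be fed to mysql_query() one at a time, just like the single-line case.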
#3


1  

Use the command-line client; it is far more efficient and should easily handle 50K inserts:

 mysql -uUser -p <db_name> < dump.sql
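If even the command-line import hits a limit, the standard split utility can break the dump into pieces first. A sketch assuming one statement per line in dump.sql (User and db_name are placeholders; -p prompts for the password):

```shell
# Split the dump into 1000-line pieces named chunk_aa, chunk_ab, ...
split -l 1000 dump.sql chunk_

# Feed each piece to the same command-line client shown above
# (guarded so the loop is skipped where mysql is not installed).
if command -v mysql >/dev/null; then
    for f in chunk_*; do
        mysql -uUser -p db_name < "$f"
    done
fi
```

Splitting by lines only works when no statement spans a line break, which matches the one-insert-per-line dump described in the question.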

#4


0  

I recently read about inserting lots of queries into a database too quickly. The article suggested using the sleep() (or usleep()) function to delay a few seconds between queries so as not to overload the MySQL server.
