Writing a CSV file into a SQL Server database using Python

Date: 2022-03-27 12:48:10

Hi, I am trying to write a CSV file into a table in a SQL Server database using Python. I am getting errors when I pass the parameters, but I don't get any error when I do it manually. Here is the code I am executing.


cur = cnxn.cursor()  # Get the cursor
csv_data = csv.reader(open('Samplefile.csv'))  # Read the csv
for rows in csv_data:  # Iterate through csv
    cur.execute("INSERT INTO MyTable(Col1,Col2,Col3,Col4) VALUES (?,?,?,?)", rows)
cnxn.commit()

Error: pyodbc.DataError: ('22001', '[22001] [Microsoft][ODBC SQL Server Driver][SQL Server]String or binary data would be truncated. (8152) (SQLExecDirectW); [01000] [Microsoft][ODBC SQL Server Driver][SQL Server]The statement has been terminated. (3621)')


However, when I insert the values manually, it works fine:


cur.execute("INSERT INTO MyTable(Col1,Col2,Col3,Col4) VALUES (?,?,?,?)",'A','B','C','D')

I have ensured that the table exists in the database and that the data types are consistent with the data I am passing. The connection and cursor are also correct. The data type of rows is list.
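Since the error complains about values being too long for a column, one quick way to narrow it down is to measure the longest value in each CSV column and compare it against the varchar sizes in MyTable. A minimal sketch, assuming the same Samplefile.csv and that every field is read as plain text:

import csv

# Find the longest value per CSV column to compare against the table's column sizes.
max_lengths = {}
with open('Samplefile.csv', newline='') as f:
    for row in csv.reader(f):
        for i, value in enumerate(row):
            max_lengths[i] = max(max_lengths.get(i, 0), len(value))

for index, length in sorted(max_lengths.items()):
    print('column {0}: longest value is {1} characters'.format(index, length))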


3 Answers

#1


18  

Consider building the query dynamically to ensure the number of placeholders matches your table and CSV file format. Then it's just a matter of ensuring your table and CSV file are correct, instead of checking that you typed enough ? placeholders in your code.


The following example assumes

  1. CSV file contains column names in the first line
  2. Connection is already built
  3. File name is test.csv
  4. Table name is MyTable
  5. Python 3
...
with open('test.csv', 'r') as f:
    reader = csv.reader(f)
    columns = next(reader)  # first line holds the column names
    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    cursor = connection.cursor()
    for data in reader:
        cursor.execute(query, data)
    cursor.commit()

If column names are not included in the file:


...
with open('test.csv', 'r') as f:
    reader = csv.reader(f)
    data = next(reader)  # first data row, used to count the placeholders
    query = 'insert into MyTable values ({0})'
    query = query.format(','.join('?' * len(data)))
    cursor = connection.cursor()
    cursor.execute(query, data)  # insert the row that was used to build the query
    for data in reader:
        cursor.execute(query, data)
    cursor.commit()
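A possible follow-up on the same idea: for larger files, the row-by-row execute calls can be batched with executemany, and recent pyodbc versions also expose a fast_executemany flag that speeds this up with the Microsoft ODBC drivers. A minimal sketch under the same assumptions (header row in test.csv, table MyTable, a connection that is already built):

import csv

with open('test.csv', 'r', newline='') as f:
    reader = csv.reader(f)
    columns = next(reader)  # header row with the column names
    query = 'insert into MyTable({0}) values ({1})'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))
    cursor = connection.cursor()  # 'connection' is the already-built pyodbc connection
    cursor.fast_executemany = True  # optional; needs pyodbc 4.0.19+ and a suitable ODBC driver
    cursor.executemany(query, list(reader))  # send all remaining rows in one batch
    cursor.commit()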

#2


0  

Have you tried passing the columns as individual arguments, e.g.:


for rows in csv_data: # Iterate through csv
    cur.execute("INSERT INTO MyTable(Col1,Col2,Col3,Col4) VALUES (?,?,?,?)", *rows)

#3


0  

Thanks everyone, I got it sorted out. The error was due to the size restriction on the table's columns. I changed the column capacity, for example from col1 varchar(10) to col1 varchar(35), and so on. Now it's working fine.
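For reference, the kind of change described above is a plain ALTER TABLE statement; a minimal sketch (column name and size are examples only), run over the same pyodbc connection:

# Widen the column so the incoming CSV values fit (example column and size only).
cur = cnxn.cursor()
cur.execute("ALTER TABLE MyTable ALTER COLUMN Col1 varchar(35)")
cnxn.commit()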


@beargle: I really liked your way of programming it.

