Below is the code I need help with. I have to run it over 1,300,000 rows, which means it takes up to 40 minutes to insert ~300,000 rows. I figure bulk insert is the route to go to speed it up? Or is it slow because I'm iterating over the rows via the `for data in reader:` portion?

```python
#Opens the prepped csv file
with open(os.path.join(newpath, outfile), 'r') as f:
    #hooks csv reader to file
    reader = csv.reader(f)
    #pulls out the columns (which match the SQL table)
    columns = next(reader)
    #trims any extra spaces
    columns = [x.strip(' ') for x in columns]
    #starts SQL statement
    query = 'bulk insert into SpikeData123({0}) values ({1})'
    #puts column names in SQL query 'query'
    query = query.format(','.join(columns), ','.join('?' * len(columns)))

    print 'Query is: %s' % query

    #starts curser from cnxn (which works)
    cursor = cnxn.cursor()
    #uploads everything by row
    for data in reader:
        cursor.execute(query, data)
        cursor.commit()
```

I am dynamically picking the column headers on purpose (since I would like the code to be as pythonic as possible). SpikeData123 is the table name.
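One common way to speed this up is to stop executing and committing one row at a time and instead send the rows in batches. Below is a minimal sketch of that idea, assuming pyodbc 4.0.19 or later (for `cursor.fast_executemany`) and a plain parameterized `INSERT`; `cnxn`, `newpath`, `outfile`, and the `SpikeData123` table are taken from the question, and the batch size of 10,000 is an arbitrary choice. This is not the asker's code, just an illustration of batching.

```python
import csv
import os

# newpath, outfile, and cnxn are assumed to be defined as in the question.
with open(os.path.join(newpath, outfile), 'r') as f:
    reader = csv.reader(f)
    # First row holds the column names; strip stray spaces as before.
    columns = [c.strip(' ') for c in next(reader)]

    query = 'insert into SpikeData123({0}) values ({1})'.format(
        ','.join(columns), ','.join('?' * len(columns)))

    cursor = cnxn.cursor()
    # If available, fast_executemany sends parameter arrays to the server
    # in bulk instead of one round trip per row.
    cursor.fast_executemany = True

    batch = []
    for data in reader:
        batch.append(data)
        if len(batch) == 10000:
            # One executemany + one commit per chunk, not per row.
            cursor.executemany(query, batch)
            cnxn.commit()
            batch = []
    if batch:
        cursor.executemany(query, batch)
        cnxn.commit()
```

Committing once per chunk rather than once per row removes most of the per-row overhead; whether `fast_executemany` is usable depends on the pyodbc version and ODBC driver in use.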