Download a large file in Python with requests

Requests is a really nice library. I'd like to use it to download big files (>1GB). The problem is that it's not possible to keep the whole file in memory, so I need to read it in chunks. There is a problem with the following code:

import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024):
        if chunk:  # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return

For some reason it doesn't work this way: it still loads the whole response into memory before saving it to a file.

UPDATE: If you need a small client (Python 2.x/3.x) that can download big files from FTP, you can find it here. It supports multithreading and reconnects (it does monitor connections), and it also tunes socket parameters for the download task.
3 Answers

慕的地10843
You can stream the response straight to a file with Response.raw and shutil.copyfileobj():

import requests
import shutil

def download_file(url):
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
    return local_filename
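A quick usage sketch (the URL is a placeholder for illustration, not part of the answer above):

if __name__ == '__main__':
    # hypothetical URL; any large downloadable file works
    path = download_file('https://example.com/big-file.iso')
    print('saved to', path)

One caveat that comes from requests/urllib3 itself rather than from this answer: Response.raw is the undecoded socket stream, so if the server sends the body gzip- or deflate-compressed, copyfileobj writes the compressed bytes to disk. Setting r.raw.decode_content = True before copying is the usual way to have urllib3 decompress while reading.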

不负相思意
Use a `with` block for the file, and pass stream=True so requests streams the body instead of reading it all into memory first:

import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    # stream=True is the key: without it requests loads the whole response into memory
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
    return local_filename

If you want each chunk forced out to disk as it is written, add f.flush() and os.fsync() (this needs import os):

with open(local_filename, 'wb') as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:  # filter out keep-alive new chunks
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
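A usage sketch under stated assumptions: DownloadFile is the function defined above, and the URL is a placeholder for illustration. It downloads the file and compares the size on disk with the Content-Length the server reports, when one is available.

import os
import requests

def download_and_check(url):
    local_filename = DownloadFile(url)  # streams the body to disk in chunks
    # compare the on-disk size with the server-reported size, when available
    head = requests.head(url, allow_redirects=True)
    expected = head.headers.get('Content-Length')
    actual = os.path.getsize(local_filename)
    if expected is not None and int(expected) != actual:
        print('size mismatch: expected %s bytes, got %d' % (expected, actual))
    return local_filename

if __name__ == '__main__':
    download_and_check('https://example.com/big-file.iso')  # hypothetical URL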