2 Answers
This script goes through all the commodity pages and saves the data to both a standard csv file and a ~|~-delimited text file:
import requests
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.msamb.com/ApmcDetail/ArrivalPriceInfo'
detail_url = 'https://www.msamb.com/ApmcDetail/DataGridBind?commodityCode={code}&apmcCode=null'
headers = {'Referer': 'https://www.msamb.com/ApmcDetail/ArrivalPriceInfo'}

# Collect the commodity codes from the dropdown on the main page.
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
values = [(o['value'], o.text) for o in soup.select('#CommoditiesId option') if o['value']]

all_data = []
for code, code_name in values:
    print('Getting info for code {} {}'.format(code, code_name))
    soup = BeautifulSoup(requests.get(detail_url.format(code=code), headers=headers).content, 'html.parser')

    current_date = ''
    for row in soup.select('tr'):
        # Rows with a colspan cell carry the date; all other rows carry price data.
        if row.select_one('td[colspan]'):
            current_date = row.get_text(strip=True)
        else:
            row = [td.get_text(strip=True) for td in row.select('td')]
            all_data.append({
                'Date': current_date,
                'Commodity': code_name,
                'APMC': row[0],
                'Variety': row[1],
                'Unit': row[2],
                'Quantity': row[3],
                'Lrate': row[4],
                'Hrate': row[5],
                'Modal': row[6],
            })

df = pd.DataFrame(all_data)
print(df)

df.to_csv('data.csv')                                  # <-- saves standard csv
np.savetxt('data.txt', df, delimiter='~|~', fmt='%s')  # <-- saves .txt file with '~|~' delimiter
Prints:
...
Getting info for code 08071 TOMATO
Getting info for code 10006 TURMERIC
Getting info for code 08075 WAL BHAJI
Getting info for code 08076 WAL PAPDI
Getting info for code 08077 WALVAD
Getting info for code 07011 WATER MELON
Getting info for code 02009 WHEAT(HUSKED)
Getting info for code 02012 WHEAT(UNHUSKED)
Date Commodity APMC Variety Unit Quantity Lrate Hrate Modal
0 18/07/2020 AMBAT CHUKA PANDHARPUR ---- NAG 50 5 5 5
1 16/07/2020 AMBAT CHUKA PANDHARPUR ---- NAG 50 5 5 5
2 15/07/2020 AMBAT CHUKA PANDHARPUR ---- NAG 100 9 9 9
3 13/07/2020 AMBAT CHUKA PANDHARPUR ---- NAG 16 7 7 7
4 13/07/2020 AMBAT CHUKA PUNE LOCAL NAG 2400 4 7 5
... ... ... ... ... ... ... ... ... ...
4893 12/07/2020 WHEAT(HUSKED) SHIRUR No. 2 QUINTAL 2 1400 1400 1400
4894 17/07/2020 WHEAT(UNHUSKED) SANGLI-MIRAJ ---- QUINTAL 863 4000 4600 4300
4895 16/07/2020 WHEAT(UNHUSKED) SANGLI-MIRAJ ---- QUINTAL 475 4000 4500 4250
4896 15/07/2020 WHEAT(UNHUSKED) SANGLI-MIRAJ ---- QUINTAL 680 3900 4400 4150
4897 13/07/2020 WHEAT(UNHUSKED) SANGLI-MIRAJ ---- QUINTAL 1589 3900 4450 4175
[4898 rows x 9 columns]
Saved data.txt:
0~|~18/07/2020~|~AMBAT CHUKA~|~PANDHARPUR~|~----~|~NAG~|~50~|~5~|~5~|~5
1~|~16/07/2020~|~AMBAT CHUKA~|~PANDHARPUR~|~----~|~NAG~|~50~|~5~|~5~|~5
2~|~15/07/2020~|~AMBAT CHUKA~|~PANDHARPUR~|~----~|~NAG~|~100~|~9~|~9~|~9
3~|~13/07/2020~|~AMBAT CHUKA~|~PANDHARPUR~|~----~|~NAG~|~16~|~7~|~7~|~7
4~|~13/07/2020~|~AMBAT CHUKA~|~PUNE~|~LOCAL~|~NAG~|~2400~|~4~|~7~|~5
5~|~12/07/2020~|~AMBAT CHUKA~|~PUNE~|~LOCAL~|~NAG~|~1700~|~3~|~8~|~5
6~|~19/07/2020~|~APPLE~|~KOLHAPUR~|~----~|~QUINTAL~|~3~|~9000~|~14000~|~11500
7~|~18/07/2020~|~APPLE~|~KOLHAPUR~|~----~|~QUINTAL~|~12~|~8500~|~15000~|~11750
8~|~18/07/2020~|~APPLE~|~NASHIK~|~DILICIOUS- No.1~|~QUINTAL~|~110~|~9000~|~16000~|~13000
9~|~18/07/2020~|~APPLE~|~SANGLI-PHALE BHAJIPALAM~|~LOCAL~|~QUINTAL~|~8~|~12000~|~16000~|~14000
10~|~17/07/2020~|~APPLE~|~MUMBAI-FRUIT MARKET~|~----~|~QUINTAL~|~264~|~9000~|~12000~|~10500
...
Screenshot of the csv file opened in LibreOffice:
If you save them to a txt file, you can read them back with
df = pd.read_csv("out.txt", sep=r'~\|~', engine='python')
(a separator longer than one character is treated by pandas as a regular expression, so the | has to be escaped and the python engine is used), or
date = df['Date']
commodity = df['Commodity']
You can append the apmc rows to a list and build the dataframe from it at the end.
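A minimal round-trip sketch of the read-back step described above, using a small hand-written sample in the same layout as the data.txt produced by the first answer (the column names are assumed from the script's dict keys plus a leading index field):

```python
import pandas as pd

# Two sample rows in the same '~|~'-delimited layout as data.txt above.
with open('data.txt', 'w') as f:
    f.write('0~|~18/07/2020~|~AMBAT CHUKA~|~PANDHARPUR~|~----~|~NAG~|~50~|~5~|~5~|~5\n')
    f.write('1~|~16/07/2020~|~AMBAT CHUKA~|~PANDHARPUR~|~----~|~NAG~|~50~|~5~|~5~|~5\n')

# '~|~' is longer than one character, so pandas interprets it as a regex;
# escape the '|' (regex alternation) and use the python parsing engine.
cols = ['Idx', 'Date', 'Commodity', 'APMC', 'Variety', 'Unit',
        'Quantity', 'Lrate', 'Hrate', 'Modal']
df = pd.read_csv('data.txt', sep=r'~\|~', engine='python', names=cols)

date = df['Date']            # individual columns, as in the answer above
commodity = df['Commodity']
print(df.shape)  # (2, 10)
```

Without the escape, the regex ~|~ would match every single ~ and split each field marker into two, so the escaping is not optional here.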