Development environment
Interpreter: Python 3.8
Code editor: PyCharm 2021.2
Modules used
requests: pip install requests
csv: part of the Python standard library (no installation needed)
Steps of a scraping project
1. Determine the URL (the address to request)
2. Send the network request
3. Parse the data (filter out the fields you need)
4. Save the data (to a database such as MySQL, MongoDB, or Redis, or to a local file)
Full scraper code
Analyze the page
Open the browser's developer tools, search for a keyword that appears in the page data, and locate the request that returns it; that request's URL is the one to scrape.
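Before writing the full scraper, it can be worth confirming that the URL found in developer tools really returns JSON. The sketch below is a minimal check using the same URL and User-Agent as the code later in this article; if the site rejects a plain request (for example because it expects cookies), the status code will show it.
import requests  # send one test request to the URL found in developer tools

url = 'https://xueqiu.com/service/v5/stock/screener/quote/list?page=1&size=30&order=desc&order_by=amount&exchange=CN&market=CN&type=sha&_=1637908787379'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36'
}
response = requests.get(url, headers=headers)
print(response.status_code)  # 200 means the request was accepted
print(response.text[:200])   # a quick look at the start of the JSON payload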
Import modules
import requests  # for sending network requests
import csv       # for saving the results to a CSV file
Request the data
# URL of the data request found in developer tools
url = 'https://xueqiu.com/service/v5/stock/screener/quote/list?page=1&size=30&order=desc&order_by=amount&exchange=CN&market=CN&type=sha&_=1637908787379'
# Disguise the request as a normal browser visit
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36'
}
response = requests.get(url, headers=headers)
json_data = response.json()
Parse the data
data_list = json_data['data']['list']
for data in data_list:
    data1 = data['symbol']                # stock code
    data2 = data['name']                  # stock name
    data3 = data['current']               # current price
    data4 = data['chg']                   # price change
    data5 = data['percent']               # percentage change
    data6 = data['current_year_percent']  # change since the start of the year
    data7 = data['volume']                # trading volume
    data8 = data['amount']                # turnover amount
    data9 = data['turnover_rate']         # turnover rate
    data10 = data['pe_ttm']               # P/E ratio (TTM)
    data11 = data['dividend_yield']       # dividend yield
    data12 = data['market_capital']       # market capitalization
    print(data1, data2, data3, data4, data5, data6, data7, data8, data9, data10, data11, data12)
    data_dict = {
        '股票代码': data1,
        '股票名称': data2,
        '当前价': data3,
        '涨跌额': data4,
        '涨跌幅': data5,
        '年初至今': data6,
        '成交量': data7,
        '成交额': data8,
        '换手率': data9,
        '市盈率(TTM)': data10,
        '股息率': data11,
        '市值': data12,
    }
    csv_write.writerow(data_dict)  # csv_write is the DictWriter created in the "Save the data" step below
Paginating through results
Compare the URLs of pages 1, 2, and 3: only the page parameter changes, so looping over the page number covers every page. The request and parsing code above runs inside this loop for each page; see the assembled sketch after the save step.
for page in range(1, 56):
    url = f'https://xueqiu.com/service/v5/stock/screener/quote/list?page={page}&size=30&order=desc&order_by=amount&exchange=CN&market=CN&type=sha&_=1637908787379'
Save the data
# Create the output file and DictWriter before the page loop starts
file = open('data2.csv', mode='a', encoding='utf-8', newline='')
csv_write = csv.DictWriter(file, fieldnames=['股票代码', '股票名称', '当前价', '涨跌额', '涨跌幅', '年初至今', '成交量', '成交额', '换手率', '市盈率(TTM)', '股息率', '市值'])
csv_write.writeheader()  # write the header row once
# ... request, parse, and write each page here ...
file.close()             # close the file after all pages have been written
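The snippets above are presented in the order they are explained, not the order they execute in. A minimal sketch of how the article's own pieces fit together (file and writer first, then the page loop with request, parse, and write, and finally closing the file) might look like the following; it only rearranges the code shown above, and the site may still reject requests it does not consider browser-like.
import csv
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36'
}
fieldnames = ['股票代码', '股票名称', '当前价', '涨跌额', '涨跌幅', '年初至今',
              '成交量', '成交额', '换手率', '市盈率(TTM)', '股息率', '市值']

# Open the output file and write the header once, before scraping starts
file = open('data2.csv', mode='a', encoding='utf-8', newline='')
csv_write = csv.DictWriter(file, fieldnames=fieldnames)
csv_write.writeheader()

for page in range(1, 56):
    url = (f'https://xueqiu.com/service/v5/stock/screener/quote/list'
           f'?page={page}&size=30&order=desc&order_by=amount'
           f'&exchange=CN&market=CN&type=sha&_=1637908787379')
    json_data = requests.get(url, headers=headers).json()
    for data in json_data['data']['list']:
        # Map the JSON fields to the CSV columns and write one row per stock
        csv_write.writerow({
            '股票代码': data['symbol'],
            '股票名称': data['name'],
            '当前价': data['current'],
            '涨跌额': data['chg'],
            '涨跌幅': data['percent'],
            '年初至今': data['current_year_percent'],
            '成交量': data['volume'],
            '成交额': data['amount'],
            '换手率': data['turnover_rate'],
            '市盈率(TTM)': data['pe_ttm'],
            '股息率': data['dividend_yield'],
            '市值': data['market_capital'],
        })

file.close()  # close the file after all pages have been written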
Result
Full data visualization code
Import modules
import pandas as pd                    # read and filter the CSV data
from pyecharts import options as opts  # chart configuration options
from pyecharts.charts import Bar       # bar chart
Read the data
data_df = pd.read_csv('data2.csv')
df = data_df.dropna()             # drop rows with missing values
df1 = df[['股票名称', '成交量']]   # keep only the stock name and volume columns
df2 = df1.iloc[:20]               # take the first 20 rows
print(df2['股票名称'].values)
print(df2['成交量'].values)
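Note that iloc[:20] simply takes the first 20 rows of the CSV in file order (the API returned rows sorted by 成交额). If the chart is meant to show the 20 stocks with the largest 成交量, one option is to sort first; a small sketch under that assumption, reusing the same column names:
import pandas as pd

df = pd.read_csv('data2.csv').dropna()
df1 = df[['股票名称', '成交量']]
# Sort by trading volume in descending order, then keep the 20 largest
df2 = df1.sort_values(by='成交量', ascending=False).iloc[:20]
print(df2)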
Create the chart
c = (
    Bar()
    .add_xaxis(list(df2['股票名称']))
    .add_yaxis("股票成交量情况", list(df2['成交量']))
    .set_global_opts(
        title_opts=opts.TitleOpts(title="成交量图表 - Volume chart"),
        datazoom_opts=opts.DataZoomOpts(),
    )
    .render("data.html")
)
print('Visualization finished. Open data.html in the current directory to view the chart.')
Final result
That concludes this walkthrough of scraping stock trading data with Python and visualizing it. For more material on scraping stock data with Python, see the other related articles on 服务器之家.
Original article: https://blog.csdn.net/m0_48405781/article/details/121640081