1. Principle
Analyzing the site
Open the Chongqing Jiaotong University news site at http://news.cqjtu.edu.cn/xxtz.htm, right-click in Chrome, and choose "View page source" to locate the news titles, i.e. the content to be scraped. Each news item's date and title sit in their own div tags, and both divs are wrapped in a single li tag, so the strategy is to find all the li tags and then pick out the relevant div tags inside each one.
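To see how that selection works in isolation, here is a minimal sketch run against a hand-written HTML fragment (the class names time and right-title are the ones the full script below targets; the fragment itself is an illustrative stand-in, not copied from the real page):

from bs4 import BeautifulSoup

# illustrative stand-in for one entry of the listing page
sample = '''
<li>
  <div class="time">2021-11-09</div>
  <div class="right-title"><a target="_blank" href="...">示例新闻标题</a></div>
</li>
'''

soup = BeautifulSoup(sample, 'html.parser')
for li in soup.find_all('li'):
    time_div = li.find('div', class_='time')
    link = li.find('div', class_='right-title').find('a', target='_blank')
    print(time_div.string, link.string)  # -> 2021-11-09 示例新闻标题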
2. Implementation
Implementation code
import csv
import urllib.request, urllib.error

from bs4 import BeautifulSoup
from tqdm import tqdm

subjects = []  # one [time, title] pair per news item
Headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53"
}
csvHeaders = ['时间', '标题']  # CSV columns: time, title

print('Scraping news items:\n')
for pages in tqdm(range(1, 65 + 1)):  # the listing pages follow the pattern xxtz/{n}.htm
    request = urllib.request.Request(f'http://news.cqjtu.edu.cn/xxtz/{pages}.htm', headers=Headers)
    html = ""
    try:
        response = urllib.request.urlopen(request)
        html = response.read().decode("utf-8")
    except urllib.error.URLError as e:
        if hasattr(e, "code"):
            print(e.code)
        if hasattr(e, "reason"):
            print(e.reason)
    soup = BeautifulSoup(html, 'html5lib')
    subject = []
    li = soup.find_all('li')
    for l in li:
        # find_all() returns a (possibly empty) list, never None,
        # so test truthiness instead of comparing against None
        if l.find_all('div', class_="time") and l.find_all('div', class_="right-title"):
            for time in l.find_all('div', class_="time"):
                subject.append(time.string)
            for title in l.find_all('div', class_="right-title"):
                for t in title.find_all('a', target="_blank"):
                    subject.append(t.string)
            if subject:
                print(subject)
                subjects.append(subject)
            subject = []

# utf-8-sig keeps the Chinese text readable when the CSV is opened in Excel
with open('test.csv', 'w', newline='', encoding='utf-8-sig') as file:
    fileWriter = csv.writer(file)
    fileWriter.writerow(csvHeaders)
    fileWriter.writerows(subjects)

print('\nScraping finished!')
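Incidentally, the original import list pulled in requests without ever using it. If you prefer requests over urllib, the per-page fetch could be replaced with something like the following sketch (it reuses the Headers dict from above; the timeout value and the fetch_page name are my own additions):

import requests

def fetch_page(page: int) -> str:
    # same listing-page URL pattern as in the script above
    url = f'http://news.cqjtu.edu.cn/xxtz/{page}.htm'
    r = requests.get(url, headers=Headers, timeout=10)
    r.raise_for_status()   # raise on HTTP errors instead of failing silently
    r.encoding = 'utf-8'   # the page is served as UTF-8
    return r.text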
3. Results
Crawl process (screenshot)
Crawl result (screenshot)
4. Summary
Scraping a page like this takes some basic HTML knowledge: you have to read the page source to work out which tags and classes wrap the data you want.