Basic labeling
# Accounts with any overdue bucket (AGE3-AGE6 > 0) are the positive samples
odue_df = df_train_stmt.loc[(df_train_stmt.AGE3 > 0) | (df_train_stmt.AGE4 > 0) |
                            (df_train_stmt.AGE5 > 0) | (df_train_stmt.AGE6 > 0), ['XACCOUNT']].drop_duplicates()
odue_df['label'] = 1
cust_df = df_acct[['CUSTR_NBR', 'XACCOUNT']].drop_duplicates()
# Merge the account labels onto customers, then take the max per customer: 1 if any of its accounts is overdue
df_y = (pd.merge(cust_df, odue_df, how='left', on='XACCOUNT')
        .groupby('CUSTR_NBR').agg({'label': 'max'}).reset_index().fillna(0))
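To see what this merge-then-groupby-max pattern produces, here is a minimal sketch on made-up data; only the column names come from the snippet above, the customer and account values are invented:
import pandas as pd

# Hypothetical customer/account and overdue tables for illustration
acct_demo = pd.DataFrame({'CUSTR_NBR': ['C1', 'C1', 'C2'], 'XACCOUNT': ['A1', 'A2', 'A3']})
odue_demo = pd.DataFrame({'XACCOUNT': ['A2'], 'label': [1]})   # only account A2 is overdue
y_demo = (pd.merge(acct_demo, odue_demo, how='left', on='XACCOUNT')
          .groupby('CUSTR_NBR').agg({'label': 'max'}).reset_index().fillna(0))
print(y_demo)  # C1 -> 1.0 (owns the overdue account A2), C2 -> 0.0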
使用函數(shù)來打標(biāo)簽
#標(biāo)注標(biāo)簽 Label
def label(row):
if row['Date_received'] == 'null':
return -1
if row['Date'] != 'null':
td = pd.to_datetime(row['Date'], format='%Y%m%d') - pd.to_datetime(row['Date_received'], format='%Y%m%d')
if td = pd.Timedelta(15, 'D'):
return 1
return 0
dfoff['label'] = dfoff.apply(label, axis=1)
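As a quick sanity check, the function can be applied to a couple of hand-made rows; the dates below are invented for illustration and follow the same 'null'-string convention:
import pandas as pd

# Hypothetical rows in the 'null'-string date format the function expects
demo = pd.DataFrame({'Date_received': ['null', '20160101', '20160101'],
                     'Date':          ['20160110', '20160110', 'null']})
print(demo.apply(label, axis=1).tolist())  # [-1, 1, 0]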
#打標(biāo)簽,判斷天數(shù)
def get_label(s):
s = s.split(':')
if s[0]=='null':
return 0
elif (date(int(s[0][0:4]),int(s[0][4:6]),int(s[0][6:8]))-date(int(s[1][0:4]),int(s[1][4:6]),int(s[1][6:8]))).days=15:
return 1
else:
return -1
dataset2.label = dataset2.label.apply(get_label)
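get_label assumes the column already holds the two dates joined by a colon ('Date:Date_received'); a minimal sketch with invented values shows the three possible outcomes:
from datetime import date
import pandas as pd

# Hypothetical 'Date:Date_received' strings illustrating the expected input format
demo2 = pd.Series(['null:20160601', '20160610:20160601', '20160701:20160601'])
print(demo2.apply(get_label).tolist())  # [0, 1, -1]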
補(bǔ)充:python 根據(jù)標(biāo)簽名獲取標(biāo)簽內(nèi)容
看代碼吧~
import re
import json
import requests
from bs4 import BeautifulSoup
import lxml.html
from lxml import etree
# Fetch a sample page and save it locally for offline inspection
result = requests.get('http://example.webscraping.com/places/default/view/Algeria-4')
with open('123.html', 'wb') as f:
    f.write(result.content)
# print(parse_regex(result.text))
test_data = """
<div>
    <ul>
        <li class="item-0"><a href="link1.html" id="places_neighbours__row">9,596,960first item</a></li>
        <li class="item-1"><a href="link2.html">second item</a></li>
        <li class="item-inactive"><a href="link3.html">third item</a></li>
        <li class="item-1"><a href="link4.html" id="places_neighbours__row">fourth item</a></li>
        <li class="item-0"><a href="link5.html">fifth item</a></li>
        <li class="good-0"><a href="link5.html">fifth item</a></li>
    </ul>
    <book>
        <title lang="aaengbb">Harry Potter</title>
        <price id="places_neighbours__row">29.99</price>
    </book>
    <book>
        <title lang="zh">Learning XML</title>
        <price>39.95</price>
    </book>
    <book>
        <title>Python</title>
        <price>40</price>
    </book>
</div>
"""
# //div/ul/li/a[@id]  select only the <a> tags that have an id attribute
# //div/ul/li/a       select all <a> tags
# //div/ul/li[2]/a
"""
/ 從根標(biāo)簽開始 必須具有嚴(yán)格的父子關(guān)系
// 從當(dāng)前標(biāo)簽 后續(xù)節(jié)點(diǎn)含有即可選出
* 通配符 選擇所有
//div/book[1]/title 選擇div下第一個(gè)book標(biāo)簽的title標(biāo)簽
//div/book[1]/tittle[@lang="zh"] 選擇div下第一個(gè)book標(biāo)簽的title標(biāo)簽并且內(nèi)容是zh的title標(biāo)簽
//div/book/title //book/title //title 具有相同結(jié)果 只不過選取路徑不一樣
//book/title/@* 將title所有的屬性值選出來
//book/title/text() 將title的內(nèi)容選擇出來,使用內(nèi)置函數(shù)
//a[@href="link1.html" rel="external nofollow" rel="external nofollow" and @id="places_neighbours_row"]
//div/book/[last()]/title/text() 將最后一個(gè)book元素選出
//div/book[price > 39]/title/text() 將book子標(biāo)簽price數(shù)值大于39的選擇出來
//li[starts-with(@class,'item')] 將class屬性前綴是item的選出來
//title[contains(@lang,"eng")]將title屬性lang含有eng關(guān)鍵字的標(biāo)簽選出
"""
html = lxml.html.fromstring(test_data)                    # parse an arbitrary HTML string
html_data = html.xpath('//title[contains(@lang,"eng")]')  # titles whose lang attribute contains "eng"
# print(dir(html_data[0]))  # inspect what the returned elements can do
print(html_data)
for i in html_data:
    print(i.text)   # prints "Harry Potter" (its lang="aaengbb" contains "eng")
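The other expressions from the cheat sheet above can be tried the same way; a short sketch reusing the parsed html object, with the results expected from the sample data noted in comments:
# A few more of the XPath expressions listed above, run against the same document
print(html.xpath('//book/title/text()'))                  # ['Harry Potter', 'Learning XML', 'Python']
print(html.xpath('//div/book[last()]/title/text()'))      # ['Python']
print(html.xpath('//div/book[price > 39]/title/text()'))  # ['Learning XML', 'Python']
print(html.xpath('//li[starts-with(@class,"item")]/a/text()'))  # the five "item-*" entries, not "good-0"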
以上為個(gè)人經(jīng)驗(yàn),希望能給大家一個(gè)參考,也希望大家多多支持腳本之家。