
A Concrete Face Recognition Case Study (李智恩 / IU)


Project environment: Python 3.6

I. Project Structure
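The original post showed the project layout as a screenshot. Reconstructed from the file and folder names that appear later in this article, it is roughly the following (a sketch):

find_iu/
├── get_face.py      # extract faces from the raw dataset images
├── find_iu.py       # train and evaluate the network
├── spider_iu.py     # crawler, face extraction and classification of new images
├── file_deal.py     # batch-rename downloaded images
├── network.py       # SiameNetwork definition
├── utils.py         # plotting and image-reading helpers
├── name.txt         # search keywords for the crawler, one per line
├── parameter.pkl    # model weights saved by find_iu.py
├── data/            # raw and processed datasets (see Section II)
└── result/          # saved loss/accuracy logs and curves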

II. Dataset Preparation

Preparing the dataset takes two steps:

  1. Obtain the images.
  2. Extract the faces.

1. Obtaining the Images

First, a web crawler can be used to batch-download images from Baidu Images. Note that the keywords used to download the dataset should not be too close to the keywords of the later recognition task; otherwise, if the same pictures show up in both, you get the illusion that recognition is "very accurate". The program below is the crawler part; write the search keywords into name.txt, one per line (an example follows the code), and it is ready to run.

# Crawler part: images are saved into a folder named word + '文件'
    #############################################################################################
    if GET_PIC == 1:
        headers = {
            'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
            'Connection': 'keep-alive',
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0',
            'Upgrade-Insecure-Requests': '1'
        }
        A = requests.Session()
        A.headers = headers
        tm = int(input('請(qǐng)輸入每類圖片的下載數(shù)量 '))
        numPicture = tm
        line_list = []
        with open('./name.txt', encoding='utf-8') as file:
            line_list = [k.strip() for k in file.readlines()]  # strip() removes surrounding whitespace
        for word in line_list:
            url = 'https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8&word=' + word + '&pn='
            tot = Find(url, A)
            Recommend = recommend(url)  # record the related-search suggestions
            print('經(jīng)過檢測(cè)%s類圖片共有%d張' % (word, tot))
            file = word + '文件'
            y = os.path.exists(file)
            if y == 1:
                print('該文件已存在,無需創(chuàng)建')
            else:
                os.mkdir(file)
            t = 0
            tmp = url
            while t < numPicture:
                try:
                    url = tmp + str(t)
                    # result = requests.get(url, timeout=10)
                    # fetch through the session, with redirects disabled
                    result = A.get(url, timeout=10, allow_redirects=False)
                    print(url)
                except error.HTTPError as e:
                    print('網(wǎng)絡(luò)錯(cuò)誤,請(qǐng)調(diào)整網(wǎng)絡(luò)后重試')
                    t = t + 60
                else:
                    dowmloadPicture(result.text, word)
                    t = t + 60
            numPicture = numPicture + tm
        print('當(dāng)前搜索結(jié)束,開始提取人臉')
    #############################################################################################
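For reference, name.txt simply lists one search keyword per line, and each keyword gets its own download folder named keyword + '文件'. A hypothetical name.txt could look like the following (the keywords here are placeholders; choose your own):

李智恩
其他藝人A
其他藝人B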

When downloading, keep the two classes separate: put IU's pictures in one folder and the Other pictures in another, and do this for both the training set and the test set.

Every one of these folders has the same internal layout.

The IU folders simply contain the downloaded pictures of IU.
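Since the original screenshots are not reproduced here, the data layout implied by the paths used in the scripts is roughly the following (a sketch; the drive letter and Chinese folder names are the author's and can be changed freely):

data/
├── raw/                  downloaded person images
│   ├── train/
│   │   ├── IU/
│   │   └── Other/
│   └── test/
│       ├── IU/
│       └── Other/
└── processed/            extracted face images, same train/test/IU/Other structure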

To rename the files in each folder sequentially, the following program can be used.

import os
raw_train_root_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/train/IU/'
raw_train_root_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/train/Other/'
raw_test_root_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/test/IU/'
raw_test_root_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/test/Other/'
raw_roots = [raw_train_root_1, raw_train_root_2, raw_test_root_1, raw_test_root_2]
for path in raw_roots:
    # list all files in this directory
    fileList = os.listdir(path)
    n = 0
    for i in fileList:
        # old file name (path + original name)
        oldname = path + os.sep + fileList[n]  # os.sep is the system path separator
        # new file name (sequential number + .JPG)
        newname = path + os.sep + str(n) + '.JPG'
        os.rename(oldname, newname)  # rename the file with os.rename
        print(oldname, '======>', newname)
        n += 1

2. Extracting the Faces

Extracting faces requires the face_recognition library. For its installation steps, see:

https://www.jb51.net/article/209870.htm

There are three main steps, which can be copied directly into the Anaconda command line:

  1. pip install CMake -i https://pypi.douban.com/simple
  2. pip install dlib==19.7.0 -i https://pypi.douban.com/simple
  3. pip install face_recognition -i https://pypi.douban.com/simple

The author has tried these steps and they do work.
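As a quick check that the installation works, a minimal smoke test can be run (the image path is a placeholder for any photo containing a face):

import face_recognition
image = face_recognition.load_image_file('test.jpg')   # placeholder path
print(face_recognition.face_locations(image))          # list of (top, right, bottom, left) face boxes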

The following function returns the face found in an image as a PIL image; if no face is detected it returns None, and if several faces are found only the first one is returned.

# Find the face in an image
#############################################################################################
def find_face(path):
    # Load the jpg file into a numpy array
    image = face_recognition.load_image_file(path)
    # Find all the faces in the image using the default HOG-based model.
    # This method is fairly accurate, but not as accurate as the CNN model and not GPU accelerated.
    # See also: find_faces_in_picture_cnn.py
    face_locations = face_recognition.face_locations(image) # the more accurate model="cnn" can be chosen instead
    if len(face_locations) == 0:
        return None
    else:
        for face_location in face_locations:
            # Print the location of each face in this image
            top, right, bottom, left = face_location
            # You can access the actual face itself like this:
            face_image = image[top:bottom, left:right]
            pil_image = Image.fromarray(face_image)
            return pil_image
#############################################################################################

Running this over the dataset yields the processed face images. The reason for training on extracted faces rather than on the full person images is that the full images contain too many distractions; experiments with them gave very poor recognition results, so this face-extraction step was added. The code that processes the dataset is as follows:

# Process the raw images of the training and test sets and extract the face images
#############################################################################################
if __name__ == '__main__':  # main entry point
    raw_train_root_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/train/IU/'
    raw_train_root_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/train/Other/'
    raw_test_root_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/test/IU/'
    raw_test_root_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/test/Other/'
    raw_roots = [raw_train_root_1, raw_train_root_2, raw_test_root_1, raw_test_root_2]
    img_raw_train_1 = os.listdir(raw_train_root_1)
    img_raw_train_2 = os.listdir(raw_train_root_2)
    img_raw_test_1 = os.listdir(raw_test_root_1)
    img_raw_test_2 = os.listdir(raw_test_root_2)
    img_raws = [img_raw_train_1, img_raw_train_2, img_raw_test_1, img_raw_test_2]
    new_path_train_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/train/IU/'
    new_path_train_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/train/Other/'
    new_path_test_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/test/IU/'
    new_path_test_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/test/Other/'
    new_paths = [new_path_train_1, new_path_train_2, new_path_test_1, new_path_test_2]
    for raw_root, img_raw, new_path in zip(raw_roots, img_raws, new_paths):
        n = 0
        for i in range(len(img_raw)):
            try:
                img = Image.open(raw_root + img_raw[i])
            except:
                print('a file error, continue')
                continue
            else:
                img_train = find_face(raw_root + img_raw[i])
                if img_train is None:
                    continue
                else:
                    img_train.save(new_path + '%d.JPG' % n)  # save the extracted face into the processed folder
                    # print(raw_root + img_raw[i])
                    n += 1
        print('在%d張圖片中,共找到%d張臉' % (len(img_raw), n))
#############################################################################################

The unprocessed images are stored in the raw folder and the extracted faces in the processed folder.

The two folders have exactly the same internal structure (train and test, each with IU and Other subfolders).

III. The Network Model

1. Image Processing

The images are center-cropped to 112×92 and kept as RGB (grayscale was tried, but it did not seem to improve results, so it was dropped). The pixel values are then normalized; with mean 0.5 and std 0.5 this maps them from [0, 1] to [-1, 1].

data_transform = transforms.Compose([
        # transforms.Grayscale(num_output_channels=1),  # convert to grayscale; num_output_channels defaults to 1
        transforms.Resize(112),
        transforms.CenterCrop((112, 92)),  # center-crop to 112*92
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
        # transforms.Normalize(mean=0.5, std=0.5)
    ])

A Siamese network (the SiameNetwork class below) is used:

class SiameNetwork(nn.Module):
    def __init__(self):
        super(SiameNetwork, self).__init__()
        # input: h=112, w=92
        self.conv1 = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels=3,  # 3 input channels (RGB)
                            out_channels=16,  # 16 output channels, i.e. 16 kernels
                            kernel_size=3,  # 3*3 kernels
                            stride=2,  # stride 1 keeps the spatial size, stride 2 roughly halves it to (h/2)*(w/2)
                            padding=1),  # padding; use kernel_size//2 to keep the original size
            torch.nn.BatchNorm2d(16),  # batch-normalize the 16 feature maps
            torch.nn.ReLU()  # activation
        )  # output: h=56, w=46
        self.conv2 = torch.nn.Sequential(
            torch.nn.Conv2d(16, 32, 3, 2, 1),
            torch.nn.BatchNorm2d(32),
            torch.nn.ReLU()
        )  # output: h=28, w=23
        self.conv3 = torch.nn.Sequential(
            torch.nn.Conv2d(32, 64, 3, 2, 1),
            torch.nn.BatchNorm2d(64),
            torch.nn.ReLU()
        )  # output: h=14, w=12
        self.conv4 = torch.nn.Sequential(
            torch.nn.Conv2d(64, 64, 2, 2, 0),
            torch.nn.BatchNorm2d(64),
            torch.nn.ReLU()
        )  # output: h=7, w=6
        self.mlp1 = torch.nn.Linear(7 * 6 * 64, 100)  # must match conv4's output; each conv's output size is floor((size - kernel + 2*padding)/stride) + 1
        self.mlp2 = torch.nn.Linear(100, 10)
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.mlp1(x.view(x.size(0), -1))  # flatten with view
        x = self.mlp2(x)
        return x
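The 7 * 6 * 64 input size of mlp1 comes from applying that output-size formula four times to the 112×92 input; a minimal sketch to verify it:

# Verify the feature-map sizes feeding mlp1, using floor((size - kernel + 2*padding)/stride) + 1
def conv_out(size, kernel, stride, padding):
    return (size - kernel + 2 * padding) // stride + 1

h, w = 112, 92
for kernel, stride, padding in [(3, 2, 1), (3, 2, 1), (3, 2, 1), (2, 2, 0)]:
    h, w = conv_out(h, kernel, stride, padding), conv_out(w, kernel, stride, padding)
    print(h, w)  # 56 46 -> 28 23 -> 14 12 -> 7 6, so mlp1 takes 7 * 6 * 64 = 2688 features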

IV. Full Code

1.get_face.py

from PIL import Image
import face_recognition
import os
# Find the face in an image
#############################################################################################
def find_face(path):
    # Load the jpg file into a numpy array
    image = face_recognition.load_image_file(path)
    # Find all the faces in the image using the default HOG-based model.
    # This method is fairly accurate, but not as accurate as the CNN model and not GPU accelerated.
    # See also: find_faces_in_picture_cnn.py
    face_locations = face_recognition.face_locations(image) # the more accurate model="cnn" can be chosen instead
    if len(face_locations) == 0:
        return None
    else:
        for face_location in face_locations:
            # Print the location of each face in this image
            top, right, bottom, left = face_location
            # You can access the actual face itself like this:
            face_image = image[top:bottom, left:right]
            pil_image = Image.fromarray(face_image)
            return pil_image
#############################################################################################
# Process the raw images of the training and test sets and extract the face images
#############################################################################################
if __name__ == '__main__':  # main entry point
    raw_train_root_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/train/IU/'
    raw_train_root_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/train/Other/'
    raw_test_root_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/test/IU/'
    raw_test_root_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/test/Other/'
    raw_roots = [raw_train_root_1, raw_train_root_2, raw_test_root_1, raw_test_root_2]
    img_raw_train_1 = os.listdir(raw_train_root_1)
    img_raw_train_2 = os.listdir(raw_train_root_2)
    img_raw_test_1 = os.listdir(raw_test_root_1)
    img_raw_test_2 = os.listdir(raw_test_root_2)
    img_raws = [img_raw_train_1, img_raw_train_2, img_raw_test_1, img_raw_test_2]
    new_path_train_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/train/IU/'
    new_path_train_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/train/Other/'
    new_path_test_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/test/IU/'
    new_path_test_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/test/Other/'
    new_paths = [new_path_train_1, new_path_train_2, new_path_test_1, new_path_test_2]
    for raw_root, img_raw, new_path in zip(raw_roots, img_raws, new_paths):
        n = 0
        for i in range(len(img_raw)):
            try:
                img = Image.open(raw_root + img_raw[i])
            except:
                print('a file error, continue')
                continue
            else:
                img_train = find_face(raw_root + img_raw[i])
                if img_train is None:
                    continue
                else:
                    img_train.save(new_path + '%d.JPG' % n)  # save the extracted face into the processed folder
                    # print(raw_root + img_raw[i])
                    n += 1
        print('在%d張圖片中,共找到%d張臉' % (len(img_raw), n))
#############################################################################################

2.find_iu.py

import torch
import torchvision
import torch.nn as nn
from torch.autograd import Variable
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import cv2   # OpenCV, used for image visualization
import numpy as np
import os
from utils import draw_result
from network import SiameNetwork
from get_face import find_face
if __name__ == '__main__':  # main entry point
    # Parameters
    #############################################################################################
    path = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/result/'    # folder where results are written
    epochs = 20       # number of training epochs
    BATCH_SIZE = 16    # batch size
    NUM_WORKERS = 0
    #############################################################################################
    # Data processing
    #############################################################################################
    data_transform = transforms.Compose([
        # transforms.Grayscale(num_output_channels=1),  # convert to grayscale; num_output_channels defaults to 1
        transforms.Resize(112),
        transforms.CenterCrop((112, 92)),  # center-crop to 112*92
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
        # transforms.Normalize(mean=0.5, std=0.5)
    ])
    train_dataset = datasets.ImageFolder(root = r'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/train',
                                         transform = data_transform)
    test_dataset = datasets.ImageFolder(root = r'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/processed/test',
                                         transform = data_transform)
    train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKERS)
    test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKERS)
    image, labels = next(iter(train_loader))       # grab one batch for visualization
    img = torchvision.utils.make_grid(image, nrow = 10)
    img = img.numpy().transpose(1, 2, 0)
    cv2.imshow('img', img)    # show the batch
    cv2.waitKey(0)  # press any key to continue
    print("data ready!")
    #############################################################################################
    # Device, loss function and optimizer
    #############################################################################################
    device = torch.device('cuda')
    model = SiameNetwork().to(device)
    cost = torch.nn.CrossEntropyLoss()        # cross-entropy loss
    optimizer = torch.optim.Adam(model.parameters(), lr=0.0008, weight_decay=0.001)            # Adam optimizer
    print("device ready!")
    #############################################################################################
    # Training loop; the number of epochs is set above
    #############################################################################################
    draw_epoch = []   # epoch indices, for plotting
    draw_loss = []    # training loss, for plotting
    draw_train_acc = []   # training accuracy, for plotting
    draw_val_loss = []   # validation loss, for plotting
    draw_val_acc = []  # validation accuracy, for plotting
    for epoch in range(epochs):
        # training pass
        sum_loss = 0.0
        sum_val_loss = 0.0
        train_correct = 0
        test_correct = 0
        for data in train_loader:
            inputs,labels = data
            inputs,labels = Variable(inputs).cuda(),Variable(labels).cuda()
            optimizer.zero_grad()        # clear the gradients from the previous batch
            outputs = model(inputs)
            loss = cost(outputs, labels)
            loss.backward()             # backpropagation
            optimizer.step()
            _, id = torch.max(outputs.data, 1)
            sum_loss += loss.data
            train_correct += torch.sum(id == labels.data)
        for data in test_loader:              # evaluation on the test set
            inputs,labels = data
            inputs,labels = Variable(inputs).cuda(),Variable(labels).cuda()
            outputs = model(inputs)
            val_loss = cost(outputs, labels)
            _,id = torch.max(outputs.data, 1)
            sum_val_loss += val_loss.data
            test_correct += torch.sum(id == labels.data)
        print('[%d,%d] train loss:%.03f      train acc:%.03f%%'
              %(epoch + 1, epochs, sum_loss / len(train_loader), (100 * train_correct / len(train_dataset))))
        print('        val loss:%.03f        val acc:%.03f%%'
              %(sum_val_loss / len(test_loader), (100 * test_correct / len(test_dataset))))
        draw_epoch.append(epoch+1)       # data for plotting later
        draw_loss.append(sum_loss / len(train_loader))
        draw_train_acc.append(100 * train_correct / len(train_dataset))
        draw_val_loss.append(sum_val_loss / len(test_loader))
        draw_val_acc.append(100 * test_correct / len(test_dataset))
        np.savetxt('%s/train_loss.txt'%(path), draw_loss, fmt="%.3f")        # save training loss
        np.savetxt('%s/train_acc.txt'%(path), draw_train_acc, fmt="%.3f")  # save training accuracy
        np.savetxt('%s/val_loss.txt'%(path), draw_val_loss, fmt="%.3f")     # save validation loss
        np.savetxt('%s/val_acc.txt'%(path), draw_val_acc, fmt="%.3f")  # save validation accuracy
    print("train ready!")
    #############################################################################################
    # Plot the curves
    #############################################################################################
    draw_result(draw_epoch, path)   # plotting helper from utils.py
    print("draw ready!")
    #############################################################################################
    # Save the trained model
    #############################################################################################
    torch.save(model.state_dict(), "parameter.pkl") #save
    print("save ready!")
    #############################################################################################
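One optional refinement that is not in the original script: the test-set pass above still runs with gradient tracking and with BatchNorm in training mode. If desired, the evaluation loop can be wrapped in model.eval() and torch.no_grad(); a sketch reusing the names from find_iu.py:

        model.eval()                     # BatchNorm uses running statistics during evaluation
        with torch.no_grad():            # no gradients are needed for evaluation
            for data in test_loader:
                inputs, labels = data
                inputs, labels = inputs.to(device), labels.to(device)
                outputs = model(inputs)
                sum_val_loss += cost(outputs, labels).data
                _, id = torch.max(outputs.data, 1)
                test_correct += torch.sum(id == labels.data)
        model.train()                    # switch back to training mode for the next epoch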

3.spider_iu.py

import re
import requests
from urllib import error
from bs4 import BeautifulSoup
import os
import torch
from torch.autograd import Variable
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
from network import SiameNetwork
from utils import cv_imread
import cv2
from PIL import Image
import shutil
from get_face import find_face
# Parameters
#############################################################################################
GET_PIC = 0    # 1 to run this step, 0 to skip it
GET_FACE = 0
GET_IU = 1
#############################################################################################
num = 0
numPicture = 0
file = ''
List = []
# Functions used by the crawler
#############################################################################################
def Find(url, A):
    global List
    print('正在檢測(cè)圖片總數(shù),請(qǐng)稍等.....')
    t = 0
    i = 1
    s = 0
    while t < 1000:
        Url = url + str(t)
        try:
            # fetch through the session, with redirects disabled
            Result = A.get(Url, timeout=7, allow_redirects=False)
        except BaseException:
            t = t + 60
            continue
        else:
            result = Result.text
            pic_url = re.findall('"objURL":"(.*?)",', result, re.S)  # extract the image URLs with a regular expression
            s += len(pic_url)
            if len(pic_url) == 0:
                break
            else:
                List.append(pic_url)
                t = t + 60
    return s
def recommend(url):
    Re = []
    try:
        html = requests.get(url, allow_redirects=False)
    except error.HTTPError as e:
        return
    else:
        html.encoding = 'utf-8'
        bsObj = BeautifulSoup(html.text, 'html.parser')
        div = bsObj.find('div', id='topRS')
        if div is not None:
            listA = div.findAll('a')
            for i in listA:
                if i is not None:
                    Re.append(i.get_text())
        return Re
def dowmloadPicture(html, keyword):
    global num
    # t =0
    pic_url = re.findall('"objURL":"(.*?)",', html, re.S)  # extract the image URLs with a regular expression
    print('找到關(guān)鍵詞:' + keyword + '的圖片,即將開始下載圖片...')
    for each in pic_url:
        print('正在下載第' + str(num + 1) + '張圖片,圖片地址:' + str(each))
        try:
            if each is not None:
                pic = requests.get(each, timeout=7)
            else:
                continue
        except BaseException:
            print('錯(cuò)誤,當(dāng)前圖片無法下載')
            continue
        else:
            string = file + r'\\' + keyword + '_' + str(num) + '.jpg'
            fp = open(string, 'wb')
            fp.write(pic.content)
            fp.close()
            num += 1
        if num >= numPicture:
            return
#############################################################################################
if __name__ == '__main__':  # main entry point
    # Crawler part: images are saved into a folder named word + '文件'
    #############################################################################################
    if GET_PIC == 1:
        headers = {
            'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
            'Connection': 'keep-alive',
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0',
            'Upgrade-Insecure-Requests': '1'
        }
        A = requests.Session()
        A.headers = headers
        tm = int(input('請(qǐng)輸入每類圖片的下載數(shù)量 '))
        numPicture = tm
        line_list = []
        with open('./name.txt', encoding='utf-8') as file:
            line_list = [k.strip() for k in file.readlines()]  # strip() removes surrounding whitespace
        for word in line_list:
            url = 'https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8&word=' + word + '&pn='
            tot = Find(url, A)
            Recommend = recommend(url)  # record the related-search suggestions
            print('經(jīng)過檢測(cè)%s類圖片共有%d張' % (word, tot))
            file = word + '文件'
            y = os.path.exists(file)
            if y == 1:
                print('該文件已存在,無需創(chuàng)建')
            else:
                os.mkdir(file)
            t = 0
            tmp = url
            while t < numPicture:
                try:
                    url = tmp + str(t)
                    # result = requests.get(url, timeout=10)
                    # fetch through the session, with redirects disabled
                    result = A.get(url, timeout=10, allow_redirects=False)
                    print(url)
                except error.HTTPError as e:
                    print('網(wǎng)絡(luò)錯(cuò)誤,請(qǐng)調(diào)整網(wǎng)絡(luò)后重試')
                    t = t + 60
                else:
                    dowmloadPicture(result.text, word)
                    t = t + 60
            numPicture = numPicture + tm
        print('當(dāng)前搜索結(jié)束,開始提取人臉')
    #############################################################################################
    # Extract faces from the downloaded images: from the folder file + '文件' into '待分辨人臉'
    ############################################################################################
    if GET_FACE == 1:
        if GET_PIC == 0:
            file = '韓國女藝人文件'
        raw_root = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/'+ file + '/'
        img_raw = os.listdir(raw_root)
        new_path = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/待分辨人臉/'
        n = 0
        for i in range(len(img_raw)):
            try:
                img = Image.open(raw_root + img_raw[i])
            except:
                print('a file error, continue')
                continue
            else:
                img_train = find_face(raw_root + img_raw[i])
                if img_train is None:
                    continue
                else:
                    img_train.save(new_path + '%d.JPG' % n)
                    print(raw_root + img_raw[i])
                    n += 1
        print('在%d張圖片中,共找到%d張臉' % (len(img_raw), n))
        print('提取人臉結(jié)束,開始尋找IU')
    #############################################################################################
    # Classification: pick out IU from '待分辨人臉' and copy the hits into 'IU_pic'
    #############################################################################################
    if GET_IU == 1:
        data_transform = transforms.Compose([
            # transforms.Grayscale(num_output_channels=1),  # convert to grayscale; num_output_channels defaults to 1
            transforms.Resize(112),
            transforms.CenterCrop((112, 92)),  # center-crop to 112*92
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.5, 0.5, 0.5],std=[0.5, 0.5, 0.5])
            # transforms.Normalize(mean=0.5, std=0.5)
        ])
        device = torch.device('cuda')
        model = SiameNetwork().to(device)
        model.load_state_dict(torch.load('parameter.pkl'))  # load
        model.eval()
        judge_root = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/待分辨人臉/'
        img_judge = os.listdir(judge_root)
        new_path = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/IU_pic/'
        result = []
        n = 0
        for i in range(len(img_judge)):
            try:
                img = Image.open(judge_root + img_judge[i])
            except:
                print('a file error, continue')
                continue
            else:
                img = img.convert('RGB')
                print(judge_root + img_judge[i])
                input = data_transform(img)
                input = input.unsqueeze(0)  # the transform outputs [C,H,W]; the network also needs a batch dimension B
                # unsqueeze adds that dimension, giving shape [1,C,H,W]
                input = Variable(input.cuda())
                output = model(input)  # feed the face through the network
                _, id = torch.max(output.data, 1)   # class 0 is IU, class 1 is Other
                if id.item() == 0:
                    shutil.copy(judge_root + img_judge[i], new_path)
                    n += 1
        print('\n在%d張圖片中,共找到%d張IU的圖片' % (len(img_judge), n))
    #############################################################################################

4.file_deal.py

import os
raw_train_root_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/train/IU/'
raw_train_root_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/train/Other/'
raw_test_root_1 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/test/IU/'
raw_test_root_2 = 'E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/data/raw/test/Other/'
raw_roots = [raw_train_root_1, raw_train_root_2, raw_test_root_1, raw_test_root_2]
for path in raw_roots:
    # list all files in this directory
    fileList = os.listdir(path)
    n = 0
    for i in fileList:
        # old file name (path + original name)
        oldname = path + os.sep + fileList[n]  # os.sep is the system path separator
        # new file name (sequential number + .JPG)
        newname = path + os.sep + str(n) + '.JPG'
        os.rename(oldname, newname)  # rename the file with os.rename
        print(oldname, '======>', newname)
        n += 1

5.network.py

import torch
import torch.nn as nn
class SiameNetwork(nn.Module):
    def __init__(self):
        super(SiameNetwork, self).__init__()
        # input: h=112, w=92
        self.conv1 = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels=3,  # 3 input channels (RGB)
                            out_channels=16,  # 16 output channels, i.e. 16 kernels
                            kernel_size=3,  # 3*3 kernels
                            stride=2,  # stride 1 keeps the spatial size, stride 2 roughly halves it to (h/2)*(w/2)
                            padding=1),  # padding; use kernel_size//2 to keep the original size
            torch.nn.BatchNorm2d(16),  # batch-normalize the 16 feature maps
            torch.nn.ReLU()  # activation
        )  # output: h=56, w=46
        self.conv2 = torch.nn.Sequential(
            torch.nn.Conv2d(16, 32, 3, 2, 1),
            torch.nn.BatchNorm2d(32),
            torch.nn.ReLU()
        )  # output: h=28, w=23
        self.conv3 = torch.nn.Sequential(
            torch.nn.Conv2d(32, 64, 3, 2, 1),
            torch.nn.BatchNorm2d(64),
            torch.nn.ReLU()
        )  # output: h=14, w=12
        self.conv4 = torch.nn.Sequential(
            torch.nn.Conv2d(64, 64, 2, 2, 0),
            torch.nn.BatchNorm2d(64),
            torch.nn.ReLU()
        )  # output: h=7, w=6
        self.mlp1 = torch.nn.Linear(7 * 6 * 64, 100)  # must match conv4's output; each conv's output size is floor((size - kernel + 2*padding)/stride) + 1
        self.mlp2 = torch.nn.Linear(100, 10)
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.mlp1(x.view(x.size(0), -1))  # flatten with view
        x = self.mlp2(x)
        return x

6.utils.py

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import cv2
# Plot training/validation loss and accuracy
#############################################################################################
def draw_result(draw_epoch, path):
    show_loss = np.loadtxt('%s/train_loss.txt' % (path))   # load the saved training loss
    show_train_acc = np.loadtxt('%s/train_acc.txt' % (path))  # load the saved training accuracy
    show_val_loss = np.loadtxt('%s/val_loss.txt' % (path))  # load the saved validation loss
    show_val_acc = np.loadtxt('%s/val_acc.txt' % (path))  # load the saved validation accuracy
    mpl.rc('font',family='Times New Roman', weight='semibold', size=9)  # global matplotlib font settings
    font1 = {'weight' : 'semibold', 'size' : 11}  # font style for titles and labels
    fig = plt.figure(figsize = (7,5))    # figsize sets the figure size
    ax1 = fig.add_subplot(2, 2, 1)       # ax1 is the first subplot
    ax1.plot(draw_epoch, show_loss,color = 'red', label = u'Adam', linewidth =1.0)
    ax1.legend()   # show the legend
    ax1.set_title('Training Loss', font1)
    ax1.set_xlabel(u'Epoch', font1)
    ax2 = fig.add_subplot(2, 2, 2)
    ax2.plot(draw_epoch, show_val_loss,color = 'red', label = u'Adam', linewidth =1.0)
    ax2.legend()   # show the legend
    ax2.set_title('Validation Loss', font1)
    ax2.set_xlabel(u'Epoch', font1)
    ax3 = fig.add_subplot(2, 2, 3)
    ax3.plot(draw_epoch, show_train_acc,color = 'red', label = u'Adam', linewidth =1.0)
    ax3.legend()   # show the legend
    ax3.set_title('Training Accuracy', font1)
    ax3.set_xlabel(u'Epoch', font1)
    ax4 = fig.add_subplot(2, 2, 4)
    ax4.plot(draw_epoch, show_val_acc,color = 'red', label = u'Adam', linewidth =1.0)
    ax4.legend()   # show the legend
    ax4.set_title('Validation Accuracy', font1)
    ax4.set_xlabel(u'Epoch', font1)
    plt.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.45) # hspace sets the vertical spacing between subplots
    plt.savefig('%s/show_curve.jpg' % (path), dpi=300)
#############################################################################################
# Work around cv2.imread failing on paths that contain Chinese characters
#############################################################################################
def cv_imread(filePath):
    # the key line: decode the raw file bytes directly into an image array
    cv_img = cv2.imdecode(np.fromfile(filePath, dtype=np.uint8), -1)
    # like cv2.imread, imdecode returns a BGR image; use cvtColor if a different channel order is needed downstream
    # cv_img=cv2.cvtColor(cv_img,cv2.COLOR_RGB2BGR)
    return cv_img
#############################################################################################
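cv_imread is imported by spider_iu.py but not called in the listings above; a minimal usage sketch (the path is hypothetical) would be:

# Read an image whose path contains Chinese characters, then show it with OpenCV
img = cv_imread('E:/Table/學(xué)習(xí)數(shù)據(jù)集/find_iu/待分辨人臉/0.JPG')
cv2.imshow('face', img)
cv2.waitKey(0)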

Summary

Overall, this was a newcomer's project done out of interest. Limited by GPU performance, a more complex network could not be used, and the final recognition results are not great. Interested readers are welcome to swap in a different network or improve the dataset and try to push the recognition performance higher.
