Introduction
Vendors provide solid face recognition SDKs, but integrating them involves a lot of boilerplate — error handling, detection, attribute analysis — that developers rarely want to deal with directly. RxArcFace wraps the template-like operations of the ArcSoft face recognition SDK and combines them with RxJava2 to give developers a smooth integration experience.
About the ArcSoft Face Recognition SDK
ArcSoft's face recognition SDK, ArcFace, is an offline SDK offering face detection, gender detection, age detection, face recognition, image quality checking, RGB liveness detection, IR liveness detection, and more. It requires a one-time online activation; after activation it works entirely offline, so you can build whatever application-layer features your business needs on top of it.
The basic edition does not yet support image quality checking or offline activation.
0. Preface
Face recognition is no longer a novelty; it shows up in all kinds of business scenarios. As mobile developers, integrating a suitable SDK gives us a far more efficient development experience. This article starts from the basic methods of the ArcSoft face recognition SDK and, because the official SDK is rather verbose, then walks through wrapping those methods into a highly usable, multi-scenario utility, so that face recognition can be integrated without the tedious ceremony.
For SDK setup, refer to
https://ai.arcsoft.com.cn/manual/docs#/139
https://ai.arcsoft.com.cn/manual/docs#/140 (only section 3.1 is needed)
Those steps will not be repeated here.
1. Method overview (excerpted from the ArcSoft Android integration guide)
1. activeOnline
Description
Activates the SDK online.
Method
int activeOnline(Context context, String appId, String sdkKey)
The SDK must be activated before first use; once activated, the call need not be repeated.
The device must be online when this method is called; after a successful activation the SDK works offline.
Parameters
Parameter | In/Out | Description |
---|---|---|
context | in | Application context |
appId | in | APP_ID obtained from the ArcSoft website |
sdkKey | in | SDK_KEY obtained from the ArcSoft website |
Return value
Returns ErrorInfo.MOK or ErrorInfo.MERR_ASF_ALREADY_ACTIVATED on success; for failures, see the error code list.
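As a quick orientation, here is a minimal activation sketch using the names from the table above; APP_ID and SDK_KEY are placeholders for the keys from the ArcSoft console:

val activeCode = FaceEngine.activeOnline(context, APP_ID, SDK_KEY)
when (activeCode) {
    // already-activated also counts as success
    ErrorInfo.MOK, ErrorInfo.MERR_ASF_ALREADY_ACTIVATED -> { /* safe to init engines */ }
    else -> Log.e("ArcFace", "Activation failed, code: $activeCode")
}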
2. init
Description
Initializes the engine.
This method is critical: understanding exactly what each parameter means helps you avoid problems and informs the design of your project.
Method
int init(
Context context,
DetectMode detectMode,
DetectFaceOrientPriority detectFaceOrientPriority,
int detectFaceScaleVal,
int detectFaceMaxNum,
int combinedMask
)
Parameters
Parameter | In/Out | Description |
---|---|---|
context | in | Application context |
detectMode | in | VIDEO mode: processes continuous frames of image data; IMAGE mode: processes single images |
detectFaceOrientPriority | in | Face detection orientation; a single fixed orientation is recommended |
detectFaceScaleVal | in | Minimum detectable face scale (ratio of the image's long side to the face rectangle's long side); range [2,32] in VIDEO mode with 16 recommended, range [2,32] in IMAGE mode with 32 recommended |
detectFaceMaxNum | in | Maximum number of faces to detect, range [1,50] |
combinedMask | in | Bitmask of the features to enable; multiple flags can be combined |
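A hedged init sketch using constants that appear later in this article; the exact flags you pass depend on which features you need:

val engine = FaceEngine()
val initCode = engine.init(
    context,
    DetectMode.ASF_DETECT_MODE_VIDEO,        // continuous preview frames
    DetectFaceOrientPriority.ASF_OP_0_ONLY,  // a single fixed orientation, as recommended
    16,                                      // minimum face scale; 16 is the recommended VIDEO value
    6,                                       // detect at most 6 faces per frame
    FaceEngine.ASF_FACE_DETECT or FaceEngine.ASF_FACE_RECOGNITION // features to enable
)
if (initCode != ErrorInfo.MOK) Log.e("ArcFace", "Init failed, code: $initCode")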
3. detectFaces (takes raw image data passed in separately)
Method
int detectFaces(
byte[] data,
int width,
int height,
int format,
List<FaceInfo> faceInfoList
)
Parameters
Parameter | In/Out | Description |
---|---|---|
data | in | Image data |
width | in | Image width; must be a multiple of 4 |
height | in | Image height; must be a multiple of 2 for NV21, unrestricted for BGR24/GRAY/DEPTH_U16 |
format | in | Color format of the image |
faceInfoList | out | Detected face information |
Return value
Returns ErrorInfo.MOK on success; for failures, see the error code list.
The detectFaceMaxNum value chosen at init time plays a decisive role in whether faces are detected and how many.
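A minimal detection sketch over an NV21 preview frame; engine is an initialized FaceEngine, and data, width and height are assumed to come from the camera preview callback:

val faceInfoList = mutableListOf<FaceInfo>()
val detectCode = engine.detectFaces(data, width, height, FaceEngine.CP_PAF_NV21, faceInfoList)
if (detectCode == ErrorInfo.MOK) {
    // the SDK fills faceInfoList with up to detectFaceMaxNum faces
    Log.d("ArcFace", "Detected ${faceInfoList.size} face(s)")
}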
4. process (takes raw image data passed in separately)
Method
int process(
byte[] data,
int width,
int height,
int format,
List<FaceInfo> faceInfoList,
int combinedMask
)
Parameters
Parameter | In/Out | Description |
---|---|---|
data | in | Image data |
width | in | Image width; must be a multiple of 4 |
height | in | Image height; must be a multiple of 2 for NV21, unrestricted for BGR24 |
format | in | Supports NV21/BGR24 |
faceInfoList | in | List of face information |
combinedMask | in | Attributes to analyze (ASF_AGE, ASF_GENDER, ASF_FACE3DANGLE, ASF_LIVENESS); multiple flags can be combined, and each one must have been enabled in the init combinedMask |
Key parameter notes
- combinedMask: the process method supports four attributes — ASF_AGE, ASF_GENDER, ASF_FACE3DANGLE and ASF_LIVENESS — but to analyze any of them, the attribute must first have been enabled in the engine-initialization combinedMask. The relationship between the init combinedMask and the process combinedMask is easiest to see with an example:
- process accepts the attributes ASF_AGE, ASF_GENDER, ASF_FACE3DANGLE and ASF_LIVENESS.
- Suppose init was called with ASF_FACE_DETECT, ASF_FACE_RECOGNITION, ASF_AGE and ASF_LIVENESS.
- Then the only attribute combinations process may receive are ASF_AGE, ASF_LIVENESS, and ASF_AGE | ASF_LIVENESS.
Return value
Returns ErrorInfo.MOK on success; for failures, see the error code list.
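To make the mask relationship concrete, here is the example above expressed as a sketch (not a complete program):

// Enabled at init time: detection, recognition, age and liveness.
val initMask = FaceEngine.ASF_FACE_DETECT or FaceEngine.ASF_FACE_RECOGNITION or
        FaceEngine.ASF_AGE or FaceEngine.ASF_LIVENESS
engine.init(context, DetectMode.ASF_DETECT_MODE_VIDEO,
        DetectFaceOrientPriority.ASF_OP_0_ONLY, 16, 6, initMask)

// Valid: every flag passed to process() was enabled at init time.
val processCode = engine.process(data, width, height, FaceEngine.CP_PAF_NV21, faceInfoList,
        FaceEngine.ASF_AGE or FaceEngine.ASF_LIVENESS)

// Invalid: ASF_GENDER was not enabled at init time, so this call would fail.
// engine.process(data, width, height, FaceEngine.CP_PAF_NV21, faceInfoList, FaceEngine.ASF_GENDER)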
5. extractFaceFeature (takes raw image data passed in separately)
Method
int extractFaceFeature(
byte[] data,
int width,
int height,
int format,
FaceInfo faceInfo,
FaceFeature feature
)
Parameters
Parameter | In/Out | Description |
---|---|---|
data | in | Image data |
width | in | Image width; must be a multiple of 4 |
height | in | Image height; must be a multiple of 2 for NV21, unrestricted for BGR24/GRAY/DEPTH_U16 |
format | in | Color format of the image |
faceInfo | in | Face information (face rectangle and orientation) |
feature | out | The extracted face feature |
Return value
Returns ErrorInfo.MOK on success; for failures, see the error code list.
6. compareFaceFeature (with a selectable comparison model)
Method
int compareFaceFeature (
FaceFeature feature1,
FaceFeature feature2,
CompareModel compareModel,
FaceSimilar faceSimilar
)
Parameters
Parameter | In/Out | Description |
---|---|---|
feature1 | in | A face feature |
feature2 | in | A face feature |
compareModel | in | Comparison model |
faceSimilar | out | Comparison similarity score |
Return value
Returns ErrorInfo.MOK on success; for failures, see the error code list.
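Putting sections 5 and 6 together, a minimal extract-and-compare sketch; storedFeature is assumed to be a FaceFeature you obtained earlier, faceInfoList is assumed non-empty, and 0.8 is a commonly used threshold rather than an SDK constant:

val feature = FaceFeature()
val extractCode = engine.extractFaceFeature(
    data, width, height, FaceEngine.CP_PAF_NV21, faceInfoList[0], feature
)
if (extractCode == ErrorInfo.MOK) {
    val similar = FaceSimilar()
    if (engine.compareFaceFeature(storedFeature, feature, similar) == ErrorInfo.MOK) {
        val isSamePerson = similar.score > 0.8f
        Log.d("ArcFace", "Similarity: ${similar.score}, same person: $isSamePerson")
    }
}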
Using RxArcFace
- Clone the project: https://github.com/ZYF99/RxArcFace.git
- Import the RxArcFace module into your own project: select the RxArcFaceModule folder inside the cloned project.
- Add the dependency to your app's build.gradle:
implementation project(path: ':RxArcFacelibrary')
Add the required permissions
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.autofocus"/>
Have every data class you want to match implement the IFaceDetect interface
data class Person(
val id: Long? = null,
val name: String? = null,
val avatar: String? = null, //add an avatar property
var faceCode: String? = null //add a mutable faceCode property
) : IFaceDetect {
override fun getFaceCodeJson(): String? {
return faceCode
}
override fun getAvatarUrl(): String? {
return avatar
}
override fun bindFaceCode(faceCodeJson: String?) {
faceCode = faceCodeJson
}
}
You might ask: why do I have to add the faceCode and avatar properties myself? You usually don't. By the time face recognition is being integrated, we almost always have a data class already — typically one returned by the backend, whose exact shape we rarely control. faceCode and avatar simply mean your class must be able to supply these two things: a face feature and an avatar. They can be fields you already had or fields you add later. If the backend already returns a property that serves as the face feature, just return it from getFaceCodeJson; the same goes for avatar.
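For example, if your backend already returns a model carrying a photo URL and a stored feature string, implementing the interface is just a matter of mapping fields. Student and its property names here are hypothetical:

data class Student(
    val studentId: Long? = null,
    val photoUrl: String? = null,    // already provided by the backend
    var featureJson: String? = null  // already provided by the backend
) : IFaceDetect {
    override fun getFaceCodeJson(): String? = featureJson
    override fun getAvatarUrl(): String? = photoUrl
    override fun bindFaceCode(faceCodeJson: String?) {
        featureJson = faceCodeJson
    }
}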
Capturing frames with the camera
private var camera: Camera? = null
//initialize the camera and the SurfaceView
private fun initCameraOrigin(surfaceView: SurfaceView) {
surfaceView.holder.addCallback(object : SurfaceHolder.Callback {
override fun surfaceCreated(holder: SurfaceHolder) {
//runs when the surface is created
if (camera == null) {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
camera = openCamera(this@MainActivity) { data, camera, resWidth, resHeight ->
if (data != null && data.size > 1) {
//TODO face matching
}
}
}
}
//adjust the camera orientation
camera?.let { setCameraDisplayOrientation(this@MainActivity, it) }
//start the preview
holder.let { camera?.startPreview(it) }
}
override fun surfaceChanged(
holder: SurfaceHolder,
format: Int,
width: Int,
height: Int
) {
}
override fun surfaceDestroyed(holder: SurfaceHolder) {
camera.releaseCamera()
camera = null
}
})
}
override fun onPause() {
camera?.setPreviewCallback(null)
camera.releaseCamera() //release camera resources
camera = null
super.onPause()
}
override fun onDestroy() {
camera?.setPreviewCallback(null)
camera.releaseCamera() //release camera resources
camera = null
super.onDestroy()
}
Matching with face recognition
if (data != null && data.size > 1) {
matchHumanFaceListByArcSoft(
data = data,
width = resWidth,
height = resHeight,
humanList = listOfPerson,
doOnMatchedHuman = { matchedPerson ->
Toast.makeText(
this@MainActivity,
"匹配到${matchedPerson.name}",
Toast.LENGTH_SHORT
).show()
isFaceDetecting = false
},
doOnMatchMissing = {
Toast.makeText(
this@MainActivity,
"沒匹配到人,正在錄入",
Toast.LENGTH_SHORT
).show()
//bind face data to a new person
bindFaceCodeByByteArray(
Person(name = "Handsome guy"),
data,
resWidth,
resHeight
).doOnSuccess {
//add the newly enrolled person to the current list
listOfPerson.add(it)
Toast.makeText(
this@MainActivity,
"錄入成功",
Toast.LENGTH_SHORT
).show()
isFaceDetecting = false
}.subscribe()
},
doFinally = { }
)
}
The complete Activity code
package com.lxh.rxarcface
import android.hardware.Camera
import android.os.Build
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.util.Log
import android.view.SurfaceHolder
import android.view.SurfaceView
import android.widget.Toast
import com.lxh.rxarcfacelibrary.bindFaceCodeByByteArray
import com.lxh.rxarcfacelibrary.initArcSoftEngine
import com.lxh.rxarcfacelibrary.isFaceDetecting
import com.lxh.rxarcfacelibrary.matchHumanFaceListByArcSoft
class MainActivity : AppCompatActivity() {
private var camera: Camera? = null
private var listOfPerson: MutableList<Person> = mutableListOf()
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
//initialize the face recognition engine
initArcSoftEngine(
this,
"輸入官網申請的appid",
"輸入官網申請的"
)
//initialize the camera
initCameraOrigin(findViewById(R.id.surface_view))
}
//initialize the camera and the SurfaceView
private fun initCameraOrigin(surfaceView: SurfaceView) {
surfaceView.holder.addCallback(object : SurfaceHolder.Callback {
override fun surfaceCreated(holder: SurfaceHolder) {
//runs when the surface is created
if (camera == null) {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
camera =
openCamera(this@MainActivity) { data, camera, resWidth, resHeight ->
if (data != null && data.size > 1) {
matchHumanFaceListByArcSoft(
data = data,
width = resWidth,
height = resHeight,
humanList = listOfPerson,
doOnMatchedHuman = { matchedPerson ->
Toast.makeText(
this@MainActivity,
"匹配到${matchedPerson.name}",
Toast.LENGTH_SHORT
).show()
isFaceDetecting = false
},
doOnMatchMissing = {
Toast.makeText(
this@MainActivity,
"沒匹配到人,正在錄入",
Toast.LENGTH_SHORT
).show()
//bind face data to a new person
bindFaceCodeByByteArray(
Person(name = "Handsome guy"),
data,
resWidth,
resHeight
).doOnSuccess {
//add the newly enrolled person to the current list
listOfPerson.add(it)
Toast.makeText(
this@MainActivity,
"錄入成功",
Toast.LENGTH_SHORT
).show()
isFaceDetecting = false
}.subscribe()
},
doFinally = { }
)
}
}
}
}
//adjust the camera orientation
camera?.let { setCameraDisplayOrientation(this@MainActivity, it) }
//start the preview
holder.let { camera?.startPreview(it) }
}
override fun surfaceChanged(
holder: SurfaceHolder,
format: Int,
width: Int,
height: Int
) {
}
override fun surfaceDestroyed(holder: SurfaceHolder) {
camera.releaseCamera()
camera = null
}
})
}
override fun onPause() {
camera?.setPreviewCallback(null)
camera.releaseCamera() //release camera resources
camera = null
super.onPause()
}
override fun onDestroy() {
camera?.setPreviewCallback(null)
camera.releaseCamera() //release camera resources
camera = null
super.onDestroy()
}
}
Note: the demo does not check the camera permission; either grant it manually in system settings or add your own permission check.
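A minimal runtime-permission sketch you could call inside the Activity before initCameraOrigin; it uses the standard AndroidX helpers, and the request code is arbitrary:

// Requires: android.Manifest, android.content.pm.PackageManager,
// androidx.core.app.ActivityCompat, androidx.core.content.ContextCompat
private val requestCameraCode = 1

private fun ensureCameraPermission(): Boolean =
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        == PackageManager.PERMISSION_GRANTED
    ) {
        true
    } else {
        ActivityCompat.requestPermissions(
            this, arrayOf(Manifest.permission.CAMERA), requestCameraCode
        )
        false // wait for onRequestPermissionsResult before opening the camera
    }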
Inside the wrapper
For raw SDK usage, refer to the official demo, which you can download when registering for the SDK; it is not covered here. The wrapper presented below is considerably simpler to use than the raw SDK.
1. Add the dependencies
https://ai.arcsoft.com.cn/manual/docs#/140: make sure the ArcSoft dependency configuration from section 3.1 is already in place.
//RxJava2
implementation "io.reactivex.rxjava2:rxjava:2.2.13"
implementation 'io.reactivex.rxjava2:rxandroid:2.1.1'
implementation "io.reactivex.rxjava2:rxkotlin:2.3.0"
//Json serializer (the utils use Moshi for serialization; swap in another library if you prefer)
implementation("com.squareup.moshi:moshi-kotlin:1.9.2")
kapt("com.squareup.moshi:moshi-kotlin-codegen:1.9.2")
//Glide (the utils use Glide for image loading; swap in another library if you prefer)
implementation "com.github.bumptech.glide:glide:4.10.0"
2. Implement the utility class
Define the global state
//(ArcSoft) similarity threshold; scores above this value are treated as the same person
const val ARC_SOFT_VALUE_MATCHED = 0.8f
private var context: Context? = null
//(ArcSoft) face analysis engine, used wherever the app needs to turn a face photo into ArcSoft feature data
//Two engines are used because face photos fetched from the network or our own server are always upright, while frames from the Android Camera arrive at an unpredictable rotation, yet an orientation must be fixed at init time
private val faceDetectEngine = FaceEngine()
//(ArcSoft) face recognition engine, used for live matching
private val faceEngine = FaceEngine()
//timestamp of the last face detection
var lastFaceDetectingTime = 0L
//whether a detection is in progress (important: handing several frames to the SDK at once will overflow memory in the native C++ layer)
var isFaceDetecting = false
Initialization
/**
 * (ArcSoft) initialize the face recognition engines
* */
fun initArcSoftEngine(
contextTemp: Context,
arcAppId: String, //APP_ID applied for on the ArcSoft website
arcSdkKey: String //SDK_KEY applied for on the ArcSoft website
) {
context = contextTemp
val activeCode = FaceEngine.activeOnline(
context,
arcAppId,
arcSdkKey
)
Log.d("激活虹軟,結果碼:", activeCode.toString())
//face recognition engine
val faceEngineCode = faceEngine.init(
context,
DetectMode.ASF_DETECT_MODE_IMAGE, //detect mode: ASF_DETECT_MODE_VIDEO or ASF_DETECT_MODE_IMAGE
DetectFaceOrientPriority.ASF_OP_270_ONLY, //detection orientation; if unsure, switch to VIDEO mode and use ASF_OP_ALL_OUT (all orientations)
16,
6,
FaceEngine.ASF_FACE_RECOGNITION or FaceEngine.ASF_AGE or FaceEngine.ASF_FACE_DETECT or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
//face photo analysis engine
faceDetectEngine.init(
context,
DetectMode.ASF_DETECT_MODE_VIDEO,
DetectFaceOrientPriority.ASF_OP_ALL_OUT,
16,
6,
FaceEngine.ASF_FACE_RECOGNITION or FaceEngine.ASF_AGE or FaceEngine.ASF_FACE_DETECT or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
Log.d("FaceEngine init", "initEngine: init $faceEngineCode")
when (faceEngineCode) {
ErrorInfo.MOK,
ErrorInfo.MERR_ASF_ALREADY_ACTIVATED -> {
}
else -> showToast("初始化虹軟人臉識別錯誤,Code${faceEngineCode}")
}
}
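Since the wrapper stores the context and only needs to activate once, one reasonable lifecycle — a sketch, assuming a single app-wide initialization is enough for your use case — is to initialize when the process starts and release the engines once face features are no longer needed, using the unInit helpers defined near the end of this article. APP_ID and SDK_KEY are placeholders:

import android.app.Application

class FaceApp : Application() {
    override fun onCreate() {
        super.onCreate()
        initArcSoftEngine(this, APP_ID, SDK_KEY)
    }
}

// ...and once face features are no longer needed:
// unInitArcFaceEngine()
// unInitArcFaceDetectEngine()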
Next we need a contract. From the API walkthrough above we know that recognition ultimately means comparing two FaceFeature objects with compareFaceFeature(), so any data class we want to match — say a data class Person — must carry something that can produce a FaceFeature. But we may have several such classes, for example Student and Teacher, that are completely unrelated, so instead of a common base class I use an interface that every face-recognizable class implements.
Define the interface for recognizable entities
/**
* The interface every face-recognizable data class must implement
* */
interface IFaceDetect {
//returns the feature code as JSON
fun getFaceCodeJson(): String?
//returns the avatar URL
fun getAvatarUrl(): String?
//binds a feature code to this entity
fun bindFaceCode(faceCodeJson: String?)
}
Getting a FaceFeature from an image byte array
/**
 * (ArcSoft) bind a face feature code to a person from a face image byte array
* */
@Synchronized
fun <T : IFaceDetect> bindFaceCodeByByteArray(
person: T,
imageByteArray: ByteArray,
imageWidth: Int,
imageHeight: Int
): Single<T> {
return getArcFaceCodeByImageData(
imageByteArray,
imageWidth,
imageHeight
).flatMap {
Single.just(person.apply {
bindFaceCode(it)
})
}.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
}
/**
* Convert raw image data into an ArcFace feature code
* */
private fun getArcFaceCodeByImageData(
imageData: ByteArray,
imageWidth: Int,
imageHeight: Int
): Single<String> {
return Single.create { emitter ->
val detectStartTime = System.currentTimeMillis()
//detected faces
val faceInfoList: List<FaceInfo> = mutableListOf()
//face detection
val detectCode = faceDetectEngine.detectFaces(
imageData,
imageWidth,
imageHeight,
FaceEngine.CP_PAF_NV21,
faceInfoList
)
if (detectCode == 0) {
//face attribute analysis
val faceProcessCode = faceDetectEngine.process(
imageData,
imageWidth,
imageHeight,
FaceEngine.CP_PAF_NV21,
faceInfoList,
FaceEngine.ASF_AGE or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
//analysis succeeded
if (faceProcessCode == ErrorInfo.MOK && faceInfoList.isNotEmpty()) {
//the extracted face feature
val currentFaceFeature = FaceFeature()
//feature extraction
val res = faceDetectEngine.extractFaceFeature(
imageData,
imageWidth,
imageHeight,
FaceEngine.CP_PAF_NV21,
faceInfoList[0],
currentFaceFeature
)
//feature extraction succeeded
if (res == ErrorInfo.MOK) {
Log.d(
"!!人臉轉換耗時",
"${System.currentTimeMillis() - detectStartTime}"
)
Schedulers.io().scheduleDirect {
emitter.onSuccess(globalMoshi.toJson(currentFaceFeature))
}
}
} else {
Log.d("ARCFACE", "face process finished , code is $faceProcessCode")
Schedulers.io().scheduleDirect {
emitter.onSuccess("")
}
}
} else {
Log.d(
"ARCFACE",
"face detection finished, code is " + detectCode + ", face num is " + faceInfoList.size
)
Schedulers.io().scheduleDirect {
emitter.onSuccess("")
}
}
}
}
Getting a FaceFeature from an image URL
/**
 * (ArcSoft) bind face feature codes to a list of people from their face image URLs
* */
@Synchronized
fun <T : IFaceDetect> detectPersonAvatarAndBindFaceFeatureCodeByArcSoft(
personListTemp: List<T>?
): Single<List<T>> {
return Observable.fromIterable(personListTemp)
.flatMapSingle { person ->
getArcFaceCodeByPicUrl(person.getAvatarUrl())
.map { arcFaceCodeJson ->
person.bindFaceCode(arcFaceCodeJson)
person
}
}
.toList()
.subscribeOn(Schedulers.io())
}
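A hedged usage sketch: fetch your people from the backend, bind features to the whole list, then hand the result to the matcher. fetchPersons() is a hypothetical API call returning Single<List<Person>>:

fetchPersons() // hypothetical backend call
    .flatMap { detectPersonAvatarAndBindFaceFeatureCodeByArcSoft(it) }
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe({ persons ->
        listOfPerson = persons.toMutableList() // now ready for matchHumanFaceListByArcSoft
    }, { it.printStackTrace() })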
/**
* Convert a photo URL into an ArcFace feature code
* */
private fun getArcFaceCodeByPicUrl(
picUrl: String?
): Single<String> {
return Single.create { emitter ->
Glide.with(context!!)
.asBitmap()
.load(picUrl)
.listener(object : RequestListener<Bitmap> {
override fun onLoadFailed(
e: GlideException?,
model: Any?,
target: Target<Bitmap>?,
isFirstResource: Boolean
): Boolean {
emitter.onSuccess("")
return false
}
override fun onResourceReady(
resource: Bitmap?,
model: Any?,
target: Target<Bitmap>?,
dataSource: DataSource?,
isFirstResource: Boolean
): Boolean {
return false
}
})
.into(object : SimpleTarget<Bitmap>() {
@Synchronized
override fun onResourceReady(
bitMap: Bitmap,
transition: Transition<in Bitmap>?
) {
val detectStartTime = System.currentTimeMillis()
//detected faces
val faceInfoList: List<FaceInfo> = mutableListOf()
val faceByteArray = getPixelsBGR(bitMap)
//face detection
val detectCode = faceDetectEngine.detectFaces(
faceByteArray,
bitMap.width,
bitMap.height,
FaceEngine.CP_PAF_BGR24,
faceInfoList
)
if (detectCode == 0) {
//face attribute analysis
val faceProcessCode = faceDetectEngine.process(
faceByteArray,
bitMap.width,
bitMap.height,
FaceEngine.CP_PAF_BGR24,
faceInfoList,
FaceEngine.ASF_AGE or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
//analysis succeeded
if (faceProcessCode == ErrorInfo.MOK && faceInfoList.isNotEmpty()) {
//the extracted face feature
val currentFaceFeature = FaceFeature()
//feature extraction
val res = faceDetectEngine.extractFaceFeature(
faceByteArray,
bitMap.width,
bitMap.height,
FaceEngine.CP_PAF_BGR24,
faceInfoList[0],
currentFaceFeature
)
//feature extraction succeeded
if (res == ErrorInfo.MOK) {
Log.d(
"!!人臉轉換耗時",
"${System.currentTimeMillis() - detectStartTime}"
)
Schedulers.io().scheduleDirect {
emitter.onSuccess(globalMoshi.toJson(currentFaceFeature))
}
}
} else {
Log.d("ARCFACE", "face process finished , code is $faceProcessCode")
Schedulers.io().scheduleDirect {
emitter.onSuccess("")
}
}
} else {
Log.d(
"ARCFACE",
"face detection finished, code is " + detectCode + ", face num is " + faceInfoList.size
)
Schedulers.io().scheduleDirect {
emitter.onSuccess("")
}
}
}
})
}
}
Matching one person from a list
With the contract in place we can start recognizing. First, a method that picks the matching person out of a list:
/**
 * (ArcSoft) match a face image against a list of people
 * */
@Synchronized
fun <T : IFaceDetect> matchHumanFaceListByArcSoft(
data: ByteArray,
width: Int,
height: Int,
previewWidth: Int? = null,
previewHeight: Int? = null,
humanList: List<T>,
doOnMatchedHuman: (T) -> Unit,
doOnMatchMissing: (() -> Unit)? = null,
doFinally: (() -> Unit)? = null
) {
if (isFaceDetecting) return
synchronized(faceEngine) {
//Log.d(TAG_ARC_FACE, "current thread: ${Thread.currentThread().name}")
//mark detection as in progress
isFaceDetecting = true
//record the time of this detection
lastFaceDetectingTime = System.currentTimeMillis()
//detected faces
val faceInfoList: List<FaceInfo> = mutableListOf()
//face detection
val detectCode = faceEngine.detectFaces(
data,
width,
height,
FaceEngine.CP_PAF_NV21,
faceInfoList
)
if (detectCode != 0 || faceInfoList.isEmpty()) {
Log.d(TAG_ARC_FACE, "face detection finished, code is " + detectCode + ", face num is " + faceInfoList.size)
doFinally?.invoke()
isFaceDetecting = false
return
}
//face attribute analysis
val faceProcessCode = faceEngine.process(
data,
width,
height,
FaceEngine.CP_PAF_NV21,
faceInfoList,
FaceEngine.ASF_AGE or FaceEngine.ASF_GENDER or FaceEngine.ASF_FACE3DANGLE
)
//analysis failed
if (faceProcessCode != ErrorInfo.MOK) {
Log.d(TAG_ARC_FACE, "face process finished , code is $faceProcessCode")
doFinally?.invoke()
isFaceDetecting = false
return
}
//non-null previewWidth/previewHeight require the face to be centered in the preview
val needAvatarInViewCenter =
if (faceInfoList.isNotEmpty()) {
previewWidth != null
&& previewHeight != null
&& isAvatarInViewCenter(faceInfoList[0].rect, previewWidth, previewHeight)
} else false
//null previewWidth/previewHeight mean centering is not required
val doNotNeedAvatarInViewCenter = previewWidth == null && previewHeight == null
when {
(faceInfoList.isNotEmpty() && needAvatarInViewCenter)
|| (faceInfoList.isNotEmpty() && doNotNeedAvatarInViewCenter) -> {
}
else -> { //no usable face; abort matching
doFinally?.invoke()
isFaceDetecting = false
return
}
}
//the extracted face feature
val currentFaceFeature = FaceFeature()
//feature extraction
val res = faceEngine.extractFaceFeature(
data,
width,
height,
FaceEngine.CP_PAF_NV21,
faceInfoList[0],
currentFaceFeature
)
//feature extraction failed
if (res != ErrorInfo.MOK) {
doFinally?.invoke()
isFaceDetecting = false
return
}
//iterate over the list and compare
val matchedMeetingPerson = humanList.find {
val faceSimilar = FaceSimilar()
val startDetectTime = System.currentTimeMillis()
if (it.getFaceCodeJson() == null || it.getFaceCodeJson()!!.isEmpty()) {
return@find false
}
val compareResult =
faceEngine.compareFaceFeature(
globalMoshi.fromJson(it.getFaceCodeJson()),
currentFaceFeature,
faceSimilar
)
Log.d(TAG_ARC_FACE, "單人匹配耗時: ${System.currentTimeMillis() - startDetectTime}")
if (compareResult == ErrorInfo.MOK) {
Log.d("相似度", faceSimilar.score.toString())
faceSimilar.score > ARC_SOFT_VALUE_MATCHED
} else {
Log.d(TAG_ARC_FACE, "對比發生錯誤: $compareResult")
false
}
}
if (matchedMeetingPerson == null) {
//no one matched
doOnMatchMissing?.invoke()
} else {
//someone matched
doOnMatchedHuman(matchedMeetingPerson)
}
}
}
Matching a single person
/**
 * (ArcSoft) match a face image against a single person
* */
@Synchronized
fun <T : IFaceDetect> matchHumanFaceSoloByArcSoft(
data: ByteArray,
width: Int,
height: Int,
previewWidth: Int? = null,
previewHeight: Int? = null,
human: T,
doOnMatched: (T) -> Unit,
doOnMatchMissing: (() -> Unit)? = null,
doFinally: (() -> Unit)? = null
) {
matchHumanFaceListByArcSoft(
data = data,
width = width,
height = height,
previewWidth = previewWidth,
previewHeight = previewHeight,
humanList = listOf(human),
doOnMatchedHuman = doOnMatched,
doOnMatchMissing = doOnMatchMissing,
doFinally = doFinally
)
}
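A minimal usage sketch inside the camera preview callback, assuming currentUser is an IFaceDetect whose face code has already been bound:

matchHumanFaceSoloByArcSoft(
    data = data,
    width = resWidth,
    height = resHeight,
    human = currentUser, // assumed: an IFaceDetect with a bound face code
    doOnMatched = {
        isFaceDetecting = false // identity confirmed: unlock, check in, etc.
    },
    doOnMatchMissing = {
        isFaceDetecting = false // this frame is not that person
    }
)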
Checking whether the face is centered in the preview
/**
 * Check whether the face rectangle sits in the middle of the view
* */
fun isAvatarInViewCenter(rect: Rect, previewWidth: Int, previewHeight: Int): Boolean {
try {
val minSX = previewHeight / 10f
val minZY = kotlin.math.abs(previewWidth - previewHeight) / 2 + minSX
val isLeft = kotlin.math.abs(rect.left) > minZY
val isTop = kotlin.math.abs(rect.top) > minSX
val isRight = kotlin.math.abs(rect.left) + rect.width() < (previewWidth - minZY)
val isBottom = kotlin.math.abs(rect.top) + rect.height() < (previewHeight - minSX)
if (isLeft && isTop && isRight && isBottom) return true
} catch (e: Exception) {
Log.e("ARCFACE", e.localizedMessage)
}
return false
}
Destroying the engines
/**
 * Release the face recognition engine
* */
fun unInitArcFaceEngine() {
faceEngine.unInit()
}
/**
* Release the photo analysis engine
* */
fun unInitArcFaceDetectEngine() {
faceDetectEngine.unInit()
}
A helper for extracting BGR pixels
/**
 * Extract the BGR pixels from a bitmap
* @param image
* @return
*/
fun getPixelsBGR(image: Bitmap): ByteArray? {
// calculate how many bytes our image consists of
val bytes = image.byteCount
val buffer = ByteBuffer.allocate(bytes) // Create a new buffer
image.copyPixelsToBuffer(buffer) // Move the byte data to the buffer
val temp = buffer.array() // Get the underlying array containing the data.
val pixels = ByteArray(temp.size / 4 * 3) // Allocate for BGR
// Copy pixels into place
for (i in 0 until temp.size / 4) {
pixels[i * 3] = temp[i * 4 + 2] //B
pixels[i * 3 + 1] = temp[i * 4 + 1] //G
pixels[i * 3 + 2] = temp[i * 4] //R
}
return pixels
}
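One caveat: this helper assumes a 4-bytes-per-pixel ARGB_8888 bitmap (the format Glide decodes to by default), and the SDK still expects the width to be a multiple of 4. A minimal usage sketch feeding the result back into the SDK as BGR24:

val bgr = getPixelsBGR(bitmap)
val faces = mutableListOf<FaceInfo>()
val code = faceDetectEngine.detectFaces(
    bgr, bitmap.width, bitmap.height, FaceEngine.CP_PAF_BGR24, faces
)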
Since the serialization helpers are used above, their code is included here as well so you can copy it directly.
Serialization extension utilities (Moshi extension functions and ModelUtil)
import com.squareup.moshi.JsonAdapter
import com.squareup.moshi.Moshi
import com.squareup.moshi.Types
import java.lang.reflect.Type
import com.squareup.moshi.kotlin.reflect.KotlinJsonAdapterFactory
//a minimal shared Moshi instance for the helpers below (an assumed definition, since globalMoshi is referenced throughout)
val globalMoshi: Moshi = Moshi.Builder().add(KotlinJsonAdapterFactory()).build()
inline fun <reified T> String?.fromJson(moshi: Moshi = globalMoshi): T? =
this?.let { ModelUtil.fromJson(this, T::class.java, moshi = moshi) }
inline fun <reified T> T?.toJson(moshi: Moshi = globalMoshi): String =
ModelUtil.toJson(this, T::class.java, moshi = moshi)
inline fun <reified T> Moshi.fromJson(json: String?): T? =
json?.let { ModelUtil.fromJson(json, T::class.java, moshi = this) }
inline fun <reified T> Moshi.toJson(t: T?): String =
ModelUtil.toJson(t, T::class.java, moshi = this)
inline fun <reified T> List<T>.listToJson(): String =
ModelUtil.listToJson(this, T::class.java)
inline fun <reified T> String.jsonToList(): List<T>? =
ModelUtil.jsonToList(this, T::class.java)
object ModelUtil {
inline fun <reified S, reified T> copyModel(source: S): T? {
return fromJson(
toJson(
any = source,
classOfT = S::class.java
), T::class.java
)
}
fun <T> toJson(any: T?, classOfT: Class<T>, moshi: Moshi = globalMoshi): String {
return moshi.adapter(classOfT).toJson(any)
}
fun <T> fromJson(json: String, classOfT: Class<T>, moshi: Moshi = globalMoshi): T? {
return moshi.adapter(classOfT).lenient().fromJson(json)
}
fun <T> fromJson(json: String, typeOfT: Type, moshi: Moshi = globalMoshi): T? {
return moshi.adapter<T>(typeOfT).fromJson(json)
}
fun <T> listToJson(list: List<T>?, classOfT: Class<T>, moshi: Moshi = globalMoshi): String {
val type = Types.newParameterizedType(List::class.java, classOfT)
val adapter: JsonAdapter<List<T>> = moshi.adapter(type)
return adapter.toJson(list)
}
fun <T> jsonToList(json: String, classOfT: Class<T>, moshi: Moshi = globalMoshi): List<T>? {
val type = Types.newParameterizedType(List::class.java, classOfT)
val adapter = moshi.adapter<List<T>>(type)
return adapter.fromJson(json)
}
}
Camera extension utilities
import android.app.Activity
import android.content.Context
import android.content.res.Configuration
import android.graphics.ImageFormat
import android.hardware.Camera
import android.hardware.camera2.CameraManager
import android.os.Build
import android.util.Log
import android.view.Surface
import android.view.SurfaceHolder
import androidx.annotation.RequiresApi
import kotlin.math.abs
private var resultWidth = 0
private var resultHeight = 0
var cameraId:Int = 0
/**
* Open the camera
* */
@RequiresApi(Build.VERSION_CODES.LOLLIPOP)
fun openCamera(
context: Context,
width: Int = 800,
height: Int = 600,
doOnPreviewCallback: (ByteArray?, Camera?, Int, Int) -> Unit
): Camera {
Camera.getNumberOfCameras()
(context.getSystemService(Context.CAMERA_SERVICE) as CameraManager).cameraIdList
cameraId = findFrontFacingCameraID()
val c = Camera.open(cameraId)
initParameters(context, c, width, height)
c.setPreviewCallback { data, camera ->
doOnPreviewCallback(
data,
camera,
resultWidth,
resultHeight
)
}
return c
}
private fun findFrontFacingCameraID(): Int {
var cameraId = -1
// Search for the back facing camera
val numberOfCameras = Camera.getNumberOfCameras()
for (i in 0 until numberOfCameras) {
val info = Camera.CameraInfo()
Camera.getCameraInfo(i, info)
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
Log.d("CAMERA UTIL", "Camera found ,ID is $i")
cameraId = i
break
}
}
return cameraId
}
/**
* Configure the camera parameters
* */
fun initParameters(
context: Context,
camera: Camera,
width: Int,
height: Int
) {
//obtain the Parameters object
val parameters = camera.parameters
val size = getOptimalSize(context, parameters.supportedPreviewSizes, width, height)
parameters?.setPictureSize(size?.width ?: 0, size?.height ?: 0)
parameters?.setPreviewSize(size?.width ?: 0, size?.height ?: 0)
resultWidth = size?.width ?: 0
resultHeight = size?.height ?: 0
//set the preview format
parameters?.previewFormat = ImageFormat.NV21
//focus mode
parameters?.focusMode = Camera.Parameters.FOCUS_MODE_FIXED
//apply the parameters to the camera
camera.parameters = parameters
}
/**
* Release camera resources
* */
fun Camera?.releaseCamera() {
if (this != null) {
//stop the preview
stopPreview()
setPreviewCallback(null)
//release the camera
release()
}
}
/**
* Get the current display rotation
* */
fun getDisplayRotation(activity: Activity): Int {
val rotation = activity.windowManager.defaultDisplay
.rotation
when (rotation) {
Surface.ROTATION_0 -> return 0
Surface.ROTATION_90 -> return 90
Surface.ROTATION_180 -> return 180
Surface.ROTATION_270 -> return 270
}
return 90
}
/**
* Set the preview display orientation
* */
fun setCameraDisplayOrientation(
activity: Activity,
camera: Camera
) {
// See android.hardware.Camera.setCameraDisplayOrientation for
// documentation.
val info = Camera.CameraInfo()
Camera.getCameraInfo(cameraId, info)
val degrees = getDisplayRotation(activity)
var result: Int
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
result = (info.orientation + degrees) % 360
result = (360 - result) % 360 // compensate the mirror
} else { // back-facing
result = (info.orientation - degrees + 360) % 360
}
camera.setDisplayOrientation(result)
}
/**
* Start the camera preview
* */
fun Camera.startPreview(surfaceHolder: SurfaceHolder) {
//attach the live preview to the given SurfaceHolder
setPreviewDisplay(surfaceHolder)
startPreview()
}
/**
* Pick the supported preview size whose aspect ratio is closest to width/height
* @param context
* @param sizes the preview sizes supported by the camera
* @param w the width of the camera preview view
* @param h the height of the camera preview view
* @return
*/
private fun getOptimalSize(
context: Context,
sizes: List<Camera.Size>,
w: Int,
h: Int
): Camera.Size? {
val ASPECT_TOLERANCE = 0.1 //tolerance used to pick the best match
var targetRatio = -1.0
val orientation = context.resources.configuration.orientation
//keep targetRatio > 1, since size.width/size.height is always > 1
if (orientation == Configuration.ORIENTATION_PORTRAIT) {
targetRatio = h.toDouble() / w
} else if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
targetRatio = w.toDouble() / h
}
var optimalSize: Camera.Size? = null
var minDiff = Double.MAX_VALUE
val targetHeight = w.coerceAtMost(h)
for (size in sizes) {
val ratio = size.width.toDouble() / size.height
//skip sizes whose aspect ratio deviates beyond the tolerance
if (abs(ratio - targetRatio) > ASPECT_TOLERANCE) {
continue
}
if (abs(size.height - targetHeight) < minDiff) {
optimalSize = size
minDiff = abs(size.height - targetHeight).toDouble()
}
}
//if the ratio filter found nothing, fall back to the smallest height difference so a size is always returned
if (optimalSize == null) {
minDiff = Double.MAX_VALUE
for (size in sizes) {
if (abs(size.height - targetHeight) < minDiff) {
optimalSize = size
minDiff = abs(size.height - targetHeight).toDouble()
}
}
}
return optimalSize
}