Apple's built-in Core Image framework can do face detection, and it does so efficiently; in practice it works better than using OpenCV. Beyond detection, Core Image can also apply rendering effects, which makes it very handy, although related documentation is scarce. This project follows a keynote video from WWDC 2012.
Here is the rough flow.
First, load a local image:
UIImage* image = [UIImage imageNamed:@"face"];
Then convert it to a CIImage:
CIImage* ciimage = [CIImage imageWithCGImage:image.CGImage];
Next, create a CIDetector instance to perform the detection:
NSDictionary* opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                 forKey:CIDetectorAccuracy];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:opts];
Calling -featuresInImage: on the detector returns an array of results (it can detect multiple faces):
NSArray* features = [detector featuresInImage:ciimage];
Then loop over the results, drawing a rectangle around each face and marking the eyes and mouth:
for (CIFaceFeature *faceFeature in features) {
    // testImage is the UIImageView that displays the photo
    CGFloat faceWidth = testImage.bounds.size.width / 4;

    // Mark the face rectangle
    UIView* faceView = [[UIView alloc] initWithFrame:[self verticalFlipFromRect:faceFeature.bounds
                                                                         inSize:image.size
                                                                         toSize:testImage.bounds.size]];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    [testImage addSubview:faceView];

    // Mark the left eye
    if (faceFeature.hasLeftEyePosition) {
        UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, faceWidth * 0.3, faceWidth * 0.3)];
        leftEyeView.backgroundColor = [[UIColor blueColor] colorWithAlphaComponent:0.3];
        leftEyeView.center = [self verticalFlipFromPoint:faceFeature.leftEyePosition
                                                  inSize:image.size
                                                  toSize:testImage.bounds.size];
        leftEyeView.layer.cornerRadius = faceWidth * 0.15;
        [testImage addSubview:leftEyeView];
    }

    // Mark the right eye
    if (faceFeature.hasRightEyePosition) {
        UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, faceWidth * 0.3, faceWidth * 0.3)];
        rightEyeView.backgroundColor = [[UIColor blueColor] colorWithAlphaComponent:0.3];
        rightEyeView.center = [self verticalFlipFromPoint:faceFeature.rightEyePosition
                                                   inSize:image.size
                                                   toSize:testImage.bounds.size];
        rightEyeView.layer.cornerRadius = faceWidth * 0.15;
        [testImage addSubview:rightEyeView];
    }

    // Mark the mouth
    if (faceFeature.hasMouthPosition) {
        // The initial origin does not matter; the center is set just below
        UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(0, 0, faceWidth * 0.4, faceWidth * 0.4)];
        mouth.backgroundColor = [[UIColor greenColor] colorWithAlphaComponent:0.3];
        mouth.center = [self verticalFlipFromPoint:faceFeature.mouthPosition
                                            inSize:image.size
                                            toSize:testImage.bounds.size];
        mouth.layer.cornerRadius = faceWidth * 0.2;
        [testImage addSubview:mouth];
    }
}
Note that the coordinate system of the detection results differs from UIKit's: Core Image coordinates have their origin at the bottom-left, so each result position must be vertically flipped (and scaled from the image size to the view size):
- (CGRect)verticalFlipFromRect:(CGRect)originalRect inSize:(CGSize)originalSize toSize:(CGSize)finalSize {
    CGRect finalRect = originalRect;
    // Flip the y origin from bottom-left to top-left
    finalRect.origin.y = originalSize.height - finalRect.origin.y - finalRect.size.height;
    // Scale from image coordinates to view coordinates
    CGFloat hRate = finalSize.width / originalSize.width;
    CGFloat vRate = finalSize.height / originalSize.height;
    finalRect.origin.x *= hRate;
    finalRect.origin.y *= vRate;
    finalRect.size.width *= hRate;
    finalRect.size.height *= vRate;
    return finalRect;
}

- (CGPoint)verticalFlipFromPoint:(CGPoint)originalPoint inSize:(CGSize)originalSize toSize:(CGSize)finalSize {
    CGPoint finalPoint = originalPoint;
    // Flip y, then scale both axes
    finalPoint.y = originalSize.height - finalPoint.y;
    CGFloat hRate = finalSize.width / originalSize.width;
    CGFloat vRate = finalSize.height / originalSize.height;
    finalPoint.x *= hRate;
    finalPoint.y *= vRate;
    return finalPoint;
}
For full details, see the project on GitHub; it also includes real-time face detection on live video.