First, let's think through the concrete steps. The rough flow is: 1. open the device's camera --> 2. capture the QR code image --> 3. parse the captured image --> 4. handle the parsed result. These steps rely on the AVFoundation framework, so remember to import it.
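Before any of the capture code below, the class hosting the scanner has to import AVFoundation and adopt the metadata output's delegate protocol, because we set the delegate to self further down. A minimal sketch, where the class name ScanViewController is only a placeholder:
#import <AVFoundation/AVFoundation.h>

//The scanning view controller must conform to AVCaptureMetadataOutputObjectsDelegate,
//since the metadata output's delegate is set to self below.
@interface ScanViewController () <AVCaptureMetadataOutputObjectsDelegate>
@property (nonatomic, strong) AVCaptureSession *session;//kept so the delegate can stop it later
@property (nonatomic, strong) AVCaptureDevice *device;//kept so the torch code can reach it
@end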
//Get the camera as the capture device; keep it in an ivar because the torch code below needs it
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
//Create an AVCaptureDeviceInput that wraps the camera above as the input device
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:_device error:nil];
//After capture, an AVCaptureMetadataOutput object delivers the recognized metadata (the decoded code) so it can be processed
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
//Results arrive through a delegate; set it together with the dispatch queue for the callbacks
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
//Set the output types. AVMetadataObjectTypeQRCode is the QR code type, and several barcode
//types are added as well so that scanning a barcode is recognized too. Note that
//metadataObjectTypes can only be set after the output has been added to the session,
//so the assignment is done below, right after [session addOutput:output].
//Set the scan area (rectOfInterest). The metadata output uses a rotated, normalized
//coordinate system -- the origin sits at the top-right of a portrait screen:
// Y <------------------+   coordinate axes
//                      |
//                      |
//                      |
//                      V
//                      X
//Key point: x and y are swapped and width and height are swapped relative to screen
//coordinates, and every value is a fraction of the preview size (0.0 - 1.0).
CGFloat leadSpace = (kScreenWidth - MiddleWidth) / 2;
[output setRectOfInterest:CGRectMake(0.2, leadSpace / kScreenWidth, 0.8 * kScreenWidth / kScreenHeight, 0.8)];
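To make the swap concrete, here is a hedged helper (not part of the original code) that turns a scan rectangle given in portrait screen points into the value rectOfInterest expects; once the preview layer exists, AVCaptureVideoPreviewLayer's metadataOutputRectOfInterestForRect: can perform the same conversion more precisely:
//Hypothetical helper: convert a scan rect in screen points into the normalized,
//axis-swapped rect used by rectOfInterest (x<->y and width<->height are exchanged).
static CGRect SCRectOfInterestForScanRect(CGRect scanRect, CGSize screenSize) {
    return CGRectMake(scanRect.origin.y / screenSize.height,   // x <- y / H
                      scanRect.origin.x / screenSize.width,    // y <- x / W
                      scanRect.size.height / screenSize.height,// w <- h / H
                      scanRect.size.width / screenSize.width); // h <- w / W
}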
The above completes the capture setup, but scanning has not actually started yet. To run a scan you need the AVCaptureSession class, which treats one scan as a session; only after the session starts does the real 'scanning' begin. The code is as follows.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session setSessionPreset:AVCaptureSessionPresetHigh];//capture quality used for scanning
if ([session canAddInput:input]) {
    [session addInput:input];//add the input to the session
}
if ([session canAddOutput:output]) {
    [session addOutput:output];//add the output to the session
}
//Only now that the output is attached to the session are these types available to set
output.metadataObjectTypes = @[AVMetadataObjectTypeEAN13Code,
                               AVMetadataObjectTypeEAN8Code,
                               AVMetadataObjectTypeCode128Code,
                               AVMetadataObjectTypeQRCode];
What we should do next is not start the session (start scanning) right away. If you call the session's startRunning method now, you will find the screen is simply black, because we have not yet set up the camera's viewfinder. Setting up the viewfinder requires the AVCaptureVideoPreviewLayer class.
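A minimal sketch of that setup, assuming the code runs in the scanning view controller and the session built above is kept in the self.session property:
//Attach a preview layer to the view so the camera feed is visible as a viewfinder
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;//fill the layer, cropping edges if needed
previewLayer.frame = self.view.bounds;//use the whole screen as the viewfinder
[self.view.layer insertSublayer:previewLayer atIndex:0];//keep it under any overlay views
//Keep the session around so the delegate can stop it later, then start scanning
self.session = session;
[self.session startRunning];
Once the session is running, every recognized code is delivered through the AVCaptureMetadataOutputObjectsDelegate callback below; this is where the scan result is read: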
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    [self.session stopRunning];//stop the session so we do not keep scanning
    if (metadataObjects.count > 0) {
        AVMetadataMachineReadableCodeObject *obj = metadataObjects[0];
        NSString *result = obj.stringValue;//this is the scan result
        //handle the result...
    }
}
Adding a torch (flashlight) lets users scan smoothly even when the lighting is poor. The code is as follows:
//Toggle the torch when the user taps the screen
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    NSError *error;
    if (_device.hasTorch) { //check whether the device actually has a torch
        BOOL locked = [_device lockForConfiguration:&error];
        if (!locked) {
            if (error) {
                NSLog(@"lock torch configuration error:%@", error.localizedDescription);
            }
            return;
        }
        _device.torchMode = (_device.torchMode == AVCaptureTorchModeOff ? AVCaptureTorchModeOn : AVCaptureTorchModeOff);
        [_device unlockForConfiguration];
    }
}
Reading a QR code directly from an image is something Apple did not provide in iOS 7, but the gap was filled in iOS 8. It mainly relies on Core Image (CIDetector).
//Read the QR code information contained in an image
- (NSString *)scQRReaderForImage:(UIImage *)qrimage {
    CIContext *context = [CIContext contextWithOptions:nil];
    //CIDetectorTypeQRCode is available starting with iOS 8
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode context:context options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    CIImage *image = [CIImage imageWithCGImage:qrimage.CGImage];
    NSArray *features = [detector featuresInImage:image];
    CIQRCodeFeature *feature = [features firstObject];
    NSString *result = feature.messageString;//nil if the image contains no QR code
    return result;
}
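A quick usage sketch, not from the original: let the user pick a photo and feed it to the method above. The hosting controller is assumed to present a UIImagePickerController and to adopt UIImagePickerControllerDelegate and UINavigationControllerDelegate:
//Hypothetical usage: read a QR code from a photo the user picks from the library
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *picked = info[UIImagePickerControllerOriginalImage];
    NSString *message = [self scQRReaderForImage:picked];
    NSLog(@"QR code in picked image: %@", message);//nil when no QR code was found
    [picker dismissViewControllerAnimated:YES completion:nil];
}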
Generating a QR code uses Core Image just like reading one from an image. The concrete steps are as follows:
- (UIImage *)makeQRCodeForString:(NSString *)string {
    NSString *text = string;
    NSData *stringData = [text dataUsingEncoding:NSUTF8StringEncoding];
    //Generate the code
    CIFilter *qrFilter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    [qrFilter setValue:stringData forKey:@"inputMessage"];//hand the string over via KVC; the filter turns it into a QR code
    [qrFilter setValue:@"M" forKey:@"inputCorrectionLevel"];//error-correction level; the higher it is, the more damage the code tolerates. L:7% M:15% Q:25% H:30%
    //QR code colors
    UIColor *onColor = [UIColor redColor];
    UIColor *offColor = [UIColor blueColor];
    //Apply the colors; skip this step if a plain black-on-white QR code is enough
    CIFilter *colorFilter = [CIFilter filterWithName:@"CIFalseColor"
                                       keysAndValues:
                             @"inputImage", qrFilter.outputImage,
                             @"inputColor0", [CIColor colorWithCGColor:onColor.CGColor],
                             @"inputColor1", [CIColor colorWithCGColor:offColor.CGColor],
                             nil];
    CIImage *qrImage = colorFilter.outputImage;//this is the QR code image
    //Scale it up so it stays sharp
    qrImage = [qrImage imageByApplyingTransform:CGAffineTransformMakeScale(20, 20)];
    //Draw it into a UIImage
    CGSize size = [[UIImage imageWithCIImage:qrImage] size];
    CGImageRef cgImage = [[CIContext contextWithOptions:nil] createCGImage:qrImage fromRect:qrImage.extent];
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationNone);
    CGContextScaleCTM(context, 1.0, -1.0);//the generated QR code comes out upside down, so flip it vertically
    CGContextDrawImage(context, CGContextGetClipBoundingBox(context), cgImage);
    UIImage *codeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(cgImage);
    return codeImage;
}
Finally, adding a watermark (for example a small logo) on top of an image:
- (UIImage *)addImageToSuperImage:(UIImage *)superImage withSubImage:(UIImage *)subImage andSubImagePosition:(CGRect)posRect {
    UIGraphicsBeginImageContext(superImage.size);
    [superImage drawInRect:CGRectMake(0, 0, superImage.size.width, superImage.size.height)];
    //posRect gives the position and size of the watermark within the base image
    [subImage drawInRect:posRect];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
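Putting the last two helpers together, here is a hedged usage sketch (the string and the "logo" asset name are placeholders, not from the original) that generates a QR code and stamps a small logo in its center. Keep the logo well under the error-correction budget chosen above (level M tolerates roughly 15% damage), otherwise the code may no longer scan:
UIImage *qrCode = [self makeQRCodeForString:@"https://example.com"];
UIImage *logo = [UIImage imageNamed:@"logo"];//placeholder asset name
//Center the logo and keep it small so the code remains readable
CGFloat side = qrCode.size.width * 0.2;
CGRect logoRect = CGRectMake((qrCode.size.width - side) / 2,
                             (qrCode.size.height - side) / 2,
                             side, side);
UIImage *qrWithLogo = [self addImageToSuperImage:qrCode withSubImage:logo andSubImagePosition:logoRect];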