Project repository: https://github.com/tujinqiu/KTMovieImagesTransfer
If you have a sequence of consecutive images, converting it to MP4 before sending it over the network is a good choice, because the file is much smaller. Extracting a consecutive image sequence from a video is also a problem you occasionally run into. I tried solving both problems three ways: with the native iOS APIs, with FFmpeg, and with OpenCV.
1. The native approach
The native approach uses the AVFoundation framework's APIs for the conversion.
1. Extracting frames from a video
- (NSError *)nativeTransferMovie:(NSString *)movie toImagesAtPath:(NSString *)imagesPath
{
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:movie] options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    CMTime time = asset.duration;
    NSUInteger totalFrameCount = CMTimeGetSeconds(time) * kKTImagesMovieTransferFPS;
    NSMutableArray *timesArray = [NSMutableArray arrayWithCapacity:totalFrameCount];
    for (NSUInteger ii = 0; ii < totalFrameCount; ++ii) {
        CMTime timeFrame = CMTimeMake(ii, kKTImagesMovieTransferFPS);
        NSValue *timeValue = [NSValue valueWithCMTime:timeFrame];
        [timesArray addObject:timeValue];
    }
    // Request exact frame times; the default tolerances would snap to nearby keyframes.
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    __block NSError *returnError = nil;
    [generator generateCGImagesAsynchronouslyForTimes:timesArray completionHandler:^(CMTime requestedTime, CGImageRef _Nullable image, CMTime actualTime, AVAssetImageGeneratorResult result, NSError * _Nullable error) {
        switch (result) {
            case AVAssetImageGeneratorFailed:
                returnError = error;
                [self sendToMainThreadError:returnError];
                break;
            case AVAssetImageGeneratorSucceeded:
            {
                NSString *imageFile = [imagesPath stringByAppendingPathComponent:[NSString stringWithFormat:@"%lld.jpg", requestedTime.value]];
                NSData *data = UIImageJPEGRepresentation([UIImage imageWithCGImage:image], 1.0);
                if ([data writeToFile:imageFile atomically:YES]) {
                    NSUInteger index = requestedTime.value;
                    dispatch_async(dispatch_get_main_queue(), ^{
                        if ([self.delegate respondsToSelector:@selector(transfer:didTransferedAtIndex:totalFrameCount:)]) {
                            [self.delegate transfer:self didTransferedAtIndex:index totalFrameCount:totalFrameCount];
                        }
                    });
                    if (index == totalFrameCount - 1) {
                        // Deliver the finish callback on the main thread, like the progress callback.
                        dispatch_async(dispatch_get_main_queue(), ^{
                            if ([self.delegate respondsToSelector:@selector(transfer:didFinishedWithError:)]) {
                                [self.delegate transfer:self didFinishedWithError:nil];
                            }
                        });
                    }
                } else {
                    returnError = [self errorWithErrorCode:KTTransferWriteError object:imageFile];
                    [self sendToMainThreadError:returnError];
                    [generator cancelAllCGImageGeneration];
                }
            }
                break;
            default:
                break;
        }
    }];
    return returnError;
}
The extraction is done by AVAssetImageGenerator. Note that before calling generateCGImagesAsynchronouslyForTimes:completionHandler:, the frame count is computed from the frame rate and the video duration, and one request time is built per frame.
2. Encoding frames into a video
- (NSError *)nativeTransferImageFiles:(NSArray<NSString *> *)imageFiles toMovie:(NSString *)movie
{
    __block NSError *returnError = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:movie] fileType:AVFileTypeQuickTimeMovie error:&returnError];
    if (returnError) {
        [self sendToMainThreadError:returnError];
        return returnError;
    }
    UIImage *firstImage = [UIImage imageWithContentsOfFile:[imageFiles firstObject]];
    if (!firstImage) {
        returnError = [self errorWithErrorCode:KTTransferReadImageError object:[imageFiles firstObject]];
        [self sendToMainThreadError:returnError];
        return returnError;
    }
    CGSize size = firstImage.size;
    // H.264 output
    NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
                                    AVVideoWidthKey: [NSNumber numberWithInt:size.width],
                                    AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:nil];
    dispatch_async(KTImagesMovieTransferQueue(), ^{
        [videoWriter addInput:writerInput];
        [videoWriter startWriting];
        [videoWriter startSessionAtSourceTime:kCMTimeZero];
        UIImage *tmpImage = nil;
        NSUInteger index = 0;
        while (index < imageFiles.count) {
            // Only consume a frame when the input is ready; otherwise spin and retry.
            if (writerInput.readyForMoreMediaData) {
                CMTime presentTime = CMTimeMake(index, kKTImagesMovieTransferFPS);
                tmpImage = [UIImage imageWithContentsOfFile:[imageFiles objectAtIndex:index]];
                if (!tmpImage) {
                    returnError = [self errorWithErrorCode:KTTransferReadImageError object:[imageFiles objectAtIndex:index]];
                    [self sendToMainThreadError:returnError];
                    return;
                }
                CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[tmpImage CGImage] size:size];
                if (buffer) {
                    [self appendToAdapter:adaptor pixelBuffer:buffer atTime:presentTime withInput:writerInput];
                    CFRelease(buffer);
                } else {
                    // Finish the session
                    [writerInput markAsFinished];
                    [videoWriter finishWritingWithCompletionHandler:^{
                    }];
                    returnError = [self errorWithErrorCode:KTTransferGetBufferError object:nil];
                    [self sendToMainThreadError:returnError];
                    return;
                }
                dispatch_async(dispatch_get_main_queue(), ^{
                    if ([self.delegate respondsToSelector:@selector(transfer:didTransferedAtIndex:totalFrameCount:)]) {
                        [self.delegate transfer:self didTransferedAtIndex:index totalFrameCount:imageFiles.count];
                    }
                });
                index++;
            }
        }
        // Finish the session
        [writerInput markAsFinished];
        [videoWriter finishWritingWithCompletionHandler:^{
            if (videoWriter.status != AVAssetWriterStatusCompleted) {
                returnError = videoWriter.error;
                [self sendToMainThreadError:returnError];
            } else {
                if ([self.delegate respondsToSelector:@selector(transfer:didFinishedWithError:)]) {
                    [self.delegate transfer:self didFinishedWithError:nil];
                }
            }
        }];
    });
    return returnError;
}
Here AVAssetWriter and AVAssetWriterInput do the image-to-video work; the settings dictionary passed to AVAssetWriterInput is how the output video's properties (codec, dimensions) are configured.
2. OpenCV
1. Adding OpenCV to the project
Download the iOS framework from the official site and drop it into the project.
If the OpenCV headers trigger C++ compile errors, that is because Objective-C only supports C by default, not C++, while OpenCV uses C++. Rename the .m files to .mm so they are compiled as Objective-C++.
If you hit the following linker error:
Undefined symbols for architecture x86_64:
"_jpeg_free_large", referenced from:
_free_pool in opencv2(jmemmgr.o)
"_jpeg_free_small", referenced from:
_free_pool in opencv2(jmemmgr.o)
_self_destruct in opencv2(jmemmgr.o)
"_jpeg_get_large", referenced from:
_alloc_large in opencv2(jmemmgr.o)
_realize_virt_arrays in opencv2(jmemmgr.o)
"_jpeg_get_small", referenced from:
_jinit_memory_mgr in opencv2(jmemmgr.o)
_alloc_small in opencv2(jmemmgr.o)
"_jpeg_mem_available", referenced from:
_realize_virt_arrays in opencv2(jmemmgr.o)
"_jpeg_mem_init", referenced from:
_jinit_memory_mgr in opencv2(jmemmgr.o)
"_jpeg_mem_term", referenced from:
_jinit_memory_mgr in opencv2(jmemmgr.o)
_self_destruct in opencv2(jmemmgr.o)
"_jpeg_open_backing_store", referenced from:
_realize_virt_arrays in opencv2(jmemmgr.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
This is caused by the missing libjpeg library. Download and install libjpeg-turbo from https://sourceforge.net/projects/libjpeg-turbo/files/ . After installing, run `lipo -info /opt/libjpeg-turbo/lib/libjpeg.a`; it shows that the libjpeg.a at that path supports all the processor architectures. Add that file to the project.
If you hit the following linker error:
Undefined symbols for architecture x86_64:
"_CMSampleBufferGetImageBuffer", referenced from:
-[CaptureDelegate captureOutput:didOutputSampleBuffer:fromConnection:] in opencv2(cap_avfoundation.o)
CvCaptureFile::retrieveFramePixelBuffer() in opencv2(cap_avfoundation.o)
"_CMSampleBufferInvalidate", referenced from:
CvCaptureFile::retrieveFramePixelBuffer() in opencv2(cap_avfoundation.o)
"_CMTimeGetSeconds", referenced from:
-[KTImagesMovieTransfer nativeTransferMovie:toImagesAtPath:] in KTImagesMovieTransfer.o
This is caused by a missing CoreMedia.framework; add it to the project.
2. Extracting frames from a video
The OpenCV version is concise: grab frames in a while loop and save each one to disk:
- (NSError *)opencvTransferMovie:(NSString *)movie toImagesAtPath:(NSString *)imagesPath
{
    __block NSError *returnError = nil;
    dispatch_async(KTImagesMovieTransferQueue(), ^{
        CvCapture *pCapture = cvCaptureFromFile(movie.UTF8String);
        // cvGetCaptureProperty only reads the frame count from the file's header,
        // so the value can be wrong:
        // NSUInteger totalFrameCount = cvGetCaptureProperty(pCapture, CV_CAP_PROP_FRAME_COUNT);
        // Instead, walk the file twice: first to count the frames, then to save them.
        NSUInteger totalFrameCount = 0;
        while (cvQueryFrame(pCapture)) {
            totalFrameCount++;
        }
        if (pCapture) {
            cvReleaseCapture(&pCapture);
        }
        pCapture = cvCaptureFromFile(movie.UTF8String);
        NSUInteger index = 0;
        IplImage *pGrabImg = NULL;
        while ((pGrabImg = cvQueryFrame(pCapture))) {
            NSString *imagePath = [imagesPath stringByAppendingPathComponent:[NSString stringWithFormat:@"%lu.jpg", index]];
            cvSaveImage(imagePath.UTF8String, pGrabImg);
            dispatch_async(dispatch_get_main_queue(), ^{
                if ([self.delegate respondsToSelector:@selector(transfer:didTransferedAtIndex:totalFrameCount:)]) {
                    [self.delegate transfer:self didTransferedAtIndex:index totalFrameCount:totalFrameCount];
                }
            });
            index++;
        }
        if (pCapture) {
            cvReleaseCapture(&pCapture);
        }
        if (index == totalFrameCount) {
            dispatch_async(dispatch_get_main_queue(), ^{
                if ([self.delegate respondsToSelector:@selector(transfer:didFinishedWithError:)]) {
                    [self.delegate transfer:self didFinishedWithError:nil];
                }
            });
        } else {
            returnError = [self errorWithErrorCode:KTTransferOpencvWrongFrameCountError object:nil];
            [self sendToMainThreadError:returnError];
        }
    });
    return returnError;
}
Note that OpenCV does have a function for reading video properties, cvGetCaptureProperty, but the frame count it reports is often wrong: OpenCV only reads it from the file's header, which may not match the number of frames that actually decode. That is why the code above walks the file twice: the first pass counts the frames, the second grabs each frame and saves it to disk. Also note that the IplImage returned by cvQueryFrame must not be released by the caller; a single cvReleaseCapture(&pCapture) at the end is all that is needed.
3. Encoding frames into a video
OpenCV's image-to-video conversion is equally simple:
- (NSError *)opencvTransferImageFiles:(NSArray<NSString *> *)imageFiles toMovie:(NSString *)movie
{
    __block NSError *returnError = nil;
    UIImage *firstImage = [UIImage imageWithContentsOfFile:[imageFiles firstObject]];
    if (!firstImage) {
        returnError = [self errorWithErrorCode:KTTransferReadImageError object:[imageFiles firstObject]];
        [self sendToMainThreadError:returnError];
        return returnError;
    }
    CvSize size = cvSize(firstImage.size.width, firstImage.size.height);
    dispatch_async(KTImagesMovieTransferQueue(), ^{
        // OpenCV does not support H.264 out of the box (it can be enabled by other
        // means), so an MPEG-4 (DIVX) file is written here instead.
        CvVideoWriter *pWriter = cvCreateVideoWriter(movie.UTF8String, CV_FOURCC('D', 'I', 'V', 'X'), (double)kKTImagesMovieTransferFPS, size);
        for (NSUInteger ii = 0; ii < imageFiles.count; ++ii) {
            NSString *imageFile = [imageFiles objectAtIndex:ii];
            IplImage *pImage = cvLoadImage(imageFile.UTF8String);
            if (pImage) {
                cvWriteFrame(pWriter, pImage);
                cvReleaseImage(&pImage);
                dispatch_async(dispatch_get_main_queue(), ^{
                    if ([self.delegate respondsToSelector:@selector(transfer:didTransferedAtIndex:totalFrameCount:)]) {
                        [self.delegate transfer:self didTransferedAtIndex:ii totalFrameCount:imageFiles.count];
                    }
                });
            } else {
                returnError = [self errorWithErrorCode:KTTransferReadImageError object:imageFile];
                [self sendToMainThreadError:returnError];
                cvReleaseVideoWriter(&pWriter);  // don't leak the writer on the error path
                return;
            }
        }
        cvReleaseVideoWriter(&pWriter);
        dispatch_async(dispatch_get_main_queue(), ^{
            if ([self.delegate respondsToSelector:@selector(transfer:didFinishedWithError:)]) {
                [self.delegate transfer:self didFinishedWithError:nil];
            }
        });
    });
    return returnError;
}
First create a CvVideoWriter, then write frames into it one at a time. The CvVideoWriter initialization
CvVideoWriter *pWriter = cvCreateVideoWriter(movie.UTF8String, CV_FOURCC('D', 'I', 'V', 'X'), (double)kKTImagesMovieTransferFPS, size);
is in fact very similar to the way the native approach configures its writerInput: both specify a file name, a format, and a frame size.
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: [NSNumber numberWithInt:size.width], AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
3. FFmpeg
1. Building and configuring FFmpeg
FFmpeg is a little more involved; follow this tutorial.
After adding the compiled FFmpeg to the project, besides setting the header search paths and linking the libraries as the tutorial describes, you may hit the following error:
Undefined symbols for architecture arm64:
"av_register_all()", referenced from:
-[KTImagesMovieTransfer ffmpegTransferMovie:toImagesAtPath:] in KTImagesMovieTransfer.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
This happens because C function names get mangled when compiled as C++, so the linker cannot resolve them against the C symbols in the library. Include the headers like this (the same fix as when writing C++ with VS on Windows): extern "C" makes the symbols in the wrapped headers compile with C linkage. The .mm suffix (and therefore C++ compilation) was only needed earlier for OpenCV; if the FFmpeg conversion methods lived in a separate file from the OpenCV ones, this problem would not come up.
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}
To be continued...