This demo is named ToyCamera.
The commonly used classes, methods, and properties were introduced in the earlier post 【iOS视频捕获入门篇】.
1. Video Stream
1.1 Device and Input
As mentioned earlier, we cannot use a capture device directly; we need to wrap it in an input object. First, provide a front camera and a back camera (we use only a single back camera).
- (AVCaptureDevice *)videoFrontDevice {
    if (!_captureDevice || self.captureDevice.position != AVCaptureDevicePositionFront) {
        AVCaptureDeviceDiscoverySession *session = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInDualCamera, AVCaptureDeviceTypeBuiltInTelephotoCamera, AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionFront];
        _captureDevice = session.devices.firstObject;
    }
    return _captureDevice;
}

- (AVCaptureDevice *)videoBackDevice {
    if (!_captureDevice || self.captureDevice.position != AVCaptureDevicePositionBack) {
        AVCaptureDeviceDiscoverySession *session = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInDualCamera, AVCaptureDeviceTypeBuiltInTelephotoCamera, AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack];
        _captureDevice = session.devices.firstObject;
    }
    return _captureDevice;
}
Wrap the device in an input object:
- (void)_addCaptureInput {
    NSError *error = nil;
    AVCaptureDeviceInput *deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.videoBackDevice error:&error];
    if (!error && [self.captureSession canAddInput:deviceInput]) {
        [self.captureSession addInput:deviceInput];
    }
}
1.2 Output
Start with the video stream by defining a video data output:
@property (nonatomic) AVCaptureVideoDataOutput *captureVideoDataOutput;

_captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[self.captureVideoDataOutput setSampleBufferDelegate:self.delegate queue:self.sampleBufferQueue];
self.captureVideoDataOutput.videoSettings = @{(__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
1.3 Session
The most important piece is the session, which connects the inputs and outputs.
_captureSession = [[AVCaptureSession alloc] init];
[self.captureSession beginConfiguration];
self.captureSession.sessionPreset = AVCaptureSessionPreset1280x720;

NSError *error = nil;
AVCaptureDeviceInput *deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.videoBackDevice error:&error];
if (!error && [self.captureSession canAddInput:deviceInput]) {
    [self.captureSession addInput:deviceInput];
}
if ([self.captureSession canAddOutput:self.captureVideoDataOutput]) {
    [self.captureSession addOutput:self.captureVideoDataOutput];
}
[self.captureSession commitConfiguration];
1.4 Preview Layer
With the preparation above done, we can add a preview layer so that we can actually see the captured video.
- (AVCaptureVideoPreviewLayer *)videoPreviewLayer {
    if (!_videoPreviewLayer) {
        _videoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
        _videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
        _videoPreviewLayer.connection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
    return _videoPreviewLayer;
}
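One step the snippets above leave implicit is attaching the preview layer to a view and starting the session. A minimal sketch (the view controller context and the `sessionQueue` serial queue are assumptions, not part of the demo as shown):

```objc
// Sketch: attach the preview layer and start the session.
// `sessionQueue` is an assumed serial dispatch queue owned by the camera object.
self.videoPreviewLayer.frame = self.view.bounds;
[self.view.layer insertSublayer:self.videoPreviewLayer atIndex:0];
dispatch_async(self.sessionQueue, ^{
    // -startRunning is a blocking call, so keep it off the main thread.
    [self.captureSession startRunning];
});
```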
At this point we have a video stream we can actually see. But just seeing it is not all we want; we also need to take photos, record video, and so on. For tracking or frame processing, the video stream alone is already enough, and the frames delivered by the video data output can also be converted into images, which is one way to implement photo capture.
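As a sketch of that frame-to-image idea, the sample buffer delegate can convert each delivered frame into a UIImage (this assumes the `kCVPixelFormatType_32BGRA` setting configured earlier; the Core Image route shown here is one of several ways to do the conversion):

```objc
// Sketch: convert a delivered video frame to a UIImage via Core Image.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *context = [CIContext context];
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    // Hand `image` off for processing or saving as needed.
}
```

Note that creating a `CIContext` per frame is wasteful in practice; a real implementation would cache one.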
2. Taking Photos
2.1 Output
The input can be shared. For proper photo capture we need AVCapturePhotoOutput.
_capturePhotoOutput = [[AVCapturePhotoOutput alloc] init];
if ([self.captureSession canAddOutput:self.capturePhotoOutput]) {
    [self.captureSession addOutput:self.capturePhotoOutput];
}

AVCapturePhotoSettings *settings;
if ([self.capturePhotoOutput.availablePhotoCodecTypes containsObject:AVVideoCodecTypeJPEG]) {
    settings = [AVCapturePhotoSettings photoSettingsWithFormat:@{AVVideoCodecKey : AVVideoCodecTypeJPEG}];
}
if (settings) {
    [self.capturePhotoOutput setPreparedPhotoSettingsArray:@[settings] completionHandler:^(BOOL prepared, NSError * _Nullable error) {
        if (!prepared && error) {
            // Preparation failed; handle or log the error here.
        }
    }];
}
2.2 Getting the Photo
Take a photo with the API below. Each capture requires passing in a fresh AVCapturePhotoSettings instance.
[self.capturePhotoOutput capturePhotoWithSettings:[AVCapturePhotoSettings photoSettingsWithFormat:@{AVVideoCodecKey : AVVideoCodecTypeJPEG}] delegate:self.delegate];

- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(nullable NSError *)error {
    if (!error) {
        NSData *data = [photo fileDataRepresentation];
        UIImage *image = [UIImage imageWithData:data];
        [TCUtility saveImage:image];
    }
}
3. Recording Video
Here we record only the video picture, without audio.
Recording requires AVCaptureMovieFileOutput, whose properties and methods were introduced in 【iOS视频捕获入门篇】.
_captureMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([self.captureSession canAddOutput:self.captureMovieFileOutput]) {
    [self.captureSession addOutput:self.captureMovieFileOutput];
}
A single method can toggle between starting and stopping recording:
- (void)captureVideo {
    if (self.captureMovieFileOutput.isRecording) {
        [self.captureMovieFileOutput stopRecording];
        return; // Already recording: stop here instead of starting a new file.
    }
    NSString *urlString = [NSTemporaryDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"%.0f.mov", [[NSDate date] timeIntervalSince1970] * 1000]];
    NSURL *url = [NSURL fileURLWithPath:urlString];
    [self.captureMovieFileOutput startRecordingToOutputFileURL:url recordingDelegate:self.delegate];
}
- (void)captureOutput:(AVCaptureFileOutput *)output didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray<AVCaptureConnection *> *)connections error:(nullable NSError *)error {
    [TCUtility saveVideo:outputFileURL];
}
4. Summary
With that, a simple camera is complete. This post only showed how to use the APIs introduced earlier; for a more advanced camera, you will need to dig deeper into the media capture parts of AVFoundation.
It is all fairly straightforward. I will put the whole demo on GitHub, so anyone interested can clone it and try it out.
The final article will be 【iOS视频捕获进阶篇】, which mainly covers how to use AVCaptureMetadataOutput. With it we can implement face detection, barcode recognition, and work with the Vision framework.
Respect~