My original inspiration for this weekend project was to capture RAW photos from my iPhone. A lot of new camera apps capture RAW, but none of them captured it in the manner I preferred.
On my Fuji cameras, I have the option to capture a JPEG alongside a RAW frame. This mode, RAW+JPEG, gives me a lot of freedom: I can quickly share the JPEG on social media, then properly process and edit the RAW version of the image when I have time. Most often I only use the JPEG, but I like having the RAW as a backup, should I desire to push a photo in post.
So this weekend experiment will uncover how to take RAW+JPEG stills on iPhone. My Fuji cameras can also apply digital filters to the JPEG. I’d like to do the same, and apply my own digital film filters.
Before we jump into code, let’s talk about the workflow: we’ll use AVCapturePhotoOutput to capture RAW and RGBA[1] frames. There are a few unexpected wrinkles, which we’ll discuss along the way. You’ll also need the code you built for the last weekend project, since we’ll build upon that infrastructure.
Don’t forget my disclaimer[2]: I’m just an ordinary citizen, hacking on my phone to take pictures. I claim no copyright, nor do I warrant the code in any way. This is probably not the best way to build an app.
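If you’re rebuilding that infrastructure from scratch, here’s a minimal sketch of attaching an AVCapturePhotoOutput to a running capture session. The function name and the idea of passing the session in are my assumptions here, not code from the previous article:

import AVFoundation

// Sketch: attach a photo output to an already-configured capture session.
func configurePhotoOutput(for session: AVCaptureSession) -> AVCapturePhotoOutput? {
    let output = AVCapturePhotoOutput()
    guard session.canAddOutput(output) else { return nil }
    session.beginConfiguration()
    session.addOutput(output)
    session.commitConfiguration()
    // RAW capture is only available on devices/formats that support it.
    guard !output.availableRawPhotoPixelFormatTypes.isEmpty else { return nil }
    return output
}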
The AVCapturePhotoOutput wants to be configured each time you attempt to capture a frame.
AVCapturePhotoOutput provides a means to capture a JPEG and a RAW frame in a single capture request. However, this mode disables image stabilization for both captures[3]. Shooting in this mode produces photos that have more motion blur, and much more noise.
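For contrast, here’s roughly what that single-request approach looks like. This is a sketch of the path we’re not taking; it assumes output is our AVCapturePhotoOutput, and in real code you’d pick the RAW format from output.availableRawPhotoPixelFormatTypes:

// Single request for RAW plus a processed JPEG (the approach we're avoiding).
// Image stabilization is unavailable for this kind of capture.
let combinedSettings = AVCapturePhotoSettings(
    rawPixelFormatType: kCVPixelFormatType_14Bayer_RGGB,
    processedFormat: [AVVideoCodecKey: AVVideoCodecJPEG])
output.capturePhoto(with: combinedSettings, delegate: self)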
To avoid this, we’re going to request two captures back-to-back, so our JPEG frame can take advantage of image stabilization, and the other processing goodies provided by iPhone’s DSP. It’s more work, and more complicated code, but it will produce better photos.
We start with a simple UI action to trigger the capture:
@IBAction func takePhoto(_ sender: AnyObject) {
    let rawFormat = kCVPixelFormatType_14Bayer_RGGB
    let processedFormat = NSNumber(value: kCVPixelFormatType_32BGRA)
    // ... take the RAW photo and the JPEG photo
}
Inside this method, we’re actually going to take two photos: (1) our RAW photo, and (2) our JPEG. Since both photos are requested simultaneously, they’ll appear to be the same frame[4].
Next we add this to our method, which requests the BGRA frame to be captured:
let settings = AVCapturePhotoSettings(format: [kCVPixelBufferPixelFormatTypeKey as String : processedFormat])
settings.isAutoStillImageStabilizationEnabled = true
output.capturePhoto(with: settings, delegate: self)
For each frame, we create a settings object of type AVCapturePhotoSettings. This data structure stores the configuration of each capture.
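The settings object carries other capture options as well. As a sketch of what else lives on AVCapturePhotoSettings (none of this is required for our project), you could enable flash or request an embedded preview image:

// Optional extras on the same settings object (not needed for this project).
settings.flashMode = .auto
if let previewFormat = settings.availablePreviewPhotoPixelFormatTypes.first {
    settings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewFormat]
}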
To request the RAW frame, we follow a similar process:
let rawSettings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
output.capturePhoto(with: rawSettings, delegate: self)
Our final task for the takePhoto(_:) method is to cache the settings information for both capture requests, so we can later combine the JPEG and RAW frame for storage in the photo library.
Create a CaptureRequest data structure to store the information for each request:
struct CaptureRequest {
    let jpegUniqueId: Int64
    let rawUniqueId: Int64
    var jpegURL: URL?
    var rawURL: URL?
}
The uniqueId fields correspond to the ID generated each time an instance of AVCapturePhotoSettings is created (remember, we create a new settings bundle each time we take a photo). You need to cache a CaptureRequest at the end of the takePhoto(_:) method:
let cr = CaptureRequest(jpegUniqueId: settings.uniqueID, rawUniqueId: rawSettings.uniqueID,
                        jpegURL: nil, rawURL: nil)
captureRequests.append(cr)
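Here, captureRequests is assumed to be a stored property on the view controller driving the capture, for example:

// Pending capture requests, matched later by their settings' unique IDs.
var captureRequests = [CaptureRequest]()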
Now that we’ve created the requests, we have to implement a pair of delegate methods to handle the RAW and BGRA frames.
Let’s start with the RAW capture delegate method:
func capture(_ captureOutput: AVCapturePhotoOutput,
             didFinishProcessingRawPhotoSampleBuffer rawSampleBuffer: CMSampleBuffer?,
             previewPhotoSampleBuffer: CMSampleBuffer?,
             resolvedSettings: AVCaptureResolvedPhotoSettings,
             bracketSettings: AVCaptureBracketedStillImageSettings?,
             error: Error?) {
    guard let sourceBuffer = rawSampleBuffer else { return }
    guard let data = AVCapturePhotoOutput.dngPhotoDataRepresentation(forRawSampleBuffer: sourceBuffer,
        previewPhotoSampleBuffer: previewPhotoSampleBuffer) else { return }
    let index = captureRequests.index {
        $0.rawUniqueId == resolvedSettings.uniqueID
    }
    guard let indexOfRequest = index else { return }
    captureRequests[indexOfRequest].writeRAW(data: data)
}
This method is self-explanatory. First, we convert the captured data into a DNG representation. Then we get the CaptureRequest object we cached in our takePhoto(_:) method, and use it to write a temporary RAW file to disk.
The writeRAW(data:) method on CaptureRequest looks like this:
mutating func writeRAW(data: Data) {
    do {
        let path = tempURL(withPathExtension: "dng")
        try data.write(to: path, options: .atomic)
        rawURL = path
    }
    catch {
        // Write to RAW file FAILED
    }
}
This method caches the RAW data in a temporary file, and keeps the path. We’ll need this path later, when we move the RAW file to the photo library after the capture is finalized.
The tempURL(withPathExtension:) method yields a temporary path with a UUID for a unique file name:
private func tempURL(withPathExtension ext: String) -> URL {
    let uuid = UUID().uuidString
    let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent(uuid)
    return url.appendingPathExtension(ext)
}
We’ll follow the same process for our JPEG, but we also need to filter it. The code is very similar to the code used to process the BGRA frames for the viewfinder in our last project.
Below is the delegate method for our BGRA frame, where we’ve already extracted the CVPixelBuffer to process:
func capture(_ captureOutput: AVCapturePhotoOutput,
             didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?,
             previewPhotoSampleBuffer: CMSampleBuffer?,
             resolvedSettings: AVCaptureResolvedPhotoSettings,
             bracketSettings: AVCaptureBracketedStillImageSettings?,
             error: Error?) {
    guard let sourceBuffer = photoSampleBuffer else { return }
    guard let pb = CMSampleBufferGetImageBuffer(sourceBuffer) else {
        print("sourceBuffer does not contain a CVPixelBuffer.")
        return
    }
    // TODO: filter image
    // TODO: convert to JPEG and write to disk
}
The code to process the BGRA image should look familiar. First, we create a CIImage and correct the orientation. Then we use our FilterManager instance to apply the selected filter.
Add the code below to our delegate method:
let or = simulatedOrientation.asCGImagePropertyOrientation()
let ci = CIImage(cvPixelBuffer: pb).applyingOrientation(or)
let filteredImage = self.filterManager.convertedImage(forSelectedFilter: ci)
let context = CIContext(options: nil)
guard let cg = context.createCGImage(filteredImage, from: filteredImage.extent) else {
    print("couldn't create a CGImage from the filter source")
    return
}
Now that we have our filtered CGImage, we compress it into a JPEG using UIImageJPEGRepresentation. We then use the CaptureRequest to persist the JPEG data.
guard let data = UIImageJPEGRepresentation(UIImage(cgImage: cg), 0.8) else {
    print("couldn't create JPEG data from the filtered source")
    return
}
let index = captureRequests.index {
    $0.jpegUniqueId == resolvedSettings.uniqueID
}
guard let indexOfRequest = index else { return }
captureRequests[indexOfRequest].writeJPEG(data: data)
The writeJPEG(data:) method is nearly identical to writeRAW(data:), except we set the jpegURL property on our CaptureRequest.
mutating func writeJPEG(data: Data) {
    do {
        let path = tempURL(withPathExtension: "jpg")
        try data.write(to: path, options: .atomic)
        jpegURL = path
    }
    catch {
        // Write to JPEG file FAILED
    }
}
Now that we’ve cached both our RAW and JPEG images locally, this final step stores them as a single entry in the device photo library. We want our images to appear as the filtered JPEG in the Photos app, but provide a RAW alternative for apps that can request a RAW version.
This gives the desired behavior: quickly share the filtered image on social media, but retain the RAW original for further image processing.
This is accomplished with a single method on our CaptureRequest object:
func moveToSharedLibrary() {
    guard let jpeg = jpegURL, let raw = rawURL else { return }
    PHPhotoLibrary.shared().performChanges({
        let creationRequest = PHAssetCreationRequest.forAsset()
        let creationOptions = PHAssetResourceCreationOptions()
        creationOptions.shouldMoveFile = true
        creationRequest.addResource(with: PHAssetResourceType.photo,
                                    fileURL: jpeg, options: creationOptions)
        creationRequest.addResource(with: PHAssetResourceType.alternatePhoto,
                                    fileURL: raw, options: creationOptions)
    },
    completionHandler: { (success: Bool, error: Error?) in
        if success {
            print("YAY! ...PROCESSED photo saved to Camera Roll")
            return
        }
        print("BOOM! ...something went wrong \(String(describing: error))")
    })
}
PHPhotoLibrary.shared() is the singleton used to store images in the photo library. We create a PHAssetCreationRequest and add two resources to it. By using the type .alternatePhoto for the RAW image, we hide it from any photo apps that don’t specifically request a RAW version of a photo.
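To illustrate, here’s a sketch of how an app could fish the RAW alternate back out later. It assumes the app has already fetched the PHAsset for our photo:

import Photos

// Find the RAW alternate resource attached to an asset, if one exists.
func rawResource(for asset: PHAsset) -> PHAssetResource? {
    return PHAssetResource.assetResources(for: asset)
        .first { $0.type == .alternatePhoto }
}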
Note: If you use a PHAssetResourceCreationOptions bundle with the shouldMoveFile property set, the creation request will automatically clean up our temporary files. It keeps things tidy.
There is a wrinkle: we don’t know which request will finish first, the JPEG or the RAW. Each time either request finishes, we have to check whether its sibling request has also completed. If it has, we can save both to the photo library.
Add this check to your CaptureRequest struct:
func isReady() -> Bool {
    return jpegURL != nil && rawURL != nil
}
This lets us know when we have both images cached (i.e., we’re ready), so we can save everything to the photo library.
We call everything in the capture(_:didFinishCaptureForResolvedSettings:error:) delegate method:
func capture(_ captureOutput: AVCapturePhotoOutput,
             didFinishCaptureForResolvedSettings resolvedSettings: AVCaptureResolvedPhotoSettings,
             error: Error?) {
    let index = captureRequests.index {
        $0.rawUniqueId == resolvedSettings.uniqueID || $0.jpegUniqueId == resolvedSettings.uniqueID
    }
    guard let indexOfRequest = index else { return }
    if captureRequests[indexOfRequest].isReady() {
        let request = captureRequests[indexOfRequest]
        request.moveToSharedLibrary()
        captureRequests.remove(at: indexOfRequest)
    }
}
The code is straightforward: we get the CaptureRequest for the frame that finished, and move it to the library if it’s ready.
This looks like a bunch of code just to store RAW files, and it is. Originally, I used AVCapturePhotoOutput’s ability to create a RAW+JPEG combined request, but I wasn’t happy with the results. Turns out iPhone’s DSP does a lot.
This completes all of the camera infrastructure needed to begin development of our own digital film. In the first article, we covered the basic filter infrastructure, and described a way to preview our filters before capture. In this article, we covered how to save our filtered JPEGs to the photo library, along with a RAW original.
Our next weekend project will dive into image processing, and describe how to create your own digital film. Stay tuned.
[1] There is an option to capture JPEG buffers; however, our filter workflow expects an uncompressed RGBA frame. It doesn’t make sense to have the phone compress a JPEG, only to immediately decompress it.
[2] Disclaimer: I claim no copyright for the following code. I’m releasing it into the public domain, and there is no warranty expressed or implied. Please refer to Apple documentation for best practices.
[3] Image stabilization is always disabled for RAW captures. Because of this, the RAW+JPEG mode also disables it. Makes sense, when you think about it, albeit still frustrating.
[4] At best, they’ll be 1/30th of a second apart. Imperceptible, unless there is a fast-moving subject and you’re comparing side-by-side.