This package is based on https://github.com/BeauNouvelle/FaceAware, which is no longer maintained. It uses Apple's Vision framework instead of Core Image; in a comparison it performed significantly faster, and given the trend toward on-device ML it should keep improving with each new chip generation.
In most cases we can use AspectFill to fit an image to the bounds of a UIImageView without stretching or leaving whitespace. With photos of people, however, faces are often cropped out when they are not perfectly centered.
This is where VisionFaceAware comes in.
It will analyze an image, either the one in the view's `image` property or one you set using one of the built-in functions, and focus in on any faces it can find within.
The most common use is with avatars.
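A minimal sketch of the avatar use case, assuming the standard UIKit setup and the `focusOnFaces` property described below (the asset name is hypothetical):

```swift
import UIKit

// A square avatar view; AspectFill plus clipping is the usual combination.
let avatarView = UIImageView(frame: CGRect(x: 0, y: 0, width: 80, height: 80))
avatarView.contentMode = .scaleAspectFill
avatarView.clipsToBounds = true

// Set the image first, then ask the view to focus on any faces it finds.
avatarView.image = UIImage(named: "profilePhoto") // hypothetical asset name
avatarView.focusOnFaces = true
```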
Swift Package Manager
- In Xcode, choose File ▸ Add Packages… and enter https://github.com/gentique/VisionFaceAware in the search field.
- Set the Dependency Rule to “Up to Next Major Version” with “1.0.0”.
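If your project uses a `Package.swift` manifest directly, the same dependency can be declared there. This is a sketch; the product name `VisionFaceAware` is assumed to match the repository name:

```swift
// swift-tools-version:5.5
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        // Same URL and version rule as the Xcode steps above.
        .package(url: "https://github.com/gentique/VisionFaceAware", from: "1.0.0"),
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: ["VisionFaceAware"]
        ),
    ]
)
```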
There are a few ways to get your image views focusing in on faces within images.
This is the easiest method and doesn’t require writing any code.
The extension makes use of `@IBInspectable`, so you can turn on `focusOnFaces` from within Interface Builder. Note that you won't actually see the extension working until you run your project.
You can also set it in code:

```swift
someImageView.focusOnFaces = true
```
Be sure to set this after setting your image. If no image is present when this is called, there will be no faces to focus on.
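For example (the asset name is hypothetical):

```swift
// Wrong: focusOnFaces is set before any image exists, so there is nothing to focus on.
someImageView.focusOnFaces = true
someImageView.image = UIImage(named: "teamPhoto")

// Right: set the image first, then turn on face focusing.
someImageView.image = UIImage(named: "teamPhoto")
someImageView.focusOnFaces = true
```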
It also features a debug mode that draws red squares around any detected faces within an image. To enable it, set the `debug` property to `true`:

```swift
someImageView.debug = true
```
You can also set this flag within interface builder.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.