
Apple ARKit Reference Images


Apple is introducing the ARReferenceImage class in iOS 11.3 as part of its continuing effort to make Augmented Reality easier to create and more accurate. It is used to detect real-world images during a world-tracking AR session. I decided to download the beta and take this new feature of ARKit around the block to see what it can do.

What is ARReferenceImage?

ARReferenceImage is a new class in ARKit that Apple describes as "an image to be recognized in the real-world environment during a world-tracking AR session." In practice, images can be added as assets to an iOS project; when one is identified, ARKit provides an ARImageAnchor that can be used as a reference point for your AR user interface layer.
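As a minimal sketch of that setup, assuming an Xcode asset catalog AR resource group named "AR Resources" and an ARSCNView outlet named sceneView (both hypothetical names for this example), detection is enabled by handing the reference images to the session configuration:

```swift
import ARKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Load the reference images from an asset catalog group.
        // "AR Resources" is a placeholder group name for this sketch.
        guard let referenceImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }

        // Ask the world-tracking session to look for those images.
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = referenceImages
        sceneView.session.run(configuration)
    }
}
```

Any image from that set ARKit spots through the camera will then surface as an ARImageAnchor via the session's delegate callbacks.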

How well does it work?

Here are my observations of what it is currently capable of:

Real world images are preferred over computer graphics.

Taking a picture of a business card works better than using the file you sent to the printer when creating the card. Image recognition improves as the image histogram gets wider, meaning more colors and more contrast, qualities that are more common in photographs than in rendered graphics.

Don't expect much "magic"

The reference images you provide should match how you expect to see them through the camera; lighting and orientation matter. This means you will want to provide several variations of each image to look for, as in the sketch below. Trying to detect something like a "white 8x11 piece of paper" is going to be tough because of how much variety there could be in its real-world presentation.
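One way to supply those variations is to build the ARReferenceImage set programmatically from photos shot under different conditions. This is only a sketch; the image names and the 9 cm card width are assumptions for the example:

```swift
import ARKit
import UIKit

// Build a detection set from several photos of the same business card,
// each taken under different lighting. Names and physical width are
// placeholder values for this sketch.
func makeCardReferenceImages() -> Set<ARReferenceImage> {
    let variants = ["card-daylight", "card-indoor", "card-shadow"]
    var references = Set<ARReferenceImage>()

    for name in variants {
        guard let cgImage = UIImage(named: name)?.cgImage else { continue }
        // physicalWidth is the real-world width of the card in meters.
        let reference = ARReferenceImage(cgImage,
                                         orientation: .up,
                                         physicalWidth: 0.09)
        reference.name = name
        references.insert(reference)
    }
    return references
}
```

The resulting set can be assigned to the configuration's detectionImages just like an asset catalog group.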

Background Matters

Much like the point made about photographs versus rendered graphics, having something to "frame" the image you are looking for is a big win.

Images recognized will need to be flat

Anything that might skew an image will negatively affect how well it is recognized. The label on a bottle, or a sheet of paper someone is holding, is hard to detect.

Horizontal and Vertical planes matter

If the image you are searching for sits on a flat horizontal or vertical plane, you will be able to detect it much more easily, and you will also be successful in detecting it from different angles.
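Once an image is found, ARKit reports it through the delegate as an ARImageAnchor. Here is a sketch of handling it, continuing the hypothetical ViewController from earlier and assuming sceneView.delegate = self was set during setup; the translucent highlight plane is purely illustrative:

```swift
import ARKit
import SceneKit

extension ViewController: ARSCNViewDelegate {
    // Called when ARKit adds an anchor; image detections arrive
    // as ARImageAnchor instances.
    func renderer(_ renderer: SCNSceneRenderer,
                  didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }

        // Size a translucent plane to match the detected image.
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        plane.firstMaterial?.diffuse.contents =
            UIColor.green.withAlphaComponent(0.3)

        let planeNode = SCNNode(geometry: plane)
        // SCNPlane is vertical by default; rotate it to lie flat
        // on the detected image.
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
}
```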

Conclusion

This isn't going to work for just, say... identifying a logo anywhere in the real world. It may work if you are looking to detect your logo on the door of your office. But it's a start, and it will only improve. My impression is that Apple is releasing new bits of functionality to get developers used to how they are intended to be implemented, and when these utilities reach maturity, hopefully there will be broad developer adoption from which great things will be made.