iOS app – AR Diffusion Museum

An iOS app that generates images using Stable Diffusion and displays them in AR.

You can generate images by specifying any prompt (text) and display them on a wall in AR.

  • macOS 13.1 or newer, Xcode 14.1 or newer
  • iPhone 12+ / iOS 16.2+, iPad Pro with M1/M2 / iPadOS 16.2+

You can run the app on the mobile devices listed above. You can also run it on a Mac by building it as a "Designed for iPad" app.

This Xcode project uses the Apple/ml-stable-diffusion Swift Package.

  • Apple/ml-stable-diffusion Swift Package/Library
  • SwiftUI, ARKit, RealityKit

This project does not contain the Core ML models of Stable Diffusion v2 (SD2). You need to create them by converting the PyTorch SD2 models with Apple's converter tools. Instructions for converting the models can be found in Apple's ml-stable-diffusion repository on GitHub.

A README in another GitHub repository explains how to add the Stable Diffusion Core ML models to your Xcode project. Please refer to it.
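
As a rough illustration, creating the pipeline from the Core ML resources bundled with the app could look like the sketch below. The resource folder name "CoreMLModels" and the compute-unit choice are assumptions made for this example, and the initializer arguments may differ between versions of the ml-stable-diffusion package.

```swift
import CoreML
import StableDiffusion  // Apple/ml-stable-diffusion Swift Package

// A minimal sketch, assuming the converted .mlmodelc files were added to the
// app bundle in a folder named "CoreMLModels" (hypothetical name).
func makePipeline() throws -> StableDiffusionPipeline {
    guard let resourceURL = Bundle.main.url(forResource: "CoreMLModels",
                                            withExtension: nil) else {
        fatalError("Stable Diffusion Core ML resources not found in the app bundle.")
    }

    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine  // or .cpuAndGPU, depending on the device

    // Exact initializer arguments depend on the package version.
    return try StableDiffusionPipeline(resourcesAt: resourceURL,
                                       configuration: config,
                                       disableSafety: false)
}
```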

Features

  1. image generation using Stable Diffusion v2 on device
  2. showing intermediate images step by step during generation
  3. saving generated images to the Photo Library (see the sketch after this list)
  4. displaying generated images on a wall in AR
  5. automatic switching of displayed images at regular intervals
  6. automatic enlargement according to viewing distance (large projections on outdoor walls)
  7. built-in sample images
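
As a rough sketch of how features 1–3 map onto the library's API: the generateImages parameters and the Progress property names below follow the ml-stable-diffusion package at the time of writing and may differ in other versions.

```swift
import UIKit
import StableDiffusion

// A minimal sketch: generate one image, report progress every step, and save
// the final result to the Photo Library. Saving requires the
// NSPhotoLibraryAddUsageDescription key in Info.plist.
func generateAndSave(with pipeline: StableDiffusionPipeline, prompt: String) throws {
    let images = try pipeline.generateImages(prompt: prompt,
                                             imageCount: 1,
                                             stepCount: 25) { progress in
        print("step \(progress.step) / \(progress.stepCount)")

        // Intermediate images for the step-by-step display (feature 2).
        // Decoding every step is slow, so a real app may do this less often.
        if let preview = progress.currentImages.compactMap({ $0 }).first {
            _ = preview  // e.g. publish this image to the UI here
        }
        return true  // return false to cancel generation
    }

    if let cgImage = images.compactMap({ $0 }).first {
        // Save the final image to the Photo Library (feature 3).
        UIImageWriteToSavedPhotosAlbum(UIImage(cgImage: cgImage), nil, nil, nil)
    }
}
```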

UI

This project provides a minimal UI. Feel free to extend it as you like.

Consideration

MPS internal error

  • Currently, using the Core ML Stable Diffusion library and RealityKit APIs such as ModelEntity.load(name:) together often causes MPS internal errors.
  • As a workaround, the 3D model of the picture frame is currently replaced with a simple one (see the sketch below).
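
For reference, a simple generated plane can stand in for a loaded 3D model when placing the picture on a wall. This is only a minimal sketch of that approach, not the app's actual implementation; the 0.6 m picture size and the function name are arbitrary, and vertical plane detection must be enabled in the AR session.

```swift
import RealityKit
import UIKit

// A minimal sketch: show a generated image on a detected vertical plane (a wall)
// using a simple generated plane mesh instead of ModelEntity.load(name:).
func placePicture(_ cgImage: CGImage, in arView: ARView) throws {
    // Turn the generated CGImage into a RealityKit texture.
    let texture = try TextureResource.generate(from: cgImage,
                                                options: .init(semantic: .color))

    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))

    // A flat plane in the X-Y plane acts as the "picture" (0.6 m x 0.6 m here).
    let picture = ModelEntity(mesh: .generatePlane(width: 0.6, height: 0.6),
                              materials: [material])

    // Anchor the picture to a detected vertical plane.
    let anchor = AnchorEntity(plane: .vertical)
    anchor.addChild(picture)
    arView.scene.addAnchor(anchor)
}
```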

In action on iPad: image generation => AR display.

References

  • Apple/ml-stable-diffusion: https://github.com/apple/ml-stable-diffusion

MIT License
