---
version: "1.0.2"
name: realitykit
description: "Build augmented reality experiences with RealityKit and ARKit on iOS. Use when adding 3D content with RealityView, loading entities and models, placing objects via raycasting, configuring AR camera sessions, handling world tracking, scene understanding, or implementing entity interactions and gestures."
---
# RealityKit
Build AR experiences on iOS using RealityKit for rendering and ARKit for world tracking. Covers RealityView, entity management, raycasting, scene understanding, and gesture-based interactions. Targets Swift 6.3 / iOS 26+.
## Contents
- Setup
- RealityView Basics
- Loading and Creating Entities
- Anchoring and Placement
- Raycasting
- Gestures and Interaction
- Scene Understanding
- Common Mistakes
- Review Checklist
- References
## Setup

### Project Configuration

- Add `NSCameraUsageDescription` to Info.plist
- On iOS, RealityKit uses the device camera by default via `RealityViewCameraContent` (iOS 18+, macOS 15+)
- No additional capabilities are required for basic AR on iOS
### Device Requirements

AR features require devices with an A9 chip or later. Always verify support before presenting AR UI.

```swift
import ARKit

guard ARWorldTrackingConfiguration.isSupported else {
    showUnsupportedDeviceMessage()
    return
}
```
### Key Types

| Type | Platform | Role |
|---|---|---|
| `RealityView` | iOS 18+, visionOS 1+ | SwiftUI view that hosts RealityKit content |
| `RealityViewCameraContent` | iOS 18+, macOS 15+ | Content displayed through the device camera |
| `Entity` | All | Base class for all scene objects |
| `ModelEntity` | All | Entity with a visible 3D model |
| `AnchorEntity` | All | Tethers entities to a real-world anchor |
## RealityView Basics

`RealityView` is the SwiftUI entry point for RealityKit. On iOS, it provides `RealityViewCameraContent`, which renders through the device camera for AR.

```swift
import SwiftUI
import RealityKit

struct ARExperienceView: View {
    var body: some View {
        RealityView { content in
            // content is RealityViewCameraContent on iOS
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.05),
                materials: [SimpleMaterial(color: .blue, isMetallic: true)]
            )
            sphere.position = [0, 0, -0.5] // 50 cm in front of the camera
            content.add(sphere)
        }
    }
}
```
### Make and Update Pattern

Use the update closure to respond to SwiftUI state changes:

```swift
struct PlacementView: View {
    @State private var modelColor: UIColor = .red

    var body: some View {
        ZStack(alignment: .bottom) {
            RealityView { content in
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: false)]
                )
                box.name = "colorBox"
                box.position = [0, 0, -0.5]
                content.add(box)
            } update: { content in
                if let box = content.entities.first(where: { $0.name == "colorBox" }) as? ModelEntity {
                    box.model?.materials = [SimpleMaterial(color: modelColor, isMetallic: false)]
                }
            }
            Button("Change Color") {
                modelColor = modelColor == .red ? .green : .red
            }
            .padding()
        }
    }
}
```
## Loading and Creating Entities

### Loading from USDZ Files

Load 3D models asynchronously to avoid blocking the main thread:

```swift
RealityView { content in
    if let robot = try? await ModelEntity(named: "robot") {
        robot.position = [0, -0.2, -0.8]
        robot.scale = [0.01, 0.01, 0.01]
        content.add(robot)
    }
}
```
### Programmatic Mesh Generation

```swift
// Box
let box = ModelEntity(
    mesh: .generateBox(size: [0.1, 0.2, 0.1], cornerRadius: 0.005),
    materials: [SimpleMaterial(color: .gray, isMetallic: true)]
)

// Sphere
let sphere = ModelEntity(
    mesh: .generateSphere(radius: 0.05),
    materials: [SimpleMaterial(color: .blue, roughness: 0.2, isMetallic: true)]
)

// Plane
let plane = ModelEntity(
    mesh: .generatePlane(width: 0.3, depth: 0.3),
    materials: [SimpleMaterial(color: .green, isMetallic: false)]
)
```
### Adding Components

Entities use an ECS (Entity Component System) architecture. Add components to give entities behavior:

```swift
let box = ModelEntity(
    mesh: .generateBox(size: 0.1),
    materials: [SimpleMaterial(color: .red, isMetallic: false)]
)

// Make it respond to physics
box.components.set(PhysicsBodyComponent(
    massProperties: .default,
    material: .default,
    mode: .dynamic
))

// Add a collision shape for interaction
box.components.set(CollisionComponent(
    shapes: [.generateBox(size: [0.1, 0.1, 0.1])]
))

// Enable input targeting for gestures
box.components.set(InputTargetComponent())
```
## Anchoring and Placement

### AnchorEntity

Use `AnchorEntity` to anchor content to detected surfaces or world positions:

```swift
RealityView { content in
    // Anchor to a horizontal surface
    let floorAnchor = AnchorEntity(.plane(
        .horizontal,
        classification: .floor,
        minimumBounds: [0.2, 0.2]
    ))
    let model = ModelEntity(
        mesh: .generateBox(size: 0.1),
        materials: [SimpleMaterial(color: .orange, isMetallic: false)]
    )
    floorAnchor.addChild(model)
    content.add(floorAnchor)
}
```
### Anchor Targets

| Target | Description |
|---|---|
| `.plane(.horizontal, ...)` | Horizontal surfaces (floors, tables) |
| `.plane(.vertical, ...)` | Vertical surfaces (walls) |
| `.plane(.any, ...)` | Any detected plane |
| `.world(transform:)` | Fixed world-space position |
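A world target pins content to a fixed position in the session's world space, independent of any detected surface. A minimal sketch (the one-meter offset and marker geometry are illustrative):

```swift
import RealityKit

// Pin a marker 1 m in front of the session origin, regardless of planes.
let worldAnchor = AnchorEntity(world: [0, 0, -1])
let marker = ModelEntity(
    mesh: .generateSphere(radius: 0.03),
    materials: [SimpleMaterial(color: .purple, isMetallic: false)]
)
worldAnchor.addChild(marker)
// Inside a RealityView make closure: content.add(worldAnchor)
```

World anchors are useful for content that must stay put even when plane detection is unreliable, at the cost of not snapping to real surfaces.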
## Raycasting

Use `RealityViewCameraContent` to convert between SwiftUI view coordinates and RealityKit world space. Pair this with `SpatialTapGesture` to place objects where the user taps on a detected surface.
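One way to sketch this, under the assumption that no direct raycast call is available on the content object: anchor an invisible collision plane to the detected floor, target taps at it, and use the gesture's `convert` to get a placement point in the plane's space. The `placementPlane` name and the 2 m catcher size are illustrative choices, not API requirements.

```swift
import SwiftUI
import RealityKit

struct TapPlacementView: View {
    var body: some View {
        RealityView { content in
            let floorAnchor = AnchorEntity(.plane(
                .horizontal,
                classification: .floor,
                minimumBounds: [0.2, 0.2]
            ))
            // Invisible 2 m x 2 m "catcher" plane that receives taps.
            let catcher = ModelEntity(
                mesh: .generatePlane(width: 2, depth: 2),
                materials: [OcclusionMaterial()]
            )
            catcher.components.set(CollisionComponent(
                shapes: [.generateBox(size: [2, 0.001, 2])]
            ))
            catcher.components.set(InputTargetComponent())
            catcher.name = "placementPlane"
            floorAnchor.addChild(catcher)
            content.add(floorAnchor)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    guard value.entity.name == "placementPlane" else { return }
                    // Convert the tap's 3D location into the plane's space.
                    let localPoint = value.convert(value.location3D,
                                                   from: .local,
                                                   to: value.entity)
                    let marker = ModelEntity(
                        mesh: .generateSphere(radius: 0.03),
                        materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
                    )
                    marker.position = localPoint
                    value.entity.addChild(marker)
                }
        )
    }
}
```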
## Gestures and Interaction

### Drag Gesture on Entities

```swift
struct DraggableARView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(
                mesh: .generateBox(size: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: true)]
            )
            box.position = [0, 0, -0.5]
            box.components.set(CollisionComponent(
                shapes: [.generateBox(size: [0.1, 0.1, 0.1])]
            ))
            box.components.set(InputTargetComponent())
            box.name = "draggable"
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    guard let parent = entity.parent else { return }
                    entity.position = value.convert(value.location3D,
                                                    from: .local,
                                                    to: parent)
                }
        )
    }
}
```
### Tap to Select

```swift
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            let tappedEntity = value.entity
            highlightEntity(tappedEntity)
        }
)
```
## Scene Understanding

### Per-Frame Updates

Subscribe to scene update events for continuous processing:

```swift
RealityView { content in
    let entity = ModelEntity(
        mesh: .generateSphere(radius: 0.05),
        materials: [SimpleMaterial(color: .yellow, isMetallic: false)]
    )
    entity.position = [0, 0, -0.5]
    content.add(entity)

    // Bob the sphere up and down, scaled by frame time
    _ = content.subscribe(to: SceneEvents.Update.self) { event in
        let time = Float(event.deltaTime)
        entity.position.y += sin(Float(Date().timeIntervalSince1970)) * time * 0.1
    }
}
```
### visionOS Note

On visionOS, ARKit provides a different API surface with `ARKitSession`, `WorldTrackingProvider`, and `PlaneDetectionProvider`. These visionOS-specific types are not available on iOS. On iOS, RealityKit handles world tracking automatically through `RealityViewCameraContent`.
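For contrast, a minimal visionOS-only sketch of that provider-based flow (not applicable to iOS; error handling and authorization checks are elided):

```swift
import ARKit // visionOS ARKit

// visionOS-only: run plane detection through an ARKitSession.
let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])

func trackPlanes() async throws {
    try await session.run([planeDetection])
    // Stream of plane anchor changes from the system.
    for await update in planeDetection.anchorUpdates {
        switch update.event {
        case .added, .updated:
            print("Plane \(update.anchor.id): \(update.anchor.classification)")
        case .removed:
            print("Plane removed: \(update.anchor.id)")
        }
    }
}
```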
## Common Mistakes

### DON'T: Skip AR capability checks

Not all devices support AR. Showing a black camera view with no feedback confuses users.

```swift
// WRONG -- no device check
struct MyARView: View {
    var body: some View {
        RealityView { content in
            // Fails silently on unsupported devices
        }
    }
}

// CORRECT -- check support and show a fallback
struct MyARView: View {
    var body: some View {
        if ARWorldTrackingConfiguration.isSupported {
            RealityView { content in
                // AR content
            }
        } else {
            ContentUnavailableView(
                "AR Not Supported",
                systemImage: "arkit",
                description: Text("This device does not support AR.")
            )
        }
    }
}
```
### DON'T: Load heavy models synchronously

Loading large USDZ files on the main thread causes frame drops and hangs. The make closure of `RealityView` is async -- use it.

```swift
// WRONG -- synchronous load blocks the main thread
RealityView { content in
    let model = try! Entity.load(named: "large-scene")
    content.add(model)
}

// CORRECT -- async load
RealityView { content in
    if let model = try? await ModelEntity(named: "large-scene") {
        content.add(model)
    }
}
```
### DON'T: Forget collision and input target components on interactive entities

Gestures only work on entities that have both `CollisionComponent` and `InputTargetComponent`. Without them, taps and drags pass through.

```swift
// WRONG -- entity ignores gestures
let box = ModelEntity(mesh: .generateBox(size: 0.1))
content.add(box)

// CORRECT -- add collision and input components
let box = ModelEntity(
    mesh: .generateBox(size: 0.1),
    materials: [SimpleMaterial(color: .red, isMetallic: false)]
)
box.components.set(CollisionComponent(
    shapes: [.generateBox(size: [0.1, 0.1, 0.1])]
))
box.components.set(InputTargetComponent())
content.add(box)
```
### DON'T: Create new entities in the update closure

The update closure runs on every SwiftUI state change. Creating entities there duplicates content on each render pass.

```swift
// WRONG -- duplicates entities on every state change
RealityView { content in
    // empty
} update: { content in
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05))
    content.add(sphere) // Added again on every update
}

// CORRECT -- create in make, modify in update
RealityView { content in
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05))
    sphere.name = "mySphere"
    content.add(sphere)
} update: { content in
    if let sphere = content.entities.first(where: { $0.name == "mySphere" }) as? ModelEntity {
        // Modify the existing entity
        sphere.position.y = newYPosition
    }
}
```
### DON'T: Ignore camera permission

RealityKit on iOS needs camera access. If the user denies permission, the view shows a black screen with no explanation.

```swift
// WRONG -- no permission handling
RealityView { content in
    // Black screen if camera access is denied
}

// CORRECT -- check and request permission (requires AVFoundation)
struct ARContainerView: View {
    @State private var cameraAuthorized = false

    var body: some View {
        Group {
            if cameraAuthorized {
                RealityView { content in
                    // AR content
                }
            } else {
                ContentUnavailableView(
                    "Camera Access Required",
                    systemImage: "camera.fill",
                    description: Text("Enable camera in Settings to use AR.")
                )
            }
        }
        .task {
            let status = AVCaptureDevice.authorizationStatus(for: .video)
            if status == .authorized {
                cameraAuthorized = true
            } else if status == .notDetermined {
                cameraAuthorized = await AVCaptureDevice.requestAccess(for: .video)
            }
        }
    }
}
```
## Review Checklist

- [ ] `NSCameraUsageDescription` set in Info.plist
- [ ] AR device capability checked before presenting AR views
- [ ] Camera permission requested and denial handled with a fallback UI
- [ ] 3D models loaded asynchronously in the `make` closure
- [ ] Entities created in `make`, modified in `update` (not created in `update`)
- [ ] Interactive entities have both `CollisionComponent` and `InputTargetComponent`
- [ ] Collision shapes match the visual size of the entity
- [ ] `SceneEvents.Update` subscriptions used for per-frame logic (not SwiftUI timers)
- [ ] Large scenes use `ModelEntity(named:)` async loading, not `Entity.load(named:)`
- [ ] Anchor entities target appropriate surface types for the use case
- [ ] Entity names set for lookup in the `update` closure
## References

- Extended patterns (physics, animations, lighting, ECS): references/realitykit-patterns.md
- RealityKit framework
- RealityView
- RealityViewCameraContent
- Entity
- ModelEntity
- AnchorEntity
- ARKit framework
- ARKit in iOS
- ARWorldTrackingConfiguration
- Loading entities from a file