Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic


Partial Occlusion Material
I am looking for a material that functions the same way OcclusionMaterial does, except that it only partially occludes whatever is behind it. One way I thought of doing this was to change the opacity of the entity covered in OcclusionMaterial, but that did not change anything. Please let me know if this is possible.
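For reference, a minimal sketch of the attempt described above, assuming RealityKit on a platform where OpacityComponent is available; it reproduces the all-or-nothing behaviour rather than solving it:

    import RealityKit

    // An entity whose material hides whatever renders behind it.
    let occluder = ModelEntity(
        mesh: .generateSphere(radius: 0.1),
        materials: [OcclusionMaterial()]
    )

    // Fading the occluder has no visible effect: OcclusionMaterial only
    // writes depth and has no color or opacity to blend.
    occluder.components.set(OpacityComponent(opacity: 0.5))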
2 replies · 1 boost · 158 views · Apr ’25
Tracking multiple ImageAnchors simultaneously on visionOS
Using the example code posted here: https://developer.apple.com/documentation/visionOS/tracking-images-in-3d-space I can register multiple ReferenceImages with an ImageTrackingProvider, but only one updates at a time: to get real-time updates, I can have only one ImageAnchor in my field of view. Is it possible to track multiple ImageAnchors at the same time in the same field of view, i.e., to have several ImageAnchors tracked and their entities updated to the anchors' transforms in the same frame/moment on the Apple Vision Pro?
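For context, a condensed sketch of the setup from that sample (the image-group name here is hypothetical):

    import ARKit

    let provider = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "TargetImages")
    )
    let session = ARKitSession()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // Each update carries a single ImageAnchor; the question is whether
        // several anchors can receive updates within the same frame.
        print(update.anchor.id, update.anchor.originFromAnchorTransform)
    }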
2 replies · 1 boost · 264 views · Nov ’25
ARKit camera transform orientation vector doesn't match physical device heading (despite `.gravityAndHeading`)
Hi all, I'm working on an ARKit-based iOS app where I need to accurately determine the direction the device is facing in order to localize objects in the real world. I'm using:

    let config = ARWorldTrackingConfiguration()
    config.worldAlignment = .gravityAndHeading

so I would expect the world alignment to behave as documented on the gravityAndHeading page. The AR session is started after verifying that CLLocationManager.headingAccuracy <= 20, and the compass appears to be calibrated. However, I'm seeing a major inconsistency.

When the rear camera is physically pointed toward true north, I would expect:

    cameraTransform.columns.2.z ≈ -1  // i.e. ARKit's -Z pointing north

But instead, I consistently see:

    cameraTransform.columns.2.z ≈ +0.97  // implies the camera is facing south

Meanwhile, the translation vector behaves as expected: as I physically move north, cameraTransform.columns.3.z becomes more negative, matching the world's +Z = south convention.

For example, say the device is in landscapeRight (landscapeLeft for UIDeviceOrientation), the rear camera is pointing toward true north, and I start moving toward true north. I get something like this:

    Camera Transform = simd_float4x4([
        [ 0.98446155,  -0.030119859,  0.172998,    0.0],
        [ 0.023979114,  0.9990097,    0.037477385, 0.0],
        [-0.17395553,  -0.032746706,  0.98420894,  0.0],
        [ 0.024039675, -0.037087332, -0.22780673,  0.99999994]
    ])

As you can see, cameraTransform.columns.2.z is positive even though the rear camera points toward true north, while cameraTransform.columns.3.z is correctly negative as the device moves toward true north.

So here is my question: why is cameraTransform.columns.2.z positive when the rear camera is physically facing north? I've read the documentation and tested with different heading accuracies and AR session resets, but I keep running into this orientation mismatch. Any clarity would be deeply appreciated. Thanks in advance!
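One convention worth noting, which may explain the numbers above: ARKit's camera looks down its local -Z axis, so columns.2 is the camera's backward axis in world space, and the world-space view direction is its negation. A small sketch (helper name hypothetical):

    import simd

    // The camera's world-space viewing direction is the negated third
    // column of its transform; with .gravityAndHeading, a result near
    // (0, 0, -1) corresponds to facing true north.
    func cameraForward(_ transform: simd_float4x4) -> simd_float3 {
        let z = transform.columns.2
        return -simd_float3(z.x, z.y, z.z)
    }

Under that convention, columns.2.z ≈ +0.97 with the rear camera facing north is the expected reading rather than a mismatch.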
1 reply · 0 boosts · 137 views · Jun ’25
Enterprise API with Education Account
Hello, I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro. I am a graduate student at a university, and I have two questions: if I want to use passthrough in screen capture (in visionOS), do I have to join the Apple Developer Enterprise Program to get the Enterprise API? And can I buy the Apple Developer Enterprise Program (Enterprise API) with my university account? Have any of you been able to do this? Thank you.
2 replies · 1 boost · 265 views · Jul ’25
AR session fails with "Required sensor failed"
The AR-based app I am working on is experiencing an issue. Sometimes the AR session fails with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) with the following error:

    Error Domain=com.apple.arkit.error Code=102 "Required sensor failed."
    NSLocalizedFailureReason="A sensor failed to deliver the required input."
    NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings."

The underlying error points to the CoreMotion framework:

    Domain=CMErrorDomain Code=102 "(null)"

Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services. For context, ARWorldTrackingConfiguration.worldAlignment is set to .gravity. The thing is, that switch is already ON when I experience this issue. I have also noticed that this issue happens far more often on the iPhone 16e than on any other device. Has anyone had a similar experience? I am looking for a way to prevent this error from happening (ideally) or to handle it in a way that does not affect the user. Any help is appreciated.
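Until the root cause is clear, one way to contain it in code; a sketch, not a guaranteed recovery, since the underlying CoreMotion failure may persist:

    import ARKit

    func session(_ session: ARSession, didFailWithError error: Error) {
        guard let arError = error as? ARError, arError.code == .sensorFailed else { return }
        // Rerun the configuration once; resetting tracking discards the
        // session state that depended on the failed sensor.
        let config = ARWorldTrackingConfiguration()
        config.worldAlignment = .gravity
        session.run(config, options: [.resetTracking, .removeExistingAnchors])
    }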
0 replies · 1 boost · 245 views · Aug ’25
visionOS plane anchor rotation and wall direction are inconsistent
I have a problem with wall plane detection using visionOS/ARKit: I am using an ARKitSession with a PlaneDetectionProvider to detect .wall planes in a visionOS space. I recorded the position and rotation of the first detected plane, but found that the rotation value depends on the direction the user is facing when the space starts. That is, even when the plane lies on the same wall, the rotation quaternion comes out different. I would like to obtain the true orientation of the wall no matter from which direction the user starts scanning, so that virtual content can be accurately aligned with the wall. I have tried using anchor.originFromAnchorTransform and Transform.rotation directly, but the rotation value is still affected by the user's initial orientation. In addition, I would like to know whether the user's initial orientation also affects the position information. If so, please provide a solution. Thank you!
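If it helps frame the question, one commonly suggested approach is to derive the wall's yaw from its world-space normal rather than using the anchor's rotation directly. A sketch, assuming the anchor's local +Y axis is the plane normal (as with ARPlaneAnchor on iOS; verify against PlaneAnchor.Geometry):

    import Foundation
    import simd

    // Hypothetical helper: a yaw-only rotation derived from the wall's
    // world-space normal, so two scans of the same wall agree.
    func wallYawRotation(originFromAnchor: simd_float4x4) -> simd_quatf {
        let n = originFromAnchor.columns.1                   // anchor +Y in world space
        let flat = simd_normalize(simd_float3(n.x, 0, n.z))  // project onto the floor plane
        let yaw = atan2(flat.x, flat.z)                      // heading of the normal
        return simd_quatf(angle: yaw, axis: [0, 1, 0])
    }

Note that the session's world origin is itself defined by the user's starting pose, so positions and rotations are only directly comparable within a single session unless you re-anchor against something persistent.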
1 reply · 0 boosts · 539 views · Sep ’25
RealityView doesn't free up memory after disappearing
Basically, take just the Xcode 26 AR App template, with ContentView as the detail end of a NavigationStack. On opening the app, it uses < 20 MB of memory. Tapping Open AR, memory usage goes up to ~700 MB for the AR scene. Tapping back, memory stays at ~700 MB. Checking with the Debug memory graph, I can still see all the RealityKit classes in memory, like ARView, ARRenderView, and ARSessionManager. Here's the sample app to illustrate the issue. PS: To keep memory pressure on the system low, there should be a way of freeing all the memory AR uses, for apps that only occasionally show AR scenes.
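For what it's worth, a teardown sketch of the kind usually attempted for UIKit-backed AR views (assumes access to the underlying ARView; per the report above, the RealityKit allocations reportedly remain resident regardless):

    import RealityKit

    func tearDownAR(_ arView: ARView) {
        arView.session.pause()            // stop the camera and tracking
        arView.scene.anchors.removeAll()  // drop all scene content
        arView.removeFromSuperview()      // detach from the view hierarchy
    }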
0 replies · 1 boost · 172 views · Sep ’25
ARSkeleton3D modelTransform always returns nil
I use ARKit for motion tracking: I get the skeleton joint coordinates and use them for animation. I didn't make any changes to the code, but after updating iOS from 18 to 26, modelTransform now always returns nil. https://developer.apple.com/documentation/arkit/arskeleton3d/modeltransform(for:) For example:

    bodyAnchor.skeleton.modelTransform(for: .init(rawValue: "head_joint"))

where bodyAnchor is an ARBodyAnchor. I see the default skeleton on the screen, but I can no longer get the coordinates out of it. I'm using the example from Apple's WWDC presentation: https://developer.apple.com/documentation/arkit/capturing-body-motion-in-3d Are there any changes in the API, or is this just a bug?
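As a cross-check while this is broken, the same transform can be read from the flat jointModelTransforms array, which bypasses modelTransform(for:); a sketch:

    import ARKit

    func headTransform(of bodyAnchor: ARBodyAnchor) -> simd_float4x4? {
        let skeleton = bodyAnchor.skeleton
        // Look the joint up by name in the skeleton definition, then index
        // into the parallel array of model-space joint transforms.
        guard let index = skeleton.definition.jointNames.firstIndex(of: "head_joint") else {
            return nil
        }
        return skeleton.jointModelTransforms[index]
    }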
6 replies · 0 boosts · 877 views · 5d
ARKit Body Tracking not detecting ARBodyAnchor on iOS 26.x (FB15128723)
Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max). The issue reproduces in custom projects and in Apple’s official sample “Capturing Body Motion in 3D”. The AR session runs normally, but the delegate callback

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor])

never yields an ARBodyAnchor with valid joint transforms. All joints return nil when calling

    body.skeleton.modelTransform(for: jointName)

resulting in 0 valid joints per frame.
Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Frameworks: ARKit + RealityKit
• Configuration used:

    config.worldAlignment = .gravityAndHeading
    config.isAutoFocusEnabled = true
    config.environmentTexturing = .none
    session.run(config)

Also tested with and without frameSemantics = .bodyDetection.
Expected Behavior
An ARBodyAnchor should be detected, and body.skeleton should contain ~89 valid joints with continuous updates.
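For what it's worth, a self-contained repro sketch of the configuration described above (assumes a device that supports body tracking):

    import ARKit

    final class BodyTrackingDelegate: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let body as ARBodyAnchor in anchors {
                // On iOS 26.0/26.1 this reportedly never yields valid joints.
                print("Body anchor at", body.transform.columns.3)
            }
        }
    }

    let session = ARSession()
    let delegate = BodyTrackingDelegate()
    session.delegate = delegate
    if ARBodyTrackingConfiguration.isSupported {
        session.run(ARBodyTrackingConfiguration())
    }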
4 replies · 1 boost · 1.2k views · Dec ’25
The AccessoryAnchor transform does not match any of the Accessory.LocationName options.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to attach an AnchorEntity to the controller using RealityKit. However, none of the three Accessory.LocationName options, which are used to define the AnchorEntity target, seems to match the position on the controller reported by ARKit. The attached picture shows two transforms:
RealityKit: using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit: using originFromAnchorTransform from AccessoryTrackingProvider.
They are not aligned at the same point. As for the other Accessory.LocationName options, .aim is located at the tip of the controller, and .grip is the same position as .gripSurface but with a different orientation. I am wondering why there is no Accessory.LocationName option that actually matches the transform captured by ARKit.
3 replies · 0 boosts · 1.4k views · Jan ’26
Technical Inquiry regarding iPhone LiDAR Specifications and ARKit Data Integrity
Hardware Specifications
Regarding the LiDAR scanner in the iPhone 13/14/15/16/17 Pro series, could you please provide the following technical details for academic verification:
Point cloud density / resolution: the effective resolution of the depth map.
Sampling frequency: the sensor's refresh rate.
Accuracy metrics: official tolerance levels for depth accuracy relative to distance (specifically within the 0.5 m to 2 m range).
Data Acquisition Methodology
For a scientific thesis requiring high data integrity: does Apple recommend a custom ARKit implementation over third-party applications (e.g., Polycam) for accessing raw depth data? I need to confirm whether third-party apps typically apply smoothing or post-processing that would obscure the sensor's native performance, which must be avoided for my error analysis.
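On the raw-data question: ARKit itself exposes the LiDAR depth map per frame. The .sceneDepth frame semantic delivers the unfiltered per-frame map, while .smoothedSceneDepth applies temporal smoothing, which is the distinction relevant to the post-processing concern above. A minimal sketch (the 256×192 dimensions are typical of current LiDAR iPhones, not an official specification):

    import ARKit

    final class DepthReader: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            let config = ARWorldTrackingConfiguration()
            if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
                config.frameSemantics.insert(.sceneDepth)  // raw per-frame LiDAR depth
            }
            session.delegate = self
            session.run(config)
        }

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            guard let depth = frame.sceneDepth else { return }
            let depthMap = depth.depthMap         // Float32 depth buffer, typically 256×192
            let confidence = depth.confidenceMap  // per-pixel confidence (low/medium/high)
            _ = (depthMap, confidence)
        }
    }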
2 replies · 0 boosts · 472 views · Jan ’26
Access UltraWideCamera when ARSession is running
ARSession provides a video stream from the wide-angle camera. If ARSession uses the ultra-wide camera at the same time, it could provide a video stream from that camera too; otherwise, an AVCaptureSession using the ultra-wide camera should be allowed to run alongside it. It would be very useful if we could access different cameras while an ARSession is running. We'd like to cooperate with you on this if possible. Steps to reproduce: run an AVCaptureSession, then run an ARSession. The AVCaptureSession stops.
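A minimal sketch of those repro steps (error handling and threading elided):

    import ARKit
    import AVFoundation

    // 1. Start an ultra-wide capture session.
    let captureSession = AVCaptureSession()
    if let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back),
       let input = try? AVCaptureDeviceInput(device: device) {
        captureSession.addInput(input)
        captureSession.startRunning()
    }

    // 2. Start ARKit; the capture session above stops.
    let arSession = ARSession()
    arSession.run(ARWorldTrackingConfiguration())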
1 reply · 0 boosts · 505 views · Jan ’26
LiDAR Projector Pattern iPhone 15 Pro vs. 12 Pro – Research Project Question
Dear Apple Team, I’m a high school student (vocational upper secondary school) working on my final research project about LiDAR sensors in smartphones, specifically Apple’s iPhone implementation.
My current understanding (for context): I understand Apple’s LiDAR uses dToF with SPAD detectors: a VCSEL laser emits pulses, a DOE splits the beam into a dot pattern, and each spot’s return time is measured separately → point cloud generation.
My specific questions:
How many active projection dots does the LiDAR projector have in the iPhone 15 Pro vs. the iPhone 12 Pro?
Are the dots static, or do they shift/move over time?
How many depth measurement points does the system deliver internally (after processing)?
What is the ranging accuracy (cm-level precision) of each measurement point?
Experimental background: Using an IR night-vision camera, I counted approximately 111 dots on the 15 Pro vs. 576 dots on the 12 Pro. Do these match the internal specifications? Photos of my measurements are available if helpful.
Contact request: I would be very grateful if you could connect me with an Apple engineer or ARKit specialist who works with LiDAR technology. I would love to ask follow-up questions directly and would be happy to provide my contact details for this purpose. These specifications would be essential for my research paper.
Thank you very much in advance!
Best regards,
Max
Vocational Upper Secondary School Hans-Leipelt-Schule Donauwörth
Research Project: “LiDAR Sensor Technology in Smartphones”
6 replies · 0 boosts · 804 views · Jan ’26
Is SceneKit deprecated?
Hi everyone! I am working on an AR app and wanted to implement object occlusion, because it removes much of the drift from the object. This works great with the RealityKit sample, but I am unable to replicate the behaviour with SceneKit, because SceneKit does not offer object occlusion. Can we say SceneKit is effectively deprecated, and should we rewrite the app in RealityKit (which is obviously a big task)?
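For context, the occlusion referred to above takes only a few lines in RealityKit's ARView, and SceneKit has no equivalent switch; a sketch of the usual setup (requires a LiDAR-equipped device):

    import ARKit
    import RealityKit

    let arView = ARView(frame: .zero)
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    // Let the reconstructed scene mesh occlude virtual content.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(config)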
6 replies · 0 boosts · 2.0k views · Nov ’25
Cannot remove final WorldAnchor
I’ve been having some issues removing anchors. I can add anchors with no issue; they will be there the next time I run the scene. I can also get updates when ARKit sends them. I can remove anchors, but not all the time. The method I’m using is to call removeAnchor() on the data provider:

    worldTracking.removeAnchor(forID: uuid)
    // Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)`

This works if there is more than one anchor in a scene. When I’m down to one remaining anchor, I can remove it, and the call seems to succeed (it does not raise an error), but the next time I run the scene the removed anchor is back. This only happens when there is only one remaining anchor.

    do {
        // This always runs, but it doesn't seem to "save" the removal
        // when there is only one anchor left.
        try await worldTracking.removeAnchor(forID: uuid)
    } catch {
        // I have never seen this block fire!
        print("Failed to remove world anchor \(uuid) with error: \(error).")
    }

I posted a video on my website if you want to see it happening: https://stepinto.vision/labs/lab-051-issues-with-world-tracking/

Here is the full code. Can you see if I’m doing something wrong? Is this a bug?

    struct Lab051: View {
        @State var session = ARKitSession()
        @State var worldTracking = WorldTrackingProvider()
        @State var worldAnchorEntities: [UUID: Entity] = [:]
        @State var placement = Entity()
        @State var subject: ModelEntity = {
            let subject = ModelEntity(
                mesh: .generateSphere(radius: 0.06),
                materials: [SimpleMaterial(color: .stepRed, isMetallic: false)])
            subject.setPosition([0, 0, 0], relativeTo: nil)
            let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)])
            let input = InputTargetComponent()
            subject.components.set([collision, input])
            return subject
        }()

        var body: some View {
            RealityView { content in
                guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return }
                content.add(scene)
                if let placementEntity = scene.findEntity(named: "PlacementPreview") {
                    placement = placementEntity
                }
            } update: { content in
                for (_, entity) in worldAnchorEntities {
                    if !content.entities.contains(entity) {
                        content.add(entity)
                    }
                }
            }
            .modifier(DragGestureImproved())
            .gesture(tapGesture)
            .task {
                try! await setupAndRunWorldTracking()
            }
        }

        var tapGesture: some Gesture {
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    if value.entity.name == "PlacementPreview" {
                        // If we tapped the placement preview cube, create an anchor
                        Task {
                            let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil))
                            try await worldTracking.addAnchor(anchor)
                        }
                    } else {
                        Task {
                            // Get the UUID we stored on the entity
                            let uuid = UUID(uuidString: value.entity.name) ?? UUID()
                            do {
                                try await worldTracking.removeAnchor(forID: uuid)
                            } catch {
                                print("Failed to remove world anchor \(uuid) with error: \(error).")
                            }
                        }
                    }
                }
        }

        func setupAndRunWorldTracking() async throws {
            if WorldTrackingProvider.isSupported {
                do {
                    try await session.run([worldTracking])
                    for await update in worldTracking.anchorUpdates {
                        switch update.event {
                        case .added:
                            let subjectClone = subject.clone(recursive: true)
                            subjectClone.isEnabled = true
                            subjectClone.name = update.anchor.id.uuidString
                            subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                            worldAnchorEntities[update.anchor.id] = subjectClone
                            print("🟢 Anchor added \(update.anchor.id)")
                        case .updated:
                            guard let entity = worldAnchorEntities[update.anchor.id] else {
                                print("No entity found to update for anchor \(update.anchor.id)")
                                return
                            }
                            entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                            print("🔵 Anchor updated \(update.anchor.id)")
                        case .removed:
                            worldAnchorEntities[update.anchor.id]?.removeFromParent()
                            worldAnchorEntities.removeValue(forKey: update.anchor.id)
                            print("🔴 Anchor removed \(update.anchor.id)")
                            if let remainingAnchors = await worldTracking.allAnchors {
                                print("Remaining Anchors: \(remainingAnchors.count)")
                            }
                        }
                    }
                } catch {
                    print("ARKit session error \(error)")
                }
            }
        }
    }
1 reply · 2 boosts · 218 views · May ’25