When rendering an equirectangular video on a sphere using VideoMaterial and MeshResource.generateSphere(), there is a visible black seam line running vertically on the sphere. This appears to be at the UV seam where the texture coordinates wrap from 1.0 back to 0.0.
The same video file plays without any visible seam in other 360° video players on Vision Pro, so the issue is not with the video content itself.
Here is the relevant code:
private func createVideoSphere(content: RealityViewContent, player: AVPlayer) {
    let sphere = MeshResource.generateSphere(radius: 1000)
    let material = VideoMaterial(avPlayer: player)
    let entity = ModelEntity(mesh: sphere, materials: [material])
    entity.scale *= .init(x: -1, y: 1, z: 1) // Flip to render on inside
    content.add(entity)
    player.play()
}
The setup is straightforward:
- MeshResource.generateSphere(radius: 1000) generates the sphere mesh
- VideoMaterial(avPlayer:) provides the video texture
- The X scale is flipped to -1 so the texture renders on the inside of the sphere
- The video is a standard equirectangular 360° MP4 file
What I've tried:
I attempted to create a custom sphere mesh using MeshDescriptor with duplicate vertices at the UV seam (longitude 0°/360°) to ensure proper UV continuity. However, VideoMaterial did not render any video on the custom mesh (only audio played), and the app eventually crashed. It seems VideoMaterial may have specific mesh requirements.
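For reference, this is roughly the approach I tried (simplified sketch; the function name and tessellation parameters are illustrative, not my exact code):

import Foundation
import RealityKit

// Build a UV sphere whose seam column is duplicated: u runs 0.0...1.0 inclusive,
// so no triangle interpolates texture coordinates across the 1.0 -> 0.0 wrap.
// The entity still gets the same scale.x = -1 flip to be viewed from inside.
func makeSeamlessSphere(radius: Float, rings: Int = 64, segments: Int = 128) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for ring in 0...rings {
        let v = Float(ring) / Float(rings)
        let phi = v * .pi // 0 at the top pole, π at the bottom pole
        for segment in 0...segments { // inclusive: the seam vertices exist twice (u = 0 and u = 1)
            let u = Float(segment) / Float(segments)
            let theta = u * 2 * .pi
            let normal = SIMD3<Float>(sin(phi) * cos(theta), cos(phi), sin(phi) * sin(theta))
            positions.append(normal * radius)
            normals.append(normal)
            uvs.append(SIMD2<Float>(u, 1 - v))
        }
    }

    let columns = UInt32(segments + 1)
    for ring in 0..<rings {
        for segment in 0..<segments {
            let a = UInt32(ring) * columns + UInt32(segment)
            let b = a + columns
            indices.append(contentsOf: [a, b, a + 1, a + 1, b, b + 1])
        }
    }

    var descriptor = MeshDescriptor(name: "seamlessSphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.normals = MeshBuffers.Normals(normals)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}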
Questions:
Is the black seam a known limitation of MeshResource.generateSphere() when used with VideoMaterial for 360° video?
Is there a recommended way to eliminate this UV seam — for example, a texture addressing mode or a specific mesh configuration that works with VideoMaterial?
Is there an official sample project or code example for playing 360° equirectangular video in a fully immersive space on visionOS? That would be extremely helpful as a reference.
Any guidance would be greatly appreciated. Thank you!
Hi,
I would like clarification on whether the new hover effects feature introduced in visionOS 26 supports pinch gestures through the PlayStation VR2 Sense controllers.
In your sample application, I found that this was not working. Pulling the trigger on the controller while looking at the 3D object did not activate the hover effect spatial event (the object does show the highlight, though); only pinching with my fingers registers and triggers the spatial event.
I am using visionOS 26.3.
This is inconsistent with how the PSVR2 controller behaves on SwiftUI views and UIKit view elements, where a trigger press does count as a button click.
The sample I used was this one:
https://developer.apple.com/documentation/compositorservices/rendering_hover_effects_in_metal_immersive_apps
My visionOS 26.3 app displays a diorama-like scene in a RealityView in a mixed immersive space, about 1 meter square, with view attachments floating above the scene.
Each view attachment fades out after user interaction, by animating the view's opacity.
What I'm observing is that depending on the position of a view attachment relative to the scene and the camera, an unwanted cutout effect is observed (presumably because of draw order issues), as shown in the right column in the screenshots below.
YouTube video link of these sequences: https://youtu.be/oTuo0okKCkc
(19 seconds)
My question:
How does visionOS determine the view attachment draw order relative to the RealityView scene?
If I better understood how the draw order is determined, I could modify my scene to ensure that the view attachments were always drawn after the scene, fixing the unwanted cutout effect.
I've successfully used ModelSortGroupComponent to control the draw order of entities within the RealityView scene, but my understanding is that this approach cannot be used with view attachments.
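For context, this is the kind of ordering I'm already doing inside the scene (sketch; entity names are illustrative):

import RealityKit

// Entities in the same ModelSortGroup draw in ascending `order`.
let sortGroup = ModelSortGroup(depthPass: nil)
dioramaEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 0))
overlayEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 1))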
I've submitted FB22014370 about this issue.
Thank you.
I have a simple visionOS app that creates an Entity, writes it to the device, and then attempts to load it. However, when the entity file gets overwritten, the app can no longer load it correctly.
Here is my code for saving the entity.
import SwiftUI
import RealityKit
import UniformTypeIdentifiers
struct ContentView: View {
    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Save Entity") {
                Task {
                    // if let entity = await buildEntityHierarchy(from: urdfPath) {
                    let type = UTType.realityFile
                    let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                    let fileURL = documentsURL.appendingPathComponent(filename)
                    do {
                        let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                        let material = SimpleMaterial(color: .blue, isMetallic: true)
                        let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                        let entity = Entity()
                        entity.components.set(modelComponent)
                        print("Writing \(fileURL)")
                        try await entity.write(to: fileURL)
                    } catch {
                        print("Failed writing")
                    }
                }
            }
        }
        .padding()
    }
}
Every time I press "Save Entity", I see a warning similar to:
Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.
When I open the immersive space, I attempt to load the same file:
import SwiftUI
import RealityKit
import UniformTypeIdentifiers
struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel
    var body: some View {
        RealityView { content in
            guard
                let type = UTType.realityFile.preferredFilenameExtension
            else {
                return
            }
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let fileURL = documentsURL.appendingPathComponent("testing.\(type)")
            guard FileManager.default.fileExists(atPath: fileURL.path) else {
                print("❌ File does not exist at path: \(fileURL.path)")
                return
            }
            if let entity = try? await Entity(contentsOf: fileURL) {
                content.add(entity)
            }
        }
    }
}
I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The warnings that appear when the Immersive space attempts to load the new entity are:
Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)
The order of operations to make this happen:
1. Launch the app
2. Press "Save Entity" to save the entity
3. "Open Immersive Space" to view the entity
4. Press "Save Entity" to overwrite the entity
5. "Open Immersive Space" to view the entity; the asset load request fails
Alternatively:
1. Launch the app; the entity should still be saved from the last time the app ran
2. "Open Immersive Space" to view the entity
3. Press "Save Entity" to overwrite the entity
4. "Open Immersive Space" to view the entity; the asset load request fails
NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
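One workaround I'm considering (untested sketch, based only on my assumption about the cause): write to a fresh temporary URL and atomically replace the old file, instead of truncating a file RealityKit may still have open:

let tempURL = FileManager.default.temporaryDirectory
    .appendingPathComponent(UUID().uuidString)
    .appendingPathExtension(type.preferredFilenameExtension ?? "reality")
try await entity.write(to: tempURL)
// Atomically swap the new file into place rather than overwriting the existing one.
_ = try FileManager.default.replaceItemAt(fileURL, withItemAt: tempURL)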
I want to:
Run ARKit on the main rear camera, and while it's running shoot high resolution pictures on the wide camera, without disturbing the AR tracking.
Is this possible?
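For context, the closest thing I've found is ARKit's high-resolution frame capture (sketch below), but as far as I can tell it captures from the same camera the session is tracking with, not a separate wide camera, which is why I'm asking whether a second-camera capture is possible.

import ARKit

// Assumes an existing ARSession (`session`) running world tracking on iOS 16 or later.
let configuration = ARWorldTrackingConfiguration()
if let format = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    configuration.videoFormat = format
}
session.run(configuration)

// Later, while tracking continues:
session.captureHighResolutionFrame { frame, error in
    guard let frame else {
        print("High-resolution capture failed: \(String(describing: error))")
        return
    }
    // frame.capturedImage is a high-resolution CVPixelBuffer.
}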
This is a very exciting feature in the 26.4 beta, but from the documentation it seems it can only integrate with the NVIDIA CloudXR™ SDK.
I'm wondering if it's possible to use this tool to stream immersive video from Mac to Vision Pro?
When using the new RealityKit Manipulation Component on Entities, indirect input will never translate the entity - no matter what settings are applied. Direct manipulation works as expected for both translation and rotation.
Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?
visionOS 26 Beta 2
Build from macOS 26 Beta 2 and Xcode 26 Beta 2
Attached is replicable sample code, I have tried this in other projects with the same results.
var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(immersiveContentEntity, allowedInputTypes: .all, collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)])
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5
            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)
            content.add(immersiveContentEntity)
        }
    }
}
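A fallback I'm considering (sketch only, and presumably not the intended use of the component) is handling indirect translation myself with a targeted DragGesture on the RealityView, while keeping ManipulationComponent for direct interaction:

// Attached to the RealityView above; assumes the entity has InputTargetComponent and
// collision shapes (which configureEntity should already provide).
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // Convert the gesture location into the entity's parent space and move it there.
            guard let parent = value.entity.parent else { return }
            value.entity.position = value.convert(value.location3D, from: .local, to: parent)
        }
)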
My application calculates three distinct Meesus Double [x, y, z] radian values to light a sphere in RealityKit with a DirectionalLight. My understanding is that I must use a simd_quatf for each radian value to properly light the sphere in the view. The code correctly orients the sphere with the combined simd_quatf DirectionalLight in the view, but the illumination (Z-axis) fails to illuminate the sphere with the expected result, compared to the associated Meesus web page images. For the moment, I do not know how to correct the Z-axis. Curious for a suggestion ... :]
// Location values.
let theLatitude: Double = 51.13107260
let theLongitude: Double = -114.01127910
let currentDate: Date = Date()
struct TheCalculatedMoonPhaseTest_ContentView: View {
var body: some View {
VStack {
if #available(macOS 15.0, *) {
RealityView { content in
let moonSphere_Entity = Entity.createSphere(radius: 0.90, color: .black)
moonSphere_Entity.name = "MoonSphere"
moonSphere_Entity.position = SIMD3<Float>(x: 0, y: 0, z: 0)
content.add(moonSphere_Entity)
let sunLight_Entity = createDirectionalLight(latitude: theLatitude, longitude: theLongitude, date: currentDate)
content.add(sunLight_Entity)
} // End of [RealityView]
} else {
// Earlier version required.
} // End of [if #available(macOS 15.0, *)]
} // End of [VStack]
.background(Color.black)
} // End of [var body: some View]
// MARK: - 🟠🟠🟠🟠 [SET THE BACKGROUND COLOUR] 🟠🟠🟠🟠
var backgroundColor: Color = Color.init(.black)
// MARK: - 🟠🟠🟠🟠 [CREATE THE DIRECTIONAL LIGHT FOR THE SPHERE] 🟠🟠🟠🟠
func createDirectionalLight(latitude: Double, longitude: Double, date: Date) -> Entity {
let directionalLight = DirectionalLight()
directionalLight.light.color = .white
directionalLight.light.intensity = 1000000
directionalLight.shadow = DirectionalLightComponent.Shadow()
directionalLight.shadow?.maximumDistance = 5
directionalLight.shadow?.depthBias = 1
// MARK: 🟠🟠🟠🟠 Retrieve the [MEESUS MOON AGE VALUES] from the [CONSTANT FOLDER] 🟠🟠🟠🟠
let theMeesusMoonAge_LunarAgeDaysValue = 25.90567592898601
if theMeesusMoonAge_LunarAgeDaysValue >= 23.10 && theMeesusMoonAge_LunarAgeDaysValue < (29.530588853 - 1.00) {
let someCalculatedX_WestEastRadian: Float = Float(1.00)
// Identify the sphere’s DirectionalLight Tilt Angle (Y) radian value ::
// Note :: The following Tilt Angle is corrected to [Zenith] with the [MeesusCalculatedTilt_Angle] minus the [MeesusCalculatedPar_Angle].
let someCalculatedY_TiltAngleRadian: Float = Float(1.3396086)
// Identify the sphere’s DirectionalLight Illumination (Z) radian Value ::
// Note :: The Meesus calculated illumination fraction is converted to degrees, then converted to a radian value.
let someCalculatedZ_IlluminationAngleRadian: Float = Float(0.45176168630244457) // <=== 14.3800% Illumination.
// Define rotation angles in radians for X, Y, and Z axes.
let x_Radians = someCalculatedX_WestEastRadian
let y_Radians = someCalculatedY_TiltAngleRadian
let z_Radians = someCalculatedZ_IlluminationAngleRadian
// Identify and separate the quaternion [simd_quatf] for each Radian.
let q_X = simd_quatf(angle: x_Radians, axis: SIMD3<Float>(1, 0, 0))
let q_Y = simd_quatf(angle: y_Radians, axis: SIMD3<Float>(0, 1, 0))
let q_Z = simd_quatf(angle: z_Radians, axis: SIMD3<Float>(0, 0, 1))
// Apply and combine the rotations, where order matters.
let combinedRotation = q_Z * q_Y * q_X
// Identify the [Combined Rotation].
// The [MyMoonMeesus] :: [WANING CRESCENT] calculated [combinedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(0.24427173, 0.61516714, -0.13599981)) ° Radians
// Normalize the [combinedRotation].
let theNormalizesRotation = simd_normalize(combinedRotation)
// Identify the [Normalized Combined Rotation].
// The [MyMoonMeesus] :: [WANING CRESCENT] calculated [normalizedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(0.24427173, 0.61516714, -0.13599981)) ° Radians
// Assume the [theNormalizesRotation] appears reversed.
let theCorrectedRotation = theNormalizesRotation.inverse
// Identify the [Reversed Combined Rotation].
// The [MyMoonMeesus] :: [WANING CRESCENT] calculated [correctedRotation] :: simd_quatf(real: 0.73715997, imag: SIMD3<Float>(-0.24427173, -0.61516714, 0.13599981)) ° Radians
// Apply the [Corrected Rotation] to the entity.
directionalLight.transform.rotation *= theCorrectedRotation
// Add the [directionalLight] to the scene ::
let anchor = AnchorEntity()
anchor.addChild(directionalLight)
} // End of [if theMeesusMoonAge_LunarAgeDaysValue >= 23.10 && theMeesusMoonAge_LunarAgeDaysValue < (29.530588853 - 1.00)]
return directionalLight
} // End of [func createDirectionalLight(latitude: Double, longitude: Double, date: Date) -> Entity]
} // End of [struct TheCalculatedMoonPhaseTest_ContentView: View]
// MARK: 🟠🟠🟠🟠 [ENTITY HELPER EXTENSION] 🟠🟠🟠🟠
extension Entity {
static func createSphere(radius: Float, color: NSColor) -> Entity {
let mesh = MeshResource.generateSphere(radius: radius)
var material = PhysicallyBasedMaterial()
material.baseColor = .init(tint: color)
let modelComponent = ModelComponent(mesh: mesh, materials: [material])
let entity = Entity()
entity.components.set(modelComponent)
entity.components.set(Transform())
return entity
} // End of [static func createSphere(radius: Float, color: NSColor) -> Entity]
} // End of [extension Entity]
// Application Image :: Calgary
// Website Image :: timeanddate
// mooncalc.org
I am a developer working on a space journal application. During development, I encountered several crucial strategic and technical decisions, and I would like to hear the experiences of those who have gone through similar situations. Here are simplified versions of my questions.
Resource allocation: Which problem should I address first?
Design direction: In terms of interaction and UI design, how should I balance "immersion" and "usability"?
Market selection: Is it easier for a business to survive in the early stages as B2B or B2C?
Cost estimation: How can I reasonably present to my investors the development costs of this project?
In order to avoid relying solely on intuition in my decisions, I created a short questionnaire, hoping to gather more structured opinions from my colleagues. If you are also exploring visionOS, I sincerely hope you can take a few minutes to fill it out. The results are extremely important to me, and I would be more than happy to share the final summary of findings with you.
Hi, I'm trying to use AnchorEntity for horizontal surfaces.
It works when the entity is being created, but I'm looking for a way to snap this entity to the nearest surface, after translating it, for example with a DragGesture.
What would be the best way to achieve this? Using raycast, creating a new anchor, trackingMode to continuous etc.
Do I need to use ARKitSession as I want continuous tracking?
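For reference, the ARKitSession route I've been sketching looks like this (assumption, not a confirmed best practice; it keeps the latest horizontal planes and snaps the entity's height to the closest one when the drag ends):

import ARKit
import RealityKit
import simd

var planes: [UUID: PlaneAnchor] = [:]
let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])

func startPlaneTracking() async throws {
    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        if update.event == .removed {
            planes.removeValue(forKey: update.anchor.id)
        } else {
            planes[update.anchor.id] = update.anchor
        }
    }
}

// Called from the DragGesture's onEnded handler.
func snapToNearestPlane(_ entity: Entity) {
    let position = entity.position(relativeTo: nil)
    let planeOrigins = planes.values.map { Transform(matrix: $0.originFromAnchorTransform).translation }
    guard let nearest = planeOrigins.min(by: { distance($0, position) < distance($1, position) }) else { return }
    // Simplified: keep X/Z and snap the height to the plane; a full solution would also check plane extents.
    entity.setPosition(SIMD3<Float>(position.x, nearest.y, position.z), relativeTo: nil)
}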
The following sample code project does not seem to work as expected:
https://developer.apple.com/documentation/visionos/implementing-shareplay-for-immersive-spaces-in-visionos
I have tried to get this project working with a client, but while we were able to see nearby users and make FaceTime calls, the color-changing cube experience always remained a single color.
Are there step-by-step instructions that Apple has used to verify this sample code so I can try to recreate this sample code's expected behavior for both nearby participants and those in a Facetime call?
Hi, is there a way to track feet in a visionOS app in an immersive space? I want the whole body to be visible in VR, and I want to know if the user touches an object with their foot.
I’ve seen, mainly in discussions with AIs, that ARFaceTrackingConfiguration uses the same technology as Face ID and therefore should work in complete darkness. However, I haven’t been able to achieve this. Does anyone know if this is actually true?
I'm using an iPhone 16 to test, and Face ID works well in darkness.
I have already added the license and entitlement for passthrough in screen capture, but I still get a black background instead of the real world. Do you have any ideas? I've been stuck on this for a long time 😢
Hello, I'm struggling to interact with a RealityKit entity nested (or at least visually nested) inside another one.
I think I have the same issue as the one described in this StackOverflow post, but I can't manage to reproduce the solution. https://stackoverflow.com/questions/79244424/how-to-prioritize-a-specific-entity-when-collision-boxes-overlap-in-realitykit
What I'd like to achieve is to translate the red box using a DragGesture while still being able to interact with the sphere using a TapGesture. Currently, I can only do one at a time.
Does anyone know the solution?
extension CollisionGroup {
    static let parent: CollisionGroup = CollisionGroup(rawValue: 1 << 0)
    static let child: CollisionGroup = CollisionGroup(rawValue: 1 << 1)
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let boxMesh = MeshResource.generateBox(size: 0.35)
            let boxMaterial = SimpleMaterial(color: .red.withAlphaComponent(0.25), isMetallic: false)
            let boxEntity = ModelEntity(mesh: boxMesh, materials: [boxMaterial])

            let sphereMesh = MeshResource.generateSphere(radius: 0.05)
            let sphereMaterial = SimpleMaterial(color: .blue, isMetallic: false)
            let sphereEntity = ModelEntity(mesh: sphereMesh, materials: [sphereMaterial])

            content.add(sphereEntity)
            content.add(boxEntity)

            boxEntity.components.set(InputTargetComponent())
            boxEntity.components.set(
                CollisionComponent(
                    shapes: [ShapeResource.generateBox(size: SIMD3<Float>(repeating: 0.35))],
                    isStatic: true,
                    filter: CollisionFilter(
                        group: .parent,
                        mask: .parent.subtracting(.child)
                    )
                )
            )

            sphereEntity.components.set(InputTargetComponent())
            sphereEntity.components.set(HoverEffectComponent())
            sphereEntity.components.set(
                CollisionComponent(
                    shapes: [ShapeResource.generateSphere(radius: 0.05)],
                    isStatic: true,
                    filter: CollisionFilter(
                        group: .child,
                        mask: .child.subtracting(.parent)
                    )
                )
            )
        }
    }
}
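For completeness, this is roughly how I attach the gestures (sketch; in my real code the entities are stored in state so they can be referenced from the view modifiers):

// Attached to the RealityView:
.gesture(
    DragGesture()
        .targetedToEntity(boxEntity)
        .onChanged { value in
            // Translate the box; the issue is which entity receives the gesture.
            guard let parent = value.entity.parent else { return }
            value.entity.position = value.convert(value.location3D, from: .local, to: parent)
        }
)
.gesture(
    TapGesture()
        .targetedToEntity(sphereEntity)
        .onEnded { _ in
            print("Sphere tapped")
        }
)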
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator)
App Overview
App Name: Extn Browser
Bundle ID: ai.extn.browser
Purpose: A visionOS web browser that plays 360°/180° VR videos in an immersive sphere environment
Development Environment & SDK Versions
Xcode: 26.2
Swift: 6.2
visionOS Deployment Target: 26.2
Swift Concurrency: MainActor isolation enabled
The app has been released on TestFlight.
Frameworks Used
SwiftUI - UI framework
RealityKit - 3D rendering, MeshResource, ModelEntity, VideoMaterial
AVFoundation - AVPlayer, AVAudioSession
WebKit - WKWebView for browser functionality
Network - NWListener for local proxy server
Sphere Video Mechanism
The app creates an immersive 360° video experience using the following approach:
// 1. Create sphere mesh (10 meter radius for immersive viewing)
let mesh = MeshResource.generateSphere(radius: 10.0)
// 2. Create initial transparent material
var material = UnlitMaterial()
material.color = .init(tint: .clear)
// 3. Create entity and invert sphere (negative X scale)
let sphere = ModelEntity(mesh: mesh, materials: [material])
sphere.scale = SIMD3<Float>(-1, 1, 1) // Inverts normals for inside-out viewing
sphere.position = SIMD3<Float>(0, 1.5, 0) // Eye level
// 4. Create AVPlayer with video URL
let player = AVPlayer(url: videoURL)
// 5. Configure audio session for visionOS
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playback, mode: .moviePlayback, options: [.mixWithOthers])
try audioSession.setActive(true)
// 6. Create VideoMaterial and apply to sphere
let videoMaterial = VideoMaterial(avPlayer: player)
if var modelComponent = sphere.components[ModelComponent.self] {
modelComponent.materials = [videoMaterial]
sphere.components.set(modelComponent)
}
// 7. Start playback
player.play()
ImmersiveSpace Configuration
// browserApp.swift
ImmersiveSpace(id: appModel.immersiveSpaceID) {
ImmersiveView()
.environment(appModel)
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
Entitlements
<!-- browser.entitlements -->
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.network.server</key>
<true/>
Info.plist Network Configuration
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
The Issue
Behavior in Simulator: Video plays correctly on the inverted sphere surface - 360° video is visible and wraps around the user as expected.
Behavior on Physical Vision Pro: The sphere displays a black screen. No video content is visible, though the sphere entity itself is present.
Important: Not a DRM/Licensing Issue
This issue is NOT related to Digital Rights Management (DRM) or FairPlay. I have tested with:
Unlicensed raw MP4 video files (no DRM protection)
Self-hosted video content with no copy protection
Direct MP4 URLs from CDN without any licensing requirements
The same black screen behavior occurs with all unprotected video sources, ruling out DRM as the cause.
(Plain H.264 MP4, no DRM)
Screen Recording: Working in Simulator
The following screen recording demonstrates playing a 360° YouTube video in the immersive sphere on the visionOS Simulator:
https://cdn.commenda.kr/screen-001.mov
This confirms that the VideoMaterial and sphere rendering work correctly in the simulator, but the same setup shows a black screen on the physical Vision Pro device.
Observations
AVPlayer status reports .readyToPlay - The video appears to load successfully
VideoMaterial is created without errors - No exceptions thrown
Sphere entity renders - The geometry is visible (black surface)
Audio session is configured - No errors during audio session setup
Network requests succeed - The video URL is accessible from the device
Same result with local/unprotected content - DRM is not a factor
Console Logs (Device)
The logging shows:
Sphere created and added to scene
AVPlayer created with correct URL
VideoMaterial created and applied
Player status transitions to .readyToPlay
player.play() called successfully
Rate shows 1.0 (playing)
Despite all success indicators, the rendered output is black.
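A further diagnostic I could add (sketch; assuming item-level errors would surface here): inspect the AVPlayerItem itself, since format problems can show up as item errors or error-log events while player.status still reports .readyToPlay.

if let item = player.currentItem {
    print("item status:", item.status.rawValue)
    print("item error:", item.error?.localizedDescription ?? "none")
    print("presentation size:", item.presentationSize)
    if let events = item.errorLog()?.events {
        for event in events {
            print("error log:", event.errorComment ?? "unknown")
        }
    }
}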
Questions for Apple
Are there known differences in VideoMaterial behavior between the visionOS Simulator and physical Vision Pro hardware?
Does VideoMaterial(avPlayer:) require specific video codec/format requirements that differ on device? (The test video is a standard H.264 MP4)
Is there a required Metal capability or GPU feature for VideoMaterial that may not be available in certain contexts on device?
Does the immersion style (.mixed) affect VideoMaterial rendering on hardware?
Are there additional entitlements required for video texture rendering in RealityKit on physical hardware?
Attempted Solutions
Configured AVAudioSession with .playback category
Added delay before player.play() to ensure material is applied
Verified sphere scale inversion (-1, 1, 1)
Tested multiple video URLs (including raw, unlicensed MP4 files)
Confirmed network connectivity on device
Ruled out DRM/FairPlay issues by testing unprotected content
Environment Details
Device: Apple Vision Pro
visionOS Version: 26.2
Xcode Version: 26.2
macOS Version: Darwin 25.2.0
I first started using the SwiftUI pushWindow API in visionOS 26.2, and I've reported several bugs I discovered, listed below.
Under certain circumstances, pushed window relationships may break, and this behavior affects all other apps, not just the app that caused the problem, until the next device reboot. In other cases, the system may crash and restart.
(FB21287011) When a window presented with pushWindow is dismissed, its parent window reappears in the wrong location
(FB21294645) Pinning a pushed window to a wall breaks pushWindow for all other apps on the system
(FB21594646) pushWindow interacts poorly with the window bar close app option
(FB21652261) If a window locked to a wall calls pushWindow, the original window becomes unlocked
(FB21652271) If a window locked in place calls pushWindow and the pushed window is closed, the system freezes
(FB21828413) pushWindow, UIApplication.open, and a dismissed immersive space result in multiple failures that require a device reboot
(FB21840747) visionOS randomly foregrounds a backgrounded immersive space app with a pushed window's parent window visible instead of the pushed window
(FB21864652) When a running app is selected in the visionOS home view, windows presented with pushWindow spontaneously close
(FB21873482) Pushed windows use the fixed scaling behavior instead of the dynamic scaling behavior
I'm posting the issues here in case this information is helpful to other developers. I'd also like to hear about other pushWindow issues developers have encountered, so I can watch out for them.
Questions:
I've discovered that some of the issues above can be partially worked around by applying the defaultLaunchBehavior and restorationBehavior scene modifiers to suppress window restoration and locking, which pushWindow appears to interact poorly with (see the sketch after these questions). Are there other recommended workarounds?
I've observed that the Photos and Settings apps, which predate the pushWindow API, are not affected by the issues I reported. Are there other more reliable ways I could achieve the same behavior as pushWindow without relying on that API?
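For reference, the partial workaround mentioned in the first question looks roughly like this (scene ID and view names are illustrative):

WindowGroup(id: "PushedDetail") {
    DetailView()
}
// Suppress restoration and automatic relaunch for the pushed window's scene,
// which avoids some of the broken-relationship cases listed above.
.restorationBehavior(.disabled)
.defaultLaunchBehavior(.suppressed)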
I'd appreciate any guidance Apple engineers could provide. Thank you.
Hello,
I am experiencing an issue where lighting is disabled within a Portal Component specifically when using Progressive or Full Immersion styles in visionOS 26.2.
[Steps to Reproduce]
Create a scene in Reality Composer Pro with custom lighting (Directional Light, IBL, etc.) and load it as an Entity.
Add a PortalComponent to this Entity.
Observe the Entity in an ImmersiveSpace under different Immersion Styles.
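For reference, a minimal sketch of this setup (I'm assuming the standard WorldComponent + PortalMaterial pattern here; the scene name and bundle are illustrative):

// Load the Reality Composer Pro scene that contains the custom lighting.
let litScene = try await Entity(named: "LitScene", in: realityKitContentBundle)

// Put the lit scene in its own world so the portal can show it.
let world = Entity()
world.components.set(WorldComponent())
world.addChild(litScene)

// A plane that acts as the portal surface into that world.
let portal = Entity()
portal.components.set(ModelComponent(
    mesh: .generatePlane(width: 1.0, height: 1.0),
    materials: [PortalMaterial()]
))
portal.components.set(PortalComponent(target: world))

content.add(world)
content.add(portal)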
[Observed Behavior]
Mixed Immersion: Lighting works as expected.
Progressive / Full Immersion: Lighting is disabled, causing the content to render incorrectly.
[Additional Context]
Without PortalComponent: The same Reality Composer Pro scene renders lighting correctly across all Immersion Styles (Mixed, Progressive, and Full).
Regression: In applications built with Xcode 16.4 / visionOS 2.5, lighting inside the Portal functioned correctly. This issue appears to have emerged with Xcode 26.2 and visionOS 26.2.
[Questions]
Is the disabling of lighting inside Portals during Full/Progressive Immersion an intended architectural change or optimization in visionOS 26.2?
Are there any workarounds, such as specific PortalComponent configurations or new API flags, to re-enable lighting in these immersion modes?
Thank you.
I'm developing a visionOS panorama viewer app where I need to implement an auto-hiding floating menu in immersive space. The menu should:
Show for 3 seconds when entering immersive mode
Auto-hide after 3 seconds
Reappear when user taps anywhere (using SpatialTapGesture)
Respond to gaze + pinch interaction on its buttons
The Problem:
When I add .windowStyle(.plain) to achieve transparent window background for the auto-hide effect, all buttons in the menu become completely unresponsive to gaze + pinch interaction. The buttons only respond to direct finger touch (poking).
Without .windowStyle(.plain): Buttons work correctly with gaze + pinch, but I cannot achieve transparent window background for hiding.
With .windowStyle(.plain): Window can be transparent, but buttons lose gaze + pinch interaction.
Code:
App.swift:
@main
struct MyApp: App {
    @StateObject private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(model)
        }
        .defaultSize(width: 900, height: 700)
        .windowResizability(.contentSize)
        .windowStyle(.plain) // <-- This causes the interaction issue

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environmentObject(model)
        }
    }
}
ContentView.swift (simplified):
struct ContentView: View {
    @EnvironmentObject var model: AppModel
    @State private var isMenuVisible: Bool = true

    var body: some View {
        VStack {
            if model.isImmersiveViewActive {
                if isMenuVisible {
                    // This menu's buttons don't respond to gaze+pinch
                    immersiveControlMenu
                }
            } else {
                mainMenuButtons
            }
        }
        .glassBackgroundEffect()
    }

    private var immersiveControlMenu: some View {
        HStack {
            Button("Exit") {
                exitImmersiveSpace()
            }
            .buttonStyle(.bordered) // Also tried .plain, same issue
        }
        .padding()
        .glassBackgroundEffect()
    }
}
ImmersiveView.swift:
struct ImmersiveView: View {
    @EnvironmentObject var model: AppModel

    var body: some View {
        RealityView { content in
            // Panorama sphere
            let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
            content.add(sphere)

            // Tap detector for menu toggle
            let tapDetector = Entity()
            tapDetector.components.set(CollisionComponent(shapes: [.generateSphere(radius: 900)]))
            tapDetector.components.set(InputTargetComponent())
            content.add(tapDetector)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    model.shouldShowMenu = true
                }
        )
    }
}
Environment:
Xcode 26.2
visionOS 26.3
Vision Pro device
Questions:
Is .windowStyle(.plain) expected to affect button interaction behavior?
What is the recommended approach to achieve a transparent/hidden window in immersive mode while maintaining button interactivity?
Is there an alternative to .windowStyle(.plain) for hiding window chrome in visionOS?
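For the third question, one alternative I've been experimenting with (sketch; I'm not sure it's the recommended approach) is hosting the menu as a RealityView attachment inside the immersive space instead of in a separate window, so there is no window chrome to hide at all. The menu view and its visibility state would move into ImmersiveView:

RealityView { content, attachments in
    // Panorama sphere setup as above...
    if let menu = attachments.entity(for: "menu") {
        menu.position = SIMD3<Float>(0, 1.3, -1.5) // Roughly eye level, in front of the user
        content.add(menu)
    }
} attachments: {
    Attachment(id: "menu") {
        immersiveControlMenu
            .opacity(isMenuVisible ? 1 : 0)
    }
}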
Thank you for any guidance!
Hi everyone,
I’m working with RealityKit on visionOS and I’m seeing unexpected behavior when the user long-presses the Digital Crown, which recenters the world.
Observed behavior:
When the world is recentered via long-pressing the Crown, the models remain visually in the correct place (as expected).
However, if I query the model’s position or transform immediately after recentering (e.g. entity.position or similar), I still get the old values from before recenter.
As soon as I interact with the model using a gesture (drag/rotate/scale), the position updates and then querying it returns the correct, updated values.
So effectively:
Recenter happens
Visual position is correct
Programmatic position remains stale
First gesture causes the position to “snap” to the correct updated value
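Concretely, this is the kind of check I'm doing right after recentering (illustrative; both queries still reflect the pre-recenter pose until the first gesture):

// Queried immediately after the Digital Crown recenter:
let localPosition = entity.position                     // relative to its parent
let worldPosition = entity.position(relativeTo: nil)    // relative to the scene origin
print("local:", localPosition, "world:", worldPosition) // still the old values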
Questions:
Is there any event, notification, or callback that fires when the world is recentered due to a long press of the Crown button?
Is there a recommended way to get the updated world-space transform immediately after recenter, without waiting for a gesture?
Is this expected behavior due to deferred/lazy transform updates in RealityKit?
Right now it feels like recentering updates the coordinate system but doesn’t immediately commit new transform values to entities until some interaction occurs.
Any guidance or best-practice patterns for handling this would be appreciated.
Thanks!