Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic


Lock screen media controls for MusicKit / ApplicationMusicPlayer
Hi, when using ApplicationMusicPlayer from MusicKit, my app automatically gets media controls on the lock screen: play/pause, skip buttons, playback position, etc. I would like to customize these. I've tried a number of things, e.g. using MPRemoteCommandCenter, but so far without success. Does anyone know how I can customize the media controls of ApplicationMusicPlayer? Thank you.
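
For reference, a minimal sketch of the MPRemoteCommandCenter route the post mentions. This is the standard API for customizing remote controls, though whether ApplicationMusicPlayer honors it is exactly what is in question; the 15-second skip interval and handler behavior are illustrative assumptions:

    import MediaPlayer

    func configureRemoteCommands() {
        let center = MPRemoteCommandCenter.shared()

        // Replace the default next-track button with a 15-second skip.
        center.nextTrackCommand.isEnabled = false
        center.skipForwardCommand.isEnabled = true
        center.skipForwardCommand.preferredIntervals = [15]
        center.skipForwardCommand.addTarget { event in
            guard let skip = event as? MPSkipIntervalCommandEvent else {
                return .commandFailed
            }
            // Seek the player forward by skip.interval seconds here.
            return .success
        }
    }
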
Replies: 2 · Boosts: 0 · Views: 520 · Activity: Sep ’25

CoreMediaErrorDomain -42709 error
Hello, I am developing a video streaming service that uses FairPlay. Since around February 20, we have been receiving reports of CoreMediaErrorDomain -42709 errors. Unfortunately, there is no Apple documentation explaining what this error means, so we are not sure how to address or fix the issue. Most of the users who reported this error are on iOS 18.2.1 or iOS 18.3.1. Could you please advise on what we should check or how we might resolve this error?
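
Not an answer to what -42709 means, but when chasing undocumented CoreMedia errors it can help to capture the player item's error log alongside user reports; a minimal diagnostic sketch:

    import AVFoundation

    func dumpErrorLog(for item: AVPlayerItem) {
        guard let log = item.errorLog() else { return }
        for event in log.events {
            // The domain, status code, and server comment often indicate whether
            // the failure is network-, key delivery-, or parsing-related.
            print(event.errorDomain,
                  event.errorStatusCode,
                  event.errorComment ?? "-",
                  event.uri ?? "-")
        }
    }
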
Replies: 1 · Boosts: 1 · Views: 577 · Activity: Mar ’25

LockedCameraCaptureExtension and Sharing User Preferences
I have a main app that saves preferences to UserDefaults.standard. One preference the user can toggle is isRawOn:

    UserDefaults.standard.set(self.isRawOn, forKey: "isRawOn")

Now I have a LockedCameraCaptureExtension which needs to know whether that setting is on or off at launch. Also, if it's toggled within the extension, the main app should know about it on the next launch. The main app and the extension run in separate containers, and the preferences are not shared, for privacy reasons. Apple mentions using the appContext of CameraCaptureIntent, but I am not sure how the above scenario is possible through that... unless I am missing something. Apple Reference

What I have for CameraCaptureIntent:

    @available(iOS 18, *)
    struct LaunchMyAppControlIntent: CameraCaptureIntent {
        typealias AppContext = MyAppContext

        static let title: LocalizedStringResource = "LaunchMyAppControlIntent"
        static let description = IntentDescription("Capture photos with MyApp.")

        @MainActor
        func perform() async throws -> some IntentResult {
            .result()
        }
    }
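
If the CameraCaptureIntent documentation is read as intended, the app context itself is the shared storage between the app and the extension: read it via the intent type's static appContext property and write it via updateAppContext(_:). A sketch under that assumption, with MyAppContext as a Codable struct carrying the flag:

    struct MyAppContext: Codable {
        var isRawOn: Bool = false
    }

    // Call from either the app or the extension when the toggle changes.
    func persistRawSetting(_ isOn: Bool) async {
        var context = (try? await LaunchMyAppControlIntent.appContext) ?? MyAppContext()
        context.isRawOn = isOn
        try? await LaunchMyAppControlIntent.updateAppContext(context)
    }

    // Call at launch to restore the last value written by either side.
    func loadRawSetting() async -> Bool {
        (try? await LaunchMyAppControlIntent.appContext)?.isRawOn ?? false
    }
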
Replies: 1 · Boosts: 0 · Views: 561 · Activity: Nov ’25

Unable to play audio via MusicKit
Hey folks, I'm running into an odd issue with an app that previously had a working MusicKit integration. I'm using ApplicationMusicPlayer to play Apple Music albums and songs. I'm testing on a physical device, signed in to an Apple ID with a valid subscription. Apple Music via the first-party app works entirely fine on this device. Attempting to play back any content at all gives the log:

    <ICUserIdentityStoreACAccountBackend: 0x1070bf3e0> Failed to initialize primary apple account, error=Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}
    [ICUserIdentityStore] - initializing account histories with activeAccountDSID = nil, activeLockerAccountDSID = nil, timestamp = 14605951908
    [ICUserIdentityStore] Failed to fetch local store account with error: Error Domain=ICError Code=-7013 "Client is not entitled to access account store" UserInfo={NSDebugDescription=Client is not entitled to access account store}

The album artwork, track names, etc. all appear in the Control Center playback controls, but the music doesn't play. Trying to trigger playback from Control Center just skips to the next track, which doesn't play either. This exact code used to work. I have the MusicKit service selected in App Store Connect. Since this isn't entitlement-based, I'm not sure how else to check that I'm set up correctly. I've tried deleting/reinstalling the app, restarting the device, cleaning/rebuilding, and deleting DerivedData, to no avail. Any help?

Running Xcode 16.4 (16F6), testing on iOS 18.5 (22F76)
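
One diagnostic worth ruling out (a guess, not a confirmed fix for the -7013 error): confirm that MusicKit authorization is actually granted at runtime before starting playback; a minimal sketch:

    import MusicKit

    func ensureMusicAccess() async -> Bool {
        switch MusicAuthorization.currentStatus {
        case .authorized:
            return true
        case .notDetermined:
            // Prompts the user and returns the resulting status.
            return await MusicAuthorization.request() == .authorized
        default:
            return false
        }
    }
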
Replies: 0 · Boosts: 1 · Views: 171 · Activity: Jun ’25

PHPhotoLibrary.performChanges completionHandler not called when deleting assets on iOS 26
In my app, I use the API provided by the Photos framework to delete specified photos. But after upgrading to iOS 26, the delete function no longer works on some iOS devices. The API never triggers the system confirmation dialog, and the completionHandler is never called. In the iOS Photos app, deletion works correctly on the same assets, but calling the API from my app does not work.

Steps to Reproduce

Make sure the app has Full Photo Library Access, then execute the following code:

    PHPhotoLibrary.shared().performChanges({
        let assetsToBeDeleted = PHAsset.fetchAssets(withLocalIdentifiers: delUrls, options: nil)
        PHAssetChangeRequest.deleteAssets(assetsToBeDeleted)
    }, completionHandler: completionHandler)

Expected Behavior

The system should present a confirmation dialog asking the user to delete the selected photos. After the user confirms, the deletion should occur, and the completionHandler should be called with success or error.

Actual Behavior

The system delete confirmation dialog does not appear. The completionHandler is never called.

Environment

iOS versions: 26.1 / 26.0.1

It looks like an API bug. I want to check whether this is a known issue that will be fixed. Thanks
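
For what it's worth, the same deletion can be written against the async variant of performChanges, which surfaces a thrown error instead of relying on the completion handler; a sketch assuming delUrls holds the local identifiers:

    import Photos

    func deleteAssets(withIdentifiers delUrls: [String]) async {
        do {
            try await PHPhotoLibrary.shared().performChanges {
                let assets = PHAsset.fetchAssets(withLocalIdentifiers: delUrls, options: nil)
                PHAssetChangeRequest.deleteAssets(assets)
            }
            print("deletion confirmed")
        } catch {
            // On the affected iOS 26 devices this would reveal whether the
            // request fails immediately or simply never completes.
            print("deletion failed or was cancelled: \(error)")
        }
    }
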
Replies: 2 · Boosts: 1 · Views: 291 · Activity: Nov ’25

AsyncPublisher for AVPlayerItem not working
Hello, I'm trying to subscribe to AVPlayerItem status updates using Combine and its bridge to Swift Concurrency, .values. This is my sample code:

    struct ContentView: View {
        @State var player: AVPlayer?
        @State var loaded = false

        var body: some View {
            VStack {
                if let player {
                    Text("loading status: \(loaded)")
                    Spacer()
                    VideoPlayer(player: player)
                    Button("Load") {
                        Task {
                            let item = AVPlayerItem(
                                url: URL(string: "https://sample-videos.com/video321/mp4/360/big_buck_bunny_360p_5mb.mp4")!
                            )
                            player.replaceCurrentItem(with: item)
                            let publisher = player.publisher(for: \.status)
                            for await status in publisher.values {
                                print(status.rawValue)
                                if status == .readyToPlay {
                                    loaded = true
                                    break
                                }
                            }
                            print("we are out")
                        }
                    }
                } else {
                    Text("No video selected")
                }
            }
            .task {
                player = AVPlayer()
            }
        }
    }

After I click the "Load" button it prints out 0 (the initial status of .unknown) and nothing after, even when the video is fully loaded. At the same time, this works as expected (loading status is set to true):

    struct ContentView: View {
        @State var player: AVPlayer?
        @State var loaded = false
        @State var cancellable: AnyCancellable?

        var body: some View {
            VStack {
                if let player {
                    Text("loading status: \(loaded)")
                    Spacer()
                    VideoPlayer(player: player)
                    Button("Load") {
                        Task {
                            let item = AVPlayerItem(
                                url: URL(string: "https://sample-videos.com/video321/mp4/360/big_buck_bunny_360p_5mb.mp4")!
                            )
                            player.replaceCurrentItem(with: item)
                            let stream = AsyncStream { continuation in
                                cancellable = item.publisher(for: \.status)
                                    .sink {
                                        if $0 == .readyToPlay {
                                            continuation.yield($0)
                                            continuation.finish()
                                        }
                                    }
                            }
                            for await _ in stream {
                                loaded = true
                                cancellable?.cancel()
                                cancellable = nil
                                break
                            }
                        }
                    }
                } else {
                    Text("No video selected")
                }
            }
            .task {
                player = AVPlayer()
            }
        }
    }

Is this a bug or something?
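
One difference between the two snippets that may or may not explain the behavior: the first observes player.publisher(for: \.status), i.e. the AVPlayer's own status, while the working second snippet observes the item's status. A minimal sketch of the first approach retargeted at the item, as an experiment:

    import AVFoundation
    import Combine

    // Await readiness of the item rather than the player.
    func waitUntilReady(_ item: AVPlayerItem) async {
        for await status in item.publisher(for: \.status).values {
            print("item status:", status.rawValue)
            if status == .readyToPlay { break }
        }
    }
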
Replies: 0 · Boosts: 1 · Views: 192 · Activity: Jun ’25

Indicate Packet Loss With AVAudioConverter for OPUS Decoding
I'm using an AVAudioConverter object to decode an OPUS stream for VoIP. The decoding itself works well; however, whenever the stream stalls (no more audio packets are available to decode because of network instability) this can be heard as crackling or an abrupt stop in the decoded audio. OPUS can mitigate this: the C library signals packet loss by passing a null data pointer to

    int opus_decode_float(OpusDecoder *st, const unsigned char *data, opus_int32 len, float *pcm, int frame_size, int decode_fec)

(see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2). However, with AVAudioConverter in Swift I'm constructing an AVAudioCompressedBuffer like so:

    let compressedBuffer = AVAudioCompressedBuffer(
        format: VoiceEncoder.Constants.networkFormat,
        packetCapacity: 1,
        maximumPacketSize: data.count
    )
    compressedBuffer.byteLength = UInt32(data.count)
    compressedBuffer.packetCount = 1
    compressedBuffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
    data.copyBytes(
        to: compressedBuffer.data.assumingMemoryBound(to: UInt8.self),
        count: data.count
    )

where data: Data contains the raw OPUS frame to be decoded. How can I signal packet loss in this context and cause the AVAudioConverter to output PCM data whenever no more input data is available?

More context: I'm specifying the audio format like this:

    static let frameSize: UInt32 = 960
    static let sampleRate: Float64 = 48000.0
    static var networkFormatStreamDescription = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatOpus,
        mFormatFlags: 0,
        mBytesPerPacket: 0,
        mFramesPerPacket: frameSize,
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0
    )
    static let networkFormat = AVAudioFormat(
        streamDescription: &networkFormatStreamDescription
    )!

I've tried 1) setting byteLength and packetCount to zero and 2) returning nil while reporting .haveData in the AVAudioConverterInputBlock I'm using, with no success.
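
In case AVAudioConverter turns out to have no way to express packet loss, one fallback is calling the libopus C API from the linked documentation directly, where concealment is explicit. A sketch assuming libopus is linked and exposed to Swift through a hypothetical COpus module, with a decoder created for 48 kHz mono:

    import COpus  // hypothetical Swift module wrapping opus.h

    // Pass nil for a lost packet; opus then synthesizes concealment audio.
    func decode(frame: Data?,
                into pcm: UnsafeMutablePointer<Float>,
                using decoder: OpaquePointer,
                frameSize: Int32 = 960) -> Int32 {
        guard let frame else {
            return opus_decode_float(decoder, nil, 0, pcm, frameSize, 0)
        }
        return frame.withUnsafeBytes { raw in
            opus_decode_float(decoder,
                              raw.bindMemory(to: UInt8.self).baseAddress,
                              Int32(frame.count),
                              pcm, frameSize, 0)
        }
    }
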
Replies: 1 · Boosts: 1 · Views: 916 · Activity: May ’25

AVSpeechSynthesizer system voices (SLA clarification)
Hello, I am building an iOS-only, commercial app that uses AVSpeechSynthesizer with system voices, strictly using the APIs provided by Apple. Before distributing the app, I want to ensure that my current implementation does not conflict with the iOS Software License Agreement (SLA) and is aligned with Apple’s intended usage.

For a better playback experience (more accurate estimation of utterance duration and smoother skip forward/backward during playback), I currently synthesize speech using:
• AVSpeechSynthesizer.write(_:toBufferCallback:)
• Converting the received AVAudioPCMBuffer buffers into audio data
• Storing the audio inside the app sandbox
• Playing it back using AVAudioPlayer / AVAudioEngine

The cached audio is:
• Generated fully on-device using system voices
• Stored only inside the app’s private container
• Used only for internal playback controls (timeline, seek, skip ±5 seconds)
• Never shared, exported, uploaded, or exposed outside the app

The alternative approaches would be:
• Keeping the generated audio entirely in memory (RAM) for playback purposes, without writing it to the file system at any point
• Or using AVSpeechSynthesizer.speak(_:) and playing speech strictly in real time, which has a poorer user experience compared to my approach

I have reviewed the current iOS Software License Agreement: https://www.apple.com/legal/sla/docs/iOS18_iPadOS18.pdf. In particular, section (f) mentions restrictions around System Characters, Live Captions, and Personal Voice, including the following excerpt: “…use … only for your personal, non-commercial use… No other creation or use of the System Characters, Live Captions, or Personal Voice is permitted by this License, including but not limited to the use, reproduction, display, performance, recording, publishing or redistribution in a … commercial context.”

I do not see a specific reference in the SLA to system text-to-speech voices used via AVSpeechSynthesizer, and I want to be certain that temporarily caching synthesized speech for internal, non-exported playback is acceptable in a commercial app.

My question is: is caching AVSpeechSynthesizer system-voice output inside the app sandbox for internal playback acceptable, or is Apple’s recommended approach to rely only on real-time playback (speak(_:)) or strictly in-memory buffering without file storage? If this question falls outside DTS technical scope and is instead a policy or licensing matter, I would appreciate guidance on the authoritative Apple documentation or the correct Apple team/contact. Thank you.
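
For context, a minimal sketch of the caching approach described in the post (capturing write(_:toBufferCallback:) output into a file inside the sandbox); the file handling and error handling are illustrative only, and this sketch assumes one utterance per cacher instance:

    import AVFoundation

    final class UtteranceCacher {
        // Keep the synthesizer alive for the duration of synthesis.
        private let synthesizer = AVSpeechSynthesizer()
        private var file: AVAudioFile?

        func cache(_ text: String, to url: URL) {
            let utterance = AVSpeechUtterance(string: text)
            synthesizer.write(utterance) { [weak self] buffer in
                guard let self,
                      let pcm = buffer as? AVAudioPCMBuffer,
                      pcm.frameLength > 0 else { return }  // empty buffer ends the stream
                do {
                    if self.file == nil {
                        // Create the file lazily with the synthesizer's actual format.
                        self.file = try AVAudioFile(forWriting: url,
                                                    settings: pcm.format.settings)
                    }
                    try self.file?.write(from: pcm)
                } catch {
                    print("caching failed: \(error)")
                }
            }
        }
    }
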
Replies: 1 · Boosts: 1 · Views: 418 · Activity: 2w

Inquiry regarding CoreMediaErrorDomain Code=-12880 in LL-HLS playback
Hello, I am developing a custom player SDK based on AVPlayer. While testing LL-HLS streams, I intermittently encounter the following error: Error Domain=CoreMediaErrorDomain Code=-12880 Since I cannot find documentation for this specific code, could you please clarify its meaning? Specifically, I would like to know if this is a critical error that disrupts playback, or if it is just a warning that can be safely ignored. Any insights would be appreciated. Thank you.
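
Pending an official answer on -12880, one way to distinguish a fatal failure from a log-only event: a fatal error moves the AVPlayerItem to .failed and populates item.error, while transient issues only appear as error-log entries. A hedged diagnostic sketch:

    import AVFoundation

    func watchErrorLog(of item: AVPlayerItem) -> NSObjectProtocol {
        NotificationCenter.default.addObserver(
            forName: .AVPlayerItemNewErrorLogEntry, object: item, queue: .main
        ) { [weak item] _ in
            guard let item else { return }
            if let event = item.errorLog()?.events.last {
                print("log entry:", event.errorDomain, event.errorStatusCode)
            }
            // Playback has only actually broken if the item itself failed.
            if item.status == .failed {
                print("fatal:", item.error ?? "unknown error")
            }
        }
    }
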
Replies: 1 · Boosts: 0 · Views: 140 · Activity: 3w

AVPlayer HLS High Bitrate Problem on Apple TV HD (A1625) tvOS 26
Hello, we have a video streaming app with HLS VOD content, supplying 1080p and 4K versions to users. Users could watch the 1080p content before tvOS 26, but can no longer watch it after updating to tvOS 26. We have not changed anything on the HLS playlist side, nor the application version. The problem only occurs on the Apple TV HD (4th generation, A1625) running tvOS 26; there is no problem with newer Apple TV devices. Could you help us resolve this? Thanks in advance.
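
To narrow down whether the player even attempts the 1080p variant on the affected device, the item's access log records which variants were selected; a small diagnostic sketch:

    import AVFoundation

    func logSelectedVariants(of item: AVPlayerItem) {
        guard let log = item.accessLog() else { return }
        for event in log.events {
            // indicatedBitrate is the advertised bitrate of the chosen variant.
            print("variant:", event.uri ?? "-",
                  "indicated bitrate:", event.indicatedBitrate,
                  "stalls:", event.numberOfStalls)
        }
    }
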
Replies: 2 · Boosts: 1 · Views: 595 · Activity: Oct ’25

Accessing External Timecode from Blackmagic ProDock in Custom App
Hi everyone, I’m exploring using the iPhone 17 Pro with the Blackmagic ProDock in a custom capture app. The genlock functionality seems accessible via AVExternalSyncDevice and related APIs, which is great. I’m specifically curious about external timecode coming in from the ProDock: • Is there a public way to access the timecode feed in a custom app via AVFoundation or another Apple API? • If so, what is the recommended approach to read or apply that timecode during capture? • Are there any current limitations or entitlements required to access timecode from ProDock in a third-party app? I’m excited to start integrating synchronized capture in my app, and any guidance or sample patterns would be greatly appreciated. Thanks in advance! — [Artem]
Replies: 2 · Boosts: 0 · Views: 311 · Activity: Oct ’25

Apple Music API Library Playlist Images
Hi, I'm sending an API request to:

    https://api.music.apple.com/v1/me/library/playlists?limit=$limit&offset=$offset

to list all of the user's library playlists. However, the resulting objects do not contain the playlist artwork in the JSON. I've tried adding the extend and include attributes as well, but to no avail. A partial example of the response:

    {"id": PLAYLIST_ID,
     "type": "library-playlists",
     "href": "/v1/me/library/playlists/PLAYLIST_ID",
     "attributes": {
       "lastModifiedDate": "2024-09-18T20:18:24Z",
       "canEdit": true,
       "name": "Afro Party Anthems",
       "isPublic": false,
       "description": {"standard": "Definitive African party starters"},
       "hasCatalog": false,
       "dateAdded": "2022-03-10T18:30:56Z",
       "playParams": {"id": PLAYLIST_ID, "kind": "playlist", "isLibrary": true}
     },
     "relationships": {
       "catalog": {"href": "/v1/me/library/playlists/PLAYLIST_ID/catalog", "data": []}
     }}

Is there a way to get the artwork URL without sending a request for each playlist? And if not, can this be fixed?
Replies: 0 · Boosts: 1 · Views: 115 · Activity: Jun ’25

USDZ model files turn black when opened in Preview
On macOS 26.2 (Tahoe), the Preview app fails to render many USDZ models correctly. The issue appears inconsistent:
• Some USDZ files open normally in Preview
• Some USDZ files open, but the viewport turns completely black (no geometry, no material, no lighting)
• All of the same files open correctly when opened using “Open With → Xcode”
This was not the behavior on macOS 15.7.2, where the exact same USDZ files rendered consistently in Preview without failures.
Replies: 1 · Boosts: 0 · Views: 156 · Activity: Feb ’26

How Does iPhone 15+ (USB-C) Support UVC Devices? Is MFi Certification Required?
In the past, when using Lightning, many external devices had to go through MFi certification. Now that the iPhone 15 has switched from Lightning to USB-C, is MFi certification still required? Our company has developed several UVC devices, and we have confirmed that iPads can read frames from external cameras through the external device type in AVFoundation. However, this is not supported on iPhones. We are currently exploring feasible ways to enable UVC device support on iPhones. Is MFi certification the only option? If so, is the MFi certification process for USB-C the same as it was for Lightning? Does it still require purchasing an MFi chip and manufacturing specially designed USB-C cables?
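
For reference, the AVFoundation path described above (working on iPad, reportedly not on iPhone) looks roughly like this; a sketch of discovering an external UVC camera on iPadOS 17 or later:

    import AVFoundation

    // Enumerate connected external cameras; on iPhone this discovery
    // is exactly what the post reports as unsupported.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified
    )
    for device in discovery.devices {
        print("external camera:", device.localizedName)
    }
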
Replies: 2 · Boosts: 1 · Views: 269 · Activity: Mar ’25

Background Upload Extension
Hello, we are trying to use the new Background Upload Extension to improve uploads of assets (photos, Live Photos, videos) in the background in our application.

1 - We implemented the code to upload still images. We would like to do the same for Live Photos, but we are not sure how to proceed, since a Live Photo is composed of two resources that we would like to upload in the same job so that we keep the association between them. Steps: a Live Photo is captured on the device; our application is notified of new content in the photo library, and the asset is queued for upload by our application; the system calls the background upload extension, and the Live Photo is prepared for upload, but we can schedule a job for only one resource (still photo or paired video), so we cannot upload both resources in a single job.

2 - Is there a way to synchronize the application and the extension so that the application would not process the same data or execute the same requests as the extension? For example, our application works with tokens, and we would like to prevent those tokens from being consumed by the application and the extension at the same time.

3 - Is there a command in Xcode or the terminal to start or stop the extension, similar to what exists for processing tasks (https://developer.apple.com/documentation/backgroundtasks/starting-and-terminating-tasks-during-development)?
Replies: 3 · Boosts: 1 · Views: 614 · Activity: Feb ’26

is the output frame rate of a CMIOExtension rounded or capped?
I made a CMIOExtension (a virtual camera) which generates its own output, for use in our in-house software testing. I wanted to make a video source with 29.97, 30, 59.94 and 60 fps output. To this end, I created a CMIOExtensionDeviceSource which creates a CMIOExtensionDevice with one CMIOExtensionStreamSource with various stream formats contained in [CMIOExtensionStreamFormat], including one with both maxFrameDuration and minFrameDuration = CMTimeMake(value: 1000, timescale: 30000) and another with both maxFrameDuration and minFrameDuration = CMTimeMake(value: 1001, timescale: 30000). I've held off on creating the 59.94/60 fps source until this problem is resolved.

My virtual camera works and produces a signal, but when I examine its associated AVCaptureDevice in the debugger, I find:

    (lldb) po self.captureDevice?.formats[0].videoSupportedFrameRateRanges[0].maxFrameDuration
    ▿ Optional<CMTime>
      ▿ some : CMTime
        - value : 1000000
        - timescale : 30000000
        ▿ flags : CMTimeFlags
          - rawValue : 1
        - epoch : 0

I get the same value, 1000000/30000000, or exactly 30 fps, for all the formats of my AVCaptureDevice. Is there something I'm doing wrong, or do CMIOExtensionDevices always round the frame rates? I can't force CoreMediaIO to produce frames at exactly my desired frame interval, but I'd like to ensure that the average frame rate matches my desired rate. How can I do that? Frame emission is governed by a repeating DispatchSourceTimer with a repeat time specified in nanoseconds and the timer flags set to 'strict'.
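
For the emission side, a sketch of the strict repeating timer described above, with the 29.97 fps interval expressed in nanoseconds. Since 1001/30000 s is not a whole number of nanoseconds, the truncated interval alone puts the long-run average slightly off the nominal rate:

    import Dispatch

    let queue = DispatchQueue(label: "frame-emitter")
    let timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)

    // 1001/30000 s = 33_366_666.67 ns; integer division drops the fraction.
    let intervalNs = 1_001_000_000_000 / 30_000
    timer.schedule(deadline: .now(),
                   repeating: .nanoseconds(intervalNs),
                   leeway: .nanoseconds(0))
    timer.setEventHandler {
        // Emit one frame here, e.g. via the stream's send(...) call.
    }
    timer.resume()
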
Replies: 2 · Boosts: 0 · Views: 681 · Activity: Jan ’26

Control system video effects support for CMIO extension
We're distributing a virtual camera with our app that does not profit in the slightest from automatically applied system video effects both to the video going in (physical camera device) or out (virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and determining active video effects via AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX. To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I'm seeing properties like AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats I was hoping to find something in the extensions, but wasn't successful so far. Can this be done?
Replies: 2 · Boosts: 0 · Views: 291 · Activity: Nov ’25

CoreAudio server plugin gaining write access with SystemConfiguration.framework functions
Hi, our CoreAudio server plugin utilizes the SystemConfiguration.framework to store and restore specific shared system-wide settings. While our application can authenticate with the SystemConfiguration.framework to gain write access to the shared configuration settings, the CoreAudio server plugin obviously can't have any user interaction and therefore does not authenticate. Is it possible to authenticate the CoreAudio server plugin to gain write permissions? Are there any entitlements or other means that would allow this? Thanks!
Replies: 2 · Boosts: 0 · Views: 172 · Activity: Apr ’25

iPad app on macOS not asking for microphone permission
Hello, I have an iOS app that records audio and works fine on iPads and iPhones: it asks for microphone permission, and after that recording works. I installed the same app on my M3 MacBook via TestFlight, since iPad apps are supposed to run there without changes. The app starts fine, but it never asks for microphone permission, so I can't record. Do I need to do something to make this happen? (This is not Mac Catalyst; it's the arm64 iPhone binary running on macOS.) Thanks
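
One thing worth checking (a guess, not a confirmed cause): whether the app explicitly requests record permission rather than relying on an implicit prompt at first capture; a minimal sketch using AVAudioApplication on iOS 17+, falling back to AVAudioSession:

    import AVFAudio

    func requestMicAccess() async -> Bool {
        if #available(iOS 17.0, *) {
            return await AVAudioApplication.requestRecordPermission()
        } else {
            return await withCheckedContinuation { continuation in
                AVAudioSession.sharedInstance().requestRecordPermission { granted in
                    continuation.resume(returning: granted)
                }
            }
        }
    }
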
Replies: 2 · Boosts: 1 · Views: 876 · Activity: Mar ’25