Please include the line below in follow-up emails for this request.
Case-ID: 11089799
When using AVSpeechUtterance with the voice set to Mandarin, if Siri is set to Cantonese on iOS 18, the utterance is spoken in Cantonese instead. This issue does not occur on iOS 17 or 16.
1. let utterance = AVSpeechUtterance(string: textView.text)
let voice = AVSpeechSynthesisVoice(language: "zh-CN")
utterance.voice = voice
2. In the phone settings, Siri is set to Cantonese.
Audio
Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.
Hi all,
I have been quite stumped on this behavior for a little bit now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which in turn stops all players attached to the engine.
Once an AVAudioPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and in the completion handler I check whether the player isPlaying or not. If it is playing, I assume the sound was actually played; if not, I leave it in the queue and assume that an observer of the route change or the configuration change will notice there are samples in the queue and reschedule them.
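For reference, here's a rough sketch of the kind of observer I mean (pendingBuffers and the reconnection details are placeholders, not my actual code):
import AVFoundation

final class OutputEngine {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private var pendingBuffers: [AVAudioPCMBuffer] = []  // buffers not yet confirmed as played

    init() {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: nil)

        // The engine posts this when a route change forces it to rebuild its graph.
        NotificationCenter.default.addObserver(forName: .AVAudioEngineConfigurationChange,
                                               object: engine,
                                               queue: .main) { [weak self] _ in
            self?.handleConfigurationChange()
        }
    }

    private func handleConfigurationChange() {
        // At this point the engine has stopped and the player has dropped its scheduled
        // buffers, so restart and reschedule whatever was still waiting in the queue.
        do {
            try engine.start()
            for buffer in pendingBuffers {
                player.scheduleBuffer(buffer, completionHandler: nil)
            }
            player.play()
        } catch {
            print("Failed to restart engine after configuration change: \(error)")
        }
    }
}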
Thanks for any feedback!
I’m facing a problem while trying to achieve spatial audio effects in my iOS 18 app. I have tried several approaches to get good 3D audio, but the effect never felt good enough, or it didn’t work at all.
What troubles me most is that my AirPods don’t recognize my app as one that has spatial audio (in the audio settings it shows "Spatial Audio Not Playing"), so I guess my app doesn't use its spatial audio potential.
The first approach uses AVAudioEnvironmentNode with AVAudioEngine. Changing the position of the player, as well as changing the listener’s, doesn’t seem to change anything in how the audio plays.
Here's a simplified version of how I initialize the AVAudioEngine:
import Foundation
import AVFoundation
class AudioManager: ObservableObject {
// important class variables
var audioEngine: AVAudioEngine!
var environmentNode: AVAudioEnvironmentNode!
var playerNode: AVAudioPlayerNode!
var audioFile: AVAudioFile?
...
//Sound set up
func setupAudio() {
do {
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setActive(true)
} catch {
print("Failed to configure AVAudioSession: \(error.localizedDescription)")
}
audioEngine = AVAudioEngine()
environmentNode = AVAudioEnvironmentNode()
playerNode = AVAudioPlayerNode()
audioEngine.attach(environmentNode)
audioEngine.attach(playerNode)
audioEngine.connect(playerNode, to: environmentNode, format: nil)
audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil)
environmentNode.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 0, pitch: 0, roll: 0)
environmentNode.distanceAttenuationParameters.referenceDistance = 1.0
environmentNode.distanceAttenuationParameters.maximumDistance = 100.0
environmentNode.distanceAttenuationParameters.rolloffFactor = 2.0
// example.mp3 is mono sound
guard let audioURL = Bundle.main.url(forResource: "example", withExtension: "mp3") else {
print("Audio file not found")
return
}
do {
audioFile = try AVAudioFile(forReading: audioURL)
} catch {
print("Failed to load audio file: \(error)")
}
}
...
//Playing sound
func playSpatialAudio(pan: Float ) {
guard let audioFile = audioFile else { return }
// left side
playerNode.position = AVAudio3DPoint(x: pan, y: 0, z: 0)
playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
do {
try audioEngine.start()
playerNode.play()
} catch {
print("Failed to start audio engine: \(error)")
}
...
}
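(For what it's worth, as far as I understand the environment node only spatializes mono inputs, and the source node's rendering algorithm needs to be one of the HRTF variants rather than the default equal-power panning. A minimal sketch of what I mean, not a confirmed fix:)
// Connect the player to the environment node with an explicit mono format
// and request HRTF rendering so the 3D position actually affects the output.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: audioEngine.outputNode.outputFormat(forBus: 0).sampleRate,
                               channels: 1)
audioEngine.connect(playerNode, to: environmentNode, format: monoFormat)
playerNode.renderingAlgorithm = .HRTFHQ // from AVAudio3DMixing; the default is .equalPowerPanning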
The second, more complex approach using PHASE did better. I’ve made an example app that lets the user move the audio source in 3D space. I have added reverb, and sliders that change the audio position up to 10 meters in each direction from the listener, but the audio only really seems to change left to right (x axis) - again, I think it might be because the app is not recognized as spatial.
//Crucial class Variables:
class PHASEAudioController: ObservableObject{
private var soundSourcePosition: simd_float4x4 = matrix_identity_float4x4
private var audioAsset: PHASESoundAsset!
private let phaseEngine: PHASEEngine
private let params = PHASEMixerParameters()
private var soundSource: PHASESource
private var phaseListener: PHASEListener!
private var soundEventAsset: PHASESoundEventNodeAsset?
// Initialization of PHASE
init() {
do {
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default, options: [])
try session.setActive(true)
} catch {
print("Failed to configure AVAudioSession: \(error.localizedDescription)")
}
// Init PHASE Engine
phaseEngine = PHASEEngine(updateMode: .automatic)
phaseEngine.defaultReverbPreset = .mediumHall
phaseEngine.outputSpatializationMode = .automatic //nothing helps
// Set listener position to (0,0,0) in World space
let origin: simd_float4x4 = matrix_identity_float4x4
phaseListener = PHASEListener(engine: phaseEngine)
phaseListener.transform = origin
phaseListener.automaticHeadTrackingFlags = .orientation
try! self.phaseEngine.rootObject.addChild(self.phaseListener)
do{
try self.phaseEngine.start();
}
catch {
print("Could not start PHASE engine")
}
audioAsset = loadAudioAsset()
// Create sound Source
// Sphere
soundSourcePosition.translate(z:3.0)
let sphere = MDLMesh.newEllipsoid(withRadii: vector_float3(0.1,0.1,0.1), radialSegments: 14, verticalSegments: 14, geometryType: MDLGeometryType.triangles, inwardNormals: false, hemisphere: false, allocator: nil)
let shape = PHASEShape(engine: phaseEngine, mesh: sphere)
soundSource = PHASESource(engine: phaseEngine, shapes: [shape])
soundSource.transform = soundSourcePosition
print(soundSourcePosition)
do {
try phaseEngine.rootObject.addChild(soundSource)
}
catch {
print ("Failed to add a child object to the scene.")
}
let simpleModel = PHASEGeometricSpreadingDistanceModelParameters()
simpleModel.rolloffFactor = rolloffFactor
soundPipeline.distanceModelParameters = simpleModel
let samplerNode = PHASESamplerNodeDefinition(
soundAssetIdentifier: audioAsset.identifier,
mixerDefinition: soundPipeline,
identifier: audioAsset.identifier + "_SamplerNode")
samplerNode.playbackMode = .looping
do {
soundEventAsset = try phaseEngine.assetRegistry.registerSoundEventAsset(
rootNode: samplerNode,
identifier: audioAsset.identifier + "_SoundEventAsset")
} catch {
print("Failed to register a sound event asset.")
soundEventAsset = nil
}
}
//Playing sound
func playSound(){
// Fire new sound event with currently set properties
guard let soundEventAsset else { return }
params.addSpatialMixerParameters(
identifier: soundPipeline.identifier,
source: soundSource,
listener: phaseListener)
let soundEvent = try! PHASESoundEvent(engine: phaseEngine,
assetIdentifier: soundEventAsset.identifier,
mixerParameters: params)
soundEvent.start(completion: nil)
}
...
}
Also worth mentioning: I only have a personal team account.
Hi all,
with my app ScreenFloat, you can record your screen, along with system- and microphone audio.
Those two audio feeds are recorded into separate audio tracks in order to individually remove or edit them later on.
Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly. So far, so good, but some apps, like Slack, or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one.
So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly baking together the two individual audio tracks into one, and offering that new file as the drag and drop file, so that all audio is played in the target app.
But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow.
My approach is this:
"Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession
Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession
For a quick benchmark, a 3'40'' movie, step 1 takes ~1.7 seconds, and step two adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback.
I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds then, because I assume it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).
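For context, step 1 boils down to something like this (a simplified sketch - recordingURL and mixedAudioURL are placeholders, and the audio mix and error handling are omitted):
import AVFoundation

func mixAudioTracks(of recordingURL: URL, into mixedAudioURL: URL, completion: @escaping () -> Void) {
    let asset = AVURLAsset(url: recordingURL)
    let composition = AVMutableComposition()
    let range = CMTimeRange(start: .zero, duration: asset.duration)

    // Pull both audio tracks into the composition; exporting with the AppleM4A preset
    // mixes everything down to a single audio track.
    for sourceTrack in asset.tracks(withMediaType: .audio) {
        let track = composition.addMutableTrack(withMediaType: .audio,
                                                preferredTrackID: kCMPersistentTrackID_Invalid)
        try? track?.insertTimeRange(range, of: sourceTrack, at: .zero)
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetAppleM4A) else { return }
    export.outputURL = mixedAudioURL
    export.outputFileType = .m4a
    export.exportAsynchronously {
        completion() // step 2 then muxes this m4a with the original video using the passthrough preset
    }
}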
So, my question is, is there a faster way?
The best idea I can come up with right now is, when initially recording the screen with system- and microphone audio as separate tracks, to also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step one and just ripping the two single audio tracks out of the movie and only have the video and the "hidden" track (then unmuted), but I'd still have a ~1.5 second delay there. Also, there's the processing and data overhead (basically doubling the movie's audio data).
All this would be great for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal.
I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.
I'd appreciate any ideas or pointers.
Thank you kindly,
Matthias
Hello,
I have a CarPlay navigation app and use AVSpeechSynthesizer to speak directions to the user. Everything works great in the CarPlay simulator as well as when plugged into my GMC truck. However, I found out yesterday that for one of my users with a Ford truck, the audio would cut in and out.
After much troubleshooting, I was able to replicate this on my own truck when using Bluetooth to connect to CarPlay. My user was also utilizing Bluetooth. Has anyone else experienced this? Is there a fix to the problem?
import SwiftUI
import AVFoundation
class TextToSpeechService: NSObject, ObservableObject, AVSpeechSynthesizerDelegate {
private var speechSynthesizer = AVSpeechSynthesizer()
static let shared = TextToSpeechService()
override init() {
super.init()
speechSynthesizer.delegate = self
}
func configureAudioSession() {
speechSynthesizer.delegate = self
do {
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .allowBluetooth])
} catch {
print("Failed to set audio session category: \(error.localizedDescription)")
}
}
func speak(_ text: String) {
Task(priority: .high) {
let speechUtterance = AVSpeechUtterance(string: text)
speechUtterance.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
speechSynthesizer.speak(speechUtterance)
}
}
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
Task {
stopSpeech()
try AVAudioSession.sharedInstance().setActive(false)
}
}
func stopSpeech() {
speechSynthesizer.stopSpeaking(at: .immediate)
}
}
We have an application using the PTT framework to record audio messages when the app is backgrounded. Right now we are using AVAudioRecorder for that purpose. The problem is that one specific user frequently has an issue where the recorded audio contains only silence.
I've checked almost everything I can imagine but didn't find any possible reason for the issue.
Conditions:
AVAudioRecorder uses following configuration:
[
AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
AVFormatIDKey : kAudioFormatMPEG4AAC,
AVNumberOfChannelsKey: 1,
AVSampleRateKey: 16000.0
]
The app waits for both didBeginTransmitting and didActivate audioSession from PTChannelManager (the audio session has the playback category at that moment)
The app changes the AVAudioSession category to playAndRecord
The app gets a routeChangeNotification with reason categoryChange and category = playAndRecord
There are no interruption notifications from AVAudioSession during recording
There is no error notification from AVAudioRecorder
Any idea what exactly I'm doing wrong? Is there anything else I should check?
Thanks in advance.
P.S. It looks like recording audio with AudioUnit has the same issue, but let's exclude that from the question for now, for simplicity.
Hello! I'm using AVFoundation to preview video and audio from a selected device, and I'm trying to use AVAudioEngine to preview the audio in real time, but I can't figure out how to select the input device - I can hear only my microphone in real time.
So far, I'm using AVCaptureAudioPreviewOutput to hear audio in real time, but I think it has a delay.
On iOS this works easily with AVAudioEngine, but on macOS, not so much...
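(A minimal sketch of selecting the input device via the input node's underlying audio unit, assuming the Core Audio route is acceptable - deviceID here is a placeholder AudioDeviceID for the chosen device:)
import AVFoundation
import AudioToolbox
import CoreAudio

// Sketch: point AVAudioEngine's input node at a specific input device on macOS
// by setting the HAL device on its underlying audio unit, before starting the engine.
func setInputDevice(_ deviceID: AudioDeviceID, on engine: AVAudioEngine) {
    guard let audioUnit = engine.inputNode.audioUnit else { return }
    var device = deviceID
    let status = AudioUnitSetProperty(audioUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &device,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    if status != noErr {
        print("Failed to set input device: \(status)")
    }
}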
Topic:
Media Technologies
SubTopic:
Audio
Tags:
AudioToolbox
AVAudioSession
AVAudioEngine
AVFoundation
Bug Report: ScreenCaptureKit System Audio Capture Crashes with EXC_BAD_ACCESS
Summary
When using ScreenCaptureKit to capture system audio for extended periods, the application crashes with EXC_BAD_ACCESS in Swift's error handling runtime. The crash occurs in swift_getErrorValue when trying to process an error from the SCStream delegate method didStopWithError. This appears to be a framework-level issue in ScreenCaptureKit or its underlying ReplayKit implementation.
Environment
macOS Sonoma 14.6.1
Swift 5.8
ScreenCaptureKit framework
Detailed Description
Our application captures system audio using ScreenCaptureKit's audio capture capabilities. After successfully capturing for several minutes (typically after 3-4 segments of 60-second recordings), the application crashes with an EXC_BAD_ACCESS error. The crash happens when the Swift runtime attempts to process an error in the SCStreamDelegate.stream(_:didStopWithError:) method.
The crash consistently occurs in swift_getErrorValue when attempting to access the class of what appears to be a null object. This suggests that the error being passed from the system framework to our delegate method is malformed or contains invalid memory.
Steps to Reproduce
Create an SCStream with audio capture enabled
Add audio output to the stream
Start capture and write audio data to disk
Allow the capture to run for several minutes (3-5 minutes typically triggers the issue)
The app will crash with EXC_BAD_ACCESS in swift_getErrorValue
Code Sample
func stream(_ stream: SCStream, didStopWithError error: Error) {
print("Stream stopped with error: \(error)") // Crash occurs before this line executes
}
func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
guard type == .audio, sampleBuffer.isValid else { return }
// Process audio data...
}
Expected Behavior
The error should be properly propagated to the delegate method, allowing for graceful error handling and recovery.
Actual Behavior
The application crashes with EXC_BAD_ACCESS when the Swift runtime attempts to process the error in swift_getErrorValue.
Crash Log Details
Thread #35, queue = 'com.apple.NSXPCConnection.m-user.com.apple.replayd', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
frame #0: 0x0000000194c3088c libswiftCore.dylib`swift::_swift_getClass(void const*) + 8
frame #1: 0x0000000194c30104 libswiftCore.dylib`swift_getErrorValue + 40
frame #2: 0x00000001057fba30 shadow`NewScreenCaptureService.stream(stream=0x0000600002de6700, error=Swift.Error @ 0x000000016b7b5e30) at NEW+ScreenCaptureService.swift:365:15
frame #3: 0x00000001057fc050 shadow`@objc NewScreenCaptureService.stream(_:didStopWithError:) at <compiler-generated>:0
frame #4: 0x0000000219ec5ca0 ScreenCaptureKit`-[SCStreamManager stream:didStopWithError:] + 456
frame #5: 0x00000001ca68a5cc ReplayKit`-[RPScreenRecorder stream:didStopWithError:] + 84
frame #6: 0x00000001ca696ff8 ReplayKit`-[RPDaemonProxy stream:didStopWithError:] + 224
Printing description of stream._streamQueue:
error: ObjectiveC.id:4294967281:18: note: 'id' has been explicitly marked unavailable here
public typealias id = AnyObject
^
error: /var/folders/v4/3xg1hmp93gjd8_xlzmryf_wm0000gn/T/expr23-dfa421..cpp:1:65: 'id' is unavailable in Swift: 'id' is not available in Swift; use 'Any'
Swift._DebuggerSupport.stringForPrintObject(Swift.UnsafePointer<id>(bitPattern: 0x104ae08c0)!.pointee)
^~
ObjectiveC.id:2:18: note: 'id' has been explicitly marked unavailable here
public typealias id = AnyObject
^
warning: /var/folders/v4/3xg1hmp93gjd8_xlzmryf_wm0000gn/T/expr23-dfa421..cpp:5:7: initialization of variable '$__lldb_error_result' was never used; consider replacing with assignment to '_' or removing it
var $__lldb_error_result = __lldb_tmp_error
~~~~^~~~~~~~~~~~~~~~~~~~
_
Before the crash, we observed this error message in the console:
[ERROR] *****SCStream*****RemoteAudioQueueOperationHandlerWithError:1015 Error received from the remote queue -16665
Additional Context
The issue occurs consistently after approximately 3-4 successful audio segment recordings of 60 seconds each
Commenting out custom segment rotation logic does not prevent the crash
The crash involves XPC communication with Apple's ReplayKit daemon
The error appears to be corrupted or malformed when crossing the XPC boundary
Workarounds Attempted
Added proper thread safety for all published properties using DispatchQueue.main.async
Implemented more robust error handling in the delegate methods
None of these approaches prevented the crash since it occurs at the Swift runtime level before our code executes.
Impact
This issue prevents reliable long-duration audio capture using ScreenCaptureKit.
This bug significantly limits the usefulness of ScreenCaptureKit for any application requiring continuous system audio capture for more than a few minutes.
This issue might be related to a macOS bug where the system dialog indicates that the screen is being shared, even though nothing is actually being shared. Moreover, when attempting to stop sharing, nothing happens.
The presentation "create audio drivers with DriverKit" from WWDC 2021 demonstrates how to use a dext to implement a virtual audio driver. It also says " If a virtual audio driver or device is all that is needed, the audio server plug-in driver model should continue to be used".
Indeed, in AudioDriverKit/AudioDriverKitTypes.h, there is no IOUserAudioTransportType Virtual, although CoreAudio/AudioHardwareBase.h includes kAudioDeviceTransportTypeVirtual.
For one of our products, we require virtual devices to implement a software loopback "cable". We've implemented this using the "traditional" HAL plugin, and as a proof-of-concept, also using a dext. In the dext, I tried setting the transport type to 'virt', which seems to only have the effect of changing the icon shown in Audio Midi Setup.
HAL plugins require an installer, and the installer has to kill coreaudiod in a post-install script. You have to turn off SIP to debug them. Just like AudioDriverKit drivers, they are out-of-process and run in a process not owned by the hosting app. Our HAL plugin's interface is property based; we had to write a lot of boiler-plate code to implement required properties. Writing an AudioDriverKit driver is in most respects easier - a lot of the scaffolding is implemented in the base driver, which we only alter where required. Debugging and installation is much easier.
The dext works just fine, as far as we can ascertain, just as well as a HAL plugin.
So, my question is - is the advice to use a HAL plugin for a virtual device still correct in 2025? And if so, what's the objection? We'd really prefer to ship the AudioDriverKit virtual audio device.
Does anyone know how to play a specific instrument sound when tapping a button or similar control on screen on iPhone or iPad?
I'm currently developing a music-learning app and am thinking of assigning single notes or chords to button-like frames on an on-screen keyboard or fretboard, so they sound when tapped.
Can this be done with SwiftUI code alone?
I recall there used to be a MIDI Level 1 instrument playback feature, but the current OS doesn't seem to provide the same functionality.
I'd appreciate your advice.
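A minimal sketch of one possible direction, assuming AVAudioUnitSampler with a bundled SoundFont file is acceptable ("Instrument.sf2" and the note numbers are placeholders):
import AVFoundation
import AudioToolbox

final class NotePlayer {
    private let engine = AVAudioEngine()
    private let sampler = AVAudioUnitSampler()

    init() throws {
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)

        // Load an instrument from a SoundFont bundled with the app ("Instrument.sf2" is a placeholder).
        if let url = Bundle.main.url(forResource: "Instrument", withExtension: "sf2") {
            try sampler.loadSoundBankInstrument(at: url,
                                                program: 0,
                                                bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                                bankLSB: UInt8(kAUSampler_DefaultBankLSB))
        }
        try engine.start()
    }

    // Call from a SwiftUI Button action: play a single note or a chord (MIDI note numbers).
    func play(notes: [UInt8], velocity: UInt8 = 100) {
        for note in notes { sampler.startNote(note, withVelocity: velocity, onChannel: 0) }
    }

    func stop(notes: [UInt8]) {
        for note in notes { sampler.stopNote(note, onChannel: 0) }
    }
}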
Mobile app - Ellie's Gift
https://apps.apple.com/gb/app/ellies-gift/id1617597875
Using AVFoundation to play audio tracks within the app.
It has always worked fine across Apple and Android, but iPhone 14 and newer devices are unable to play audio.
Any ideas or suggestions?
Hello,
We are developing a real-time speech recognition application and are utilizing AVAudioEngine with voice processing enabled on the input node. However, we have observed that enabling this mode interferes with the built-in iOS screen recording feature - specifically, the recorded video does not capture any audio when this mode is active.
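(For context, by "voice processing enabled" I mean the standard AVAudioInputNode call, roughly:)
// Roughly how voice processing is enabled on the input node (error handling simplified).
let engine = AVAudioEngine()
do {
    try engine.inputNode.setVoiceProcessingEnabled(true)
    try engine.start()
} catch {
    print("Failed to enable voice processing: \(error)")
}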
Since we want users to be able to record their experience within our app, this issue significantly impacts our functionality. Is there a known workaround or recommended approach to ensure that both voice processing and screen recording can function simultaneously?
Any guidance would be greatly appreciated.
Thank you!
Is it possible to play WebM audio on iOS? Either with AVPlayer, AVAudioEngine, or some other API?
Safari has supported this for a few releases now, and I'm wondering if I missed something about how to do this. By default these APIs don't seem to work (nor does ExtAudioFileOpen).
Our use case is making it possible for iOS users to play back audio recorded in our web app (desktop versions of Chrome & Firefox only support WebM as a destination format for MediaRecorder).
The device is connected to Bluetooth A and Bluetooth B, and currently the audio is played through Bluetooth A. When a button in the interface is tapped, how can I switch playback to Bluetooth B in code?
Hi guys,
I am having an issue live-streaming audio from a Bluetooth headset and playing it live on the iPhone speaker.
I am able to redirect audio back to the headset, but this is not what I want.
The issue happens when I try to override the output - the iPhone switches to the speaker, but it also switches the microphone.
This is example of the code:
import AVFoundation
class AudioRecorder {
let player: AVAudioPlayerNode
let engine:AVAudioEngine
let audioSession:AVAudioSession
let audioSessionOutput:AVAudioSession
init() {
self.player = AVAudioPlayerNode()
self.engine = AVAudioEngine()
self.audioSession = AVAudioSession.sharedInstance()
self.audioSessionOutput = AVAudioSession()
do {
try self.audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: [.defaultToSpeaker])
try self.audioSessionOutput.setCategory(AVAudioSession.Category.playAndRecord, options: [.allowBluetooth]) // enables Bluetooth HFP profile
try self.audioSession.setMode(AVAudioSession.Mode.default)
try self.audioSession.setActive(true)
// try self.audioSession.overrideOutputAudioPort(.speaker) // doesn't work
} catch {
print(error)
}
let input = self.engine.inputNode
self.engine.attach(self.player)
let bus = 0
let inputFormat = input.inputFormat(forBus: bus)
self.engine.connect(self.player, to: engine.mainMixerNode, format: inputFormat)
input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
self.player.scheduleBuffer(buffer)
print(buffer)
}
}
public func start() {
try! self.engine.start()
self.player.play()
}
public func stop() {
self.player.stop()
self.engine.stop()
}
}
I am not sure if this is a bug or not.
Can somebody point me into the right direction?
Is there a way to design custom audio routing?
I would also appreciate some good documentation besides AVFoundation docs.
We have the necessary background recording entitlements, and for many users... do not run into any issues.
However, there is a subset of users that routinely have recordings end early. We have narrowed this down and believe it to be the work of the watchdog.
First we removed the entire view hierarchy when app is backgrounded. There is just 'Text("Recording")'
This got the CPU usage in profiler down to 0%. We saw massive improvements to recording success rate.
We walked away assuming that was enough. However, we are still seeing the same sort of crashes, all in the background. We're using Observation to drive audio state changes to a Live Activity.
Are those Observations causing the problem? Why doesn't Apple provide a better API for background audio? The internet is full of weird issues:
https://stackoverflow.com/questions/76010213/why-is-my-react-native-app-sometimes-terminated-in-the-background-while-tracking
https://stackoverflow.com/questions/71656047/why-is-my-react-native-app-terminating-in-the-background-while-recording-ios-r
https://github.com/expo/expo/issues/16807
This is such a terrible user experience. And we have very little visibility into what is happening and why.
Nowhere in Apple's documentation does it state that, in order for background recording to work, the app can only be 'Text("Recording")'.
It does not outline a CPU or memory threshold. It just kills us.
I have an app under development - demo here - https://youtu.be/VbAfUk_eYl0?si=s6EDBx-4G6P_QbZO - which is sort of an audio player for airdropped files - something useful to musicians who dump work in progress to their phone, make notes, revise and update.
I've been testing my handling of audio session interruption notifications, but there seems to be a lot of inconsistency in how, when and why iOS delivers them, and I'm wondering if there is some rhyme or reason to it that I'm just not detecting.
For example, I am playing a song in my app. Switch to Apple Music and start playing a song there. My app gets an interruption began notification - this is consistent.
Switch back to my app, and about half the time I will get an interruption-ended notification (often coupled with a blast of the tail of whatever audio buffer was partially played when the interruption started, even though the engine was stopped, and followed by a call to my AVAudioPlayerNodeCompletionCallback - is there some way to avoid this?). The other half of the time I don't get an interruption-ended notification; my app can (as expected) end the interruption by activating the AVAudioSession and playing something.
I have not been able to determine any pattern to this behavior, other than that if my app started playing using AVAudioPlayerNode.scheduleSegment rather than scheduleFile I think the notification will be consistently delivered on app activation rather than when I activate the session programmatically.
I would like my app to behave deterministically, and would appreciate any help in deciphering what causes the inconsistent behavior in notifications from iOS.
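For reference, a minimal sketch of the kind of observer I'm describing (the handler bodies are placeholders):
// Sketch of the interruption observer.
NotificationCenter.default.addObserver(forName: AVAudioSession.interruptionNotification,
                                       object: AVAudioSession.sharedInstance(),
                                       queue: .main) { notification in
    guard let info = notification.userInfo,
          let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

    switch type {
    case .began:
        // Pause the engine/player and remember the playback position.
        break
    case .ended:
        // Only resume if the system says we should.
        let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
        if options.contains(.shouldResume) {
            // Reactivate the session and resume playback.
        }
    @unknown default:
        break
    }
}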
I am developing a VOD playback app, but when I stream video to an external monitor connected via a Lightning-to-HDMI adapter on iOS 18 or later, the screen goes dark and I cannot confirm playback.
The app I am developing does not detect the HDMI connection and display the player separately; it simply mirrors the video.
We have confirmed that the same phenomenon occurs with other services, but we were able to confirm playback with some services such as Apple TV.
Please let us know if there are any other necessary settings such as video certificates required for video playback.
We would also like to know if the problem occurs with iOS 18 or later.
Topic:
Media Technologies
SubTopic:
Audio
Hi,
Not sure if this is the right forum to ask this question in, but could you please advise if I can use Apple Digital Masters logo (badge) in my iOS app that is playing music from Apple Music service?
Topic:
Media Technologies
SubTopic:
Audio
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad parameter error from the API. Does this API not support the specified format, or is there some other issue?
Here is the debugger output.
(lldb) po audioFile.url
▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
- _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
- _parseInfo : nil
- _baseParseInfo : nil
(lldb) po error
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
(lldb) po buffer.format
<AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po audioFile.fileFormat
<AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po buffer.frameLength
882
(lldb) po buffer.audioBufferList
▿ 0x0000000300941e60
- pointerValue : 12894608992
This code handles the details of converting the Live Switch frame into an AVAudioPCMBuffer.
extension FMLiveSwitchAudioFrame {
func convertedToPCMBuffer() -> AVAudioPCMBuffer {
Self.convertToAVAudioPCMBuffer(from: self)!
}
static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
// Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
guard
let buffer = frame.buffer(),
let format = buffer.format() as? FMLiveSwitchAudioFormat else { return nil }
// Extract PCM format details from FMLiveSwitchAudioFormat
let sampleRate = Double(format.clockRate())
let channelCount = AVAudioChannelCount(format.channelCount())
// Determine bytes per sample based on bit depth
let bitsPerSample = 16
let bytesPerSample = bitsPerSample / 8
let bytesPerFrame = bytesPerSample * Int(channelCount)
let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)
// Create an AVAudioFormat from the FMLiveSwitchAudioFormat
guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: channelCount, interleaved: true) else {
return nil
}
// Create an AudioBufferList to wrap the existing buffer
let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
audioBufferList.pointee.mNumberBuffers = 1
audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer
// Transfer ownership of the buffer to AVAudioPCMBuffer
let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
// Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
} */
pcmBuffer?.frameLength = frameLength
return pcmBuffer
}
}
This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and optionally update a scrolling signal display on the screen.
private func onRaisedFrame(obj: Any!) -> Void {
// Bail out early if no one is interested in the data.
guard isMonitoring else { return }
// Convert LS frame to AVAudioPCMBuffer (no-copy)
let frame = obj as! FMLiveSwitchAudioFrame
let buffer = frame.convertedToPCMBuffer()
// Hand subscribers a reference to the buffer for rendering to display.
bufferPublisher?.send(buffer)
// If we have an output file, store the data there as well.
guard let audioFile = self.audioFile else { return }
do {
try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
} catch {
FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
self.audioFile = nil
}
}
This is how the audio file is being setup.
static var recordingFormat: AVAudioFormat = {
AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
}()
let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
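One detail worth noting as a sketch: AVAudioFile.write(from:) validates buffers against the file's processingFormat (which defaults to deinterleaved Float32), not its fileFormat, so presumably the file would need to be opened with a matching processing format, something like:
// Sketch: open the file so its processing format matches the Int16 interleaved buffers,
// assuming the -50 error comes from a buffer/processingFormat mismatch.
let audioFile = try AVAudioFile(forWriting: outputURL,
                                settings: Self.recordingFormat.settings,
                                commonFormat: .pcmFormatInt16,
                                interleaved: true)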