Hello everybody!
I’m encountering an unexpected guardrailViolation error when using Foundation Models on macOS Tahoe Beta 3 with an Apple M2 Pro chip. This issue didn’t occur on Beta 1 or Beta 2 with the same codebase.
Reproduction Context
I’m developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes in the generation logic.
To isolate the issue, I opened the official WWDC sample project from the session "Adding intelligent app features with generative models", and the same guardrailViolation error appeared without any modifications.
Simplified Working Example
I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine:
import Foundation
import Playgrounds
import FoundationModels
@Generable
struct GeneableLandmark {
@Guide(description: "Name of the landmark to visit")
var name: String
}
final class LandmarkSuggestionGenerator {
var landmarkSuggestion: GeneableLandmark.PartiallyGenerated?
private var session: LanguageModelSession
init(){
self.session = LanguageModelSession(
instructions: Instructions {
"""
generate a list of landmarks to visit
"""
}
)
}
func createLandmarkSuggestion(location: String) async throws {
let stream = session.streamResponse(
generating: GeneableLandmark.self,
options: GenerationOptions(sampling: .greedy),
includeSchemaInPrompt: false
) {
"""
Generate a list of landmarks to visit in \(location)
"""
}
for try await partialResponse in stream {
landmarkSuggestion = partialResponse
}
}
}
#Playground {
let generator = LandmarkSuggestionGenerator()
Task {
do {
try await generator.createLandmarkSuggestion(location: "New York")
if let suggestion = generator.landmarkSuggestion {
print("Suggested landmark: \(suggestion)")
} else {
print("No suggestion generated.")
}
} catch {
print("Error generating landmark suggestion: \(error)")
}
}
}
But as soon as I use the sample project’s ItineraryPlanner:
#Playground {
// Example landmark for demonstration
let exampleLandmark = Landmark(
id: 1,
name: "San Francisco",
continent: "North America",
description: "A vibrant city by the bay known for the Golden Gate Bridge.",
shortDescription: "Iconic Californian city.",
latitude: 37.7749,
longitude: -122.4194,
span: 0.2,
placeID: nil
)
let planner = ItineraryPlanner(landmark: exampleLandmark)
Task {
do {
try await planner.suggestItinerary(dayCount: 3)
if let itinerary = planner.itinerary {
print("Suggested itinerary: \(itinerary)")
} else {
print("No itinerary generated.")
}
} catch {
print("Error generating itinerary: \(error)")
}
}
}
The error pops up:
Error generating itinerary:
guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))]))
Based on my tests:
The error may not be tied to structure complexity (since more nested structures work)
The issue may stem from the tools or prompt content used inside the ItineraryPlanner
The guardrail sensitivity may have increased or changed in Beta 3, affecting models that worked in earlier betas
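For completeness, here is how the guardrail case could be separated from other generation errors while testing. This is a minimal sketch; it assumes the sample project's ItineraryPlanner and the GenerationError cases visible in the error output above:

import FoundationModels

// Minimal sketch: separate the guardrail case from other generation errors so
// other failures (context window, decoding, etc.) are not masked while testing.
// Assumes the sample project's ItineraryPlanner from the WWDC session.
func runItinerary(_ planner: ItineraryPlanner) async {
    do {
        try await planner.suggestItinerary(dayCount: 3)
    } catch let error as LanguageModelSession.GenerationError {
        switch error {
        case .guardrailViolation(let context):
            print("Guardrail violation: \(context.debugDescription)")
        default:
            print("Other generation error: \(error)")
        }
    } catch {
        print("Unrelated error: \(error)")
    }
}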
Thank you in advance for your help. Let me know if more details or reproducible code samples are needed - I’m happy to provide them.
Best,
Sasha Morozov
I want to use Foundation Models in a project, but I know my users will want to avoid environmentally intensive AI work in data centers.
Does Foundation Models ever use Private Compute Cloud or any other kind of cloud-based AI system?
I'd like to be able to assure my users that the LLM usage is relatively environmentally friendly. It would be great to be able to cite a specific Apple page explaining that Foundation Models work is always done locally.
If there's any chance that work can be done in the cloud, is there a way to opt out of that?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I just tried to write a very simple test using Foundation Models, but it gave me an error like this:
"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"
The simple code is listed below:
let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: \(response)")
So what's the problem with this one?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Is there any way to ensure iOS apps we develop using Foundation Models can only be purchasable/downloadable on App Store by folks with capable devices? I would've thought there would be a Required Capabilities that App Store would hook into, but I don't seem to see it in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities
The closest seems to be iphone-performance-gaming-tier as that seems to target all M1 and above chips on iPhone & iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path would be to set Minimum Deployment to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what's Foundation Model / Apple Intelligence capable.
While I understand that the majority of apps will want to selectively add Apple Intelligence features so they remain usable by folks whose devices don't support them, the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users download the app only to be told "Sorry, you're not Apple Intelligence capable".
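For context, the runtime check I can already do looks roughly like the following. This is a minimal sketch assuming SystemLanguageModel's availability API, and it only gates the experience after install rather than at purchase time, which is the limitation described above:

import FoundationModels

// Minimal sketch: runtime gate on model availability. This only helps after
// the app is installed, which is the limitation described above.
// Assumes Availability exposes .available / .unavailable(reason).
func foundationModelsAvailable() -> Bool {
    switch SystemLanguageModel.default.availability {
    case .available:
        return true
    case .unavailable(let reason):
        print("Foundation Models unavailable: \(reason)")
        return false
    }
}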
Hello everyone. I have a visual convolutional model and a video that has been decoded into many frames. When I perform inference on each frame in a loop, the speed is a bit slow, so I started 4 threads, each running inference simultaneously, but the overall speed is the same as serial inference and every single forward pass is slower. I used the mactop tool to check GPU utilization, and it was only around 20%. Is this normal? How can I accelerate it?
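For reference, a batched variant of the per-frame loop I could try is sketched below. This is a minimal sketch assuming a Core ML model; the input feature name "image" is a placeholder for my real feature name:

import CoreML
import CoreVideo

// Sketch of batched inference instead of a per-frame prediction loop.
// Assumptions: the model is a compiled Core ML model and its image input is
// named "image" (a placeholder).
func predictBatch(model: MLModel, frames: [CVPixelBuffer]) throws -> MLBatchProvider {
    let inputs: [MLFeatureProvider] = try frames.map { frame in
        try MLDictionaryFeatureProvider(dictionary: ["image": MLFeatureValue(pixelBuffer: frame)])
    }
    let batch = MLArrayBatchProvider(array: inputs)
    // One batched call lets Core ML schedule the work itself instead of my threads.
    return try model.predictions(fromBatch: batch)
}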
When I use the FoundationModels framework to generate long text, it always hits an error:
"Passing along Client rate limit exceeded, try again later in response to ExecuteRequest"
and stops generating.
E.g., for the prompt "Write a long story", it will almost certainly hit that error after 17 seconds of generation.
do {
    let session = LanguageModelSession()
    let prompt: String = "Write a long story"
    let response = try await session.respond(to: prompt)
} catch {}
If possible, I want to know how to prevent that error or at least how to handle it.
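For now, the only mitigation I can think of is catching the failure and retrying with a delay. A minimal sketch is below; the attempt count and delays are arbitrary assumptions, not documented behavior:

import FoundationModels

// Sketch: retry the request a few times with an increasing delay when the
// response fails. The rate limit itself is not documented, so this only
// papers over the error rather than preventing it.
func respondWithRetry(prompt: String, attempts: Int = 3) async throws -> String {
    var lastError: Error?
    for attempt in 0..<attempts {
        do {
            let session = LanguageModelSession()
            return try await session.respond(to: prompt).content
        } catch {
            lastError = error
            if attempt + 1 < attempts {
                try await Task.sleep(nanoseconds: UInt64(1 << attempt) * 2_000_000_000) // 2s, then 4s
            }
        }
    }
    throw lastError!
}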
Is it possible to train an Adaptor for the Foundation Models to produce Generable output? If so what would the response part of the training data need to look like? Presumably, under the hood, the model is outputting JSON (or some other similar structure) that can be decoded to a Generable type. Would the response part of the training data for an Adaptor need to be in that structured format?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Overview
I'm experiencing a critical issue where TensorFlow-metal and PyArrow seem to be incompatible when installed together in the same environment. Whenever both packages are present, TensorFlow crashes and the kernel dies during execution.
Environment Details
macOS Version: 15.3.2
Mac Model: MacBook Pro Max M3
Python Version: 3.11
TensorFlow Version: 2.19
PyArrow Version: 19.0.0
Issue Description:
When both TensorFlow-metal and PyArrow are installed in the same Python environment, any attempt to use TensorFlow results in immediate kernel crashes. The issue appears to be a compatibility problem between these two packages rather than a problem with either package individually.
Steps to Reproduce
Create a new Python environment:
conda create -n tf-metal python=3.11
Install TensorFlow-metal:
pip install tensorflow tensorflow-metal
Install PyArrow: pip install pyarrow
Run the following minimal example:
import numpy as np
import tensorflow as tf

# Create a simple model
model = tf.keras.Sequential([
tf.keras.layers.Input(shape=(2,)),
tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.summary() # This works fine
# Generate some dummy data
X = np.random.random((100, 2))
y = np.random.random((100, 1))
# The crash happens exactly at this line
model.fit(X, y, epochs=5, batch_size=32) # CRASH: Kernel dies here
Result: Kernel crashes with no error message
What I've Tried
Reinstalling both packages in different orders
Using different versions of both packages
Creating isolated environments
Checking system logs for additional error information
The only workaround I've found is to use separate environments for each package, which isn't practical for my workflow as I need both libraries for my data processing and machine learning pipeline.
Questions
Has anyone else encountered this specific compatibility issue?
Are there known workarounds that allow both packages to coexist?
Is this a known issue that's being addressed in upcoming releases?
Any insights, suggestions, or assistance would be greatly appreciated. I'm happy to provide any additional information that might help diagnose this problem. Thank you in advance for your help!
Topic:
Machine Learning & AI
SubTopic:
Core ML
As described in the title, the model I have built works completely on an iPhone 15 / A16 Bionic; on the other hand, it does not run on an iPhone 16 / A18 and fails with the following error message.
E5RT encountered an STL exception. msg = MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED.
E5RT: MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED (11)
It consumes 1.5 to 1.6 GB of RAM while loading the model; the consumption then drops to less than 100 MB on both iPhone 15 and 16. After that, only on iPhone 16, the above error appears in the Xcode log, memory consumption surges to 5 to 6 GB, and the system kills the app. It works well only on iPhone 15.
This model is built with Core ML Tools. So far, I have tried deployment targets from iOS 16 to 18 and the compute units CPU_AND_NE and ALL, but none of these has solved the issue. Ultimately, what kind of fix should I apply?
minimum_deployment_target = ct.target.iOS18
compute_units = ct.ComputeUnit.ALL
compute_precision = ct.precision.FLOAT16
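One diagnostic I'm considering on the app side is forcing the model off the Neural Engine at load time to confirm whether the failure is ANE-specific. A minimal sketch, where the compiled model URL is a placeholder:

import Foundation
import CoreML

// Sketch: load the same model with compute units restricted to CPU+GPU.
// If this loads on iPhone 16 while .all / .cpuAndNeuralEngine fails, the
// problem is isolated to ANE compilation on A18. The URL is a placeholder.
func loadWithoutANE(at compiledModelURL: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU // avoid the ANE compiler path
    return try MLModel(contentsOf: compiledModelURL, configuration: config)
}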
Environment
macOS 26
Xcode Version 26.0 beta 7 (17A5305k)
Simulator: iPhone 16 Pro
iOS: iOS 26
Problem
NLContextualEmbedding.load() fails with the following error
In simulator
Failed to load embedding from MIL representation: filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
Failed to load embedding model 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289'
assetRequestFailed(Optional(Error Domain=NLNaturalLanguageErrorDomain Code=7 "Embedding model requires compilation" UserInfo={NSLocalizedDescription=Embedding model requires compilation}))
This happens inside a #Playground block.
I'm new to this embedding model, so I'm not sure whether it's caused by my code or my environment.
Code snippet
import Foundation
import NaturalLanguage
import Playgrounds
#Playground {
// Prefer initializing by script for broader coverage; returns NLContextualEmbedding?
guard let embeddingModel = NLContextualEmbedding(script: .latin) else {
print("Failed to create NLContextualEmbedding")
return
}
print(embeddingModel.hasAvailableAssets)
do {
try embeddingModel.load()
print("Model loaded")
} catch {
print("Failed to load model: \(error)")
}
}
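One variant I have not ruled out is explicitly requesting the assets before calling load(). A minimal sketch, assuming requestAssets reports availability as documented:

import NaturalLanguage

// Sketch: request the embedding assets first, then load once they are reported
// available. This may not help if the real issue is the simulator's cache
// permissions rather than missing assets.
func loadLatinEmbedding() {
    guard let embedding = NLContextualEmbedding(script: .latin) else { return }
    embedding.requestAssets { result, error in
        guard result == .available else {
            print("Assets not available: \(String(describing: error))")
            return
        }
        do {
            try embedding.load()
            print("Embedding loaded")
        } catch {
            print("Load failed: \(error)")
        }
    }
}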
Hi,
One can configure the languages of a (VN)RecognizeTextRequest with either:
.automatic: language to be detected
a specific language, say Spanish
If the request is configured with .automatic and successfully detects Spanish, will the results be exactly equivalent compared to a request made with Spanish set as language?
I could not find any information about this, and this is very important for the core architecture of my app.
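For concreteness, these are the two configurations I'm comparing; a minimal sketch using VNRecognizeTextRequest, assuming automaticallyDetectsLanguage is the right way to express the automatic case:

import Vision

// Variant A: let Vision detect the language automatically.
let automaticRequest = VNRecognizeTextRequest()
automaticRequest.recognitionLevel = .accurate
automaticRequest.automaticallyDetectsLanguage = true

// Variant B: pin the request to Spanish.
let spanishRequest = VNRecognizeTextRequest()
spanishRequest.recognitionLevel = .accurate
spanishRequest.automaticallyDetectsLanguage = false
spanishRequest.recognitionLanguages = ["es-ES"]

// The question: if Variant A detects Spanish, are its results identical to Variant B's?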
Thanks!
When doing some exploratory research into using Apple Intelligence in our aviation-focused application, I noticed that there were several times that key phrases would be marked as inappropriate. I tried to suppress these using prompts and rules but couldn't get it to take hold. I was encouraged by an Apple employee to go ahead and post this so that the AI team can use the feedback.
There were several terms that triggered this warning, but the two that were most prominent were:
'Tailwind'
'JFK' or 'KJFK' (NY airport ICAO/IATA codes)
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I have built a macOS machine-intelligence application that uses Apple Intelligence. Part of the application preprocesses text, and for longer text content I have implemented chunking to get around the token limit. However, the application is now limited by the fact that Apple Intelligence operates sequentially, which has a large impact on performance.
Is there any approach to operating Apple Intelligence in a parallel mode, or even through a streaming interface? Since Apple Intelligence has Private Cloud Services, I was hoping to be able to send multiple chunks in parallel, as that would significantly improve performance.
Any suggestions would be welcome. This could also be considered a request for a future enhancement.
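To make the request concrete, the shape of what I'd like to do is sketched below. It assumes separate LanguageModelSession instances can actually run concurrently, which is exactly what I can't confirm:

import FoundationModels

// Sketch: give each chunk its own session and fan the work out with a task
// group. Whether these actually run in parallel (rather than being serialized
// by the system) is the open question.
func summarizeChunks(_ chunks: [String]) async throws -> [String] {
    try await withThrowingTaskGroup(of: (Int, String).self) { group in
        for (index, chunk) in chunks.enumerated() {
            group.addTask {
                let session = LanguageModelSession()
                let response = try await session.respond(to: "Summarize: \(chunk)")
                return (index, response.content)
            }
        }
        var results = Array(repeating: "", count: chunks.count)
        for try await (index, summary) in group {
            results[index] = summary
        }
        return results
    }
}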
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I have a question. In China, long-pressing a picture in the Photos album can segment the subject. Is this a local, on-device model? Is there any documentation about it? Can developers use it?
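For reference, the closest developer-facing API I've found is Vision's foreground instance mask request; a minimal sketch is below, though whether this is the same model as the Photos long-press feature is part of what I'm asking:

import Vision
import CoreImage

// Sketch: lift the foreground subject from an image on-device with Vision.
// This runs locally, but whether it is the same model Photos uses for the
// long-press feature is part of the question.
func subjectMask(for image: CIImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(ciImage: image)
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }
    return try observation.generateMaskedImage(ofInstances: observation.allInstances,
                                               from: handler,
                                               croppedToInstancesExtent: false)
}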
I am trying to benchmark and see if the Qwen3 1.7B model can run in an iPhone SE 3 [4 GB RAM].
My core problem is that, even with weight quantization, the SE 3 is not able to load the model into memory.
What I've tried:
I am converting a Torch model to the Core ML format using coremltools. I have tried the following combinations of quantization and context length
8 bit + 1024
8 bit + 2048
4 bit + 1024
4 bit + 2048
All the above quantizations are done with dynamic shape with the default being [1,1] in the hope that the whole context length does not get allocated in memory
The 4-bit model is approximately 865MB on disk
The 8-bit model is approximately 1.7 GB on disk
During load:
With int4 quantization, memory spikes a lot during the initial load. Could this be because many operations are converted to int8 or fp16, since Core ML does not perform operations natively on int4?
With int8, the profiler shows that memory does not go above 2 GB (only about 900 MB), but the model still fails to load and shows the following error. 2 GB is the limit at which jetsam kills the app on the iPhone SE 3.
E5RT: Error(s) occurred compiling MIL to BNNS graph:
[CreateBnnsGraphProgramFromMIL]: BNNS Graph Compile:
failed to preallocate file with error: No space left on device
for path: /var/mobile/Containers/Data/Application/
5B8BB7D2-06A6-4BAE-A042-407B6D805E7C/Library/Caches
/com.tss.qwen3-coreml/
com.apple.e5rt.e5bundlecache/
23A341/<long key>.tmp.12586_4362093968.bundle/
H14.bundle/main/main_bnns/bnns_program.bnnsir
Some online sources have suggested activation quantization but I am unsure if that will have any impact on loading [as the spike is during load and not inference]
The model spec also suggests that there is no dequantization happening (e.g., from 4-bit to fp16).
So I had a couple of queries:
Has anyone faced similar issues?
What could be the reasons for the temporary memory spike during LOAD
What are approaches that can be adopted to deal with this issue?
Any help would be greatly appreciated. Thank you.
Hello All,
I’m working on a computer-vision–heavy iOS application that uses the camera, LiDAR depth maps, and semantic segmentation to reason about the environment (object identification, localization and measurement - not just visualization).
Current architecture
I initially built the image pipeline around CIImage as a unifying abstraction. It seemed like a good idea because:
CIImage integrates cleanly with Vision, ARKit, AVFoundation, Metal, Core Graphics, etc.
It provides a rich set of out-of-the-box transforms and filters.
It is immutable and thread-safe, which significantly simplified concurrency in a multi-queue pipeline.
The LiDAR depth maps, semantic segmentation masks, etc. were treated as CIImages, with conversion to CVPixelBuffer or MTLTexture only at the edges when required.
Problem
I’ve run into cases where Core Image transformations do not preserve numeric fidelity for non-visual data.
Example:
Rendering a CIImage-backed segmentation mask into a larger CVPixelBuffer can cause label values to change in predictable but incorrect ways.
This occurs even when:
using nearest-neighbor sampling
disabling color management (workingColorSpace / outputColorSpace = NSNull)
applying identity or simple affine transforms
I’ve confirmed via controlled tests that:
Metal → CVPixelBuffer paths preserve values correctly
CIImage → CVPixelBuffer paths can introduce value changes when resampling or expanding the render target
This makes CIImage unsafe as a source of numeric truth for segmentation masks and depth-based logic, even though it works well for visualization, and I should have realized this much sooner.
Direction I’m considering
I’m now considering refactoring toward more intent-based abstractions instead of a single image type, for example:
Visual images: CIImage (camera frames, overlays, debugging, UI)
Scalar fields: depth / confidence maps backed by CVPixelBuffer + Metal
Label maps: segmentation masks backed by integer-preserving buffers (no interpolation, no transforms)
In this model, CIImage would still be used extensively — but primarily for visualization and perceptual processing, not as the container for numerically sensitive data.
Thread safety concern
One of the original advantages of CIImage was that it is thread-safe by design, and that was my biggest incentive.
For CVPixelBuffer / MTLTexture–backed data, I’m considering enforcing thread safety explicitly via:
Swift Concurrency (actor-owned data, explicit ownership), roughly as sketched below
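Concretely, for label maps I have something like the following in mind. This is a rough sketch of the ownership model, not settled code:

import CoreVideo

// Rough sketch: an actor that owns an 8-bit, single-channel segmentation
// buffer so all access is serialized and no interpolating transform ever
// touches it.
actor LabelMap {
    private let buffer: CVPixelBuffer

    init(buffer: CVPixelBuffer) {
        self.buffer = buffer
    }

    // Read the class label at a pixel coordinate, clamped to the buffer bounds.
    func label(atX x: Int, y: Int) -> UInt8 {
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

        let width = CVPixelBufferGetWidth(buffer)
        let height = CVPixelBufferGetHeight(buffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
        let cx = min(max(x, 0), width - 1)
        let cy = min(max(y, 0), height - 1)
        let base = CVPixelBufferGetBaseAddress(buffer)!.assumingMemoryBound(to: UInt8.self)
        return base[cy * bytesPerRow + cx]
    }
}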
Questions
For those who have experience with CV / AR / imaging-heavy iOS apps, I was hoping to learn the following:
Is this separation of image intent (visual vs numeric vs categorical) a reasonable architectural direction?
Do you generally keep CIImage at the heart of your pipeline, or push it to the edges (visualization only)?
How do you manage thread safety and ownership when working heavily with CVPixelBuffer and Metal? Using actor-based abstractions, GCD, or ad hoc approaches?
Are there any best practices or gotchas around using Core Image with depth maps or segmentation masks that I should be aware of?
I’d really appreciate any guidance or experience-based advice. I suspect I’ve hit a boundary of Core Image’s design, and I’m trying to refactor in a way that doesn't involve too much immediate tech debt and remains robust and maintainable long-term.
Thank you in advance!
I am trying to test FoundationModels in a Swift Playground in Xcode 26.2, macOS 26.3, and am running into an issue. The following simple code generates an error:
import FoundationModels
@Generable
struct Specifications {
@Guide(description: "Search for color")
var color: String
}
I see the following error message in the console:
error: AIPlayground.playground:4:8: external macro implementation type 'FoundationModelsMacros.GenerableMacro' could not be found for macro 'Generable(description:)'; plugin for module 'FoundationModelsMacros' not found
The Xcode editor does not appear to recognize the @Generable or @Guide macros, despite importing FoundationModels. What step/setting am I missing?
After watching the What's new in App Intents session, I'm attempting to create an intent conforming to URLRepresentableIntent. The video states that, so long as my AppEntity conforms to URLRepresentableEntity, I should not have to provide a perform method. My application will be launched automatically and passed the appropriate URL.
This seems to work in that my application is launched and is passed a URL, but the URL is in the form: FeatureEntity/{id}.
Am I missing something, or is there a trick that enables it to pass along the URL specified in the AppEntity itself?
struct MyExampleIntent: OpenIntent, URLRepresentableIntent {
static let title: LocalizedStringResource = "Open Feature"
static var parameterSummary: some ParameterSummary {
Summary("Open \(\.$target)")
}
@Parameter(title: "My feature", description: "The feature to open.")
var target: FeatureEntity
}
struct FeatureEntity: AppEntity {
// ...
}
extension FeatureEntity: URLRepresentableEntity {
static var urlRepresentation: URLRepresentation {
"https://myurl.com/\(.id)"
}
}
When I use ChatGPT in Xcode, the following error is displayed:
It was working fine before, but it suddenly started failing like this without any configuration changes. Why?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
After exporting a custom model with nms=True.
In Xcode, the outputs show as:
confidence: MultiArray (0 × 5)
coordinates: MultiArray (0 × 4)
I want to set fixed shapes (e.g., 100 × 5, 100 × 4), but Xcode does not allow editing—the shape fields are locked. The model graph shows both outputs come directly from a NonMaximumSuppression layer.
Is it possible to set fixed output dimensions for NMS outputs in CoreML?