I'm using Python 3.9.6, TensorFlow 2.20.0, and tensorflow-metal 1.2.0, and when I try to run
import tensorflow as tf
it fails with this traceback:
Traceback (most recent call last):
File "/Users/haoduoyu/Code/demo.py", line 1, in <module>
import tensorflow as tf
File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/__init__.py", line 438, in <module>
_ll.load_library(_plugin_dir)
File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file)
If I uninstall tensorflow-metal, everything works fine. How can I fix this problem?
I'm trying to train a text classifier model in Create ML. The Create ML app/framework offers five algorithms. I can successfully train the model with all of the algorithms except the BERT transfer learning option. When I select this algorithm, Create ML simply stops the training process immediately after the initial feature extraction phase (with no reported error).
What I've tried:
I tried simplifying the dataset to just a few classes and short examples in case there was a problem with the data.
I tried experimenting with the number of iterations and language/script options.
I checked Console.app for logged errors and found the following for the Create ML app:
error 10:38:28.385778+0000 Create ML Couldn't read event column - category is invalid. Format string is : <private>
error 10:38:30.902724+0000 Create ML Could not encode the entity <private>. Error: <private>
I'm not sure if these errors are normal or indicative of a problem. I don't know what it means by the "event" column – I don't have an event column in my data and I don't believe there should be one. These errors are not reported when using the other algorithms.
Given that I couldn't get the app to work with BERT, I switched over to the CreateML framework and followed the code samples given in the documentation. (By the way, there's an error in the docs: the line let (trainingData, testingData) = data.stratifiedSplit(on: "text", by: 0.8) should be stratifying on "label", not on "text").
The main chunk of code looks like this:
var parameters = MLTextClassifier.ModelParameters(
    validation: .split(strategy: .automatic),
    algorithm: .transferLearning(.bertEmbedding, revision: 1),
    language: .english
)
parameters.maxIterations = 100
let sentimentClassifier = try MLTextClassifier(
    trainingData: trainingData,
    textColumn: "text",
    labelColumn: "label",
    parameters: parameters
)
Ultimately I want to train a single multilingual model, and I believe that BERT is the best choice for this. The problem is that there doesn't seem to be a way to choose the multilingual Latin script option in the API. In the Create ML app you can theoretically do this by selecting the Latin script with language set to "Automatic", as recommended in this WWDC video (relevant section starts at around 8:02). But, as far as I can tell, ModelParameters only lets you pick a specific language.
I presume the framework must provide some way to do this, since the Create ML app uses the framework under the hood, but I can't see a way to do it. Another possibility is that the Create ML app might be misrepresenting the framework – perhaps selecting a specific language in the app doesn't actually make any difference – for example, maybe all Latin languages actually use the same model under the hood and the language selector is just there to guide people to the right choice (but this is just my speculation).
Any help would be much appreciated! If possible, I'd prefer to use the Create ML app if I can get the BERT option to work – is this actually working for anyone? Or failing that, I want to use the framework to train a multilingual Latin model with BERT, so I'm looking for instructions on how to choose that specific option or confirmation that I can just choose .english to get the correct Latin multilingual model.
I'm running Xcode 26.2 on Tahoe 21.1 on an M1 Pro MacBook Pro. I have version 6.2 of the Create ML app.
Topic: Machine Learning & AI
SubTopic: Create ML
I am writing an app that parses text and conducts some actions. I don't want to give too much away ;)
However, I am having a huge problem with token sizes. LanguageModelSession gives me the on-device model's 4,096-token context window, but when a request goes over 4,096 tokens, my code doesn't seem to fall back to Private Cloud Compute, or even to the system-configured ChatGPT integration. Can anyone assist me with this? Even after reading the docs, it's very unclear to me how the transition between the three takes place.
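For reference, as far as I can tell from the docs, going over the window surfaces as a thrown error on the session rather than triggering any hand-off; a minimal sketch of catching it (error case name per the Foundation Models documentation as I understand it):
import FoundationModels

func generateTitle(for longText: String) async -> String? {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Write a short title for: \(longText)")
        return response.content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // No automatic fallback to Private Cloud Compute or the ChatGPT extension happens here;
        // the app has to shrink the input (or use its own remote path) and retry in a new session.
        return nil
    } catch {
        return nil
    }
}
If that matches what others see, any PCC or ChatGPT routing has to be implemented by the app rather than by the framework, but I'd love confirmation.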
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
About the Core ML model encryption described in: https://developer.apple.com/documentation/coreml/encrypting-a-model-in-your-app
When I encrypt the model, it loads perfectly on machines with Apple silicon (M-series) chips. On the other hand, when I test the executable on an Intel MacBook, I get this error:
Error Domain=com.apple.CoreML Code=9 "Operation not supported on this platform." UserInfo={NSLocalizedDescription=Operation not supported on this platform.}
The Intel test machine is a 2019 MacBook Air with:
CPU: Intel i5-8210Y
OS: macOS 14.7.6 (23H626)
Apple T2 Security Chip
The encrypted model does load on M2 and M4 MacBook Air machines.
If the model is NOT encrypted, it also loads on the Intel test machine.
I could not find anything in the Core ML documentation stating whether model encryption/decryption supports Intel chips.
Can someone confirm whether decryption indeed does NOT support Intel chips?
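For context, a minimal sketch of loading a compiled, encrypted model with the asynchronous Core ML loading API (paths hypothetical); this is the kind of call path where the platform error above comes back on the Intel machine:
import CoreML

func loadEncryptedModel(at compiledModelURL: URL) async -> MLModel? {
    do {
        let configuration = MLModelConfiguration()
        return try await MLModel.load(contentsOf: compiledModelURL, configuration: configuration)
    } catch {
        // On the 2019 Intel MacBook Air, the "Operation not supported on this platform."
        // (Code=9) error surfaces here; an unencrypted fallback model could be loaded instead.
        print("Encrypted model failed to load: \(error)")
        return nil
    }
}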
I have a series of shortcuts that I’ve written that use the “Use Model” action to do various things. For example, I have a shortcut “Clipboard Markdown to Notes” that takes the content of the clipboard, creates a new note in Notes, converts the markdown content to rich text, adds it to the note etc.
One key step is to analyze the markdown content with “Use Model” and generate a short descriptive title for the note.
I use the on-device model for this, but sometimes the content and prompt exceed the context window size and the action fails with an error message to that effect.
In that case, I’d like to either repeat the action using the Cloud model, or, if the error was a refusal, to prompt the user to enter a title to use.
I've tried using an IF based on whether the response had any text in it, but that didn't work. No matter what I've tried, I can't seem to find a way to catch the error from Use Model, determine what the error was, and take appropriate action.
Is there a way to do this?
(And by the way, a huge ”thank you” to whoever had the idea of making AppIntents visible in Shortcuts and adding the Use Model action — has made a huge difference already, and it lets us see what Siri will be able to use as well.)
It is vital for Apple to refine its OCR models to correctly distinguish between Khmer and Thai scripts. Incorrectly labeling Khmer text as Thai is more than a technical bug; it is a culturally insensitive error that impacts national identity, especially given the current geopolitical climate between Cambodia and Thailand. Implementing a more robust language-detection threshold would prevent these harmful misidentifications.
There is a significant logic flaw in the VNRecognizeTextRequest language detection when processing Khmer script. When the property automaticallyDetectsLanguage is set to true, the Vision framework frequently misidentifies Khmer characters as Thai.
While both scripts share historical roots, they are distinct languages with different alphabets. Currently, the model’s confidence threshold for distinguishing between these two scripts is too low, leading to incorrect OCR output in both developer-facing APIs and Apple’s native ecosystem (Preview, Live Text, and Photos).
import SwiftUI
import Vision

class TextExtractor {
    func extractText(from data: Data, completion: @escaping (String) -> Void) {
        let request = VNRecognizeTextRequest { (request, error) in
            guard let observations = request.results as? [VNRecognizedTextObservation] else {
                completion("No text found.")
                return
            }
            let recognizedStrings = observations.compactMap { observation -> String? in
                // Skip observations with no candidate instead of force-unwrapping.
                guard let str = observation.topCandidates(1).first?.string else { return nil }
                return "{text: \(str), confidence: \(observation.confidence)}"
            }
            completion(recognizedStrings.joined(separator: "\n"))
        }
        request.automaticallyDetectsLanguage = true // <-- This is the issue.
        request.recognitionLevel = .accurate
        let handler = VNImageRequestHandler(data: data, options: [:])
        DispatchQueue.global(qos: .background).async {
            do {
                try handler.perform([request])
            } catch {
                completion("Failed to perform OCR: \(error.localizedDescription)")
            }
        }
    }
}
Recognizing Khmer: the confidence score is low, and the output comes back in Thai.
Recognizing English: the confidence score is high, as expected.
Recognizing Thai: the confidence score is high, as expected.
Issues in Preview and Photos
Copying Khmer text via Live Text produces Thai output. For example:
Kouk Pring Chroum Temple [19121 รอาสายสุกตีนานยารรีสใหิสรราภูชิตีนนสุฐตีย์ [รุก
เผือชิษาธอยกัตธ์ตายตราพาษชาณา ถวเชยาใบสราเบรถทีมูสินตราพาษชาณา ทีมูโษา เช็ก
อาษเชิษฐอารายสุกบดตพรธุรฯ ตากร"สุก"ผาตากรธกรธุกเยากสเผาพศฐตาสาย รัอรณาษ"ตีพย"
สเผาพกรกฐาภูชิสาเครๆผู:สุกรตีพาสเผาพสรอสายใผิตรรารตีพสๆ เดียอลายสุกตีน
ธาราชรติ ธิพรหณาะพูชุบละเาหLunet De Lajonquiere ผารูกรสาราพารผรผาสิตภพ ตารสิทูก ธิพิ
คุณที่นสายเระพบพเคเผาหนารเกะทรนภาษเราภุพเสารเราษทีเลิกสญาเราหรุฬารชสเกาก เรากุม
สงสอบานตรเราะากกต่ายภากายระตารุกเตียน
Recommended Solutions
1. Set a Threshold
Filter out detected results whose confidence is less than or equal to 0.5, so that the low-quality recognitions that cause this issue are not emitted.
For example,
let recognizedStrings = observations.compactMap { observation -> String? in
    if observation.confidence <= 0.5 {
        return nil
    }
    guard let str = observation.topCandidates(1).first?.string else { return nil }
    return "{text: \(str), confidence: \(observation.confidence)}"
}
2. Add Khmer Language Support
This issue would not occur at all if the model could detect and recognize images containing Khmer text natively.
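One way to confirm the gap from code is to ask the request itself which languages the current revision supports; as of current releases, Khmer does not appear in this list, which is consistent with the behavior above:
import Vision

let request = VNRecognizeTextRequest()
request.recognitionLevel = .accurate
if let supported = try? request.supportedRecognitionLanguages() {
    print("Supported recognition languages: \(supported)")
    print("Khmer supported: \(supported.contains { $0.hasPrefix("km") })")
}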
Doc2Text GitHub: https://github.com/seanghay/Doc2Text-Swift
Hello everyone,
I’m currently working with the Message Filtering Extension and would really appreciate some clarification around its performance and operational constraints. While the extension is extremely powerful and useful, I’ve found that some important details are either unclear or not well covered in the available documentation.
There are two main areas I’m trying to understand better:
Machine learning model constraints within the extension
In our case, we already have an existing ML model that classifies messages (and are not dependent on Apple's built-in models). We're evaluating whether and how it can be used inside the extension.
Specifically, I’m trying to understand:
Are there documented limits on the size of an ML model (e.g., maximum bundle size or model file size in MB)?
What are the memory constraints for a model once loaded into memory by the extension?
Under what conditions would the system terminate or “kick out” the extension due to memory or performance pressure?
Message processing timeouts and execution constraints
What is the timeout for processing a single received message?
At what point will the OS stop waiting for the extension’s response and allow the message by default (for example, if the extension does not respond in time)?
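For concreteness, this is roughly the shape of integration we are evaluating, sketched here with a hypothetical compiled text classifier (model name, loading, and labels are placeholders): load the model lazily once per launch and return the verdict to the completion handler as quickly as possible.
import IdentityLookup
import NaturalLanguage

final class MessageFilterExtension: ILMessageFilterExtension, ILMessageFilterQueryHandling {

    // Loaded once per extension launch; the size of this model is exactly what we need limits for.
    private lazy var classifier: NLModel? = {
        guard let url = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
            return nil
        }
        return try? NLModel(contentsOf: url)
    }()

    func handle(_ queryRequest: ILMessageFilterQueryRequest,
                context: ILMessageFilterExtensionContext,
                completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
        let response = ILMessageFilterQueryResponse()
        if let body = queryRequest.messageBody,
           classifier?.predictedLabel(for: body) == "junk" {
            response.action = .junk
        } else {
            response.action = .none
        }
        completion(response)
    }
}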
Any guidance, official references, or practical experience from Apple engineers or other developers would be greatly appreciated.
Thanks in advance for your help,
Is Private Cloud Compute allowed? Or are only on-device Foundation Models allowed? Thanks!
Greetings! I was trying to get a response from the LanguageModelSession but I just keep getting the following:
Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}
This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and on macOS 26 with the Xcode beta. Both simulators are iPhone 16 Pro.
I was wondering if anyone had any advice?
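In case it narrows things down for anyone else, checking model availability before creating the session at least distinguishes missing or unready assets from other failures; a minimal sketch using the availability API as documented:
import FoundationModels

let model = SystemLanguageModel.default
switch model.availability {
case .available:
    print("Model is available")
case .unavailable(let reason):
    // .modelNotReady would be consistent with the missing-asset error above.
    print("Model unavailable: \(reason)")
}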
Hi everyone,
I'm experiencing an inconsistent behavior with the Translation framework on iOS 18. The LanguageAvailability.status() API reports language models as .installed, but translation fails with Code 16.
Setup:
Using translationTask modifier with TranslationSession
Batch translation with explicit source/target languages
Languages: Portuguese→English, German→English
Issue:
let status = await LanguageAvailability().status(from: sourceLang, to: targetLang) // Returns: .installed
// But translation fails:
let responses = try await session.translations(from: requests)
// Error: TranslationErrorDomain Code=16 "Offline models not available"
Logs:
Language model installed: pt -> en
Language model installed: de -> en
Starting translation: de -> en
Error Domain=TranslationErrorDomain Code=16 "Translation failed"NSLocalizedFailureReason=Offline models not available for language pair
What I've tried:
Re-downloading languages in Settings
Using source: nil for auto-detection
Fresh TranslationSession.Configuration each time
Questions:
Is there a way to force model re-validation/re-download programmatically?
Should translationTask show download popup when Code 16 occurs?
Has anyone found a reliable workaround?
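One avenue that looks worth trying, per the Translation docs: call prepareTranslation() inside the translationTask closure before running the batch, which is supposed to confirm or (re)download the assets and present the download UI if needed. A minimal sketch (view and names hypothetical):
import SwiftUI
import Translation

struct BatchTranslateView: View {
    @State private var configuration = TranslationSession.Configuration(
        source: Locale.Language(identifier: "de"),
        target: Locale.Language(identifier: "en")
    )
    let requests: [TranslationSession.Request]

    var body: some View {
        Text("Translating…")
            .translationTask(configuration) { session in
                do {
                    // Should surface the download sheet if the offline model is missing or stale.
                    try await session.prepareTranslation()
                    let responses = try await session.translations(from: requests)
                    print("Translated \(responses.count) strings")
                } catch {
                    print("Translation failed: \(error)") // Code 16 still lands here when assets are broken
                }
            }
    }
}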
I've seen similar reports in threads 791357 and 777113. Any guidance appreciated!
Thanks!
Topic: Machine Learning & AI
SubTopic: General
We’ve encountered what appears to be a CoreML regression between macOS 26.0.1 and macOS 26.1 Beta.
In macOS 26.0.1, CoreML models run and produce correct results. However, in macOS 26.1 Beta, the same models produce scrambled or corrupted outputs, suggesting that tensor memory is being read or written incorrectly. The behavior is consistent with a low-level stride or pointer arithmetic issue — for example, using 16-bit strides on 32-bit data or other mismatches in tensor layout handling.
Reproduction
Install ON1 Photo RAW 2026 or ON1 Resize 2026 on macOS 26.0.1.
Use the newest Highest Quality resize model, which is Stable Diffusion–based and runs through CoreML.
Observe correct, high-quality results.
Upgrade to macOS 26.1 Beta and run the same operation again.
The output becomes visually scrambled or corrupted.
We are also seeing similar issues with another Stable Diffusion UNet model that previously worked correctly on macOS 26.0.1. This suggests the regression may affect multiple diffusion-style architectures, likely due to a change in CoreML’s tensor stride, layout computation, or memory alignment between these versions.
Notes
The affected models are exported using standard CoreML conversion pipelines.
No custom operators or third-party CoreML runtime layers are used.
The issue reproduces consistently across multiple machines.
It would be helpful to know if there were changes to CoreML’s tensor layout, precision handling, or MLCompute backend between macOS 26.0.1 and 26.1 Beta, or if this is a known regression in the current beta.
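One diagnostic that might help localize it: forcing a single compute backend when loading the model, to see whether the corruption is specific to the GPU/ANE path or also reproduces on CPU. A minimal sketch (paths hypothetical):
import CoreML

func loadForDiagnostics(at compiledModelURL: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .cpuOnly   // compare against .cpuAndGPU and .all
    return try MLModel(contentsOf: compiledModelURL, configuration: configuration)
}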
I'm working on my first model that detects bowling score screens, and I have it working with pictures no problem. But when it comes to video, I have a sizing issue.
I added my model to a small app I wrote for taking a picture of a bowling scoring screen; the model frames the screens it detects in the camera's video feed. Detection works, but my boxes are about 2/3 the size of the screens being detected. I don't fully understand the geometry of the video stream the camera is feeding me: I don't want to just fudge my rectangles larger, and I'm not sure whether the video feed is a different size from what I'm detecting against in code.
For example, is the video feed a fixed resolution like 1920x something, or a much higher resolution in the 12-megapixel range?
On a static image of, say, 1920x something, my alignment is perfect.
AI tools tell me it's my model training: that I'm training on square images while video is 16:9, or that I'm producing 4:3 images in a 16:9 environment.
I'm missing something here, but I'm not sure what it is. I already wrote code to force the boxes to fit, but reverted back to trying for a natural fit.
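From what I've read so far, the usual bookkeeping is: Vision returns normalized bounding boxes with the origin at the bottom-left, and those have to be mapped into the preview layer's coordinate space, where the layer's videoGravity scaling and cropping determine the on-screen size. A sketch of that mapping, assuming an AVCaptureVideoPreviewLayer and frames that arrive upright relative to the UI (otherwise an extra rotation is needed):
import Vision
import AVFoundation

// Maps a Vision observation's normalized bounding box (origin bottom-left) into the
// preview layer's coordinates, letting the layer account for its videoGravity scaling/cropping.
func layerRect(for observation: VNDetectedObjectObservation,
               in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    let box = observation.boundingBox
    // Flip to the top-left-origin, normalized space that the preview layer expects.
    let metadataRect = CGRect(x: box.origin.x,
                              y: 1 - box.origin.y - box.height,
                              width: box.width,
                              height: box.height)
    return previewLayer.layerRectConverted(fromMetadataOutputRect: metadataRect)
}
If the rects were instead scaled by the raw buffer resolution against a view of a different size, a constant factor like the 2/3 above is the kind of error that would show up.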
Topic: Machine Learning & AI
SubTopic: Core ML
I'm experimenting with Foundation Models and I'm trying to understand how to define a Tool whose input argument is defined at runtime. Specifically, I want a Tool that takes a single String parameter that can only take certain values defined at runtime.
I think my question is basically the same as this one: https://developer.apple.com/forums/thread/793471 However, the answer provided by the engineer doesn't actually demonstrate how to create the GenerationSchema. Trying to piece things together from the documentation that the engineer linked to, I came up with this:
let citiesDefinedAtRuntime = ["London", "New York", "Paris"]
let citySchema = DynamicGenerationSchema(
    name: "CityList",
    properties: [
        DynamicGenerationSchema.Property(
            name: "city",
            schema: DynamicGenerationSchema(
                name: "city",
                anyOf: citiesDefinedAtRuntime
            )
        )
    ]
)
let generationSchema = try GenerationSchema(root: citySchema, dependencies: [])
let tools = [CityInfo(parameters: generationSchema)]
let session = LanguageModelSession(tools: tools, instructions: "...")
With the CityInfo Tool defined like this:
struct CityInfo: Tool {
    let name: String = "getCityInfo"
    let description: String = "Get information about a city."
    let parameters: GenerationSchema
    func call(arguments: GeneratedContent) throws -> String {
        let cityName = try arguments.value(String.self, forProperty: "city")
        print("Requested info about \(cityName)")
        let cityInfo = getCityInfo(for: cityName)
        return cityInfo
    }
    func getCityInfo(for city: String) -> String {
        // some backend that provides the info
    }
}
This compiles and usually seems to work. However, sometimes the model will try to request info about a city that is not in citiesDefinedAtRuntime. For example, if I prompt the model with "I want to travel to Tokyo in Japan, can you tell me about this city?", the model will try to request info about Tokyo, even though this is not in the citiesDefinedAtRuntime array.
My understanding is that this should not be possible – constrained generation should only allow the LLM to generate an input argument from the list of cities defined in the schema.
Am I missing something here or overcomplicating things?
What's the correct way to make sure the LLM can only call a Tool with an input parameter from a set of possible values defined at runtime?
Many thanks!
Topic: Machine Learning & AI
SubTopic: Foundation Models
I've created the following Foundation Models Tool, which uses the .anyOf guide to constrain the LLM's generation of suitable input arguments. When calling the tool, the model is only allowed to request one of a fixed set of sections, as defined in the sections array.
struct SectionReader: Tool {
    let article: Article
    let sections: [String]
    let name: String = "readSection"
    let description: String = "Read a specific section from the article."
    var parameters: GenerationSchema {
        GenerationSchema(
            type: GeneratedContent.self,
            properties: [
                GenerationSchema.Property(
                    name: "section",
                    description: "The article section to access.",
                    type: String.self,
                    guides: [.anyOf(sections)]
                )
            ]
        )
    }
    func call(arguments: GeneratedContent) async throws -> String {
        let requestedSectionName = try arguments.value(String.self, forProperty: "section")
        ...
    }
}
However, I have found that the model will sometimes call the tool with invalid (but plausible) section names, meaning that .anyOf is not actually doing its job (i.e. requestedSectionName is sometimes not a member of sections).
The documentation for the .anyOf guide says, "Enforces that the string be one of the provided values."
Is this a bug or have I made a mistake somewhere?
Many thanks for any help you provide!
Topic: Machine Learning & AI
SubTopic: Foundation Models
Environment
macOS 26
Xcode Version 26.0 beta 7 (17A5305k)
Simulator: iPhone 16 Pro
iOS 26
Problem
NLContextualEmbedding.load() fails with the following error
In the simulator
Failed to load embedding from MIL representation: filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
Failed to load embedding model 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289'
assetRequestFailed(Optional(Error Domain=NLNaturalLanguageErrorDomain Code=7 "Embedding model requires compilation" UserInfo={NSLocalizedDescription=Embedding model requires compilation}))
in #Playground
I'm new to this embedding model. Not sure if it's caused by my code or environment.
Code snippet
import Foundation
import NaturalLanguage
import Playgrounds

#Playground {
    // Prefer initializing by script for broader coverage; returns NLContextualEmbedding?
    guard let embeddingModel = NLContextualEmbedding(script: .latin) else {
        print("Failed to create NLContextualEmbedding")
        return
    }
    print(embeddingModel.hasAvailableAssets)
    do {
        try embeddingModel.load()
        print("Model loaded")
    } catch {
        print("Failed to load model: \(error)")
    }
}
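One more thing that seems worth trying, based on my reading of the NaturalLanguage docs: explicitly request the embedding assets before calling load(), in case the "requires compilation" step has to be kicked off first. A sketch (API names as I understand them):
import NaturalLanguage

func loadLatinEmbedding(completion: @escaping (NLContextualEmbedding?) -> Void) {
    guard let embedding = NLContextualEmbedding(script: .latin) else {
        completion(nil)
        return
    }
    embedding.requestAssets { result, error in
        // result should be .available once the assets have been fetched and compiled.
        guard result == .available else {
            print("Embedding assets not available: \(String(describing: error))")
            completion(nil)
            return
        }
        do {
            try embedding.load()
            completion(embedding)
        } catch {
            print("Load failed after asset request: \(error)")
            completion(nil)
        }
    }
}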
I’d like to submit a feature request regarding the availability of Foundation Models in MessageFilter extensions.
Background
MessageFilter extensions play a critical role in protecting users from spam, phishing, and unwanted messages. With the introduction of Foundation Models and Apple Intelligence, Apple has provided powerful on-device natural language understanding capabilities that are highly aligned with the goals of MessageFilter.
However, Foundation Models are currently unavailable in MessageFilter extensions.
Why Foundation Models Are a Great Fit for MessageFilter
Message filtering is fundamentally a natural language classification problem. Foundation Models would significantly improve:
Detection of phishing and scam messages
Classification of promotional vs transactional content
Understanding intent, tone, and semantic context beyond keyword matching
Adaptation to evolving scam patterns without server-side processing
All of this can be done fully on-device, preserving user privacy and aligning with Apple’s privacy-first design principles.
Current Limitations
Today, MessageFilter extensions are limited to relatively simple heuristics or lightweight models. This often results in:
Higher false positives
Lower recall for sophisticated scam messages
Increased development complexity to compensate for limited NLP capabilities
Request
Could Apple consider one of the following:
Allowing Foundation Models to be used directly within MessageFilter extensions
Providing a constrained or optimized Foundation Model API specifically designed for MessageFilter
Enabling a supported mechanism for MessageFilter extensions to delegate inference to the containing app using Foundation Models
Even limited access (e.g. short text only, strict execution limits) would be extremely valuable.
Closing
Foundation Models have the potential to significantly raise the quality and effectiveness of message filtering on Apple platforms while maintaining strong privacy guarantees. Supporting them in MessageFilter extensions would be a major improvement for both developers and users.
Thank you for your consideration and for continuing to invest in on-device intelligence.
Hi! I noticed that on my father's M1 Max MacBook Pro (64 GB RAM) there's an option for style transfer that I don't see on my M1 MacBook Air (16 GB RAM). I am running macOS Tahoe and he is running macOS Sequoia.
Topic: Machine Learning & AI
SubTopic: Create ML
My app lets you create images with Image Playground. When the user approves an image I move it to the documents dir from the temp storage. With over a year of usage I’ve created a lot of images over time.
Out of nowhere the app stopped loading my custom creations from Image Playground saying it couldn’t find the files. It still had my VoiceOver strings I had added for each image and still had the custom categories I assigned them.
Debug code to look in the docs dir doesn’t find them. I downloaded the app’s container and only see the images I created as a test after the problem started.
But my ~70MB app is still taking up 300MB on my iPhone so it feels like they’re there but not accessible.
Is there anything else I can try?
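One more thing I can think to try: a debug sketch that totals file sizes per sandbox directory (standard FileManager calls; which directories to check is just a guess), to see whether the missing images still exist somewhere on disk and account for the extra ~230 MB.
import Foundation

func auditSandboxUsage() {
    let fileManager = FileManager.default
    var roots = fileManager.urls(for: .documentDirectory, in: .userDomainMask)
    roots += fileManager.urls(for: .cachesDirectory, in: .userDomainMask)
    roots.append(fileManager.temporaryDirectory)

    for root in roots {
        var totalBytes: Int64 = 0
        let keys: [URLResourceKey] = [.fileSizeKey, .isRegularFileKey]
        if let enumerator = fileManager.enumerator(at: root, includingPropertiesForKeys: keys) {
            for case let fileURL as URL in enumerator {
                let values = try? fileURL.resourceValues(forKeys: Set(keys))
                if values?.isRegularFile == true {
                    totalBytes += Int64(values?.fileSize ?? 0)
                }
            }
        }
        print("\(root.lastPathComponent): \(totalBytes) bytes")
    }
}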
When trying to open an encrypted Core ML model file on a system with SIP disabled, the error message is:
Failed to generate key request for <...> with error: -42187
This error message should state that SIP is disabled and needs to be enabled.
We are developing Apple Intelligence features for overseas markets and adapting them for iPhone 17 and later models. When the system language and Siri language do not match (for example, the system set to English while Siri is set to Chinese), Apple Intelligence may be unusable. How can this issue be resolved, and what other conditions might cause it to be unavailable within our app?
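If the app uses the Foundation Models framework (an assumption on my part), one way to surface the reason inside the app instead of failing silently is to check model availability and report the unavailable reason; a language/Siri mismatch would presumably show up as one of the cases below. A minimal sketch:
import FoundationModels

func appleIntelligenceStatus() -> String {
    switch SystemLanguageModel.default.availability {
    case .available:
        return "Apple Intelligence model is available."
    case .unavailable(.deviceNotEligible):
        return "This device does not support Apple Intelligence."
    case .unavailable(.appleIntelligenceNotEnabled):
        return "Apple Intelligence is not enabled; check the language, region, and Siri settings."
    case .unavailable(.modelNotReady):
        return "The model is still downloading or preparing; try again later."
    case .unavailable(let other):
        return "Unavailable for another reason: \(other)"
    }
}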