How do I test the new RecognizeDocumentRequest API? Reference: https://www.youtube.com/watch?v=H-GCNsXdKzM
I am running the Xcode beta; however, my only primary device is one I cannot install beta software on.
Please suggest a testing strategy. Will the simulator work?
This new capability is critical to my application: it is just what I need for structuring document scans and extracting their content.
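For reference, this is roughly the call I want to exercise, based on the session video (a minimal sketch; I haven't been able to run it yet, and the exact names may differ in the shipping seed):

import Vision

// Hypothetical usage of the new document-recognition request,
// assuming a file URL pointing at a scanned page.
func recognizeDocument(at url: URL) async throws {
    let request = RecognizeDocumentRequest()
    let observations = try await request.perform(on: url)
    if let document = observations.first?.document {
        // The observation exposes the document's structure
        // (text regions, tables, lists) for extraction.
        print(document)
    }
}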
Thank you.
When the context window size is exceeded, this error is not thrown (a different error shows up instead), so I can't catch it to start a new session:
LanguageModelSession.GenerationError.exceededContextWindowSize
Or am I doing things wrong?
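For context, this is how I expected to handle it (a minimal sketch, assuming the documented error case; this catch never fires for me):

import FoundationModels

// Respond, and start a fresh session if the context window overflows.
func respond(to prompt: String) async throws -> String {
    var session = LanguageModelSession()
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Expected path: recreate the session with an empty transcript
        // and retry; in practice a different error is thrown instead.
        session = LanguageModelSession()
        return try await session.respond(to: prompt).content
    }
}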
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi everyone,
I'm a Mac enthusiast experimenting with tensorflow-metal on my Mac Pro (2013). My question is about GPU selection in tensorflow-metal (v0.8.0), which still supports Intel-based Macs, including my machine.
I've noticed that when running TensorFlow with Metal, it automatically selects a GPU, regardless of what I specify using device indices like "gpu:0", "gpu:1", or "gpu:2". I'm wondering if there's a way to manually specify which GPU should be used via an environment variable or another method.
For reference, I’ve tried the example from TensorFlow’s guide on multi-GPU selection: https://www.tensorflow.org/guide/gpu#using_a_single_gpu_on_a_multi-gpu_system
My goal is to explore performance optimizations by using MirroredStrategy in TensorFlow to leverage multiple GPUs: https://www.tensorflow.org/guide/distributed_training#mirroredstrategy
Interestingly, I discovered that the metalcompute Python library (https://pypi.org/project/metalcompute/) lets me use manually selected GPUs on my system, enabling proper multi-GPU computations. This makes me wonder:
Is there a hidden environment variable or setting that allows manual GPU selection in tensorflow-metal?
Has anyone successfully used MirroredStrategy on multiple GPUs with tensorflow-metal?
Would a bridge between metalcompute and tensorflow-metal be necessary for this use case, or is there a more direct approach?
I’d love to hear if anyone else has experimented with this or has insights on getting finer control over GPU selection. Any thoughts or suggestions would be greatly appreciated!
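For comparison, Metal itself happily enumerates every GPU in this machine, so manual selection is clearly possible at that layer; this is presumably what metalcompute builds on (a small Swift sketch of my own, not anything tensorflow-metal exposes):

import Metal

// List all GPUs visible to Metal and pick one by index.
// On a 2013 Mac Pro this should show both FirePro GPUs.
let devices = MTLCopyAllDevices()
for (index, device) in devices.enumerated() {
    print("gpu:\(index) -> \(device.name)")
}
// Manually select the second GPU, if present.
let selected = devices.count > 1 ? devices[1] : devices.first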
Thanks!
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea what it means or what to try in order to fix it.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Machine Learning
Create ML
Apple Intelligence
I'm experimenting with downloading an audio file of spoken content, using the Speech framework to transcribe it, then using FoundationModels to clean up the formatting to add paragraph breaks and such. I have this code to do that cleanup:
private func cleanupText(_ text: String) async throws -> String? {
    print("Cleaning up text of length \(text.count)...")
    let session = LanguageModelSession(instructions: "The content you read is a transcription of a speech. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines.")
    let response = try await session.respond(to: .init(text), generating: String.self)
    return response.content
}
The content is about 29,000 characters long, and I get this error:
InferenceError::inferenceFailed::Failed to run inference: Context length of 4096 was exceeded during singleExtend..
Is 4096 a reference to a max input length? Or is this a bug?
This is running on an M1 iPad Air, with iPadOS 26 Seed 1.
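If 4096 really is an input ceiling, one workaround I'm considering is splitting the transcript and cleaning up each chunk in its own session (a rough sketch; the chunk size is my guess, assuming roughly 4 characters per token plus room for the echoed output):

// Rough sketch: split the transcript into chunks, clean up each
// chunk with the cleanupText(_:) above (which already creates a
// fresh session per call), then rejoin the pieces.
private func cleanupLongText(_ text: String) async throws -> String {
    let chunkSize = 6_000 // chars; a guess to keep input + output under ~4,096 tokens
    var cleaned: [String] = []
    var start = text.startIndex
    while start < text.endIndex {
        let end = text.index(start, offsetBy: chunkSize, limitedBy: text.endIndex) ?? text.endIndex
        if let piece = try await cleanupText(String(text[start..<end])) {
            cleaned.append(piece)
        }
        start = end
    }
    return cleaned.joined(separator: "\n")
}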
Hi everyone! 👋
I'm working on a C++ project using TensorFlow Lite and was wondering if anyone has a prebuilt TensorFlow Lite C++ library (libtensorflowlite) for macOS (Apple Silicon M1/M2) that they’d be willing to share.
I’m looking specifically for the TensorFlow Lite C++ API — something that lets me use tflite::Interpreter, tflite::FlatBufferModel, etc. Building it from source using Bazel on macOS has been quite challenging and time-consuming, so a ready-to-use .dylib or .a build along with the required headers would be incredibly helpful.
TensorFlow Lite version: v2.18.0 preferred
Target: macOS arm64 (Apple Silicon)
What I need:
libtensorflowlite.dylib or .a
Corresponding headers (ideally organized in a clean include/ folder)
If you have one available or know where I can find a reliable prebuilt version, I’d be super grateful. Thanks in advance! 🙏
Looking for J: is this a safe place for discussing full apps I've built but not submitted or shared? I have maybe over 100, but I had been unaware that any assistance was provided.
Is there a formal process for submitting an app for review to improve the OS, other than during App Store review?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Design
Developer Tools
iCloud Drive
Xcode
Hi,
I have an app that uses Core Data to store user information and display it in various views. I want to know if it's possible to easily integrate this setup with FoundationModels to make it easier for the user to query and manipulate the information, and if so, how would I go about it? Can the model be pointed to the database schema file and the SQLite file sitting in the user's app group container to parse out the information needed? And/or should the NSManagedObjects be made @Generable for better output? Any guidance about this would be useful.
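For what it's worth, my assumption is that the model can't read the SQLite file or schema directly; it only sees what I put in the prompt or expose via tools. So my current thinking is to fetch records myself and hand the model lightweight @Generable mirrors of the entities (a hypothetical sketch; PersonRecord and its fields are stand-ins for my real schema):

import FoundationModels

// Hypothetical plain-struct mirror of an NSManagedObject subclass.
// @Generable lets the model produce or consume this shape directly,
// while the app remains responsible for reading/writing Core Data.
@Generable
struct PersonRecord {
    @Guide(description: "The person's full name")
    var name: String
    @Guide(description: "Age in years")
    var age: Int
}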
I am using a contacts tool to help get contacts from my address book, but the model isn't invoking my tool's call method. I even tried with a simple tool, and the outcome is the same: my simple tool is not being invoked.
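For reference, this is the shape of what I'm registering (a stripped-down sketch of my setup, assuming the seed's Tool protocol; the tool body here is a placeholder, not my real contacts lookup):

import FoundationModels

// Minimal tool the model should be able to invoke.
struct ContactsTool: Tool {
    let name = "findContact"
    let description = "Finds a contact in the user's address book by name."

    @Generable
    struct Arguments {
        @Guide(description: "The name to search for")
        var query: String
    }

    func call(arguments: Arguments) async throws -> String {
        // Placeholder result; the real tool queries CNContactStore.
        "No contact named \(arguments.query) was found."
    }
}

let session = LanguageModelSession(
    tools: [ContactsTool()],
    instructions: "Use the findContact tool to answer questions about contacts."
)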
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello Apple Developer Community,
I'm investigating Core ML model loading behavior and noticed that even when the compiled model path remains unchanged after an app update, the first run still triggers an "uncached load". This adds an unnecessary delay that hurts the user experience.
Question: Does Core ML provide any public API to check whether a compiled model (from a specific .mlmodelc path) is already cached in the system?
If such an API exists, we'd like to use it in our pre-loading decision logic: only perform a background pre-load when the model isn't cached.
Has anyone encountered similar scenarios or found official solutions? Any insights would be greatly appreciated!
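For clarity, what we do today is an unconditional background warm-up along these lines (a sketch; the function name and URL handling are placeholders), and we'd like to guard it with a cache check:

import CoreML

// Unconditionally warm the model in the background.
// We would prefer to skip this when the system cache is already hot,
// but we haven't found a public API that reports cache state.
func preloadModel(at compiledURL: URL) {
    Task.detached(priority: .background) {
        do {
            let config = MLModelConfiguration()
            let model = try await MLModel.load(contentsOf: compiledURL, configuration: config)
            _ = model // keep a reference if the first real use is imminent
        } catch {
            print("Pre-load failed: \(error)")
        }
    }
}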
Do we know what a safe max token limit is? After some iterating, I have come to believe 4096 might be the limit on device.
Could you help me out by answering any of these questions:
Is 4096 the correct limit?
Do all devices have the same limit?
Will the limit change over time or by device?
The errors I get when going over the limit do not explicitly say "you are over the limit," so it's only by trial and error that I figure these issues out.
Thanks for the fun new toys.
Regards,
Rob
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I've been struggling to add Siri support to an application I have developed. I kept getting this error when I run it on macOS:
Failed to refresh AppShortcut parameters with error: Error Domain=Foundation._GenericObjCError Code=0 "(null)"
So I found AppIntentsSampleApp, downloaded and built it, and I get a similar, but larger, error:
Failed to refresh AppShortcut parameters with error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND originator doesn't have entitlement com.apple.runningboard.launchprocess)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND j
And it goes on and on.
What am I missing? I'm using Xcode 16. I don't see an option to add a Siri framework. I have tried adding both the Intents and App Intents frameworks, which does not seem to make a difference.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I'm developing an activity classifier that I'd like to feed with CoreMotion data in JSON format.
I am getting the error:
Unable to parse /Users/DewG/Downloads/Testing/Step1/Testing.json. It does not appear to be in JSON record format. A SequenceType of dictionaries is expected
I've verified that the format I am using is JSON via various JSON validators, so I am expecting I'm just holding it wrong. Is there an example of a JSON file with CoreMotion data that I can model after?
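My best guess from the error text ("A SequenceType of dictionaries is expected") is that the file must be a top-level JSON array of flat objects, one per sensor sample, rather than a single object. Something like this is what I'd try (the column names here are placeholders; they must match the features the classifier is configured with):

[
  {"session": "walk_01", "label": "walking", "accelX": 0.012, "accelY": -0.98, "accelZ": 0.05},
  {"session": "walk_01", "label": "walking", "accelX": 0.018, "accelY": -0.97, "accelZ": 0.04},
  {"session": "run_01",  "label": "running", "accelX": 0.210, "accelY": -0.80, "accelZ": 0.15}
]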
Hi,
testing the latest tensorflow-metal plugin with TensorFlow 2.20 doesn't work.
Using Python:
Python 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] on darwin
Simple testing shows an error:
import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Reason: tried: '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
tf.config.experimental.list_physical_devices('GPU')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'tf' is not defined
I fixed this error by copying _pywrap_tensorflow_internal.so to where it's searched for:
1) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64
2) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
3) cp /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
Then it fails with a symbol-not-found error:
Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
in libmetal_plugin.dylib
Full log, with import tensorflow as tf:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: <2FF91C8B-0CB6-3E66-96B7-092FDF36772E> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so
I am using the iPhone 17 Pro simulator that was included with Xcode 26.0.1. My Mac is running macOS 26. When I started the simulator for the first time I got the "Ready for Apple Intelligence" notification but when I access Image Playground in my app it says it is not available on this iPhone. Any solution to get it working on the simulator?
Does CoreML object detection only support AABBs (Axis-Aligned Bounding Boxes), or also OBBs (Oriented Bounding Boxes)? If not, is there any way to do it using Apple frameworks?
Topic:
Machine Learning & AI
SubTopic:
General
We have suddenly encountered a serious issue: our local ML models are no longer being decrypted.
Everything was set up according to the guide at https://developer.apple.com/documentation/coreml/generating-a-model-encryption-key and had been working in production, but yesterday we started receiving the following error:
Error Domain=com.apple.CoreML Code=8 "Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID." UserInfo={NSLocalizedDescription=Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID.}
We haven’t changed anything in our code. This started spontaneously affecting users of the release version as of yesterday. It also no longer works locally — we receive the same error at the moment the autogenerated function is called:
class func load(configuration: MLModelConfiguration = MLModelConfiguration(), completionHandler handler: @escaping (Swift.Result<ZingPDModel, Error>) -> Void)
I assume that I can generate a new key through Xcode, integrate it in place of the old one, and it might start working again. However, this won’t affect existing users until they update the app.
Could the issue be on Apple’s infrastructure side?
Topic:
Machine Learning & AI
SubTopic:
Core ML
I'm working on generating images using Image Playground. The code works fine for other styles but fails when using an external provider. I don't see any other requirements mentioned in the documentation: https://developer.apple.com/documentation/imageplayground/imageplaygroundstyle/externalprovider?changes=_2
The error message is also not very helpful; it simply states that the creation failed.
Note: I have enabled ChatGPT Plus, and image generation using ChatGPT styles works fine in the Playground app.
Has anyone else encountered a similar issue? Here's the relevant code snippet:
do {
    let creator = try await ImageCreator()
    let concept = ImagePlaygroundConcept.text("Love")
    let images = creator.images(for: [concept], style: .externalProvider, limit: 1)
    for try await image in images {
        // Handle image
        break
    }
} catch {
    // Handle error
}
I'm using the iOS 26 RC, and when I print creator.availableStyles, it doesn't include the external provider:
[ImagePlayground.ImagePlaygroundStyle(id: "animation", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "emoji", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "illustration", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "sketch", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "messages-background", _representationInfo: nil)]
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hey everyone,
Is it possible to generate XML using the “Generable” macro of the Foundation Model Framework?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Foundation Models are driving me up the wall.
My use case: A news app - I want to summarize news articles. Sounds like a perfect use for the added-in-beta-5 "no guardrails" mode for text-to-text transformations...
... and it's true, I no longer get guardrails exceptions. But now the model itself frequently refuses to summarize things, which in a way is even worse: I have to parse the output text to figure out whether it failed instead of getting an exception. I mostly worked around that with my system instructions, but the refusals still make it really tough to use.
I instructed the model to tell me why it failed if that happens.
Examples of various refusals for news articles from major sources:
"The article mentions "Visual Lookup" but does not provide details about how it integrates with iOS 26."
"The article includes unsafe content regarding a political figure's potential influence over the Federal Reserve board, which is against my guidelines."
"the article contains unsafe content."
"The article is biased and opinionated and focuses on the author's opinion."
(this is despite my instructions specifically asking for a neutral summary; I ask it to keep bias out of the output, but it still refuses)
I have tons of these. Note that if I don't use the "no guardrails" mode and use a Generable instead, some of these work fine, so right now I have to do two passes on much of the content, since I never know which one will work.
Having a "summary mode" that often refuses to summarize current news articles (the world is not a great place, some of these stories are a bummer) is near worthless.
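For the record, here's the two-pass fallback I've ended up with (a condensed sketch; the permissive-guardrails option is the beta 5 mode mentioned above, and SummaryResult is my own Generable type, so treat the exact shape as an approximation of my setup):

import FoundationModels

@Generable
struct SummaryResult {
    @Guide(description: "A neutral three-sentence summary of the article")
    var summary: String
}

// First pass: permissive text-to-text session; if the model refuses
// in prose, fall back to a guided-generation pass, which succeeds on
// some articles the first pass refuses.
func summarize(_ article: String) async throws -> String {
    let permissive = LanguageModelSession(
        guardrails: .permissiveContentTransformations,
        instructions: "Summarize the article neutrally. Begin any refusal with REFUSED:"
    )
    let first = try await permissive.respond(to: article).content
    if !first.hasPrefix("REFUSED:") { return first }

    // Second pass: guided generation with a Generable output type.
    let guided = LanguageModelSession(instructions: "Summarize the article neutrally.")
    return try await guided.respond(to: article, generating: SummaryResult.self).content.summary
}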