For practice, I have implemented accessibility labels and announcements in a very simple test app (all SwiftUI, all iOS 18).
The app is not localized; the default language is English.
When running this on a German phone, odd things happen with localization. My accessibility labels are read with an accent, but when they contain a URL, the "dots" are read as the German "Punkt" (with an English accent).
When I provide the same text as an accessibility announcement, that text (which is in English) is read with a German voice.
I am also providing a Button with an "arrow.clockwise" image, and VoiceOver reads this, in an English voice, as "Refresh, Button". This is great and was to be expected. However, when the button is disabled, VoiceOver reads "Refresh, grau dargestellt, Button", all in an English voice.
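For reference, a minimal sketch of the setup described above (view and strings are hypothetical; the announcement uses the iOS 17+ AccessibilityNotification API):

import SwiftUI
import Accessibility

struct ContentView: View {
    var body: some View {
        VStack {
            Text("Server status")
                // Label containing a URL, as described above.
                .accessibilityLabel("Status page at example.com/status")

            // Read by VoiceOver as "Refresh, Button".
            Button {
                // Post the same text as an announcement.
                AccessibilityNotification.Announcement("Status page at example.com/status")
                    .post()
            } label: {
                Image(systemName: "arrow.clockwise")
            }
        }
    }
}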
Is this an error? Am I doing it wrong?
The video at the link below should show the issue:
https://share.icloud.com/photos/0757FJW2Q3fsA_cdhMX6ls46Q
Accessibility broken after updating to Xcode 16.1
There is a call to accessibilityLabel that sets an a11y label for the title of a view.
This used to work (pronounced by VoiceOver) with Xcode 15.4 + iOS 17.5.
With Xcode 16.1 + iOS 18.1 on a physical device or the iOS Simulator, Accessibility Inspector shows no a11y label set.
I tried Xcode 16.2 beta 3 with the same result: accessibilityLabel does not work, and the a11y label is not set.
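A minimal sketch of the kind of code affected (view and strings hypothetical):

import SwiftUI

struct TitleView: View {
    var body: some View {
        Text("Overview")
            // With Xcode 15.4 / iOS 17.5 this label appeared in
            // Accessibility Inspector; with 16.1 / iOS 18.1 it reportedly does not.
            .accessibilityLabel("Overview, screen title")
    }
}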
I am making an app that supports iOS 13 and above. I would like to know how we can open Settings directly when a button in our app is tapped.
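Assuming the goal is the app's own page in Settings, a minimal sketch using the documented openSettingsURLString (deep links to arbitrary Settings sections are not officially supported):

import UIKit

// Opens this app's own page in the Settings app (iOS 8+, so fine for iOS 13+).
func openAppSettings() {
    guard let url = URL(string: UIApplication.openSettingsURLString) else { return }
    UIApplication.shared.open(url)
}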
It is outrageous that Apple continue to fail to implement the Fullscreen API web standard for web apps on iPhone only, which is so important to accessibility and web app functionality.
The only possible reason for this block is commercial: to promote iOS apps instead of browser based web apps.
To quote a client from a major agency just now - a typical enquiry:
We value accessibility greatly, and we noticed that the embedded player is missing a full screen button on iPhone.
Everything else works perfectly fine, including a full screen button that appears on the mobile webpage on android devices.
Is there any way we can include a button to enable full screen view for our viewers in your player that are going to watch it on iOS devices?
To which, as usual, we have to reply:
Apple unfortunately block fullscreen mode from being used with all web applications on iPhone.
Apple will allow this to be displayed fullscreen on MacBooks and iPads, but currently not on iPhone - so we have to hide the fullscreen button there.
So fullscreen works on all devices and browsers apart from on iPhone.
As you've seen with Android, all other devices and browsers follow the universal 'Fullscreen API' web standard to allow full screen.
You're probably familiar with seeing the fullscreen button on normal linear videos on iPhone.
These use Apple's native video player, which doesn't let buttons and scripts be used on top of it - just a single video, not an interactive web application.
Our player looks like a video player but it is actually a web app combining multiple different video clips connected together by code and styling.
They block it on iPhones for reasons known only to them, but the assumption is that it is to incentivise people to make iOS apps instead of web apps.
The web development community is hopeful that Apple will change this unfortunate restriction soon, but we have been waiting a long time in vain.
We have to send this to a lot of people. It's a very bad look for Apple.
In less than a month it will be 2025. We have been waiting years for this.
The web standard documentation showing universal support on other devices and browsers is here:
https://developer.mozilla.org/en-US/docs/Web/API/Fullscreen_API
This is not acceptable. It is time for Apple to stop blocking this important accessibility web standard for commercial reasons - only on iPhone. To whoever is in charge of these decisions in the Safari/Webkit team: Please just enable Fullscreen API for web apps on iPhone as soon as possible.
We have an app with a large audience (around 2.1M DAUs), and because of this we build it with accessibility first in mind.
In that app, we link to specific iOS accessibility settings (such as VoiceOver, Display & Text, etc.) in our menu screens, to offer the user a shortcut to customize VoiceOver behaviour, text size, and so on.
Unfortunately, since iOS 18 these links no longer work: they all open the Settings app, but don't navigate any further.
It appears (through support) that users rely on these links to easily reach the settings, mostly older people trained to go this way in computer courses.
We used to open the Settings app through the App-prefs scheme, but it seems broken in iOS 18.
e.g. App-prefs:root=ACCESSIBILITY&path=VOICEOVER_TITLE
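For reference, a sketch of the call we were making (App-prefs is an undocumented scheme, so breakage like this is always a risk):

import UIKit

func openVoiceOverSettings() {
    guard let url = URL(string: "App-prefs:root=ACCESSIBILITY&path=VOICEOVER_TITLE") else { return }
    // On iOS 18 this opens the Settings app but no longer navigates further.
    UIApplication.shared.open(url)
}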
I know about the AccessibilitySettings API, but it seems to be limited to one specific feature.
Is there a way we can get these links to work again?
Hello. Basically, when I play a game and a notification comes in, the game lags badly, down to I think 20-30 fps, and stutters; when the notification is gone it works normally. Also, sometimes when I open Control Center the animation is slow and laggy.
I bought this phone about a week ago and this makes me sad :(.
I am an artist (singer-songwriter) and I use the Photos app to manage albums related to my various creative projects. These are some big issues that I am surprised were never taken into account, or were perhaps overlooked:
Missing search bar when adding photos to albums: why is there no search bar when adding a photo to one of hundreds of albums? (Artists like me like to organise things into different albums and folders.)
I can no longer search for albums by name after the iOS 18 update, which was previously very helpful in quickly locating them.
Albums can be arranged and moved within the same folder, but there is no way to move albums between different folders; the only way is to create a new album in the target folder, select and transfer everything, and delete the old album.
Hello,
I am reaching out because I believe your product, the Vision Pro, could significantly improve the quality of life for individuals with visual impairments, and I thought my personal experience might be of interest to you.
We could discuss this in more detail, but to respect your time, I’ll get straight to the point:
I have retinitis pigmentosa, a rare retinal disease for which there is currently no treatment. This condition causes a progressive narrowing of the visual field (potentially leading to blindness) and a deficit in photoreceptors (let’s just say I’m not exactly a night owl).
In my case, it has become impossible to go out alone in the dark or even see in dim light. (Goodbye evening parties—I can’t even find the entrance to a nightclub, let alone navigate the dance floor!). However, I’ve discovered that sometimes, simply looking through my phone screen and using its brightness helps me see much better.
Over the years, I’ve imagined how amazing it would be if a pair of glasses could simply display the image my eyes are supposed to perceive, but with enhanced brightness. It would allow me to live my life as freely as others, whether that’s venturing out at night or finding that elusive pen lost in the depths of my apartment. I initially looked into the Google Glass project, for example, but it pales in comparison to what Apple is now creating, don’t you think?
What amuses me most is that what some see as a tool that isolates users from reality could actually become an inclusion device for people like me, who would use it to go out and engage with the world. (I can’t count how many times I’ve gone home early in winter because of the anxiety caused by the early darkness, or turned down after-work gatherings with my DevOps colleagues.)
The Vision Pro could simply restore reality for us by enhancing what has been progressively lost.
And that’s just for nighttime! I can only imagine how helpful it could be during the day—for instance, by detecting obstacles or highlighting dangerous zones in a person’s limited field of vision. One could even use OCR technology to map the results of a visual field test and provide tailored assistance.
What incredible potential…
I dream of a day when ideas like these become a reality, and I wanted to share them with you. This wouldn’t just help me—it could help many others as well.
Thank you for taking the time to read this message. I would be delighted to contribute in any way, should these development directions resonate with you now or in the future.
Wishing you an excellent evening,
Hugo Bled
In SwiftUI, iOS 18.1.1, Xcode 16.1, the following control:
Text(12345678, format: .byteCount(style: .binary))
displays text with an MB (megabytes) unit, but German VoiceOver reads "MB" as "millibars".
I tried explicitly specifying the units with:
Text(12345678, format: .byteCount(style: .memory, allowedUnits: .mb))
but the result is the same (German VoiceOver still says "millibars").
Aside from creating my own accessibility label, is there any way to work around that?
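For completeness, the explicit-label workaround might look like this sketch (FileSizeText is hypothetical; it spells the unit out via Measurement, so rounding can differ slightly from the byteCount style):

import SwiftUI

struct FileSizeText: View {
    let byteCount: Int64

    var body: some View {
        Text(byteCount, format: .byteCount(style: .memory))
            // Spell the unit out so VoiceOver cannot misread "MB" as
            // millibars; .wide width yields e.g. "11.8 megabytes".
            .accessibilityLabel(
                Measurement(value: Double(byteCount), unit: UnitInformationStorage.bytes)
                    .converted(to: .megabytes)
                    .formatted(.measurement(width: .wide, usage: .asProvided))
            )
    }
}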
Hey folks,
I want VoiceOver to speak punctuation in certain cases. On iOS, there seems to be the UIAccessibilitySpeechAttributePunctuation attributed string key to achieve that, but I can't find an alternative for macOS.
What is the recommended approach for achieving the same result?
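For reference, a sketch of the iOS-side usage mentioned above; I don't know of a documented AppKit equivalent (label and string are hypothetical):

import UIKit

func applyPunctuationLabel(to label: UILabel) {
    // Asks VoiceOver to speak punctuation in this string.
    label.accessibilityAttributedLabel = NSAttributedString(
        string: "key: value; next item",
        attributes: [.accessibilitySpeechPunctuation: true]
    )
}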
This may sound like a bit of an odd question, but this was what I was told this morning by one of our Accessibility managers.
This past June at WWDC, I scheduled a lab session with Apple's accessibility folks for a review. I had the pleasure of working with Ryan, who helped give the great VoiceOver Testing talk from WWDC 2018. I believe I've worked with him before in the labs, but regardless, whoever I meet with in the Accessibility Labs always provides me with some new nugget of information, no matter how well versed I might think I am in accessibility.
After the labs, I made all the changes that Ryan suggested and also told other developers on my team what I was taught. In our app we provide various forms, and each field component that appears in a form has a header text to which we apply a header trait.
This allows the Headings rotor to be used to quickly navigate between all the questions in the form, say if a user wants to return to a previous field. I even suggested we take the time to provide a custom rotor that would let users navigate to fields in an error state: if the user submits the form and validation finds one or more fields in error, a rotor should let the user jump directly to those fields, since they may not be able to see the red text / red outlines on them.
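For context, a sketch of roughly what this looks like (UIKit; headerLabels and errorFields are hypothetical collections owned by the form):

import UIKit

func configureFormAccessibility(headerLabels: [UILabel], errorFields: [UIView], in container: UIView) {
    // Mark each question's title as a header so the Headings rotor
    // can jump between fields.
    for label in headerLabels {
        label.accessibilityTraits.insert(.header)
    }

    // Custom rotor that moves between fields currently in an error state.
    let errorRotor = UIAccessibilityCustomRotor(name: "Errors") { predicate in
        guard let current = predicate.currentItem.targetElement as? UIView,
              let index = errorFields.firstIndex(of: current) else {
            // No current item: start at the first error field, if any.
            return errorFields.first.map {
                UIAccessibilityCustomRotorItemResult(targetElement: $0, targetRange: nil)
            }
        }
        let next = predicate.searchDirection == .next ? index + 1 : index - 1
        guard errorFields.indices.contains(next) else { return nil }
        return UIAccessibilityCustomRotorItemResult(targetElement: errorFields[next], targetRange: nil)
    }
    container.accessibilityCustomRotors = [errorRotor]
}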
This morning, I was told that I needed to undo that. That our headerLabel properties should not be marked with the UIAccessibilityTrait.header trait.
When I stated that it makes navigation of the form much easier via the Headers Rotor, I was told by the Accessibility Manager this is not the case.
I have the MS Teams transcript in front of me, which reads as follows (give or take a few transcript errors)
So I went ahead and I just double checked with two of my friends, who are blind and for them on their end, they both said that they would not actually use that, and could add more complexity, because they have—in addition to being blind—but there's also mobility limitations. So they actually can't even use the Rotor at all. They only can use the swipes.
Does this make sense to anyone? Because it doesn't to me. Thoughts on this?
As a Mongolian user, I’ve observed that the Apple ecosystem (macOS, iOS, iPadOS) currently lacks native spellchecking support for the Mongolian language in Cyrillic script. This absence poses significant challenges for users who rely on Apple devices for communication, education, and professional work in Mongolian.
Could you share if there are any plans or roadmaps to address this gap? Additionally, I’m eager to contribute ideas, resources, or insights to help make Mongolian language support more accessible within the Apple ecosystem.
If there are any guidelines or steps I could take to advocate for or help implement this feature, I’d greatly appreciate your guidance.
My husband and I have the same iPhones. We both have location sharing on. When he uses Find My, he can see my location. He has shared his location with me, but my phone always says “No location found.” We have the exact same settings on our phones and have followed the instructions for using Find My. Is there something wrong with my phone, since I cannot see his location? I have no trouble seeing the location of another family member. Or is something wrong with my husband's phone? This is so frustrating.
Hi guys,
I'm trying to add accessibility labels to static text and custom SwiftUI views. Example:
MyView {
...
}
//.accessibilityElement()
.accessibilityElement(children: .combine)
//.accessibilityRemoveTraits(.isStaticText)
//.accessibilityAddTraits(.isButton)
.accessibilityLabel("ACCESSIBILITY LABEL")
.accessibilityHint("ACCESSIBILITY HINT")
When using the VoiceOver or Hover Text accessibility features, focus moves only between active elements and not to static elements.
When I add .focusable() it works, but I don't want to make those elements focusable when all accessibility features are off.
I suppose I could do something like this:
.focusable(UIApplication.shared.accessibility.voiceOver.isOn || UIApplication.shared.accessibility.hoverText.isOn)
Note: this is just pseudocode, because I don't remember exactly how to detect current accessibility settings.
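A version of that check using a real environment value might look like this sketch (AccessibleStaticText is hypothetical; it only covers VoiceOver, and I am not aware of a public API for detecting Hover Text specifically):

import SwiftUI

struct AccessibleStaticText: View {
    // Real SwiftUI environment value; true while VoiceOver is running.
    @Environment(\.accessibilityVoiceOverEnabled) private var voiceOverOn

    let text: String
    let label: String

    var body: some View {
        Text(text)
            // Become focusable only when VoiceOver is active, so normal
            // remote navigation keeps skipping static text.
            .focusable(voiceOverOn)
            .accessibilityLabel(label)
    }
}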
However, using focusable() with conditions on hundreds of static texts in an app seems like overkill. Also, accessibility focus is needed on some control containers where we already have somewhat complex focus handling, with conditions in focusable(...) on parent and child elements, so extending that for accessibility seems too complicated.
Is there a simple way to tell accessibility that an element is focusable specifically for Hover Text and for VoiceOver?
An example of what I want to accomplish for TV content (accessibilityLabel(for:) is the hypothetical API I wish existed):
VStack {
    HStack {
        Text("Terminator")
        if parentalLock {
            Image(.lock)
        }
    }
    .accessibilityLabel(for: hover, "Terminator - parental lock")
    Text("Sci-Fi * 8pm - 10pm * Remaining 40 min. * Live")
        .accessibilityLabel(for: hover, "Sci-Fi, 8 to 10pm, Remaining 40 min. Broadcasting Live")
}
.accessibilityLabel(for: voiceover, "Terminator, Sci-Fi, 8 to 10pm, Remaining 40 min. Broadcasting Live, parental lock")
I watched all the Accessibility WWDC videos (2016, 2022, 2024) and googled for several hours, but I couldn't find any solution for static texts and custom views. From those videos it appears .accessibilityLabel() should be enough, but it clearly works only on active elements and does not work for other SwiftUI views on tvOS without focusable().
Can this be done without using focusable() with conditions detecting which accessibility feature is on?
The problem with focusable() would be that for accessibility I may need the text of a parent view to be read, while focus needs to be placed on a child element. I remember problems where, when focusable() was set on a parent view, the child was not focusable, or something like that; simply put, complications in focus logic.
Thanks.
Hi, I'm a new Mac user, having been a long-time PC user and software developer. I also have a mobility impairment that has led me to try Voice Control as a replacement for Dragon NaturallySpeaking on my PC.
I have been trying to use Parallels with a Windows 11 VM and Dragon for my remote work, but that seems to have broken when I downloaded the latest macOS beta.
Ideally I'd like to use Voice Control over a VPN/Remote Desktop Connection or, in a pinch, Chrome Remote Desktop. The problem I'm running into is that macOS does not seem to recognize that I am in a text field or other control when I am in the remote application.
I have a utility in Windows that will allow me to voice type into an application window even if the cursor is not over a control, but I can't seem to figure out a way to do that in macOS.
Is there a way to do what I want to do? Is there a more capable voice recognition software package for macOS?
I am running Sequoia 15.2 beta 3 at the moment.
Hello,
In an AVSpeechSynthesisProviderAudioUnit, sending word positions to the host using AVSpeechSynthesisMarker / AVSpeechSynthesisMarker.Mark.word seems to be broken on iOS 18.
On the app/client side, all the events are received immediately, whereas they should arrive synchronized with the audio.
The exact same code works perfectly on iOS 17.
In the AVSpeechSynthesisProviderAudioUnit, the AVSpeechSynthesisMarkers are appended with the correct position/sample offset:
let wordPos = NSMakeRange(characterRange.location, characterRange.length)
let marker = AVSpeechSynthesisMarker(markerType: .word, forTextRange: wordPos, atByteSampleOffset: byteSampleOffset)
// also tried with
// let marker = AVSpeechSynthesisMarker(wordRange: wordPos, atByteSampleOffset: byteSampleOffset)
markerArray.append(marker)
print("word : pos \(characterRange) - offset \(byteSampleOffset)")
// send events to host
speechSynthesisOutputMetadataBlock?(markerArray, self.request!)
word : pos {7, 7} - offset 2208
word : pos {15, 8} - offset 37612
word : pos {24, 6} - offset 80368
word : pos {31, 3} - offset 118652
word : pos {35, 2} - offset 128796
...
But on the client side they are all received at the same time (at the beginning of speech), whereas on iOS 17 they arrive in sync with the audio.
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, willSpeakRangeOfSpeechString characterRange: NSRange, utterance: AVSpeechUtterance) {
print("characterRange : \(characterRange)")
}
Using an Apple voice/engine works, so there is obviously something to change, but the documentation for AVSpeechSynthesisProviderAudioUnit / AVSpeechSynthesisMarker seems unchanged.
Thanks in advance
I'm unable to access my Apple Developer account. This is a first-time login, and I have already made the $99 payment through the Apple Developer app on my iPhone. I can't access it from my iPhone either. When I try to access it via my Mac, this is the error I'm seeing:
I have two iPad Pros
iPad Pro (12.9 inch) (3rd Generation)
iPad Pro 13-inch M4
Both with their Apple Magic Keyboard and trackpad.
With iPadOS 17 I could press Ctrl-Space to jump between input languages. With 18.1.1 and the 18.2 beta, this is broken.
On the old iPad, the language indicator used to show EN, then DE. Now it shows EN DE, EN DE. I went to Settings, and the keyboards were shown as having both English and German input for the two physical keyboards. I deleted and recreated them; now each keyboard has only one language, and Ctrl-Space alternates between EN and DE again.
On the new iPad Pro, the two keyboards are already set up with a single input language each, but Ctrl-Space does not change the input language; it types a space instead. There does not seem to be any way to change the input language using the keyboard.
I am writing an email to a software engineer at Starbucks. In it, I want to make him aware of a VoiceOver accessibility issue whose cause I think I know, but I want to verify it here. The issue is that nothing happens as it should after entering text into an edit field and then pressing Enter. What should happen on pressing Enter is that the next page is displayed; however, it is not. Am I right in my guess that the developer has the page hidden? If not, what could it be? Please provide code with comments to fix this issue.
Hi Team,
We are integrating SwiftUI Charts' BarMark. The UI looks good, but when we try to set up custom ADA (accessibility) values, the chart doesn't reflect/override the accessibility label/value we set manually.
Is this an iOS defect, or is there a workaround?
Thanks in advance.
Sample:
Chart(data) {
    BarMark(
        x: .value("Category", $0.department),
        y: .value("Profit", $0.profit)
    )
    .foregroundStyle(by: .value("Product Category", $0.productCategory))
    .accessibilityIdentifier("BarMark")
    .accessibilityLabel("Dep: \($0.department)")
    .accessibilityValue("Profit: \($0.profit) Category: \($0.productCategory)")
}
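One possible workaround (a sketch, not a confirmed fix): supply an AXChartDescriptor via the accessibilityChartDescriptor(_:) modifier, which hands VoiceOver explicit axis and data-point descriptions independently of the per-mark modifiers. Item and its fields below are assumed stand-ins for the actual model:

import SwiftUI
import Charts
import Accessibility

struct Item: Identifiable {
    let id = UUID()
    let department: String
    let profit: Double
    let productCategory: String
}

struct ProfitChartDescriptor: AXChartDescriptorRepresentable {
    let data: [Item]

    func makeChartDescriptor() -> AXChartDescriptor {
        let xAxis = AXCategoricalDataAxisDescriptor(
            title: "Department",
            categoryOrder: data.map(\.department)
        )
        let yAxis = AXNumericDataAxisDescriptor(
            title: "Profit",
            range: 0...(data.map(\.profit).max() ?? 0),
            gridlinePositions: []
        ) { value in "\(value) profit" }
        let series = AXDataSeriesDescriptor(
            name: "Profit by department",
            isContinuous: false,
            dataPoints: data.map {
                AXDataPoint(x: $0.department, y: $0.profit,
                            label: "Dep: \($0.department), Category: \($0.productCategory)")
            }
        )
        return AXChartDescriptor(title: "Profit by department",
                                 summary: nil,
                                 xAxis: xAxis,
                                 yAxis: yAxis,
                                 series: [series])
    }
}

// Attached to the chart:
// Chart(data) { ... }
//     .accessibilityChartDescriptor(ProfitChartDescriptor(data: data))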