-
A minimalist design exploration
Monospace fonts are dear to many of us. Some find them more readable, consistent, and beautiful than their proportional alternatives. Maybe we’re just brainwashed from spending years in terminals? Or are we hopelessly nostalgic? I’m not sure. But I like them, and that’s why I started experimenting with the all-monospace Web.
On this page, I use a monospace grid to align text and draw diagrams. It’s generated from a simple Markdown document (using Pandoc), and the CSS and a tiny bit of JavaScript render it on the grid. The page is responsive, shrinking in character-sized steps. Standard elements should just work, at least that’s the goal. It’s semantic HTML, rendered as if we were back in the 70s.
All right, but is this even a good idea? It’s a technical and creative challenge and I like the aesthetic. If you’d like to use it, feel free to fork or copy the bits you need, respecting the license. I might update it over time with improvements and support for more standard elements.
-
Although SQLite’s file-based nature makes working with it extremely simple, SQLite has grown a considerable number of tunables over the years, which makes getting top performance out of it non-obvious (the SQLite numbers in the benchmark above are after tuning). For maximum performance, users have to choose WAL mode over journal mode, disable POSIX advisory locks, etc.
Limbo, while maintaining compatibility with SQLite’s bytecode and file format, drops a lot of the features that we consider less important for modern environments (including SQLite’s “amalgamation”, the build system that generates a single C file), providing a better out-of-the-box experience.
-
PreferenceKeys in SwiftUI provide a mechanism for passing data up the view hierarchy. They’re quite useful when we need to communicate information from child views to parent views without relying on bindings or state variables.
While they might seem a bit complex at first, once you understand how they work, you will like using them to build complex components. Not to mention they’re used extensively in many of SwiftUI’s built-in views and can make your custom views more flexible and reusable.
Remember, the key points are:
- They allow data to flow up the view hierarchy. Always think child -> parent.
- They’re defined by implementing the `PreferenceKey` protocol.
- Child views set preferences using the `preference` modifier.
- Parent views read preferences using the `onPreferenceChange` modifier, as sketched below.
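A minimal sketch of the full round trip, using a hypothetical key that reports a child view’s height to its parent:

```swift
import SwiftUI

// Hypothetical key: reports the tallest child height up the hierarchy.
struct HeightPreferenceKey: PreferenceKey {
    static var defaultValue: CGFloat = 0
    static func reduce(value: inout CGFloat, nextValue: () -> CGFloat) {
        value = max(value, nextValue())
    }
}

struct ChildView: View {
    var body: some View {
        Text("Hello")
            .background(GeometryReader { proxy in
                // Child sets the preference with its measured height.
                Color.clear.preference(key: HeightPreferenceKey.self,
                                       value: proxy.size.height)
            })
    }
}

struct ParentView: View {
    @State private var childHeight: CGFloat = 0

    var body: some View {
        ChildView()
            // Parent reads the value as it flows up the hierarchy.
            .onPreferenceChange(HeightPreferenceKey.self) { childHeight = $0 }
    }
}
```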
-
3D collaboration for teams
Spatial Analogue is the easiest way to quickly visualize your 3D designs and collaborate with your team in full spatial context—all without having to write any code or publish your own app.
- Native RealityKit: Create native experiences without writing a single line of code. Publish directly to the Analogue app on Apple Vision Pro for instant viewing.
- Spatial Collaboration: Use SharePlay and Spatial Personas to review designs in real-time, enjoying face-to-face interactions no matter where team members are.
- Vantage Points: Create and navigate through various viewpoints of your designs. Review every detail from every angle.
- Asset Variants: Quickly test different versions of the same asset. Experience real-time A/B testing, as if everyone is in the same room.
- Team Projects: Organize your team and share 3D designs with ease. Manage permissions to ensure the right people have the right access.
-
Instead of manually testing each preview, you can write a unit test that automatically runs every preview in your app or framework. Just link the SnapshotPreviews package to your test target and copy the following code:
```swift
import SnapshottingTests

// This is an XCTestCase you can run in Xcode
final class MyPreviewTest: PreviewLayoutTest { }
```
Test functions will be automatically added for every preview in the app. You can easily view the results in Xcode or automate it to run as part of CI using Fastlane or `xcodebuild`.
-
Sets the navigation transition style for this view.
Add this modifier to a view that appears within a NavigationStack or a sheet, outside of any containers such as VStack.
```swift
struct ContentView: View {
    @Namespace private var namespace

    var body: some View {
        NavigationStack {
            NavigationLink {
                DetailView()
                    .navigationTransition(.zoom(sourceID: "world", in: namespace))
            } label: {
                Image(systemName: "globe")
                    .matchedTransitionSource(id: "world", in: namespace)
            }
        }
    }
}
```
- _A type that defines the transition to use when navigating to a view._
-
SwiftUI 16 makes it easier than ever to implement hero animations, allowing us to build UIs that delight our users. With just three lines of code, we were able to add a transition that looks similar to the App Store’s Today view hero animation. And with a little bit of extra effort, we were able to improve the user experience even more.
-
Swift is Apple’s recommended language for app development, and with good reason. Its safety, efficiency, and expressiveness have made it easier than ever to build fast, polished, and robust apps for the Apple ecosystem. Recent stories about Swift on Windows and Swift on the Playdate highlight developers’ desire to take advantage of Swift on other platforms too. In this series, we explore writing native Swift apps for Android with Skip.
Since its 1.0 release earlier this year, Skip has allowed developers to create cross-platform iOS and Android apps in Swift and SwiftUI by transpiling their Swift into Android’s native Kotlin language. Now, the Skip team is thrilled to give you the ability to use native, compiled Swift for cross-platform development as well.
Part 1 of this series described bringing a native Swift toolchain to Android. Being able to compile Swift on Android, however, is only the first small step towards real-world applications. In this and upcoming installments, we introduce the other pieces necessary to go from printing “Hello World” on the console to shipping real apps on the Play Store:
- Integration of Swift functionality like logging and networking with the Android operating system.
- Bridging technology for using Android’s Kotlin/Java API from Swift, and for using Swift API from Kotlin/Java.
- The ability to power Jetpack Compose and shared SwiftUI user interfaces with native Swift `@Observable`s.
- Xcode integration and tooling to build and deploy across both iOS and Android.
-
By integrating Embedded Swift with SDL3 on the ESP32-C3 and ESP32-C6, you can leverage modern programming languages and desktop-class libraries to develop rich graphical applications for embedded systems. The use of ESP-BSP simplifies targeting multiple boards, making your applications more portable.
We encourage you to explore the GitHub repository for the full source code and additional details.
-
These days, websites—or web apps if you prefer—tend to use one of two navigation schemes:
- The navigation scheme browsers provide by default—that is, you enter a URL in your browser's address bar and a navigation request returns a document as a response. Then you click on a link, which unloads the current document for another one, ad infinitum.
- The single page application pattern, which involves an initial navigation request to load the application shell and relies on JavaScript to populate the application shell with client-rendered markup with content from a back-end API for each "navigation".
The benefits of each approach have been touted by their proponents:
- The navigation scheme that browsers provide by default is resilient, as routes don't require JavaScript to be accessible. Client-rendering of markup by way of JavaScript can also be a potentially expensive process, meaning that lower-end devices may end up in a situation where content is delayed because the device is blocked processing scripts that provide content.
- On the other hand, Single Page Applications (SPAs) may provide faster navigations after the initial load. Rather than relying on the browser to unload a document for an entirely brand new one (and repeating this for every navigation) they can offer what feels like a faster, more "app-like" experience—even if that requires JavaScript to function.
In this post, we're going to talk about a third method that strikes a balance between the two approaches described above: relying on a service worker to precache the common elements of a website—such as header and footer markup—and using streams to provide an HTML response to the client as fast as possible, all while still using the browser's default navigation scheme.
-
The first agentic IDE, and then some. The Windsurf Editor is where the work of developers and AI truly flows together, allowing for a coding experience that feels like literal magic.
-
Welcome to Category Theory in Programming, a journey into the conceptual world where mathematics meets software development. This tutorial is designed for Racket programmers who are curious about the mathematical ideas underlying computational systems. It offers insights into how familiar programming concepts can be reinterpreted through the lens of category theory, and even goes further to directly borrow from category theory, using programming language constructs to describe these abstract concepts.
Category theory, a branch of mathematics that deals with abstract structures and relationships, may seem esoteric at first glance. However, its principles are deeply intertwined with the concepts and patterns we encounter in programming. Through this tutorial, we aim to bridge the gap between these two worlds, offering a unique perspective that enriches the programmer’s toolkit with new ways of thinking, problem-solving, and system design.
In the following chapters, we will explore the core concepts of category theory — objects, morphisms, categories, functors, natural transformations, the Yoneda Lemma, 2-categories, (co)limits, sketches, Cartesian closed categories & typed lambda calculus, the Curry–Howard–Lambek correspondence, adjunctions, (co)monads, Kan extensions, toposes, and more — and how these can be represented and utilized within the Racket programming language. The goal is not to exhaustively cover category theory or to transform you into a category theorist. Instead, we will focus on mapping these abstract concepts into programming constructs, providing a foundation that you, the reader, can build upon and apply in your work.
Why study category theory as a programmer? The answer lies in the abstraction and generalization capabilities provided by category theory. It allows us to see beyond the specifics of a particular programming language, problem, or system, revealing the underlying structures that are common across different domains. By identifying connections between a system and the constructs of category theory, you can leverage existing categorical results and structures to expand and improve the system, applying well-established theories and techniques to refine and extend your design. This tutorial aims to open the door to this broader perspective, enriching your approach to programming.
As you embark on this journey, keep in mind that the real value of understanding category theory in the context of programming is not merely in acquiring new knowledge but in developing a new way of thinking about problems or systems. We encourage you to approach the material with an open mind and to explore how the concepts presented here can be applied or extended in your programming endeavors.
Category Theory in Programming is an invitation to explore, to question, and to discover. It is a starting point for a deeper inquiry into the vast and fascinating intersection of mathematics and programming. We hope this tutorial will inspire you to delve further into both fields, exploring new ideas and forging connections that will enhance your work as a programmer.
-
Not realizing that view initializers — not just the `body` property — are in the hot path can quickly get you into hot water, particularly if you’re doing any expensive work or are trying to use RAII with a value in a view property. So, what follows are some general recommendations for avoiding those issues that are as close to universally correct as I’m willing to claim in writing.
-
While many examples of the `matchedGeometryEffect()` modifier focus on hero animations, it can also be applied in other contexts, like my custom segmented control. Here, it's not used to transition geometry when one view disappears and another appears. Instead, it matches the geometry of a non-source view to that of the source view while both remain present in the view hierarchy simultaneously.
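A hedged sketch of that idea (not the author's exact code): the selection indicator is the non-source view, matching the geometry of the currently selected label.

```swift
import SwiftUI

struct SegmentedControl: View {
    let options: [String]
    @Binding var selection: String
    @Namespace private var namespace

    var body: some View {
        HStack {
            ForEach(options, id: \.self) { option in
                Text(option)
                    .padding(8)
                    // Each label is the source view for its own id.
                    .matchedGeometryEffect(id: option, in: namespace)
                    .onTapGesture { withAnimation { selection = option } }
            }
        }
        .background(
            // The indicator matches the selected label's geometry while
            // both views stay in the hierarchy (isSource: false).
            Capsule()
                .fill(.blue.opacity(0.3))
                .matchedGeometryEffect(id: selection, in: namespace, isSource: false)
        )
    }
}
```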
-
Code Storage & Backup
HomePass allows you to get rid of that old notebook and store your HomeKit or Matter device's codes right on your iPhone, iPad or Mac.
-
The subdivision technique works by recursively subdividing a polygon mesh, adding new vertices based on existing ones to create smoother surfaces. This process can be executed at the modeling stage (i.e., modifiers in Blender, dynamic topology tools, etc.) or at runtime, which potentially unlocks another technique called Adaptive Level of Detail: engines can dynamically adjust the level of detail based on the camera’s distance from the object (pretty much foveation for geometry). Because of this benefit, runtime support for subdivision in modern engines is considered crucial, and starting with visionOS 2, macOS 15, iOS 18, and iPadOS 18, RealityKit gains support for this feature at the API level.
RealityKit's SubD implementation not only creates smoother geometry at a lower cost but also aligns with visionOS's trend of incorporating USD standard features. Here's an extract from the invaluable doc Validating feature support for USD files.
-
-
Frameworkism isn't delivering. The answer isn't a different tool, it's the courage to do engineering.
-
Use AirPlay to stream or share content from your Apple devices to your Apple TV, AirPlay-compatible smart TV, or Mac. Stream a video. Share your photos. Or mirror exactly what's on your device's screen.
-
iOS 18 introduced a powerful new feature: the ability to animate UIKit views using SwiftUI animation types. This bridges the gap between the two frameworks even further, allowing us to bring the flexibility and expressiveness of the SwiftUI animation system into UIKit-based projects.
The SwiftUI animation API makes it simple to define animations and manage their timing and repetition. By using SwiftUI animations in UIKit, we can create smoother, more cohesive animations across our entire app, improving the overall experience for users.
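A minimal sketch of what this looks like, assuming the iOS 18 `UIView.animate` overload that accepts a SwiftUI `Animation` value and a view already on screen:

```swift
import UIKit
import SwiftUI

// Drive a UIKit property change with a SwiftUI spring (iOS 18+).
func slideBox(_ box: UIView) {
    UIView.animate(.spring(duration: 0.5)) {
        box.center.x += 100
    }
}
```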
-
Swift has 6 access levels, ranging from `open` to `private`. Typically, if you do not want consumers of your code to access a function or type, you can mark it `private`. However, Apple frameworks written in Swift, particularly SwiftUI, contain APIs that are meant to be used by other Apple frameworks, but not by 3rd party apps. This is achieved by limiting when the code can be seen at compile time, but still allowing it to be found at link time. In this post we’ll look at how you can still call these functions in your own code to use features that are not typically available. To summarize, calling an external function from your code means it must be defined in three places:
- A `.swiftinterface` file for the compiler to recognize it
- A `.tbd` file for the linker to recognize it
- The exported symbols of the binary at runtime for dyld to recognize it
-
Discover why the operating system terminated your app when available memory was low.
iOS, iPadOS, watchOS, and tvOS have a virtual memory system that relies on all apps releasing memory when the operating system encounters memory pressure, where available memory is low and the system can’t meet the demands of all running apps. Under memory pressure, apps free memory after receiving a low-memory notification. If all running apps release enough total memory to alleviate memory pressure, your app will continue to run. But, if memory pressure continues because apps haven’t relinquished enough memory, the system frees memory by terminating applications to reclaim their memory. This is a jetsam event, and the system creates a jetsam event report with information about why it chose to jettison an app.
Jetsam event reports differ from crash reports because they contain the overall memory use of all apps and system processes on a device, they’re in JSON format, and they don’t contain the backtraces of any threads in your app. If the system jettisons your app due to memory pressure while the app is visible, it will look like your app crashed. Use jetsam event reports to identify your app’s role in jetsam events, even if your app didn’t get jettisoned.
-
Free up memory when asked to do so by the system.
If the system runs low on free memory and is unable to reclaim memory by terminating suspended apps, UIKit sends a low-memory warning to running apps. UIKit delivers low-memory warnings in the following ways:
- It calls the applicationDidReceiveMemoryWarning(_:) method of your app delegate.
- It calls the didReceiveMemoryWarning() method of any active `UIViewController` classes.
- It posts a didReceiveMemoryWarningNotification object to any registered observers.
- It delivers a warning to dispatch queues of type `DISPATCH_SOURCE_TYPE_MEMORYPRESSURE`.
When your app receives a low-memory warning, free up as much memory as possible, as quickly as possible. Remove references to images, media files, or any large data files that already have an on-disk representation and can be reloaded later. Remove references to any temporary objects that you no longer need. If active tasks might consume significant amounts of memory, pause dispatch queues or restrict the number of simultaneous operations that your app performs.
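For example, a sketch of the observer-based approach, purging a hypothetical in-memory image cache when the warning arrives:

```swift
import UIKit

final class ImageCacheController {
    private var cache: [URL: UIImage] = [:]
    private var observer: NSObjectProtocol?

    init() {
        // Purge the cache whenever UIKit posts a low-memory warning.
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didReceiveMemoryWarningNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.cache.removeAll()
        }
    }
}
```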
-
On 13 November 1961, the Oceanic building at London Airport opened to handle long-haul flight departures. In 1979, German publisher Ravensburger brought out a game designed to help children learn to count. Around Christmas 2023, I stumbled across a copy of that vintage game. The type on the box caught my eye, and that’s where this story began.
The letterforms resembled those of Helvetica. As the corners were soft, I initially thought it might be its Rounded version. However, the typeface featured a much larger x-height, the capitals were less wide, and the glyphs also had white bits in some places, yielding a highlight effect. I had never seen this design before. My first suspicion was that it might be a Letraset face, as this would have fitted in well with the release date of the game. Unfortunately, I couldn’t find a match in a catalog by this manufacturer of rub-down type, so I contacted Florian Hardwig, who had often helped me with type research in the past. Florian was able to identify the mystery typeface. He found it in a catalog published in 1985 by Layout-Setzerei Stulle, a typesetting service in Stuttgart, Germany. Named Airport Spotlight, it’s a derivative of Airport, a typeface that Matthew Carter had designed in the early 1960s for signs at London Airport. The Stulle catalog also showed other variants, such as Airport Round, the stencilled Airport Stamp, and Carter’s original style, here listed as Airport halbfett.
Up to this point, I hadn’t really looked into the history of London Airport – since 1966 known as Heathrow Airport – and its design language in any depth, not least because there isn’t a great deal of material on the subject. To my knowledge, there are only two books on this typeface. One is “Airport Wayfinding” by Heike Nehl and Sibylle Schlaich from 2021. The other is “A Sign System Manual” by Theo Crosby, Alan Fletcher and Colin Forbes from 1970. I found the topic quite intriguing and began to do more research. As a first step, I got my hands on the two books. The former is easy to get hold of. The latter not so much: considered a design classic today, it has become a sought-after item. And when a copy does pop up, booksellers ask several hundred euros for it. Despite the limited supply, I eventually managed to find a comparably inexpensive copy from a private seller in the UK. Thanks to the help of a good friend, the book was brought from London to Antwerp by train and then sent to me in Germany by mail. The effort was more than worth it. The content and typography of the book are superb. As a next step, I contacted Matthew Carter to find out more about his Airport and how it came into being.
-
Understanding how developers problem-solve within ecosystems of practice, tooling, and social contexts is a critical step in determining which factors dampen, aid or accelerate software innovation. However, industry conceptions of developer problem-solving often focus on overly simplistic measures of output, over-extrapolate from small case studies, rely on conventional definitions of “programming” and short-term definitions of performance, fail to integrate the new economic features of the open collaborative innovation that marks software progress, and fail to integrate rich bodies of evidence about problem-solving from the social sciences. We propose an alternative to individualistic explanations for software developer problem-solving: a Cumulative Culture theory for developer problem-solving. This paper aims to provide an interdisciplinary introduction to underappreciated elements of developers’ communal, social cognition which are required for software development creativity and problem-solving, either empowering or constraining the solutions that developers access and implement. We propose that despite a conventional emphasis on individualistic explanations, developers’ problem-solving (and hence, many of the central innovation cycles in software) is better described as a cumulative culture where collective social learning (rather than solitary and isolated genius) plays a key role in the transmission of solutions, the scaffolding of individual productivity, and the overall velocity of innovation.
-
Auto-renewable subscriptions can be priced by App Store country or region (also known as storefronts). You can choose from 800 price points in each currency, with the option for the Account Holder to submit a request for additional higher price points.
After you set a starting price for your auto-renewable subscription, you can schedule one future price change at a time, per country or region. If you schedule a second price change when you already have a change scheduled, the first scheduled change will be overwritten. You also have the option to preserve prices for existing subscribers.
-
And it’s great to start a quick chat about your current code!
The ChatGPT macOS app got updated with a nice new integration with a few apps, including Xcode!
-
The Metal shading language is a powerful C++ based language that allows apps to render stunning effects while maintaining a flexible shader development pipeline. Discover how to more easily build and extend your render pipelines using Dynamic Libraries and Function Pointers. We'll also show you how to accelerate your shader compilation at runtime with Binary Function Archives, Function Linking, and Function Stitching.
-
A description of a new stitched function. A MTLFunctionStitchingGraph object describes the function graph for a stitched function. A stitched function is a visible function you create by composing other Metal shader functions together in a function graph. A function stitching graph contains nodes for the function’s arguments and any functions it calls in the implementation. Data flows from the arguments to the end of the graph until the stitched function evaluates all of the graph’s nodes.
The graph in the figure below constructs a new function that adds numbers from two source arrays, storing the result in a third array. The function’s parameters are pointers to the source and destination arrays, and an index for performing the array lookup. The graph uses three separate MSL functions to construct the stitched function: a function to look up a value from an array, a function that adds two numbers together, and a function that stores a value to an array.
Create a MTLFunctionStitchingGraph object for each stitched function you want to create. Configure its properties to describe the new function and the nodes that define its behavior, as described below. To create a new library with these stitched functions, see MTLStitchedLibraryDescriptor.
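A hedged Swift sketch of describing that graph, assuming the Metal library defines visible MSL functions with the hypothetical names load_value, add_values, and store_value:

```swift
import Metal

// Inputs: two source pointers, a destination pointer, and an index.
let src0  = MTLFunctionStitchingInputNode(argumentIndex: 0)
let src1  = MTLFunctionStitchingInputNode(argumentIndex: 1)
let dst   = MTLFunctionStitchingInputNode(argumentIndex: 2)
let index = MTLFunctionStitchingInputNode(argumentIndex: 3)

// Nodes: load both values, add them, and store the sum.
let lhs = MTLFunctionStitchingFunctionNode(name: "load_value",  arguments: [src0, index], controlDependencies: [])
let rhs = MTLFunctionStitchingFunctionNode(name: "load_value",  arguments: [src1, index], controlDependencies: [])
let sum = MTLFunctionStitchingFunctionNode(name: "add_values",  arguments: [lhs, rhs],    controlDependencies: [])
let out = MTLFunctionStitchingFunctionNode(name: "store_value", arguments: [sum, dst, index], controlDependencies: [])

// The graph names the stitched function and identifies its output node.
let graph = MTLFunctionStitchingGraph(functionName: "add_arrays",
                                      nodes: [lhs, rhs, sum],
                                      outputNode: out,
                                      attributes: [])
```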
-
Combine basic Metal Shading Language functions at runtime to create a new function.
-
By exploring the view concepts and response update mechanisms in SwiftUI, developers should grasp the following key points:
- Response code does not necessarily cause the view declaration value to be re-computed: Adding response logic in view code does not mean that the view declaration value will be re-evaluated as a result.
- Re-computation of the view declaration value requires event triggering: SwiftUI only re-evaluates the view declaration value under specific conditions (such as state changes). This process must be triggered by some event.
- Handle the construction process of view types carefully: Avoid performing time-consuming or complex operations in the constructor of view types. Because regardless of whether the view declaration value needs to be re-computed, SwiftUI may create instances of the view type multiple times.
- Optimize the computation process of the view declaration value: The computation of the view declaration value is a recursive process. By appropriate optimization, such as reducing unnecessary nested computations, you can effectively reduce computational overhead.
- Rationally split the view structure: Encapsulating the view declaration in a separate view type allows SwiftUI to better identify which views do not need to be re-computed, thereby improving the efficiency of view updates (see the sketch below).
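A minimal sketch of that last point: because the subview takes no changing inputs, SwiftUI can skip re-computing its body when the parent’s state changes.

```swift
import SwiftUI

struct ParentView: View {
    @State private var count = 0

    var body: some View {
        VStack {
            Text("Count: \(count)")
            Button("Increment") { count += 1 }
            ExpensiveSubview() // unchanged inputs: body isn't re-evaluated
        }
    }
}

struct ExpensiveSubview: View {
    var body: some View {
        // Imagine costly layout work here.
        Text("Static content")
    }
}
```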
-
We’ve seen how injecting objects at different levels of the view hierarchy can accommodate various use cases, from global state shared across the app to context-specific state scoped to certain views. By carefully designing your environment objects based on bounded contexts and separating concerns, you can avoid common pitfalls like unnecessary re-evaluations or tightly coupled components.
It’s essential to strike a balance when using environment objects—leveraging their power for deeply nested view hierarchies while avoiding overuse that could lead to unwieldy dependencies. By passing only the necessary data to subviews and utilizing SwiftUI’s diffing mechanism, you can optimize performance and maintain a clear data flow.
Ultimately, using `@Environment` effectively can lead to modular, testable, and maintainable SwiftUI applications. With these tools in your development toolkit, you can confidently architect AI systems or any complex application that requires shared state management, ensuring scalability and elegance in your design.
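As a hedged sketch of scoped injection (hypothetical types, iOS 17 Observation style), only the subtree that needs the state receives it:

```swift
import SwiftUI

@Observable final class DetailState {
    var selection: Int?
}

struct RootView: View {
    var body: some View {
        NavigationStack {
            DetailFlow()
                .environment(DetailState()) // scoped to this subtree, not app-wide
        }
    }
}

struct DetailFlow: View {
    @Environment(DetailState.self) private var state

    var body: some View {
        Text("Selection: \(state.selection.map { String($0) } ?? "none")")
    }
}
```
-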
Although Swift’s Automatic Reference Counting memory management model doesn’t require us to manually allocate and deallocate memory, it still requires us to decide exactly how we want our various objects and values to be referenced.
While it’s common to hear over-simplified rules like “always use `weak` references within closures”, writing well-performing and predictable apps and systems often requires a bit more nuanced thinking than that. Like with most things within the world of software development, the best approach tends to be to thoroughly learn the underlying mechanics and behaviors, and then choose how to apply them within each given situation.
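As a toy example of why the rule is over-simplified: a weak capture avoids a retain cycle here, but it also means the work is silently dropped if the object has already been deallocated, which is sometimes exactly wrong.

```swift
final class Downloader {
    var completion: (() -> Void)?

    func start() {
        // Weak capture: no retain cycle, but if nothing else retains this
        // Downloader, it deallocates and finish() never runs.
        completion = { [weak self] in
            guard let self else { return }
            self.finish()
        }
    }

    private func finish() { print("done") }
}
```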
-
The stable foundation for software that runs forever.
-
Increasingly, the cyberinfrastructure of mathematics and mathematics education is built using GitHub to organize projects, courses, and their communities. The goal of this book is to help readers learn the basic features of GitHub available using only a web browser, and how to use these features to participate in GitHub-hosted mathematical projects with colleagues and/or students.
-
The first post of this series explored how to take advantage of Dev Containers, a useful VS Code feature that enables you to run and debug Swift command line or server apps in a Linux container.
In this post, you’ll take it a step further with a more complex scenario: instead of storing the todos temporarily in memory, the app will store them in a PostgreSQL database. And to test it locally, you won’t need to install and run a Postgres server directly on your machine.
-
Say it with me now: If you’re trying to do more than one thing at once, something is waiting. And it just might be the most important thing.
-
The UIKit combination of UICollectionView and the UICollectionViewFlowLayout gives a lot of flexibility and control to build grid-like flow layouts. How do we do that with SwiftUI?
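One common answer is LazyVGrid with adaptive columns; a minimal sketch:

```swift
import SwiftUI

struct GridView: View {
    let items = Array(1...50)

    var body: some View {
        ScrollView {
            // Adaptive columns flow items into as many 80pt+ columns as fit.
            LazyVGrid(columns: [GridItem(.adaptive(minimum: 80))], spacing: 12) {
                ForEach(items, id: \.self) { item in
                    Text("\(item)")
                        .frame(maxWidth: .infinity, minHeight: 80)
                        .background(.quaternary)
                }
            }
            .padding()
        }
    }
}
```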
-
Everything You Need to Know About Live Activities and Dynamic Island in iOS
With the release of iOS 16, Apple introduced Live Activities, and later with iPhone 14 Pro, the Dynamic Island—two powerful tools that allow us to present real-time, glanceable updates directly on the Lock Screen and at the top of the screen on the Dynamic Island. These features are designed to keep users informed about ongoing activities, like delivery tracking, live sports scores, or wait times, without requiring them to unlock their devices or open the app.
In this two-part guide, we’ll discuss everything you need to know to integrate Live Activities and Dynamic Island effectively in your iOS app. We'll detail each step from understanding design constraints to setting up a Live Activity, handling updates, and adding interactions.
-
Tools, docs, and sample code to develop applications on the AWS cloud
-
While everyone who writes Swift code will use Swift Macros, not everyone should write their own Swift Macros. This book will help you determine whether writing Swift Macros is for you and show you the best ways to make your own.
You’ll create both freestanding and attached macros and get a feel for when you should and shouldn’t create them, which sort of macro you should create, and how to use SwiftSyntax to implement them. Your macros will accept parameters when appropriate and will always include tests. You’ll even learn to create helpful diagnostics and FixIts for your macros.
-
It’s like an invisible world that always surrounds us, and allows us to do many amazing things: It’s how radio and TV are transmitted, it’s how we communicate using Wi-Fi or our phones. And there are many more things to discover there, from all over the world.
In this post, I’ll show you fifty things you can find there — all you need is this simple USB dongle and an antenna kit!
-
Backports the Swift 6 type Mutex to Swift 5 and all Darwin platforms via OSAllocatedUnfairLock.
-
A trace trap or invalid CPU instruction interrupted the process, often because the process violated a requirement or timeout.
A trace trap gives an attached debugger the chance to interrupt the process at a specific point in its execution. On ARM processors, this appears as `EXC_BREAKPOINT (SIGTRAP)`. On x86_64 processors, this appears as `EXC_BAD_INSTRUCTION (SIGILL)`.
The Swift runtime uses trace traps for specific types of unrecoverable errors — see Addressing crashes from Swift runtime errors for information on those errors. Some lower-level libraries, such as Dispatch, trap the process with this exception upon encountering an unrecoverable error, and log additional information about the error in the Additional Diagnostic Information section of the crash report. See Diagnostic messages for information about those messages.
If you want to use the same technique in your own code for unrecoverable errors, call the `fatalError(_:file:line:)` function in Swift, or the `__builtin_trap()` function in C. These functions allow the system to generate a crash report with thread backtraces that show how you reached the unrecoverable error.
An illegal CPU instruction means the program’s executable contains an instruction that the processor doesn’t implement or can’t execute.
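For instance, a sketch of trapping deliberately on an unrecoverable configuration error (hypothetical path handling):

```swift
func loadConfiguration(at path: String) -> String {
    guard let contents = try? String(contentsOfFile: path, encoding: .utf8) else {
        // Traps the process; the crash report's backtrace shows this call site.
        fatalError("Required configuration missing at \(path)")
    }
    return contents
}
```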
-
Sudden app crashes are a source of bad user experience and app review rejections. Learn how crash logs can be analyzed, what information they contain and how to diagnose the causes of crashes, including hard-to-reproduce memory corruptions and multithreading issues.
-
This is a really promising development. 32GB is just small enough that I can run the model on my Mac without having to quit every other application I’m running, and both the speed and the quality of the results feel genuinely competitive with the current best of the hosted models.
Given that code assistance is probably around 80% of my LLM usage at the moment, this is a meaningfully useful release for how I engage with this class of technology.
-
```make
for platform in \
  "$(PLATFORM_IOS)" \
  "$(PLATFORM_MACOS)" \
  "$(PLATFORM_MAC_CATALYST)" \
  "$(PLATFORM_TVOS)" \
  "$(PLATFORM_WATCHOS)"; \
do \
  xcrun xcodebuild build \
    -workspace Dependencies.xcworkspace \
    -scheme Dependencies \
    -configuration $(CONFIG) \
    -destination platform="$$platform" || exit 1; \
done;
```
-
Orka provides on-demand macOS environments to power everything from simple Xcode builds to fully integrated, complex automated CI/CD pipelines.
-
A value that has a custom representation in `AnyHashable`.
-
As you can see, it is quite simple to inadvertently extend the lifetime of objects with long-running async functions.
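A minimal sketch of the pattern in question: the unstructured Task captures self strongly, so the object stays alive until the long-running function returns.

```swift
final class FeedViewModel {
    func startPolling() {
        Task {
            // `self` is retained by this task until pollForever() returns,
            // even if every other reference to the view model is gone.
            await self.pollForever()
        }
    }

    func pollForever() async {
        // long-running work...
    }
}
```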
-
In 2021 we got a new Foundation type that represents a string with attributes: AttributedString. Attributes on ranges of the string can represent visual styles, accessibility features, link data and more. In contrast with the old NSAttributedString, the new `AttributedString` provides a type-safe API, which means you can’t assign the wrong type to an attribute by mistake. `AttributedString` can be used in a variety of contexts and its attributes are defined in separate collections nested under AttributeScopes. System frameworks such as Foundation, UIKit, AppKit and SwiftUI define their own scopes.
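A small sketch of the type-safe API (the color attribute comes from the SwiftUI scope):

```swift
import SwiftUI

var greeting = AttributedString("Hello, world!")
if let range = greeting.range(of: "world") {
    greeting[range].link = URL(string: "https://example.com")
    greeting[range].foregroundColor = .blue // a Color, not an untyped Any
    // greeting[range].foregroundColor = "blue" // would not compile
}
```
-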
Server-side Swift has been available since the end of 2015. The idea behind its development was that you could use the same language for RESTful APIs, desktop, and mobile applications. As the Swift language evolved, the various Swift web frameworks became more robust and complex.
That’s why I was happy to read Tib’s excellent article about Hummingbird, a new HTTP server library written in Swift. I immediately liked the concept of modularity, so I decided to create a tutorial to show its simplicity.
-
Heads up: this post is a technical deep dive into the code of DocC, the Swift language documentation system. Not that my content doesn’t tend to be heavily technical, but this goes even further than usual.
-
This paper is dedicated to the hope that someone with power to act will one day see that contemporary research on education is like the following experiment by a nineteenth century engineer who worked to demonstrate that engines were better than horses. This he did by hitching a 1/8 HP motor in parallel with his team of four strong stallions. After a year of statistical research he announced a significant difference. However, it was generally thought that there was a Hawthorne effect on the horses.
-
Select the best method of scheduling background runtime for your app. If your app needs computing resources to complete tasks when it’s not running in the foreground, you can select from many strategies to obtain background runtime. Selecting the right strategies for your app depends on how it functions in the background.
Some apps perform work for a short time while in the foreground and must continue uninterrupted if they go to the background. Other apps defer that work to perform in the background at a later time or even at night while the device charges. Some apps need background processing time at varied and unpredictable times, such as when an external event or message arrives.
Apps involved in health research studies can obtain background runtime to process data essential for the study. Apps can also request to launch in the background for studies in which the user participates.
Select one or more methods for your app based on how you schedule activity in the background.
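For the deferrable-work strategy, a sketch using BGTaskScheduler with a hypothetical task identifier (the identifier must also be listed in the Info.plist under BGTaskSchedulerPermittedIdentifiers):

```swift
import BackgroundTasks

// Register the launch handler early, e.g. at app launch.
func registerBackgroundTasks() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: "com.example.refresh",
                                    using: nil) { task in
        handleRefresh(task: task as! BGAppRefreshTask)
    }
}

// Ask the system for a future background launch.
func scheduleAppRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: "com.example.refresh")
    request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60) // no sooner than 15 min
    try? BGTaskScheduler.shared.submit(request)
}

func handleRefresh(task: BGAppRefreshTask) {
    scheduleAppRefresh() // queue the next refresh
    task.expirationHandler = { /* cancel in-flight work */ }
    // ...do the work, then report completion:
    task.setTaskCompleted(success: true)
}
```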
-
Refreshing and Maintaining Your App Using Background Tasks
-
Improve Rosetta performance by adding support for the total store ordering (TSO) memory model to your Linux kernel.
Rosetta is a translation process that makes it possible to run apps that contain x86_64 instructions on Apple silicon. In macOS, Rosetta allows apps built for Intel-based Mac computers to run seamlessly on Apple silicon, and enables the same capability for Intel Linux apps in ARM Linux VMs.
Rosetta enables Linux distributions running on Apple silicon to support legacy Intel binaries with the addition of a few lines of code in your virtualization-enabled app, and the creation of a directory share for Rosetta to use.
-
To make it possible to refer to the above two `ImageLoader` implementations using dot syntax, all that we have to do is to define a type-constrained extension for each one — which in turn contains a static API for creating an instance of that type.
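A sketch of the pattern with hypothetical loader types:

```swift
protocol ImageLoader {}
struct RemoteImageLoader: ImageLoader {}
struct CachedImageLoader: ImageLoader {}

// Type-constrained extensions enable leading-dot syntax (SE-0299).
extension ImageLoader where Self == RemoteImageLoader {
    static var remote: RemoteImageLoader { RemoteImageLoader() }
}

extension ImageLoader where Self == CachedImageLoader {
    static var cached: CachedImageLoader { CachedImageLoader() }
}

func render(with loader: some ImageLoader) { /* ... */ }

// Call sites now read naturally:
render(with: .remote)
render(with: .cached)
```
-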
Learn about the fundamental concepts Swift uses to enable data-race-free concurrent code.
Traditionally, mutable state had to be manually protected via careful runtime synchronization. Using tools such as locks and queues, the prevention of data races was entirely up to the programmer. This is notoriously difficult not just to do correctly, but also to keep correct over time. Even determining the need for synchronization may be challenging. Worst of all, unsafe code does not guarantee failure at runtime. This code can often seem to work, possibly because highly unusual conditions are required to exhibit the incorrect and unpredictable behavior characteristic of a data race.
More formally, a data race occurs when one thread accesses memory while the same memory is being mutated by another thread. The Swift 6 language mode eliminates these problems by preventing data races at compile time.
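For example, a sketch of the kind of shared mutable state Swift 6 flags, with an actor serializing access instead of a lock:

```swift
actor Counter {
    private var value = 0
    func increment() { value += 1 }
    func current() -> Int { value }
}

let counter = Counter()
await withTaskGroup(of: Void.self) { group in
    for _ in 0..<100 {
        // Safe: every mutation hops through the actor's executor.
        group.addTask { await counter.increment() }
    }
}
print(await counter.current()) // always 100
```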
-
If you’ve been in the Swift ecosystem for many years, then you’ve encountered this error at least once: “Protocol ‘XX’ can only be used as a generic constraint because it has Self or associated type requirements”. Maybe you even had nightmares about it 👻!
It’s indeed one of the most common issues developers face while learning the language. And until not so long ago, it was impossible to “fix”: you had to rethink your code to avoid casting an object to an existential.
Thankfully, that time is now over! Let’s acclaim our saviour: `any`. We’ll dive into a real use case (comparing two `Equatable` objects) to understand how it can be used to solve our issues.
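A sketch of that use case, opening the existential inside a generic helper:

```swift
// Compare two `any Equatable` values by opening one existential and
// conditionally casting the other to the same concrete type.
func isEqual(_ lhs: any Equatable, _ rhs: any Equatable) -> Bool {
    func open<T: Equatable>(_ lhs: T, _ rhs: any Equatable) -> Bool {
        guard let rhs = rhs as? T else { return false }
        return lhs == rhs
    }
    return open(lhs, rhs) // implicitly opened existential (Swift 5.7+)
}

print(isEqual(1, 1))     // true
print(isEqual(1, "one")) // false
```
-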
SwiftUI lets us style portions of text by interpolating `Text` inside another `Text` and applying available text modifiers, such as `foregroundColor()` or `font()`.
Starting from iOS 17 we can apply more intricate styling to ranges within a Text view with `foregroundStyle()`.
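A minimal sketch of both techniques:

```swift
import SwiftUI

struct GreetingView: View {
    var body: some View {
        // Interpolated Text keeps its own modifiers (works before iOS 17).
        Text("Hello, \(Text("world").bold().foregroundColor(.red))!")

        // iOS 17+: foregroundStyle() on Text returns Text, so it composes too.
        Text("Hello, \(Text("world").foregroundStyle(.tint))!")
    }
}
```
-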
A Metal shader library.
-
Modify the payload of a remote notification before it’s displayed on the user’s iOS device.
You may want to modify the content of a remote notification on a user’s iOS device if you need to:
- Decrypt data sent in an encrypted format.
- Download images or other media attachments whose size would exceed the maximum payload size.
- Update the notification’s content, perhaps by incorporating data from the user’s device.
Modifying a remote notification requires a notification service app extension, which you include inside your iOS app bundle. The app extension receives the contents of your remote notifications before the system displays them to the user, giving you time to update the notification payload. You control which notifications your extension handles.
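The extension’s entry point looks roughly like this sketch:

```swift
import UserNotifications

final class NotificationService: UNNotificationServiceExtension {
    override func didReceive(
        _ request: UNNotificationRequest,
        withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void
    ) {
        guard let content = request.content.mutableCopy() as? UNMutableNotificationContent else {
            contentHandler(request.content)
            return
        }
        // Decrypt the payload, download an attachment, or enrich the content here.
        content.title = "\(content.title) [updated]"
        contentHandler(content)
    }
}
```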
-
A case study of gradually modernizing an established mobile application
Incremental replacement of a legacy mobile application is a challenging concept to articulate and execute. However, we believe that by investing in the prerequisites of legacy modernization, it is possible to yield benefits in the long term. This article explores the Strangler Fig pattern and how it can be applied to mobile applications. We chart the journey of an enterprise that refused to accept the high cost and risk associated with a full rewrite of their mobile application. By incrementally developing their new mobile app alongside a modular architecture, they were able to achieve significant uplifts in their delivery metrics.
-
Understand the structure and properties of the objects the system includes in the JSON of a crash report.
Starting with iOS 15 and macOS 12, apps that generate crash reports store the data as JSON in files with an `.ips` extension. Tools for viewing these files, such as Console, translate the JSON to make it easier to read and interpret. The translated content uses field names the article Examining the fields in a crash report describes. Use the following information to understand the structure of the JSON the system uses for these crash reports and how the data maps to the field names found in the translated content.
Typical JSON parsers expect a single JSON object in the body of the file. The IPS file for a crash report contains two JSON objects: an object containing IPS metadata about the report incident and an object containing the crash report data. When parsing the file, extract the JSON for the metadata object from the first line. If the `bug_type` property of the metadata object is `309`, the log type for crash reports, you can extract the JSON for the crash report data from the remainder of the text.
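A sketch of that parsing recipe:

```swift
import Foundation

// Split an .ips file into its metadata object (first line) and report body.
func parseIPS(at url: URL) throws -> (metadata: [String: Any], report: [String: Any])? {
    let text = try String(contentsOf: url, encoding: .utf8)
    guard let newline = text.firstIndex(of: "\n") else { return nil }
    let metaData = Data(text[..<newline].utf8)
    let bodyData = Data(text[text.index(after: newline)...].utf8)
    guard
        let metadata = try JSONSerialization.jsonObject(with: metaData) as? [String: Any],
        let report = try JSONSerialization.jsonObject(with: bodyData) as? [String: Any]
    else { return nil }
    // Callers should check metadata["bug_type"] == "309" before treating
    // the body as crash report data.
    return (metadata, report)
}
```
-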
Identify the signs of a Swift runtime error, and address the crashes runtime errors cause.
Swift uses memory safety techniques to catch programming mistakes early. Optionals require you to think about how best to handle a `nil` value. Type safety prevents casting an object to a type that doesn’t match the object’s actual type.
If you use the `!` operator to force unwrap an optional value that’s `nil`, or if you force a type downcast that fails with the `as!` operator, the Swift runtime catches these errors and intentionally crashes the app. If you can reproduce the runtime error, Xcode logs information about the issue to the console.
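Two tiny examples of code that would trigger these intentional crashes, alongside the safer conditional forms:

```swift
let values: [String: Int] = ["a": 1]

// let n = values["b"]!             // traps: unexpectedly found nil
if let n = values["b"] { print(n) } // handles the missing value instead

let anything: Any = "text"
// let number = anything as! Int      // traps: failed forced downcast
if let number = anything as? Int {    // conditional cast returns nil instead
    print(number)
}
```
-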
Connect your app and a website to provide both a native app and a browser experience.
Associated domains establish a secure association between domains and your app so you can share credentials or provide features in your app from your website. For example, an online retailer may offer an app to accompany their website and enhance the user experience.
Shared web credentials, universal links, Handoff, and App Clips all use associated domains. Associated domains provide the underpinning to universal links, a feature that allows an app to present content in place of all or part of its website. Users who don’t download the app get the same information in a web browser instead of the native app.
To associate a website with your app, you need to have the associated domain file on your website and the appropriate entitlement in your app. The apps in the `apple-app-site-association` file on your website must have a matching Associated Domains Entitlement.
-
Convert XCTests to Swift Testing
Testpiler is an app that allows you to easily convert unit tests written in Swift from XCTest to the new Swift Testing framework. Simply add a folder containing your unit tests, or add individual test files. You can preview a diff of the proposed changes so you know exactly what will happen. When you're ready, you can convert each source file individually, or convert all selected files in a batch.
-
Read the whole formal grammar.
-
You can decrease noise for your team by limiting notifications when your team is requested to review a pull request.
-
A new frontier for AI privacy in the cloud.
Private Cloud Compute (PCC) delivers groundbreaking privacy and security protections to support computationally intensive requests for Apple Intelligence by bringing our industry-leading device security model into the cloud. Whenever possible, Apple Intelligence processes tasks locally on device, but more sophisticated tasks require additional processing power to execute more complex foundation models in the cloud. Private Cloud Compute makes it possible for users to take advantage of such models without sacrificing the security and privacy that they expect from Apple devices.
We designed Private Cloud Compute with core requirements that go beyond traditional models of cloud AI security:
- Stateless computation on personal user data: PCC must use the personal user data that it receives exclusively for the purpose of fulfilling the user’s request. User data must not be accessible after the response is returned to the user.
- Enforceable guarantees: It must be possible to constrain and analyze all the components that critically contribute to the guarantees of the overall PCC system.
- No privileged runtime access: PCC must not contain privileged interfaces that might enable Apple site reliability staff to bypass PCC privacy guarantees.
- Non-targetability: An attacker should not be able to attempt to compromise personal data that belongs to specific, targeted PCC users without attempting a broad compromise of the entire PCC system.
- Verifiable transparency: Security researchers need to be able to verify, with a high degree of confidence, that our privacy and security guarantees for PCC match our public promises.
This guide is designed to walk you through these requirements and provide the resources you need to verify them for yourself, including a comprehensive look at the technical design of PCC and the specific implementation details needed to validate it.
-
Discover SwiftUI like never before with Companion for SwiftUI—an interactive documentation hub covering all SwiftUI elements, whether you’re developing for iOS, macOS, tvOS, or watchOS. The latest update features the complete set of additions from WWDC 2024, bringing its repository to over 3300 entries. Here are some of its key features:
- ✔️ Comprehensive Coverage: Get in-depth insights into SwiftUI views, shapes, protocols, scenes, styles, property wrappers, and environment values across all Apple platforms (iOS, macOS, tvOS, and watchOS).
- ✔️ Interactive Examples: Dive into interactive examples that run within the app. Adjust associated controls to witness immediate changes in views and code, facilitating a better understanding of SwiftUI’s power.
- ✔️ Seamless Integration: Copy and paste examples directly into Xcode for quick and easy execution. Examples are ready to run, making your development process smoother.
- ✔️ Filtering Options: Tailor your learning experience by creating filters to focus on relevant API areas, whether you’re working on a legacy project, exploring the latest WWDC ’23 additions, or researching SwiftUI’s implementation of a specific framework. Switch between multiple tabs, each with its own filter.
- ✔️ Visual Learning: Need to grasp the finer details of a quad curve? No worries! Explore the .addQuadCurve() entry, and drag curve points to instantly visualize how function parameters change. Accelerate your learning curve with instant, hands-on knowledge.
- ✔️ Menu Bar Icon: Quickly find topics using the system’s menu bar icon, allowing you to jump directly to the page you’re looking for.
-
A guide on everything related to Cursor for Apple Platforms development
-
Recently, there’s been much talk and fuss about AI, and whether or not it can improve your development workflow. I wanted to share how AI and its implementation in Cursor have been significantly improving my speed and efficiency.
-
Build iOS/Swift apps using Visual Studio Code
-
Learn 4 ways to refresh views in SwiftUI (a sketch of the `id` technique follows the list):
- Using `@State`
- Using `@Observable`
- Using `.refreshable` on a List
- Using the `id` modifier
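Changing the value passed to `id` gives the subtree a new identity, so SwiftUI discards and rebuilds it:

```swift
import SwiftUI

struct RefreshableCard: View {
    @State private var token = UUID()

    var body: some View {
        VStack {
            CardView()
                .id(token) // new identity -> view state reset and rebuilt
            Button("Refresh") { token = UUID() }
        }
    }
}

struct CardView: View {
    var body: some View {
        Text("Created at \(Date.now.formatted(date: .omitted, time: .standard))")
    }
}
```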
-
Today, I’ll demonstrate how to migrate your Combine code over to AsyncAlgorithms, with a fully open-source tutorial project you can code along with.
-
Uses of the functional programming language include formal mathematics, software and hardware verification, AI for math and code synthesis, and math and computer science education.
-
For those who don’t follow Swift’s development, ABI stability has been one of its most ambitious projects and possibly its defining feature, and it finally shipped in Swift 5. The result is something I find endlessly fascinating, because I think Swift has pushed the notion of ABI stability farther than any language without much compromise.
So I decided to write up a bunch of the interesting high-level details of Swift’s ABI. This is not a complete reference for Swift’s ABI, but rather an abstract look at its implementation strategy. If you really want to know exactly how it allocates registers or mangles names, look somewhere else.
-
This is an extension that allows a Global Actor to initiate a run command similar to MainActor’s. I took the signature from the MainActor definition itself.
```swift
extension CustomActor {
    public static func run<T>(
        resultType: T.Type = T.self,
        body: @CustomActor @Sendable () throws -> T
    ) async rethrows -> T where T: Sendable {
        try await body()
    }
}
```
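For the extension to compile, CustomActor needs to be declared as a global actor; a sketch of the assumed definition and a call site:

```swift
@globalActor
actor CustomActor {
    static let shared = CustomActor()
}

// Usage mirrors MainActor.run: hop to the actor, run the closure, return the value.
let answer = await CustomActor.run { 42 }
```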
-
Turn Haskell expressions into pointfree style in your browser with WASM
-
Inspired by Make, built for convenience
-
Metal provides the lowest-overhead access to the GPU, enabling you to maximize the graphics and compute potential of your app on iOS, macOS, and tvOS. Every millisecond and every bit is integral to a Metal app and the user experience–it’s your responsibility to make sure your Metal app performs as efficiently as possible by following the best practices described in this guide. Unless otherwise stated, these best practices apply to all platforms that support Metal.
-
As one of the early adopters of Apple TV and tvOS, Gilt Groupe was recently selected to present their “Gilt on TV” app at the Apple Keynote event in September.
This presentation covers Gilt's discoveries during the process of building a tvOS app from scratch in Swift.
It was presented at iOSoho on October 12, 2015 in New York City.
-
Resolution by iOS device
-
I don’t think websites were ever intended to be made only by “web professionals.” Websites are documents at heart. Just about everyone knows how to make a document in this digital age, be it Word, Google Docs, Markdown, or something else. HTML shouldn’t be an exception. Sure it’s a bit more technical than other types of documents, but it’s also very special.
It’s the document format of the web. The humble HTML document is ubiquitous. It’s everywhere. If you looked at a website today, you almost certainly saw HTML.
HTML is robust. You could look at a website made today or one made twenty years ago. They both use HTML and they both work. That is an achievement that not many document formats can claim. You also don’t need any special program to make an HTML document. Many exist, and you could use any of them. You could also just open Notepad and write HTML by hand (spoiler: we are going to do just that).
I created this web book because I wanted something for people who don’t consider themselves professional web developers. Imagine if Word documents were only ever created by “Word professionals.” No. Knowing how to write some HTML and put it on the web is a valuable skill that is useful to all sorts of professional and personal pursuits. It doesn’t belong only to those of us who make websites as a career. HTML is for everyone. HTML is for people.
-
Make your app more responsive by examining the event-handling and rendering loop.
Human perception is adept at identifying motion and linking cause to effect through sequential actions. This is important for graphical user interfaces because they rely on making the user believe a certain interaction with a device causes a specific effect, and that the objects onscreen behave sufficiently realistically. For example, a button needs to highlight when a person taps or clicks it, and when someone drags an object across the screen, it needs to follow the mouse or finger.
There are two ways this illusion can break down:
- The time between user input and the screen update is too long, so the app’s UI doesn’t seem like it’s responding instantaneously anymore. A noticeable delay between user input and the corresponding screen update is called a hang. For more information, see Understanding hangs in your app.
- The motion onscreen isn’t fluid like it would be in the real world. An example is when the screen seems to get stuck and then jumps ahead during scrolling or during an animation. This is called a hitch. For more information, see Understanding hitches in your app.
This article covers different types of user interactions and how the event-handling and rendering loop processes events to handle them. This foundational knowledge helps you understand what causes hangs and hitches, how the two are similar, and what differentiates them.
-
Determine the cause for delays in user interactions by examining the main thread and the main run loop.
A discrete user interaction occurs when a person performs a single well-contained interaction and the screen then updates. An example is when someone presses a key on the keyboard and the corresponding letter then appears onscreen. Although the software running on the device needs time to process the incoming user input event and compute the corresponding screen update, it’s usually so quick that a human can’t perceive it and the screen update seems instantaneous.
When the delay in handling a discrete user interaction becomes noticeable, that period of unresponsiveness is known as a hang. Other common terms for this behavior are freeze because the app stops updating, and spin based on the spinning wait cursor that appears in macOS when an app is unresponsive.
Although discrete interactions are less sensitive to delays than continuous interactions, it doesn’t take long for a person to perceive a gap between an action and its reaction as a pause, which breaks their immersive experience. A delay of less than 100 ms in a discrete user interaction is rarely noticeable, but even a few hundred milliseconds can make people feel that an app is unresponsive.
A hang is almost always the result of long-running work on the main thread. This article explains what causes a hang, why the main thread and the main run loop are essential to understanding hangs, and how various tools can detect hangs on Apple devices.
-
Determine the cause of interruptions in motion by examining the render loop.
Human perception is very sensitive to interruptions in motion. When a fluid motion onscreen gets stuck for a short time, even a couple of milliseconds can be noticeable. This type of interruption is known as a hitch. Hitches happen during continuous interactions, like scrolling or dragging, or during animations. Each hitch impacts the user experience, so you want as few hitches as possible in your app.
An interruption in motion occurs when the display doesn’t update at the expected pace. The display doesn’t update in time when the next frame isn’t ready for display, so the frame is late.
A delay due to a late frame often causes the system to skip one or more subsequent frames, which is why such behavior is also referred to as a frame drop. However, dropping a frame is just one potential response the system uses to recover from a late frame, and not every hitch causes a frame drop.
When a frame is late, it’s usually due to a delay occurring somewhere in the render loop. These delays are the result of a delay in the main thread, most often in the commit phase, known as a commit hitch, or a delay in the render phase, known as a render hitch.
-
Reacting to property changes is fairly straightforward using the `@Observable` macro as well. You can simply use the `willSet` or `didSet` property observers to listen for changes.
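A minimal sketch:

```swift
import Observation

@Observable
final class PlayerModel {
    var score = 0 {
        willSet { print("score will become \(newValue)") }
        didSet { print("score was \(oldValue), now \(score)") }
    }
}

let model = PlayerModel()
model.score = 10 // prints both messages
```
-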
CatColab is...
- a collaborative environment for formal, interoperable, conceptual modeling
- an open source software project: find us on GitHub
- a technology created by Topos Institute and our partners: read the credits
While CatColab has a notebook-style interface that draws inspiration from computational notebooks like JupyterLab and structured document editors like Notion and Coda, its conceptual underpinnings are quite different from both of those classes of tools. Here is an overview of the concepts that you'll encounter in CatColab today:
CatColab is not a general-purpose programming or modeling language but is rather an extensible environment for working in domain-specific logics, such as those of database schemas or biochemical regulatory networks. To do anything in the tool besides write text, you'll need to choose a logic.
Models in CatColab are models within a logic, such as a particular database schema or regulatory network. Models are specified declaratively and are well-defined mathematical objects. CatColab is a structure editor for models in a logic, allowing formal declarations to be intermixed with descriptive rich text.
Unlike most computational notebooks, CatColab strictly separates the specification of a model from any outputs derived from it. Depending on the logic, an analysis of a model might include visualization, simulation, identification of motifs, and translation into other formats.
Future versions of CatColab will introduce the further concepts of instances, morphisms, and migrations, to be described when they become available.
-
Add your published Swift package as a local package to your app’s project and develop the package and the app in tandem.
Swift packages are a convenient and lightweight solution for creating a modular app architecture and reusing code across your apps or with other developers. Over time, you may want to develop your published Swift package in tandem with your app, or create a sample app to showcase its features. To develop a Swift package in tandem with an app, you can leverage the behavior whereby a local package overrides a package dependency with the same name:
- Add the Swift package to your app as a package dependency instead of a local package, as described in Editing a package dependency as a local package.
- Develop your app and your Swift package in tandem, and push changes to their repositories.
- If you release a new version of your Swift package or want to stop using the local package, remove it from the project to use the package dependency again.
-
Indicates that the view should receive focus by default for a given namespace.
This modifier sets the initial focus preference when no other view has focus. Use the environment value resetFocus to force a reevaluation of default focus at any time.
The following tvOS example shows three buttons, labeled “1”, “2”, and “3”, in a VStack. By default, the “1” button would receive focus, because it is the first child in the stack. However, the `prefersDefaultFocus(_:in:)` modifier allows button “3” to receive default focus instead. Once the buttons are visible, the user can move down to and focus the “Reset to default focus” button. When the user activates this button, it uses the ResetFocusAction to reevaluate default focus in the `mainNamespace`, which returns the focus to button “3”.

```swift
struct ContentView: View {
    @Namespace var mainNamespace
    @Environment(\.resetFocus) var resetFocus

    var body: some View {
        VStack {
            Button("1") {}
            Button("2") {}
            Button("3") {}
                .prefersDefaultFocus(in: mainNamespace)
            Button("Reset to default focus") {
                resetFocus(in: mainNamespace)
            }
        }
        .focusScope(mainNamespace)
    }
}
```
The default focus preference is limited to the focusable ancestor that matches the provided namespace. If multiple views express this preference, then SwiftUI applies the current platform rules to determine which view receives focus.
-
Prevents the view from updating its child view when its new value is the same as its old value.
-
Use Instruments to analyze the performance, resource usage, and behavior of your apps. Learn how to improve responsiveness, reduce memory usage, and analyze complex behavior over time.
-
Learn how to analyze hangs with Instruments.
The following tutorials show you how to use Instruments to find a hang in an app, analyze what’s causing it to hang, and then try out various solutions to fix the problem. Through multiple iterations of recording and analyzing data — and iterating on changes to your code — you’ll apply fixes and ultimately end up with working code that doesn’t result in a hang. Many of the principles detailed here can also be found in the WWDC23 session, Analyze hangs with Instruments.
-
This has led me to create Global Actors with no custom functionality. This isn’t how most of us are thinking about actors, but it allows us to do some powerful things.
- Avoid dumping too much logic into an Actor. This removes the threat of Massive Actors and leaves us more options as the codebase evolves.
- Separate the logic in our code from how it is run. This is a powerful technique I’ve used for years to allow code I work with to scale.
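A minimal sketch of the idea, a global actor that adds no functionality of its own and exists purely to decide where code runs (`BackgroundActor` and `processRecords` are hypothetical names):

```swift
// A global actor with no custom functionality; it only provides
// an isolation domain that code can opt into.
@globalActor
actor BackgroundActor {
    static let shared = BackgroundActor()
}

// The logic lives in an ordinary function; the annotation alone
// determines where it runs.
@BackgroundActor
func processRecords(_ records: [String]) {
    // Runs isolated to BackgroundActor, off the main actor.
}
```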
-
So assuming you have an Intel Mac, follow these instructions to use Boot Camp to install Windows 11.
-
One of OCaml’s flagship features is what they call “abstract types”. In essence, this lets programmers declare a type, even as a type synonym inside a module and, by hiding its definition in the signature, make it appear abstract to the outside world.
-
Spend time with stories that matter
StoryTime is for balanced teams that want to work with user stories.
Developed by a team of Pivotalumni, it aims to capture the best of tools we’ve used in the past, while improving on their weaknesses.
There are many tools available for the problems we want to address. We think you’ll find StoryTime compelling if you agree with some rough guiding principles:
- Teams (not individuals, or even pairs) are the fundamental unit of allocation.
- There is no story “priority,” only position.
- Acceptance is an important concept.
- It’s better to work to select the right tools than to configure them to behave correctly.
- User stories are flexible, powerful, and useful for thinking and communicating about software.
StoryTime is very new. Because tools people count on are being retired, we’re pushing to preview as early as we can tolerate. We intend to invite people who sign up for updates to use the alpha version soon. We'll also soon share a public roadmap, in addition to writing more about our vision for StoryTime.
We're a small, bootstrapping team. We’re not going to take VC money to seek market share or usage numbers, or an exit. We don’t want to exit. We want this software to be available to people it fits with, for a price that's well worth it. We’re working to make that happen.
-
Learn how to harness the power of Swift’s advanced type system, and make it a powerful ally and assistant.
-
It seems at some point, even though `UserDefaults` is intended for non-sensitive information, it started getting marked as data that needs to be encrypted and cannot be accessed until the user unlocks their device. I don’t know if it’s because Apple found developers were storing sensitive data in there even when they shouldn’t be, but the result is that even if you just store something innocuous, like what color scheme the user has set for your app, that theme cannot be accessed until the device is unlocked.
-
This article goes in-depth on how to create Shell scripts to manage many parts of a Swift Package’s lifecycle. If you’re just after the scripts and basic info, have a look at the SwiftPackageScripts project.
-
In iOS 15 and later, the system may, depending on device conditions, prewarm your app — launch nonrunning application processes to reduce the amount of time the user waits before the app is usable. Prewarming executes an app’s launch sequence up until, but not including, when `main()` calls `UIApplicationMain(_:_:_:_:)`. This provides the system with an opportunity to build and cache any low-level structures it requires in anticipation of a full launch.
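A common community technique, not an official API, is to check the `ActivePrewarm` environment variable early in the launch sequence to detect a prewarmed start (a minimal sketch, assuming the variable behaves as widely reported):

```swift
import Foundation

// Reportedly set by the system during prewarming.
let isPrewarmed = ProcessInfo.processInfo.environment["ActivePrewarm"] == "1"
```
-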
Since reference types do not directly store their data, we only incur reference-counting costs when copying them. There is more involved than just incrementing and decrementing an integer: each operation requires several levels of indirection and must be performed atomically, since the heap can be shared between multiple threads at the same time.
-
App icons are the first touchpoint with your user, and they serve as the business card of your product. Adding depth can elevate your app’s personality in an impactful way! Make sure to experiment with sketches, blending modes, and shadows to find the rendering process that best conveys your style and the level of realism you’re looking for.
-
Reminder: Apple Watches use 32-bit pointers
-
Build retro games using WebAssembly for a fantasy console
-
How to run Swift Data and Core Data operations in the background and share models across concurrency contexts
Core Data is a powerful framework that allows you to manage the persistent model layer of your application and, while it is a first-party solution that has been a standard in the Apple ecosystem for many years, it is dated and is not straightforward to use.
In fact, the community has been asking for many years for a more modern and easier-to-use alternative to Core Data, and those wishes were finally granted with the introduction of the SwiftData framework at WWDC23. While SwiftData is much simpler to set up and interact with than Core Data, it is a wrapper around Core Data and, as such, it inherits a lot of the baggage that developers dreaded when working with Core Data.

One of the biggest challenges that seems to catch a lot of people off guard when working with Core Data, and by extension SwiftData, is managing models across different concurrency contexts. Swift Data and Core Data models are not `Sendable` or thread-safe, so they are not safe to share across different threads.
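One widely used Core Data pattern, sketched below under the assumption of an existing `NSPersistentContainer` and a hypothetical `Item` entity, is to pass an `NSManagedObjectID` across contexts instead of the model object itself, and re-fetch the object on the destination context:

```swift
import CoreData

// Hypothetical example: rename an item on a background context.
func renameItem(_ objectID: NSManagedObjectID,
                to newName: String,
                in container: NSPersistentContainer) {
    container.performBackgroundTask { context in
        // Re-materialize the model in this context via its ID.
        guard let item = try? context.existingObject(with: objectID) as? Item else { return }
        item.name = newName
        try? context.save()
    }
}
```
-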
SwiftUI provides a powerful mechanism called the environment, allowing data to be shared across views in a view hierarchy. This is particularly useful when we need to pass data or configuration information from a parent view down to its children, even if they’re many levels deep. It enhances the flexibility of SwiftUI views and decouples the data source from its destination.
SwiftUI includes a predefined list of values stored in the EnvironmentValues struct. These values are populated by the framework based on system state and characteristics, user settings, or sensible defaults. We can access these predefined values to adjust the appearance and behavior of custom SwiftUI views, and we can also override some of them to influence the behavior of built-in views. Additionally, we can define our own custom values by extending the `EnvironmentValues` struct.

In this post, we'll explore various ways to work with the SwiftUI environment, including reading and setting predefined values, creating custom environment keys, and using the environment to pass down actions and observable classes.
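As a taste of the custom-key part, a minimal sketch (the `accentStyle` value is a hypothetical example):

```swift
import SwiftUI

private struct AccentStyleKey: EnvironmentKey {
    static let defaultValue: Color = .blue
}

extension EnvironmentValues {
    var accentStyle: Color {
        get { self[AccentStyleKey.self] }
        set { self[AccentStyleKey.self] = newValue }
    }
}

// A child reads the value; any ancestor can override it with
// .environment(\.accentStyle, .orange)
struct Badge: View {
    @Environment(\.accentStyle) private var accent

    var body: some View {
        Circle().fill(accent)
    }
}
```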
-
Create intuitive and easily manipulated user-interactive controls for your tvOS app.
On Apple TV, people use a remote or game controller to navigate through interface elements like movie posters, apps, or buttons, highlighting each item as they come to it. The highlighted item is said to be focused or in focus. It appears elevated or otherwise distinct from other items. An item is considered focused when the user has highlighted it, but not selected it. The user moves focus by navigating through different UI items, which triggers a focus update.
-
Parallax is a subtle visual effect the system uses to convey depth and dynamism when an element is in focus. As an element comes into focus, the system elevates it to the foreground, gently swaying it while applying illumination that makes the element’s surface appear to shine. After a period of inactivity, out-of-focus content dims and the focused element expands.
Layered images are required to support the parallax effect.
-
I'm so sorry someone put you up to this. But I'll provide some tips that can maybe help out. I'm going to cover what a pixel artist needs to know and try to avoid any technical explanations for why things are such a pain (cough NTSC) (and I guess Woz isn't entirely blameless here).
Update! Was not expecting this to be so popular, sorry I don't have the screen captures ready. I don't have a working CRT anymore so captures are just with a capture card. Applewin does a really good job with the actual colors (it matches what you'd see on a CRT better than the capture device does). The "real world" capture will mostly show how the aspect ratio is a bit more squashed horizontally. This is most noticeable if you're drawing circles (and this is another case where it's adjustable on a real TV so there's not necessarily a "right" answer to what the value should be). Capture card output is important in one case: if I am capturing for a demoparty it's going to look like it does from the capture card so keep that in mind.
Anyway, these are your pixelart options, roughly ranked in level of complications you'll encounter trying to make things work. (Honestly if you're making art for me it might just be easier to make ZX Spectrum or IBM CGA art as those convert relatively easily to Apple II).
- Monochrome hi-res (`280x192`)
- Four color hi-res (`140x192`)
- Six color hi-res (`140x192`)
- Fifteen color lo-res (`40x48`)
- Text mode / Mouse-text
- Fifteen color double lo-res (`80x48`)
- Fifteen color double hi-res (`140x192`)
- Fifteen color cycle-counted mode (`40x96`)
There are so many people advocating for the use of URLProtocol for mocking HTTP requests in Swift that I couldn’t believe how quickly it fell apart for me. In fact, I found more writing about using URLProtocol as a mock than I did about using URLProtocol for its intended purpose. This post is about the shortcomings that I encountered, and how I solved them by mocking URLSession instead.
Use ActivityKit to receive push tokens and to remotely start, update, and end your Live Activity with ActivityKit notifications.
ActivityKit offers functionality to start, update, and end Live Activities from your app. Additionally, it offers functionality to receive push tokens. You can use push tokens to update and end Live Activities with ActivityKit push notifications that you send from your server to Apple Push Notification service (APNs). Starting with iOS 17.2 and iPadOS 17.2, you can also start Live Activities by sending ActivityKit push notifications to push tokens.
Starting with iOS 18 and iPadOS 18, you can use broadcast push notifications to update Live Activities for a large audience. You can subscribe to updates on a channel using ActivityKit, and update or end Live Activities for everyone subscribed by sending an ActivityKit push notification on a channel to APNs from your remote server.
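For a flavor of the app-side API, a minimal sketch of requesting a Live Activity with a push token and observing token updates (the `DeliveryAttributes` type is hypothetical):

```swift
import ActivityKit

// Hypothetical attributes type for illustration.
struct DeliveryAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var eta: Date
    }
}

func startDeliveryActivity() throws {
    let activity = try Activity.request(
        attributes: DeliveryAttributes(),
        content: .init(state: .init(eta: .now), staleDate: nil),
        pushType: .token
    )
    Task {
        for await tokenData in activity.pushTokenUpdates {
            let token = tokenData.map { String(format: "%02x", $0) }.joined()
            // Send the token to your server so it can target this
            // Live Activity through APNs.
            print("push token:", token)
        }
    }
}
```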
There’s a lot to say about animations — we could discuss the theory, the different curves, and how to ensure continuity. We could also dive into the design of animations. However, today we’ll focus solely on how to implement animations in SwiftUI. There are several ways to do this, but we’ll cover four main approaches.
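As a preview of the simplest of those approaches, explicit animation with `withAnimation` (a minimal sketch; the view and state are hypothetical):

```swift
import SwiftUI

struct ExpandableCard: View {
    @State private var isExpanded = false

    var body: some View {
        VStack {
            Button("Toggle") {
                // Explicit animation: wrap the state change.
                withAnimation(.easeInOut(duration: 0.3)) {
                    isExpanded.toggle()
                }
            }
            Rectangle()
                .frame(height: isExpanded ? 200 : 80)
        }
    }
}
```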
Earlier this year, I read Martin Uecker's proposal N3212 to add parametric polymorphism to C. It's easy to scoff at the idea of adding generic programming to plain C, when C++ already has templates, and nearly every modern systems language has some form of generic programming support, but it got me thinking there is an opportunity to do something meaningfully and usefully different from those other languages. C++ templates rely on monomorphization, meaning that when you write a generic function or type, the compiler generates a distinct specialization for every set of types you use it with. Most other systems-ish languages follow C++'s lead, because monomorphization allows each specialization to be individually emitted and optimized specifically for the set of types it's instantiated on, and the resulting specializations don't need any runtime support to handle different types. However, monomorphization also implies a much more complicated compilation and linking model, where the source code (or some intermediate representation thereof) of generic definitions has to be consistently available to the compiler in order to generate new instantiations as needed.
Welcome to the 88x31 archive on hellnet.work! This site contains 31,119 unique* 88x31 buttons that I scraped from the GeoCities archives compiled by the incredible ARCHIVE TEAM before GeoCities' demise in late 2009.
There is also a background page with stats and interesting links!
Update 07/24: A scan of previously unchecked geocities subsites revealed even more 88x31 buttons in the archive! Discovered 1,862 new 88x31 banners which raises the total to 31,119!
The support library and macros allowing Swift code to easily call into Java libraries.
SwiftPM Snippets are one of the most powerful features of the Swift Package Manager, and yet two years after their introduction few developers know they exist. This tutorial will explain some of the advantages of using SwiftPM Snippets and show you how to add Snippets to a Swift package.
In this tutorial, we will use the Apple DocC tool to preview and iterate on Snippets locally. The DocC tool itself does not support rendering clickable references within Snippets, however the finished SwiftPM project containing Snippets can be published to a platform like Swiftinit where the Snippets will be rendered with clickable references, allowing readers to interact with the symbols contained within them and navigate to supplemental documentation.
All information about how to easily debug tvOS.
You'll likely see a lot of Unison code during the Unison Forall conference. Here are some basics to get you started.
Learn how to quickly pair your iPhone, iPad, iPod touch, or Mac to your Apple TV 4K or Apple TV HD.
With just a tap, you can pair your iOS device with your Apple TV so you can use your iOS device as a remote or keyboard. You can also use AirPlay* and screen sharing without having to enter a four-digit pin each time. Here's how:
- On your Apple TV, go to Settings > Remotes and Devices > Remote App and Devices.
- Unlock your iOS device and bring it close to your Apple TV.
- When you see a message on your iOS device that says Pair Apple TV, tap Pair.
- On your iOS device, enter the four-digit pin that appears on your TV.
- When paired, your iOS device appears under Devices on your Apple TV.
-
I am part of a Lean formalization project in analytic number theory (using Lean 4). I would like your assistance on one step in the formalization, which is to deduce one version $\sum_{p \leq x} \log p = x + o(x)$ of the prime number theorem from another version $\sum_{n \leq x} \Lambda(n) = x + o(x)$. The code is provided below, with both of the forms of the PNT given with "sorry"s in their proof. What I would like to do is to fill in the "sorry" for chebyshev_asymptotic (leaving the sorry for WeakPNT unfilled). I understand that this will be dependent on the methods available in Mathlib, and on the precise version of Lean 4 used, which may not be in your training data. However, if you can perhaps provide a plausible breakdown of the possible proof of chebyshev_asymptotic into smaller steps, each of which can be filled at present by a further sorry, we can start from there, see if it compiles, and then work on individual sorries later.
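For reference, the underlying estimate is the standard bound that the two sums differ only by the contribution of proper prime powers:

$$\sum_{n \le x} \Lambda(n) - \sum_{p \le x} \log p = \sum_{k \ge 2} \sum_{p \le x^{1/k}} \log p = O\left(\sqrt{x}\,\log^2 x\right) = o(x),$$

so chebyshev_asymptotic should follow from WeakPNT once this error term is bounded.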
Create, organize, and annotate symbol images using SF Symbols.
SF Symbols 4 offers a set of over 4,000 consistent, highly configurable symbol images that you can use in your app. You can apply stylistic traits typically associated with text, such as color, text style, weight, and scale. Symbols contain additional traits that allow them to integrate seamlessly with surrounding text, and adapt to platform features like Dynamic Type and Dark Mode.
You can create your own custom symbol images with the same capabilities that SF Symbols provides. To create your custom symbol:
- Export an SVG file from the SF Symbols app.
- Edit the SVG file in a vector-drawing app.
- Export the file from your drawing app as an SVG file.
- Validate the SVG file using the SF Symbols app.
- Import the custom symbol into the SF Symbols app and organize it into a group.
- Add annotations, if necessary.
- Export a template file for distribution.
One way to begin creating your own symbol is by basing it on an existing symbol you find in the SF Symbols app. For example, the circle symbol can give you a great reference point to start working with.
The Swift toolchain for Android is the culmination of many years of community effort, in which we (the Skip team) have played only a very small part.
Even before Swift was made open-source, people have been tinkering with getting it running on Android, starting with Romain Goyet’s “Running Swift code on Android” attempts in 2015, which got some basic Swift compiling and running on an Android device. A more practical example came with Geordie J’s “How we put an app in the Android Play Store using Swift” in 2016, where Swift was used in an actual shipping Android app. Then in 2018, Readdle published “Swift for Android: Our Experience and Tools” on integrating Swift into their Spark app for Android. These articles provide valuable technical insight into the mechanics and complexities involved with cross-compiling Swift for a new platform.
In more recent years, the Swift community has had various collaborative and independent endeavors to develop a usable Swift-on-Android toolchain. Some of the most prominent contributors on GitHub are @finagolfin, @vgorloff, @andriydruk, @compnerd, and @hyp. Our work merely builds atop their tireless efforts, and we expect to continue collaborating with them in the hopes that Android eventually becomes a fully-supported platform for the Swift language.

Looking towards the future, we are eager for the final release of Swift 6.0, which will enable us to publish a toolchain that supports all the great new concurrency features, as well as the Swift Foundation reimplementation of the Foundation C/Objective-C libraries, which will give us the ability to provide better integration between Foundation idioms (bundles, resources, user defaults, notifications, logging, etc.) and the standard Android patterns. A toolchain is only the first step in making native Swift a viable tool for building high-quality Android apps, but it is an essential component that we are very excited to be adding to the Skip ecosystem.
I recently ran into a funny bug with deep links.
Sometimes, when tapping a push notification, some users reported the destination screen appearing twice - the app would open, navigate to the correct screen, but the screen push transition would happen twice.
I began investigating, unaware how deep this rabbit hole would go.
Sets the preferred visibility of the non-transient system views overlaying the app.
Use this modifier to influence the appearance of system overlays in your app. The behavior varies by platform.
In iOS, the following example hides every persistent system overlay. In visionOS 2 and later, the SharePlay Indicator hides if the scene is shared through SharePlay, or not shared at all. During screen sharing, the indicator always remains visible. The Home indicator doesn’t appear without specific user intent when you set visibility to hidden. For a WindowGroup, the modifier affects the visibility of the window chrome. For an ImmersiveSpace, it affects the Home indicator.
Affected non-transient system views can include, but are not limited to:
- The Home indicator.
- The SharePlay indicator.
- The Multitasking Controls button and Picture in Picture on iPad.
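For example, a minimal sketch:

```swift
import SwiftUI

struct ImmersiveContent: View {
    var body: some View {
        Color.black
            .ignoresSafeArea()
            // Ask the system to hide overlays such as the Home indicator.
            .persistentSystemOverlays(.hidden)
    }
}
```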
-
See an overview of potential source compatibility issues.
Swift 6 includes a number of evolution proposals that could potentially affect source compatibility. These are all opt-in for the Swift 5 language mode.
Encapsulate view-specific data within your app’s view hierarchy to make your views reusable.
Store data as state in the least common ancestor of the views that need the data to establish a single source of truth that’s shared across views. Provide the data as read-only through a Swift property, or create a two-way connection to the state with a binding. SwiftUI watches for changes in the data, and updates any affected views as needed.
Don’t use state properties for persistent storage because the life cycle of state variables mirrors the view life cycle. Instead, use them to manage transient state that only affects the user interface, like the highlight state of a button, filter settings, or the currently selected list item. You might also find this kind of storage convenient while you prototype, before you’re ready to make changes to your app’s data model.
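For example, a minimal sketch of transient UI state held in a `@State` property:

```swift
import SwiftUI

struct FilterBar: View {
    // Transient UI state: it lives and dies with the view.
    @State private var showFavoritesOnly = false

    var body: some View {
        Toggle("Favorites only", isOn: $showFavoritesOnly)
    }
}
```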
A control for selecting from a set of mutually exclusive values by index.
So, how far are we away from actually working without builds in HTML, CSS and Javascript? The idea of “buildless” development isn’t new - but there have been some recent improvements that might get us closer. Let’s jump in.
The obvious tradeoff for a buildless workflow is performance. We use bundlers mostly to concatenate files for fewer network requests, and to avoid long dependency chains that cause "loading waterfalls". I think it's still worth considering, but take everything here with a grain of performance salt.
Impactful Technical Leadership
As an engineering manager, you almost always have someone in your company to turn to for advice: a peer on another team, your manager, or even the head of engineering. But who do you turn to if you're the head of engineering? Engineering executives have a challenging learning curve, and many folks excitedly start their first executive role only to leave frustrated within the first 18 months.
In this book, author Will Larson shows you ways to obtain your first executive job and quickly ramp up to meet the challenges you may not have encountered in non-executive roles: measuring engineering for both engineers and the CEO, company-scoped headcount planning, communicating successfully across a growing organization, and figuring out what people actually mean when they keep asking for a "technology strategy."
The Cultural Atlas is an educational resource providing comprehensive information on the cultural background of Australia’s migrant populations. The aim is to improve social cohesion and promote inclusion in an increasingly culturally diverse society.
The `oklch()` functional notation expresses a given color in the Oklab color space. `oklch()` is the cylindrical form of `oklab()`, using the same `L` axis, but with polar Chroma (`C`) and Hue (`h`) coordinates.
Paste HEX/RGB/HSL to convert to OKLCH
Picking colors and creating balanced color palettes with Figma is not an easy task. HSL and HSB are not perceptually uniform, and HSL's lightness is relative to the current hue, so for each hue, the real perceived 50% lightness is not at L 50.

We have the same problem with hue: if we make a palette from hue 0 to 70 with the same incremental value, we'll get a palette that is not perceptually progressive; some hue changes will seem bigger than others.

We also have a problem known as the “Abney effect”, mainly in the blue hues. If we take hue 240, it shifts from blue to purple when we update the lightness.

OkColor solves all these problems and more: its params are reliable and uniform, so you know what you'll get.

If we change the hue of a color in OkLCH and keep the same lightness value, we know that the resulting color will have the same perceived lightness.

You can also easily create perceptually uniform color palettes, and do more advanced things with OkLCH like picking colors in P3 space and using relative chroma (see this thread for more info).
Use TypeScript as your preprocessor. Write type‑safe, locally scoped classes, variables and themes, then generate static CSS files at build time.
Learn the principles of the App Intents framework, like intents, entities, and queries, and how you can harness them to expose your app's most important functionality right where people need it most. Find out how to build deep integration between your app and the many system features built on top of App Intents, including Siri, controls and widgets, Apple Pencil, Shortcuts, the Action button, and more. Get tips on how to build your App Intents integrations efficiently to create the best experiences on every surface while still sharing code and core functionality.
Build, compile, and execute compute graphs utilizing all the different compute devices on the platform, including GPU, CPU, and Neural Engine.
Metal Performance Shaders Graph provides high-performance, energy-efficient computation on Apple platforms by leveraging different hardware compute blocks. You can use this framework to generate a symbolic compute graph of operations, where each operation can output a set of tensors used as edges of the graph. The tensors represent multidimensional data that objects like MTLBuffer or MTLTexture can back. After you construct the graph, you can compile it into an executable to optimize for performance and subsequently run the executable on your input data. This framework also provides the ability to serialize the executables and load executables from a serialized `.mpsgraphpackage`.
Hummingbird takes advantage of Swift to make it easy and enjoyable to create robust backends.
A crowd sourced repository for examples of Swift's native Regex type.
Swift IDE
Write and run Swift code easily and professionally!
Swifty Compiler app is a great way to get an algorithm or method down on the go and make sure it works.
You can use it as a playground to test Swift code quickly or review concepts.
Lessons for Individual Contributors and Managers from 10 Years at Google
In this insightful and comprehensive guide, Addy Osmani shares more than a decade of experience working on the Chrome team at Google, uncovering secrets to engineering effectiveness, efficiency, and team success. Engineers and engineering leaders looking to scale their effectiveness and drive transformative results within their teams and organizations will learn the essential principles, tips, and frameworks for building highly effective engineering teams.
Osmani presents best practices and proven strategies that foster engineering excellence in organizations of all sizes. Through practical advice and real-world examples, Leading Effective Engineering Teams empowers you to create a thriving engineering culture where individuals and teams can excel. Unlock the full potential of your engineering team and achieve unparalleled success by harnessing the power of trust, commitment, and accountability.
Swift is an exciting Open Source programming language developed by Apple. Swift on RISC-V is all about getting Swift onto RISC-V devices. This can be anything from small developer boards and IOT devices to high-performance cloud servers and PCs.
I am Neil Jones, the engineer responsible for porting Swift to RISC-V and the creator of the “Swift on RISC-V” project. Additionally, I am the creator and maintainer of the “Swift Community Apt Repository”, which includes building and publishing all packages hosted on the repository.
Want to install Swift on riscv64 in 3 easy steps using the Swift Community Apt Repository? Let’s dive in!
From Apple's Data and Privacy page, you can request to transfer the playlists that you’ve made in Apple Music to YouTube Music.
- When you transfer playlists to YouTube Music, they aren’t deleted from Apple Music.
- The transfer process typically takes a few minutes, although it might take up to several hours depending on the number of playlists that you’re transferring.
-
You can request to transfer the playlists that you've made in YouTube Music to Apple Music.
- When you transfer playlists to Apple Music, they aren’t deleted from YouTube Music.
- The transfer process typically takes a few minutes, although it might take up to several hours depending on the number of playlists that you’re transferring.
-
Hi everyone. For Embedded Swift and other low-overhead and performance sensitive code bases, we're looking into improving Swift's support for fixed-capacity data structures with inline storage. @Alejandro has been working on adding support for integer generic parameters, which is one step towards allowing these sorts of types to be generic over their capacity.
This may be going even further off-topic but it feels like everyone is trying to find workarounds for not being able to access swift's built in SwiftSyntax. That's why I wrote Swift macros without requiring swift-syntax last year and why I tried to discuss Passing Syntax directly to the macro without swift-syntax a few days ago, and this project, and I assume many more POCs.
Drag, drop, done.
Rewrite Git history with a single drag-and-drop. Undo anything with ⌘Z. All speed, no bumps.
Keyoxide is a decentralized tool to create and verify decentralized online identities.
Just like passports for real-life identities, Keyoxide can be used to verify the online identity of people, to make sure one is interacting with the person they intended and not an imposter.
Unlike real life passports, Keyoxide works with online identities or "personas", meaning these identities can be anonymous and one can have multiple separate personas to protect their privacy, both online and in real life.
Here is what a Keyoxide profile looks like.
Get started and create your own!
Swift Macros, while powerful, can hinder build times. This blog post explains why and what we can do to mitigate the issue.

Swift Macros were introduced in September 2023 alongside Xcode 15 and have become a powerful tool for developers to leverage the compiler to generate code. The community quickly adopted them and started building and sharing them as Swift Packages that teams could integrate into their projects. At Tuist, we started using Mockable as a tool to generate mocks from protocols, which we had previously been doing manually.
However, Swift Macros quickly revealed a serious challenge: they can significantly increase build times, causing slow feedback cycles both locally and in CI environments. This blog post aims to explain where the build time slowness comes from, what potential solutions we might see Apple adopting, and what we can do in the meantime to mitigate the issue.
There’s a technology that ticks all the boxes for what a Swift Macro needs:
- A way to run safely in a runtime.
- A way to ship a compiled version of it that runs in any version of the runtime.
That technology is WebAssembly, and Kabir Oberai had the brilliant idea to support that as the technology to run Swift Macros. And thanks to the WasmKit runtime, the problem is not only solved for the Darwin platform but also for Windows and Linux. There’s an ongoing conversation in the Swift Community forum, so hopefully, we’ll see this technology being adopted soon, which will require Swift Macro authors to compile their Swift Macros to .wasm binaries and ship them alongside the source code.
Spatial Computing with visionOS
Step into the world of visionOS development with SwiftUI, RealityKit, and ARKit.
Access the elements of a collection.
Classes, structures, and enumerations can define subscripts, which are shortcuts for accessing the member elements of a collection, list, or sequence. You use subscripts to set and retrieve values by index without needing separate methods for setting and retrieval. For example, you access elements in an `Array` instance as `someArray[index]` and elements in a `Dictionary` instance as `someDictionary[key]`.

You can define multiple subscripts for a single type, and the appropriate subscript overload to use is selected based on the type of index value you pass to the subscript. Subscripts aren’t limited to a single dimension, and you can define subscripts with multiple input parameters to suit your custom type’s needs.
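For example, a minimal custom subscript (the classic times-table illustration):

```swift
struct TimesTable {
    let multiplier: Int

    subscript(index: Int) -> Int {
        multiplier * index
    }
}

let threeTimesTable = TimesTable(multiplier: 3)
print(threeTimesTable[6]) // prints "18"
```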
As a developer for Apple platforms, you probably work on multiple projects with different coding styles and conventions, and find yourself adjusting Xcode’s editor settings every time you switch between projects. This can be a tedious process that you might forget or overlook and, if the project does not have a linter that enforces the coding style, you might end up with inconsistent code formatting across the codebase.
Thankfully Xcode 16 adds support for EditorConfig files, which allows you to define Xcode editor settings in a programmatic way on a per-project basis. In this article, you will learn how to set up EditorConfig files in Xcode and what settings are supported at this time.
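As a starting point, a minimal `.editorconfig` sketch (these are standard EditorConfig keys; which ones Xcode honors should be checked against the supported list):

```
# Top-most EditorConfig file
root = true

[*]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
```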
Learn how you can use Swift 5.7 to design advanced abstractions using protocols. We'll show you how to use existential types, explore how you can separate implementation from interface with opaque result types, and share the same-type requirements that can help you identify and guarantee relationships between concrete types. To get the most out of this session, we recommend first watching “Embrace Swift generics" from WWDC22.
So those are my favorite SwiftUI additions from WWDC 2024. As we have seen in recent years, Apple is gradually converging the OSes, especially with SwiftUI, which means that a lot of code is no longer platform-specific. But as developers, we still have the responsibility to make our apps look, feel, and behave in a way that suits the platform, whatever it is.

One other point is that a lot of resources will state that they are for iOS, but many of them are totally valid for macOS too. Don’t skip an article or video just because it doesn’t label itself as specifically for macOS.
Request permission to display alerts, play sounds, or badge the app’s icon in response to a notification.
Local and remote notifications get a person’s attention by displaying an alert, playing sounds, or badging your app’s icon. These interactions occur when your app isn’t running or is in the background. They let people know that your app has relevant information for them to view. Because a person might consider notification-based interactions disruptive, you must obtain permission to use them.
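For example, a minimal sketch of requesting permission with the UserNotifications framework:

```swift
import UserNotifications

func requestNotificationPermission() async {
    let center = UNUserNotificationCenter.current()
    do {
        let granted = try await center.requestAuthorization(options: [.alert, .sound, .badge])
        print("Notifications permitted:", granted)
    } catch {
        print("Authorization request failed:", error)
    }
}
```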
Learn what to do if your Apple devices don’t see Apple push notifications when connected to a network.
If you use a firewall or private Access Point Name for cellular data, your Apple devices must be able to connect to specific ports on specific hosts:
- TCP port 5223 to communicate with APNs.
- TCP port 443 or 2197 to send notifications to APNs.
TCP port 443 is used during device activation, and afterwards for fallback if devices can't reach APNs on port 5223. The connection on port 443 uses a proxy as long as the proxy allows the communication to pass through without decrypting.
The APNs servers use load balancing, so your devices don't always connect to the same public IP address for notifications. It's best to let your device access these ports on the entire 17.0.0.0/8 address block, which is assigned to Apple.
Taking URLs beyond the Web in SwiftUI
Enter `openURL`, an Environment Key Value that, when summoned, allows you to do as it is named: open a URL. Particularly allowing you to do so somewhat programmatically within a `View`.

But `openURL` is of type `OpenURLAction`, which is exposed to us and gives us the ability to override `openURL` in the Environment.

Let’s dig in, together, to understand how these work and for what purposes we would want to use them. We’ll even learn a few creative tricks that can bust open how we can incorporate links better in our app experiences.
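A minimal sketch of both sides, reading the action from the environment and overriding it for a subtree:

```swift
import SwiftUI

struct LinkRow: View {
    @Environment(\.openURL) private var openURL

    var body: some View {
        Button("Visit site") {
            openURL(URL(string: "https://example.com")!)
        }
        // Override the action for this subtree, e.g. to intercept links.
        .environment(\.openURL, OpenURLAction { url in
            print("Intercepted \(url)")
            return .handled
        })
    }
}
```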
This document describes how to set up a development loop for people interested in contributing to Swift.
If you are only interested in building the toolchain as a one-off, there are a couple of differences:
- You can ignore the parts related to Sccache.
- You can stop reading after Building the project for the first time.
-
Swift is a mature and powerful language that can be used way beyond development for Apple platforms. Due to its low memory footprint, performance and safety features, it has become a popular choice for server-side development.
One particular use case where Swift shines is in the development of Serverless applications using AWS Lambdas and, since I have been building and deploying them for many use cases for a while now, I thought I would share my experience and some tips in this comprehensive guide.
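For a flavor of what that looks like, a minimal sketch using the swift-aws-lambda-runtime package (the exact handler protocol varies across package versions, so treat the names here as assumptions):

```swift
import AWSLambdaRuntime

@main
struct HelloHandler: SimpleLambdaHandler {
    // Echo handler: receives a string event, returns a greeting.
    func handle(_ event: String, context: LambdaContext) async throws -> String {
        "Hello, \(event)!"
    }
}
```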
Use the keyboard, mouse, or trackpad of your Mac to control up to two other nearby Mac or iPad devices, and work seamlessly between them.
You can instantly send bitcoin to any `$cashtag` or another Lightning-compatible wallet for free with Cash App.
Use a scene-based life cycle in SwiftUI while keeping your existing codebase.
Take advantage of the declarative syntax in SwiftUI and its compatibility with spatial frameworks by moving your app to the SwiftUI life cycle.
Moving to the SwiftUI life cycle requires several steps, including changing your app’s entry point, configuring the launch of your app, and monitoring life-cycle changes with the methods that SwiftUI provides.
A type that represents the structure and behavior of an app.
Create an app by declaring a structure that conforms to the App protocol. Implement the required body computed property to define the app’s content:
```swift
@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Hello, world!")
        }
    }
}
```

Precede the structure’s declaration with the @main attribute to indicate that your custom App protocol conformer provides the entry point into your app. The protocol provides a default implementation of the main() method that the system calls to launch your app. You can have exactly one entry point among all of your app’s files.
Compose the app’s body from instances that conform to the Scene protocol. Each scene contains the root view of a view hierarchy and has a life cycle managed by the system. SwiftUI provides some concrete scene types to handle common scenarios, like for displaying documents or settings. You can also create custom scenes.
Improve your UI test’s stability by handling interface changes that block the UI elements under test.
Use XCTestCase UI interruption monitors to handle situations in which unrelated UI elements might appear and block the test’s interaction with elements in the workflow under test. The following situations could result in a blocked test:
- Your app presents a modal view that takes focus away from the UI under test, as can happen, for example, when a background task fails and you notify the user of the failure.
- Your app performs an action that causes the operating system to present a modal UI. An example is an action that presents a photo picker, which may make the system request access to photos if the user hasn’t already granted it.
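A minimal sketch of handling the second situation, dismissing a system photos permission alert during a test (button and label names are assumptions):

```swift
import XCTest

final class PickerFlowTests: XCTestCase {
    func testPhotoPickerFlow() {
        let app = XCUIApplication()
        app.launch()

        // Handle a system permission alert if it appears mid-test.
        addUIInterruptionMonitor(withDescription: "Photos Permission") { alert in
            let allow = alert.buttons["Allow Full Access"]
            if allow.exists {
                allow.tap()
                return true // handled
            }
            return false
        }

        app.buttons["Pick Photo"].tap()
        // Interact with the app so the monitor has a chance to fire.
        app.swipeUp()
    }
}
```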
-
Apple provided a great shortcut for customizing different View layouts just by passing some parameters within a closure syntax. With that, you can manage complex and different contexts just by defining the types of parameters you expect for your component and then mapping the parameter types into the respective block builders to produce different layouts. This makes SwiftUI an even more powerful tool and improves the reusability of your code. I hope this helps you simplify your Views and that you enjoyed it ;).
Remote push notifications are messages that app developers can send to users directly on their devices from a remote server. These notifications can appear even if the app is not open, making them a powerful tool for re-engaging users or delivering timely information. They are different from local notifications, which are scheduled and triggered by the app itself on the device.
Adding remote notifications capability to an iOS app is quite an involved process that includes several steps and components. This post will walk you through all the necessary setup so that you can enable remote push notification functionality in your iOS project.
Note that to be able to fully configure and test remote push notifications, you will need an active Apple developer account.
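Once the configuration is in place, the in-app portion is small; a minimal sketch of registering and receiving the device token in an app delegate:

```swift
import UIKit

final class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil) -> Bool {
        application.registerForRemoteNotifications()
        return true
    }

    func application(_ application: UIApplication,
                     didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
        // Hex-encode the token and send it to your push server.
        let token = deviceToken.map { String(format: "%02x", $0) }.joined()
        print("APNs device token:", token)
    }
}
```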
The environment for push notifications.
This key specifies whether to use the development or production Apple Push Notification service (APNs) environment when registering for push notifications.
Xcode sets the value of the entitlement based on your app's current provisioning profile. For example, if you're using a development provisioning profile, Xcode sets the value to `development`. Production provisioning profiles, and prerelease versions distributed to beta testers, use `production`. These default settings can be modified. The `development` environment is also referred to as the sandbox environment.

Use this entitlement for both the UserNotifications and PushKit frameworks.
To add this entitlement to your app, enable the Push Notifications capability in Xcode.
Build synchronization constructs using low-level, primitive operations.
A synchronization primitive that protects shared mutable state via mutual exclusion.
The `Mutex` type offers non-recursive exclusive access to the state it is protecting by blocking threads attempting to acquire the lock. Only one execution context at a time has access to the value stored within the `Mutex`, allowing for exclusive access.

An example use of `Mutex` in a class used simultaneously by many threads, protecting a `Dictionary` value:

```swift
import Synchronization

class Manager {
    let cache = Mutex<[Key: Resource]>([:])

    func saveResource(_ resource: Resource, as key: Key) {
        cache.withLock {
            $0[key] = resource
        }
    }
}
```
Perform an atomic add operation and return the old and new value, applying the specified memory ordering.
Render a capture stream with rose-tinted filtering and depth effects.
The Virtual Boy in true 3D
Available in true 3D on Apple Vision Pro
VirtualFriend: A new, open source Nintendo Virtual Boy emulator
Relive the unique red-and-black world of Nintendo's most ambitious '90s console. Whether you're experiencing these classics for the first time or revisiting fond memories, VirtualFriend delivers the definitive Virtual Boy experience on Apple Vision Pro and iOS devices.
Experience the Virtual Boy in high fidelity 3D on the Apple Vision Pro, or play on the go on iOS.
- Explore the entire official library of Virtual Boy titles and the most popular homebrew with provided metadata and 3D title screen previews
- Tired of red and black? Adjust the display's color palette between a series of presets, or choose your own colors
- Play using flexible controls; either touchscreen, controller, or keyboard
-
Build Metal apps quicker and easier using a common set of utility classes.
A queue for Swift concurrency
This package exposes a single type: `AsyncQueue`. Conceptually, `AsyncQueue` is very similar to a `DispatchQueue` or `OperationQueue`. However, unlike these, an `AsyncQueue` can accept async blocks. This exists to more easily enforce ordering across unstructured tasks without requiring explicit dependencies between them.
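Usage presumably looks something like the following sketch (the enqueue method name and the two async functions are assumptions; check the package README for the exact API):

```swift
let queue = AsyncQueue()

// Enqueued blocks run in order, even though each is an unstructured task.
queue.addOperation {
    await downloadManifest()
}
queue.addOperation {
    await downloadAssets() // starts only after the manifest finishes
}
```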
Our goal is to gather together the very best technical minds and work with trusted partners to create the most innovative products powered by RISC-V chipsets.
For developers, we will bring the best development experience to our customers through state-of-the-art hardware, software, and related services.
For consumers, we will raise the bar for RISC-V products by developing high-performance, high-quality, cost-effective innovations that will bring the advantages of RISC-V technology to everyone.
2factorauth is a non-profit organization registered in Sweden with members across the globe. Our mission is to be an independent source of information on which services support MFA/2FA and help consumers demand MFA/2FA on the services that currently don’t. Together, we’re able to get more platforms to #Support2FA.
Symbol images are vector-based icons from Apple's SF Symbols library, designed for use across Apple platforms. These scalable images adapt to different sizes and weights, ensuring consistent, high-quality icons throughout our apps. Using symbol images in SwiftUI is straightforward with the `Image` view and the system name of the desired symbol.

Enhancing symbol images in SwiftUI can significantly improve our app's look and feel. By adjusting size, color, rendering modes, variable values, and design variants, we can create icons that make our app more intuitive and visually appealing. SwiftUI makes these adjustments straightforward, enabling us to easily implement and refine these customizations for a better user experience.
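For example, a minimal sketch:

```swift
import SwiftUI

struct FavoriteIcon: View {
    var body: some View {
        // "heart.fill" is a built-in SF Symbols name.
        Image(systemName: "heart.fill")
            .font(.title)              // symbols scale like text
            .foregroundStyle(.red)
    }
}
```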
The method we present is a partial implementation of the algorithm in Kokojima et al. 2006 paper. The reason it is not a full implementation of Kokojima's solution is that we have not adopted their more optimal multi-sampling method and have rather followed a simpler, more costly, brute-force approach, as the performance difference on Apple GPUs is likely minimal.
Since these curves can (and commonly do) have many self-intersecting loops with a mixture of convex and concave curve sections, most solutions end up with a complex pre-processing stage that builds a detailed (non-overlapping) geometry. Kokojima et al. trades off this complexity for extra GPU rendering cost.

The Kokojima et al. method consists of three distinct steps. The initial two steps are dedicated to setting up a stencil buffer, while the final step is responsible for shading the stenciled area.
Tools from the community and partners to simplify tasks and automate processes
Homomorphic encryption (HE) is a cryptographic technique that enables computation on encrypted data without revealing the underlying unencrypted data to the operating process. It provides a means for clients to send encrypted data to a server, which operates on that encrypted data and returns a result that the client can decrypt. During the execution of the request, the server itself never decrypts the original data or even has access to the decryption key. Such an approach presents new opportunities for cloud services to operate while protecting the privacy and security of a user’s data, which is obviously highly attractive for many scenarios.
Implement the Live Caller ID Lookup app extension to provide call-blocking and identity services.
With the Live Caller ID Lookup app extension, you can provide caller ID and call-blocking services from a server you maintain. The app extension tells the system how to communicate with your server. When someone’s device receives a phone call, the system communicates with your back-end server to retrieve caller ID and blocking information, and then displays that information on the incoming call screen and in the device’s recent phone calls.
Use UnionValue when you have some existing types and you want to take one of them. You could think of this as an “or” parameter.
I think non-`Sendable` types are tremendously useful. They are much easier to use with protocols. They are just as “thread-safe” as an isolated type. And now we have a way for them to have usable async methods. There is a small hole in their concurrency story. But, overall, I think they can be a really powerful tool for modelling mutable state that can work with arbitrarily-isolated clients.

Non-Sendable types are great and you should use them!
`IndexedEntity` represents an App Entity decorated with an attribute set: a set of attributes that enables the system to perform structured indexing and queries of entities.
Ethersync enables real-time co-editing of local text files. You can use it for pair programming or note-taking, for example! Think Google Docs, but from the comfort of your favorite text editor!
Create app intents and entities to integrate your app’s photo and video functionality with Siri and Apple Intelligence.

To integrate your app’s photo and video capabilities with Siri and Apple Intelligence, you use Swift macros that generate additional properties and add protocol conformance for your app intent, app entity, and app enumeration implementation that Apple Intelligence needs. For example, if your app allows someone to open a photo, use the `AssistantIntent(schema:)` macro and provide the assistant schema that consists of the photos domain and the openAsset schema:

```swift
@AssistantIntent(schema: .photos.openAsset)
struct OpenAssetIntent: OpenIntent {
    var target: AssetEntity

    @Dependency
    var library: MediaLibrary

    @Dependency
    var navigation: NavigationManager

    @MainActor
    func perform() async throws -> some IntentResult {
        let assets = library.assets(for: [target.id])
        guard let asset = assets.first else { throw IntentError.noEntity }
        navigation.openAsset(asset)
        return .result()
    }
}
```
To learn more about assistant schemas, see [Integrating your app with Siri and Apple Intelligence](https://developer.apple.com/documentation/appintents/integrating-your-app-with-siri-and-apple-intelligence). For a list of available app intents in the [photos](https://developer.apple.com/documentation/appintents/assistantschema/model/photos-8mzhg) domain, see [AssistantSchema.PhotosIntent](https://developer.apple.com/documentation/appintents/assistantschema/photosintent).
An interface to express that a custom type has a predefined, static set of valid values to display.
Adopt the AppEnum protocol in a type that has a known set of valid values. You might use this protocol to specify that a variable of one of your intents has a fixed set of possible values. For example, you might use a variable to specify whether to navigate to the next or previous track in a music playlist.
Because this type conforms to the StaticDisplayRepresentable protocol, provide a string-based representation of your type’s values in your implementation. For example, provide descriptions for each case of an enum type in the inherited caseDisplayRepresentations property.
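A minimal sketch of the playlist-navigation example (all names are illustrative):

```swift
import AppIntents

enum NavigationDirection: String, AppEnum {
    case next
    case previous

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Navigation Direction"

    static var caseDisplayRepresentations: [NavigationDirection: DisplayRepresentation] = [
        .next: "Next Track",
        .previous: "Previous Track"
    ]
}
```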
Create app intents, entities, and enumerations that conform to assistant schemas to tap into the enhanced action capabilities of Siri and Apple Intelligence.
Apple Intelligence is a new personal intelligence system that deeply integrates powerful generative models into the core of iPhone, iPad and Mac. Siri draws on the capabilities of Apple Intelligence to deliver assistance that’s more natural, contextually relevant and personal to users. A big part of people’s personal context are the apps they use every day. The App Intents framework gives you a means to express your app’s capabilities and content to the system and integrate with Siri and Apple Intelligence. This will unlock new ways for your users to interact with your app from anywhere on their device.
Add assistant schemas to your app and integrate your app with Siri and Apple Intelligence, and support system experiences like Spotlight.
Using this sample app, people can keep track of photos and videos they capture with their device and can use Siri to access app functionality. To make its main functionality available to Siri, the app uses the App Intents framework.
A Swift macro you use to make sure your app intent conforms to an assistant schema.
Add reference documentation to your symbols that explains how to use them.
To help the people who use your API have a better understanding of it, follow the steps in the sections below to add documentation comments to the symbols in your project. DocC compiles those comments and generates formatted documentation that you share with your users. For frameworks and packages, add the comments to the public symbols, and for apps, add the comments to both the internal and public symbols.
For a deeper understanding of how to write symbol documentation, refer to Writing Symbol Documentation in Your Source Files on Swift.org.
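For example, a minimal documentation comment that DocC can compile:

```swift
/// Returns the sum of two integers.
///
/// - Parameters:
///   - lhs: The first value to add.
///   - rhs: The second value to add.
/// - Returns: The sum of `lhs` and `rhs`.
func add(_ lhs: Int, _ rhs: Int) -> Int {
    lhs + rhs
}
```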
Getting the dimension of an element using JavaScript is a trivial task. You barely even need to do anything. If you have a reference to an element, you’ve got the dimensions (i.e. `el.offsetWidth`/`el.offsetHeight`). But we aren’t so lucky in CSS. While we’re able to react to elements being particular sizes with `@container` queries, we don’t have access to a straight-up number we could use to, for example, display on the screen.

It may sound impossible but it’s doable! There are no simple built-in functions for this, so get ready for some slightly hacky experimentation.
Instantly boost your productivity and launch apps quicker.
A prototype of new search features, using the strength of our AI models to give you fast answers with clear and relevant sources.
Whether you’re building the next big thing or tweaking your current project, we’re here to make the process smoother and more intuitive, built and operated by the Pixelfed project.
Specify different input parameters to generate multiple test cases from a test function.
Some tests need to be run over many different inputs. For instance, a test might need to validate all cases of an enumeration. The testing library lets developers specify one or more collections to iterate over during testing, with the elements of those collections being forwarded to a test function. An invocation of a test function with a particular set of argument values is called a test case.
By default, the test cases of a test function run in parallel with each other. For more information about test parallelization, see Running tests serially or in parallel.
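For example, a minimal parameterized test using the `arguments` trait (the inputs and validation are hypothetical):

```swift
import Testing

@Test(arguments: ["sv-SE", "en-US", "ja-JP"])
func localeIdentifierIsWellFormed(identifier: String) {
    // Each element of the collection becomes its own test case.
    #expect(identifier.contains("-"))
}
```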
Registers a handler to invoke in response to a URL that your app receives.
Use this view modifier to receive URLs in a particular scene within your app. The scene that SwiftUI routes the incoming URL to depends on the structure of your app, what scenes are active, and other configuration. For more information, see handlesExternalEvents(matching:).
UI frameworks traditionally pass Universal Links to your app using an NSUserActivity. However, SwiftUI passes a Universal Link to your app directly as a URL, which you receive using this modifier. To receive other user activities, like when your app participates in Handoff, use the onContinueUserActivity(_:perform:) modifier instead.
For more information about linking into your app, see Allowing apps and websites to link to your content.
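A minimal sketch of receiving a URL with this modifier (the view content is a placeholder):

```swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Reader")
            .onOpenURL { url in
                // Route based on the incoming Universal Link or custom-scheme URL.
                print("Received URL:", url)
            }
    }
}
```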
Symbol images are vector-based icons from Apple's SF Symbols library, designed for use across Apple platforms. These scalable images adapt to different sizes and weights, ensuring consistent, high-quality icons throughout our apps. Using symbol images in SwiftUI is straightforward with the Image view and the system name of the desired symbol.
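For example (any symbol name from the SF Symbols app works here):

```swift
Image(systemName: "star.fill")
    .font(.title)               // symbols scale with the font
    .foregroundStyle(.yellow)
```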
SwiftUI provides a variety of views, many of which are actionable controls, including buttons, pickers, toggles, sliders, steppers, and more. All controls have readable labels, but some display their label outside the area that users can interact with. For these controls specifically, it is possible to hide the label when its appearance is undesirable, for instance when it doesn’t fit the look of the rest of the UI, or when the control’s function is clear from context. Managing that is extremely simple thanks to a not-so-well-known view modifier, presented next.
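For instance, a picker whose purpose is clear from context can hide its label (a minimal sketch; the state and tags are ours):

```swift
// The label is hidden visually but remains available to accessibility.
Picker("Sort order", selection: $sortOrder) {
    Text("Ascending").tag(SortOrder.forward)
    Text("Descending").tag(SortOrder.reverse)
}
.labelsHidden()
```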
I started working on supporting Xcode 16’s features in XcodeProj. One of those features is internal “synchronized groups”, which Apple introduced to minimize git conflicts in Xcode projects. In a nutshell, they replace many references to files in the file system with a reference to a folder containing a set of files that are part of a target. Xcode dynamically synchronizes the files, hence the name, in the same way packages are synchronized when needed.
- The impact of `some` is across variables. It enforces that identical types are returned.
- The impact of `any` is on a single variable. It has no enforcement to keep returned types identical.

| `some` | `any` |
| :--- | :--- |
| Holds a fixed concrete type | Holds an arbitrary concrete type |
| Guarantees type relationships | Erases type relationships |
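A small sketch of the distinction (the protocol and types are ours):

```swift
protocol Animal { func sound() -> String }
struct Dog: Animal { func sound() -> String { "woof" } }
struct Cat: Animal { func sound() -> String { "meow" } }

// `some`: every return path must produce the same concrete type.
func makeSome() -> some Animal {
    Dog() // returning Cat() on another path would not compile
}

// `any`: different conforming types can flow through the same variable.
var animal: any Animal = Dog()
animal = Cat() // fine; type relationships are erased
print(animal.sound())
```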
A parse strategy for creating URLs from formatted strings.
Create an explicit `URL.ParseStrategy` to parse multiple strings according to the same parse strategy. The following example creates a customized strategy, then applies it to multiple URL candidate strings.
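The doc’s own example isn’t included in this excerpt; as a rough sketch of the same idea, a reusable strategy can also be derived from `URL.FormatStyle`:

```swift
import Foundation

// Reuse one strategy across multiple candidate strings;
// compactMap keeps only the strings that parse as URLs.
let strategy = URL.FormatStyle().parseStrategy
let candidates = ["https://example.com/a", "https://example.com/b"]
let urls = candidates.compactMap { try? strategy.parse($0) }
```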
A structure that converts between URL instances and their textual representations.
Instances of `URL.FormatStyle` create localized, human-readable text from `URL` instances and parse string representations of URLs into instances of `URL`.
The root object for a universal links service definition.
|  |  |
| ---: | :--- |
| `defaults` `applinks.Defaults` | The global pattern-matching settings to use as defaults for all universal links in the domain. |
| `details` `[applinks.Details]` | An array of `Details` objects that define the apps and the universal links they handle for the domain. |
| `substitutionVariables` `applinks.SubstitutionVariables` | Custom variables to use for simplifying complex pattern matches. Each name acts as a variable that the system replaces with each string in the associated string array. |
Today’s goal is to parse URLs like `http://mywebsite.org/customers/:cid/orders/:oid` so that we can determine it’s a customer’s order request and extract the order `#oid` and customer `#cid` from it. We’ll try to do that in an elegant way, using pattern matching and variable binding.
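A minimal sketch of the idea (this is our own take, not necessarily the article’s approach): match the path components with tuple patterns and value bindings.

```swift
import Foundation

func route(_ url: URL) -> String {
    // pathComponents includes a leading "/", which we drop.
    let parts = url.pathComponents.filter { $0 != "/" }
    guard parts.count == 4 else { return "unrecognized" }
    switch (parts[0], parts[1], parts[2], parts[3]) {
    case ("customers", let cid, "orders", let oid):
        return "order \(oid) for customer \(cid)"
    default:
        return "unrecognized"
    }
}

print(route(URL(string: "http://mywebsite.org/customers/42/orders/7")!))
// order 7 for customer 42
```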
Apple Vision Pro users will experience breathtaking series, films, and more spanning action-adventure, documentary, music, scripted, sports, and travel
Starting this week, Apple is releasing all-new series and films captured in Apple Immersive Video that will debut exclusively on Apple Vision Pro. Apple Immersive Video is a remarkable storytelling format that leverages 3D video recorded in 8K with a 180-degree field of view and Spatial Audio to transport viewers to the center of the action.
Boundless, a new series that invites viewers to experience once-in-a-lifetime trips from wherever they are, premieres at 6 p.m. PT today, July 18, with “Hot Air Balloons.” The next installment of Wild Life, the nature documentary series that brings viewers up close to some of the most charismatic creatures on the planet, premieres in August. Elevated, an aerial travel series that whisks viewers around iconic vistas from staggering heights, will launch in September.
Later this year, users can enjoy special performances featuring the world’s biggest artists, starting with an immersive experience from The Weeknd; the first scripted Apple Immersive short film, Submerged, written and directed by Academy Award winner Edward Berger; a behind-the-scenes and on-the-court view of the 2024 NBA All-Star Weekend; and Big-Wave Surfing, the first installment of a new sports series with Red Bull.
“Apple Immersive Video is a groundbreaking leap forward for storytelling, offering Apple Vision Pro users remarkable experiences with an unparalleled sense of realism and immersion,” said Tor Myhren, Apple’s vice president of Marketing Communications. “From soaring over volcanoes in Hawaii and surfing huge waves in Tahiti, to enjoying performances by the world’s biggest artists and athletes from all-new perspectives, Apple Immersive Video revolutionizes the way people experience places, stories, sports, and more by making viewers feel like they’re truly there. It’s the next generation of visual storytelling, and we’re excited to bring it to more people around the world.”
Steve’s talk at the 1983 International Design Conference in Aspen
Steve rarely attended design conferences. This was 1983, before the launch of the Mac, and still relatively early days of Apple. I find it breathtaking how profound his understanding was of the dramatic changes that were about to happen as the computer became broadly accessible. Of course, beyond just being prophetic, he was fundamental in defining products that would change our culture and our lives forever.
On the eve of launching the first truly personal computer, Steve is not solely preoccupied with the founding technology and functionality of the product’s design. This is extraordinarily unusual, as in the early stages of dramatic innovation, it is normally the primary technology that benefits from all of the attention and focus.
Steve points out that the design effort in the U.S. at the time had been focused on the automobile, with little consideration or effort given to consumer electronics. While it is not unusual to hear leaders talk about the national responsibility to manufacture, I thought it was interesting that he talked about a nation’s responsibility to design.
In the talk, Steve predicts that by 1986 sales of the PC would exceed sales of cars, and that in the following ten years, people would be spending more time with a PC than in a car. These were absurd claims for the early 1980s. Describing what he sees as the inevitability that this would be a pervasive new category, he asks the designers in the audience for help. He asks that they start to think about the design of these products, because designed well or designed poorly, they still would be made.
Steve remains one of the best educators I’ve ever met in my life. He had that ability to explain incredibly abstract, complex technologies in terms that were accessible, tangible and relevant. You hear him describe the computer as doing nothing more than completing fairly mundane tasks, but doing so very quickly. He gives the example of running out to grab a bunch of flowers and returning by the time you could snap your fingers – speed rendering the task magical.
When I look back on our work, what I remember most fondly are not the products but the process. Part of Steve’s brilliance was how he learned to support the creative process, encouraging and developing ideas even in large groups of people. He treated the process of creating with a rare and wonderful reverence.
The revolution Steve described over 40 years ago did of course happen, partly because of his profound commitment to a kind of civic responsibility. He cared, way beyond any sort of functional imperative. His was a victory for beauty, for purity and, as he would say, for giving a damn. He truly believed that by making something useful, empowering and beautiful, we express our love for humanity.
Prepare your app to respond to an incoming universal link.
When a user activates a universal link, the system launches your app and sends it an NSUserActivity object. Query this object to find out how your app launched and to decide what action to take.
To support universal links in your app:
- Create a two-way association between your app and your website and specify the URLs that your app handles. See Supporting associated domains.
- Update your app delegate to respond when it receives an NSUserActivity object with the `activityType` set to `NSUserActivityTypeBrowsingWeb`.
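A minimal sketch of that second step, handling the incoming activity in the app delegate and routing based on the URL:

```swift
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
              let url = userActivity.webpageURL,
              let components = URLComponents(url: url, resolvingAgainstBaseURL: true) else {
            return false
        }
        // Decide what to show based on components.path and components.queryItems.
        print("Launched via universal link:", components.path)
        return true
    }
}
```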
On June 25th, I interviewed Tim Sweeney, Founder and CEO of Epic Games, which makes the Unreal Engine and Fortnite, and Neal Stephenson, the #1 New York Times bestselling author who also coined the term “Metaverse” in his 1992 bestseller Snow Crash, and is a Co-Founder of blockchain start-up Lamina1, and AI storytelling platform Whenere.
In the interview, we discuss their definitions of “Metaverse,” thoughts on its technological and economic growth, Neal’s reaction on the day Facebook changed its name to Meta, the future of Fortnite, Apple’s Vision Pro, blockchains, and the ethics of Generative AI, plus “Snow Crash 2,” and much more.
Display content and descriptions, provide channel guides, and support multiple users on Apple TV.
Use the TVServices framework to display content prominently on the screen and to speed up user login. You can highlight media and other information from your app in the top shelf area. For example, a video playback app might show the user’s most recently viewed videos. The system displays your media items when the user selects your app on the tvOS Home Screen; your app doesn’t need to be running. You provide top shelf content using a Top Shelf app extension, which you include in the bundle of your tvOS app.
Apps that manage multiple user profiles can accelerate the login process by retaining the profile for each Apple TV user. Apple TV supports multiple user accounts, and these accounts are separate from the profiles your app manages. Mapping the system accounts to your own profiles lets users skip profile selection screens and go straight to their content, which provides a better user experience.
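A sketch of a Top Shelf extension’s entry point (the identifiers and titles are placeholders):

```swift
import TVServices

class ContentProvider: TVTopShelfContentProvider {
    override func loadTopShelfContent(completionHandler: @escaping (TVTopShelfContent?) -> Void) {
        // Surface the user's most recently viewed item (placeholder data).
        let item = TVTopShelfSectionedItem(identifier: "recently-viewed")
        item.title = "Continue Watching"

        let section = TVTopShelfItemCollection(items: [item])
        section.title = "Recent"

        completionHandler(TVTopShelfSectionedContent(sections: [section]))
    }
}
```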
Support browsing an electronic program guide (EPG) and changing channels with specialized remote buttons.
Add unique features to Xcode's Simulator and build apps faster.
Key features: User Defaults Editor, Simulator Airplane Mode, recordings with sound, touches & bezels, Accessibility & Dynamic Type testing, Location Simulation, testing Push Notifications and Deeplinks, and comparing designs on top of the Simulator.
Inspect Network Traffic
- Monitor incoming and outgoing requests for your apps
- Explore JSON responses, requests & response headers
- Copy requests as cURL commands
- Investigate request metrics
Build Insights
- Keep track of build count and duration
- Find out how your app's build times improve per Xcode version
User Defaults Editor
- View and Edit User Defaults values in real time
- Works with both standard and group User Defaults
Location Simulation
- Scenario testing: City Run, Bicycle Run, and Freeway Drive
- Simulate routes from start to destination using Quick Actions
- Update GPS to a specific point on the map
- Change the time zone whenever you update the location
Grids & Rulers helps you to create pixel-perfect design implementations
- Use horizontal and vertical rulers
- Measure the distance between elements in on-device pixels
- Configure grid size and color
Quick actions for your recent builds help you increase productivity
- Delete Derived Data easily, globally or per app, to prevent rebuilding all your Xcode projects.
- Open common directories like your app's documents folder
- Read and write user defaults
- Grant, revoke, or reset permissions like photo and location access, allowing you to test related implementations quickly
- Airplane mode: Disable Networking for your app while keeping a working connection on your Mac
Environment Overrides
- Switch Accessibility settings like Inverted Colors and Bold Text
- Configure any Dynamic Type directly from the side window
Deeplinks (Universal Links) and Push Notifications
- Add quick actions to test Deeplinks and Push Notifications
- Bundle Identifier based: actions automatically show up for recent builds
- Launch deeplinks to test routing in your apps
- Easily launch deeplinks from your clipboard
- Manage and share Quick Action groups with your colleagues
Compare Designs for pixel-perfect design implementations
- Create pixel-perfect implementations of your app’s design
- Drag, Paste, or Select images for comparison
- Use the overlay mode to compare your app’s implementation to its design
- The slider allows you to slide between your app’s implementation and its design
- Use any image source, whether it’s Sketch, Figma, or Zeplin
Magnify for precision
- Zoom in at pixel level to verify your design implementation
Create screenshots
- Device Bezels create that professional screen capture you need
- Adjust the background color to match your styling
Create professional recordings to share progress
- A popup next to the active Simulator allows you to start a recording easily
- Enable touches to explain better how your app responds to user interaction
- Device Bezels create that professional impression you need
- Export-ready for App Store Connect. Creating App Previews has never been easier
- Landscape orientation is correctly applied in the exported video
- A floating thumbnail of the resulting recording lets you drag it into any destination easily
- Select MP4 or GIF to match your needs
- Trim videos for perfect lengths
- Control the quality of exports for perfect performance
Completely customizable to fit your needs
- All actions are also available through the status bar menu. Configure it to hide the floating windows if you feel like they're in your way
- Configure shortcuts to perform actions even quicker
Universal Links allow you to link to content inside your app when a user opens a particular URL. Webpages will open in the app browser by default, but you can configure specific paths to open in your app if the user has it installed.
Redirecting users into your app is recommended to give them the most integrated mobile experience. A great example is the WeTransfer app that automatically opens transfer URLs, allowing the app to download the files using the most efficient system APIs. The alternative would require users to download the files via Safari, a much less integrated experience. Let’s dive in to see how you can add support for Universal Links.
Push user-facing notifications to the user’s device from a server, or generate them locally from your app.
User-facing notifications communicate important information to users of your app, regardless of whether your app is running on the user’s device. For example, a sports app can let the user know when their favorite team scores. Notifications can also tell your app to download information and update its interface. Notifications can display an alert, play a sound, or badge the app’s icon.
You can generate notifications locally from your app or remotely from a server that you manage. For local notifications, the app creates the notification content and specifies a condition, like a time or location, that triggers the delivery of the notification. For remote notifications, your company’s server generates push notifications, and Apple Push Notification service (APNs) handles the delivery of those notifications to the user’s devices.
Use this framework to do the following:
- Define the types of notifications that your app supports.
- Define any custom actions associated with your notification types.
- Schedule local notifications for delivery.
- Process already delivered notifications.
- Respond to user-selected actions.
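For example, scheduling a local notification might look like this minimal sketch (identifier and text are placeholders):

```swift
import UserNotifications

// Request authorization, then schedule a notification that fires
// after a time interval.
let center = UNUserNotificationCenter.current()
center.requestAuthorization(options: [.alert, .sound, .badge]) { granted, _ in
    guard granted else { return }

    let content = UNMutableNotificationContent()
    content.title = "Game score"
    content.body = "Your favorite team just scored!"
    content.sound = .default

    // Deliver 60 seconds from now.
    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 60, repeats: false)
    let request = UNNotificationRequest(identifier: "score-update",
                                        content: content,
                                        trigger: trigger)
    center.add(request)
}
```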
The system makes every attempt to deliver local and remote notifications in a timely manner, but delivery isn’t guaranteed. The PushKit framework offers a more timely delivery mechanism for specific types of notifications, such as those VoIP and watchOS complications use. For more information, see PushKit.
For webpages in Safari version 16.0 and higher, generate remote notifications from a server that you manage using Push API code that works in Safari and other browsers.
A framework for training any-to-any multimodal foundation models. Scalable. Open-sourced. Across tens of modalities and tasks.
4M enables training versatile multimodal and multitask models, capable of performing a diverse set of vision tasks out of the box, as well as being able to perform multimodal conditional generation. This, coupled with the models' ability to perform in-painting, enables powerful image editing capabilities. These generalist models transfer well to a broad range of downstream tasks or to novel modalities, and can be easily fine-tuned into more specialized variants of themselves.
Build sophisticated animations that you control using phase and keyframe animators.
SwiftUI provides a collection of useful animations that you can use in your app. These animations help enhance the user experience of your app by providing visual transitions of views and user interface elements. While these standard animations provide a great way to enhance the user interaction of your app, there are times when you need to have more control over the timing and movement of a visual element. PhaseAnimator and KeyframeAnimator help give you that control.
A phase animator allows you to define an animation as a collection of discrete steps called phases. The animator cycles through these phases to create a visual transition. With keyframe animator, you create keyframes that define animation values at specific times during the visual transition.
A container that animates its content by automatically cycling through a collection of phases that you provide, each defining a discrete step within an animation.
Use one of the phase animator view modifiers like phaseAnimator(_:content:animation:) to create a phased animation in your app.
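A minimal sketch of a two-phase animation using that modifier (the view is ours):

```swift
import SwiftUI

// Cycles between two phases, scaling the heart up and down.
struct PulsingHeart: View {
    var body: some View {
        Image(systemName: "heart.fill")
            .phaseAnimator([false, true]) { content, phase in
                content.scaleEffect(phase ? 1.3 : 1.0)
            } animation: { _ in
                .easeInOut(duration: 0.5)
            }
    }
}
```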
Welcome to The Valley of Code. Your journey in Web Development starts here. In the fundamentals section you'll learn the basic building blocks of the Internet, the Web and how its fundamental protocol (HTTP) works.
I finally have the feeling that I’m a decent programmer, so I thought it would be fun to write some advice with the idea of “what would have gotten me to this point faster?” I’m not claiming this is great advice for everyone, just that it would have been good advice for me.
Make your iOS app launch experience faster and more responsive by customizing a launch screen.
Every iOS app must provide a launch screen, a screen that displays while your app launches. The launch screen appears instantly when your app starts up and is quickly replaced with the app’s first screen.
You create a launch screen for your app in your Xcode project in one of two ways:
- Information property list
- User interface file
To make the app launch experience as seamless as possible, create a launch screen with basic views that closely resemble the first screen of your app.
For guidelines about designing a launch screen, see Launching in the Human Interface Guidelines.
Apply this attribute to a declaration to suppress strict concurrency checking. You can apply this attribute to the following kinds of declarations:
- Imports
- Structures, classes, and actors
- Enumerations and enumeration cases
- Protocols
- Variables and constants
- Subscripts
- Initializers
- Functions
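This description matches Swift’s `@preconcurrency` attribute; a minimal sketch of applying it, assuming that reading (the module name is hypothetical):

```swift
// Suppress strict concurrency checking for declarations imported
// from a module that hasn't adopted Sendable annotations yet.
@preconcurrency import LegacyNetworking

// It can also be applied directly to declarations:
@preconcurrency protocol DataProviding {
    func fetch() -> [String]
}
```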
Returning an opaque type looks very similar to using a boxed protocol type as the return type of a function, but these two kinds of return type differ in whether they preserve type identity. An opaque type refers to one specific type, although the caller of the function isn’t able to see which type; a boxed protocol type can refer to any type that conforms to the protocol. Generally speaking, boxed protocol types give you more flexibility about the underlying types of the values they store, and opaque types let you make stronger guarantees about those underlying types.
The
sonos
integration allows you to control your Sonos wireless speakers from Home Assistant. It also works with IKEA Symfonisk speakers.
Hide implementation details about a value’s type.
Swift provides two ways to hide details about a value’s type: opaque types and boxed protocol types. Hiding type information is useful at boundaries between a module and code that calls into the module, because the underlying type of the return value can remain private.
A function or method that returns an opaque type hides its return value’s type information. Instead of providing a concrete type as the function’s return type, the return value is described in terms of the protocols it supports. Opaque types preserve type identity — the compiler has access to the type information, but clients of the module don’t.
A boxed protocol type can store an instance of any type that conforms to the given protocol. Boxed protocol types don’t preserve type identity — the value’s specific type isn’t known until runtime, and it can change over time as different values are stored.
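To make the trade-off concrete, here is a small sketch (the functions are ours, not the book’s):

```swift
// Opaque type: the compiler preserves the hidden concrete type,
// so two results of the same function can be compared.
func makeNumber() -> some Equatable { Int.random(in: 0...10) }

let a = makeNumber()
let b = makeNumber()
print(a == b) // compiles: both values share the same underlying type

// Boxed protocol type: identity is erased, so `==` is unavailable
// between two `any Equatable` values without extra work.
func makeBoxed() -> any Equatable { Int.random(in: 0...10) }
```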
What exactly makes code “unsafe”? Join the Swift team as we take a look at the programming language's safety precautions — and when you might need to reach for unsafe operations. We'll take a look at APIs that can cause unexpected states if not used correctly, and how you can write code more specifically to avoid undefined behavior. Learn how to work with C APIs that use pointers and the steps to take when you want to use Swift's unsafe pointer APIs. To get the most out of this session, you should have some familiarity with Swift and the C programming language. And for more information on working with pointers, check out "Safely Manage Pointers in Swift".
Come with us as we delve into unsafe pointer types in Swift. Discover the requirements for each type and how to use it correctly. We'll discuss typed pointers, drop down to raw pointers, and finally circumvent pointer type safety entirely by binding memory. This session is a follow-up to "Unsafe Swift" from WWDC20. To get the most out of it, you should be familiar with Swift and the C programming language.
The question you're asking is whether a type is trivially copyable and destroyable, which in practice is the case iff the type does not contain any reference types or existentials. There's an `_isPOD()` (warning: underscored API!) entry point in the stdlib for this purpose:

```swift
print(_isPOD(Int.self))        // true
print(_isPOD(Array<Int>.self)) // false
```
Provide app continuity for users by preserving their current activities.
This SwiftUI sample project demonstrates how to preserve your appʼs state information and restore the app to that previous state on subsequent launches. During a subsequent launch, restoring your interface to the previous interaction point provides continuity for the user, and lets them finish active tasks quickly.
When using your app, the user performs actions that affect the user interface. For example, the user might view a specific page of information, and after the user leaves the app, the operating system might terminate it to free up the resources it holds. The user can return to where they left off — and UI state restoration is a core part of making that experience seamless.
This sample app demonstrates the use of state preservation and restoration for scenarios where the system interrupts the app. The sample project manages a set of products. Each product has a title, an image, and other metadata you can view and edit. The project shows how to preserve and restore a product in its `DetailView`.
This page is a collection of my favorite resources for people getting started writing programming languages. I hope to keep it updated as long as I continue to find great stuff.
Creating a tvOS media catalog app in SwiftUI
This sample code project shows how to create the standard content lockups for tvOS, and provides best practices for building out rows of content shelves. It also includes examples for product pages, search views, and tab views, including the new sidebar adaptive tab view style that provides a sidebar in tvOS.
The sample project contains the following examples:
- `StackView` implements an example landing page for a content catalog app, defining several shelves with a showcase or hero header area above them. It also gives an example of an above- and below-the-fold switching animation.
- `ButtonsView` provides a showcase of the various button styles available in tvOS.
- `DescriptionView` provides an example of how to build a product page similar to those you see on the Apple TV app, with a custom material blur.
- `SearchView` shows an example of a simple search page using the `searchable(text:placement:prompt:)` and `searchSuggestions(_:)` modifiers.
- `SidebarContentView` shows how to make a sectioned sidebar using the new tab bar APIs in tvOS 18.
- `HeroHeaderView` gives an example of creating a material gradient to blur content in a certain area, fading it into unblurred content.
Adds an action to be called when the view crosses the threshold to be considered on/off screen.
Positions this view within an invisible frame with a size relative to the nearest container.
Use this modifier to specify a size for a view’s width, height, or both that depends on the size of the nearest container. Different things can represent a “container,” including:
- The window presenting a view on iPadOS or macOS, or the screen of a device on iOS.
- A column of a NavigationSplitView
- A NavigationStack
- A tab of a TabView
- A scrollable view like ScrollView or List
The size provided to this modifier is the size of a container like the ones listed above subtracting any safe area insets that might be applied to that container.
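For example, sizing items relative to a horizontal scroll view might look like this sketch:

```swift
import SwiftUI

// Each card takes up 80% of the scroll view's width.
struct CardsView: View {
    var body: some View {
        ScrollView(.horizontal) {
            LazyHStack {
                ForEach(0..<10) { _ in
                    RoundedRectangle(cornerRadius: 12)
                        .fill(.blue.gradient)
                        .containerRelativeFrame(.horizontal) { length, _ in
                            length * 0.8
                        }
                }
            }
        }
    }
}
```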
Creates an environment values, transaction, container values, or focused values entry.
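For instance, a custom environment entry might look like this sketch (`Theme` is a hypothetical type):

```swift
import SwiftUI

enum Theme { case light, dark } // hypothetical value type

extension EnvironmentValues {
    // @Entry generates the EnvironmentKey boilerplate for you.
    @Entry var theme: Theme = .light
}

// Read it like any environment value:
// @Environment(\.theme) var theme
```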
Make binaries available to other developers by creating Swift packages that include one or more XCFrameworks.
Creating a Swift package to organize and share your code makes source files available to developers who use the Swift package as a package dependency. However, you may need to make your code available as binaries to protect your intellectual property — for example, if you’re developing proprietary, closed-source libraries.
Carefully consider whether you want to distribute your code in binary form because doing so comes with drawbacks. For example, a Swift package that contains a binary is less portable because it can only support platforms that its included binaries support. In addition, binary dependencies are only available for Apple platforms, which limits the audience for your Swift package.
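A sketch of a package manifest that vends a prebuilt XCFramework (name, URL, and checksum are placeholders):

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyBinaryLib",
    products: [
        .library(name: "MyBinaryLib", targets: ["MyBinaryLib"])
    ],
    targets: [
        .binaryTarget(
            name: "MyBinaryLib",
            url: "https://example.com/MyBinaryLib-1.0.0.xcframework.zip",
            checksum: "abc123..." // from `swift package compute-checksum`
        )
    ]
)
```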
To use the Montreal subway (the Métro), you tap a paper ticket against the turnstile and it opens. The ticket works through a system called NFC, but what's happening internally? How does the ticket work without a battery? How does it communicate with the turnstile? And how can it be so cheap that you can throw the ticket away after one use? To answer these questions, I opened up a ticket and examined the tiny chip inside.
If the standard OpenType shaping engine doesn't give you enough flexibility, Harfbuzz allows you to write your own shaping engine in WebAssembly and embed it into your font! Any font which contains a Wasm table will be passed to the WebAssembly shaper.
How to use a .xcconfig file and a .plist with a Swift Package Manager based project.
The goal is to explore differentiable programming in realistic settings. If autodiff + vectorization is the future, then it is important to be able to write hard programs in a differentiable style (beyond just another Transformer).
Adds an action to be performed when a value, created from a geometry proxy, changes.
The geometry of a view can change frequently, especially if the view is contained within a ScrollView and that scroll view is scrolling.
You should avoid updating large parts of your app whenever the scroll geometry changes. To aid in this, you provide two closures to this modifier:
- transform: This converts a value of GeometryProxy to your own data type.
- action: This provides the data type you created in transform and is called whenever that data type changes.
For example, you can use this modifier to know how much of a view is visible on screen. In the following example, the data type you convert to is a `Bool` and the action is called whenever the `Bool` changes.

```swift
ScrollView(.horizontal) {
    LazyHStack {
        ForEach(videos) { video in
            VideoView(video)
        }
    }
}

struct VideoView: View {
    var video: VideoModel

    var body: some View {
        VideoPlayer(video)
            .onGeometryChange(for: Bool.self) { proxy in
                let frame = proxy.frame(in: .scrollView)
                let bounds = proxy.bounds(of: .scrollView) ?? .zero
                let intersection = frame.intersection(
                    CGRect(origin: .zero, size: bounds.size))
                let visibleHeight = intersection.size.height
                return (visibleHeight / frame.size.height) > 0.75
            } action: { isVisible in
                video.updateAutoplayingState(
                    isVisible: isVisible)
            }
    }
}
```
Extend your media viewing experience using Reality Composer Pro components like Docking Region, Reverb, and Virtual Environment Probe. Find out how to further enhance immersion using Reflections, Tint Surroundings Effect, SharePlay, and the Immersive Environment Picker.
Join us on a tour of SwiftUI, Apple's declarative user interface framework. Learn essential concepts for building apps in SwiftUI, like views, state variables, and layout. Discover the breadth of APIs for building fully featured experiences and crafting unique custom components. Whether you're brand new to SwiftUI or an experienced developer, you'll learn how to take advantage of what SwiftUI has to offer when building great apps.
C++ interoperability is a new feature in Swift 5.9. A great variety of C++ APIs can be called directly from Swift, and select Swift APIs can be used from C++.
This document is the reference guide describing how to mix Swift and C++. It describes how C++ APIs get imported into Swift, and provides examples showing how various C++ APIs can be used in Swift. It also describes how Swift APIs get exposed to C++, and provides examples showing how the exposed Swift APIs can be used from C++.
C++ interoperability is an actively evolving feature of Swift. It currently supports interoperation between a subset of language features. The status page provides an overview of the currently supported interoperability features, and lists the existing constraints as well.
Future releases of Swift might change how Swift and C++ interoperate, as the Swift community gathers feedback from real world adoption of C++ interoperability in mixed Swift and C++ codebases. Please provide the feedback that you have on the Swift forums, or by filing an issue on GitHub. Future changes to the design or functionality of C++ interoperability will not break code in existing codebases by default.
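As a small taste of what calling C++ from Swift looks like once interop is enabled (a sketch; requires the C++ interoperability build mode described in the guide):

```swift
// Build with C++ interoperability enabled,
// e.g. -cxx-interoperability-mode=default.
import CxxStdlib

let greeting = std.string("Hello from C++")
print(String(greeting)) // bridge the C++ std::string to a Swift String
```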
Neovim is a modern reimplementation of Vim, a popular terminal-based text editor. Neovim adds new features like asynchronous operations and powerful Lua bindings for a snappy editing experience, in addition to the improvements Vim brings to the original Vi editor.
This article walks you through configuring Neovim for Swift development, providing configurations for various plugins to build a working Swift editing experience. It is not a tutorial on how to use Neovim and assumes some familiarity with modal text editors like Neovim, Vim, or Vi. We are also assuming that you have already installed a Swift toolchain on your computer. If not, please see the Swift installation instructions.
Although the article references Ubuntu 22.04, the configuration itself works on any operating system where a recent version of Neovim and a Swift toolchain is available.
Basic setup and configuration includes:
- Installing Neovim.
- Installing `lazy.nvim` to manage our plugins.
- Configuring the `SourceKit-LSP` server.
- Setting up Language-Server-driven autocompletion with `nvim-cmp`.
- Setting up snippets with `LuaSnip`.
Guarantee your code is free of data races by enabling the Swift 6 language mode.
Learn how new cross-platform APIs in RealityKit can help you build immersive apps for iOS, macOS, and visionOS. Check out the new hover effects, lights and shadows, and portal crossing features, and view them in action through real examples.
Apple has released Embedded Swift, a subset of the Swift language, bringing Swift to both Arm and RISC-V microcontrollers.
If you want to go spelunking in SwiftUI’s .swiftinterface file (people have found interesting things in there in past years), note that there’s a new SwiftUICore.framework this year, so now there are two files to check.
/Applications/Xcode-16.0b1.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/SwiftUICore.framework/Modules/SwiftUICore.swiftmodule/arm64-apple-ios.swiftinterface
Arrange spatial Personas in a team-based guessing game
Use low-level mesh and texture APIs to achieve fast updates to a person’s brush strokes by integrating RealityKit with ARKit and SwiftUI.
Use attachments to place 2D content relative to 3D content in an immersive space.
Use this code to follow along with a guide to migrating your code to take advantage of the full concurrency protection that the Swift 6 language mode provides.
This sample provides two separate versions of the app:
- The original version uses Swift concurrency features but contains a number of issues that are detected by enabling Swift complete concurrency checking and that need to be resolved before enabling the Swift 6 language mode.
- The updated version resolves these issues and has enabled Swift 6. It also adds new features that record the location of the user when they log that they drank coffee.
Watch the session to see the process step by step, and then compare the two projects to see the differences.
Add scroll effects, rich color treatments, custom transitions, and advanced effects using shaders and a text renderer.
Add a deeper level of immersion to media playback in your app with RealityKit and Reality Composer Pro.
visionOS provides powerful features for building immersive media playback apps. It supports playing 3D video and Spatial Audio, which helps bring the content to life and makes the viewer feel like they’re part of the action. Starting in visionOS 2, you can take your app’s playback experience even further by creating custom environments using RealityKit and Reality Composer Pro. The Destination Video sample includes a custom environment, Studio. The Studio environment provides a large, open space that’s specifically designed to provide an optimal media viewing experience, as shown in the following image.
Create a more immersive experience by adding video reflections in a custom environment.
RealityKit and Reality Composer Pro provide the tools to build immersive media viewing environments in visionOS. The Destination Video sample uses these features to build a realistic custom environment called Studio. The environment adds to its realism and makes the video player feel grounded in the space by applying reflections of the player’s content onto the surfaces of the scene.
RealityKit and Reality Composer Pro support two types of video reflections:
- Specular reflections provide a direct reflection of the video content, and are typically useful to apply to glossy surfaces like metals and water.
- Diffuse reflections provide a softer falloff of video content, and are useful to apply to rougher, more organic surfaces.
This article describes how to adopt reflections in your own environment, and shows how Destination Video’s Studio environment supports these effects to create a compelling media viewing experience.
A native macOS app for App Store Connect that streamlines app updates and releases, making the process faster and easier.
Here’s how to do it on an Apple Silicon Mac:
- Backup using Time Machine
- Create a new APFS volume
- Shut down Mac
- Start up and keep holding down the power button
- Select “Options”
- Then choose to reinstall Sonoma onto the volume from step 2.
- Wait a while (it said 5h for me, but took <1h)
- When it’s installed, probably best to not log into iCloud (though I did, and then disabled all the various sync options) and skip migrating your previous user account
- Then open System Settings, enable beta updates, and update that install to Sequoia
- File feedback to Apple!
Highlights of new technologies introduced at WWDC24.
Browse a selection of documentation for new technologies and frameworks introduced at WWDC24. Many existing frameworks have added significant functionality, and you’ll find new ways to enhance your apps targeting the latest platform release.
For a comprehensive list of downloadable sample code projects, see WWDC24 Sample Code. For the latest design guidance localized in multiple languages, see Human Interface Guidelines > What’s New.
Learn how to adopt spatial photos and videos in your apps. Explore the different types of stereoscopic media and find out how to capture spatial videos in your iOS app on iPhone 15 Pro. Discover the various ways to detect and present spatial media, including the new QuickLook Preview Application API in visionOS. And take a deep dive into the metadata and stereo concepts that make a photo or video spatial.
Finally, let’s apply a similar trick to the question of whether we’re running Xcode 15 or later. For this I am also leaning on an example I found in the WebKit sources. By declaring boolean values for several Xcode version tests:
```
XCODE_BEFORE_15_1300 = YES
XCODE_BEFORE_15_1400 = YES
XCODE_BEFORE_15_1500 = NO
```
We lay the groundwork for expanding a build setting based on the XCODE_VERSION_MAJOR build setting, which is built in:
```
XCODE_BEFORE_15 = $(XCODE_BEFORE_15_$(XCODE_VERSION_MAJOR))
XCODE_AT_LEAST_15 = $(NOT_$(XCODE_BEFORE_15))
```
In this case, on my Mac running Xcode 15.1, XCODE_BEFORE_15 expands to XCODE_BEFORE_15_1500, which expands to NO. XCODE_AT_LEAST_15 uses the aforementioned NOT_ setting, expanding to NOT_NO, which expands to YES.
Specify your project’s build settings in plain-text files, and supply different settings for debug and release builds.
A build configuration file is a plain-text file you use to specify the build settings for a specific target or your entire project. Build configuration files make it easier to manage build settings yourself, and to change build settings automatically for different architectures and platforms. With a build configuration file, you place only the settings you want to modify in a text file. You can create multiple files, each with different combinations of build settings, and you can change the settings quickly for your target or project. Xcode layers your settings on top of other project-related settings to create the final build configuration.
Build configuration files are particularly useful in the following situations:
- You want different build settings based on the current platform, architecture, or build type.
- You want to store build settings in a way that is easier to inspect.
- You want to edit build settings outside of Xcode.
For more information about how build configuration files integrate with your project’s other settings values, see Configuring the build settings of a target.
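For instance, a debug-only configuration file might look like this hypothetical sketch (the values are illustrative, not recommendations):

```
// Debug.xcconfig (hypothetical)
SWIFT_OPTIMIZATION_LEVEL = -Onone
OTHER_SWIFT_FLAGS = $(inherited) -DDEBUG

// Settings can also be conditioned on SDK and architecture:
OTHER_LDFLAGS[sdk=iphonesimulator*] = $(inherited) -ObjC
```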
The `MagicReplace` option is automatically applied to the `replace` symbol effect when possible, and it works specifically with related SF Symbols. This feature is particularly useful for tappable elements in our apps, such as various toggles.
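For instance, a mute toggle might opt in like this (a sketch; the symbol pair is our choice):

```swift
import SwiftUI

struct MuteButton: View {
    @State private var isMuted = false

    var body: some View {
        Button {
            withAnimation { isMuted.toggle() }
        } label: {
            // With related symbols, the system can upgrade .replace
            // to Magic Replace automatically.
            Image(systemName: isMuted ? "speaker.slash.fill" : "speaker.wave.2.fill")
                .contentTransition(.symbolEffect(.replace))
        }
    }
}
```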
Discover how to create stunning visual effects in SwiftUI. Learn to build unique scroll effects, rich color treatments, and custom transitions. We'll also explore advanced graphic effects using Metal shaders and custom text rendering.
A two-dimensional gradient defined by a 2D grid of positioned colors.
Each vertex has a position, a color, and four surrounding Bezier control points (leading, top, trailing, bottom) that define the tangents connecting the vertex with its four neighboring vertices. (Vertices on the corners or edges of the mesh have fewer than four neighbors; they ignore their extra control points.) Control points may either be specified explicitly or implicitly.
When rendering, a tessellated sequence of Bezier patches is created, and vertex colors are interpolated across each patch, either linearly, or via another set of cubic curves derived from how the colors change between neighbors – the latter typically gives smoother color transitions.
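A minimal sketch of a 3×3 mesh (the positions and colors are arbitrary):

```swift
import SwiftUI

// Nine vertices on a 3×3 grid; colors are interpolated between them.
MeshGradient(
    width: 3, height: 3,
    points: [
        [0, 0], [0.5, 0], [1, 0],
        [0, 0.5], [0.5, 0.5], [1, 0.5],
        [0, 1], [0.5, 1], [1, 1]
    ],
    colors: [
        .red, .orange, .yellow,
        .purple, .white, .green,
        .blue, .indigo, .teal
    ]
)
```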
Discover how Swift balances abstraction and performance. Learn what elements of performance to consider and how the Swift optimizer affects them. Explore the different features of Swift and how they're implemented to further understand the tradeoffs available that can impact performance.
Dive into the basis for your app's dynamic memory: the heap! Explore how to use Instruments and Xcode to measure, analyze, and fix common heap issues. We'll also cover some techniques and best practices for diagnosing transient growth, persistent growth, and leaks in your app.
Get started with noncopyable types in Swift. Discover what copying means in Swift, when you might want to use a noncopyable type, and how value ownership lets you state your intentions clearly.
Measure CPU and GPU utilization to find ways to improve your app’s performance.
You use the RealityKit framework to add 3D content to an ARKit app. The framework runs an entity component system (ECS) on the CPU to manage tasks like physics calculations, animations, audio processing, and network synchronization. It also relies on the Metal framework and GPU hardware to perform multithreaded rendering.
Although RealityKit handles much of the complexity of this system for you, it’s still important to optimize your app for performance. Use debugging features built in to RealityKit — along with standard tools like Xcode and Instruments — to pinpoint the causes of reduced frame rate. Then make data-driven adjustments to your assets or to the way you use the framework to improve performance.
Learn how to use LLDB to explore and debug codebases. We'll show you how to make the most of crashlogs and backtraces, and how to supercharge breakpoints with actions and complex stop conditions. We'll also explore how the “p” command and the latest features in Swift 6 can enhance your debugging experience.
Meet the RealityKit debugger and discover how this new tool lets you inspect the entity hierarchy of spatial apps, debug rogue transformations, find missing entities, and detect which parts of your code are causing problems for your systems.
Explore how builds are changing in Xcode 16 with explicitly built modules. Discover how modules are used to build your code, how explicitly built modules improve transparency in compilation tasks, and how you can optimize your build by sharing modules across targets.
Learn about the capabilities of SwiftUI container views and build a mental model for how subviews are managed by their containers. Leverage new APIs to build your own custom containers, create modifiers to customize container content, and give your containers that extra polish that helps your apps stand out.
Build a multiplatform app that uses windows, volumes, and animations to create a robot botanist’s greenhouse.
BOT-anist is a game-like experience where you build a custom robot botanist by selecting from a variety of color and shape options, and then guide your robot around a futuristic greenhouse to plant alien flowers. This app demonstrates how to build an app for visionOS, macOS, iOS, and iPadOS using a single shared Xcode target and a shared Reality Composer Pro project.
This sample shows off a number of RealityKit and visionOS features, including volume ornaments, dynamic lights and shadows, animation library components, and vertex animation using blend shapes. It also demonstrates how to set a volume’s default size and enable user resizing of volumes.
Discover powerful new ways to customize volumes and immersive spaces in visionOS. Learn to fine-tune how volumes resize and respond to people moving around them. Make volumes and immersive spaces interact through the power of coordinate conversions. Find out how to make your app react when people adjust immersion with the Digital Crown, and use a surrounding effect to dynamically customize the passthrough tint in your immersive space experience.
Learn how to create great single and multi-window apps in visionOS, macOS, and iPadOS. Discover tools that let you programmatically open and close windows, adjust position and size, and even replace one window with another. We'll also explore design principles for windows that help people use your app within their workflows.
Access iCloud from macOS guest virtual machines.
In macOS 15 and later, Virtualization supports access to iCloud accounts and resources when running macOS in a virtual machine (VM) on Apple silicon. When you create a VM in macOS 15 from a macOS 15 software image (an .ipsw file) using a VZMacHardwareModel that you obtain from a VZMacOSRestoreImage, Virtualization configures an identity for the VM that it derives from security information in the host’s Secure Enclave. Just as individual physical devices have distinct identities based on their Secure Enclaves, this identity is distinct from those of other VMs.
If someone moves a VM to a different Mac host and restarts it, the Virtualization framework automatically creates a new identity for the VM using the information from the Secure Enclave of the new Mac host. This identity change requires the person using the VM to reauthenticate to allow iCloud to restart syncing data to the VM.
Additionally, the Virtualization framework detects attempts to start multiple copies of the same VM simultaneously on the same Mac host. For example, when someone duplicates the files that make up a VM, the framework treats the copy of the VM as a clone of the first one. Starting a second clone while another clone is already running causes the Virtualization framework to automatically construct a new identity for the second clone. This preserves the guarantee that different VMs have distinct identities, and requires that the person using the VM reauthenticate to use iCloud services.
A value that can replace the default text view rendering behavior.
An object for controlling video experiences.
Use this class to control, observe, and respond to experience changes for an `AVPlayerViewController`. `AVPlayerViewController`’s presentation APIs will no longer be honored once an `AVExperienceController` is attached. Using those presentation APIs may preclude use of `AVExperienceController`.
An object to manage viewing multiple videos at once.
Clipboard manager for macOS which does one job - keep your copy history at hand. Period.
Lightweight. Open source. No fluff.
Super Simple Streaming For 75% Less
Stream 8K+ resolution videos without any encoding or packaging costs. Spend your time on creativity instead of complexity.
Watch YouTube in your theater
Welcome to Theater: the most immersive way to watch YouTube, your media files, and even spatial livestreamed events in a tastefully designed movie theater with immersive sound, multiplex scale, and yes, even your friends can join you to watch together in SharePlay.
Add information to declarations and types.
There are two kinds of attributes in Swift — those that apply to declarations and those that apply to types. An attribute provides additional information about the declaration or type. For example, the `discardableResult` attribute on a function declaration indicates that, although the function returns a value, the compiler shouldn’t generate a warning if the return value is unused.

You specify an attribute by writing the @ symbol followed by the attribute’s name and any arguments that the attribute accepts:

```swift
@<#attribute name#>
@<#attribute name#>(<#attribute arguments#>)
```
Some declaration attributes accept arguments that specify more information about the attribute and how it applies to a particular declaration. These attribute arguments are enclosed in parentheses, and their format is defined by the attribute they belong to.
A type whose values can safely be passed across concurrency domains by copying.
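For example, a value type whose stored properties are all themselves Sendable gets this guarantee cheaply (a minimal sketch):

```swift
// Safe to pass across concurrency domains: copying a Coordinate
// shares no mutable state.
struct Coordinate: Sendable {
    var latitude: Double
    var longitude: Double
}
```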
What is an effect, anyway? In this hypothetical language (which I’ll call effecta because it sounds cool), an effect should be any change to any state. This sounds generic, but a generic foundation allows us to apply a question to everything we do to make sure it respects the symmetries of our system. Our question in this case will be: “is this thing an effect?”, and the answer will be “it is an effect if and only if it is a change to some state”.
This proposal allows the compiler to model how values are accessed such that it can be much less conservative about sendability. This modeling takes into account control statements like `if` and `switch` as it tracks values. The implementation is deep and, unsurprisingly, very sophisticated. I’m not going to get into too much of the details; instead, I’m going to quote the proposal directly:

> The compiler will allow transfers of non-Sendable values between isolation domains where it can prove they are safe and will emit diagnostics when it cannot at potential concurrent access points so that programmers don’t have to reason through the data flow themselves.
Build efficient custom worlds for your app.
You can implement immersive environments for your app that people can fade in and out using the Digital Crown, just like the provided system environments. However, custom immersive environments can cause performance and thermal problems if you’re not careful about how you build them. This article describes ways to address these potential problems, and the sample provides a demonstration of some of these methods in action.
In simple words, the problem is that the linker over-optimizes the binary, removing symbols that are needed at runtime. The linker’s dead-stripping logic can’t detect dynamic references, so it may strip symbols that are only referenced dynamically. And this is something that happens not only when referencing Objective-C symbols, but Swift symbols too. For example, when integrating Composable Architecture, which uses Objective-C runtime capabilities, developers might need to add explicit references to those symbols or add the aforementioned flags to the build settings.
This post is the first of a series called “pondering about what we could do better in the programming world because I have time to waste”. In this post I would like to introduce the idea of a static effect system and how it could be beneficial to programming languages moving forward.
Git Credential Manager (GCM) is another way to store your credentials securely and connect to GitHub over HTTPS. With GCM, you don't have to manually create and store a personal access token, as GCM manages authentication on your behalf, including 2FA (two-factor authentication).
Describe your data up front and generate schemas, API specifications, client / server code, docs, and more.
If you want SwiftUI to reinitialize a state object when a view input changes, make sure that the view’s identity changes at the same time. One way to do this is to bind the view’s identity to the value that changes using the id(_:) modifier.
From the Apple documentation:

> SwiftUI only initializes a state object the first time you call its initializer in a given view. This ensures that the object provides stable storage even as the view’s inputs change. However, it might result in unexpected behavior or unwanted side effects if you explicitly initialize the state object.
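A minimal sketch of the recommended pattern (`DetailView` and `item` are hypothetical):

```swift
// Recreates DetailView's identity, and therefore its state storage,
// whenever item.id changes.
DetailView(item: item)
    .id(item.id)
```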
How to get the most out of Xcode Previews
I like using previews as a sort of story-book-like feature. Whenever I create a new component on my Components/CoreUI modules, I create a Previews view. It’s just a simple Form with two sections:
- Component: This is where the real component is displayed.
- Configuration: A set of LabeledContents with customization options for the component (texts, toggles, pickers, etc).
It’s pretty easy to do, and it gives a quick glance at how the component looks and feels.
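A sketch of that preview layout (`ChecklistRow` is a hypothetical component standing in for the real one):

```swift
import SwiftUI

// A hypothetical component under development.
struct ChecklistRow: View {
    var title: String
    var isChecked: Bool

    var body: some View {
        Label(title, systemImage: isChecked ? "checkmark.circle.fill" : "circle")
    }
}

// The story-book-style preview described above.
struct ChecklistRow_Previews: View {
    @State private var title = "Buy milk"
    @State private var isChecked = true

    var body: some View {
        Form {
            Section("Component") {
                ChecklistRow(title: title, isChecked: isChecked)
            }
            Section("Configuration") {
                LabeledContent("Title") {
                    TextField("Title", text: $title)
                }
                LabeledContent("Checked") {
                    Toggle("Checked", isOn: $isChecked)
                        .labelsHidden()
                }
            }
        }
    }
}

#Preview { ChecklistRow_Previews() }
```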
As is generally known, SwiftUI hands off some of its work to a private framework called AttributeGraph. In this article we will explore how SwiftUI uses that framework to efficiently update only those parts of an app necessary and to efficiently get the data out of your view graph it needs for rendering your app.
In one of my iOS apps, I recently faced a problem where I had to efficiently look up locations that are geographically close to a specified point. As the naive approach, computing the distance between dozens of point pairs, seemed inefficient to me, I did a little research and tried the Apple-provided R-tree implementation from GameKit.
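A rough sketch of indexing and querying with GameKit’s R-tree (coordinates and labels are placeholders, not the post’s data):

```swift
import GameplayKit
import simd

// Index point-like elements; min and max are equal for a single point.
let tree = GKRTree<NSString>(maxNumberOfChildren: 4)
tree.addElement("Cafe" as NSString,
                boundingRectMin: vector_float2(2, 3),
                boundingRectMax: vector_float2(2, 3),
                splitStrategy: .reduceOverlap)

// Query everything inside a bounding rectangle.
let nearby = tree.elements(inBoundingRectMin: vector_float2(1, 2),
                           rectMax: vector_float2(3, 4))
print(nearby) // ["Cafe"]
```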
A structure that calculates k-nearest neighbors.
An object that starts and manages headphone motion services.
This class delivers headphone motion updates to your app. Use an instance of the manager to determine if the device supports motion, and to start and stop updates. Adopt the CMHeadphoneMotionManagerDelegate protocol to receive and respond to motion updates. Before using this class, check isDeviceMotionAvailable to make sure the feature is available.
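A minimal sketch of starting updates with this class:

```swift
import CoreMotion

let manager = CMHeadphoneMotionManager()

// Check availability before starting updates.
if manager.isDeviceMotionAvailable {
    manager.startDeviceMotionUpdates(to: .main) { motion, error in
        guard let motion else { return }
        print("pitch:", motion.attitude.pitch,
              "roll:", motion.attitude.roll,
              "yaw:", motion.attitude.yaw)
    }
}
```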
One workaround is to rely on environment variables in your `Package.swift`:

```swift
// swift-tools-version:5.7
// The swift-tools-version declares the minimum version of Swift required to build this package.

import Foundation
import PackageDescription

let mocksEnabled = ProcessInfo.processInfo.environment["MOCKS"] != "NO"

let package = Package(
    name: "MyPackage",
    defaultLocalization: "en",
    dependencies: [
    ],
    targets: [
        // your targets
    ]
)

package.targets.forEach { target in
    guard target.type == .regular else { return }
    var settings = target.swiftSettings ?? []
    // …
}
```
A fun side project for a great cause featuring Core Motion, SwiftUI, a little help from AI, and a pair of AirPods to count 100 push-ups a day.
Entering a new platform only happens a few times in a developer’s life. It is a rare and delicious event when you step into the realm of something genuinely new. If you are fast, you can feel like one of the explorers of old times. Everything is new and flexible; the new platform doesn’t yet have established patterns, which gives you plenty of space to experiment.
Optimize text readability in visionOS leveraging font, color, and vibrancy
visionOS introduces a new layer to typography, where spatial considerations play a crucial role. Unlike traditional displays, text needs to be legible from varying distances and contexts, so font size and weight become the main factors in establishing a clear typographic hierarchy.
I’ve found what I believe to be a bug, or at least deeply disappointing behavior, in Xcode’s treatment of SwiftUI previews. I’ll put an explanation together in the paragraphs that follow, but the TL;DR is: I think you’ll probably want to start wrapping all your SwiftUI Previews and Preview Content Swift source code in `#if DEBUG` active compilation condition checks.
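Following that advice looks like this (a sketch; `ContentView` stands in for your view):

```swift
#if DEBUG
#Preview {
    ContentView()
}
#endif
```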
When developing a public API, we often reach the point where we would like different clients of our interface to consume experimental features under development, or methods tailored specifically for them that we would not like other clients to use.
Swift's @_spi (System Programming Interface) attribute offers a solution by allowing developers to define subsets of an API targeted at specific clients, effectively hiding them from unintended users.
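A minimal sketch of the mechanism (the SPI group and module names are placeholders):

```swift
// In the library target:
@_spi(Experimental) public func experimentalFeature() {
    // visible only to clients that opt in to this SPI group
}

// In a client that opts in (shown as comments, since it lives
// in a separate module):
// @_spi(Experimental) import MyLibrary
// experimentalFeature()
```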
The front-end to your dev env
Pronounced "MEEZ ahn plahs"
An interactive study of common retry methods
Provide the localizable files from your project to localizers.
Export localizations for the languages and regions you’re ready to support. You can export all the files that you need to localize from your Xcode project, or export the files for specific localizations. Optionally, add files to the exported folders to provide context, and then give the files to localizers.
Setting name: `SWIFT_EMIT_LOC_STRINGS`

When enabled, the Swift compiler will be used to extract Swift string literal and interpolation `LocalizedStringKey` and `LocalizationKey` types during localization export.
As I've previously blogged in Pure Rust Implementation of Apple Code Signing (2021-04-14) and Expanding Apple Ecosystem Access with Open Source, Multi Platform Code Signing (2022-04-25), I've been hacking on an open source implementation of Apple code signing and notarization using the Rust programming language. This takes the form of the `apple-codesign` crate / library and its `rcodesign` CLI executable. (Documentation / GitHub project / crates.io).
The `git rerere` functionality is a bit of a hidden feature. The name stands for “reuse recorded resolution” and, as the name implies, it allows you to ask Git to remember how you’ve resolved a hunk conflict so that the next time it sees the same conflict, Git can resolve it for you automatically.
Collaboratively editing strings of text is a common desire in peer-to-peer applications. For example, a note-taking app might represent each document as a single collaboratively-edited string of text.
The algorithm presented here is one way to do this. It comes from a family of algorithms called CRDTs, which I will not describe here. It's similar to the approaches taken by popular collaborative text editing libraries such as Yjs and Automerge. Other articles have already been written about these similar approaches (see the references section below), but this article also has a nice interactive visualization of what goes on under the hood.
A frustrating aspect of the new MacBook Pro models is the notch. The notch itself isn't the problem; rather, it's that Apple hasn't automatically adjusted the menu bar icons so they don't hide behind the notch when many apps are running.
My colleagues often suggest purchasing Bartender for about 20€ to solve this issue. While it offers many features, I've refused to pay for a 3rd-party solution to Apple's poor design decision. I have nothing against Bartender; I just don’t want to install yet another app on my machine to solve such a simple problem.
Recently, I discovered a free, built-in macOS workaround that doesn't require installing Bartender or any other additional apps.
```
defaults -currentHost write -globalDomain NSStatusItemSelectionPadding -int 6
defaults -currentHost write -globalDomain NSStatusItemSpacing -int 6
```
Expand the market for your app by supporting multiple languages and regions.
Localization is the process of translating and adapting your app into multiple languages and regions. Localize your app to provide access for users who speak a variety of languages, and who download from different App Store territories.
First, internationalize your code with APIs that automatically format and translate strings correctly for the language and region. Then add support for content that includes plural nouns and verbs by following language plural rules to increase the accuracy of your translations.
Use a string catalog to translate text, handle plurals, and vary the text your app displays on specific devices.
Your app delivers the best experience when it runs well in a person’s locale and displays content in their native language. Supporting multiple languages is more than translating text. It includes handling plurals for nouns and units, as well as displaying the right form of text on specific devices.
Use a string catalog to localize and translate all your app’s text in a visual editor right in Xcode. A string catalog automatically tracks all the localizable strings from your code, and keeps your translations in one place.
Use string catalogs to host translations, configure pluralization messages for different regions and locales, and change how text appears on different devices.
A specialized view that creates, configures, and displays Metal objects.
The MTKView class provides a default implementation of a Metal-aware view that you can use to render graphics using Metal and display them onscreen. When asked, the view provides a MTLRenderPassDescriptor object that points at a texture for you to render new contents into. Optionally, an MTKView can create depth and stencil textures for you and any intermediate textures needed for antialiasing. The view uses a CAMetalLayer to manage the Metal drawable objects.
The view requires a MTLDevice object to manage the Metal objects it creates for you. You must set the device property and, optionally, modify the view’s drawable properties before drawing.
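A minimal configuration sketch (not from the documentation; the MSAA sample count is an arbitrary example):

```swift
import MetalKit

// A Metal-capable device is required before the view can draw anything.
func makeMetalView() -> MTKView {
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("Metal is not supported on this device")
    }
    let view = MTKView(frame: .zero, device: device)
    view.colorPixelFormat = .bgra8Unorm
    view.depthStencilPixelFormat = .depth32Float // ask the view to create a depth texture
    view.sampleCount = 4 // and the intermediate textures needed for antialiasing
    return view
}
```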
Hey, we’re the makers of Clerk and Nextjournal! 👋
We’re building application.garden, a platform for hosting small web applications written in Clojure.
You can read more about what it will be able to do in the application.garden docs. A lot of this is still in flux though, so check back regularly and be surprised! ✨
This article documents several techniques I have found effective at improving the run time performance of Swift applications without resorting to “writing C in `.swift` files”. (That is, without resorting to C-like idioms and design patterns.) It also highlights a few pitfalls that often afflict Swift programmers trying to optimize Swift code.

These tips are relevant as of version 5.5 of the Swift compiler. The only reason I say this is because a few of the classical boogeymen in the Swift world, like “Objective-C bridging” and “reference counting overhead”, are no longer as important as they once were.
For an introduction and motivation into Embedded Swift, please see "A Vision for Embedded Swift", a Swift Evolution document highlighting the main goals and approaches.
The following document explains how to use Embedded Swift's support in the Swift compiler and toolchain.
SE-0421: Generalize effect polymorphism for AsyncSequence and AsyncIteratorProtocol
Have you ever wanted to use `some AsyncSequence`? I certainly have. The inability to hide the implementation type of an `AsyncSequence` is an enormous pain. It is particularly problematic when trying to replace Combine with AsyncAlgorithms. There are some libraries out there that help, but I’d really like this problem to just disappear.
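A minimal sketch of what the proposal enables (assuming a Swift 6 toolchain, where `AsyncSequence` gains primary associated types):

```swift
// The concrete AsyncStream stays hidden behind `some AsyncSequence`.
func numbers() -> some AsyncSequence<Int, Never> {
    AsyncStream<Int> { continuation in
        for i in 1...3 { continuation.yield(i) }
        continuation.finish()
    }
}

// Usage:
// for await n in numbers() { print(n) }
```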
a (very opinionated) tiny companion for your personal project
This article is a partial-rebuttal/partial-confirmation to KGOnTech’s Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, prompted by RoadToVR’s Quest 3 Has Higher Effective Resolution, So Why Does Everyone Think Vision Pro Looks Best? which cites KGOnTech. I suppose it’s a bit late, but it’s taken me a while to really get a good intuition for how visionOS renders frames, because there is a metric shitton of nuance and it’s unfortunately very, very easy to make mistakes when trying to quantify things.
This post is divided into two parts: Variable Rasterization Rate (VRR) and how visionOS renders frames (including hard numbers for internal render resolutions and such), and a testbench demonstrating why photographing the visual clarity of Vision Pro (and probably future eye-tracked headsets) may take more than a DSLR pointed into the lenses (and how to detect the pitfalls if you try!)
Wasmphobia analyzes a WebAssembly file and gives you a breakdown of what contributed to the module’s size. This is only really useful when the WebAssembly binary has DWARF debugging data embedded.
SE-0420: Inheritance of actor isolation
Swift’s concurrency system seems incredibly simple at first. But, eventually, we all discover that there’s actually a tremendous amount of learning required to use concurrency successfully. And, one of the most challenging things is there’s also quite a bit to unlearn too. Swift concurrency has many features that feel familiar, but actually work very differently.
In this livestream we discuss all things app architecture! This includes the risks of bringing in 3rd party libraries, how TCA compares to other styles of building apps, the future of TCA, dependency management, and a whole bunch more.
Your first step toward developing for Apple platforms.
Pathways are simple and easy-to-navigate collections of the videos, documentation, and resources you’ll need to start building great apps and games. They’re the perfect place to begin your Apple developer journey — all you need is a Mac and an idea.
The SyncUps application is a recreation of one of Apple’s more interesting demo applications, Scrumdinger. We recreate it from scratch using the Composable Architecture, with a focus on domain modeling, controlling dependencies, and testability.
Matter Casting consists of three parts:
- The mobile app: For most content providers, this would be your consumer-facing mobile app. By making your mobile app a Matter "Casting Client", you enable the user to discover casting targets, cast content, and control casting sessions. The example Matter tv-casting-app for Android / iOS and Linux builds on top of the Matter SDK to demonstrate how a TV Casting mobile app works.
- The TV content app: For most content providers, this would be your consumer-facing app on a Smart TV. By enhancing your TV app to act as a Matter "Content app", you enable Matter Casting Clients to cast content. The example Matter content-app for Android builds on top of the Matter SDK to demonstrate how a TV Content app works.
- The TV platform app: The TV platform app implements the Casting Video Player device type and provides common capabilities around media playback on the TV such as play/pause, keypad navigation, input and output control, content search, and an implementation of an app platform as described in the media chapter of the device library specification. This is generally implemented by the TV manufacturer. The example Matter tv-app for Android builds on top of the Matter SDK to demonstrate how a TV platform app works.
This document describes how to enable your Android and iOS apps to act as a Matter "Casting Client". This documentation is also designed to work with the example Matter tv-casting-app samples so you can see the experience end to end.
Domain modeling plays a significant role in modern software design, and investing time and effort to mastering this skill will be worth your while. Learn to leverage Swift's expressive type system to create accurate and robust models tailored to solve problems in your domain.
Positions this view within an invisible frame with a size relative to the nearest container.
Use this modifier to specify a size for a view’s width, height, or both that is dependent on the size of the nearest container. Different things can represent a “container” including:
- The window presenting a view on iPadOS or macOS, or the screen of a device on iOS.
- A column of a NavigationSplitView
- A NavigationStack
- A tab of a TabView
- A scrollable view like ScrollView or List
The size provided to this modifier is the size of a container like the ones listed above subtracting any safe area insets that might be applied to that container.
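For example, a sketch using the closure-based variant of the modifier (the 70% factor is arbitrary):

```swift
import SwiftUI

struct CardRow: View {
    var body: some View {
        ScrollView(.horizontal) {
            HStack {
                ForEach(0..<5) { _ in
                    Color.blue
                        // Each card takes 70% of the container's width,
                        // regardless of the device or window size.
                        .containerRelativeFrame(.horizontal) { length, _ in
                            length * 0.7
                        }
                }
            }
        }
    }
}
```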
So you have a Swift Package Manager project, without an xcodeproj, and you launch Instruments, and try to profile something (maybe Allocations), and you receive the message “Required kernel recording resources are in use by another document.” But of course you don’t have any other documents open in Instruments and you’re at a loss, so you’ve come here. Welcome.
This package allows you to use various hidden SwiftUI features. Compatible with macOS 12.0+, iOS 15.0+
SE-0418: Inferring Sendable for methods and key path literals
This is a dense proposal, covering a lot of tricky stuff around the relationships between functions, key paths, and sendability. I’m going to go out on a limb here and say that the changes here won’t affect the majority of Swift users. However, the changes are still welcome!
Conveniently generate your app PrivacyInfo.xcprivacy file.
Starting on May 1st 2024, Apple requires all apps that make use of certain APIs to declare this usage in a privacy manifest file. Since editing the file by hand is somewhat tedious, this site will help you generate the file instead, so you just select which items you need to include and we do the rest!
Many yearn for the “good old days” of the web. We could have those good old days back — or something even better — and if anything, it would be easier now than it ever was.
System Log Analyzer
SwiftUI uses Dynamic Type to scale fonts based on the user's preferred text size (the size can be changed in the Settings app). At the moment of writing, Dynamic Type is not yet supported on macOS. When writing SwiftUI code, we can use the `.font` modifier to automatically set a dynamic type style, such as `body`, `largeTitle`, or any of the other built-in styles. The system then chooses the appropriate font based on the user's settings.
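For instance, a minimal sketch:

```swift
import SwiftUI

struct Article: View {
    var body: some View {
        VStack(alignment: .leading) {
            Text("Headline").font(.largeTitle) // scales with Dynamic Type
            Text("Body copy").font(.body)      // so does this
        }
    }
}
```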
While everyone who writes Swift code will use Swift Macros, not everyone should write their own Swift Macros. This book will help you determine whether writing Swift Macros is for you and show you the best ways to make your own.

You'll create both freestanding and attached macros and get a feel for when you should and shouldn't create them, which sort of macro you should create, and how to use SwiftSyntax to implement them. Your macros will accept parameters when appropriate and will always include tests. You'll also learn to create helpful diagnostics for your macros, and even FixIts.
Following Structured Concurrency was one of the best decisions Swift could have made when introducing concurrency into the language. The impact of that decision on all the code written with concurrency in mind can't be overstated.

But the other day I needed a tool that, while allowing me to stay in a structured concurrency system, could internally leverage unstructured techniques. The exact situation is not really relevant beyond understanding that I have a system that needs to read a value from some storage, with the quirk that the value may not be there yet and thus it should wait for it.

I want to follow structured concurrency principles on the reading side. But we can't implement this without escaping the confines of that structure, because reading the value is not what starts the work of generating it. Instead, it's another independent subsystem, at another time, that ends up saving the value into the storage.
To accomplish this, we need a way to pause the execution and resume it when another system tells us the value is available.
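A minimal sketch of that shape, using a checked continuation inside an actor (all names are hypothetical; this is an illustration, not the author's implementation):

```swift
// Readers suspend until some other subsystem writes the value.
actor ValueStore<Value: Sendable> {
    private var value: Value?
    private var waiters: [CheckedContinuation<Value, Never>] = []

    // Structured on the outside: callers simply `await read()`.
    func read() async -> Value {
        if let value { return value }
        return await withCheckedContinuation { continuation in
            waiters.append(continuation) // unstructured on the inside
        }
    }

    // Called later, by an independent subsystem.
    func write(_ newValue: Value) {
        value = newValue
        for waiter in waiters { waiter.resume(returning: newValue) }
        waiters.removeAll()
    }
}
```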
In addition to using Apple’s convenient, safe, and secure in-app purchase system, apps on the App Store in the United States that offer in-app purchases can also use the StoreKit External Purchase Link Entitlement (US) to include a link to the developer’s website that informs users of other ways to purchase digital goods or services. To use the entitlement, you’ll need to submit a request, enable the entitlement in Xcode, and use required StoreKit APIs. Apple will review your app to ensure it complies with the terms and conditions of the entitlement, as well as the App Review Guidelines and the Apple Developer Program License Agreement.
Now it’s Ruby that’s 5 times faster than Crystal!!! And 20x faster than our original version. Most likely that’s some cost from the FFI or something similar, though it does seem like a surprising amount of overhead.
I thought it was notable that by making some minor tweaks to Ruby code it can now outperform a precompiled statically typed language in a purpose-built example of when it is slow. I’m hopeful that someday with future advancements in the Ruby JIT even the small tweaks might not be necessary.
This is a parody of the nLab, a wiki for collaborative work on category theory and higher category theory. As anyone who's visited is probably aware, the jargon can be absolutely impenetrable for the uninitiated — thus, the idea for this project was born!
Once you generate a page, you can link to it using the hash url displayed; loading the site with no hash or following any link in the body will get you a new random page!
Configure the session when a SharePlay activity starts, and handle events that occur during the lifetime of the activity.
When one person in a group starts an activity, other people’s devices display system UI to prompt them to join that activity. When each person joins, the system prepares a GroupSession object for the activity and delivers it to their app. Your app uses that session object to:
- Prepare any required UI.
- Start the activity, monitor its state, and respond to changes.
- Synchronize activity-related information.
For information about how to define activities, see Defining your app’s SharePlay activities. For information about how to start activities, see Presenting SharePlay activities from your app’s UI.
Because of habits ingrained in me, by default I tend to reach for synchronous, blocking APIs when reading and writing data to and from disk. This causes problems with Swift’s cooperatively scheduled Tasks. In this post, I examine the various async-safe approaches I’ve discovered to hitting disk, and end with a general approach that I ended up using.
-
Functional programming emphasizes the use of mathematical functions and immutable data to construct software systems. This approach brings forth plenty of benefits, ranging from improved scalability and enhanced readability to streamlined debugging processes. In recent years, functional programming languages and frameworks have witnessed a surge in popularity, driven by their proven efficiency in real-world scenarios.
- Concurrency
- Enhanced readability
- Improved scalability
- Easier debugging
- Efficient parallel programming
- Testability
- Modularity
- Easier to reason about
- Transparency
A home for makers, musicians, artists and DIY content creators
MobileCode (previously medc) is an editor for C. It was written for 📱phones and adapted to 🖥desktop.
It features:
- individual line wrapping, prettified
- hierarchical collapsing based on {} and empty lines
- 📱swipe control
- code generation via shell script comments
- 📱Termux integration
etc: multicursor, regex search, regex replace, undo, select, line select, cut/copy/paste
Swift 5.9 (WWDC23) introduced Macros to make your codebase more expressive and easier to read. In this article, I'll go over why swift macros exist, how they work and how you can easily set up one of your own.
This guide includes:
- An overview of the Instruction Set Architecture (ISA) along with a method for detecting the existence of ISA features
- Detailed description of the Advanced SIMD and Floating Point (FP) instructions
- A discussion of intrinsic functions for utilizing specific instructions in high-level languages
- An overview of CPU and cache topologies with recommendations for effective utilization of asymmetric multiprocessing
- A high-level overview of CPU microarchitecture with sizes of key CPU structures and instruction latency and bandwidth tables
- A discussion of recommended instruction sequences for various actions and algorithms
- Lists of performance-monitoring events and metrics to measure key CPU performance behavior
SE-411: Isolated default value expressions
In my first post in this series, I said that Swift 5.10 can correctly find all possible sources of data races. But, I kind of lied! It turns out there is actually a pretty significant isolation hole in that version. But it gets a little more complicated, which I’ll get to.
- SE-401: Remove Actor Isolation Inference caused by Property Wrappers
- SE-411: Isolated default value expressions
If you’ve read my first post about Spatial Video, the second about Encoding Spatial Video, or if you’ve used my command-line tool, you may recall a mention of Apple’s mysterious “fisheye” projection format. Mysterious because they’ve documented a CMProjectionType.fisheye enumeration with no elaboration, they stream their immersive Apple TV+ videos in this format, yet they’ve provided no method to produce or playback third-party content using this projection type.
Additionally, the format is undocumented, they haven’t responded to an open question on the Apple Discussion Forums asking for more detail, and they didn’t cover it in their WWDC23 sessions. As someone who has experience in this area – and a relentless curiosity – I’ve spent time digging into Apple’s fisheye projection format, and this post shares what I’ve learned.
The de facto app for controlling monitors
Swift 5 updates have been slowly building up to the release of Swift 6. Some of the major updates have been the addition of async/await (concurrency) and existentials. If you use any of these features, there will be some significant changes that will require some refactoring. Continue reading to learn how to prepare your projects and packages before the release of Swift 6 so you can also take advantage of new features (such as Swift 5.10's full data isolation) and have a smooth, easy transition without any disruptive refactoring.
- Good timing. I am waiting on some backend changes to finish for a new set of features (coming soon). Also, WWDC is going to be here before we know it, and that may influence my plans. It's usually a good time in the months leading up to WWDC to address any technical debt in order to prepare for potential new APIs.
- Concerns about iCloud. While I have no plans for Foodnoms to stop using iCloud, in the past year I've had more concerns about the app's reliance on it. Too much of the Foodnoms codebase directly depends on CloudKit. This has started to feel more like a liability, in the case one day I wish to use another backend syncing service.
- Sharing code with another app. For the past four months or so, I've been working on another app. This app was able to share a lot of code with Foodnoms. This was done via a shared Swift package: the monolith, "CoreFoodNoms". While I was able to share a lot of code successfully, there were some global side-effects and assumptions that were tricky to work around. (Note: I have decided to pause work on this app for the time being.)
- Troubles with SwiftUI previews and compile times. SwiftUI previews for Foodnoms have always been unusable. This was mostly due to the incredibly slow compile times. I had heard that using SwiftUI previews in a smaller build target with fewer dependencies can help with this. However, this didn't work for me. The problem is that a lot of my SwiftUI code depends on core models, such as 'Food' and 'Recipe'. The thing is, these models were not 100% pure. Some of them referenced global singletons that required some sort of static/global initialization. As a result, SwiftUI previews of these views in smaller Swift packages would immediately crash, due to those singletons not being properly initialized.
macOS includes a variety of video and audio features that you can use in FaceTime and many other videoconferencing apps.
Reactions fill your video frame with a 3D effect expressing how you feel. To show a reaction, make the appropriate hand gesture in view of the camera and away from your face. Hold the gesture until you see the effect.
To turn this feature on or off, select Reactions in the Video menu, which appears in the menu bar when a video call is in progress. To show a reaction without using a hand gesture, click the arrow next to Reactions in the menu, then click a reaction button in the submenu.
-
With Swift 5.10, the compiler can correctly find all possible sources of data races. But, there are still quite a few sharp edges and usability issues with concurrency. Swift 6 is going to come with many language changes that will help. In fact, there are currently 13 evolution proposals and 4 pitches that are either directly or indirectly related to concurrency. That’s a lot!
The thing is, I often find it quite challenging to read these proposals. It can be really difficult for me to go from the abstract language changes to how they will impact concrete problems I’ve faced. Honestly, sometimes I don’t even fully get the language changes! But, I’m not going to let that stop me 😬

So, I’m going to make an attempt to cover all of the accepted evolution proposals. I’m not going to go too deep. Just a little introduction to the problem and a few examples to highlight the syntax changes. Of course, I’ll also throw in a little commentary. Each of these proposals probably deserves its own in-depth post. But, I’m getting tired just thinking about that.
Before continuing, let's take a moment to consider the cost of convenience in Xcode.
Designing a code editor that can serve the whole spectrum from small to large-scale projects is a challenging task. Many tools approach the problem by layering their solution and providing extensibility. The bottom-most layer is very low-level and close to the underlying build system, and the top-most layer is a high-level abstraction that's convenient to use but less flexible. By doing so, they make the simple things easy, and everything else possible.

However, Apple decided to take a different approach with Xcode. The reason is unknown, but it's likely that optimizing for the challenges of large-scale projects has never been their goal. They overinvested in convenience for small projects, provided little flexibility, and strongly coupled the tools with the underlying build system. To achieve this convenience, they provide sensible defaults, which you can easily replace, and add a lot of implicit build-time-resolved behaviors that are the culprit of many issues at scale.
Universal Links help people access your content, whether or not they have your app installed. Get the details on the latest updates for the Universal Links API, including support for Apple Watch and SwiftUI. Learn how you can reduce the size and complexity of your app-site-association file with enhanced pattern matching features like wildcards, substitution variables, and Unicode support. And discover how cached associated domains data will improve the initial launch experience for people using your app.
Enhance WASM is bringing server-side rendered web components to everyone. Author your components in friendly, standards-based syntax. Reuse them across multiple languages, frameworks, and servers. Upgrade them using familiar client-side code when needed.

Your path to resilient, cross-platform interfaces begins here.
files-to-prompt is a new tool I built to help me pipe several files at once into prompts to LLMs such as Claude and GPT-4.
You definitely want to enable the `DisableOutwardActorInference` upcoming feature flag!
This has come up several times on the forums, but I’ve never written it up in a standard place, so here it is: There are only three ways to get run-time polymorphism in Swift. Well, three and a half.
What do I mean by run-time polymorphism? I mean a function/method call (or variable or subscript access) that will (potentially) run different code each time the call happens. This is by contrast with many, even most other function calls: when you call Array’s append, it’s always the same method that gets called.
So, what are the three, sorry, three and a half ways to get this behavior?
Global variables allow you to access shared instances from anywhere in your codebase. With strict concurrency, we must ensure access to the global state becomes concurrency-safe by actor isolation or `Sendable` conformance. In exceptional cases, we can opt out by marking a global variable as nonisolated unsafe.
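A sketch of the three options (names are hypothetical; the last line assumes Swift 5.10's `nonisolated(unsafe)`):

```swift
import Foundation

@MainActor var imageCache: [String: Data] = [:] // safe: isolated to the main actor

let maxRetryCount = 3                           // safe: immutable and Sendable

nonisolated(unsafe) var legacyState: [Int] = [] // opt-out: you guarantee safety yourself
```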
This directory contains an Xcode project that can be used for rapidly iterating on refactorings built with the SwiftRefactor library.
Create video content for visionOS by converting an existing 3D HEVC file to a multiview HEVC format.
In visionOS, 3D video uses the Multiview High Efficiency Video Encoding (MV-HEVC) format, supported by MPEG4 and QuickTime. Unlike other 3D media, MV-HEVC stores a single track containing multiple layers for the video, where the track and layers share a frame size. This track frame size is different from other 3D video types, such as side-by-side video. Side-by-side videos use a single track, and place the left and right eye images next to each other as part of a single video frame.
To convert side-by-side video to MV-HEVC, you load the source video, extract each frame, and then split the frame horizontally. Then copy the left and right sides of the split frame into the left eye and right eye layers, writing a frame containing both layers to the output.
This sample app demonstrates the process for converting side-by-side video files to MV-HEVC, encoding the output as a QuickTime file. The output is placed in the same directory as the input file, with _MVHEVC appended to the original filename.
You can verify this sample’s MV-HEVC output by opening it with the sample project from Reading multiview 3D video files.
For the full details of the MV-HEVC format, see Apple HEVC Stereo Video — Interoperability Profile (PDF) and ISO Base Media File Format and Apple HEVC Stereo Video (PDF).
When you take off Apple Vision Pro (without disconnecting the battery or shutting it down), it turns off the displays to save power, locks for security, and goes to sleep. You can quickly wake and unlock Apple Vision Pro when you want to use it again.
If you disconnect the battery or shut down Apple Vision Pro, you’ll need to turn it on again before you can use it. See Complete setup.
Provide suggestions to people searching for content in your app.
You can suggest query text during a search operation by providing a collection of search suggestion views. Because suggestion views are not limited to plain text, you must also provide the search string that each suggestion view represents. You can also provide suggestions for tokens, if your search interface includes them. SwiftUI presents the suggestions in a list below the search field.
For both text and tokens, you manage the list of suggestions, so you have complete flexibility to decide what to suggest. For example, you can:
- Offer a static list of suggestions.
- Remember previous searches and offer the most recent or most common ones.
- Update the list of suggestions in real time based on the current search text.
- Employ some combination of these and other strategies, possibly changing over time.
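A minimal sketch combining a static list with the `searchSuggestions` modifier (view and data names are hypothetical):

```swift
import SwiftUI

struct SearchView: View {
    @State private var query = ""
    private let recents = ["swift", "metal", "visionos"]

    var body: some View {
        NavigationStack {
            List { /* search results */ }
                .searchable(text: $query)
                .searchSuggestions {
                    // Each suggestion carries the search string it represents.
                    ForEach(recents, id: \.self) { term in
                        Text(term).searchCompletion(term)
                    }
                }
        }
    }
}
```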
Apple asks customers to help improve iOS by occasionally providing analytics, diagnostic, and usage information. Apple collects this information anonymously.
Gather crash reports and device logs from the App Store, TestFlight, and directly from devices.
After your app is distributed to customers, learn ways to improve it by collecting crash reports and diagnostic logs. If a customer reports an issue with your app, use the Crashes organizer in Xcode to get a report about the issue, as described in How are reports created? If the Crashes organizer doesn’t contain the diagnostic information you need or is unavailable to you, the customer can collect logs from their device and share them directly with you to resolve the issue. Once you have a crash report, you may need to add identifiable symbol information to the crash report—see Adding identifiable symbol names to a crash report for more information. For issues that aren’t crashes, inspect the operating system’s console log to find important information for diagnosing the issue’s source.
Crossing the language boundary between Haskell and Swift. This is the second part of an in-depth guide into developing native applications using Haskell with Swift.
This is the second installment of the in-depth series of blog-posts on developing native macOS and iOS applications using both Haskell and Swift/SwiftUI. This post covers how to call (non-trivial) Haskell functions from Swift by using a foreign function calling-convention strategy similar to that described by Calling Purgatory from Heaven: Binding to Rust in Haskell that requires argument and result marshaling.
You may find the other blog posts in this series interesting:
The series of blog posts is further accompanied by a github repository where each commit matches a step of this tutorial. If in doubt regarding any step, check the matching commit to make it clearer.
This write-up has been cross-posted to Well-Typed’s Blog.
Spatial is a free macOS command-line tool to process MV-HEVC video files (currently produced by iPhone 15 Pro and Apple Vision Pro). It exports from MV-HEVC files to common stereoscopic formats (like over/under, side-by-side, and separate left- and right-eye videos) that can be used with standard stereo/3D players and video editors. It can also make MV-HEVC video from the same stereoscopic formats to be played on Apple Vision Pro and Meta Quest.
For a deeper dive into Apple’s spatial and immersive formats, read my post about Spatial Video.
I started working with language models five years ago when I led the team that created CodeSearchNet, a precursor to GitHub Copilot. Since then, I’ve seen many successful and unsuccessful approaches to building LLM products. I’ve found that unsuccessful products almost always share a common root cause: a failure to create robust evaluation systems.
I’m currently an independent consultant who helps companies build domain-specific AI products. I hope companies can save thousands of dollars in consulting fees by reading this post carefully. As much as I love making money, I hate seeing folks make the same mistake repeatedly.
This post outlines my thoughts on building evaluation systems for LLM-powered AI products.
Like software engineering, success with AI hinges on how fast you can iterate. You must have processes and tools for:
- Evaluating quality (ex: tests).
- Debugging issues (ex: logging & inspecting data).
- Changing the behavior of the system (prompt engineering, fine-tuning, writing code).

Many people focus exclusively on #3 above, which prevents them from improving their LLM products beyond a demo.¹ Doing all three activities well creates a virtuous cycle differentiating great from mediocre AI products (see the diagram below for a visualization of this cycle).
If you streamline your evaluation process, all other activities become easy. This is very similar to how tests in software engineering pay massive dividends in the long term despite requiring up-front investment.
To ground this post in a real-world situation, I’ll walk through a case study in which we built a system for rapid improvement. I’ll primarily focus on evaluation as that is the most critical component.
A high-level introduction to distributed actor systems.
Distributed actors extend Swift’s “local only” concept of `actor` types to the world of distributed systems.

In order to build distributed systems successfully, you will need to get into the right mindset.

While distributed actors make calling methods on potentially remote actors (i.e. sending messages to them) simple and safe, thanks to compile-time guarantees about the serializability of arguments to be delivered to the remote peer, it is important to stay in the mindset of “what should happen if this actor were indeed remote…?”

Distribution comes with the added complexity of partial failure of systems. Messages may be dropped as networks face issues, or a remote call may be delivered (and processed!) successfully while only the reply fails to make it back to the caller of a distributed function. In most, if not all, such situations the distributed actor cluster will signal problems by throwing transport errors from the remote function invocation.

In this section we will try to guide you towards “thinking in actors,” but perhaps it’s also best to first realize that you probably already know actors! Any time you implement some form of identity that is given tasks to work on, most likely using a concurrent queue or another synchronization mechanism, you are probably inventing some form of actor-like structure yourself!
In Swift 5.5, the Swift Package Manager adds support for package collections — bite size curated lists of packages that make it easy to discover, share and adopt packages.
At the time of this article’s publication, Swift 5.5 is available as a preview both from [Swift.org](http://swift.org) and in the Xcode 13 seeds. Swift 5.5 will be released officially later this year.
The goal of package collections is to improve two key aspects of the package ecosystem:
- Discovering great packages
- Deciding which package is the best fit for a particular engineering task
Package collections embrace and promote the concept of curation. Instead of browsing through long lists of web search results, package collections narrow the selection to a small list of packages from curators you trust. Package collections serve many use cases: For example, we envision communities of Swift developers publishing collections that reflect great packages produced and used by those communities to tackle everyday tasks. Educators can also use package collections to aggregate a set of packages to go along with course materials. Enterprises can use package collections to narrow the decision space for their internal engineering teams, focusing on a trusted set of vetted packages.
Choose a product or search below to view related documents and available downloads.
Many of Apple’s own visionOS apps, like Music, Safari, and Apple TV, have a handy search bar front and center on the window so you can easily search through your content. Oddly, as of visionOS 1.1, replicating this visually as a developer using SwiftUI or UIKit is not particularly easy due to the lack of a direct API, but it’s still totally possible, so let’s explore how.
With Swift, anyone can code like the pros. Whether you’re working on a project for school, earning an industry-recognized credential, or just looking to build your skills, Swift makes it easy to create great apps for all Apple platforms — no experience necessary.
The main Swift repository contains the source code for the Swift compiler and standard library, as well as related components such as SourceKit (for IDE integration), the Swift regression test suite, and implementation-level documentation.
The Swift driver repository contains a new implementation of the Swift compiler’s “driver”, which aims to be a more extensible, maintainable, and robust drop-in replacement for the existing compiler driver.
As a whole, the Swift compiler is principally responsible for translating Swift source code into efficient, executable machine code. However, the Swift compiler front-end also supports a number of other tools, including IDE integration with syntax coloring, code completion, and other conveniences. This document provides a high-level description of the major components of the Swift compiler:
- Parsing: The parser is a simple, recursive-descent parser (implemented in lib/Parse) with an integrated, hand-coded lexer. The parser is responsible for generating an Abstract Syntax Tree (AST) without any semantic or type information, and emits warnings or errors for grammatical problems with the input source.
- Semantic analysis: Semantic analysis (implemented in lib/Sema) is responsible for taking the parsed AST and transforming it into a well-formed, fully-type-checked form of the AST, emitting warnings or errors for semantic problems in the source code. Semantic analysis includes type inference and, on success, indicates that it is safe to generate code from the resulting, type-checked AST.
- Clang importer: The Clang importer (implemented in lib/ClangImporter) imports Clang modules and maps the C or Objective-C APIs they export into their corresponding Swift APIs. The resulting imported ASTs can be referred to by semantic analysis.
- SIL generation: The Swift Intermediate Language (SIL) is a high-level, Swift-specific intermediate language suitable for further analysis and optimization of Swift code. The SIL generation phase (implemented in lib/SILGen) lowers the type-checked AST into so-called “raw” SIL. The design of SIL is described in docs/SIL.rst.
- SIL guaranteed transformations: The SIL guaranteed transformations (implemented in lib/SILOptimizer/Mandatory) perform additional dataflow diagnostics that affect the correctness of a program (such as a use of uninitialized variables). The end result of these transformations is “canonical” SIL.
- SIL optimizations: The SIL optimizations (implemented in lib/SILOptimizer/Analysis, lib/SILOptimizer/ARC, lib/SILOptimizer/LoopTransforms, and lib/SILOptimizer/Transforms) perform additional high-level, Swift-specific optimizations to the program, including (for example) Automatic Reference Counting optimizations, devirtualization, and generic specialization.
- LLVM IR generation: IR generation (implemented in lib/IRGen) lowers SIL to LLVM IR, at which point LLVM can continue to optimize it and generate machine code.
Add conditional compilation markers around code that requires a particular family of devices or minimum operating system version to run.
When you invest time developing a new feature for an app, you want to get the maximum value out of the code you write. Creating a new project to support a new platform or operating system version adds unnecessary work, especially if most of your code stays the same. The best solution is to maintain one version of your app that runs on multiple platforms and operating system versions. To achieve this, compile code conditionally for the target platform, or use availability condition checks to run code based on operating system version.
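A small sketch combining both techniques (the type alias is illustrative):

```swift
import Foundation

#if os(iOS)
import UIKit
typealias PlatformColor = UIColor   // platform-specific type behind one name
#elseif os(macOS)
import AppKit
typealias PlatformColor = NSColor
#endif

func applyNewAppearance() {
    if #available(iOS 17, macOS 14, *) {
        // Run code that needs the newer OS version.
    } else {
        // Fall back on earlier versions.
    }
}
```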
Skip’s Swift to Kotlin language transpiler is able to convert a large subset of the Swift language into Kotlin. The transpiler has the following goals:
- Avoid generating buggy code. We would rather give you an immediate error or generate Kotlin that fails to compile altogether than to generate Kotlin that compiles but behaves differently than your Swift source.
- Allow you to write natural Swift. Swift is a sprawling language; we attempt to support its most common and useful features so that you can code with confidence.
- Generate idiomatic Kotlin. Where possible, we strive to generate clean and idiomatic Kotlin from your Swift source.
These goals form a hierarchy. For example, if generating more idiomatic Kotlin would run the risk of introducing subtle behavioral differences from the source Swift, Skip will always opt for a less idiomatic but bug-free transpilation.
3D DOM viewer, copy-paste this into your console to visualise the DOM topographically.
XcodePilot is a powerful development tool designed to provide integrated features and tools for Apple platform developers, aiming to enhance development efficiency and simplify the development process. XcodePilot integrates multiple tools, including Copilot, Xcode and runtime management, simulator management, cache cleaning, and keyboard shortcut customization. We continuously introduce new features to meet the needs of developers.
Binary Vector Search: The 30x Memory Reduction Revolution with Preserved Accuracy
Within the field of vector search, an intriguing development has arisen: binary vector search. This approach shows promise in tackling the long-standing issue of memory consumption by achieving a remarkable 30x reduction. However, a critical aspect that sparks debate is its effect on accuracy.
We believe that using binary vector search, along with specific optimization techniques, can maintain similar accuracy. To provide clarity on this subject, we showcase a series of experiments that will demonstrate the effects and implications of this approach.
By utilizing adaptive retrieval techniques, binary vectors can maintain a high level of accuracy while significantly reducing memory usage by 30 times. We have presented benchmark metrics in a table to showcase the results. It is important to note that these outcomes are specific to the openai text-embedding-3-large model, which possesses this particular property.
Learn how actors and Sendable prevent race conditions in your concurrent code.
Skip brings Swift app development to Android. It is a tool that enables developers to use a single modern programming language (Swift) and first-class development environment (Xcode) to build genuinely native apps for both iOS and Android.
To use Swift concurrency successfully, you have to learn to think in terms of isolation. It is the foundational mechanism the compiler uses to reason about and prevent data races. All variables and functions have it. The thing is, isolation is really different from every other synchronization mechanism I’ve used before. Now that I have more practice, I find it often feels really natural. But getting to that point took real time! And, boy, did I make some spectacular mistakes along the way.
Developing intuition around how isolation works is essential, but it will be less work than you might think!
I've found the best way to understand this feature is to play around with it. But that has been difficult until recently because not all the necessary pieces were available in a nightly toolchain, even under an experimental flag. In particular, the ability to create pointers and optionals of non-copyable types. But that changed when @lorentey landed support for these last week. At the same time, some of the other proposals that are coming out are a little more obscure than the basic generics support, and so haven't had as much discussion. These are also much easier to understand once you actually try and use them, and see the impact of not having them.
To help tie all these pieces together, I wrote up some code that uses all these proposals in order to build a basic singly-linked list type. This code is similar to the code you can find in chapter 2 of @Gankra's excellent tutorial about linked lists in Rust, which I encourage you to read to get a better feel for how they handle ownership.
ChatGPT CodeInterpreter example
PL/Swift allows you to write custom SQL functions and types for the PostgreSQL database server in the Swift programming language.
Bringing Swift to the Backend of the Backend’s Backend!
Apple doesn't like to make things easy for us, do they?
They created a wonderful first-party package ecosystem in Swift Package Manager, but didn't put much work into explaining how to make the most of it.
It's easy enough to package a dynamic framework; however, you need to jump through many undocumented hoops to properly deduplicate assets and make your app lightweight.
But when you do get it working, you can achieve awesome results like shedding 58% from your app binary size. Take the time to work through the sample project, understand these clandestine techniques, and apply similar improvements to your own apps!
Develop device drivers that run in user space.
The DriverKit framework defines the fundamental behaviors for device drivers in macOS and iPadOS. The C++ classes of this framework define your driver’s basic structure, and provide support for handling events and allocating memory. This framework also supports appropriate types for examining the numbers, strings, and other types of data in your driver’s I/O registry entry. Other frameworks, such as USBDriverKit, HIDDriverKit, NetworkingDriverKit, PCIDriverKit, SerialDriverKit, and AudioDriverKit, provide the specific behaviors you need to support different types of devices.
The drivers you build with DriverKit run in user space, rather than as kernel extensions, which improves system stability and security. You create your driver as an app extension and deliver it inside your existing app.
In macOS, use the System Extensions framework to install and upgrade your driver. In iPadOS, the system automatically discovers and upgrades drivers along with their host apps.
Install and manage user space code that extends the capabilities of macOS.
Extend the capabilities of macOS by installing and managing system extensions—drivers and other low-level code—in user space rather than in the kernel. By running in user space, system extensions can’t compromise the security or stability of macOS. The system grants these extensions a high level of privilege, so they can perform the kinds of tasks previously reserved for kernel extensions (KEXTs).
You use frameworks like DriverKit, Endpoint Security, and Network Extension to write your system extension, and you package the extension in your app bundle. At runtime, use the SystemExtensions framework to install or update the extension on the user’s system. Once installed, an extension remains available for all users on the system. Users can disable the extension by deleting the app, which deletes the extension.
An extension other apps use to access files and folders managed by your app and synced with a remote storage.
If your app focuses on providing and syncing user documents from remote storage, you can implement a File Provider extension to give users access to those documents when they’re using other apps. If you just need to share local documents, see Share files locally below. The framework has two different starting points for building your File Provider extension.
NSFileProviderReplicatedExtension — The system manages the content accessed through the File Provider extension. Available in macOS 11+ and iOS 16+.
NSFileProviderExtension — The extension hosts and manages the files accessed through the File Provider extension. Available in iOS 11+.
The replicated extension takes responsibility for monitoring and managing the local copies of your documents. The file provider focuses on syncing data between the local copy and the remote storage—uploading any local changes and downloading any remote changes. For more information, see Replicated File Provider extension.
The nonreplicated extension manages a local copy of the extension’s content, including creating and managing placeholders for remote files. It also syncs the content with your remote storage. For more information, see Nonreplicated File Provider extension.
Create a DriverKit extension to support your Thunderbolt device’s custom features.
All hardware devices require special software — called drivers — to communicate with macOS. Thunderbolt devices communicate using the PCIe interface, and so they use PCIe drivers with extra support for Thunderbolt features.
If your Thunderbolt device uses popular PCIe Ethernet controllers from Intel, Broadcom, or Aquantia, or if your device communicates using industry-standard protocols such as XHCI, AHCI, NVMe, or FireWire, you don’t need to create a custom driver. Apple supplies built-in drivers that already support these chip sets and interfaces. The only time you need to create a custom driver is when your hardware supports proprietary features. In macOS 11 and later, build any custom drivers as DriverKit extensions using the PCIDriverKit framework.
Get notifications when the contents of a directory hierarchy change.
The file system events API provides a way for your application to ask for notification when the contents of a directory hierarchy are modified. For example, your application can use this to quickly detect when the user modifies a file within a project bundle using another application.
It also provides a lightweight way to determine whether the contents of a directory hierarchy have changed since your application last examined them. For example, a backup application can use this to determine what files have changed since a given time stamp or a given event ID.
Prevent data loss and app crashes by interacting with the file system in a coordinated, asynchronous manner and by avoiding unnecessary disk I/O.
A device’s file system is a shared resource available to all running processes. If multiple processes (or multiple threads in the same process) attempt to act on the same file simultaneously, data corruption or loss may occur, and your app may even crash.
To establish safe and efficient file access, avoid performing immediate file I/O on the app’s main thread. Use `NSFileCoordinator` to choreograph file access, opt for the I/O-free variants of file-related APIs, and implement the prefetching mechanisms of `UICollectionView` and `UITableView` to efficiently prepare file-related data for display.
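A minimal sketch of coordinated reading off the main thread (the function and queue choice are illustrative assumptions, not Apple's sample code):

```swift
import Foundation

// Coordinated read performed off the main thread; other processes
// (like a File Provider) see a consistent view of the file.
func readCoordinated(at fileURL: URL, completion: @escaping (Data?) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        let coordinator = NSFileCoordinator()
        var coordinationError: NSError?
        var result: Data?
        coordinator.coordinate(readingItemAt: fileURL, options: [], error: &coordinationError) { url in
            result = try? Data(contentsOf: url)
        }
        completion(result)
    }
}
```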
Add more protection to your HomeKit accessories by controlling which services and devices they communicate with on your home Wi-Fi network and over the internet.
Use universal links to link directly to content within your app and share data securely.
You can connect to content deep inside your app with universal links. Users open your app in a specified context, allowing them to accomplish their goals efficiently.
When users tap or click a universal link, the system redirects the link directly to your app without routing through Safari or your website. In addition, because universal links are standard HTTP or HTTPS links, one URL works for both your website and your app. If the user has not installed your app, the system opens the URL in Safari, allowing your website to handle it.
When users install your app, the system checks a file stored on your web server to verify that your website allows your app to open URLs on its behalf. Only you can store this file on your server, securing the association of your website and your app.
Take the following steps to support universal links:
- Create a two-way association between your app and your website and specify the URLs that your app handles, as described in Supporting associated domains.
- Update your app delegate to respond to the user activity object the system provides when a universal link routes to your app, as described in Supporting universal links in your app.
With universal links, users open your app when they click links to your website within Safari and WKWebView, and when they click links that result in a call to:
- open(_:options:completionHandler:) in iOS and tvOS
- openSystemURL(_:) in watchOS
- open(_:withApplicationAt:configuration:completionHandler:) in macOS
- openURL in SwiftUI
Experimental support for generic noncopyable types in the #swift standard library is now available in the nightly toolchain.
Here's a simple demonstration of adoption of this feature on the Swift Playdate example project. Switching the Sprite type from an enum+class box to a simpler non-copyable struct drops binary size from 7k to 6k on the SwiftBreak game.
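For flavor, a minimal sketch of what such a noncopyable struct looks like with SE-0390 syntax (the fields are hypothetical, not the Playdate project's code):

```swift
// A value that cannot be copied, only moved or consumed.
struct Sprite: ~Copyable {
    var x: Int
    var y: Int

    // Consuming `self` ends the value's lifetime explicitly.
    consuming func remove() {
        // release engine resources here
    }
}
```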
While I was researching how to do level-order traversals of a binary tree in Haskell, I came across a library called tree-traversals which introduced a fancy Applicative instance called Phases. It took me a lot of effort to understand how it works. Although I still have some unresolved issues, I want to share my journey.
Note: I was planning to post this article on Reddit. But I gave up because it was too long so here might be a better place.
Note: This article is written in a beginner-friendly way. Experts may find it tedious.
Note: The author is not a native English speaker and is glad to accept corrections, refinements and suggestions.
Last week, I went on an adventure through the electromagnetic spectrum!
It’s like an invisible world that always surrounds us, and allows us to do many amazing things: It’s how radio and TV are transmitted, it’s how we communicate using Wi-Fi or our phones. And there are many more things to discover there, from all over the world.
In this post, I’ll show you fifty things you can find there — all you need is this simple USB dongle and an antenna kit!
Use mergeable dynamic libraries to get app launch times similar to static linking in release builds, without losing dynamically linked build times in debug builds.
In Xcode 15 or later, you can include symbols from a separate, mergeable dynamic library for macOS and iOS app and framework targets. Mergeable dynamic libraries include extra metadata so that Xcode can merge the library into another binary, similar to linking a static library with `-all_load`. When you enable automatic merging, Xcode enables build settings that make app launching fast and keep debugging and development build times fast.
Make your app more responsive by examining the event-handling and rendering loop.
Human perception is adept at identifying motion and linking cause to effect through sequential actions. This is important for graphical user interfaces because they rely on making the user believe a certain interaction with a device causes a specific effect, and that the objects onscreen behave sufficiently realistically. For example, a button needs to highlight when a person taps or clicks it, and when someone drags an object across the screen, it needs to follow the mouse or finger.
There are two ways this illusion can break down:
- The time between user input and the screen update is too long, so the app’s UI doesn’t seem like it’s responding instantaneously anymore. A noticeable delay between user input and the corresponding screen update is called a hang. For more information, see Understanding hangs in your app.
- The motion onscreen isn’t fluid like it would be in the real world. An example is when the screen seems to get stuck and then jumps ahead during scrolling or during an animation. This is called a hitch.
This article covers different types of user interactions and how the event-handling and rendering loop processes events to handle them. This foundational knowledge helps you understand what causes hangs and hitches, how the two are similar, and what differentiates them.
Managing Dependencies in the Age of SwiftUI
Dependency Injection (or in short: DI) is one of the most fundamental parts of structuring any kind of software application. If you do DI right, it gets a lot easier to change and extend your application in a safe manner. But if you get it wrong, it can become increasingly more difficult to ship your features in a timely, correct and safe way.
Apple notoriously has been quite unopinionated about Dependency Injection in its development frameworks until recently, when it introduced EnvironmentObject for SwiftUI.
In this post, let’s see how we can use GitHub Actions to automate building the DocC of a Swift Package with GitHub Actions.
Learn how you can optimize your app with the Swift Concurrency template in Instruments. We'll discuss common performance issues and show you how to use Instruments to find and resolve these problems. Learn how you can keep your UI responsive, maximize parallel performance, and analyze Swift concurrency activity within your app. To get the most out of this session, we recommend familiarity with Swift concurrency (including tasks and actors).
View power and performance metrics for apps you distribute through the App Store.
Use the Xcode Organizer to view anonymized performance data from your app’s users, including launch times, memory usage, UI responsiveness, and impact on the battery. Use the data to tune the next version of your app and catch regressions that make it into a specific version of your app.
In Xcode, choose Window > Organizer to open the Organizer window, and then select the desired metric or report. In some cases, the pane shows “Insufficient usage data available” because there may not be enough anonymized data reported from participating user devices. When this happens, try checking back in a few days.
Determine the cause for delays in user interactions by examining the main thread and the main run loop.
A discrete user interaction occurs when a person performs a single well-contained interaction and the screen then updates. An example is when someone presses a key on the keyboard and the corresponding letter then appears onscreen. Although the software running on the device needs time to process the incoming user input event and compute the corresponding screen update, it’s usually so quick that a human can’t perceive it and the screen update seems instantaneous.
When the delay in handling a discrete user interaction becomes noticeable, that period of unresponsiveness is known as a hang. Other common terms for this behavior are freeze because the app stops updating, and spin based on the spinning wait cursor that appears in macOS when an app is unresponsive.
Although discrete interactions are less sensitive to delays than continuous interactions, it doesn’t take long for a person to perceive a gap between an action and its reaction as a pause, which breaks their immersive experience. A delay of less than 100 ms in a discrete user interaction is rarely noticeable, but even a few hundred milliseconds can make people feel that an app is unresponsive.
A hang is almost always the result of long-running work on the main thread. This article explains what causes a hang, why the main thread and the main run loop are essential to understanding hangs, and how various tools can detect hangs on Apple devices.
Create a more responsive experience with your app by minimizing time spent in startup. A user’s first experience with an app is the wait while it launches. The OS indicates the app is launching with a splash screen on iOS and an icon bouncing in Dock on macOS. The app needs to be ready to help the user with a task as soon as possible. An app that takes too long to launch may frustrate the user, and on iOS, the watchdog will terminate it if it takes too long. Typically, users launch an app many times in a day if it’s part of their regular workflow, and a long launch time causes delays in performing a task.
When the user taps an app’s icon on their Home screen, iOS prepares the app for launch before handing control over to the app process. The app then runs code to get ready to draw its UI to the screen. Even after the app’s UI is visible, the app may still be preparing content or replacing an interstitial interface (for example, a loading spinner) with the final controls. Each of these steps contributes to the total perceived launch time of the app, and you can take steps to reduce their duration.
An activation happens when a user clicks on your icon or otherwise goes back to your app.
On iOS, an activation can either be a launch or a resume. A launch is when the process needs to start, and a resume is when your app already had a process alive, even if suspended. A resume is generally much faster, and the work to optimize a launch and resume differs.
On macOS, the system will not terminate your process as part of normal use. An activation may require the system to bring in memory from the compressor, swap, and re-render.
Your app activation varies significantly depending on previous actions on the device.
For example, on iOS, if you swipe back to the home screen and immediately re-enter the app, that is the fastest activation possible. It’s also likely to be a resume. When the system determines that a launch is required, it is commonly referred to as a “warm launch.”
Conversely, if a user just played a memory-intensive game, and they then re-enter your app, for example, it may be significantly slower than your average activation. On iOS, your app typically was evicted from memory to allow the foreground application more memory. Frameworks and daemons that your app depends on to launch might also require re-launching and paging in from disk. This scenario, or a launch immediately after boot, is often referred to as a “cold launch.”
Think of warm and cold launches as a spectrum. In real use, your users will experience a range of performance based on the state of the device. This spectrum is why testing in a variety of conditions is essential to predicting your real world performance.
I’m going to share some best practices when using `@StateObject` property wrappers: things learned the hard way, via some bugs that were difficult to diagnose and nearly impossible to notice during code review, unless one knows what to look for.

The short version is this: if you have to explicitly initialize a `@StateObject`, pay close attention to the fact that the property wrapper’s initialization parameter is an escaping closure called `thunk`, not an object called `wrappedValue`. Do all the wrapped object initialization and prep inside the closure, or else you’ll undermine the performance benefits that likely motivated you to use `@StateObject` in the first place.
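A minimal sketch of that advice (`HeavyModel` and `ProfileView` are hypothetical): because `init(wrappedValue:)` takes an escaping autoclosure, an expression written inline in the call is deferred and runs only once per view identity, while anything constructed before the call runs on every `init`.

```swift
import SwiftUI

final class HeavyModel: ObservableObject {
    init(id: String) { /* expensive setup */ }
}

struct ProfileView: View {
    @StateObject private var model: HeavyModel

    init(id: String) {
        // Good: the whole expression is captured by the escaping autoclosure,
        // so SwiftUI creates HeavyModel only once per view identity.
        _model = StateObject(wrappedValue: HeavyModel(id: id))

        // Risky: `let m = HeavyModel(id: id)` followed by
        // `_model = StateObject(wrappedValue: m)` would run the expensive
        // initializer on every ProfileView init, defeating @StateObject.
    }

    var body: some View {
        Text("profile")
    }
}
```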
Ezno is an experimental compiler I have been working on and off for a while. In short, it is a JavaScript compiler featuring checking, correctness and performance for building full-stack (rendering on the client and server) websites.
Using SharePlay and CarPlay, you and your passengers can all control the music that’s playing in the car.
Passengers can join a SharePlay session in two ways: by tapping a notification on their iPhone or by scanning a QR code, either on the CarPlay Now Playing screen or on the Now Playing screen of another passenger’s iPhone.
Create a user experience that feels responsive by removing hangs and hitches from your app.
An app that responds instantly to users’ interactions gives an impression of supporting their workflow. When the app responds to gestures and taps in real time, it creates the experience of directly manipulating the objects on the screen. Apps with a noticeable delay in user interaction (a hang) or movement on screen that appears to jump (a hitch) shatter that illusion, leaving the user wondering whether the app is working correctly. To avoid hangs and hitches, keep the following rough thresholds in mind as you develop and test your app.
< 100 ms — Synchronous main thread work in response to a discrete user interaction.
< 1 display refresh interval (8.3 or 16.7 ms) — Main thread work and work to handle continuous user interaction.
Work performed on the main thread influences both the delay between an incoming user event and the corresponding screen update as well as the maximum frequency of screen updates.
If a delay in discrete user interaction becomes longer than 100 ms, it starts to become noticeable and causes a hang. Other stages of the event handling and rendering pipeline contribute to the overall delay. Assume that less than half that time is available for your app’s main thread to do its work. A shorter delay is rarely noticeable.
For fluid, uninterrupted motion, a new frame needs to be ready whenever the screen updates. On Apple devices, this can be as often as 120 times per second, or every 8.3 ms. Another common display refresh rate for Apple devices is 60Hz, so one update every 16.7ms. Depending on system conditions and other work that your app performs, you might not have the full display refresh interval to prepare your next screen update. If the work that your app needs to perform on the main thread to update the screen is less than 5 ms, the update is usually ready in time. If it takes longer, you need to take a closer look at the specific devices you’re targeting and the display refresh rate your app needs to support. Look at the section on hitches below for tools and guidelines to determine whether you are meeting the appropriate responsiveness thresholds.
Similarly, avoid scheduling work on the main thread that doesn’t have to execute there, not even asynchronously, e.g. via `dispatch_async` or by `await`ing the result of a function call on the main actor. Because you have no control over when exactly the main thread processes your work or what the user might be doing at the time, the work might land in the middle of a continuous user interaction and cause a hitch.
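A minimal sketch of that guidance (the names are hypothetical): keep the expensive part in a plain async function, which runs off the main actor, and hop back only for the UI update.

```swift
import UIKit

// Nonisolated by default, so this runs on the global concurrent executor,
// not the main thread.
func parseLargeFile() async -> String {
    // Simulated expensive work that must not block the main thread.
    (0..<1_000_000).reduce(0, +).description
}

@MainActor
final class StatusViewController: UIViewController {
    let label = UILabel()

    func refresh() {
        Task {
            let result = await parseLargeFile() // executes off the main actor
            label.text = result                 // back on the main actor
        }
    }
}
```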
```swift
import SwiftUI
import AsyncAlgorithms

struct AsyncChanges<V>: ViewModifier where V: Equatable, V: Sendable {
    typealias Element = (oldValue: V, newValue: V)
    typealias Action = (AsyncStream<Element>) async -> Void

    @State private var streamPair = AsyncStream<Element>.makeStream()
    private let action: Action
    private let value: V
    private let initial: Bool

    init(of value: V, initial: Bool, action: @escaping Action) {
        self.action = action
        self.value = value
        self.initial = initial
    }

    func body(content: Content) -> some View {
        content
            .onChange(of: value, initial: initial) { oldValue, newValue in
                streamPair.continuation.yield((oldValue, newValue))
            }
            .task {
                await action(streamPair.stream)
            }
    }
}

extension View {
    public func asyncChanges<V>(
        of value: V,
        initial: Bool = false,
        action: @escaping (AsyncStream<(oldValue: V, newValue: V)>) async -> Void
    ) -> some View where V: Equatable, V: Sendable {
        modifier(AsyncChanges<V>(of: value, initial: initial, action: action))
    }
}

struct ContentView: View {
    @State private var username = ""

    var body: some View {
        TextField("Username", text: self.$username)
            .asyncChanges(of: username) { sequence in
                for await value in sequence.debounce(for: .seconds(0.25)) {
                    print("debounced value: \(value.newValue)")
                }
            }
    }
}
```
An AsyncSequence that can be consumed several times, returning the current state as specified by a reduce function.
There are so many gosh darn syntax sites. How should I remember all their URLs?
| Topic | Site |
| --- | --- |
| C Function Pointers | How Do I Declare a Function Pointer in C? |
| Date Formatting | Easy Skeezy Date Formatting for Swift and Objective-C |
| Format Styles | Gosh Darn Format Style! |
| Git | Dangit, Git |
| Objective-C Block Syntax | How Do I Declare a Block in Objective-C? |
| Swift Closure Syntax | How Do I Declare a Closure in Swift? |
| Swift Multiple Trailing Closure Syntax | How Do I Write Multiple Trailing Closures in Swift? |
| Swift `if case let` Syntax | How Do I Write If Case Let in Swift? |
| SwiftPM | Swift Package Manager |
| SwiftUI | Gosh Darn SwiftUI |
| SwiftUI Property Wrappers | SwiftUI Property Wrappers |
A convenient interface to the contents of the file system, and the primary means of interacting with it.
A file manager object lets you examine the contents of the file system and make changes to it. The `FileManager` class provides convenient access to a shared file manager object that is suitable for most types of file-related manipulations. A file manager object is typically your primary mode of interaction with the file system. You use it to locate, create, copy, and move files and directories. You also use it to get information about a file or directory or change some of its attributes.

When specifying the location of files, you can use either `NSURL` or `NSString` objects. The use of the NSURL class is generally preferred for specifying file-system items because URLs can convert path information to a more efficient representation internally. You can also obtain a bookmark from an NSURL object, which is similar to an alias and offers a more sure way of locating the file or directory later.

If you are moving, copying, linking, or removing files or directories, you can use a delegate in conjunction with a file manager object to manage those operations. The delegate’s role is to affirm the operation and to decide whether to proceed when errors occur. In macOS 10.7 and later, the delegate must conform to the FileManagerDelegate protocol.

In iOS 5.0 and later and in macOS 10.7 and later, `FileManager` includes methods for managing items stored in iCloud. Files and directories tagged for cloud storage are synced to iCloud so that they can be made available to the user’s iOS devices and Macintosh computers. Changes to an item in one location are propagated to all other locations to ensure the items stay in sync.
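A small sketch of the everyday usage described above (the paths are illustrative): locate the Documents directory, create a subdirectory, and copy a file into it.

```swift
import Foundation

let fm = FileManager.default
do {
    let documents = try fm.url(for: .documentDirectory,
                               in: .userDomainMask,
                               appropriateFor: nil,
                               create: true)
    let backups = documents.appendingPathComponent("Backups", isDirectory: true)
    try fm.createDirectory(at: backups, withIntermediateDirectories: true)

    let source = documents.appendingPathComponent("notes.txt") // hypothetical file
    if fm.fileExists(atPath: source.path) {
        try fm.copyItem(at: source, to: backups.appendingPathComponent("notes.txt"))
    }
} catch {
    print("File operation failed: \(error)")
}
```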
Prevent data loss and app crashes by interacting with the file system in a coordinated, asynchronous manner and by avoiding unnecessary disk I/O.
A device’s file system is a shared resource available to all running processes. If multiple processes (or multiple threads in the same process) attempt to act on the same file simultaneously, data corruption or loss may occur, and your app may even crash.
To establish safe and efficient file access, avoid performing immediate file I/O on the app’s main thread. Use NSFileCoordinator to choreograph file access, opt for the I/O-free variants of file-related APIs, and implement the prefetching mechanisms of UICollectionView and UITableView to efficiently prepare file-related data for display.
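As a hedged sketch of the coordinated access recommended above (the URL is hypothetical), a coordinated read looks like this; NSFileCoordinator serializes the accessor with other coordinated readers and writers:

```swift
import Foundation

let url = URL(fileURLWithPath: "/tmp/shared-note.txt") // hypothetical path
let coordinator = NSFileCoordinator()
var coordinationError: NSError?

coordinator.coordinate(readingItemAt: url, options: [], error: &coordinationError) { actualURL in
    // Runs once it is safe to read; always use actualURL, which may differ.
    if let contents = try? String(contentsOf: actualURL, encoding: .utf8) {
        print(contents)
    }
}

if let error = coordinationError {
    print("Coordination failed: \(error)")
}
```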
We explore denotational interpreters: denotational semantics that produce coinductive traces of a corresponding small-step operational semantics. By parameterising our denotational interpreter over the semantic domain and then varying it, we recover dynamic semantics with different evaluation strategies as well as summary-based static analyses such as type analysis, all from the same generic interpreter. Among our contributions is the first provably adequate denotational semantics for call-by-need. The generated traces lend themselves well to describe operational properties such as evaluation cardinality, and hence to static analyses abstracting these operational properties. Since static analysis and dynamic semantics share the same generic interpreter definition, soundness proofs via abstract interpretation decompose into showing small abstraction laws about the abstract domain, thus obviating complicated ad-hoc preservation-style proof frameworks.
In this series of blog posts we’ll take a deep dive into on-device training. I’ll show how to train a customizable image classifier using k-Nearest Neighbors as well as a deep neural network.
This proposal introduces first-class differentiable programming to Swift. First-class differentiable programming includes five core additions:

- The `Differentiable` protocol.
- `@differentiable` function types.
- The `@differentiable` declaration attribute for defining differentiable functions.
- The `@derivative` and `@transpose` attributes for defining custom derivatives.
- Differential operators (e.g. `derivative(of:)`) in the standard library.
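A minimal sketch of how those additions fit together (hedged: this requires a toolchain with the experimental `_Differentiation` module, and spellings have evolved since the proposal was written):

```swift
import _Differentiation

// The @differentiable declaration attribute marks a function as differentiable.
@differentiable(reverse)
func cube(_ x: Double) -> Double {
    x * x * x
}

// A differential operator from the standard library evaluates the derivative:
let slope = gradient(at: 2.0, of: cube)
print(slope) // 3 * 2^2 = 12.0
```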
THE FARTHEST tells the captivating tales of the people and events behind one of humanity’s greatest achievements in exploration: NASA’s Voyager mission, which celebrates its 40th anniversary this August. The twin spacecraft—each with less computing power than a cell phone—used slingshot trajectories to visit Jupiter, Saturn, Uranus and Neptune. They sent back unprecedented images and data that revolutionized our understanding of the spectacular outer planets and their many peculiar moons.
Still going strong four decades after launch, each spacecraft carries an iconic golden record with greetings, music and images from Earth—a gift for any aliens that might one day find it. Voyager 1, which left our solar system and ushered humanity into the interstellar age in 2012, is the farthest-flung object humans have ever created. A billion years from now, when our sun has flamed out and burned Earth to a cinder, the Voyagers and their golden records will still be sailing on—perhaps the only remaining evidence that humanity ever existed.
The ultimate playground for hardware programming in Swift
An example spatial/immersive video player for Apple Vision Pro
With Vision Pro, Apple has created a device that can playback spatial and immersive video recorded by iPhone 15 Pro, the Vision Pro itself, or created with my spatial command line tool (and similar tools). These videos are encoded using MV-HEVC, and each contains a Video Extended Usage box that describes how to play them back. Unfortunately, even one month after release, Apple has provided no (obvious) method to play these videos in all of their supported formats.
Out of necessity, I created a very bare-bones spatial video player to test the output of my command-line tool. It has also been used to test video samples that have been sent to me by interested parties. I've played up to 12K-per-eye (11520x5760) 360º stereo content (though at a low frame rate).
In order to avoid dependency graph nightmares, where you are unable to update or use a package due to conflicting dependency versions, we suggest being as flexible in your dependency on SwiftSyntax as possible.
This means that rather than depending on SwiftSyntax by saying you are willing to accept any minor version within a particular major version, as Xcode’s macro template does by default:
```swift
.package(url: "https://github.com/apple/swift-syntax", from: "509.0.0")
```

…you should instead accept a range of major versions like so:

```swift
.package(url: "https://github.com/apple/swift-syntax", "508.0.0"..<"510.0.0")
```

This allows people to depend on your package who are still stuck on version 508 of SwiftSyntax, while also allowing those who can target 509 to use your library.
How do you enable strict concurrency checking for all targets in a Swift Package?
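One common approach (a sketch, not the only way; `StrictConcurrency` is an experimental feature flag and the target names are placeholders) is to declare the package normally and then append the setting to every target:

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyPackage",
    targets: [
        .target(name: "MyLibrary"),
        .testTarget(name: "MyLibraryTests", dependencies: ["MyLibrary"]),
    ]
)

// Enable strict concurrency checking for all targets in one place.
for target in package.targets {
    var settings = target.swiftSettings ?? []
    settings.append(.enableExperimentalFeature("StrictConcurrency"))
    target.swiftSettings = settings
}
```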
`@MainActor` is a Swift annotation to coerce a function to always run on the main thread and to enable the compiler to verify this. How does this work? In this article, I’m going to reimplement `@MainActor` in a slightly simplified form for illustration purposes, mainly to show how little “magic” there is to it. The code of the real implementation in the Swift standard library is available in the Swift repository.

`@MainActor` relies on two Swift features, one of them unofficial: global actors and custom executors.
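For a flavor of the first feature, declaring a custom global actor looks like this (a simplified sketch; the article’s actual reimplementation also binds a custom executor to the main thread, which is the unofficial part):

```swift
// A stand-in global actor. The real MainActor additionally supplies a
// custom executor so its work runs on the main dispatch queue.
@globalActor
actor MyMainActor {
    static let shared = MyMainActor()
}

// Code annotated with the global actor always runs on its executor:
@MyMainActor
func updateModel() {
    print("isolated to MyMainActor")
}
```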
Recently, someone asked me a question about actor isolation. The specifics aren’t important, but I really got to thinking about it because of course they were struggling. Isolation is central to how Swift concurrency works, but it’s a totally new concept.
Despite being new, it actually uses mostly familiar mechanisms. You probably do understand a lot about how isolation works, you just don’t realize it yet.
Here’s a breakdown of the concepts, in the simplest terms I could come up with.
`pfl` is a Python framework developed at Apple to enable researchers to run efficient simulations with privacy-preserving federated learning (FL) and disseminate the results of their research in FL. The framework is not intended to be used for third-party FL deployments, but the results of the simulations can be tremendously useful in actual FL deployments. We hope that `pfl` will promote open research in FL and its effective dissemination. `pfl` provides several useful features, including the following:
- Get started quickly trying out PFL for your use case with your existing model and data.
- Iterate quickly with fast simulations utilizing multiple levels of distributed training (multiple processes, GPUs and machines).
- Flexibility and expressiveness — when a researcher has a PFL idea to try, `pfl` has flexible APIs to express these ideas and promote their dissemination (e.g. models, algorithms, federated datasets, privacy mechanisms).
- Fast, scalable simulations for large experiments with state-of-the-art algorithms and models.
- Support of both PyTorch and TensorFlow. This is great for groups that use both, e.g. other large companies.
- Unified benchmarks for datasets that have been vetted for both TensorFlow and PyTorch. Current FL benchmarks are made for one or the other.
- Support of other models in addition to neural networks, e.g. GBDTs. Switching between types of models while keeping the remaining setup fixed is seamless.
- Tight integration with privacy features, including common mechanisms for local and central differential privacy.
A reimplementation of the basics of MainActor. Sample code for https://oleb.net/2022/how-mainactor-works/
If you’ve used SwiftUI for long enough, you’ve probably noticed that the public Swift APIs it provides are really only half the story. Normally inconspicuous unless something goes exceedingly wrong, the private framework called AttributeGraph tracks almost every single aspect of your app from behind the scenes to make decisions on when things need to be updated. It would not be much of an exaggeration to suggest that this C++ library is actually what runs the show, with SwiftUI just being a thin veneer on top to draw some platform-appropriate controls and provide a stable interface to program against. True to its name, AttributeGraph provides the foundation of what a declarative UI framework needs: a graph of attributes that tracks data dependencies.
Mastering how these dependencies work is crucial to writing advanced SwiftUI code. Unfortunately, being a private implementation detail of a closed-source framework means that searching for AttributeGraph online usually only yields results from people desperate for help with their crashes. (Being deeply unpleasant to reverse-engineer definitely doesn’t help things, though some have tried.) Apple has several videos that go over the high-level design, but unsurprisingly they shy away from mentioning the existence of AttributeGraph itself. Other developers do, but only fleetingly.
This puts us in a real bind! We can `Self._printChanges()` all day and still not understand what is going on, especially if the problems we have relate to missing updates rather than too many of them. To be honest, figuring out what AttributeGraph is doing internally is not all that useful unless it is not working correctly. We aren’t going to be calling those private APIs anyways, at least not easily, so there’s not much point exploring them. What’s more important is understanding what SwiftUI does and how the dependencies need to be set up to support that. We can take a leaf out of the generative AI playbook and go with the approach of just making guesses as to how things are implemented. Unlike AI, we can also test our theories. We won’t know whether our speculation is right, but we can definitely check to make sure we’re not wrong!
Create a browser that renders content using an alternative browser engine. A web browser loads content and code from remote — and potentially untrusted — servers. Design your browser app to isolate access to operating system resources, the data of the person using the app, and untrusted data from the web. Code defensively to reduce the risk posed by vulnerabilities in your browser code.
If you use WKWebView to render web content in your browser app, WebKit automatically distributes its work to extensions that isolate their access to important resources and data.
Whether you use WebKit or write your own alternative browser engine, you need to request the entitlement to act as a person’s default web browser. For more information, see Preparing your app to be the default web browser.
SwiftUI has an undocumented system for interacting with collections of `View` types known as `VariadicView`. The enum `_VariadicView` is the entry point to this system, which includes other types like `_VariadicView_MultiViewRoot` and `_VariadicView.Tree`. The details of these were explored in a great post from MovingParts, and there have been a few other helpful blogs about it.

When I first read about it, I didn’t see the applications to my code. As with most SwiftUI, it relies heavily on generics and can be difficult to see how to use it just from reading the API. Since then, I’ve made it a core part of SnapshotPreviews and learned that, despite being a private API, it is very safe to use in production — in fact, many popular apps use it extensively.
This post will explain the specific use case I found for extracting snapshots from SwiftUI previews. Hopefully a concrete example will inspire others to use this powerful SwiftUI feature!
- Choose an existing SF Symbol (book.fill)
- Right click + "Duplicate as Custom Symbol"
- In Custom Symbols, right click + "Combine Symbol with Component"
- Select the component you want (badge.plus)
In visionOS, content can be displayed in windows, volumes, and spaces.
Windows and spaces generally work as advertised, but volumes have several limitations you should be aware of before designing your app around them.
The following list of issues applies to visionOS 1.0 and 1.1 beta. I’ll keep it updated as new visionOS versions are released.
A utility for transforming spatial media.
As of January 2024, Apple's MV-HEVC format for stereoscopic video is very new and barely supported by anything. However, there are millions of iPhones (iPhone 15 Pro/Pro Max) that can capture spatial video already. There was no available FOSS tool capable of splitting the stereo pair, especially not in formats suited for post-production. Upon public request, the ability to create MV-HEVC files from two separate input files was also added.
Yeah, nobody remembers this, even if they’ve heard about it before.
.values
is just so easy to reach for. And the bug is a subtle race condition that drops messages. And you can’t easily unit test for it. And the compiler probably can’t warn you about it. And this problem exists in any situation where an AsyncSequence “pushes” values, which is basically every observation pattern, even without Combine.And so I struggle with whether to encourage
for-await
. Every time you see it, you need to think pretty hard about what’s going on in this specific case. And unfortunately, that’s kind of true of AsyncSequence generally. I’m not sure what to think about this yet. Most of my bigger projects use Combine for these kinds of things currently, and it “just works” including unsubscribing automatically when the AnyCancellable is deinited (another thing that’s easy to mess up withfor-await
). I just don’t know yet.
David Corfield made a very interesting observation: the three types of logical reasoning of Peirce’s, deduction, induction, abduction, correspond to three very elementary operations in category theory: composition, extension and lifting.
I was inspired by that discovery to finish working on a project I had long been putting off: documenting all the URLs supported by the Settings app in iOS and iPadOS.
Optics are bidirectional data accessors that capture data transformation patterns such as accessing subfields or iterating over containers. Profunctor optics are a particular choice of representation supporting modularity, meaning that we can construct accessors for complex structures by combining simpler ones. Profunctor optics have previously been studied only in an unenriched and non-mixed setting, in which both directions of access are modelled in the same category. However, functional programming languages are arguably better described by enriched categories; and we have found that some structures in the literature are actually mixed optics, with access directions modelled in different categories. Our work generalizes a classic result by Pastro and Street on Tambara theory and uses it to describe mixed V-enriched profunctor optics and to endow them with V-category structure. We provide some original families of optics and derivations, including an elementary one for traversals. Finally, we discuss a Haskell implementation.
Owl is an experiment in human-computer interaction using wearable devices to observe our lives and extract information and insights from them using AI. Presently, only audio and location are captured, but we plan to incorporate vision and other modalities as well. The objectives of the project are, broadly speaking:
- Develop an always-on AI system that is useful, unlocking new ways to enhance our productivity, our understanding of ourselves and the world around us, and ability to connect with others.
- Implement specific use cases for always-on AI (e.g., productivity and memory enhancement, knowledge capture and sharing, health, etc.)
- Explore human-computer interaction questions: user experience, interface design, privacy, security.
There are three major components to this project:
- Wearable capture devices. These include semi-custom development boards (with some assembly required) as well as off-the-shelf products like Apple Watch. We would like to develop fully custom open source hardware.
- AI server.
- Presentation clients. Applications that display information gathered by the system (e.g., transcripts, conversation summaries) and allow interaction with an online assistant. Currently, a mobile app and web app are included.
Today we are announcing the most significant cryptographic security upgrade in iMessage history with the introduction of PQ3, a groundbreaking post-quantum cryptographic protocol that advances the state of the art of end-to-end secure messaging. With compromise-resilient encryption and extensive defenses against even highly sophisticated quantum attacks, PQ3 is the first messaging protocol to reach what we call Level 3 security — providing protocol protections that surpass those in all other widely deployed messaging apps. To our knowledge, PQ3 has the strongest security properties of any at-scale messaging protocol in the world.
Create and manipulate 3D mathematical primitives.
The Spatial module is a lightweight 3D mathematical library that provides a simple API for working with 3D primitives. Much of its functionality is similar to the 2D geometry support in Core Graphics, but in three dimensions.
The Swift programming language has a lot of potential to be used for machine learning research because it combines the ease of use and high-level syntax of a language like Python with the speed of a compiled language like C++.
MLX is an array framework for machine learning research on Apple silicon. MLX is intended for research and not for production deployment of models in apps.
MLX Swift expands MLX to the Swift language, making experimentation on Apple silicon easier for ML researchers.
As part of this release we are including:
- A comprehensive Swift API for MLX core
- Higher level neural network and optimizers packages
- An example of text generation with Mistral 7B
- An example of MNIST training
- A C API to MLX which acts as the bridge between Swift and the C++ core
We are releasing all of the above under a permissive MIT license.
This is a big step to enable ML researchers to experiment using Swift.
Useful utilities and services over DNS
`dns.toys` is a DNS server that takes creative liberties with the DNS protocol to offer handy utilities and services that are easily accessible via the command line.
During the sessions, the presenter shared that somewhere in the coming year (2024) Apple would start requiring privacy manifests in signed `XCFrameworks`. There was little concrete detail available then, and I’ve been waiting since for more information on how to comply. I expected documentation at least, and was hoping for an update in `Xcode` — specifically the `xcodebuild` command — to add an option that accepted a path to a manifest and included it appropriately. So far, nothing from Apple on that front.
In this post, I will talk about inference rules, particularly in the field of programming language theory. The first question to get out of the way is “what on earth is an inference rule?”. The answer is simple: an inference rule is just a way of writing “if … then …”. When writing an inference rule, we write the “if” stuff above a line, and the “then” stuff below the line. Really, that’s all there is to it.
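For example, the standard typing rule for function application is exactly such an “if … then …”: if e₁ has type A → B and e₂ has type A, then e₁ e₂ has type B. In the usual notation (a standard textbook example, not taken from the post):

```latex
\frac{\Gamma \vdash e_1 : A \to B \qquad \Gamma \vdash e_2 : A}
     {\Gamma \vdash e_1 \; e_2 : B}
```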
Version numbers are hard to get right. Semantic Versioning (SemVer) communicates backward compatibility via version numbers which often lead to a false sense of security and broken promises. Calendar Versioning (CalVer) sits at the other extreme of communicating almost no useful information at all.
Going forward I plan to version the projects I work on in a way that communicates how much effort I expect a user will need to spend to adopt the new version. I’m going to refer to that scheme as Intended Effort Versioning (EffVer for short).
List Swift compiler upcoming and experimental feature flags.
Dev Mode is a new space in Figma for developers with features that help you translate designs into code, faster
“UI is a function of state” is a pretty popular saying in the front-end world. In context (pun intended), that’s typically referring to application or component state. I thought I’d pull that thread a little further and explore all the states that can affect the UI layer…
We built this website to visually explain how the SwiftUI layout system works, and we hope you find it useful. We welcome any feedback, positive or negative, so please send us an email if you have anything to share. We're planning to build out this site over the next few months, so if you want to stay updated, subscribe to our mailing list below.
Watch and record your own custom channels.
Use streaming sources to create channels right on your TV. Security cams, web cams, open internet streams, SAT>IP devices, and more.
We present a systematic embedding of algebraic data types and their (recursive) processing using pattern-matching, and illustrate on examples of sums and recursive sums of products (strict and lazy trees). The method preserves all advantages of the tagless-final style, in particular, multiple interpretations -- such as evaluating the same DSL term strictly and non-strictly, and pretty-printing it as OCaml and Lua code. In effect, we may write Lua code with pattern-matching and type safety. As another application, we investigate the efficiency of emulating left-fold via right-fold, in call-by-value, call-by-name and call-by-need.
Practical solutions to problems with Swift Concurrency
Swift Concurrency can be really hard to use. I thought it could be handy to document and share solutions and hazards you might face along the way. I am absolutely not saying this is comprehensive, or that the solutions presented are great. I'm learning too. Contributions are very welcome, especially for problems!
Quick definitions for the hazards referenced throughout the recipes:
- Timing: More than one option is available, but can affect when events actually occur.
- Ordering: Unstructured tasks means ordering is up to the caller. Think carefully about dependencies, multiple invocations, and cancellation.
- Lack of Caller Control: Definitions always control actor context. This is different from other threading models, and you cannot alter definitions you do not control.
- Sendability: types that cross isolation domains must be sendable. This isn't always easy, and for types you do not control, not possible.
- Blocking: Swift concurrency uses a fixed-size thread pool. Tying up background threads can lead to lag and even deadlock.
- Availability: Concurrency is evolving rapidly, and some APIs require the latest SDK.
- Async virality: Making a function async affects all its callsites. This can result in a large number of changes, each of which could, itself, affect subsequent callsites.
- Actor Reentrancy: More than one thread can enter an Actor's async methods. An actor's state can change across awaits.
Create 64-bit ARM assembly language instructions that adhere to the application binary interface (ABI) that Apple platforms support.
The ARM architecture defines rules for how to call functions, manage the stack, and perform other operations. If part of your code includes ARM assembly instructions, you must adhere to these rules in order for your code to interoperate correctly with compiler-generated code. Similarly, if you write a compiler, the machine instructions you generate must adhere to these rules. If you don’t adhere to them, your code may behave unexpectedly or even crash.
Apple platforms diverge from the standard 64-bit ARM architecture in a few specific ways. Apart from these small differences, iOS, tvOS, and macOS adhere to the rest of the 64-bit ARM specification. For information about the ARM64 specification, including the Procedure Call Standard for the ARM 64-bit Architecture (AArch64), go to https://developer.arm.com.
Find patterns in crash reports that identify common problems, and investigate the issue based on the pattern.
You can identify the causes for many app crashes by looking for specific patterns in the crash report and taking specific diagnostic actions based on what the pattern shows. To recognize patterns, you consult two sections available in every crash report:
- The exception code in the Exception Information section identifies the specific way the app crashed.
- The backtraces show what code the thread was executing at the time of the crash.
Some types of common crashes have a Diagnostic Messages section or a `Last Exception Backtrace` in the Backtraces section, which further describe the issue. These sections aren’t present in all crash reports. Examining the fields in a crash report describes each section and field in detail.

Compare the examples provided in this article to a crash report you’re investigating. Once you find a match, proceed to the more detailed article about that type of crash.
Determining whether your crash report contains a pattern for a common issue is the first step in diagnosing a problem. In some cases, the suggested diagnostic actions won’t identify the cause of the issue, requiring a more thorough analysis of the entire crash report. Analyzing a crash report describes how to perform a detailed analysis of a crash report.
Learn what the exception type tells you about why your app crashed.
The exception type in a crash report describes how the app terminated. It’s a key piece of information that guides how to investigate the source of the problem.
The exception types are summarized here. See the sections that follow for more information.
- `EXC_BREAKPOINT (SIGTRAP)` and `EXC_BAD_INSTRUCTION (SIGILL)`. A trace trap interrupted the process.
- `EXC_BAD_ACCESS`. The crash is due to a memory access issue. See Investigating memory access crashes.
- `EXC_CRASH (SIGABRT)`. The process terminated because it received a SIGABRT.
- `EXC_CRASH (SIGKILL)`. The operating system terminated the process.
- `EXC_CRASH (SIGQUIT)`. The process terminated at the request of another process.
- `EXC_GUARD`. The process violated a guarded resource protection.
- `EXC_RESOURCE`. The process exceeded a resource consumption limit.
- `EXC_ARITHMETIC`. The crashed thread performed an invalid arithmetic operation, such as division by zero or a floating point error.
Identify clues in a crash report that help you diagnose problems.
A crash report is a detailed log of an app’s state when it crashed, making it a crucial resource for identifying a problem before attempting to fix it. If you’re investigating a crash that isn’t resolved by the techniques discussed in Identifying the cause of common crashes, you need to do a careful analysis of the complete crash report.
When analyzing a crash report, read the information in all sections. As you formulate the hypothesis about the cause of a crash, ask questions about what the data in each section of the crash report says to refine or disprove the hypothesis. Some clues are explicitly captured by fields in the crash report, but other clues are subtle, and require you to uncover them by noticing small details. Performing a thorough analysis of a crash report and formulating a hypothesis takes time and practice to develop, but is a critical tool for making your app more robust.
Understand the structure of a crash report and the information each field contains.
Learn how Apple Vision Pro and visionOS protect your data
In this article we have explored how we can bridge from callback-based or delegate-based code into `async/await`. We learned how to use checked continuations to do so, and we reinforced the idea of what a continuation actually is.

With this, you should now understand all the essentials of `async/await`. You are now ready to tackle actual concurrency, and next week we will start talking about that, beginning with structured concurrency. You will learn how to run many tasks in parallel and how to process their results.
In SwiftUI we can create smooth transitions between views from one state to another with the Matched Geometry Effect. Using unique identifiers we can blend the geometry of two views with the same identifier creating an animated transition. Transitions like this can be useful for navigation or changing the state of UI elements.
To implement it in your user interface, you must (see the sketch after this list):
- Define the namespace that will be used to synchronize the geometry of the views;
- Define the initial and final states of the views that will be animated;
- Use the proper view modifier to identify the initial and final states for the matched geometry transition to take effect;
- Trigger the transition.
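Here is the sketch referenced above, walking through the four steps with an illustrative expanding card (the names are mine, not from the article):

```swift
import SwiftUI

struct ExpandingCard: View {
    @Namespace private var animation      // 1. the shared namespace
    @State private var expanded = false   // drives the two states

    var body: some View {
        VStack {
            if expanded {
                // 2 + 3. Final state, tagged with the shared identifier.
                RoundedRectangle(cornerRadius: 16)
                    .matchedGeometryEffect(id: "card", in: animation)
                    .frame(width: 300, height: 400)
            } else {
                // 2 + 3. Initial state, tagged with the same identifier.
                RoundedRectangle(cornerRadius: 8)
                    .matchedGeometryEffect(id: "card", in: animation)
                    .frame(width: 100, height: 140)
            }
        }
        .onTapGesture {
            withAnimation(.spring()) { expanded.toggle() }  // 4. trigger
        }
    }
}
```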
These cards are meant to supplement your studies during your technical job search - or are great for people learning data structures. Included are 46 digital cards that cover the data structures you need to know for technical interviews.
Are there any examples in the history of mathematics of a mathematical proof that was initially reviewed and widely accepted as valid, only to be disproved a significant amount of time later, possibly even after being used in proofs of other results?
(I realise it's a bit vague, but if there is significant doubt in the mathematical community then the alleged proof probably doesn't qualify. What I'm interested in is whether the human race as a whole is known to have ever made serious mathematical blunders.)
In telecommunications, some very large and very small values are used. To make writing these numbers easier, prefixes are used. A prefix gives the value by which the number must be multiplied.
Some prefixes are also used in digital communications and computer technology but they have a slightly different value because they are based on a power of 2.
| Prefix | Analog Value | Digital Value |
| --- | --- | --- |
| p (pico) | 10⁻¹² | - |
| n (nano) | 10⁻⁹ | - |
| µ (micro) | 10⁻⁶ | - |
| m (milli) | 10⁻³ | - |
| k (kilo) | 10³ (1000) | 2¹⁰ (1024) |
| M (mega) | 10⁶ (1,000,000) | 2²⁰ (1,048,576) |
| G (Giga) | 10⁹ (1,000,000,000) | 2³⁰ (1,073,741,824) |
| T (Tera) | 10¹² (1,000,000,000,000) | 2⁴⁰ (1,099,511,627,776) |
Typestate is a powerful design pattern that emerged in languages with advanced type systems and strict memory ownership models, notably Rust. It is now available to Swift programmers with the introduction of `Noncopyable` types in `Swift 5.9`.

Typestate brings the concept of a State Machine into the type system. In this pattern, the state of an object is encoded in its type, and transitions between states are reflected in the type system.
Crucially, Typestate helps catch serious logic mistakes at compile time rather than runtime. This makes it great for designing mission-critical systems, especially where human safety is involved (see the Tesla car example).
Like with most design patterns, the best way to understand it is by examining some examples.
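As a first taste, here is a minimal sketch of my own (not from the article) using a noncopyable file-handle type: each transition consumes the previous state, so reading a closed file, or closing one twice, simply doesn’t compile.

```swift
struct ClosedFile: ~Copyable {
    let path: String
    // Opening consumes the closed state and produces the open state.
    consuming func open() -> OpenFile { OpenFile(path: path) }
}

struct OpenFile: ~Copyable {
    let path: String
    func read() -> String { "contents of \(path)" }
    // Closing consumes the open state; no further reads can compile.
    consuming func close() -> ClosedFile { ClosedFile(path: path) }
}

let closed = ClosedFile(path: "/tmp/example.txt")
let opened = closed.open()  // `closed` is consumed here
print(opened.read())
let done = opened.close()   // `opened` is consumed here
_ = done
```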
Typestate is a powerful design pattern that brings great type and memory safety to your programs. It can drastically reduce the possibility of critical bugs and undefined behaviours by catching them at compile time. It can also reduce the reliance on inherently skippable quality control measures, such as tests, linters, code reviews, etc.
To decide if Typestate is a good choice for your use case, see if ANY of these apply:
- Your program behaves like a state machine. You can identify distinct states and transitions between them.
- Your program needs to enforce a strict order of operations, where out-of-order operations can lead to bugs or undefined behaviour.
- Your program manages resources that have open/use/close semantics. Typical examples: files, connections, streams, audio/video sessions, etc. Resources that can't be used before they are acquired, and that must be relinquished after use.
- Your program manages mutually exclusive systems. See the Tesla car example below, where the gas pedal either accelerates the real car or the video game car, depending on the state.
Setting your own vertical or horizontal alignment guide isn’t something I’ve thought about much when writing my own SwiftUI code. When they were announced, and later demo’d during a dub dub session in SwiftUI’s early days, I remember thinking, “Yeah, I don’t get that. Will check out later.”
Lately, though, I’ve seen two novel use cases where using one is exactly what was required. Or, at the very least, it solved a problem in a manageable way.
Rather than write out a bunch of code myself, I’ll get straight to it and show you the examples from other talented developers.
For Unison Cloud, we have simple prices that fit on a notecard. We don't pass the bizarrely complicated pricing structure of infra providers on to our customers, since most companies don't want or need that. They want simple and predictable pricing, and good productivity. For a larger enterprise deal, we're of course happy to negotiate a more granular pricing scheme (and we can suggest some options), as long as you aren't asking us to sell you $1 at a 20% discount. If you do need something custom, please get in touch.
To keep our costs under control, we make use of rate limiting so one user can't monopolize our resources or render an entire service unprofitable. But there are a range of limits which still allow us to operate profitably, and for bigger customers looking to optimize their spending, we're again happy to work out some custom deal.
This makes a lot more sense to us than having a complicated default pricing scheme that only serves the needs of the 1%. Big accounts are likely to want a custom deal anyway for their unique needs, so why not keep the default prices simple and leave the complexity for custom deals? Everybody wins.
Besides keeping pricing simple, we are actually serious about improving developer productivity. On Unison Cloud, there's no packaging or building containers, no boilerplate talking between services, no tedious code getting data stashed in durable storage and read back later, and lots more. We think the cloud should be simple and delightful to use, and we're making it happen.
Around the nLab and elsewhere, one occasionally sees an expression “the walking _____” where the blank is some mathematical concept. This is a colloquial way of referring to an archetypal model of the concept or type, and usually refers to a free or initial form of such a kind of structure.
Pronunciation is just as in ‘John is a walking almanac’ or ‘Eugene Levy is a walking pair of eyebrows’. The term is believed to have been introduced by James Dolan.
Sometimes, “the free-living _____” or “the free-standing _____” is used instead; this terminology is probably much older.
```swift
import SwiftUI

extension View {
    func debugLog(_ name: String) -> some View {
        MyLayout(name: name) { self }
    }
}

struct MyLayout: Layout {
    var name: String

    func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
        assert(subviews.count == 1)
        let result = subviews[0].sizeThatFits(proposal)
        print(name, proposal, result)
        return result
    }

    func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
        subviews[0].place(at: bounds.origin, proposal: proposal)
    }
}
```
Learn about changes and features included in the firmware updates for your AirPods.
Firmware updates are delivered automatically while your AirPods are charging and in Bluetooth range of your iPhone, iPad, or Mac that's connected to Wi-Fi. You can also use your iPhone, iPad, or Mac to check that your AirPods have the latest version.
To use your iPhone or iPad to check that your AirPods are up to date, make sure that you have the latest version of iOS or iPadOS. Go to Settings > Bluetooth, then tap the Info button next to the name of your AirPods. Scroll down to the About section to find the firmware version.
To use your Mac to check that your AirPods are up to date, make sure that you have the latest version of macOS. Press and hold the Option key while choosing Apple menu > System Information. Click Bluetooth, then look under your AirPods for the firmware version. With macOS Ventura or later, you can also choose Apple menu > System Settings, click Bluetooth, then click the Info button next to the name of your AirPods.
If you don't have an Apple device nearby, you can set up an appointment at an Apple Store or with an Apple Authorized Service Provider to update your firmware.
Sound Actions allows you to make sounds to perform actions such as the following:
- Tap
- Recenter apps
- Open Capture
- Access Control Center
- Adjust the volume
- Take a screenshot
- Scroll up or down
- Activate Siri
- Go to Settings > Accessibility > Interaction > Sound Actions.
- Tap a sound, then assign an action to it. You can also tap Practice to practice sounds before assigning one to an action.
Animating the rotation, scale, or translation of an entity is quite straightforward: entities have a move method that animates to a target Transform. The code below moves an entity 0.5 m along the X axis.

```swift
let transform = Transform(scale: .one, rotation: simd_quatf(), translation: [0.5, 0, 0])
entity.move(to: transform, relativeTo: entity, duration: 1, timingFunction: .easeInOut)
```

To do something at the end of the animation, subscribe to the scene’s publisher (store the returned cancellable, or the subscription is deallocated immediately):

```swift
let subscription = scene.publisher(for: AnimationEvents.PlaybackCompleted.self, on: entity)
    .sink { event in
        print("Animation finished")
    }
```
A class your Metal app uses to register for callbacks to synchronize its animations for a display.
`CAMetalDisplayLink` instances are a specialized way to interact with variable-rate displays when you need more control over the timing window to render your app’s frames. Controlling the timing window and rendering delay for frames can help you achieve smoother frame rates and avoid visual artifacts.

Tip: When working with less visually intensive apps, or apps that don’t use Metal, use CADisplayLink to handle variable refresh rates.

Your app initializes a new Metal display link by providing a target CAMetalLayer. Set this instance’s delegate property to an implementation that encodes the rendering work for Metal to perform. With the delegate set, synchronize the display link with the run loop you want rendering to occur on by calling the add(to:forMode:) method. Once you associate the display link with a run loop, the system calls the delegate’s metalDisplayLink(_:needsUpdate:) method to request new frames. This method receives update requests based on the preferredFrameRateRange and preferredFrameLatency of the display link. The system makes a best effort to make callbacks at appropriate times.

Your app should complete any commits to the Metal device’s MTLCommandQueue for rendering the display layer before calling present() on a drawable element. Your app can disable notifications by setting isPaused to true. When your app finishes with a display link, call invalidate() to remove it from all run loops and the target.
A timer object that allows your app to synchronize its drawing to the refresh rate of the display.
Your app initializes a new display link by providing a target object and a selector to call when the system updates the screen. To synchronize your display loop with the display, your application adds it to a run loop using the add(to:forMode:) method.
Once you associate the display link with a run loop, the system calls the selector on the target when the screen’s contents need to update. The target can read the display link’s timestamp property to retrieve the time the system displayed the previous frame. For example, an app that displays movies might use timestamp to calculate which video frame to display next. An app that performs its own animations might use timestamp to determine where and how visible objects appear in the upcoming frame.
The duration property provides the amount of time between frames at the maximumFramesPerSecond. To calculate the actual frame duration, use targetTimestamp - timestamp. You can use this value in your app to calculate the frame rate of the display, the approximate time the system displays the next frame, and to adjust the drawing behavior so that the next frame is ready in time to display.
Your app can disable notifications by setting isPaused to true. Also, if your app can’t provide frames in the time the system provides, you may want to choose a slower frame rate. An app with a slower but consistent frame rate appears smoother to the user than an app that skips frames. You can define the number of frames per second by setting preferredFramesPerSecond.
When your app finishes with a display link, call invalidate() to remove it from all run loops and to disassociate it from the target.
The code listing below shows how to create a display link and add it to the current run loop. The display link invokes the step function, which prints the target timestamp with each screen update.
```swift
func createDisplayLink() {
    let displaylink = CADisplayLink(target: self, selector: #selector(step))
    displaylink.add(to: .current, forMode: .default)
}

@objc func step(displaylink: CADisplayLink) {
    print(displaylink.targetTimestamp)
}
```

You shouldn’t subclass `CADisplayLink`.
In the early-to-mid 2010s, there was a renaissance in languages exploring new ways of doing concurrency. In the midst of this renaissance, one abstraction for achieving concurrent operations that was developed was the “future” or “promise” abstraction, which represented a unit of work that will maybe eventually complete, allowing the programmer to use this to manipulate control flow in their program. Building on this, syntactic sugar called “async/await” was introduced to take futures and shape them into the ordinary, linear control flow that is most common. This approach has been adopted in many mainstream languages, a series of developments that has been controversial among practitioners.
The story of our site-wide redesign and web tech and accessibility wins.
We are delighted to announce the open source first release of Pkl (pronounced Pickle), a programming language for producing configuration.
When thinking about configuration, it is common to think of static languages like JSON, YAML, or Property Lists. While these languages have their own merits, they tend to fall short when configuration grows in complexity. For example, their lack of expressivity means that code often gets repeated. Additionally, it can be easy to make configuration errors, because these formats do not provide any validation of their own.
To address these shortcomings, sometimes formats get enhanced by ancillary tools that add special logic. For example, perhaps there’s a need to make code more DRY, so a special property is introduced that understands how to resolve references, and merge objects together. Alternatively, there’s a need to guard against validation errors, so some new way is created to validate a configuration value against an expected type. Before long, these formats almost become programming languages, but ones that are hard to understand and hard to write.
On the other end of the spectrum, a general-purpose language might be used instead. Languages like Kotlin, Ruby, or JavaScript become the basis for DSLs that generate configuration data. While these languages are tremendously powerful, they can be awkward to use for describing configuration, because they are not oriented around defining and validating data. Additionally, these DSLs tend to be tied to their own ecosystems. It is a hard sell to use a Kotlin DSL as the configuration layer for an application written in Go.
We created Pkl because we think that configuration is best expressed as a blend between a static language and a general-purpose programming language. We want to take the best of both worlds; to provide a language that is declarative and simple to read and write, but enhanced with capabilities borrowed from general-purpose languages. When writing Pkl, you are able to use the language features you’d expect, like classes, functions, conditionals, and loops. You can build abstraction layers, and share code by creating packages and publishing them. Most importantly, you can use Pkl to meet many different types of configuration needs. It can be used to produce static configuration files in any format, or be embedded as a library into another application runtime.
We designed Pkl with three overarching goals:
- To provide safety by catching validation errors before deployment.
- To scale from simple to complex use-cases.
- To be a joy to write, with our best-in-class IDE integrations.
CMTime is a struct representing a time value such as a timestamp or duration. CMTime is defined by Core Media and is often used by AVFoundation APIs.

Because the interface of CMTime is horrible, and its documentation is even worse, here are a few use cases to make it easier to work with CMTime on a daily basis.
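As a small illustrative sketch (not from the original article), here is the kind of everyday CMTime work this covers: constructing times, doing arithmetic, and converting back to seconds. The timescale of 600 is a common convention, not a requirement:

```swift
import CoreMedia

// 600 divides evenly by common frame rates (24, 25, 30, 60).
let start = CMTime(seconds: 1.5, preferredTimescale: 600)  // 900/600
let halfSecond = CMTime(value: 300, timescale: 600)        // 300/600 = 0.5 s

let end = start + halfSecond  // Core Media's Swift overlay supports arithmetic
print(end.seconds)            // 2.0
print(end.isValid)            // always worth checking before using a CMTime
```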
Apple Vision Pro – available today in the US – is a wearable spatial computer that blends the digital with the physical, heralding a whole new platform for experiencing technology
Explore the visionOS simulator's debug modes in Xcode for spatial computing apps.
Let's explore the debugging modes within the visionOS simulator in Xcode, tailored for developers working on spatial computing applications. Understanding these modes is crucial for effectively visualizing and troubleshooting applications in the unique environment that Vision Pro offers.
Developers have been working hard to create or update their apps for Apple Vision Pro. Here's a list of selected apps you might want to try out.
Strap in, cancel your Netflix, and load up some amazing apps!
Prefer to see ALL the apps? There's a great list of supported visionOS apps in this Google doc. Worth a bookmark when you're looking for new ideas.
It’s not often that we see a new platform get introduced to the world. Over the last two decades, there have really been only two platforms that focus on general-purpose computing. We might be witnessing the beginning of the third today.
When there is a new platform, it’s always cool to see what new possibilities it enables. Going through the App Store for visionOS, I’m already surprised by the creativity of some developers, and amazed by the new experiences that take advantage of the platform. I’m really looking forward to seeing what other cool things people create on visionOS.
Welcome to the era of spatial computing.
Here are all the native third-party apps available on day one for visionOS that I was able to find through the Apple Media Service API.
It’s really impressive to see the developers working through all the challenges and complications to bring something new to the world, and congratulations on launching these apps! It wouldn’t be a general-purpose computer if there were no third-party apps 😛

The engineer side of me really hopes this platform can succeed, as the technology packed into Apple Vision Pro is truly impressive. However, like many other developers, I think Apple’s behavior around the App Store and app review is alienating developers and pushing them further away. Apple’s view that the iPhone could succeed without any third-party apps is just so out of touch to me (RIP Windows Phone 🥲). Maybe I'm naive, but I hope that with this new platform we can meet somewhere in the middle and have both parties appreciate each other’s role in making the platform successful. (The current relationship is definitely not healthy, as I have to worry about the possibility of retaliation from Apple just for writing this 🙃)
React Native is not a single-company initiative, and its modularity allows many to step up and provide a solution for each aspect. Some libraries are gaining popularity, some solutions are fading from the scene, and some limitations are becoming more apparent.
All of this can make it difficult for developers to choose the right tools and libraries for their projects and be confident in their decisions.
The second edition of the survey presents the trends and outlines the new initiatives happening in the React Native ecosystem. Starting with this edition, we can examine the popularity and usability of specific solutions year over year. Some of the trends were expected, while others are complete surprises. We find that some aspects are getting more attention from contributors than ever before - take a look at the styling or debugging sections (and others!). The first edition of the survey was very successful. Major players in the ecosystem are reading and responding to the data. We've also grown by more than 500 new respondents year over year, reaching nearly 2400 unique respondents. By reaching more and more developers each year, we become a torch that guides people into the depths of the React Native ecosystem.
Enter the second edition of the State of React Native survey. Designed to consolidate opinions and provide meaningful insight into a variety of aspects that React Native developers deal with on a daily basis. I'm confident that the data you'll find here will serve you well the next time you need to choose the right state management solution for your project, or make any other React Native-related decision.
Discover spatial computing apps. Enjoy groundbreaking immersive experiences, explore new universes and get to know visionOS apps that are available on Apple Vision Pro.
Learn the fundamental concepts of SwiftNIO, such as EventLoops and nonblocking I/O
The Swift language constructs `self`, `Self`, and `Self.self` can sometimes be a source of confusion, even for experienced developers. It's not uncommon to pause and recall what each is referring to in different contexts. In this post I aim to provide some straightforward examples that clarify the distinct roles and uses of these three constructs. Whether it's managing instance references, adhering to protocol conformance, or accessing metatype information, understanding these concepts is key to harnessing the full potential of Swift in our projects.
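As a quick sketch of the three constructs (the `Counter` type is invented for illustration):

```swift
struct Counter {
    var value = 0

    mutating func increment() {
        self.value += 1            // self: the current instance
    }

    static func make() -> Self {   // Self: the concrete type this code runs as
        Self()
    }
}

let counter = Counter.make()       // an instance of Counter
let metatype = Counter.self        // Counter.self (and Self.self inside the
print(metatype)                    // type) is the metatype value: "Counter"
```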
Create an alternative app marketplace or distribute your app on one.
An alternative app marketplace is an iOS app from which someone can install apps from other developers, as an alternative to the App Store. MarketplaceKit enables alternative app marketplaces to install the apps they host on people’s devices. The framework also supports features that compose a quality browsing and installation experience, such as Spotlight Search and App Thinning. With the framework, you can manage existing app installations, convey download progress, update app licensing, and customize app search behavior.
In addition to alternative app marketplaces, this framework also serves:
- Web browsers, specifically by requesting installation of an alternative app marketplace when triggered through that marketplace’s webpage.
- Apps that distribute from an alternative app marketplace, by determining the installation source at runtime. This allows a marketplace-hosted app to branch its functionality depending on the marketplace from which it installs on a particular device, to accommodate differences on either marketplace.
To learn about the criteria and request the marketplace entitlement, see Getting started as an alternative app marketplace in the European Union.
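As a hedged sketch of the runtime branching described above, assuming MarketplaceKit’s `AppDistributor.current` API (iOS 17.4+); the print statements are invented:

```swift
import MarketplaceKit

// Branch functionality on where this app was installed from.
func logInstallationSource() async {
    do {
        switch try await AppDistributor.current {
        case .appStore:
            print("Installed from the App Store")
        case .marketplace(let marketplaceID):
            print("Installed from alternative marketplace: \(marketplaceID)")
        default:
            print("Installed from another distribution channel")
        }
    } catch {
        print("Could not determine distribution source: \(error)")
    }
}
```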
Record an AR session in Reality Composer and replay it in your ARKit app.
ARKit apps use video feeds and sensor data from an iOS device to understand the world around the device. This reliance on real-world input makes the testing of an AR experience challenging because real-world input is never the same across two AR sessions. Differences in lighting conditions, device motion, and the location of nearby objects all change how RealityKit understands and renders the scene each time.
To provide consistent data to your AR app, you can record a session using Reality Composer, then use the recorded camera and sensor data to drive your app when running from Xcode.
Communication between views in SwiftUI can be tricky. As explained in a previous story about SwiftUI State monitoring, SwiftUI PropertyWrappers offer us a lot by hiding some complexity of managing the source of truth for our views. However, they can also bring confusion regarding state management and how to communicate between views.
Here’s a quick recap of the most common options.
Apple is sharing new business terms available for developers’ apps in the European Union. Developers can choose to adopt these new business terms, or stay on Apple’s existing terms. For existing developers who want nothing to change for them — from how the App Store works currently and in the rest of the world — no action is needed, and they can continue to distribute their apps only on the App Store and use its private and secure In-App Purchase system. Developers must adopt the new business terms for EU apps to use the new capabilities for alternative distribution or alternative payment processing.
Swift 5.2 brought some awesome changes to the package manager thanks to SE-0226 that massively improved the handling of dependencies. Going forward no longer would you face the spinning resolution of doom if you had dependency conflicts. And no longer would you have to download all transitive dependencies if some were only used in testing of your dependencies.
The Apple TV app lets you browse content from a variety of video services without switching from one app to the next. It provides movies, shows, and handpicked recommendations. The app is on iOS and tvOS devices, so you can watch wherever you go.
Changes to iOS
In the EU, Apple is making a number of changes to iOS to comply with the DMA. For developers, those changes include new options for distributing apps. The coming changes to iOS in the EU include:
- New options for distributing iOS apps from alternative app marketplaces — including new APIs and tools that enable developers to offer their iOS apps for download from alternative app marketplaces.
- New framework and APIs for creating alternative app marketplaces — enabling marketplace developers to install apps and manage updates on behalf of other developers from their dedicated marketplace app.
- New frameworks and APIs for alternative browser engines — enabling developers to use browser engines, other than WebKit, for browser apps and apps with in-app browsing experiences.
- Interoperability request form — where developers can submit additional requests for interoperability with iPhone and iOS hardware and software features.
As announced by the European Commission, Apple is also sharing DMA-compliant changes impacting contactless payments. That includes new APIs enabling developers to use NFC technology in their banking and wallet apps throughout the European Economic Area. And in the EU, Apple is introducing new controls that allow users to select a third-party contactless payment app — or an alternative app marketplace — as their default.
Inevitably, the new options for developers’ EU apps create new risks to Apple users and their devices. Apple can’t eliminate those risks, but within the DMA’s constraints, the company will take steps to reduce them. These safeguards will be in place when users download iOS 17.4 or later, beginning in March, and include:
- Notarization for iOS apps — a baseline review that applies to all apps, regardless of their distribution channel, focused on platform integrity and protecting users. Notarization involves a combination of automated checks and human review.
- App installation sheets — that use information from the Notarization process to provide at-a-glance descriptions of apps and their functionality before download, including the developer, screenshots, and other essential information.
- Authorization for marketplace developers — to ensure marketplace developers commit to ongoing requirements that help protect users and developers.
- Additional malware protections — that prevent iOS apps from launching if they’re found to contain malware after being installed to a user’s device.
These protections — including Notarization for iOS apps, and authorization for marketplace developers — help reduce some of the privacy and security risks to iOS users in the EU. That includes threats like malware or malicious code, and risks of installing apps that misrepresent their functionality or the responsible developer.
Changes to Safari
Today, iOS users already have the ability to set a third-party web browser — other than Safari — as their default. Reflecting the DMA’s requirements, Apple is also introducing a new choice screen that will surface when users first open Safari in iOS 17.4 or later. That screen will prompt EU users to choose a default browser from a list of options.
This change is a result of the DMA’s requirements, and means that EU users will be confronted with a list of default browsers before they have the opportunity to understand the options available to them. The screen also interrupts EU users’ experience the first time they open Safari intending to navigate to a webpage.
Changes to the App Store
On the App Store, Apple is sharing a number of changes for developers with apps in the EU, affecting apps across Apple’s operating systems — including iOS, iPadOS, macOS, watchOS, and tvOS. The changes also include new disclosures informing EU users of the risks associated with using alternatives to the App Store’s secure payment processing.
For developers, those changes include:
- New options for using payment service providers (PSPs) — within a developer’s app to process payments for digital goods and services.
- New options for processing payments via link-out — where users can complete a transaction for digital goods and services on the developer’s external website. Developers can also inform EU users of promotions, discounts, and other deals available outside of their apps.
- Business planning tools — for developers to estimate fees and understand metrics associated with Apple’s new business terms for apps in the EU.
The changes also include new steps to protect and inform EU users, including:
- App Store product page labels — that inform users when an app they’re downloading uses alternative payment processing.
- In-app disclosure sheets — that let users know when they are no longer transacting with Apple, and when a developer is directing them to transact using an alternative payment processor.
- New App Review processes — to verify that developers accurately communicate information about transactions that use alternative payment processors.
- Expanded data portability on Apple’s Data & Privacy site — where EU users can retrieve new data about their usage of the App Store and export it to an authorized third party.
For apps that use alternative payment processing, Apple will not be able to issue refunds, and will have less ability to support customers encountering issues, scams, or fraud. Helpful App Store features — like Report a Problem, Family Sharing, and Ask to Buy — will also not reflect these transactions. Users may have to share their payment information with additional parties, creating more opportunities for bad actors to steal sensitive financial information. And on the App Store, users’ purchase history and subscription management will only reflect transactions made using the App Store’s In-App Purchase system.
```
vmmap --summary X.memgraph
vmmap X.memgraph | rg "MEMORY REGION NAME"
vmmap --verbose X.memgraph | rg "MEMORY REGION"
leaks X.memgraph --traceTree 0xSTARTINGMEMORYADDRESS
malloc_history X.memgraph --fullStacks 0xSTARTINGMEMORYADDRESS
```

Other helpful commands:

```
vmmap --pages X.memgraph
leaks X.memgraph
heap X.memgraph
heap X.memgraph -sortBySize
heap X.memgraph -addresses all | <classes-pattern>
```
> ```swift
> .safeAreaInset(edge: .top, spacing: 0) {
>   if canFilterTimeline, !pinnedFilters.isEmpty {
>     TimelineQuickAccessPills(pinnedFilters: $pinnedFilters, timeline: $timeline)
>       .padding(.vertical, 8)
>       .padding(.horizontal, .layoutPadding)
>       .background(theme.primaryBackgroundColor.opacity(0.50))
>       .background(Material.regular)
>   }
> }
> .if(canFilterTimeline && !pinnedFilters.isEmpty) { view in
>   view.toolbarBackground(.hidden, for: .navigationBar)
> }
> ```
This dashboard tracks technical issues in major software platforms which disadvantage Firefox relative to the first-party browser. We consider aspects like security, stability, performance, and functionality, and propose changes to create a more level playing field.
Further discussion on the live issues can be found in our platform-tilt issue tracker.
In this blog post, I'll explain when and where you can use Swift's new package access modifier. I'll also give an outlook on plans from Apple to extend its usefulness for closed-code enterprise SDKs.
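To preview the idea, here is a minimal sketch (target and package names are invented): a `package` declaration is visible to other targets in the same Swift package, but not to the package’s clients:

```swift
// In target "Networking" of package "MyKit" (hypothetical names):
package struct RequestSigner {
    package init() {}

    package func sign(_ path: String) -> String {
        // Placeholder logic, just for illustration.
        path + "?signed=true"
    }
}

// In target "App" of the same package, this compiles:
//     let signer = RequestSigner()
// In an external package that depends on MyKit, it does not:
// RequestSigner is not public, so it stays an implementation detail.
```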
This should be everything you need to decode simple QR codes by hand. You can now either press the "Random code" button at the top to practice on short English words, or go find a QR code in the wild, and scan it using the "Scan code" button!
SwiftUI’s LazyVGrid and LazyHGrid offer powerful tools for creating dynamic and responsive grid layouts in iOS apps. Starting with the basics of LazyVGrid, we explored how different GridItem types like Adaptive, Fixed, and Flexible can shape your grid’s behavior and appearance. We then delved into LazyHGrid, highlighting its horizontal layout capabilities, which complement the vertical nature of LazyVGrid. The section on customizing grid spacing and alignment emphasized the importance of these elements in enhancing the visual appeal and functionality of your grids. By mastering these grid layouts, you can create diverse and engaging interfaces that are both visually appealing and user-friendly, significantly elevating the user experience in your SwiftUI applications.
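As a compact reminder of the pattern (the view and data here are invented), an adaptive `LazyVGrid` fits as many cells of at least a minimum width as each row allows:

```swift
import SwiftUI

struct ColorGrid: View {
    let colors: [Color] = [.red, .orange, .yellow, .green, .blue, .purple]

    // Adaptive: fill each row with as many 80pt-or-wider cells as fit.
    private let columns = [GridItem(.adaptive(minimum: 80), spacing: 12)]

    var body: some View {
        ScrollView {
            LazyVGrid(columns: columns, alignment: .center, spacing: 12) {
                ForEach(colors.indices, id: \.self) { index in
                    RoundedRectangle(cornerRadius: 8)
                        .fill(colors[index])
                        .frame(height: 80)
                }
            }
            .padding()
        }
    }
}
```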
Communication between views in SwiftUI can be tricky. As explained in a previous story about SwiftUI State monitoring, SwiftUI PropertyWrappers offer us a lot by hiding some complexity of managing the source of truth for our views. However, they can also bring confusion regarding state management and how to communicate between views.
There are two key insights here.
- The alignment guide passed to the `.alignmentGuide(…)` method refers to the container, not the view we’re modifying.
- The alignment guides influence the layout of the cross dimension of the stack. So for a `VStack` you can control the horizontal alignment (but clearly the bars here are still stacked vertically). For an `HStack` you can control the vertical alignment.

So in my case I need a `ZStack` so I can have them vertically aligned and horizontally offset from each other in a way I can modify with the alignment guide.
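A hedged sketch of that `ZStack` idea (the bar data is invented): each bar shifts right and down from the stack’s `topLeading` guides, which is exactly the cross-dimension control the two insights describe:

```swift
import SwiftUI

struct OffsetBars: View {
    let barWidths: [CGFloat] = [120, 80, 150]

    var body: some View {
        ZStack(alignment: .topLeading) {
            ForEach(barWidths.indices, id: \.self) { index in
                Rectangle()
                    .fill(.blue)
                    .frame(width: barWidths[index], height: 16)
                    // Returning a smaller value for a guide pushes the view
                    // in the positive direction, so each bar steps right...
                    .alignmentGuide(.leading) { dims in
                        dims[.leading] - CGFloat(index) * 20
                    }
                    // ...and down from the previous one.
                    .alignmentGuide(.top) { dims in
                        dims[.top] - CGFloat(index) * 24
                    }
            }
        }
    }
}
```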
ML models are probabilistic. Imagine that you want to know what’s the best cuisine in the world. If you ask someone this question twice, a minute apart, their answers both times should be the same. If you ask a model the same question twice, its answer can change. If the model thinks that Vietnamese cuisine has a 70% chance of being the best cuisine and Italian cuisine has a 30% chance, it’ll answer “Vietnamese” 70% of the time, and “Italian” 30%.
This probabilistic nature makes AI great for creative tasks. What is creativity but the ability to explore beyond the common possibilities, to think outside the box?
However, this probabilistic nature also causes inconsistency and hallucinations. It’s fatal for tasks that depend on factuality. Recently, I went over 3 months’ worth of customer support requests of an AI startup I advise and found that ⅕ of the questions are because users don’t understand or don’t know how to work with this probabilistic nature.
To understand why AI’s responses are probabilistic, we need to understand how models generate responses, a process known as sampling (or decoding). This post consists of 3 parts.
- Sampling: sampling strategies and sampling variables including temperature, top-k, and top-p.
- Test time sampling: sampling multiple outputs to help improve a model’s performance.
- Structured outputs: how to get models to generate outputs in a certain format.
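To make the sampling step concrete, here is a toy sketch (vocabulary and logits invented) of temperature-scaled sampling from next-token logits; it is not how any particular model implements it:

```swift
import Foundation

// Turn logits into probabilities (softmax) after dividing by temperature,
// then draw one token index from the resulting distribution.
func sampleToken(logits: [Double], temperature: Double) -> Int {
    let scaled = logits.map { $0 / max(temperature, 1e-6) }
    let maxLogit = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxLogit) }   // numerically stable
    let total = exps.reduce(0, +)
    let probs = exps.map { $0 / total }

    var draw = Double.random(in: 0..<1)
    for (index, p) in probs.enumerated() {
        draw -= p
        if draw < 0 { return index }
    }
    return probs.count - 1
}

let vocab = ["Vietnamese", "Italian"]
let logits = [1.0, 0.153]   // roughly a 70/30 split after softmax
print(vocab[sampleToken(logits: logits, temperature: 1.0)])  // varies per run
```

Lower temperatures sharpen the distribution toward “Vietnamese” every time; higher temperatures flatten it toward a coin flip.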
A service that provides a custom communication channel between your app and a File Provider extension.
Defining the Service’s Protocol
Services let you define custom actions that are not provided by Apple’s APIs. Both the app and the File Provider extension must agree upon the service’s name and protocol. Communicate the name and protocol through an outside source (for example, posting a header file that defines both the name and protocol, or publishing a library that includes them both).
The service can be defined by either the app or the File Provider extension:
- Apps can define a service for features they would like to use. File providers can then choose to support those features by implementing the service.
- File Provider extensions can provide a service for the features they support. Apps can then choose to use the specified service.
When defining a service’s protocol, the parameters for each method must adhere to the following rules:
- The parameter’s class must conform to NSSecureCoding.
- The parameter’s class must be defined in both the app and the File Provider extension (for example, standard system types or classes defined in a library imported by both sides).
- If a collection parameter contains types other than property list types (see Property List Types and Objects), declare the valid types using the NSXPCInterface class’s classes(for:argumentIndex:ofReply:) method.
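A hedged sketch of such a shared definition (the service name and protocol are invented; in practice both sides would import them from a common header or library):

```swift
import FileProvider
import Foundation

// Both the app and the File Provider extension must use this exact name.
let thumbnailServiceName = NSFileProviderServiceName("com.example.thumbnails")

// Every parameter must be a secure-coding-friendly type known to both sides.
@objc protocol ThumbnailFetching {
    func fetchThumbnail(forItemWithIdentifier identifier: NSString,
                        completion: @escaping (NSData?, NSError?) -> Void)
}

// When vending or consuming the service over XPC, build the interface from
// the shared protocol.
let interface = NSXPCInterface(with: ThumbnailFetching.self)
```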
Tells the delegate that the user closed one or more of the app’s scenes from the app switcher.
When the user removes a scene from the app switcher, UIKit calls this method before discarding the scene’s associated session object altogether. (UIKit also calls this method to discard scenes that it can no longer display.) If your app isn’t running, UIKit calls this method the next time your app launches.
Use this method to update your app’s data structures and to release any resources associated with the scene. For example, you might use this method to update your app’s interface to remove the content associated with the scenes.
UIKit calls this method only when dismissing scenes permanently. It doesn’t call it when the system disconnects a scene to free up memory. Memory reclamation deletes the scene objects, but preserves the sessions associated with those scenes.
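A minimal sketch of the cleanup this describes (the `SceneCache` helper is a stand-in for whatever per-scene storage your app keeps):

```swift
import UIKit

// A trivial stand-in for app-specific per-scene storage.
final class SceneCache {
    static let shared = SceneCache()
    private var caches: [String: Any] = [:]
    func removeCache(for identifier: String) { caches[identifier] = nil }
}

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didDiscardSceneSessions sceneSessions: Set<UISceneSession>) {
        // The scenes are gone for good: release any resources tied to them,
        // keyed by each session's persistent identifier.
        for session in sceneSessions {
            SceneCache.shared.removeCache(for: session.persistentIdentifier)
        }
    }
}
```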
The @Observable Macro simplifies code at the implementation level and increases the performance of SwiftUI views by preventing unnecessary redraws. You’re no longer required to use @ObservedObject, ObservableObject, and @Published. However, you still need to use `@State` to create a single source of truth for model data.
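A minimal sketch of the new pattern (types invented):

```swift
import SwiftUI
import Observation

@Observable
class CounterModel {
    var count = 0          // tracked automatically; no @Published required
}

struct CounterView: View {
    // @State remains the way to own the single source of truth.
    @State private var model = CounterModel()

    var body: some View {
        // Only views that actually read `count` re-render when it changes.
        Button("Count: \(model.count)") {
            model.count += 1
        }
    }
}
```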
This open access book provides plenty of pleasant mathematical surprises. There are many fascinating results that do not appear in textbooks although they are accessible with a good knowledge of secondary-school mathematics. This book presents a selection of these topics, including the mathematical formalization of origami, construction with straightedge and compass (and other instruments), the five- and six-color theorems, a taste of Ramsey theory, and little-known theorems proved by induction.
Among the most surprising theorems are the Mohr-Mascheroni theorem that a compass alone can perform all the classical constructions with straightedge and compass, and Steiner's theorem that a straightedge alone is sufficient provided that a single circle is given. The highlight of the book is a detailed presentation of Gauss's purely algebraic proof that a regular heptadecagon (a regular polygon with seventeen sides) can be constructed with straightedge and compass.
Although the mathematics used in the book is elementary (Euclidean and analytic geometry, algebra, trigonometry), students in secondary schools and colleges, teachers, and other interested readers will relish the opportunity to confront the challenge of understanding these surprising theorems.
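For a taste of the heptadecagon highlight: constructibility with straightedge and compass amounts to expressibility in nested square roots, and Gauss’s analysis yields the classical value (quoted here from the standard literature, not from the book itself):

$$16\cos\frac{2\pi}{17} = -1 + \sqrt{17} + \sqrt{34 - 2\sqrt{17}} + 2\sqrt{17 + 3\sqrt{17} - \sqrt{34 - 2\sqrt{17}} - 2\sqrt{34 + 2\sqrt{17}}}$$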
Supplementary material to the book can be found at motib/suprises.
C0deine is a compiler for C0. It is written in Lean 4, which allows us to express the formal semantics in the same language as the compiler itself. Hopefully, the whole compiler will be verified at some point/soon.
C0deine implements a number of sub-languages of C0 and fixes some bugs present in the existing compiler. See this document for information about the languages themselves, as well as a list of changes/corrections. Also, here is a work-in-progress document detailing the static semantics of C0.
If you find any issues, please report them here.
Passkeys.directory is a community-driven index of websites, apps, and services that offer signing in with passkeys.
Investigate why your universal links are opening in Safari instead of your app.
Universal links use applinks, an associated domains service, to link directly to content within your app without routing through Safari or your website. If your app is installed, a universal link will open in your app. If it is not installed, the link will open in your default web browser, where your site handles the rest. If you are unfamiliar with universal links and how to support them in your code, see Supporting Associated Domains and Allowing apps and websites to link to your content.
This document outlines how to:
A “double category of relations” is defined in this paper as a cartesian equipment in which every object is suitably discrete. The main result is a characterization theorem: a “double category of relations” is equivalent to a double category of relations on a regular category when it has strong and monic tabulators and a double-categorical subobject comprehension scheme. This result is based in part on the recent characterization of double categories of spans due to Aleiferi. The overall development can be viewed as a double-categorical version of the notion of a “functionally complete bicategory of relations” or a “tabular allegory”.
Users can turn any space into a personal theater, enjoy more than 150 3D movies, and experience the future of entertainment with Apple Immersive Video
This is a book about building applications using hypermedia systems. Hypermedia systems might seem like a strange phrase: how is hypermedia a system? Isn’t hypermedia just a way to link documents together?
In 2019, I built a work-for-hobby iOS simulator on a strict regimen of weekends and coffee. While the full details of this project will stay in-house, there’s enough I can share to hopefully be interesting!
A quick intro to the steps. There are essentially two steps in enabling universal links:

- Enable the entitlement. This essentially tells the app "you can open links from this specific domain". It's done in Xcode once, and forms part of your binary's metadata.
- Provide a data file on the domain you want to enable links from. This is deployed to prove to the world that you, the domain owner, approve of this app accepting your links.

This data file is called the Apple App Site Association file (often referred to as the AASA).

We won't go into detail on the contents here; Apple's documentation covers it well. You can find that here.
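For reference, a minimal AASA file looks roughly like this (the team ID, bundle ID, and path are invented placeholders):

```json
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "ABCDE12345.com.example.app",
        "paths": ["/articles/*"]
      }
    ]
  }
}
```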
The skills you need are your intelligence, cunning, perseverance and the will to test yourself against the intricacies of multi-threaded programming in the divine language of C#. Each challenge below is a computer program of two or more threads. You take the role of the Scheduler — and a cunning one! Your objective is to exploit flaws in the programs to make them crash or otherwise malfunction.
For example, you might cause a deadlock to occur or you might schedule context switches in such a way that two threads enter the same critical section at the same time. Any action that disrupts the program this way counts as a victory for you.
You are the Scheduler — you only have one tool at your disposal: the ability to switch contexts at any time, as the total master of time and interruptions. Let's hope it is enough... it has to be, because the Parallel Wizard's armies are upon us and only you can lead the Sequentialist armies into victory!
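The classic exploit the game hands you is lock ordering. As a sketch in Swift rather than the game's C#: two threads that take the same two locks in opposite order will deadlock if you, the Scheduler, switch contexts between the acquisitions:

```swift
import Foundation

let lockA = NSLock()
let lockB = NSLock()

let thread1 = Thread {
    lockA.lock()
    Thread.sleep(forTimeInterval: 0.1)   // the Scheduler strikes here...
    lockB.lock()                         // ...so thread1 waits for lockB
    lockB.unlock(); lockA.unlock()
}

let thread2 = Thread {
    lockB.lock()
    Thread.sleep(forTimeInterval: 0.1)
    lockA.lock()                         // ...while thread2 waits for lockA
    lockA.unlock(); lockB.unlock()
}

thread1.start(); thread2.start()         // neither thread ever finishes
```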
Building Blocks of “LLM Programming”
Prompts are how one channels an LLM to do something. LLMs in a sense always have lots of “latent capability” (e.g. from their training on billions of webpages). But prompts—in a way that’s still scientifically mysterious—are what let one “engineer” what part of that capability to bring out.
There are many different ways to use prompts. One can use them, for example, to tell an LLM to “adopt a particular persona”. One can use them to effectively get the LLM to “apply a certain function” to its input. And one can use them to get the LLM to frame its output in a particular way, or to call out to tools in a certain way.
And much as functions are the building blocks for computational programming—say in the Wolfram Language—so prompts are the building blocks for “LLM programming”. And—much like functions—there are prompts that correspond to “lumps of functionality” that one can expect will be repeatedly used.
Today we’re launching the Wolfram Prompt Repository to provide a curated collection of useful community-contributed prompts—set up to be seamlessly accessible both interactively in Chat Notebooks and programmatically in things like LLMFunction:
An array of additional metadata for the player item to supplement or replace an asset’s embedded metadata.
AVPlayerViewController supports displaying the following metadata identifiers:
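The identifier list itself is omitted above, but as a usage sketch (URL and values invented), supplementing an item’s metadata looks like this:

```swift
import AVKit

let item = AVPlayerItem(url: URL(string: "https://example.com/movie.m3u8")!)

let title = AVMutableMetadataItem()
title.identifier = .commonIdentifierTitle
title.value = "A Sample Film" as NSString
title.extendedLanguageTag = "und"        // required for the item to display

item.externalMetadata = [title]

let controller = AVPlayerViewController()
controller.player = AVPlayer(playerItem: item)
```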
If you’ve worked with AVFoundation’s APIs, you’ll be familiar with CVPixelBuffer, an object which represents a single video frame. AVFoundation manages the tasks of reading, writing, and playing video frames, but the process changes when dealing with spatial video (aka MV-HEVC), which features video from two separate angles.
Loading a spatial video into an AVPlayer or AVAssetReader on iOS appears similar to loading a standard video. By default, however, the frames you receive only show one perspective (the “hero” eye view), while the alternate angle remains compressed, unread, inside the MV-HEVC file.

With iOS 17.2 and macOS 14.2, new AVFoundation APIs were introduced for handling MV-HEVC files. They make it easy to get both angles of a spatial video, but are lacking in documentation. Here are a few tips for working with them:
An object containing information broadcast to registered observers that bridges to Notification; use `NSNotification` when you need reference semantics or other Foundation-specific behavior.
A few days ago, my former coworker Evan Hahn posted “The world’s smallest PNG”, an article walking through the minimum required elements of the PNG image format. He gave away the answer in the very first line:
“The smallest PNG file is 67 bytes. It’s a single black pixel.” However (spoilers!) he later points out that there are several valid 67-byte PNGs, such as a 1x1 all-white image, or an 8x1 all-black image, or a 1x1 gray image. All of these exploit the fact that you can’t have less than one byte of pixel data, so you might as well use all eight bits of it. Clever!
However again…are we really limited to one byte of pixel data?
(At this point you should go read Evan’s article before continuing with mine.)
Ever wanted to know how to find and fix performance issues in your app, or just how to make your app faster? In this article we go over how I made an app 19 times faster by replacing a single component, along with how to find and fix other performance-related issues.
Most unidirectional architecture frameworks have a similar base class, so we model all our feature state inside that `State` type, whether or not that data should trigger view renders.

And that’s one of the main bottlenecks of some Redux-ish architectures that tend to model all the app state in a single place: view bodies recompute even if the state change was unrelated to that view. There are certainly ways to fix that (like TCA’s ViewStore), but as always, that comes with complexity and also with a feeling that we are kind of fighting the framework.
Fortunately, the new Observation framework is here to fix this. Or not… Let’s see.
AnyView is a type-erased view that can be handy in SwiftUI containers consisting of heterogeneous views. In these cases, you don’t need to specify the concrete type of all the views that can be in the view hierarchy. With this approach, you can avoid using generics, thus simplifying your code.

However, that can come with a performance penalty. As mentioned in a previous post, SwiftUI relies on the type of the views to compute the diffing. If it’s AnyView (which is basically a wrapped type), SwiftUI will have a hard time figuring out what actually changed, and will likely redraw more than necessary.
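To see the trade-off, compare a type-erased container with one that lets the result builder keep concrete types (both views are invented sketches):

```swift
import SwiftUI

// Type-erased: convenient, but SwiftUI only sees "AnyView" on both branches.
struct ErasedRow: View {
    let isHighlighted: Bool

    var body: AnyView {
        if isHighlighted {
            return AnyView(Text("Hello").bold())
        } else {
            return AnyView(Image(systemName: "star"))
        }
    }
}

// Structured: the result builder preserves a concrete conditional type, so
// SwiftUI can tell exactly which branch changed and diff accordingly.
struct TypedRow: View {
    let isHighlighted: Bool

    var body: some View {
        if isHighlighted {
            Text("Hello").bold()
        } else {
            Image(systemName: "star")
        }
    }
}
```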