Matt Cox

Pack

A Swift package to serialise and deserialise data into an external representation

I’m pleased to announce the release of an open source package for Swift called Pack.

Pack is a Swift package for serialising and deserialising Swift data types into an external representation. This can be useful for creating custom file formats, or for packaging data for transmission.

Pack is similar in spirit to the built-in Codable protocols; however, unlike Codable, Pack is not key/value based, and as such is intended for packing and unpacking binary data for efficient storage. Because there are 101 ways to pack complex data, basic Swift types are supported out of the box, but adding support for more complex types is left to the end user, who is responsible for ensuring the data layout matches the end-use case.

Serialising and deserialising data

To serialise some data, you create a BinaryPack object and pack the data:

// Initialize a new Packer
let packer = BinaryPack()

// Pack an Integer
try packer.pack(12345)

// Pack a Double
try packer.pack(6789.0)

// Pack a String with utf8 encoding
try packer.pack("Hello, world!", using: .utf8)

You can then get the data as a Swift Data object, and use a BinaryPack object to unpack the data:

// Initialize a new Unpacker, and specify the data that should be unpacked
let unpacker = BinaryPack(from: packer.data)

// Unpack an Integer
let int = try unpacker.unpack(Int.self)

// Unpack a Double
let double = try unpacker.unpack(Double.self)

// Unpack a String that was packed with utf8 encoding
let string = try unpacker.unpack(String.self, using: .utf8)

Serialising and deserialising complex types

Pack supports serialisation and deserialisation of basic Swift types, but support for more complex types can be added by the client, who is responsible for the layout of data in memory.

For example, this structure cannot be serialised or deserialised out of the box:

struct Color {
    let name: String
    var red: Double
    var green: Double
    var blue: Double
    var alpha: Double = 1.0
}

To allow this struct to be serialised and deserialised, it must add conformance to Packed, which is shorthand for Packable and Unpackable.

extension Color: Packed {
    init(from unpacker: inout Unpacker) throws {
        self.name = try unpacker.unpack(String.self, using: .utf16)
        self.red = try unpacker.unpack(Double.self)
        self.green = try unpacker.unpack(Double.self)
        self.blue = try unpacker.unpack(Double.self)
        self.alpha = try unpacker.unpack(Double.self)
    }

    func pack(to packer: inout Packer) throws {
        try packer.pack(name, using: .utf16)
        try packer.pack(red)
        try packer.pack(green)
        try packer.pack(blue)
        try packer.pack(alpha)
    }
}
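
With that conformance in place, the custom type should round-trip just like the built-in types. The following is a minimal sketch, assuming Packed types go through the same pack and unpack calls shown above for the basic types; check the documentation for the exact entry points:

// Pack a custom Packed value (assumes the generic pack/unpack
// calls shown earlier also accept Packed types).
let packer = BinaryPack()
try packer.pack(Color(name: "Coral", red: 1.0, green: 0.5, blue: 0.31))

// Unpack it again from the serialised data.
let unpacker = BinaryPack(from: packer.data)
let color = try unpacker.unpack(Color.self)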

And that’s basically it. There is some additional functionality, such as working with streams. For more information I suggest you check out the full documentation here: Documentation.

Matt Cox

Tree

A hierarchical tree structure for Swift

I’m pleased to announce the release of an open source package for Swift called Tree.

Tree is a Swift package implementing a hierarchical tree structure constructed of interconnected nodes.

Each node has an associated value, which can be any identifiable type, and you can easily build a tree by describing its nodes. For example, to create a tree of strings:

// Create a root node.
//
let root = Node("root")

// Create two nodes as children of the root node.
//
let A = root.append(child: "A")
let B = root.append(child: "B")

// Create some leaf nodes as children of node A.
//
let C = A.append(child: "C")
let D = A.append(child: "D")

You can also build the tree declaratively:

let root = Root("root") {
    Branch("A") {
        "C"
        "D"
    }
    
    "B"
}

The tree can then be iterated depth-first or breadth-first, manipulated to append, remove, or prune nodes, and the node properties can be inspected:

// Test if a node is a root node (has no parent).
//
print(root.isRoot) // "true"

// Test if a node is a leaf node (has no children).
//
print(root.isLeaf) // "false"

// Lookup a child node by identifier.
//
if let A = root.node(identifiedBy: "A") {
    // Get the parent for a node.
    //
    print(A.parent?.element) // "root"

    // Iterate over the children.
    //
    print(A.reduce("") {
        $0 + "\($1.element), "
    })
    // "C, D, "

    // Append a new child to the node.
    //
    A.append(child: Node("E"))
}
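
To make the two iteration orders mentioned above concrete, here is an illustrative, self-contained sketch of depth-first and breadth-first traversal over any node type that exposes its children. It is not the Tree package's API, which provides its own iteration support:

// Illustrative only; not the Tree package's API.
// Visit a node and its descendants depth-first (children before siblings).
func depthFirst<T>(_ node: T, children: (T) -> [T], visit: (T) -> Void) {
    visit(node)
    for child in children(node) {
        depthFirst(child, children: children, visit: visit)
    }
}

// Visit a node and its descendants breadth-first (level by level).
func breadthFirst<T>(_ node: T, children: (T) -> [T], visit: (T) -> Void) {
    var queue = [node]
    while !queue.isEmpty {
        let current = queue.removeFirst()
        visit(current)
        queue.append(contentsOf: children(current))
    }
}

// For the example tree above, depth-first visits root, A, C, D, B,
// whereas breadth-first visits root, A, B, C, D.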

The full documentation is available here: Documentation.

Matt Cox

Vision Pro: A developer's perspective

An overview of developing a launch-day application for Vision Pro, from a developer's perspective.

This was originally posted on LinkedIn on February 2nd 2024.

Apple Vision Pro

Despite being a software engineer for almost 20 years, I’ve never written about my experience as a developer. Having said that, I’ve also never taken the chance to be an early adopter of a new platform before, or jumped in so quickly with both feet.

Today Apple launches a new spatial computing platform: Vision Pro. As a developer who has spent every waking minute over the last few months developing applications for this platform, I thought I would offer my thoughts from a developer's perspective, alongside the many hot takes that are likely springing up across LinkedIn, YouTube and other websites today.

swatchbook remix has launched on the visionOS App Store today as a day one app (iPhone and iPad are on the way), and we’ve been extremely lucky to be featured prominently by Apple in the store. Remix is an immersive application for applying materials to footwear, apparel and other products, targeted at consumers and prosumers.

The remix logo

Development of remix began just over 6 months ago in August 2023. Initially I was building out the framework for an iPad and iPhone app, intending it to be a sandbox for experimenting with new Apple platform technologies such as SwiftData, Observables, and RealityKit. In November, I shifted development entirely to visionOS, focusing on building an immersive experience for this new spatial computing platform.

Before I go any further I should make it clear that I don’t actually have a Vision Pro yet, and despite having access to one for a day in November (which bizarrely I still can’t talk about), I’ve been slumming it in the simulator for the last few months. This isn’t going to be an opinion on the new device, but more an insight into what it’s like to develop for the visionOS platform.

It’s not just an iPhone for your face

If you’re a developer for Apple platforms, it shouldn't come as a surprise that creating apps for Vision Pro is a very similar experience to developing for iPhone; you use Xcode, write code in Swift, create interfaces in SwiftUI, mostly use the same APIs, and if you don’t have access to the hardware, you use a simulator to get a general feeling of the experience your users will have.

The similarities between iPhone and Vision Pro development appear to extend to the applications on the platform, and I was disappointed that the majority of first-party apps demoed at WWDC '23 were nothing more than iPad apps floating in mid-air. A sizeable number of third-party apps available at launch continue this trend. Coming from a 3D graphics background, I wanted to do something a bit more “spatial” for our app, something that really exploited the uniqueness of the platform.

On initial launch, remix features a main window, guiding you through a wizard to select gorgeous 3D assets, along with collections of real world materials.

As you enter the editor, the 3D asset is dropped into the real world and ready for material assignments.

Interaction is made easy by simply looking at a part and performing basic gestures. A tap selects the part you are looking at, opening a material picker to change the material assigned to that part.
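
For developers curious how this kind of interaction is wired up on visionOS, the platform pattern is a spatial tap gesture targeted at RealityKit entities. This is a generic sketch of that pattern, not remix's actual code; EditorView and the entity setup are hypothetical:

// A generic look-and-tap selection sketch using SwiftUI and RealityKit.
import SwiftUI
import RealityKit

struct EditorView: View {
    var body: some View {
        RealityView { content in
            // Add the product entity to the scene here. For an entity to be
            // tappable it needs a CollisionComponent and an InputTargetComponent.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // The entity the user was looking at when they tapped.
                    let tappedPart = value.entity
                    print("Selected \(tappedPart.name)")
                    // Present a material picker for the tapped part…
                }
        )
    }
}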

The assets and materials are exceptionally high quality, and because the immersive experience of the Vision Pro allows you to walk up to the asset and interact with it at very close distances, a level of realism is achievable that simply isn't possible on iPad and iPhone.

The app supports background syncing, sharing renders to social media and is available for free with an optional subscription to unlock premium content.

Not everything has to be a gesture

When Apple Watch was introduced in 2015, it was primarily pitched as a fashion accessory and communication device, with the product line including the premium 18-karat gold Apple Watch Edition for $10,000. It didn't have GPS, it was destroyed if you went for a swim, and the fancy health tracking that so many of us take for granted was nowhere to be found. It knew what it was, but that wasn't necessarily what the public wanted. Vision Pro knows what it is, and Apple clearly have a plan for its future, but I can guarantee that any assumptions any of us make about how it will be used, or what it will look like in 10 years, will be completely wrong.

Designing for visionOS is a unique experience, distinct from any other platform. Since it's a new technology with no existing references, the design journey starts from scratch. It becomes very easy to paint outside the lines on the unbounded canvas, and to assume that every experience must be immersive. You're designing a list? Well, of course it must be a 3D list. You're designing a textbox? How about some 3D text with that? Striking the balance between traditional windowed experiences and entering the immersive space is a delicate act that Apple have clearly given a lot of thought to. In their Human Interface Guidelines, Apple suggest focusing on familiar UI paradigms like windows; prioritising comfort and familiarity when performing regular tasks; and reserving immersive experiences for when you really want a moment to land.

Despite this, I still get an uneasy feeling when deciding to use a window, almost like I'm running away scared back to the thing that feels safe and familiar.

Sketched design ideas for the remix app on Apple Vision Pro

For an app aimed primarily at creative people, the inclination with remix was to build a truly immersive experience, placing the designer into a virtual studio with an infinite wall of swatches around them. Due to technical limitations, and an ever-approaching deadline, I simplified the interface to present a list of materials in a window that can be scaled in the environment. This change may seem minor, but it has the effect of focusing the user's attention on their design and the 3D asset, which can be presented without distraction in the real world.

remix being used to create a puffer jacket design

An additional challenge was how to handle the small parts and extreme detail on the remix assets, including zippers, buttons and stitches, which don't always lend themselves to the look-based interaction of visionOS; as you can imagine, selecting a cotton stitch from 6ft away with any sense of precision is difficult. To solve this, I added an expansion feature that breaks the asset into individual parts - similar to the exploded-view drawings which are common in CAD workflows - making it easier to focus on one part at a time. Additionally, a part list was added to the main user interface, to provide a more traditional workflow for those users who need it.

For the expanded view, the aim was to include a gesture that allowed the user to literally pull the asset apart. However, much like touch on iOS, there are limits to how many simultaneous unique interactions can be achieved with just two hands, so the feature (for now) is presented as a button in the main interface.

As we move forwards, and particularly after I get my hands on a device, I would expect the interface to evolve quite significantly, from minor tweaks such as introducing depth to the windows, and adding sound effects and subtle animations to make the experience more pleasurable, to more complex improvements based on long-term experience with Vision Pro.

Only time will tell how the platform evolves and how it will be used in the wild. I think remix feels like a modern application that is at home on Vision Pro, and I'm proud of the design, but I'm sure the developers of the first iPhone apps with their extremely skeuomorphic designs felt the same way.

Here be dragons

Apple's move towards AR has been ongoing for multiple years, with core technologies such as ARKit and RealityKit introduced in 2017 and 2019 respectively. Whilst Vision Pro may be a new platform, the foundation for it has been under development in public for years.

RealityKit and ARKit icons

As a developer for Apple platforms, the best approach I've found is to swim with the current, and try - wherever possible - to develop the way Apple wants you to develop. As soon as you stray off the path even slightly, then "here be dragons".

When Apple introduced Vision Pro and its associated development frameworks at WWDC '23, building on the foundation they had been laying for years, the direction was clear: they had built a platform where people who were new to 3D could dip their toes in the water and develop apps, without having to worry too much about how 3D works.

Introduced at WWDC '23, a central component of the Vision Pro development suite is Reality Composer Pro. This application, designed with developers in mind, is tailored for preparing content specifically for the platform. It consumes 3D assets, and allows the user to build material and geometry shaders using an intuitive node graph, as well as set up dynamics, triggers and basic particle simulations.

Reality Composer Pro interface

Reality Composer Pro is a powerful application, and will likely continue to be developed and improved as the visionOS platform progresses. However, despite being the path that Apple expects and intends developers to take, it's indicative of the larger challenges faced when developing something complex for visionOS; as soon as you try to build an experience directly in code, and avoid Reality Composer Pro, there are those dragons again.

A prime example of this is custom materials and shaders. Apple have chosen for visionOS to exclusively support custom materials configured as node graphs in Reality Composer Pro. Whilst this method is more accessible to artists, and offers significant technical capabilities to those not well-versed in 3D graphics, it can alienate more technical users, who can encounter feature limitations that are insurmountable. During development of remix, this limitation prevented us from adding support for selection outlines and shaders that adapt to changes in lighting, as Reality Composer Pro does not provide the features needed to build such shaders, despite them being trivial in more industry-standard, code-based shading languages.
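
For context, this is roughly what the sanctioned path looks like in code: materials are authored as node graphs in Reality Composer Pro and loaded at runtime as ShaderGraphMaterial instances, whose exposed parameters can be set from code even though the graph itself cannot be authored there. A minimal sketch, assuming the standard Xcode visionOS template with its generated RealityKitContent package; the material path, scene file and parameter name are hypothetical:

// Load a node-graph material authored in Reality Composer Pro and apply
// it to a model entity. Only exposed parameters can be driven from code.
import RealityKit
import RealityKitContent  // Xcode-generated content package (assumption)

func applyAuthoredMaterial(to model: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(
        named: "/Root/PartMaterial",   // hypothetical material path
        from: "Scene.usda",            // hypothetical Reality Composer Pro scene
        in: realityKitContentBundle
    )
    try material.setParameter(name: "Roughness", value: .float(0.5)) // hypothetical graph input
    model.model?.materials = [material]
}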

The RealityKit framework has clearly been inspired by game development, adopting an Entity Component System pattern that is common in that space. Often as a developer of a non-game creative application, you can find yourself running up against the limitations of these architectural decisions.

One of the key workflows in remix is the ability to tap on any part in the immersive space to select it. This means we require an efficient way to know which triangle on which part the user is looking at. RealityKit depends on generating collision shapes for hit testing, from primitives such as boxes, capsules and convex hulls. These can be expensive to create if you're not going through Reality Composer Pro, and they provide terrible hit surfaces that don't allow anywhere near the level of precision required for user interaction. On visionOS a static mesh can be generated from a triangle soup, but the APIs handle nowhere near the amount of geometry used by the visualisation mesh. The eventual solution I found was to build a simplified version of the geometry and use two surfaces: one for visualisation, and another for hit testing. This solves the problem, but adds an additional burden to providing regularly updated content for the application.
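
To make the two-surface idea concrete, here is a rough sketch under stated assumptions: a pre-decimated triangle soup is supplied for the hit surface (the decimation step itself is up to the content pipeline), and the visionOS ShapeResource.generateStaticMesh(positions:faceIndices:) API is used to build the collision shape:

// Rough sketch: pair a detailed visual mesh with a simplified collision
// mesh, so hit testing is precise without paying for the full geometry.
import RealityKit

func makeSelectablePart(
    visualMesh: MeshResource,
    materials: [any Material],
    hitPositions: [SIMD3<Float>],   // decimated triangle soup (assumption)
    hitIndices: [UInt16]
) async throws -> ModelEntity {
    // The high-detail mesh is what the user sees.
    let part = ModelEntity(mesh: visualMesh, materials: materials)

    // The simplified mesh is what the user actually hits.
    let hitShape = try await ShapeResource.generateStaticMesh(
        positions: hitPositions,
        faceIndices: hitIndices
    )
    part.components.set(CollisionComponent(shapes: [hitShape]))

    // Required on visionOS for the part to receive look/tap input.
    part.components.set(InputTargetComponent())
    return part
}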

This limitation seems obvious and understandable to those building game experiences with predefined geometry, but I wonder if technical limitations like these will prove to be a barrier to the adoption of Vision Pro as a true content creation platform, where geometry is being created by the user interactively, and content cannot be prepared in advance.

The future

Vision Pro is an extremely powerful and impressive piece of hardware, and as a development platform, it allowed a single developer with experience building against Apple APIs to deliver a polished app in a surprisingly small amount of time. I do worry, however, that to grow as a platform and maintain a healthy ecosystem of software, it has a long way to go before a diverse range of complex apps can be built for it.

Additionally, the many new APIs and frameworks introduced for visionOS - as limited as they may sometimes feel - are still restricted to that platform, and at the time of writing are unavailable on iOS and macOS. This makes building and maintaining a cross platform application challenging.

I look forward to WWDC '24 and hope visionOS as a platform has the opportunity to take a step further towards maturity. I hope many of the holes in this version-one product can be filled, and that, if Apple are truly serious about 3D and AR, they invest the time and effort to provide a first-class development experience on all platforms.

I will hopefully be getting a Vision Pro very soon, so watch this space for more thoughts.
