Introduction to aspects in Bazel

Aspects are Bazel’s way of attaching additional information and behavior to the build graph. They allow us to add extra logic to rules without modifying the rule implementations themselves. Common use cases include validation actions or analysis tasks such as detecting unused dependencies. These are just a few examples; aspects enable much more complex workflows.

A note on using aspects

Earlier, I mentioned that aspects allow us to attach additional logic to rules without modifying rule code. This is true, but only in one of the two ways aspects can be used:

  • Command-line invocation – aspects are applied externally at build time. This is what we will focus on in this article.
  • Attribute attachment – aspects are attached directly to rule attributes, which requires modifying the rule definition. This approach will be covered in the next article.

Writing aspects

Like most things in Bazel, aspects are rule-like and generally follow this pattern:

  1. Write an implementation function that accepts two arguments: target and ctx
  2. Optionally execute actions
  3. Return providers
  4. Create the aspect by calling the aspect() function and passing the implementation and configuration arguments

In this example, we will write an aspect that operates on swift_library targets (specifically, anything that propagates the SwiftInfo provider). The aspect will generate a .txt file containing the names of the Swift modules that the target depends on.

Given the following target:

swift_library(
    name = "lib",
    deps = [":lib2"],
    srcs = glob(["*.swift"]),
)

When the aspect is applied to this target, it will produce a text file containing lib2.

Loading required providers

In aspects.bzl, we first load the SwiftInfo provider from rules_swift:

load("@rules_swift//swift:providers.bzl", "SwiftInfo")

Next, we define our own provider to propagate the generated report files transitively:

SwiftDepsInfo = provider(fields = ["report_files"])

Aspect implementation

Now we can write the implementation function:

def _swift_deps(target, ctx):
    module_names = []

    if hasattr(ctx.rule.attr, "deps"):
        for dep in ctx.rule.attr.deps:
            if SwiftInfo in dep:
                for module in dep[SwiftInfo].direct_modules:
                    module_names.append(module.name)

    out = ctx.actions.declare_file("swift_deps_" + ctx.label.name + ".txt")
    ctx.actions.write(
        output = out,
        content = "\n".join(module_names),
    )

    transitive_files = []
    if hasattr(ctx.rule.attr, "deps"):
        for dep in ctx.rule.attr.deps:
            if SwiftDepsInfo in dep:
                transitive_files.append(dep[SwiftDepsInfo].report_files)

    all_files = depset(direct = [out], transitive = transitive_files)
    return [
        SwiftDepsInfo(report_files = all_files),
        DefaultInfo(files = all_files),
    ]

So the code above does a few simple things:

  1. Collects Swift module names from the deps attribute via the SwiftInfo provider
  2. Declares an output file and writes the module names into it
  3. Collects transitive report files so they materialize as build artifacts
  4. Returns both a custom provider (SwiftDepsInfo) and DefaultInfo

Creating the aspect

The final step is to define the aspect itself:

swift_deps_aspect = aspect(
    implementation = _swift_deps,
    attr_aspects = ["deps"],
)

Note: The attr_aspects = ["deps"] argument tells Bazel to propagate this aspect along the deps attribute. In other words, when the aspect is applied to a rule, Bazel will also apply it to every target listed in that rule’s deps.
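The propagation behavior can be modeled as a simple graph walk. The following is a toy sketch in plain Python (Bazel aspects are written in Starlark and evaluated by Bazel itself; this only illustrates the traversal), using the same lib/lib2/lib3 names as the example targets later in this article:

```python
# Hypothetical dependency graph mirroring the example BUILD file:
# lib -> lib2 -> lib3.
deps = {
    "lib": ["lib2"],
    "lib2": ["lib3"],
    "lib3": [],
}

def apply_aspect(target, visited=None):
    # Like attr_aspects = ["deps"]: visit the target the aspect was
    # applied to, then recurse into everything listed in its deps.
    if visited is None:
        visited = []
    if target not in visited:
        visited.append(target)
        for dep in deps[target]:
            apply_aspect(dep, visited)
    return visited

print(apply_aspect("lib"))  # ['lib', 'lib2', 'lib3']
```

Applying the aspect to :lib therefore also runs it on :lib2 and :lib3, which is why three separate report files show up in the build output.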

Running the aspect

Given the following BUILD.bazel file:

load("@rules_swift//swift:swift_library.bzl", "swift_library")

swift_library(
    name = "lib",
    srcs = ["main.swift"],
    deps = [":lib2"],
)

swift_library(
    name = "lib2",
    srcs = ["main.swift"],
    deps = [":lib3"],
)

swift_library(
    name = "lib3",
    srcs = ["main.swift"],
    module_name = "CustomModule",
)

Running:

bazel build :lib --aspects aspects.bzl%swift_deps_aspect

will produce three additional text files which you can locate under bazel-bin/:

  • swift_deps_lib.txt – contains lib2
  • swift_deps_lib2.txt – contains CustomModule
  • swift_deps_lib3.txt – empty, since it has no dependencies

Why does swift_deps_lib2.txt contain CustomModule instead of lib3? Because we are explicitly extracting the Swift module name from the SwiftInfo provider. If instead we wanted the target name, we could use dep.label.name, or str(dep.label) to get the fully qualified Bazel label.
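To make the expected report contents concrete, here is a toy Python model (again, not Starlark) of the report generation: each target’s report joins the module names of its direct deps, which is exactly why lib2’s report contains CustomModule rather than lib3:

```python
# Hypothetical targets and their module names; lib3 overrides its
# module name to "CustomModule" via module_name in the BUILD file.
deps = {"lib": ["lib2"], "lib2": ["lib3"], "lib3": []}
module_name = {"lib": "lib", "lib2": "lib2", "lib3": "CustomModule"}

def report(target):
    # Mirrors '"\n".join(module_names)' in the aspect implementation.
    return "\n".join(module_name[d] for d in deps[target])

print(report("lib"))   # lib2
print(report("lib2"))  # CustomModule
print(report("lib3"))  # empty string: no dependencies
```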

Going further

This article was a brief introduction to Bazel aspects, intended to set the foundation for the next one. In the next article, we will look at more concrete use cases and explore attaching aspects directly to rule attributes from Starlark, as well as some of the other arguments that the aspect() function accepts.

Bazel toolchains, repository rules and module extensions

In my previous article I showed how to create a stupidly simple Bazel rule. While that rule was not useful in any way, shape, or form, it provided a gentle introduction to writing Bazel rules and helped build a mental model around them.

This week we will look at toolchains, a more advanced concept that becomes extremely important once your rules depend on external tools.

This walkthrough wires up macOS only to keep the example small. Adding Linux or Windows is the same pattern: download a different binary and register another toolchain(...) target with different compatibility constraints.

Toolchains

A Bazel toolchain is a way of abstracting tools away from rules. Many people describe it as dependency injection.

More precisely: toolchains are dependency injection with platform-based resolution — Bazel selects the right implementation based on the execution and target platforms.

Instead of hard-coding a tool inside a rule, the rule asks Bazel to give it “the correct tool for this platform”.

Example: rules_pkl

We will create an oversimplified rule called pkl_library that turns .pkl files into their JSON representation using the PKL compiler. PKL is Apple’s programming language for producing configurations. There is an official rules_pkl ruleset, and this article doesn’t even scratch the surface of the capabilities it offers.

To do this we need:

  1. A way to download the PKL binary
  2. A toolchain
  3. A rule that uses the toolchain

Downloading the PKL binary

We start by writing a repository rule that downloads the PKL compiler and exposes it as a Bazel target.

Create repositories.bzl:

def _pkl_download_impl(repository_ctx):
    repository_ctx.download(
        url = repository_ctx.attr.url,
        output = "pkl_bin",
        executable = True,
        sha256 = repository_ctx.attr.sha256,
    )

    repository_ctx.file(
        "BUILD.bazel",
        """
load("@bazel_skylib//rules:native_binary.bzl", "native_binary")

native_binary(
    name = "pkl",
    src = "pkl_bin",
    out = "pkl",
    visibility = ["//visibility:public"],
)
""",
    )

pkl_download = repository_rule(
    implementation = _pkl_download_impl,
    attrs = {
        "url": attr.string(mandatory = True),
        "sha256": attr.string(mandatory = True),
    },
)

This does two things:

  • Downloads the PKL binary
  • Wraps it using native_binary, which creates a proper Bazel executable without relying on a shell

The resulting binary is available as @pkl_macos//:pkl.

Exposing the repository via a module extension

We’re still using a repository rule (https://bazel.build/external/repo) to do the download; bzlmod module extensions (https://bazel.build/external/extension) are just how we call that repository rule from MODULE.bazel.

Create extensions.bzl:

load("//:repositories.bzl", "pkl_download")

def _pkl_module_extension_impl(ctx):
    pkl_download(
        name = "pkl_macos",
        url = "https://github.com/apple/pkl/releases/download/0.30.2/pkl-macos-aarch64",
        sha256 = "75ca92e3eee7746e22b0f8a55bf1ee5c3ea0a78eec14586cd5618a9195707d5c",
    )

pkl_extension = module_extension(
    implementation = _pkl_module_extension_impl,
)

In MODULE.bazel:

bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "bazel_skylib", version = "1.9.0")

pkl_extension = use_extension("//:extensions.bzl", "pkl_extension")
use_repo(pkl_extension, "pkl_macos")

Now the binary can be referenced as @pkl_macos//:pkl.

Creating the toolchain

A toolchain is just a rule that returns a ToolchainInfo provider.

toolchains.bzl:

def _pkl_toolchain_impl(ctx):
    return [platform_common.ToolchainInfo(
        pkl_binary = ctx.executable.pkl_binary,
    )]

pkl_toolchain = rule(
    implementation = _pkl_toolchain_impl,
    attrs = {
        "pkl_binary": attr.label(
            executable = True,
            cfg = "exec",
            allow_files = True,
            mandatory = True,
        ),
    },
)

The important part is cfg = "exec", which ensures the binary runs on the execution platform.

Registering the toolchain

In BUILD.bazel:

load("//:toolchains.bzl", "pkl_toolchain")

toolchain_type(
    name = "pkl_toolchain_type",
    visibility = ["//visibility:public"],
)

pkl_toolchain(
    name = "pkl_toolchain_macos_impl",
    pkl_binary = "@pkl_macos//:pkl",
)

toolchain(
    name = "pkl_toolchain_macos",
    toolchain = ":pkl_toolchain_macos_impl",
    toolchain_type = ":pkl_toolchain_type",
    exec_compatible_with = ["@platforms//os:macos"],
    target_compatible_with = ["@platforms//os:macos"],
)

To register the toolchain we need to modify our MODULE.bazel:

register_toolchains("//:pkl_toolchain_macos")

Using the toolchain in a rule

Now we can write pkl_library.

Create rules.bzl:

def _pkl_library_impl(ctx):
    toolchain = ctx.toolchains["//:pkl_toolchain_type"]
    binary = toolchain.pkl_binary

    compiled_files = []
    for src in ctx.files.srcs:
        compiled_file_name = src.basename.replace(".pkl", ".json")
        compiled_file = ctx.actions.declare_file(compiled_file_name)

        ctx.actions.run(
            executable = binary,
            tools = [binary],
            inputs = [src],
            outputs = [compiled_file],
            arguments = [
                "eval",
                src.path,
                "--format",
                "json",
                "-o",
                compiled_file.path,
            ],
            mnemonic = "PKLCompile",
        )

        compiled_files.append(compiled_file)

    return [DefaultInfo(files = depset(compiled_files))]

pkl_library = rule(
    implementation = _pkl_library_impl,
    attrs = {
        "srcs": attr.label_list(
            allow_files = [".pkl"],
            mandatory = True,
        ),
    },
    toolchains = ["//:pkl_toolchain_type"],
)

The rule never knows which concrete PKL binary is being used — it only sees the resolved toolchain.

NOTE: Writing a for loop that creates a new action per file, instead of invoking the tool once with the whole list of files, is a decision we need to make on a case-by-case basis. Sometimes it is faster to run many small actions in parallel than to invoke a tool once with a list of files to process. It heavily depends on the tool itself, and it shows how flexible Bazel can be in these scenarios.

Final thoughts

This was a practical look at repository rules, module extensions, toolchains, and how they fit together.

Toolchains are one of Bazel’s most powerful features. Once you start writing rules that depend on real tools (compilers, linters, generators), this pattern becomes unavoidable.

This marks the completion of my second article in which I try to give real-world examples of using Bazel’s various features.

Writing a simple Bazel rule

This article does not get into what the Bazel build system is or why you might consider using it. Instead, it focuses on explaining, in very simple terms, how to write a Bazel rule.

First things first

You need a Bazel repository, often referred to as a workspace. To create one, you need a MODULE.bazel file at the root of your project. This file is used to declare external dependencies, although that is not its only purpose. For now, let’s keep MODULE.bazel empty.

Next is a BUILD.bazel file. This is where rules are instantiated (used). The result of instantiating a rule is a Bazel target. Create an empty BUILD.bazel file at the root of the project as well.

Writing the rule

Think of a Bazel rule as a way to teach Bazel how to produce something. We will start by producing a simple text file and then make it slightly more complex.

We will call this rule hello. It will produce a file named hello.txt containing the word "hello".

Create a file called hello.bzl with the following content:

def _hello_impl(ctx):
    file = ctx.actions.declare_file(ctx.label.name + ".txt")
    ctx.actions.write(
        output = file,
        content = "hello",
    )
    return DefaultInfo(files = depset([file]))

This function is the implementation of our hello rule. Notice that the function name ends with _impl. This is a common Bazel convention for rule implementation functions, although it is not strictly required.

The function takes a single parameter, ctx. Every rule implementation receives a ctx (context) object, which provides access to attributes, labels, and the actions API used to interact with Bazel.

Before creating an action, we declare the output file:

file = ctx.actions.declare_file(ctx.label.name + ".txt")

This tells Bazel that the rule will produce a file named after the target (hello.txt in our case). The returned file object represents a declared output that can be passed to actions.

Next, we create an action that writes content to the file:

ctx.actions.write(
    output = file,
    content = "hello",
)

Here we explicitly tell Bazel what the output of the action is. Being explicit about outputs (and inputs, when present) is a defining characteristic of Bazel’s build model. In this example, there are no inputs—only an output.

Finally, we return a result from the rule using the DefaultInfo provider:

return DefaultInfo(files = depset([file]))

This makes the produced file part of the target’s default outputs. We will not go into providers or depset in this article; the official documentation covers those topics in depth.

Now that the implementation function exists, we define the rule itself by calling rule():

hello = rule(
    implementation = _hello_impl,
)

This is enough to define a usable rule.

Using the rule

In the previously created BUILD.bazel file, we first load the rule:

load(":hello.bzl", "hello")

Then we instantiate it:

hello(
    name = "hello",
)

Now run:

bazel build :hello

You should see output similar to:

INFO: Analyzed target //:hello (5 packages loaded, 7 targets configured).
INFO: Found 1 target...
Target //:hello up-to-date:
  bazel-bin/hello.txt
INFO: Elapsed time: 0.285s, Critical Path: 0.00s
INFO: 2 processes: 2 internal.
INFO: Build completed successfully, 2 total actions

Pay attention to the line bazel-bin/hello.txt. This is where Bazel exposes the output file (typically via a symlink). Open it with:

open bazel-bin/hello.txt

You should see that the file contains the word hello.

Rule attributes

To make this rule somewhat useful, we will add a new attribute called content that replaces the hardcoded "hello" string.

The first step is to declare that our rule has an attribute named content. We do this by providing a dictionary to the attrs parameter of rule():

hello = rule(
    implementation = _hello_impl,
    attrs = {
        "content": attr.string(mandatory = True),
    },
)

Here we declare a mandatory string attribute named content. Bazel will enforce that this attribute is provided when the rule is instantiated.

Next, we read the value of the attribute in the rule implementation function. Rule attributes are accessible through ctx.attr. We replace the hardcoded value with ctx.attr.content:

def _hello_impl(ctx):
    file = ctx.actions.declare_file(ctx.label.name + ".txt")
    ctx.actions.write(
        output = file,
        content = ctx.attr.content,
    )
    return DefaultInfo(files = depset([file]))

Finally, we provide the attribute value when instantiating the rule in the BUILD.bazel file:

hello(
    name = "hello",
    content = "Hello, world!",
)

After running:

bazel build :hello

the file located at bazel-bin/hello.txt will contain the provided text.

That’s it

This concludes my first article on the Bazel build system. I plan to expand this rule in subsequent articles to demonstrate more advanced concepts and gradually make it more useful. This also marks my first article of the year, and I plan to write one technical article every week until the year 2026 concludes.

Reverse Engineering Apple’s on-demand resource Asset Packs: How to Recreate .assetpack Files with Standard Unix Tools

I recently ran into a problem while integrating Apple’s on-demand resources system into Bazel. Essentially, I needed a way to generate .assetpack archives from the command line without calling into xcodebuild.

After spending way too much time debugging this, I finally figured out exactly what Apple’s toolchain does to create these files - and more importantly, how to recreate them using standard macOS command-line tools.

The Problem: Asset Packs Look Like Regular Zip Files (But Aren’t)

When you run file on an .assetpack, it tells you it’s a zip archive:

$ file my-assets.assetpack
my-assets.assetpack: Zip archive data

So naturally I thought I would create the expected file hierarchy and zip it:

zip -r new-assets.assetpack some-assetpack-folder/

The result? Your asset pack becomes unusable. iOS will reject it, and you’ll get unhelpful errors about not being able to move the file to the NSBundle.

Investigating the Differences

I used zipinfo -v to examine the internal structure of both Apple’s original asset packs and my re-zipped versions:

Apple’s original asset pack:

  • Compression: none (stored), zero compression
  • File ordering: very specific sequence starting with META-INF/
  • Structure: flat hierarchy with files at zip root level
  • Metadata: no extended attributes or extra fields
  • Encoding version: 3.0

My zipped version:

  • Compression: deflated, standard compression
  • File ordering: alphabetical (zip’s default)
  • Structure: nested bundle directory structure
  • Metadata: full of Unix extended attributes and timestamps
  • Encoding version: 2.0

The Critical Requirements

After lots of experimentation, I discovered Apple’s asset packs have five strict requirements:

1. Flat Hierarchy Structure

The contents must be at the zip root level, not nested in a bundle directory. Apple’s structure looks like:

META-INF/
_CodeSignature/
SomeFile
Info.plist

Not like:

com.company.app.bundle-hash/
├── META-INF/
├── _CodeSignature/
├── SomeFile
└── Info.plist

2. Zero Compression

Every single file must use the store method (no compression). This is critical - any compressed files will cause rejection.

3. Specific File Ordering

The central directory must have entries in this exact order:

  1. META-INF/ (directory)
  2. META-INF/com.apple.ZipMetadata.plist (file)
  3. _CodeSignature/ (directory)
  4. Code signature files
  5. Content files
  6. Info.plist

4. No Extended Attributes

The zip must be clean of any extended attributes, Unix UID/GID info, or extra metadata fields.

5. The Critical Metadata File

The META-INF/com.apple.ZipMetadata.plist file must be the second entry in the zip. This file contains metadata that iOS validates.

The Solution: Recreating Asset Packs Correctly

Here’s the call to zip that packages the assetpack correctly:


# Navigate to the assetpack directory
cd some-assetpack-directory/

# Recreate with proper ordering and settings
(echo "META-INF/"; echo "META-INF/com.apple.ZipMetadata.plist"; find . -mindepth 1 -not -path "./META-INF*") | zip -0 -X recreated.assetpack -@

Let me break down what each part does:

  • echo "META-INF/" - Ensures META-INF directory is first
  • echo "META-INF/com.apple.ZipMetadata.plist" - Puts the critical metadata file second
  • find . -mindepth 1 -not -path "./META-INF*" - Adds everything else while excluding META-INF (to avoid duplicates)
  • zip -0 -X ... -@ - Creates zip with zero compression (-0), no extended attributes (-X), reading file list from stdin (-@)
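The same packaging step can be sketched with Python’s zipfile module instead of the zip CLI. This is a hedged illustration, not a verified drop-in replacement: it reproduces the stored (zero) compression and the META-INF-first ordering, but whether zipfile’s default metadata matches zip -X in every detail would need checking against a real device. Paths are hypothetical.

```python
import os
import zipfile

def recreate_assetpack(src_dir, out_path):
    # Order matters: META-INF/ first, the metadata plist second.
    ordered = ["META-INF/", "META-INF/com.apple.ZipMetadata.plist"]
    # Add everything else, skipping META-INF to avoid duplicates
    # (same idea as the find/-not -path filter above).
    for root, dirs, files in os.walk(src_dir):
        for name in sorted(dirs) + sorted(files):
            rel = os.path.relpath(os.path.join(root, name), src_dir)
            rel = rel.replace(os.sep, "/")
            if os.path.isdir(os.path.join(src_dir, rel)):
                rel += "/"
            if not rel.startswith("META-INF"):
                ordered.append(rel)
    # ZIP_STORED means zero compression for every entry.
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_STORED) as zf:
        for rel in ordered:
            zf.write(os.path.join(src_dir, rel), arcname=rel)
```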

The Bottom Line

Apple’s asset packs aren’t just zip files - they’re zip files with very specific structural requirements. iOS validates not just the content, but the exact structure, compression settings, and file ordering of the archive.

With the right zip parameters and file ordering, it is very easy to create them.

Integrating Conan with Xcode to manage C/C++ libraries

In my last post I went over how to manually link C++ libraries to an Xcode project. While that is useful to know, it gets tedious to maintain once you have multiple C++ dependencies. In addition, if the library you want to link does not come with a prebuilt binary, you are responsible for building it yourself, which in some cases may not be fun at all.

Enter Conan

Conan is a package manager for C/C++ that, in addition to fetching libraries, allows for easy building of libraries for various CPU architectures, which I personally find incredibly useful.

Why?

Over the past few weeks I spent some time building a cross-platform C++ library for iOS and Android, which ended up depending on Crypto++. This meant that besides building Crypto++ from source for iOS and the iOS simulator, I now needed to build it for four more architectures (armv7, armv8, x86, and x86_64) that Android runs on.

Integrating Conan with Xcode

First things first, make sure that you have Conan installed. The easiest way is with Homebrew: simply open a terminal and run brew install conan. Once that’s sorted out, change directory to where your Xcode project is and create a new “conanfile.txt” file. Make sure that it contains the following:

[requires]
cryptopp/8.8.0
[generators]
XcodeDeps
XcodeToolchain
[layout]
cmake_layout

This sets up Conan to look for version 8.8.0 of Crypto++. Then, in the “generators” section, the file tells Conan to generate xcconfig files that will ultimately help us link the library.

Next, we need to create a Conan profile that describes how to build the library. It contains information like which CPU architecture to build for, whether to build in debug or release mode, and so on. Still in the directory containing your Xcode project, create an empty file and name it “simulator-profile”. You can pick whatever name you like; this is just my preference. It should contain the following:

[settings]
arch=armv8
build_type=Debug
compiler=apple-clang
compiler.cppstd=gnu17
compiler.libcxx=libc++
compiler.version=15
os=iOS
os.version=17.0
os.sdk=iphonesimulator

This is pretty self-explanatory. It tells Conan to build the library for the armv8 architecture using apple-clang version 15, and it specifies the minimum iOS deployment target in addition to which SDK to build for.
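For comparison, a device profile would follow the same format. The values below are an assumption based on the simulator profile above, swapping the SDK to iphoneos; verify the compiler version and deployment target against your own setup:

```ini
[settings]
arch=armv8
build_type=Debug
compiler=apple-clang
compiler.cppstd=gnu17
compiler.libcxx=libc++
compiler.version=15
os=iOS
os.version=17.0
os.sdk=iphoneos
```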

Building and linking

After installing Conan and setting up “conanfile.txt” and “simulator-profile”, it is time to build. Make sure your working directory in the terminal is the one that contains “conanfile.txt” and run:

conan install . --build=missing --profile=simulator-profile --output-folder=conan-generated

Here is the breakdown of the entire command:

  • conan install . runs Conan’s “install” command. The “.” tells Conan to look for “conanfile.txt” in the current working directory.
  • --build=missing tells Conan to build from source any dependency for which a prebuilt binary is not available, hence the word “missing”.
  • --profile=simulator-profile passes the profile file that I created earlier.
  • --output-folder=conan-generated is the directory where Conan will generate files using the generators specified in “conanfile.txt”. I named it “conan-generated”, but you can name it whatever you like; a popular choice is “build”.

After the command has run, you should see a “conan-generated” directory next to your other project files. I recommend adding “conan-generated/” to .gitignore. All that’s left is to open your Xcode project and add the “conan_config.xcconfig” file that is in the “conan-generated” directory. I won’t go into the specifics of using config files in Xcode; there are plenty of articles about that, like the one from NSHipster. It’s important to note that XcodeDeps aggregates all files and settings, so running the command above multiple times is not only acceptable but necessary if you want configurations generated for all cases. For example, if you want the library linked for both simulator and device, in both Debug and Release configurations, you should run the command multiple times with different parameters.

Closing words

Even though there is a bit of a learning curve and setup involved with Conan, it makes our lives much easier. Once you grasp the concepts, you realize that the integration between Conan and Xcode is fundamentally very simple. To deepen your understanding of Conan and its generators, it is best to consult the official Conan documentation. And to explore the vast universe of C/C++ libraries available for use with Conan, Conan Center is the best place for that.

Linking C++ static library in iOS project

Linking against a static C++ library in Xcode tends to get complicated. Even though the idea is simple, there are a few traps you can run into.

The idea

  1. Write C++ code.
  2. Write interface which will be usable from Objective-C and Swift.
  3. Package it as a static “.a” library.
  4. Link it to iOS app project.

Let’s start with simple C++ code

We will create a class named ExampleCode which exposes a single function, helloWorld(), that returns a static string.

    #ifndef ExampleCode_hpp
    #define ExampleCode_hpp

    #include <stdio.h>

    using namespace std;

    class ExampleCode {
    public:
        const char* helloWorld();
    };

    #endif /* ExampleCode_hpp */

And here is the implementation:

    #include "ExampleCode.hpp"

    const char* ExampleCode::helloWorld() {
        char const *str = "This is my library";
        return str;
    }

Creating interface for Objective-C and Swift

Because Swift still doesn’t have interoperability with C++, we need to use Objective-C++ to achieve our goal of using the library from Swift. In the same project as our dummy library, create a new Objective-C file with a header:

#import <Foundation/Foundation.h>

@interface NewLibrary : NSObject
- (NSString*)hello;
@end

Now, for the implementation file, it is important to change its extension from .m to .mm, because that is what makes it tap into C++ (i.e., makes it Objective-C++).

#import "NewLibrary.h"
#import "ExampleCode.hpp"

@implementation NewLibrary
- (NSString *)hello {
    ExampleCode* example = new ExampleCode();
    NSString* str = [NSString stringWithUTF8String:example->helloWorld()];
    return str;
}
@end

Also remember that ExampleCode.hpp is the C++ header file created above, which is why I import it here.

Packaging this code as static C++ .a library

This is fairly simple stuff, but here are the steps:

  1. Set your scheme to Release configuration.
  2. Build both for “Any iOS Simulator Device (arm64, x86_64)” and for “Any iOS Device (arm64)”. No, you can’t do both at once.
  3. Find your build products in Xcode’s derived data folder. You should see two folders, “Release-iphonesimulator” and “Release-iphoneos”. In each there is your “.a” library file and an “include” folder containing the “.h” Objective-C header file that we created earlier.

Now, most of the old advice says you should use the lipo command to create a universal (fat) binary. However, this will only get you in trouble. Firstly, if you try creating a universal binary from the simulator and iOS device “.a” library files, you will get an error telling you that both files are built for the arm64 architecture. That is because they actually are, ever since Apple Silicon was introduced. Secondly, don’t try to remove the arm64 architecture from the simulator “.a” library using lipo -remove arm64 path-to-simulator-lib.a -output library.a, not because it won’t work but because it will create trouble when debugging on the simulator later on. Actually, you don’t need to do anything with those files at this point.

Linking against your library in separate iOS project

In a new iOS project in Xcode, a couple of steps are required to link your newly created static C++ library:

  1. Before you add your “.a” files into the iOS project, it helps to rename them so you can differentiate the simulator from the real-device “.a” libraries, because they are different. You can name them something like NewLibrary-sim.a and NewLibrary-device.a.
  2. Add your “.a” files to the Xcode project. Make sure you check the “Copy items if needed” box when dropping the “.a” files in. Also make sure that you don’t add them to your target, because we will link them conditionally in a later step.
  3. Add the “.h” header file. Even though you have two “.a” files, you only need one header file. Again, make sure you check “Copy items if needed” when dropping it into your project.
  4. In the project build settings, look for “Other Linker Flags”, and next to the Debug and Release configurations click the + icon and add two new entries, one for the simulator SDK and the other for the iOS SDK. In the simulator SDK entry, add the path to your simulator .a file; you can write it like $(SRCROOT)/Libraries/NewLibrary-sim.a, where $(SRCROOT) is the path to the root of your project. Repeat the same for the iOS SDK with $(SRCROOT)/Libraries/NewLibrary-device.a.
  5. Now, for the library to work, you also need to link against the C++ standard library. Fortunately, it is pretty straightforward: go to Build Phases for your target and add “libc++.tbd” under “Link Binary With Libraries”. This is a very important step, and one that many other articles fail to mention.
  6. Finally, because the interface for our library is written in Objective-C, we need to create a bridging header. You can do that manually, or you can add an empty Objective-C file to your project and Xcode will offer to create the bridging header for you. Whatever you choose, just make sure to import your library header in the bridging header so Swift recognizes your library’s public interface.
//
//  Use this file to import your target's public headers that you would like to expose to Swift.
//

#import "NewLibrary.h"

At this point you should be able to build the app either for simulator or device without any issues.

Things to keep in mind

  • Swift will get support for direct interoperability with C++ very soon. It actually already supports it, but the current stable Xcode version does not yet ship Swift 5.9. This means that Objective-C won’t be necessary anymore.
  • Don’t fall into the trap of removing arm64 from the simulator version of your library. Also, don’t go into build settings and add arm64 to “Excluded Architectures”. If you do, the simulator will use Rosetta to run your app, the debugging experience gets a lot slower, and the simulator starts to freeze.

Conclusion

Utilizing a language like C++ can be very beneficial, as it allows for code sharing between different platforms. However, it can get challenging if you are doing it for the first time or fail to perform one of the steps outlined in this article.

Introducing the existentialannotator: A Swift Command Line Tool that automatically marks all existential types with any

I am pleased to present a command line tool that I hacked together on a Saturday morning, the “existentialannotator,” which can be found on GitHub. As the name suggests, this tool performs a specific function: scanning your Swift files, identifying all declared protocols, and annotating all existential types with any. This will prove invaluable with the upcoming release of Swift 6. To get started, you have two options: install it via Homebrew, or obtain the source code directly from GitHub and build it yourself. Once installed, simply navigate to your working directory and execute the command existentialannotator . to let the tool do its job.

Background

The concept of existential types in Swift was not a heavily discussed topic until recently, when explicit existential annotations were introduced as part of the Swift Evolution process on GitHub. Essentially, an existential type represents the existence of some specific type without naming that type explicitly. This means that code like the example below will not compile in Swift 6:

protocol Vehicle {
  var batteryPercentage: Float { get }
}

struct HumanDriver {
  let vehicle: Vehicle

  func drive() {
    // Drive the vehicle
  }
}

The compilation fails because let vehicle: Vehicle uses an existential type without being explicitly annotated with the new keyword any.

Understanding Existential Types

In Swift, an existential type provides a way to handle values of different types in a unified manner, abstracting their specific type information. In the example above, we used the protocol Vehicle instead of a concrete type, demonstrating the essence of an existential type.

What’s New in Swift 6?

In Swift 6, every existential type must be marked with any. Failure to do so will result in a compilation error. Consequently, the code above let vehicle: Vehicle would now require the notation let vehicle: any Vehicle. This is where my tool, the Existential Annotator, comes in handy, particularly when dealing with large codebases.
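Assuming the Swift 6 language mode, the annotated version of the earlier example compiles cleanly:

```swift
protocol Vehicle {
  var batteryPercentage: Float { get }
}

struct HumanDriver {
  // `any Vehicle` marks the existential explicitly, as Swift 6 requires.
  let vehicle: any Vehicle

  func drive() {
    // Drive the vehicle
  }
}
```

The any keyword changes nothing about the runtime behavior; it only makes the cost of the existential box visible at the declaration site.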

Final Remarks

Although I put this tool together on a Saturday morning, it is not flawless, and there are numerous potential performance improvements that could be made. Nonetheless, I consider the tool complete, given its limited lifespan: as we move past the initial release of Swift 6, it will likely lose its relevance. Nevertheless, if you believe it could benefit from enhancements, please feel free to submit a pull request.

Using Swift withCheckedThrowingContinuation in methods without return value

When refactoring old closure-based code to the new Swift concurrency model, it is inevitable that you will come across a scenario where you need to call withCheckedThrowingContinuation from an enclosing method that has no return value. In that case, Xcode reports the error: Generic parameter ’T’ could not be inferred

Let’s consider the following block of code:

func fetchData() async throws {
    try await withCheckedThrowingContinuation({ continuation in
        URLSession.shared.dataTask(with: URL(string: "https://example.com")!) { data, response, error in
            if let error = error {
                continuation.resume(throwing: error)
                return
            }
            continuation.resume()
        }.resume()
    })
}

This code produces the error above because withCheckedThrowingContinuation has a generic parameter that the compiler usually infers from the return value of the enclosing method. However, our enclosing method fetchData has no return value, so the compiler raises the error.

Fortunately, the fix is incredibly simple: just cast the return type to Void.

func fetchData() async throws {
    try await withCheckedThrowingContinuation({ continuation in
        URLSession.shared.dataTask(with: URL(string: "https://example.com")!) { data, response, error in
            if let error = error {
                continuation.resume(throwing: error)
                return
            }
            continuation.resume()
        }.resume()
    }) as Void
}

It is important to note that this approach works for any generic method whose type parameter the compiler cannot infer, not just withCheckedThrowingContinuation or other methods specific to Swift concurrency.
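The same inference mechanics can be reproduced outside of concurrency. Here is a minimal sketch (the function name is illustrative, not from any library) of a generic function whose type parameter appears only in the return position, so the compiler must infer it from the surrounding context:

```swift
// T appears only in the return type, so the compiler has nothing
// to infer it from unless the call site provides context.
func makeDefault<T: ExpressibleByIntegerLiteral>() -> T {
    return 0
}

// A bare `makeDefault()` statement fails with "Generic parameter 'T'
// could not be inferred"; the `as` coercion supplies the missing context.
let value = makeDefault() as Int
print(value)
```

This is exactly what as Void does in the fetchData example: it pins the generic parameter when the enclosing method offers no return type to infer from.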

How to check if Xcode is building for previews

Lately, I find myself dealing with a lot of Xcode build phases. One common problem I encounter is that SwiftUI previews won’t work if a build phase runs a script that modifies the Xcode project file or individual source files. For example, a script that sorts files alphabetically may modify the project file, which will prevent previews from working. To work around this problem, you can check whether Xcode is building for previews and then decide whether to run the script:

if [ "${ENABLE_PREVIEWS}" = "NO" ]; then
    echo "Running sorting script"
fi
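The ENABLE_PREVIEWS check works in build phase scripts. If you also need to branch at runtime, a commonly used (though not officially documented) trick is to read the XCODE_RUNNING_FOR_PREVIEWS environment variable that the preview system sets; a sketch under that assumption:

```swift
import Foundation

/// True when the process was launched by the SwiftUI preview system.
/// Relies on the (undocumented) behavior that Xcode sets
/// XCODE_RUNNING_FOR_PREVIEWS to "1" for preview builds.
var isRunningForPreviews: Bool {
    ProcessInfo.processInfo.environment["XCODE_RUNNING_FOR_PREVIEWS"] == "1"
}
```

Outside of previews the variable is simply absent, so the property returns false.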

I have spent some time playing with Apple’s MultipeerConnectivity framework on iOS. It is incredible what kinds of apps it enables. Here is my unfinished sample app that allows voice calls in cases where there is no internet access or even infrastructure Wi-Fi. I have to say the most difficult part was configuring AVAudioEngine; it is extremely easy to mess things up. Audio is hard.