A Practical Introduction to Bazel Persistent Workers

Bazel rules execute actions, which typically correspond to tool processes on the host OS. Spawning a fresh process for every action can incur repeated startup costs, such as bootstrapping a JVM or initializing a compiler. To avoid paying that cost on every action, Bazel has the concept of persistent workers.

A persistent worker is essentially a long-lived process that accepts work requests and responds with work responses. Imagine a process that keeps a compiler alive and dispatches sources to compile without paying the startup cost every time.
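Concretely, the protocol (in its JSON flavor) is a stream of newline-delimited messages: Bazel writes one WorkRequest per line to the worker's stdin, and the worker writes one WorkResponse per line to stdout. Slightly simplified, an exchange for a hypothetical uppercase tool might look like this (real requests also carry an inputs list with file digests, omitted here):

```json
{"arguments": ["--input=hello.txt", "--output=hello.out"], "requestId": 1}
{"exitCode": 0, "output": "", "requestId": 1}
```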

Creating a rule that leverages workers

Because this is a fairly advanced concept in Bazel, and usually only rule authors deal with it, I tried to come up with a simple example that demonstrates it.

An uppercase rule

We will write a rule that simply uppercases the text in a given file. To begin, we need two dependencies in our MODULE.bazel:

bazel_dep(name = "swift_argument_parser", version = "1.7.1")
bazel_dep(name = "rules_swift", version = "3.6.1")

These will come into play a bit later.

Creating a rule

Like I said, this is a simple rule, but the code may look a bit scary at first. Create uppercase.bzl at the root of the repository:

def _uppercase_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".out")
    args_file = ctx.actions.declare_file(ctx.label.name + ".worker_args")

    # These are the per-action arguments. Bazel will send these to the
    # persistent worker inside each WorkRequest.
    ctx.actions.write(
        output = args_file,
        content = "\n".join([
            "--input=" + ctx.file.src.path,
            "--output=" + out.path,
        ]),
    )

    ctx.actions.run(
        executable = ctx.executable._worker,
        inputs = [
            ctx.file.src,
            args_file,
        ],
        outputs = [out],
        arguments = [
            # For worker actions, the last argument is special:
            # it must be an @flagfile containing the per-request args.
            "@" + args_file.path,
        ],
        mnemonic = "UppercaseWorker",
        execution_requirements = {
            "supports-workers": "1",
            "requires-worker-protocol": "json",
        },
    )

    return [DefaultInfo(files = depset([out]))]


uppercase = rule(
    implementation = _uppercase_impl,
    attrs = {
        "src": attr.label(
            allow_single_file = True,
            mandatory = True,
        ),
        "_worker": attr.label(
            default = "//tools:worker",
            executable = True,
            cfg = "exec",
        ),
    },
)

The important part here is the arguments list. For worker actions, Bazel treats the last argument specially when it is an @flagfile. The contents of that file become the per-request arguments inside the WorkRequest. Any arguments before that are considered startup arguments for the worker process.

Now that the rule is in place, we need to create the actual worker binary. Because Swift is my language of choice, we will write it using Swift, but you can implement it in any language.

The Swift worker

Typically, I would split this out into multiple Swift files, but for the sake of simplicity, I will shove everything into one Swift file called worker.swift:

import Foundation
import ArgumentParser

struct WorkRequest: Decodable {
    var arguments: [String]?
    var requestId: Int?

    // Bazel may send other fields such as inputs, verbosity, etc.
    // JSONDecoder ignores unknown fields by default, which is what we want.
}

struct WorkResponse: Encodable {
    var requestId: Int
    var exitCode: Int
    var output: String
}

struct UppercaseArgs: ParsableArguments {
    @Option(name: .long)
    var input: String

    @Option(name: .long)
    var output: String
}

func expandArgs(_ args: [String]) throws -> [String] {
    var expanded: [String] = []

    for arg in args {
        if arg.hasPrefix("@") {
            let path = String(arg.dropFirst())
            let contents = try String(contentsOfFile: path, encoding: .utf8)

            for line in contents.split(separator: "\n") {
                let trimmed = line.trimmingCharacters(in: .whitespacesAndNewlines)

                if !trimmed.isEmpty {
                    expanded.append(trimmed)
                }
            }
        } else {
            expanded.append(arg)
        }
    }

    return expanded
}

func runOne(_ rawArgs: [String]) throws {
    let args = try UppercaseArgs.parse(expandArgs(rawArgs))

    let inputText = try String(contentsOfFile: args.input, encoding: .utf8)

    try inputText.uppercased().write(
        toFile: args.output,
        atomically: true,
        encoding: .utf8
    )
}

func writeResponse(requestId: Int, exitCode: Int = 0, output: String = "") {
    let response = WorkResponse(
        requestId: requestId,
        exitCode: exitCode,
        output: output
    )

    do {
        let data = try JSONEncoder().encode(response)

        FileHandle.standardOutput.write(data)
        FileHandle.standardOutput.write(Data("\n".utf8))
    } catch {
        // Important: do not print normal logs to stdout.
        // In worker mode, stdout is reserved for WorkResponse JSON.
        FileHandle.standardError.write(
            Data("failed to encode WorkResponse: \(error)\n".utf8)
        )
        exit(1)
    }
}

func persistentLoop() {
    let decoder = JSONDecoder()

    while let line = readLine() {
        do {
            let request = try decoder.decode(
                WorkRequest.self,
                from: Data(line.utf8)
            )

            let requestId = request.requestId ?? 0
            let arguments = request.arguments ?? []

            do {
                try runOne(arguments)
                writeResponse(requestId: requestId)
            } catch {
                writeResponse(
                    requestId: requestId,
                    exitCode: 1,
                    output: String(describing: error)
                )
            }
        } catch {
            writeResponse(
                requestId: 0,
                exitCode: 1,
                output: "failed to decode WorkRequest: \(error)"
            )
        }
    }
}

@main
struct Worker {
    static func main() {
        let startupArgs = Array(CommandLine.arguments.dropFirst())

        if startupArgs.contains("--persistent_worker") {
            persistentLoop()
        } else {
            // Non-worker fallback path. This lets the same executable still work
            // when Bazel uses local execution instead of worker execution.
            do {
                try runOne(startupArgs)
            } catch {
                FileHandle.standardError.write(Data("\(error)\n".utf8))
                exit(1)
            }
        }
    }
}

A persistent worker has a small protocol contract with Bazel: it should accept the --persistent_worker flag, read WorkRequests from stdin, and write WorkResponses to stdout. If the same binary is run without --persistent_worker, it should behave like a normal one-shot tool. This fallback path is useful because Bazel may still run the action without the worker strategy.

One small but important detail: in worker mode, stdout belongs to the worker protocol. If you need to log something, write it to stderr instead.

I will not walk through every line here; I assume the reader is comfortable with Swift and the general shape of the worker protocol described above.

We are still missing the actual Bazel target for the worker. Create tools/BUILD.bazel:

load("@rules_swift//swift:swift_binary.bzl", "swift_binary")

swift_binary(
    name = "worker",
    srcs = ["worker.swift"],
    visibility = ["//visibility:public"],
    deps = ["@swift_argument_parser//:ArgumentParser"],
)

Trying out the rule

At the root, it is time to create a BUILD.bazel, load our rule, and build it:

load("//:uppercase.bzl", "uppercase")

uppercase(
    name = "hello",
    src = "hello.txt",
)

hello.txt is just a text file that I created to demonstrate the rule.

Building and verifying

To try out our new rule, execute:

bazel build :hello --spawn_strategy=worker,sandboxed --worker_verbose

We set --spawn_strategy=worker,sandboxed so that actions which support workers (like ours) use the worker strategy, while everything else falls back to the standard sandboxed strategy. The fallback is important because rules_swift also registers actions that do not necessarily use workers.

--worker_verbose is here just to make it easier to see that our worker is being used.
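If you use workers regularly, these flags can live in .bazelrc instead of being passed on every invocation; a minimal sketch:

```
# .bazelrc
build --spawn_strategy=worker,sandboxed
# Verbose worker logging; only useful while debugging.
build --worker_verbose
```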

The output should look something like this:

INFO: Analyzed target //:hello (102 packages loaded, 649 targets configured, 2 aspect applications).
INFO: Created new non-sandboxed singleplex SwiftCompile worker (id 5, key hash -1813863811), logging to /Users/adincebic/Library/Caches/bazel/_bazel_adincebic/19f2a862cd16d28bfab74de8ca294508/bazel-workers/worker-5-SwiftCompile.log
INFO: Created new non-sandboxed singleplex UppercaseWorker worker (id 6, key hash -755134554), logging to /Users/adincebic/Library/Caches/bazel/_bazel_adincebic/19f2a862cd16d28bfab74de8ca294508/bazel-workers/worker-6-UppercaseWorker.log
INFO: Found 1 target...
Target //:hello up-to-date:
  bazel-bin/hello.out
INFO: Elapsed time: 19.851s, Critical Path: 19.03s
INFO: 60 processes: 30 internal, 26 darwin-sandbox, 4 worker.
INFO: Build completed successfully, 60 total actions

To verify the result, inspect bazel-bin/hello.out. It should contain the uppercase version of hello.txt.

And that’s it.

A few notes

This is the simplest example I could come up with, and it comes with a few caveats:

  • My worker implementation does not implement cancellation.
  • This is a singleplex worker, meaning Bazel sends it one request at a time.
  • The parsing logic could be more robust.
  • The worker ignores fields like inputs and verbosity from WorkRequest, which is fine for this example but probably not what you would do in a production worker.

Conclusion

This is one of those advanced Bazel concepts that you do not run into often, even if you write your own rules, purely because it is not always needed. But if you ever need persistent workers, I hope this gets you started.

Why My Xcode Extension Kept Asking for File Permissions

Recently, I worked on developing an Xcode source editor extension that needed to run some of our internal code formatters. These formatters are driven by configuration files that define how the tools should be executed. Because Xcode extensions must be sandboxed, they can’t directly access arbitrary file locations, including these configuration files.

To work around this, we used a container app that prompts users to select the location of the configuration files, created security-scoped bookmarks, and passed them to the extension process. The standard way to share data between processes, such as an app and its extension, is Apple's App Groups capability.

After setting this up, I noticed that the extension kept prompting the user to grant access to the shared files, even though both the app and extension were part of the same app group. This was unexpected—intuitively, accessing files within your own shared container shouldn’t trigger permission prompts.

The mistake

Coming from an iOS background, I defined the app group ID like this:

<key>com.apple.security.application-groups</key>
<array>
	<string>group.example.app</string>
</array>

After running both the app and the extension and inspecting ~/Library/Group Containers/, it was clear that the shared container had been created. However, what I missed is that on macOS, App Group identifiers must be prefixed with the Team ID (for example, TEAMID.group.example.app). This allows the system to correctly associate the app group with your developer account and properly link the app and its extension.

Without this prefix, the container may still appear to exist, but entitlement validation and access behavior can be inconsistent—leading to issues like repeated permission prompts.
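With the Team ID prefix, the entitlement (in both the app's and the extension's .entitlements files) looks like this, where TEAMID is a placeholder for your actual Apple Developer Team ID:

```xml
<key>com.apple.security.application-groups</key>
<array>
	<string>TEAMID.group.example.app</string>
</array>
```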

Conclusion

This turned out to be one of those frustrating issues where the root cause isn’t immediately obvious, even after checking open-source projects and documentation. To be fair, Apple does document this requirement—but it’s easy to overlook, especially since iOS does not require this detail and doesn’t expose the same behavior as clearly.

Centralizing Dependency Fetching in Bazel with the Remote Asset API

It has become increasingly common for major providers to experience outages—from Git servers being unavailable to failures when downloading external dependencies.

There are several ways to work around this, such as internal mirrors, vendoring dependencies, and similar approaches. While effective, these solutions can feel somewhat heavy-handed.

Bazel Remote Asset API

The Bazel Remote Asset API provides a mechanism for managing external dependencies in a centralized way.

More precisely, it maps external resource identifiers (such as URLs or Git repositories) to content stored in a content-addressable storage (CAS).

In practice, this allows a server to:

  • Fetch external resources (e.g. tarballs, Git repos)
  • Store them in CAS
  • Serve them to clients by digest

When used via Bazel’s remote downloader, this effectively acts as a download proxy/cache: instead of every developer machine and CI runner downloading dependencies independently, requests go through a central service that can fetch and cache them once.

How to Use

Getting started is straightforward: pass --experimental_remote_downloader=SERVER_ADDRESS either on the command line or in your .bazelrc.

This configures Bazel to route external downloads through a Remote Asset API–compatible service.
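For example, in .bazelrc (the scheme and address are placeholders for your own Remote Asset API service):

```
# .bazelrc
common --experimental_remote_downloader=grpc://remote-cache.example.com:9092
```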

Before using it, ensure your remote cache/server supports the API. Many commercial solutions do, and the popular open-source bazel-remote supports (a subset of) it as well—though support is still marked experimental.

A Note on the Experimental Flag

Although the flag is prefixed with experimental, the feature has been available for some time and is widely used in practice. There is some good info on the Bazel Slack about it.

Conclusion

Combined with Bazel’s repository cache, the Remote Asset API provides a nice way to improve reliability when fetching external repos. It reduces reliance on third-party availability while avoiding the operational overhead of fully vendoring or mirroring all dependencies.

A Better Way to Ignore Files in Bazel with REPO.bazel

In the Bazel world, we don't always want Bazel to track every file in the repository. A typical example is ignoring the .git directory, as it can grow quite large over time. Additionally, some IDE integrations like rules_xcodeproj don't work particularly well when it is present.

Traditionally, to instruct Bazel to ignore directories and files, we used the .bazelignore file, which requires explicitly listing each path to ignore. This works, but it has an important limitation: .bazelignore does not support glob patterns. As a result, we often need to update the file whenever new directories should be ignored, and it's easy to forget to do so.

Introducing REPO.bazel

REPO.bazel is a simple configuration file, placed at the repository root, that achieves similar behavior but with support for glob patterns. It is a relatively recent addition to Bazel, introduced around the same time as bzlmod.

An example REPO.bazel file looks like this:

ignore_directories([
    # Ignore all .build directories produced by Swift Package Manager
    "**/.build",
    # Ignore Node modules directories
    "**/node_modules",
])

And that’s it.

Conclusion

This approach builds on the same idea as .bazelignore, but adds a few quality-of-life improvements—most notably, support for glob patterns.

For more information, see the official Bazel documentation.

Reconfiguring the Bazel Downloader

There are many security as well as practical reasons why one might need to reconfigure Bazel’s downloading behavior. One concrete case that I ran into fairly recently was Google rate-limiting our CI for an unknown reason. To work around that, I needed to redirect the downloader to a mirror. There are many ways to achieve that, like patching individual rules (tedious), using an internal registry (doesn’t solve everything), etc.

Bazel downloader config

Bazel offers a way to configure its downloader in a very simple manner. Unfortunately, it is not very well documented, but there are various resources online as well as the actual Bazel source, which explains it quite nicely. To enable it, we simply pass --downloader_config=<path_to_file> either on the command line or in .bazelrc.

File structure and syntax

The structure is easy to understand because it allows only a small set of directives:

  • allow host.name to allow a specific domain
  • block host.name to block a certain domain (also supports block * to block everything except what is explicitly allowed)
  • rewrite pattern replacement to rewrite URLs using regex
  • all_blocked_message message — a message shown if all candidate URLs end up blocked
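Putting the directives together, a hypothetical config file that funnels all GitHub downloads through an internal mirror and blocks everything else could look like this (the hostnames are made up):

```
# Rewrite GitHub downloads to the internal mirror.
rewrite github.com/(.*) mirror.internal.example.com/github.com/$1

# Allow only the mirror, block every other host.
allow mirror.internal.example.com
block *

all_blocked_message All external downloads must go through mirror.internal.example.com.
```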

Rewrite directive

Because all other directives are fairly self-explanatory, I will focus only on rewrite.

As an example, if we want to ensure that all GitHub downloads are redirected to an internal Artifactory, we could write a file like this:

rewrite github.com/(.*) internal.artifactory.example.com/$1

Of course, it is possible to define more sophisticated rewrite patterns, e.g.:

rewrite android.googlesource.com/platform/dalvik/\+archive/([0-9a-f]+)\.tar\.gz mirror.bazel.build/android.googlesource.com/platform/dalvik/+archive/$1.tar.gz
rewrite android.googlesource.com/platform/dalvik/\+archive/([0-9a-f]+)\.tar\.gz android.googlesource.com/platform/dalvik/+archive/$1.tar.gz

This rewrites requests for android.googlesource.com to mirror.bazel.build for this specific dalvik archive. The second rewrite directive ensures that Bazel falls back to the original URL if the mirror is unavailable.

Evaluation order

Bazel applies the directives in the following order, regardless of their position in the file:

  1. rewrite
  2. allow
  3. block

Comments

It is possible to add comments using # at the beginning of a line. Keep in mind that inline comments are not supported.

Conclusion

Typically, this is not needed very often, but it is good to keep the option in the back of your mind so you can reach for it when needed.

Bazel Output Groups: Producing Outputs on Demand

Typically, when writing a Bazel rule, we produce outputs using the DefaultInfo provider. However, there are cases where we want to produce additional outputs only on demand.

Enter output groups

Simply put, output groups are a way to tell Bazel to produce different sets of outputs instead of—or in addition to—the default outputs. For example, we might want to generate debug symbols, but we don’t need them unless explicitly requested.

Smallest possible example

Here is a minimal rule that demonstrates the use of output groups:

def _impl(ctx):
    out1 = ctx.actions.declare_file("main.txt")
    out2 = ctx.actions.declare_file("debug.txt")

    ctx.actions.write(out1, "main output")
    ctx.actions.write(out2, "debug output")

    return [
        DefaultInfo(files = depset([out1])),
        OutputGroupInfo(
            debug = depset([out2]),
        ),
    ]

my_rule = rule(
    implementation = _impl,
)

Notice how easy it is to use output groups. OutputGroupInfo is just another provider—a key-value mapping where, in this case, debug is the key (the output group name), and out2 is the value wrapped in a depset.

Requesting the debug output

If we instantiate this rule in a BUILD file:

my_rule(
    name = "groups",
)

We can build it:

bazel build :groups

This produces:

INFO: Analyzed target //:groups (5 packages loaded, 7 targets configured).
INFO: Found 1 target...
Target //:groups up-to-date:
  bazel-bin/main.txt
INFO: Elapsed time: 0.119s, Critical Path: 0.00s
INFO: 2 processes: 2 internal.
INFO: Build completed successfully, 2 total actions

Notice that only bazel-bin/main.txt was produced. That is because we did not ask Bazel for the outputs of the debug output group.

To do that, we use the --output_groups flag and specify the group name (in this case, debug):

bazel build :groups --output_groups=debug

Output:

INFO: Analyzed target //:groups (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //:groups up-to-date:
  bazel-bin/debug.txt
INFO: Elapsed time: 0.065s, Critical Path: 0.00s
INFO: 2 processes: 2 internal.
INFO: Build completed successfully, 2 total actions

Now the debug file is produced.

An important detail: debug.txt is produced instead of main.txt, not in addition to it. To request both the default outputs and an output group at the same time, use the + prefix:

bazel build :groups --output_groups=+debug

This produces both files:

INFO: Analyzed target //:groups (5 packages loaded, 7 targets configured).
INFO: Found 1 target...
Target //:groups up-to-date:
  bazel-bin/debug.txt
  bazel-bin/main.txt
INFO: Elapsed time: 0.108s, Critical Path: 0.00s
INFO: 3 processes: 3 internal.
INFO: Build completed successfully, 3 total actions

Conclusion

Output groups are simple to use, both when defining rules and when consuming them. They’re a small feature, but an extremely useful one.

What Bazel Really Runs (and How to See It)

Sooner or later when working with Bazel, we want to understand the exact command-line flags used to build our code. For example, you might want to see what flags are being passed to swiftc. Up until Bazel 9, we would typically rely on --subcommands, but its output could get quite verbose.

Action graph query

In addition to the standard bazel query command, there are also bazel cquery (configurable query) and bazel aquery (action graph query). Each of these helps us explore different parts of the build graph. Since we’re interested in inspecting command-line flags, aquery is the right tool—it exposes all declared actions, including the exact commands being executed.

For a project like this iOS template, we can explore how Swift code is compiled by running:

bazel aquery //app:app.library --output=commands

Which produces output like:

bazel-out/darwin_arm64-opt-exec/bin/external/rules_swift+/tools/worker/worker swiftc -target arm64-apple-macos12.6 -sdk __BAZEL_XCODE_SDKROOT__ -file-prefix-map '__BAZEL_XCODE_DEVELOPER_DIR__=/PLACEHOLDER_DEVELOPER_DIR' '-Xwrapped-swift=-bazel-target-label=@@//app:app.library' -emit-object -output-file-map bazel-out/darwin_arm64-fastbuild/bin/app/app.library.output_file_map.json -Xfrontend -no-clang-module-breadcrumbs -emit-module-path bazel-out/darwin_arm64-fastbuild/bin/app/app.swiftmodule '-enforce-exclusivity=checked' -emit-const-values-path bazel-out/darwin_arm64-fastbuild/bin/app/app.library_objs/source/ContentView.swift.swiftconstvalues -Xfrontend -const-gather-protocols-file -Xfrontend external/rules_swift+/swift/toolchains/config/const_protocols_to_gather.json -DDEBUG -Onone -Xfrontend -internalize-at-link -Xfrontend -no-serialize-debugging-options -enable-testing -disable-sandbox -gline-tables-only '-Xwrapped-swift=-file-prefix-pwd-is-dot' -file-prefix-map '__BAZEL_XCODE_DEVELOPER_DIR__=/PLACEHOLDER_DEVELOPER_DIR' -file-compilation-dir . -module-cache-path bazel-out/darwin_arm64-fastbuild/bin/_swift_module_cache -Ibazel-out/darwin_arm64-fastbuild/bin/modules/Models -Ibazel-out/darwin_arm64-fastbuild/bin/modules/API '-Xwrapped-swift=-macro-expansion-dir=bazel-out/darwin_arm64-fastbuild/bin/app/app.library.macro-expansions' -Xcc -iquote. -Xcc -iquotebazel-out/darwin_arm64-fastbuild/bin -Xfrontend -color-diagnostics -enable-batch-mode -module-name app -index-store-path bazel-out/darwin_arm64-fastbuild/bin/app/app.library.indexstore -index-ignore-system-modules '-Xwrapped-swift=-global-index-store-import-path=bazel-out/_global_index_store' -enable-bare-slash-regex -Xfrontend -disable-clang-spi -enable-experimental-feature AccessLevelOnImport -parse-as-library -static -Xcc -O0 -Xcc '-DDEBUG=1' -Xfrontend '-checked-async-objc-bridging=on' app/source/ContentView.swift app/source/MainApp.swift
...

At first glance, this output looks overwhelming. But if you break it down, it’s simply Bazel invoking tools with the appropriate flags.

Doing something useful

While this output can help us understand what is being executed and how, it becomes much more powerful when used comparatively.

One practical approach is to diff this output across ruleset versions or Bazel releases. For example:

bazel aquery //app:app.library --output=commands > commands.txt

You can generate one file per version and use standard diffing tools to spot regressions or better understand what changed between versions.

Making it executable

In a Bazel 9 video by aspect.build, Alex Eagle shared an interesting idea: turning aquery output into an executable shell script.

That idea is what got me intrigued. While the output isn’t directly executable, it seems feasible to get there by replacing placeholder variables, adjusting formatting, and fiddling with cwd. With a bit of effort, this could become a powerful debugging tool.

Conclusion

This is a small quality-of-life improvement in Bazel 9, but it unlocks a very practical debugging technique.

Bazel split transitions

A couple of articles ago, I touched on Bazel transitions. In that context, I was referring to 1:1 transitions—transitions that change the single configuration of a target. Split transitions, on the other hand, are 1:N, meaning they build the same target in multiple configurations.

Multi-arch build

An obvious example where split transitions are useful is multi-platform (or multi-architecture) builds. Because my background is in Swift and Apple platforms, I immediately think of device and simulator builds.

You can think of a split transition as turning a single dependency edge:

A → B

into multiple ones:

A → B (device)
  → B (simulator)

In other words, a single dependency is “split” into multiple configurations.

Multi-platform Swift library

Let’s imagine we want to build a swift_library for both iOS device and simulator—for example, to validate both environments in CI or to produce multi-platform artifacts.

Note: In real-world Apple builds, you would typically rely on existing transition support provided by rules_apple rather than defining your own. This example is intentionally simplified to illustrate how split transitions work under the hood.

To achieve this, we need to define a transition and a wrapper rule.

def _split_transition_impl(_settings, _attr):
    return {
        "device": {"//command_line_option:platforms": "@apple_support//platforms:ios_arm64"},
        "sim": {"//command_line_option:platforms": "@apple_support//platforms:ios_sim_arm64"},
    }

split_transition = transition(
    implementation = _split_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:platforms"],
)

This illustrates why they’re called split transitions: a single dependency edge is expanded into multiple configurations, each identified by a key (device, sim).

Next, we define a rule that applies the transition:

def _multi_platform_swift_library(ctx):
    propagated_files = []

    for split_deps in ctx.split_attr.deps.values():
        for dep in split_deps:
            propagated_files.append(dep[DefaultInfo].files)

    return [
        DefaultInfo(
            files = depset(transitive = propagated_files),
        ),
    ]

multi_platform_swift_library = rule(
    implementation = _multi_platform_swift_library,
    attrs = {
        "deps": attr.label_list(cfg = split_transition),
    },
)

Here, ctx.split_attr.deps is a dictionary where each key (device, sim) maps to the list of dependencies built in that configuration.

We simply propagate the files from the dependencies so that they are built—this keeps the example minimal while still demonstrating the transition.
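If you need to tell the per-platform outputs apart rather than merging them, the split keys can be surfaced, for example via output groups. A sketch of an alternative implementation body, using the same attrs:

```
def _multi_platform_swift_library(ctx):
    groups = {}

    # ctx.split_attr.deps maps each split key ("device", "sim")
    # to the deps built in that configuration.
    for key, split_deps in ctx.split_attr.deps.items():
        groups[key] = depset(
            transitive = [dep[DefaultInfo].files for dep in split_deps],
        )

    return [
        DefaultInfo(files = depset(transitive = groups.values())),
        OutputGroupInfo(**groups),
    ]
```

With this variant, bazel build :mp --output_groups=device would build only the outputs of the device configuration.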

Finally, in BUILD.bazel, we define targets using these rules:

load("@rules_swift//swift:swift_library.bzl", "swift_library")
load("//:rules.bzl", "multi_platform_swift_library")

swift_library(
    name = "lib",
    module_name = "Library",
    srcs = ["main.swift"],
)

multi_platform_swift_library(
    name = "mp",
    deps = [":lib"],
)

To verify that the transition has been applied, we can use Bazel’s configuration query:

bazel cquery //:mp --transitions=full

This will produce output similar to:

INFO: Analyzed target //:mp (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
NoTransition -> //:mp (680dcaf)
  deps#//:lib#(Starlark transition:/Users/adincebic/developer.noindex/split/rules.bzl:8:30 + (TestTrimmingTransition + ConfigFeatureFlagTaggedTrimmingTransition)) -> 58f99a6, c9f1a2b
    platforms:[@@bazel_tools//tools:host_platform] -> [[@@apple_support+//platforms:ios_arm64], [@@apple_support+//platforms:ios_sim_arm64]]
  $allowlist_function_transition#@bazel_tools//tools/allowlists/function_transition_allowlist:function_transition_allowlist#(null transition) -> 
INFO: Elapsed time: 0.067s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 0 total actions

Notice how //:lib is configured twice—once for each platform (ios_arm64 and ios_sim_arm64). This confirms that the split transition produced multiple configurations.

Conclusion

Bazel transitions are a powerful tool, but they should be used sparingly. They fundamentally alter the build graph, and split transitions in particular can significantly increase its size since dependencies may be built multiple times.

Conceptually, however, split transitions are no more complex than standard 1:1 transitions—they simply require using split_attr instead of attr, and thinking in terms of one dependency becoming many.

How to Fix Xcode Source Editor Extensions That Don’t Appear in the Editor Menu

Recently I needed to develop my own Xcode source editor extension. The reasons for doing so aren’t relevant here, but the process quickly led me into an unexpected roadblock.

TL;DR: In your extension target settings, set XcodeKit.framework → “Embed without signing” under the General tab.

Extension not showing up in Xcode

After following Apple’s official guide for creating a source editor extension (a macOS app with an extension target), I ran the extension scheme to test it. However, the extension neither appeared in Xcode nor executed any code.

Normally, a source editor extension should appear in two places:

  • macOS System Settings (Extensions → Xcode Source Editor)
  • Xcode’s Editor menu

In my case, the extension showed up in System Settings but did not appear in Xcode.

Asking around

Like most developers, I immediately turned to Google and AI tools. It quickly became clear that this issue is not uncommon. Unfortunately, none of the suggested workarounds solved my problem.

Sifting through Xcode logs

After exhausting the usual fixes, I wondered whether Xcode might be logging something useful while attempting to load extensions.

Using the unified logging system, I started streaming Xcode logs with a predicate to filter relevant messages:

log stream --style compact --predicate 'process == "Xcode" && (eventMessage CONTAINS[c] "EditorExtension" || eventMessage CONTAINS[c] "XcodeKit")'

After running the extension again, I noticed the following message:

Xcode Extension does not incorporate XcodeKit

This was finally a clue.

Looking at existing extensions

My next step was to inspect existing open-source extensions. I looked at projects like SwiftFormat and compared their release artifacts with the one produced by my own extension.

Missing XcodeKit.framework

While inspecting the extension bundle, one difference stood out immediately: my extension was not bundling XcodeKit.framework, while SwiftFormat’s extension was.

I also noticed that SwiftFormat’s release workflow explicitly ensures that XcodeKit.framework is bundled into the extension archive.

The fix: Embed without signing

It turns out the default Xcode template for source editor extensions is misconfigured.

By default, XcodeKit.framework is set to “Do Not Embed” in the extension target settings. Because of this, the framework never gets bundled with the extension.

Changing the setting to:

General → Frameworks, Libraries, and Embedded Content → XcodeKit.framework → “Embed without signing”

fixes the issue and allows Xcode to load the extension correctly.
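A quick way to confirm the fix took effect is to look inside the built extension bundle. The paths and names below are placeholders for your own build products, not anything Xcode generates by default:

```shell
# Illustrative path; substitute your app and extension names.
ls "MyApp.app/Contents/PlugIns/MyExtension.appex/Contents/Frameworks/"
# After switching to "Embed without signing", XcodeKit.framework
# should show up in this directory.
```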

Closing thoughts

Problems like this are especially frustrating when there’s little to no documentation online and even AI tools can’t identify the cause.

One useful takeaway is that Apple heavily relies on the Unified Logging System across their apps, tools and macOS in general. When something misbehaves, inspecting logs can provide a path forward.
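For example, if the problem has already happened and you want to look backwards instead of streaming live, the same predicate works with log show (the 5-minute window here is arbitrary):

```shell
# Show the last 5 minutes of matching Xcode log entries.
log show --last 5m --style compact \
  --predicate 'process == "Xcode" && eventMessage CONTAINS[c] "XcodeKit"'
```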

Managing Bazel Flags in Monorepos with Flagsets (PROJECT.scl)

Bazel’s flagsets, or PROJECT.scl, are an approach to managing project-specific flags in a monorepo. They use a subset of Starlark as their language. This is in contrast to the traditional .bazelrc, which uses its own line-based language and has no formal specification.

Current state of the art

At the time of writing, almost all of us are using .bazelrc files to hide away command line flags and to define configs (de facto sets of flags). My take is that .bazelrc is fine and will continue to work well for many people in a multi-repo setup. However, .bazelrc does not scale well in monorepos (even though files can be composed via import), and it can also lead to incorrect builds during day-to-day development.

Say we are working in a monorepo with both iOS and Android apps and we want to build iOS by executing:

bazel build //ios:app

At first glance, nothing seems wrong. However, all the flags that we use for Android are also applied when building iOS, and vice versa. Granted, this can be fine, but there might be, say, a C++ feature flag that both platforms use in different ways. In that scenario we always need to make sure the flag is explicitly applied differently for each platform, or find another workaround.
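To make the problem concrete, here is a minimal, hypothetical .bazelrc (the C++ feature define is made up) where every build line applies to all builds, iOS and Android alike:

```
# .bazelrc -- these flags are applied to EVERY `bazel build`,
# regardless of which platform's target is being built.
build --copt=-DMY_CPP_FEATURE=1   # intended for Android's C++ code
build --ios_multi_cpus=arm64      # only meaningful for iOS builds
```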

I’m sure there are many more examples one could come up with.

The solution

Project-specific PROJECT.scl files, aka flagsets. They were introduced at BazelCon 2025 as an experimental feature in Bazel 9. They are, however, not gated behind a feature flag, so if you are on Bazel 9 you can start using them today and help discover interesting corner cases.

Example iOS app

Imagine a monorepo where each app is its own project (a separate directory), such as iOS/, Android/, and so on. For the iOS app we would want to define dev and store configurations, which build the app in different ways. We would create a PROJECT.scl in the iOS subdirectory with the following contents:

load("//:project_proto.scl", "buildable_unit_pb2", "project_pb2")

project = project_pb2.Project.create(
    buildable_units = [
        # Since this buildable unit sets "is_default = True", its flags apply
        # to any target in this package or its subpackages by default. A
        # configuration can also be requested explicitly, e.g. with
        # "--scl_config=dev_config" or "--scl_config=store_config".
        buildable_unit_pb2.BuildableUnit.create(
            name = "dev_config",
            flags = [
                "--compilation_mode=dbg",
            ],
            is_default = True,
            description = "Default debug configuration used for development.",
        ),

        buildable_unit_pb2.BuildableUnit.create(
            name = "store_config",
            flags = [
                "--compilation_mode=opt",
                "--ios_multi_cpus=arm64",
            ],
            description = "Store configuration.",
        ),
    ],
)

I feel like this is pretty self-explanatory and demonstrates the basic idea. When building the app, Bazel 9 will take the PROJECT.scl closest to the target on the filesystem into account when applying flags. If we want to build for the store, the only thing required is to set --scl_config=store_config, much like --config in .bazelrc.
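Assuming the PROJECT.scl above lives in the iOS/ directory, usage would look roughly like this:

```
# Applies dev_config implicitly, since it sets is_default = True.
bazel build //ios:app

# Explicitly selects the store configuration instead.
bazel build //ios:app --scl_config=store_config
```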

Enforcement policies

One of the more interesting features of flagsets is that we can define an enforcement policy, which can be one of the following:

  • WARN: warns you if you add additional flags via the command line or .bazelrc. This is the default, so it doesn’t have to be explicitly specified.
  • COMPATIBLE: disallows setting conflicting flags via the command line or .bazelrc, but allows flags that do not interfere with those specified in PROJECT.scl.
  • STRICT: disallows setting any flags via the command line or .bazelrc, so everything must be defined in the relevant PROJECT.scl.

To set one of the policies above, we use the enforcement_policy attribute on project_pb2.Project.create:

load("//:project_proto.scl", "buildable_unit_pb2", "project_pb2")

project = project_pb2.Project.create(
    enforcement_policy = "STRICT",
    buildable_units = [
        buildable_unit_pb2.BuildableUnit.create(
            name = "dev_config",
            flags = [
                "--compilation_mode=dbg",
            ],
            is_default = True,
            description = "Default debug configuration used for development.",
        ),

        buildable_unit_pb2.BuildableUnit.create(
            name = "store_config",
            flags = [
                "--compilation_mode=opt",
                "--ios_multi_cpus=arm64",
            ],
            description = "Store configuration.",
        ),
    ],
)

If we then build the target with any additional flags beyond those in PROJECT.scl, we get output like the following:

INFO: Reading project settings from //ios:PROJECT.scl.
ERROR: Cannot parse options: This build uses a project file (//:PROJECT.scl) that does not allow output-affecting flags in the command line or user bazelrc. Found ['--macos_minimum_os=12.6', '--flag_alias=build_python_zip=@@rules_python+//python/config_settings:build_python_zip', '--ios_simulator_version=18.5', '--flag_alias=incompatible_default_to_explicit_init_py=@@rules_python+//python/config_settings:incompatible_default_to_explicit_init_py', '--modify_execution_info=^(BitcodeSymbolsCopy|BundleApp|BundleTreeApp|DsymDwarf|DsymLipo|GenerateAppleSymbolsFile|ObjcBinarySymbolStrip|CppLink|ObjcLink|ProcessAndSign|SignBinary|SwiftArchive|SwiftStdlibCopy)$=+no-remote,^(BundleResources|ImportedDynamicFrameworkProcessor)$=+no-remote-exec', '--flag_alias=python_path=@@rules_python+//python/config_settings:python_path', '--features=swift.index_while_building', '--macos_minimum_os=13', '--incompatible_strict_action_env', '--features=swift.use_global_index_store', '--flag_alias=experimental_python_import_all_repositories=@@rules_python+//python/config_settings:experimental_python_import_all_repositories', '--host_macos_minimum_os=13', '--features=swift.use_global_module_cache']. Please remove these flags or disable project file resolution via --noenforce_project_configs.

Target-specific flags

Another interesting aspect of flagsets is the ability to set flags on a per-target basis. There is an attribute called target_patterns on buildable_unit_pb2.BuildableUnit.create:

            target_patterns = [
                "//target_specific:one",
            ],

The following patterns are supported (taken from Bazel documentation and examples):

  • //some:target (specific target)
  • -//some:target (exclude //some:target from this filter)
  • //some/path/... (all targets below a path)
  • -//some/path/... (exclude all targets below a path)
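Putting it together, a buildable unit restricted to a subset of targets might look like this sketch (the name, flags, and patterns are illustrative):

```
buildable_unit_pb2.BuildableUnit.create(
    name = "target_specific_config",
    flags = [
        "--compilation_mode=opt",
    ],
    target_patterns = [
        # The flags above only apply to targets matching these patterns.
        "//target_specific:one",
        "//some/path/...",
        "-//some/path/excluded/...",
    ],
    description = "Flags scoped to specific targets.",
)
```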

In conclusion

I believe this is a great step forward for improving how Bazel flags are managed. It is still very early days and there are many unanswered questions, such as what happens if a project depends on another project, and many others I probably haven’t even thought of yet. As time goes on, we will discover best practices and flagsets as a feature of Bazel will evolve to support more scenarios.

To learn more about the decisions that went into the design of flagsets, please see this talk on YouTube by the Bazel team.