How to Fix Xcode Source Editor Extensions That Don’t Appear in the Editor Menu

Recently I needed to develop my own Xcode source editor extension. The reasons for doing so aren’t relevant here, but the process quickly ran into an unexpected roadblock.

TL;DR: In your extension target settings, set XcodeKit.framework → “Embed without signing” under the General tab.

Extension not showing up in Xcode

After following Apple’s official guide for creating a source editor extension (a macOS app with an extension target), I ran the extension scheme to test it. However, the extension neither appeared in Xcode nor executed any code.

Normally, a source editor extension should appear in two places:

  • macOS System Settings (Extensions → Xcode Source Editor)
  • Xcode’s Editor menu

In my case, the extension showed up in System Settings but did not appear in Xcode.

Asking around

Like most developers, I immediately turned to Google and AI tools. It quickly became clear that this issue is not uncommon. Unfortunately, none of the suggested workarounds solved my problem.

Sifting through Xcode logs

After exhausting the usual fixes, I wondered whether Xcode might be logging something useful while attempting to load extensions.

Using the unified logging system, I started streaming Xcode logs with a predicate to filter relevant messages:

log stream --style compact --predicate 'process == "Xcode" && (eventMessage CONTAINS[c] "EditorExtension" || eventMessage CONTAINS[c] "XcodeKit")'

After running the extension again, I noticed the following message:

Xcode Extension does not incorporate XcodeKit

This was finally a clue.

Looking at existing extensions

My next step was to inspect existing open-source extensions. I looked at projects like SwiftFormat and compared their release artifacts with the one produced by my own extension.

Missing XcodeKit.framework

While inspecting the extension bundle, one difference stood out immediately: my extension was not bundling XcodeKit.framework, while SwiftFormat’s extension was.

I also noticed that SwiftFormat’s release workflow explicitly ensures that XcodeKit.framework is bundled into the extension archive.

The fix: Embed without signing

It turns out the default Xcode template for source editor extensions is misconfigured.

By default, XcodeKit.framework is set to “Do Not Embed” in the extension target settings. Because of this, the framework never gets bundled with the extension.

Changing the setting to:

General → Frameworks, Libraries, and Embedded Content → XcodeKit.framework → “Embed Without Signing”

fixes the issue and allows Xcode to load the extension correctly.

Closing thoughts

Problems like this are especially frustrating when there’s little to no documentation online and even AI tools can’t identify the cause.

One useful takeaway is that Apple relies heavily on the unified logging system across their apps, tools, and macOS in general. When something misbehaves, inspecting logs can provide a path forward.

Managing Bazel Flags in Monorepos with Flagsets (PROJECT.scl)

Bazel’s flagsets, defined in PROJECT.scl files, are an approach to managing project-specific flags in a monorepo. They use a subset of Starlark as their language, in contrast to the traditional .bazelrc, which uses its own line-based format and has no formal specification.

Current state of the art

At the time of writing, we are almost certainly all using .bazelrc files to hide away command-line flags and to define configs (de facto sets of flags). My take is that .bazelrc is fine and will continue to work well for many people in a multi-repo setup. In monorepos, however, .bazelrc not only scales poorly (even though files can be composed via import) but can also lead to incorrect builds during day-to-day development.

Say we are working in a monorepo with both iOS and Android apps and we want to build iOS by executing:

bazel build //ios:app

At first glance, nothing seems wrong. However, all the flags we use for Android are also applied when building iOS, and vice versa. Granted, this can be fine, but there might be, say, a C++ feature flag that both platforms use in different ways. In that scenario we always need to remember to apply that flag differently for each platform, or find some other workaround.

I’m sure there are many more examples one could come up with.

The solution

Project-specific PROJECT.scl files, aka flagsets. They were introduced at BazelCon 2025 and ship in Bazel 9 as an experimental feature. Notably, they are not gated behind a feature flag, so if you are on Bazel 9 you can start using them today and help discover interesting corner cases.

Example iOS app

Imagine we are in a monorepo where each app is its own project (separate directory), like iOS/, Android/, and so on. For the iOS app we want to define dev and store configurations that build the app in different ways. We create a PROJECT.scl in the iOS subdirectory with the following contents:

load("//:project_proto.scl", "buildable_unit_pb2", "project_pb2")

project = project_pb2.Project.create(
    buildable_units = [
        # Since this buildable unit sets "is_default = True", these flags apply
        # to any target in this package or its subpackages by default. They can
        # also be requested explicitly with "--scl_config=dev_config".
        buildable_unit_pb2.BuildableUnit.create(
            name = "dev_config",
            flags = [
                "--compilation_mode=dbg",
            ],
            is_default = True,
            description = "Default debug configuration used for development.",
        ),

        buildable_unit_pb2.BuildableUnit.create(
            name = "store_config",
            flags = [
                "--compilation_mode=opt",
                "--ios_multi_cpus=arm64",
            ],
            description = "Store configuration.",
        ),
    ],
)

I feel like this is pretty self-explanatory and demonstrates the basic idea. When building the app, Bazel 9 takes into account the PROJECT.scl closest to the target on the filesystem when applying flags. If we want to build for the store, the only thing required is to set --scl_config=store_config, much like --config in .bazelrc.

Enforcement policies

One of the more interesting features of flagsets is that we can define enforcement policies, which come in three levels:

  • WARN: warns you if you add additional flags via the command line or .bazelrc. This is the default, so it doesn’t have to be explicitly specified.
  • COMPATIBLE: disallows setting conflicting flags via the command line or .bazelrc, but allows flags that do not interfere with those specified in PROJECT.scl.
  • STRICT: disallows setting any flags via the command line or .bazelrc, so everything must be defined in the relevant PROJECT.scl.

To set one of the policies above, we use the enforcement_policy attribute on project_pb2.Project.create:

load("//:project_proto.scl", "buildable_unit_pb2", "project_pb2")

project = project_pb2.Project.create(
    enforcement_policy = "STRICT",
    buildable_units = [
        buildable_unit_pb2.BuildableUnit.create(
            name = "dev_config",
            flags = [
                "--compilation_mode=dbg",
            ],
            is_default = True,
            description = "Default debug configuration used for development.",
        ),

        buildable_unit_pb2.BuildableUnit.create(
            name = "store_config",
            flags = [
                "--compilation_mode=opt",
                "--ios_multi_cpus=arm64",
            ],
            description = "Store configuration.",
        ),
    ],
)

If we apply additional flags beyond those defined in PROJECT.scl, building the target fails with output like the following:

INFO: Reading project settings from //ios:PROJECT.scl.
ERROR: Cannot parse options: This build uses a project file (//:PROJECT.scl) that does not allow output-affecting flags in the command line or user bazelrc. Found ['--macos_minimum_os=12.6', '--flag_alias=build_python_zip=@@rules_python+//python/config_settings:build_python_zip', '--ios_simulator_version=18.5', '--flag_alias=incompatible_default_to_explicit_init_py=@@rules_python+//python/config_settings:incompatible_default_to_explicit_init_py', '--modify_execution_info=^(BitcodeSymbolsCopy|BundleApp|BundleTreeApp|DsymDwarf|DsymLipo|GenerateAppleSymbolsFile|ObjcBinarySymbolStrip|CppLink|ObjcLink|ProcessAndSign|SignBinary|SwiftArchive|SwiftStdlibCopy)$=+no-remote,^(BundleResources|ImportedDynamicFrameworkProcessor)$=+no-remote-exec', '--flag_alias=python_path=@@rules_python+//python/config_settings:python_path', '--features=swift.index_while_building', '--macos_minimum_os=13', '--incompatible_strict_action_env', '--features=swift.use_global_index_store', '--flag_alias=experimental_python_import_all_repositories=@@rules_python+//python/config_settings:experimental_python_import_all_repositories', '--host_macos_minimum_os=13', '--features=swift.use_global_module_cache']. Please remove these flags or disable project file resolution via --noenforce_project_configs.

Target-specific flags

Another interesting aspect of flagsets is the ability to set flags on a per-target basis. There is an attribute called target_patterns on buildable_unit_pb2.BuildableUnit.create:

            target_patterns = [
                "//target_specific:one",
            ],

The following patterns are supported (taken from Bazel documentation and examples):

  • //some:target (specific target)
  • -//some:target (exclude //some:target from this filter)
  • //some/path/... (all targets below a path)
  • -//some/path/... (exclude all targets below a path)
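
Putting it together, a buildable unit that scopes its flags to a subset of targets might look like this sketch (the name, flags, and paths here are illustrative, not from the original example):

```starlark
# Hypothetical buildable unit: the flags apply only to targets matching
# target_patterns, following the same create() API shown earlier.
buildable_unit_pb2.BuildableUnit.create(
    name = "target_specific_config",
    flags = [
        "--compilation_mode=opt",
    ],
    target_patterns = [
        "//target_specific/...",         # all targets below //target_specific
        "-//target_specific/tests/...",  # except its tests
    ],
    description = "Flags scoped to a subset of targets.",
)
```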

In conclusion

I believe this is a great step forward for improving how Bazel flags are managed. It is still very early days and there are many unanswered questions, such as what happens if a project depends on another project, and many others I probably haven’t even thought of yet. As time goes on, we will discover best practices and flagsets as a feature of Bazel will evolve to support more scenarios.

To learn more about the decisions that went into the design of flagsets, please see this talk on Youtube by the Bazel team.

Composing Bazel rules with subrules

Bazel subrules are a lesser-known mechanism for splitting rule functionality into smaller, reusable building blocks. They are designed to improve rule architecture by encapsulating implicit dependencies, toolchains, and action logic — without passing the entire ctx (“god object”) around.

In the past, we achieved similar reuse through plain Starlark helper functions. While that works, it often requires threading ctx through multiple layers, which reduces encapsulation and makes refactoring harder.

Subrules

A subrule is similar to a rule in that it can create actions during analysis. However, it operates under strict encapsulation constraints and exposes only a reduced context API.

Let’s start with a simple example.

def _hello_subrule_impl(ctx, name):
    hello_file = ctx.actions.declare_file(ctx.label.name + name + ".txt")
    return hello_file

hello_subrule = subrule(
    implementation = _hello_subrule_impl,
)

def _hello_impl(ctx):
    hello_file = hello_subrule(name = "Example")

    ctx.actions.write(
        output = hello_file,
        content = "Hello, world!",
    )

    return [
        DefaultInfo(files = depset([hello_file])),
    ]

hello = rule(
    implementation = _hello_impl,
    subrules = [hello_subrule],
)

  1. We define hello_subrule using subrule() and provide its implementation.
  2. The implementation function receives a restricted ctx (a SubruleContext) and any additional parameters.
  3. When defining the hello rule, we must declare subrules = [hello_subrule].
  4. Inside the rule implementation, we call the subrule like a normal function:
   hello_file = hello_subrule(name = "Example")

We do not call it via ctx. The ctx argument is automatically injected.

Subrules may return arbitrary values — not just providers.

Important: Subrules Must Be Declared

If you call a subrule but forget to list it in the subrules = [...] parameter of the rule (or another subrule), Bazel raises a runtime error during analysis.

What You Can and Cannot Do

Subrules can create actions just like rules, but they are intentionally constrained for better encapsulation.

All Attributes Must Be Private

Subrules can declare only implicit dependencies, and their attribute names must begin with an underscore.

This results in an error:

hello_subrule = subrule(
    implementation = _hello_subrule_impl,
    attrs = {
        "compiler": attr.label(),
    },
)

Error in subrule: illegal attribute name 'compiler': subrules may only define private attributes (whose names begin with '_').

Attributes Must Be label or label_list

Subrule attributes may only be attr.label() or attr.label_list().

Other attribute types (e.g., attr.string(), attr.bool()) are not allowed:

attrs = {
    "_compiler": attr.string(),
}

Produces:

Error in subrule: bad type for attribute '_compiler': subrule attributes may only be label or lists of labels.

Private Attributes Require Defaults

Because subrule attributes are private implicit dependencies and cannot be set by users of the rule, they must define a default value (or a late-bound default label).

Each private attribute must also be spelled out as a named parameter of the implementation function; that is how its resolved value is passed in.
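
As a sketch (the //tools:compiler target and all names here are hypothetical), a valid private attribute declaration and its matching implementation signature could look like this:

```starlark
def _compile_subrule_impl(ctx, srcs, _compiler):
    # The private attribute arrives as a named parameter; its default
    # label has already been resolved by the time this function runs.
    out = ctx.actions.declare_file(ctx.label.name + ".o")
    ctx.actions.run(
        inputs = srcs,
        outputs = [out],
        executable = _compiler,
        arguments = [f.path for f in srcs] + ["-o", out.path],
    )
    return out

compile_subrule = subrule(
    implementation = _compile_subrule_impl,
    attrs = {
        # Private (leading underscore), label-typed, with a default.
        "_compiler": attr.label(
            default = "//tools:compiler",
            executable = True,
            cfg = "exec",
        ),
    },
)
```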

The Context Is Restricted

Subrules receive a SubruleContext, not a full RuleContext.

Attempting to access ctx.attr will fail:

def _hello_subrule_impl(ctx, name):
    print(ctx.attr.surname)
    return None

Error: 'subrule_ctx' value has no field or method 'attr'

Available Fields on SubruleContext

In the current API, the available members are:

  • actions
  • fragments
  • label
  • toolchains

Toolchains and Execution Groups

Subrules can declare:

subrule(
    implementation = _implementation,
    toolchains = [...],
    exec_group = ...,
    subrules = [...],
)

This allows:

  • Toolchain requirements per subrule
  • Encapsulation of execution groups

Only one exec group per subrule is supported.

Nested Subrules

A subrule may declare and call other subrules by using the subrules = [...] parameter.

Why Subrules Matter

Subrules address several long-standing architectural issues in Starlark rule design:

  • Avoid passing ctx everywhere
  • Encapsulate implicit dependencies
  • Encapsulate toolchain resolution
  • Improve composability
  • Make helper-function patterns more structured

Takeaways

Subrules are a powerful architectural improvement for composing complex Bazel rules, just remember:

  • You must declare subrules using subrules = [...].
  • You call a subrule like a function — not through ctx.
  • ctx is automatically injected.
  • Attributes must be private and label-based.
  • The context is intentionally restricted.
  • Subrules operate only at analysis time.

They are not replacements for existing mechanisms, but they are a much cleaner way to encapsulate reusable rule logic compared to traditional helper functions.

As adoption increases, they will likely become a default way of composing complex rules.

Applying Bazel Transitions to Third-Party Rules the Right Way

Historically, it has been tedious to apply transitions to rules we don’t control. You either had to maintain a fork, apply a patch, or wrap the rule — which is particularly annoying because you then have to manually forward the providers of the underlying rule.

Enter Rule Extensions

Bazel 8 introduced rule extensions, which allow you to augment an existing rule — somewhat similar to subclassing in object-oriented languages. Instead of copying or wrapping a rule manually, you can define a new rule that delegates to a parent rule while modifying selected behavior.

A Simple Transition

Let’s first create a transition that forces opt as the compilation mode:

def _opt_transition_impl(_settings, _attr):
    return {"//command_line_option:compilation_mode": "opt"}

opt_transition = transition(
    implementation = _opt_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:compilation_mode"],
)

Nothing fancy — this transition forces any rule it’s attached to into always building in opt, regardless of the --compilation_mode specified on the command line.

Applying the Transition

Here I’m demonstrating this with swift_library, but the concept is the same regardless of the rule:

opt_swift_library = rule(
    implementation = lambda ctx: ctx.super(),
    parent = swift_library,
    cfg = opt_transition,
)

That’s it. The new opt_swift_library behaves exactly like swift_library, except it always builds in opt mode.

For the sake of completeness, here is the entire .bzl file:

load("@rules_swift//swift:swift_library.bzl", "swift_library")

def _opt_transition_impl(_settings, _attr):
    return {"//command_line_option:compilation_mode": "opt"}

opt_transition = transition(
    implementation = _opt_transition_impl,
    inputs = [],
    outputs = ["//command_line_option:compilation_mode"],
)

opt_swift_library = rule(
    implementation = lambda ctx: ctx.super(),
    parent = swift_library,
    cfg = opt_transition,
)
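
Assuming the file above is saved as opt_rules.bzl (a hypothetical path), usage in a BUILD file is identical to swift_library:

```starlark
load("//:opt_rules.bzl", "opt_swift_library")

# Builds with --compilation_mode=opt regardless of the command line.
opt_swift_library(
    name = "critical_lib",
    srcs = ["Critical.swift"],
)
```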

Wrapping Up

There’s much more you can do with rule extensions, but being able to easily apply transitions to third-party rules is already a huge win.

To learn more about what’s possible, check out Keith’s excellent article on this topic.

Creating Custom Command-Line Flags in Bazel

It’s fairly common to need to alter your builds based on a command-line flag. Maybe you want to change the distribution target for a mobile app or enable additional debug functionality. An easy way to do that is via a custom command-line flag.

This is a topic where Bazel newcomers often struggle, so let’s explore how to create custom flags in Bazel.

Build Settings

A build setting is a rule—just like any other rule—but with additional capabilities. Specifically, build settings allow us to define custom command-line flags that influence the build configuration.

Pre-defined Settings

Because build settings are rules, we can write our own. However, bazel-skylib provides several commonly used build settings out of the box.

You can see the full list in the skylib repository.

For simplicity, we’ll use a pre-defined setting in this article. Keep in mind, though, that you’re not limited to what skylib offers—you can implement your own build setting if needed.
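
For reference, a minimal custom string setting looks roughly like this — essentially what skylib’s string_flag does, minus the values validation (the rule name here is made up):

```starlark
load("@bazel_skylib//rules:common_settings.bzl", "BuildSettingInfo")

def _my_string_flag_impl(ctx):
    # ctx.build_setting_value holds the value set on the command line,
    # or build_setting_default when the flag is not passed.
    return BuildSettingInfo(value = ctx.build_setting_value)

my_string_flag = rule(
    implementation = _my_string_flag_impl,
    # flag = True makes the setting settable on the command line.
    build_setting = config.string(flag = True),
)
```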

Creating a Flag

Suppose we want to build differently for our local development environment versus what we release to the App Store. We’ll call this a flavor, with two variants:

  • dev
  • store

To create a flag, we need to instantiate a build setting in BUILD.bazel:

load("@bazel_skylib//rules:common_settings.bzl", "string_flag")

string_flag(
    name = "flavor",
    values = ["dev", "store"],
    build_setting_default = "dev",
)

Because this is a target, it has a label. That’s why we pass it on the command line using label syntax:

bazel build //:my_target --//:flavor=store

However, defining the flag alone is not enough.

Reacting to Build Settings

Bazel does not branch directly on build settings. Instead, it evaluates configuration conditions, represented by config_setting targets.

So we need to associate our flag with configuration conditions:

config_setting(
    name = "dev",
    flag_values = {":flavor": "dev"},
    visibility = ["//visibility:public"],
)

config_setting(
    name = "store",
    flag_values = {":flavor": "store"},
    visibility = ["//visibility:public"],
)

Here’s the mental model:

  • string_flag defines a configurable value.
  • config_setting defines a configuration condition.
  • select() switches on config_setting.

This separation is important: select() operates on config_setting targets, not directly on flags.

Using select()

Now that we have configuration conditions, we can branch using select().

To demonstrate that our flag works, we’ll create a simple genrule that writes which flavor was selected:

genrule(
    name = "which_flavor",
    outs = ["output.txt"],
    cmd = select({
        ":dev": "echo dev > $@",
        ":store": "echo store > $@",
    }),
)

Now build the target:

bazel build :which_flavor --//:flavor=store

You should see something like this:

INFO: Analyzed target //:which_flavor (6 packages loaded, 10 targets configured).
INFO: Found 1 target...
Target //:which_flavor up-to-date:
  bazel-bin/output.txt
INFO: Elapsed time: 0.297s, Critical Path: 0.02s
INFO: 2 processes: 1 internal, 1 darwin-sandbox.
INFO: Build completed successfully, 2 total actions

Opening bazel-bin/output.txt will reveal:

store

Closing Thoughts

To create a custom command-line flag in Bazel, remember:

  1. You need a build setting (string_flag, bool_flag, etc.).
  2. You need one or more config_setting targets that describe configuration conditions.
  3. You use select() to branch on those configuration conditions.

If you’re not writing custom rules, using pre-defined settings from skylib is probably the right approach in most cases. If you need more flexibility, you can write your own build setting rule—but that’s a topic for another day.

Using features in Bazel rules

Often we want to allow users of our Bazel rules to enable or disable functionality on an as-needed basis. While Bazel offers several mechanisms for this, features are the simplest—and probably the right—approach for the majority of cases.

Using features

Features can be enabled in two ways:

  • On the command line: --features=my_feature
  • As a rule attribute: features = ["my_feature"]

NOTE: The command-line approach is cumulative, meaning multiple features can be enabled by repeating the --features flag with different values.

Reading features from rules

Say we have the following rule that writes a name to a file:

def _hello_impl(ctx):
    content = ctx.attr.first_name
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    ctx.actions.write(
        output = out,
        content = content,
    )
    return DefaultInfo(files = depset([out]))

hello = rule(
    implementation = _hello_impl,
    attrs = {
        "first_name": attr.string(),
    },
)

Now imagine we want to ensure there is a trailing newline at the end of the file, but we want this behavior to be opt-in. The simplest way to achieve that is:

  1. Check whether ctx.features contains hello.trailing_newline
  2. If yes, append \n
  3. Otherwise, keep the content as is

if "hello.trailing_newline" in ctx.features:
    content = ctx.attr.first_name + "\n"

To apply the feature from the command line:

bazel build :hello --features=hello.trailing_newline

Or directly on the target:

hello(
    name = "hello",
    first_name = "Adin",
    features = ["hello.trailing_newline"],
)

Disabling features

One neat trick about Bazel features is that they can be explicitly disabled, both on the command line and via the features attribute, by prefixing the feature name with -.

For example, to disable the hello.trailing_newline feature:

bazel build :hello --features=-hello.trailing_newline

Inside the rule implementation, Bazel exposes a conveniently named ctx.disabled_features list, which contains all features explicitly disabled for that target.
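
As a sketch built on the earlier hello rule, the implementation could react to explicit disabling like this:

```starlark
def _hello_impl(ctx):
    content = ctx.attr.first_name
    if "hello.trailing_newline" in ctx.features:
        content = content + "\n"
    # Features disabled with a "-" prefix show up here, so a rule can
    # distinguish "not enabled" from "explicitly turned off".
    if "hello.trailing_newline" in ctx.disabled_features:
        print("trailing newline explicitly disabled for " + str(ctx.label))
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    ctx.actions.write(output = out, content = content)
    return DefaultInfo(files = depset([out]))
```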

A few things to know

  • Features are global, not scoped to a specific rule. This is why prefixing feature names (for example, hello.trailing_newline) is strongly recommended. rules_swift follows the same convention.
  • Changing --features invalidates the analysis cache.
  • Bazel also provides a --host_features flag, which applies to the execution (host) configuration.

Wrapping up

This is one of those simple mechanisms that turns out to be extremely useful. So next time you think about adding an extra attribute to a rule, think twice—features are likely the better choice.

Executing actions from Bazel aspects

So far I have gone through writing the aspect itself, but I haven’t shown how to run actions from it to do something useful. Remember: we can do very little during Starlark evaluation (Bazel’s analysis phase). If we want to read files, inspect sources, or produce results, the work has to happen in the execution phase by running actions.

NOTE: Before continuing, it helps to read my earlier articles on Bazel aspects, since I’m going to assume that background.

ctx.actions

Every aspect and rule gets a ctx object, which gives us access to ctx.actions — including run(). Sticking with the “unused Swift deps” example, here’s what invoking a tool looks like:

ctx.actions.run(
    inputs = [input] + source_files,
    outputs = [out],
    executable = ctx.executable._tool,
    arguments = [arguments],
)

This snippet captures the essentials of a Bazel action:

  • Inputs: all files the tool may read
  • Outputs: files the tool is expected to generate
  • Executable: the tool binary itself
  • Arguments: the tool’s command-line arguments

Bazel uses these declarations to compute action keys, decide when work must be re-executed, and make remote/local caching correct.

Inputs

Here, inputs are:

  • a JSON file that describes what we want to be analyzed
  • the Swift source files to scan

First, we build the JSON payload:

payload = {
    "targetLabel": str(target.label),
    "sources": [f.path for f in source_files],
    "deps": deps_to_analyze,
}

encoded_json = json.encode(payload)
input = ctx.actions.declare_file(ctx.label.name + "_input.json")

Like I described in my first article, declaring the file is not enough — we also need an action to write it:

ctx.actions.write(
    output = input,
    content = encoded_json,
)

Outputs, arguments and tool

In this case the tool produces a single JSON output that contains the unused dependency results:

out = ctx.actions.declare_file(ctx.label.name + "_unused_deps.json")

For arguments, instead of building a plain list, Bazel provides ctx.actions.args():

arguments = ctx.actions.args()
arguments.add(input)
arguments.add("--output")
arguments.add(out)

One reasonable question is: why bother with Args instead of a list?

Because command lines can get huge in real builds (compilers and linkers are the classic examples). Args lets Bazel represent and expand arguments efficiently without paying the cost of building enormous Starlark lists.

To pass it to ctx.actions.run, the Args object is simply wrapped in a list:

arguments = [arguments]

There is much more information about this topic in the official Bazel documentation.

Running the tool

To run a tool from an aspect (or a rule), the two common approaches are:

  • Private attribute
  • Toolchains

Here I use a private attribute for simplicity (I’ve covered toolchains in a previous post).

We add a private attribute by prefixing it with _:

unused_swift_deps_aspect = aspect(
    implementation = _unused_swift_deps_impl,
    attrs = {
        "_tool": attr.label(
            default = "//tools/FindUnusedSwiftDeps",
            executable = True,
            cfg = "exec",
        ),
    },
    attr_aspects = ["deps"],
)

Then we give it a default label (the tool target), mark it executable, and set cfg = "exec" so it runs on the execution platform.

Inside the implementation, you reference it as:

tool = ctx.executable._tool

At that point, you can wire it into ctx.actions.run(...) and Bazel will execute it as part of the build.

NOTE: I intentionally omitted the code for the tool itself to keep the focus on Starlark.

Closing thoughts

At this point it should be clear how powerful the concept of aspects in Bazel can be. I probably won’t be writing more about aspects directly, but I may share some of what I wrote and found useful.

Utilizing Bazel aspect_hints rule attribute

In the last article I provided a short introduction to the concept of aspects in Bazel. To keep the series going, today I will go over the somewhat lesser-known feature aspect_hints.

What is aspect_hints?

In simple terms, it is an implicit attribute available on every rule that is meant to be consumed by an attached aspect, not by the rule implementation itself. This allows us to convey “hints” to aspects in a lightweight manner.

For example, we may have an aspect that we attach to swift_library to report unused deps, but for whatever reason we need a way to tell the aspect to ignore a specific dependency.

unused_swift_deps aspect

Below is my dummy implementation of an aspect that finds and reports unused Swift deps. It is not a real implementation, but it is enough to showcase the potential of aspect_hints.

So in my aspects.bzl I have the following:

load("@rules_swift//swift:providers.bzl", "SwiftInfo")

UnusedSwiftDepsInfo = provider(fields = ["report_files"])

def _unused_swift_deps(target, ctx):
    labels = []

    if hasattr(ctx.rule.attr, "deps"):
        for dep in ctx.rule.attr.deps:
            if SwiftInfo in dep:
                for module in dep[SwiftInfo].direct_modules:
                    labels.append(module.name)
                    labels.append(str(dep.label))

    out = ctx.actions.declare_file("unused_deps_" + ctx.label.name + ".txt")
    ctx.actions.write(
        output = out,
        content = "\n".join(labels),
    )

    transitive_files = []
    for dep in ctx.rule.attr.deps:
        if UnusedSwiftDepsInfo in dep:
            transitive_files.append(dep[UnusedSwiftDepsInfo].report_files)

    all_files = depset(direct = [out], transitive = transitive_files)
    return [
        UnusedSwiftDepsInfo(report_files = all_files),
        DefaultInfo(files = all_files),
    ]

unused_swift_deps_aspect = aspect(
    implementation = _unused_swift_deps,
    attr_aspects = ["deps"],
)

All of the concepts in this code are explained in my previous article on aspects, so I won’t go over them again.

Writing an aspect hint

As mentioned above, now we would like the ability to instruct this aspect to ignore certain labels. The way to achieve that will feel familiar:

  1. Create a provider
  2. Make an implementation function
  3. Create a rule

First we create a provider to be able to easily give a hint to our aspect:

UnusedSwiftDepsHintInfo = provider(fields = ["ignore_deps"])

Then an implementation function:

def _ignore_unused_swift_deps_hint_impl(ctx):
    return [UnusedSwiftDepsHintInfo(ignore_deps = ctx.attr.ignore_deps)]

Finally, a rule with the single job of filling the UnusedSwiftDepsHintInfo provider:

ignore_unused_swift_deps_hint = rule(
    implementation = _ignore_unused_swift_deps_hint_impl,
    attrs = {
        "ignore_deps": attr.label_list(mandatory = True),
    },
)

Reading aspect_hints from the aspect implementation function

Earlier I stated that aspect_hints is an implicit rule attribute that can be read from an attached aspect. Let’s go over how to do that.

In my aspect for reporting unused Swift deps, it is simply a matter of reading the attribute like any other, checking if it carries the desired provider, and then acting upon it.

Because aspect_hints is a list of labels, I will iterate over it, check if it carries UnusedSwiftDepsHintInfo, and skip writing the ignored labels to the report file:

    ignored_deps = []

    for aspect_hint in getattr(ctx.rule.attr, "aspect_hints", []):
        if UnusedSwiftDepsHintInfo in aspect_hint:
            for ignored_dep in aspect_hint[UnusedSwiftDepsHintInfo].ignore_deps:
                ignored_deps.append(ignored_dep.label)

Now that we collected labels to ignore, we just need to make sure to actually skip them when reporting:

if dep.label in ignored_deps:
    print("Ignoring {} dep".format(str(dep.label)))
    continue

And that’s it.

Applying it to the swift_library

Here is how my BUILD.bazel looks when the newly created aspect hint is applied:

load("@rules_swift//swift:swift_library.bzl", "swift_library")
load("//:aspects.bzl", "ignore_unused_swift_deps_hint")

ignore_unused_swift_deps_hint(
    name = "ignore_deps_hint",
    ignore_deps = [":lib2"],
)

swift_library(
    name = "lib",
    srcs = ["main.swift"],
    aspect_hints = [":ignore_deps_hint"],
    deps = [":lib2"],
)

swift_library(
    name = "lib2",
    srcs = ["main.swift"],
    deps = [":lib3"],
)

swift_library(
    name = "lib3",
    srcs = ["main.swift"],
    module_name = "CustomModuleName",
)

And if we build:

bazel build :lib --aspects aspects.bzl%unused_swift_deps_aspect

We see that our lib2 dep is being ignored:

DEBUG: /Users/adincebic/developer.noindex/applying-aspects/aspects.bzl:30:30: Ignoring @@//:lib2 dep
INFO: Analyzed target //:lib (1 packages loaded, 5 targets configured, 5 aspect applications).
INFO: Found 1 target...
INFO: Elapsed time: 0.110s, Critical Path: 0.00s
INFO: 2 processes: 9 action cache hit, 2 internal.
INFO: Build completed successfully, 2 total actions

And of course, if we inspect bazel-bin/unused_deps_lib.txt, the file is empty.

Wrapping up

aspect_hints is a relatively simple idea that allows us to achieve powerful things. I kept the example intentionally simple, but feel free to explore widely used rulesets that rely on aspect_hints, like rules_swift for interoperability with C and other languages.

Introduction to aspects in Bazel

Bazel’s way of attaching additional information and behavior to the build graph is called aspects. They allow us to add extra logic to rules without modifying the rule implementations themselves. Common use cases include validation actions or analysis tasks such as detecting unused dependencies. These are just a few examples; aspects enable much more complex workflows.

A note on using aspects

Earlier, I mentioned that aspects allow us to attach additional logic to rules without modifying rule code. This is true, but only in one of the two ways aspects can be used:

  • Command-line invocation – aspects are applied externally at build time. This is what we will focus on in this article.
  • Attribute attachment – aspects are attached directly to rule attributes, which requires modifying the rule definition. This approach will be covered in the next article.

Writing aspects

Like most things in Bazel, aspects are rule-like and generally follow this pattern:

  1. Write an implementation function that accepts two arguments: target and ctx
  2. Optionally execute actions
  3. Return providers
  4. Create the aspect by calling the aspect() function and passing the implementation and configuration arguments
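The four steps above can be sketched as a minimal skeleton (all names here are illustrative):

```starlark
def _my_aspect_impl(target, ctx):
    # 1. Inspect the target and the attributes of the rule it came from
    #    via target and ctx.rule.attr.
    # 2. Optionally register actions via ctx.actions.
    # 3. Return providers describing what we found or produced.
    return []

# 4. Create the aspect by calling aspect() with the implementation
#    function and any configuration arguments.
my_aspect = aspect(
    implementation = _my_aspect_impl,
)
```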

In this example, we will write an aspect that operates on swift_library targets (specifically, anything that propagates the SwiftInfo provider). The aspect will generate a .txt file containing the names of the Swift modules that the target depends on.

Given the following target:

swift_library(
    name = "lib",
    deps = [":lib2"],
    srcs = glob(["*.swift"]),
)

When the aspect is applied to this target, it will produce a text file containing lib2.

Loading required providers

In aspects.bzl, we first load the SwiftInfo provider from rules_swift:

load("@rules_swift//swift:providers.bzl", "SwiftInfo")

Next, we define our own provider to propagate the generated report files transitively:

SwiftDepsInfo = provider(fields = ["report_files"])

Aspect implementation

Now we can write the implementation function:

def _swift_deps(target, ctx):
    module_names = []

    if hasattr(ctx.rule.attr, "deps"):
        for dep in ctx.rule.attr.deps:
            if SwiftInfo in dep:
                for module in dep[SwiftInfo].direct_modules:
                    module_names.append(module.name)

    out = ctx.actions.declare_file("swift_deps_" + ctx.label.name + ".txt")
    ctx.actions.write(
        output = out,
        content = "\n".join(module_names),
    )

    transitive_files = []
    if hasattr(ctx.rule.attr, "deps"):
        for dep in ctx.rule.attr.deps:
            if SwiftDepsInfo in dep:
                transitive_files.append(dep[SwiftDepsInfo].report_files)

    all_files = depset(direct = [out], transitive = transitive_files)
    return [
        SwiftDepsInfo(report_files = all_files),
        DefaultInfo(files = all_files),
    ]

So the code above does a few simple things:

  1. Collects Swift module names from the deps attribute via the SwiftInfo provider
  2. Declares an output file and writes the module names into it
  3. Collects transitive report files so they materialize as build artifacts
  4. Returns both a custom provider (SwiftDepsInfo) and DefaultInfo

Creating the aspect

The final step is to define the aspect itself:

swift_deps_aspect = aspect(
    implementation = _swift_deps,
    attr_aspects = ["deps"],
)

Note: The attr_aspects = ["deps"] argument tells Bazel to propagate this aspect along the deps attribute. In other words, when the aspect is applied to a rule, Bazel will also apply it to every target listed in that rule’s deps.

Running the aspect

Given the following BUILD.bazel file:

load("@rules_swift//swift:swift_library.bzl", "swift_library")

swift_library(
    name = "lib",
    srcs = ["main.swift"],
    deps = [":lib2"],
)

swift_library(
    name = "lib2",
    srcs = ["main.swift"],
    deps = [":lib3"],
)

swift_library(
    name = "lib3",
    srcs = ["main.swift"],
    module_name = "CustomModule",
)

Running:

bazel build :lib --aspects aspects.bzl%swift_deps_aspect

will produce three additional text files which you can locate under bazel-bin/:

  • swift_deps_lib.txt – contains lib2
  • swift_deps_lib2.txt – contains CustomModule
  • swift_deps_lib3.txt – empty, since it has no dependencies

Why does swift_deps_lib2.txt contain CustomModule instead of lib3? Because we are explicitly extracting the Swift module name from the SwiftInfo provider. If instead we wanted the target name, we could use dep.label.name, or str(dep.label) to get the fully qualified Bazel label.
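To illustrate, if we preferred target names over module names, the inner loop of the implementation could instead read (a sketch; only the appended value changes):

```starlark
for dep in ctx.rule.attr.deps:
    if SwiftInfo in dep:
        # Record the target name rather than the Swift module name.
        module_names.append(dep.label.name)    # e.g. "lib3"
        # Or record the fully qualified Bazel label:
        # module_names.append(str(dep.label))  # e.g. "//:lib3"
```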

Going further

This article was a brief introduction to Bazel aspects, intended to set the foundation for the next one. In the next article, we will look at more concrete use cases and explore attaching aspects directly to rule attributes from Starlark, as well as using some of the other arguments of the aspect() function.

Bazel toolchains, repository rules and module extensions

In my previous article I showed how to create a stupidly simple Bazel rule. While that rule was not useful in any way, shape, or form, it provided a gentle introduction to writing Bazel rules and helped build a mental model around them.

This week we will look at toolchains, which is a more advanced concept but becomes extremely important once your rules depend on external tools.

This walkthrough wires up macOS only to keep the example small. Adding Linux or Windows is the same pattern: download a different binary and register another toolchain(...) target with different compatibility constraints.

Toolchains

A Bazel toolchain is a way of abstracting tools away from rules. Many people describe it as dependency injection.

More precisely: toolchains are dependency injection with platform-based resolution — Bazel selects the right implementation based on the execution and target platforms.

Instead of hard-coding a tool inside a rule, the rule asks Bazel to give it “the correct tool for this platform”.

Example: rules_pkl

We will create an oversimplified rule called pkl_library that turns .pkl files into their JSON representation using the PKL compiler. PKL is Apple’s programming language for producing configurations. There is an official rules_pkl ruleset, and this article doesn’t even scratch the surface of the capabilities it offers.

To do this we need:

  1. A way to download the PKL binary
  2. A toolchain
  3. A rule that uses the toolchain

Downloading the PKL binary

We start by writing a repository rule that downloads the PKL compiler and exposes it as a Bazel target.

Create repositories.bzl:

def _pkl_download_impl(repository_ctx):
    repository_ctx.download(
        url = repository_ctx.attr.url,
        output = "pkl_bin",
        executable = True,
        sha256 = repository_ctx.attr.sha256,
    )

    repository_ctx.file(
        "BUILD.bazel",
        """
load("@bazel_skylib//rules:native_binary.bzl", "native_binary")

native_binary(
    name = "pkl",
    src = "pkl_bin",
    out = "pkl",
    visibility = ["//visibility:public"],
)
""",
    )

pkl_download = repository_rule(
    implementation = _pkl_download_impl,
    attrs = {
        "url": attr.string(mandatory = True),
        "sha256": attr.string(mandatory = True),
    },
)
This does two things:

  • Downloads the PKL binary
  • Wraps it using native_binary, which creates a proper Bazel executable without relying on a shell

The resulting binary is available as @pkl_macos//:pkl.

Exposing the repository via a module extension

We’re still using a repository rule (https://bazel.build/external/repo) to do the download; bzlmod module extensions (https://bazel.build/external/extension) are just how we call that repository rule from MODULE.bazel.

Create extensions.bzl:

load("//:repositories.bzl", "pkl_download")

def _pkl_module_extension_impl(ctx):
    pkl_download(
        name = "pkl_macos",
        url = "https://github.com/apple/pkl/releases/download/0.30.2/pkl-macos-aarch64",
        sha256 = "75ca92e3eee7746e22b0f8a55bf1ee5c3ea0a78eec14586cd5618a9195707d5c",
    )

pkl_extension = module_extension(
    implementation = _pkl_module_extension_impl,
)

In MODULE.bazel:

bazel_dep(name = "platforms", version = "1.0.0")
bazel_dep(name = "bazel_skylib", version = "1.9.0")

pkl_extension = use_extension("//:extensions.bzl", "pkl_extension")
use_repo(pkl_extension, "pkl_macos")

Now the binary can be referenced as @pkl_macos//:pkl.

Creating the toolchain

A toolchain is just a rule that returns a ToolchainInfo provider.

toolchains.bzl:

def _pkl_toolchain_impl(ctx):
    return [platform_common.ToolchainInfo(
        pkl_binary = ctx.executable.pkl_binary,
    )]

pkl_toolchain = rule(
    implementation = _pkl_toolchain_impl,
    attrs = {
        "pkl_binary": attr.label(
            executable = True,
            cfg = "exec",
            allow_files = True,
            mandatory = True,
        ),
    },
)

The important part is cfg = "exec", which ensures the binary runs on the execution platform.

Registering the toolchain

In BUILD.bazel:

load("//:toolchains.bzl", "pkl_toolchain")

toolchain_type(
    name = "pkl_toolchain_type",
    visibility = ["//visibility:public"],
)

pkl_toolchain(
    name = "pkl_toolchain_macos_impl",
    pkl_binary = "@pkl_macos//:pkl",
)

toolchain(
    name = "pkl_toolchain_macos",
    toolchain = ":pkl_toolchain_macos_impl",
    toolchain_type = ":pkl_toolchain_type",
    exec_compatible_with = ["@platforms//os:macos"],
    target_compatible_with = ["@platforms//os:macos"],
)

To register the toolchain we need to modify our MODULE.bazel:

register_toolchains("//:pkl_toolchain_macos")

Using the toolchain in a rule

Now we can write pkl_library.

Create rules.bzl:

def _pkl_library_impl(ctx):
    toolchain = ctx.toolchains["//:pkl_toolchain_type"]
    binary = toolchain.pkl_binary

    compiled_files = []
    for src in ctx.files.srcs:
        compiled_file_name = src.basename.replace(".pkl", ".json")
        compiled_file = ctx.actions.declare_file(compiled_file_name)

        ctx.actions.run(
            executable = binary,
            tools = [binary],
            inputs = [src],
            outputs = [compiled_file],
            arguments = [
                "eval",
                src.path,
                "--format",
                "json",
                "-o",
                compiled_file.path,
            ],
            mnemonic = "PKLCompile",
        )

        compiled_files.append(compiled_file)

    return [DefaultInfo(files = depset(compiled_files))]

pkl_library = rule(
    implementation = _pkl_library_impl,
    attrs = {
        "srcs": attr.label_list(
            allow_files = [".pkl"],
            mandatory = True,
        ),
    },
    toolchains = ["//:pkl_toolchain_type"],
)

The rule never knows which concrete PKL binary is being used — it only sees the resolved toolchain.

NOTE: Creating a new action per file in a for loop, as opposed to invoking the tool once with the whole list of files, is a decision we need to make on a case-by-case basis. Sometimes running multiple actions in parallel is faster than a single invocation of the tool; it heavily depends on the tool itself, and it shows how flexible Bazel can be in these scenarios.
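For comparison, a batched variant would register one action over all sources. The sketch below is illustrative only: the argument list is hypothetical (PKL’s actual multi-file invocation differs), and a real tool would need a way to map many inputs to many outputs in a single call.

```starlark
def _pkl_library_batch_impl(ctx):
    toolchain = ctx.toolchains["//:pkl_toolchain_type"]
    outs = [
        ctx.actions.declare_file(src.basename.replace(".pkl", ".json"))
        for src in ctx.files.srcs
    ]
    # One action for all files: fewer process spawns, but no per-file
    # parallelism, and a change to any input reruns the whole batch.
    ctx.actions.run(
        executable = toolchain.pkl_binary,
        tools = [toolchain.pkl_binary],
        inputs = ctx.files.srcs,
        outputs = outs,
        # Hypothetical arguments; not PKL's real multi-file interface.
        arguments = [src.path for src in ctx.files.srcs],
        mnemonic = "PKLCompileBatch",
    )
    return [DefaultInfo(files = depset(outs))]
```

The per-file approach also gives Bazel finer-grained caching: editing one .pkl file invalidates only that file’s action.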

Final thoughts

This was a practical look at repository rules, module extensions, toolchains, and how they fit together.

Toolchains are one of Bazel’s most powerful features. Once you start writing rules that depend on real tools (compilers, linters, generators), this pattern becomes unavoidable.

This marks the completion of my second article in which I try to give real-world examples of Bazel’s various features.