diff --git a/.gitignore b/.gitignore
index 8435d84..6016e1e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -36,4 +36,8 @@ service_account_key.json
 *.zip
 release_work
 *.asc
-secluso-v*
\ No newline at end of file
+secluso-v*
+
+# Exclude local release staging outputs and imported signing material
+/releases/release_assets/
+*.p12
diff --git a/releases/.gitignore b/releases/.gitignore
index ec2a80e..36d8cff 100644
--- a/releases/.gitignore
+++ b/releases/.gitignore
@@ -1 +1,7 @@
-builds
\ No newline at end of file
+builds
+
+# Keep release helper scripts tracked
+!sign_macos_release.sh
+!verify_macos_release.sh
+!sign_windows_release.sh
+!verify_windows_release.sh
diff --git a/releases/README.md b/releases/README.md
index 626c914..3879917 100644
--- a/releases/README.md
+++ b/releases/README.md
@@ -1,200 +1,178 @@
-# Secluso Reproducibility Guide
+# Secluso Release Verification Guide
-This folder is the release pipeline. It does three things: builds files, writes a manifest with hashes and build details, and compares two runs to check if they match.
+This README explains how Secluso releases are verified and what guarantees that verification is intended to provide.
-If you just want to check a release, you can ignore most internals and do two quick steps: rebuild, then compare your run directory with the official one.
+There are two related but distinct cases. One is the regular Secluso binaries, such as Raspberry Pi and server artifacts. The other is the Secluso deploy tool, which is a desktop application for Linux, macOS, and Windows. The two cases share the same reproducible-build philosophy, but the desktop operating systems have different release requirements, so the verification model below is described carefully to reflect that.
-## Fast path: verify one release
+If you only want to verify a release, use the commands in the next section. If you want the technical details and exact guarantees, see the sections below.
-Run a build for the same target and profile as the release you are checking: +## Fast Path - ./build.sh --target ipcamera --profile all +For the deploy-tool release, authenticate the release files you downloaded before comparing any build output. From the deploy-tool release-asset directory shown in this guide, verify the maintainer-signed checksum file first, then verify the files named by that checksum file: -Then compare your run output against the official artifact directory: + gpg --verify deploy-tool-SHA256SUMS.txt.jkaczman.asc deploy-tool-SHA256SUMS.txt + gpg --verify deploy-tool-SHA256SUMS.txt.arrdalan.asc deploy-tool-SHA256SUMS.txt + sha256sum -c deploy-tool-SHA256SUMS.txt - ./build.sh --compare ./builds/TIMESTAMP official-binaries +Both checksum signatures should be checked for complete assurance. The signing keys are the same project co-founder keys listed in SECURITY.md: Ardalan Amiri Sani, fingerprint 1A9A1BA3090FA78E946DC0C0301497925DCCE876, and John Kaczman, fingerprint 7785755F1A24FF04CE0E12575DF5E79230C57C4A. This step proves that the files on disk are the files covered by the signed checksum statement. Use the same pattern for regular binary release bundles. The reproducible-build checks below then prove what those files correspond to. -Use the run directory that directly contains manifest.json. Do not point compare at an inner artifacts folder. +Before rebuilding, make sure your source checkout is the release source revision: -When everything matches, the script prints: + git fetch --tags + git tag -v RELEASE_TAG + git switch --detach RELEASE_TAG - Reproducibility check PASSED +If a release is bound to an exact commit instead of a signed tag, compare git rev-parse HEAD against the commit published with the release. Do not use a moving branch name (such as the main branch) as the source identity for a verification run. 
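The `sha256sum -c` step above boils down to a simple recomputation. As an illustration only (the release flow should keep using `gpg` and `sha256sum` directly), here is a minimal Python sketch of what that check does against a two-column `SHA256SUMS`-style file:

```python
import hashlib
import os

def check_sha256sums(sums_path, base_dir="."):
    """Recompute SHA-256 for each file named in a SHA256SUMS-style list.

    Each non-comment line is "<hex digest>  <filename>"; returns the list of
    files whose on-disk hash does not match (an empty list means all good)."""
    failures = []
    with open(sums_path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            digest, name = line.split(None, 1)
            name = name.lstrip("*")  # sha256sum marks binary mode with '*'
            h = hashlib.sha256()
            with open(os.path.join(base_dir, name), "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            if h.hexdigest() != digest.lower():
                failures.append(name)
    return failures
```

Note that this only replays the hash check; it does not replace the `gpg --verify` step, which is what ties the checksum list to the maintainers in the first place.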
-If it does not match, you get explicit mismatch lines such as version mismatch, lock digest mismatch, toolchain digest mismatch, or binary hash mismatch. +To verify the regular Secluso binaries, rebuild the same target and profile as the release you want to check. For example, if you are verifying a Raspberry Pi release, run: -
-Don't have an ARM64 machine for Raspberry Pi builds? Click here. -Unfortunately, not everyone has an ARM64 machine. We wanted to provide a guide for people who don't, so that you're able to verify our builds as well. + ./build.sh --target raspberry --profile all -There are a couple of ARM64 VPS providers. Most of them require that you do identity verification. One that doesn't, that I've personally tested, is https://servers.guru/arm-vps/. You can get a 2-core 4GB ARM VPS for $7/mo. Note that we are not affiliated with servers.guru whatsoever, and that is not an affiliate link. We like that they provide anonymous payment options and seem to try to respect your privacy. Any ARM64 VPS will work. Another option that's more popular is https://www.hetzner.com/cloud (Ampere option), but they'll likely require you to upload identity verification documents (such as your passport). +Then compare your run against the unpacked official verification bundle for that release: -Below is a guide instructing how to get everything setup on the VPS and run from scratch. + ./build.sh --compare builds/TIMESTAMP official-binaries -1. Provision with Ubuntu 24.04 -2. Use the credentials from the email to log in. Change your password to something secure after logging in (you will be prompted on the first login) -3. Install the latest Rust (https://www.rust-lang.org/tools/install) -4. Install Docker (https://docs.docker.com/engine/install/ubuntu/) -5. Update the list of available software packages by running `sudo apt-get update` -6. Install the command line utility jq (used for parsing JSON) by running `apt-get install jq` +Use the run directory that directly contains manifest.json. Do not point compare at an inner artifacts/ directory. If everything matches, the script prints Reproducibility check PASSED. -The following steps assume you are using version v0.1.0. If we have a release after this and have not updated the version number here, please change the version number accordingly. -1. 
Acquire the code from our latest release: `wget https://github.com/secluso/secluso/archive/refs/tags/v0.1.0.zip` -2. Unzip the zip file `apt install unzip` then `unzip v0.1.0.zip` (unzips into folder secluso-0.1.0) -3. Change your directory into the releases folder in the secluso-0.1.0 directory: `cd secluso-0.1.0/releases` -4. Run the build.sh script: `./build.sh` with your preferred arguments, which are detailed in the description below. -5. Fetch our latest release's binary/manifest ZIP file via wget, `wget https://github.com/secluso/secluso/releases/download/v0.1.0/secluso-v0.1.0.zip` -6. Verify the zip file: `echo "483a2e347bb0cd895e00c2434576849641097e8574ba6ceceb151e009c64e77b secluso-v0.1.0.zip" | sha256sum -c -` -7. Unzip the zip file: `unzip secluso-v0.1.0.zip -d official-binaries` (unzips into folder official-binaries) -8. Run the compare check: `./build.sh --compare builds/ official-binaries` (replace with the run folder that contains `manifest.json` and `artifacts/`) +To verify the Linux deploy tool, build the matching Linux deploy profile: -If you see `REPRODUCIBILITY CHECK PASSED`, then you're all set! We do not recommend casually building with this in case your server is compromised, we only recommend using it as a verification against our released binaries. -
+ ./build.sh --target deploy --profile linux -## What is pinned, and where +Then compare that run against the unpacked official Linux deploy-tool verification bundle: -Toolchain image digests are pinned in digests.lock.env in this directory. Current values include: + ./build.sh --compare builds/TIMESTAMP official-deploy-linux - RUST_DIGEST__AARCH64_UNKNOWN_LINUX_GNU=4c632e493dfa97f0fe014c3910d1690c149bba85ed8678d47d3563ec6f258ead - RUST_DIGEST__X86_64_UNKNOWN_LINUX_GNU=3f6e6f8d8725a65a2db964bb828850f888d430c68784d661f753144e5d787207 - RUST_DIGEST__X86_64_APPLE_DARWIN=3f6e6f8d8725a65a2db964bb828850f888d430c68784d661f753144e5d787207 - RUST_DIGEST__AARCH64_APPLE_DARWIN=4c632e493dfa97f0fe014c3910d1690c149bba85ed8678d47d3563ec6f258ead +To verify the macOS deploy tool, first build the matching unsigned local app: -Rust binary builds run through Docker Buildx with a BuildKit builder pinned to moby/buildkit:v0.23.0. The builder is created for the run and removed when the script exits. + ./build.sh --target deploy --profile macos-arm64 -Dependencies are also kept fixed through lockfiles. For Rust crates, each crate lockfile hash is written to the manifest as crate_lock_sha256. In deploy mode, the lock hash is computed from both deploy/src-tauri/Cargo.lock and deploy/pnpm-lock.yaml. +Then verify the signed release against that local build: -## What build.sh actually does + ./verify_macos_release.sh --local-run builds/TIMESTAMP --triple aarch64-apple-darwin --release /path/to/Secluso-Deploy-1.0.0-macos-arm64.app.zip -The entrypoint is build.sh. It has two modes. +The macOS verifier must be run on macOS with codesign, spctl, and xcrun stapler available, usually macOS with the Xcode Command Line Tools installed. 
-Build mode: +To verify the Windows deploy tool, first build the matching unsigned local installer: - ./build.sh --target TARGET --profile PROFILE + ./build.sh --target deploy --profile windows-x64 -Optional two-run self-check: +Then verify the distributed signed release against that local build: - ./build.sh --target TARGET --profile PROFILE --test-reproduce + ./verify_windows_release.sh --local-run builds/TIMESTAMP --triple x86_64-pc-windows-msvc --release /path/to/Secluso-Deploy-1.0.0-windows-x64-setup.exe [--signtool PATH] -Compare mode: +The Windows verifier must be run somewhere Microsoft signtool and PowerShell's Get-AuthenticodeSignature are available, usually Windows with the Windows SDK installed or Git Bash pointing at signtool.exe. - ./build.sh --compare RUN_A RUN_B +## The Main Idea -A normal build writes to builds/UNIX_TIMESTAMP. A self-check writes two sibling runs at builds/UNIX_TIMESTAMP/run1 and builds/UNIX_TIMESTAMP/run2, then compares them automatically. +Secluso supports two different verification paths. The regular Secluso binaries and the Linux deploy-tool artifacts are verified by direct reproducible-build comparison. The macOS and Windows deploy-tool artifacts are verified as signed (platform-based) releases. In the first path, the released file itself is expected to be byte-for-byte identical to a local rebuild. In the signed macOS and Windows paths, the released file is expected to differ from the unsigned local build **ONLY** where Apple or Microsoft signing requires it to differ, and the verifier then checks that every remaining byte still matches the reproducible local build. -Each completed run has this shape: +Linux allows for distributing the deploy tool as a directly reproducible artifact. macOS and Windows cannot be distributed in a platform-accepted form without additional signing state. A public macOS release needs Developer ID signing, hardened runtime, notarization, and a stapled ticket. 
A public Windows release needs Authenticode signing and a trusted timestamp. Those platform signing steps modify the artifact after the unsigned reproducible build is produced, so the verifier has to perform a much more precise verification than a raw file hash comparison. - RUN_DIR/ - manifest.json - artifacts/TARGET_TRIPLE/... - distribution/ +The chain of trust here has three parts. The signed checksum file authenticates the downloaded release files as the maintainer-published files. The authenticated release source revision and local rebuild establish the payload that should be produced from the pinned build inputs. The platform-specific verifiers then prove that the macOS and Windows signing bytes are confined to OS-defined signing metadata, while every executable or installer-payload byte outside those regions matches the local reproducible build. -The distribution folder includes a verification tarball and a checksum file. You can share that bundle so someone else can run the same compare step. +## What The Verification Guarantees Mean -## Targets and profile map in this script +For the regular Secluso binaries and for the Linux deploy-tool artifacts, the guarantee is simple. If verification passes, the released artifact matches a local rebuild byte-for-byte. The compare logic checks metadata first so that you are not comparing unlike inputs, then it recomputes hashes from disk and requires the actual produced files to match exactly. If the inputs differ, the compare step explains that mismatch. If the inputs match but the resulting artifact bytes do not, verification fails. -Targets: -raspberry, ipcamera, server, all, deploy +For macOS and Windows deploy-tool releases, the guarantee has a different scope. It is not a raw-file guarantee, because the platform-accepted release artifact is expected to contain signing material that the unsigned local build does not contain. 
The verifier instead checks two properties: that the distributed release satisfies the platform's signing policy, and that every byte outside the exact signing-managed regions still matches the unsigned reproducible build. That is a byte-for-byte **payload** guarantee for signed desktop releases.
-Profiles currently accepted:
+## How Direct Comparison Works
-raspberry:
-all, core, camerahub, motion_ai_cli
+The direct comparison path is the one used for the regular Secluso binaries and for Linux deploy-tool artifacts. build.sh writes a run directory containing a manifest.json, the produced artifacts, and a distributable verification bundle. The official comparison input must be that verification bundle or an equivalent run-style directory containing manifest.json and the artifact paths named by it, not just a folder of user-facing binaries. In compare mode, the script does not simply trust the hashes recorded in the manifest. It recomputes the hashes from disk and checks that the files actually match what the manifest claims. It also checks that the compared runs agree on the relevant build identity information, such as crate version, lock digest, and toolchain identity, before treating the byte comparison as meaningful.
-ipcamera:
-all, camerahub
+Reproducibility is only useful when you compare like with like. If the metadata already differs, a byte mismatch is not very informative. If the metadata matches and the bytes still differ, that is the situation to treat as an actual reproducibility failure.
-server:
-server
+## The macOS Technical Guarantee
-all:
-all, release, test
+verify_macos_release.sh compares the signed, notarized, and stapled public app against the unsigned reproducible local app. It first checks the release-policy side via codesign verification, matching bundle identifier, expected Team ID, hardened runtime, stapled notarization ticket, xcrun stapler validation, and spctl assessment.
Those checks show that the artifact is accepted by the verifier host's Apple tooling under the expected release policy and signing identity. They are NOT treated as proof that the app is benign or source-reproducible. -deploy: -all, linux, macos, windows, linux-x64, linux-arm64, macos-x64, macos-arm64, windows-x64, windows-arm64 +After that policy check, the script performs the reproducibility side of the verification. It materializes a copy of the distributed app, normalizes the release-signing and distribution metadata Apple packaging is allowed to introduce, and then compares that normalized result against the local build. That includes checking the entitlements policy, checking stable Mach-O CodeDirectory page hashes against the local build, and checking the layout invariants around signature regions. The technical guarantee is, therefore, if the script passes, the app is both a valid Apple release artifact and still the same app you would have built locally, modulo the exact Apple-managed signing and distribution metadata regions that must differ. -Two practical notes from the implementation: +The Mach-O part of this check depends on how macOS records code-signing data. LC_CODE_SIGNATURE points to a bounded signing envelope, and the CodeDirectory describes signed code pages through fields such as hash offset, number of code slots, code limit, hash size, hash type, page size, team offset, runtime flags, and the CodeDirectory hash table. The verifier uses that structure to distinguish Apple signing metadata from executable payload and to refuse a release whose stable code pages no longer match the reproducible local build. -Raspberry-only packages are skipped on non ARM Linux triples. +For the current macOS arm64 release artifact, the split looks like this: -The all/test profile builds Raspberry core binaries (camera hub, config tool, update) on ARM64 and adds x86_64 config tool only. 
+```text +Mach-O executable: 30,950,368 bytes -Deploy can mix host-native bundling and Docker fallback in the same run, depending on what the local Tauri toolchain can package. + [ compared payload ........................................ ] [ Apple signing envelope ] + 30,871,600 bytes 78,768 bytes + 99.746% 0.254% -## Manifest contents and why hash is stored +LC_CODE_SIGNATURE envelope: -Each artifact entry in manifest.json stores both build info and file hash. Example: + [ CodeDirectory ......................... ] [ CMS/ticket ] [ tail ] + 60,542 bytes 9,161 bytes 9,029 bytes - { - "package":"ip_camera_hub", - "target":"x86_64-unknown-linux-gnu", - "bin":"secluso-ip-camera-hub", - "bin_path":"artifacts/x86_64-unknown-linux-gnu/secluso-ip-camera-hub", - "sha256":"...", - "crate":"camera_hub", - "version":"...", - "crate_lock_sha256":"...", - "rust_digest":"..." - } +CodeDirectory: -The manifest hash is not trusted on its own. During compare, the script recomputes file hashes from disk and checks them against the manifest first. If a hash in the manifest is fake, compare fails right away with a manifest hash mismatch message. + slots in signed hash table : 1,885 x 16,384-byte pages + stable slots recomputed : 1..1,884 + first-page slot : covered by normalized Mach-O compare because Apple mutates header/load-command signing state + hash table size : 60,320 bytes -So the stored hash is a record of what the run says it produced, and runtime hashing is what actually checks it. +Verifier treatment: -## How compare decides pass versus fail + normalized app payload -> compared byte-for-byte against local build + CodeDirectory page hashes -> stable post-header slots are recomputed from local build pages, so the signed hash table must describe the same executable code + signature tail -> must match local signature-region tail bytes at the same offsets. 
It is not accepted as free-form data + CMS/notary ticket -> accepted only as Apple signing/notarization evidence after codesign, stapler, and spctl policy checks +``` -Compare keys are package, target, and binary name. From there, the script checks that the smaller run is fully contained in the larger one. +macOS sources for the signing model and policy: -For each overlapping key, compare checks: +- [LLVM CS_CodeDirectory](https://llvm.org/doxygen/structllvm_1_1MachO_1_1CS__CodeDirectory.html) +- [Go codesign.go](https://go.dev/src/cmd/internal/codesign/codesign.go) +- [XNU cs_blobs.h](https://github.com/apple-oss-distributions/xnu/blob/main/osfmk/kern/cs_blobs.h) +- [LIEF Mach-O Modification](https://lief.re/doc/stable/tutorials/11_macho_modification.html) +- [Notarization: is a notarized app safe to use?](https://eclecticlight.co/2021/01/05/notarization-is-a-notarized-app-safe-to-use/) +- [Notarization: the hardened runtime](https://eclecticlight.co/2021/01/07/notarization-the-hardened-runtime/) +- [Apple Accidentally Approved Malware to Run on MacOS](https://www.wired.com/story/apple-approved-malware-macos-notarization-shlayer/) -crate name and version -crate lock digest -toolchain digest -manifest hash presence -manifest hash equals on-disk file hash -run A file hash equals run B file hash +## The Windows Technical Guarantee -Compare is strict byte-for-byte. If file hashes differ, compare fails. +verify_windows_release.sh compares the Authenticode-signed public installer against the unsigned reproducible local installer. It first checks the release-policy side via signtool verification, a primary signature, a trusted timestamp, the expected publisher identity, and the pinned signer certificate SHA-1 thumbprint for this release. Those checks show that the artifact is accepted by the verifier host's Windows tooling under the expected signing policy and publisher identity. They are not a reproducibility proof on their own. 
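The signed-vs-unsigned model used for both desktop platforms reduces to one core idea: strip the declared signing regions, then require byte equality everywhere else. The following toy Python sketch shows only that core comparison; the project's actual verifiers additionally validate what each region is before allowing it to be stripped:

```python
def payload_matches(signed: bytes, unsigned: bytes, signing_regions):
    """Return True when `signed` equals `unsigned` everywhere outside
    `signing_regions`, a list of (offset, length) spans inside `signed`."""
    pos = 0
    stripped = bytearray()
    for off, length in sorted(signing_regions):
        # Regions must be in-bounds and non-overlapping to count as a
        # bounded signing envelope rather than arbitrary ignorable slack.
        if off < pos or off + length > len(signed):
            raise ValueError("signing region is not a bounded envelope")
        stripped += signed[pos:off]
        pos = off + length
    stripped += signed[pos:]
    return bytes(stripped) == unsigned
```

The in-bounds and non-overlap checks mirror why the real verifiers insist the signing material be a bounded, format-defined envelope before any bytes are excused from the comparison.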
It checks metadata first on purpose, so you can quickly see if the two runs even used the same inputs. If metadata already differs, you are not comparing like-for-like. If metadata matches and the binary hashes still differ, that is the case to treat as a true reproducibility break.
+Authenticode signing changes the byte comparison because the signature is not computed over the file as one uninterrupted byte stream. Microsoft's PE signing explainer says the signing provider "does not hash all of the bytes of the file" and specifies the PE checksum and certificate-table directory as omitted fields. The verifier accounts for those rules by zeroing the fixed PE bookkeeping fields in both views before comparing. The certificate table is handled differently: it is not accepted as arbitrary ignorable slack. The verifier checks that it is the PE security directory, that it is bounded by the file format, that it is at EOF for this release artifact, and that removing it leaves an installer payload that matches the unsigned reproducible build.
-## Deploy specifics you will probably hit
+The certificate directory is a file-offset certificate table, not a normal RVA-mapped executable section. The Windows loader does not load that certificate table into the program address space as ordinary code or data. That does not make the certificate table safe in a broad sense; it only explains why the verifier can treat it as a bounded signing envelope after checking PE placement, EOF bounds, and alignment.
-For Apple targets, deploy mode checks which bundle types the local Tauri CLI supports by parsing pnpm tauri build --help.
-Non-Apple deploy targets always build in Docker fallback.
-Apple deploy targets still require host-native bundling.
-Docker Linux deploy builds also perform a single-pass deterministic post-bundle rewrite of AppImage/deb/rpm outputs so normal one-run builds stay byte-stable. Set SECLUSO_CANONICALIZE_LINUX_BUNDLES=0 to disable.
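The PE placement checks described above can be illustrated with a short parser. This is a hedged sketch, not the release verifier: it only locates data directory entry 4 (the certificate table) using the standard PE32/PE32+ optional-header layout and bounds-checks it against the file.

```python
import struct

def pe_security_directory(data: bytes):
    """Locate the Authenticode certificate table (data directory #4).

    Returns (file_offset, size). Unlike other data directories, this entry's
    first field is a raw file offset, not an RVA."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    (pe_off,) = struct.unpack_from("<I", data, 0x3C)  # e_lfanew
    if data[pe_off:pe_off + 4] != b"PE\0\0":
        raise ValueError("bad PE signature")
    opt = pe_off + 24  # 4-byte signature + 20-byte COFF header
    (magic,) = struct.unpack_from("<H", data, opt)
    if magic == 0x20B:    # PE32+
        dirs = opt + 112
        (ndirs,) = struct.unpack_from("<I", data, opt + 108)
    elif magic == 0x10B:  # PE32
        dirs = opt + 96
        (ndirs,) = struct.unpack_from("<I", data, opt + 92)
    else:
        raise ValueError("unknown optional header magic")
    if ndirs <= 4:
        return (0, 0)  # no certificate table present
    off, size = struct.unpack_from("<II", data, dirs + 4 * 8)
    if size and off + size > len(data):
        raise ValueError("certificate table exceeds file bounds")
    return (off, size)
```

A caller can then confirm the at-EOF property the guide mentions by checking `off + size == len(data)` before excusing those bytes from the payload comparison.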
+After normalization, the rest of the file must match the unsigned local reproducible build **exactly**. The technical guarantee is, therefore, if the script passes, the installer is both a valid signed Windows release and still the same installer payload you would have built locally, except for the exact Authenticode-controlled regions that necessarily differ. -Apple deploy bundles need host-native support. If the host cannot bundle a requested Apple triple, the run fails with a clear message instead of silently using another path. +For the current Windows x64 release artifact, the relevant split looks like this: -Docker fallback logs are preserved per triple under artifacts/TARGET_TRIPLE as: +```text +Signed installer: 13,757,760 bytes -docker-buildx-TARGET_TRIPLE.log -docker-buildx-TARGET_TRIPLE-summary.log + [ compared payload ........................................ ] [ Authenticode ] + 13,742,115 bytes 15,645 bytes + 99.886% 0.114% -Those logs are usually the fastest way to debug packaging failures. +Normalized-away Authenticode bytes: -## Common failure messages and what they mean + checksum field 4 bytes -> zeroed in both views + security directory 8 bytes -> zeroed in both views + certificate alignment pad 5 bytes -> removed only if trailing NUL pad + WIN_CERTIFICATE 15,640 bytes -> must be final EOF structure -Missing manifest(s) for compare: -One or both compare inputs did not point at a run directory root containing manifest.json. +Verifier treatment: -Larger run does not contain all artifacts of the smaller run: -The superset rule failed, usually because different target/profile scopes were compared. + normalized installer payload -> compared byte-for-byte against local build + WIN_CERTIFICATE -> parser verifies PE placement, EOF bounds, and alignment. signtool verifies Authenticode chain, publisher, and timestamp. 
the script pins the expected signer certificate thumbprint + certificate alignment pad -> removed only if it is trailing NUL padding and at most the 7 bytes needed for 8-byte alignment + checksum + security directory -> fixed-size PE bookkeeping fields; zeroed in both views, not accepted as executable payload +``` -crate Cargo.lock SHA mismatch: -Dependency lock state differs between runs. +Windows sources for Authenticode and PE signing behavior: -toolchain digest mismatch: -Different toolchain identities were used. +- [Understanding executable file signing](https://learn.microsoft.com/en-us/windows/win32/secbp/understanding-pe-signatures) +- [PE Format](https://learn.microsoft.com/en-us/windows/win32/debug/pe-format) +- [Verifying Windows binaries, without Windows](https://blog.trailofbits.com/2020/05/27/verifying-windows-binaries-without-windows/) +- [LIEF PE Authenticode](https://lief.re/doc/latest/tutorials/13_pe_authenticode.html) +- [osslsigncode pe.c](https://sources.debian.org/src/osslsigncode/2.9-2/pe.c/) -manifest sha256 does not match file: -Run directory mismatch or local file tampering. +## Limits Of What This Proves -binary hash mismatch: -Inputs matched but outputs diverged. +If verification passes, that is strong evidence that the released distributed artifact matches what should come out of that source revision and those pinned build inputs. It does not prove that the source code itself is safe, and it does not make a single untrusted run directory authoritative on its own. The check should be run when at least one side of the comparison comes from somewhere independent, such as a separate build machine you control. -## Limits of what this proves - -If compare passes, you can treat it as strong evidence that the binaries match what should come out of that source revision and those build inputs. It still does not prove the source code itself is safe. 
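To make the layered compare concrete, here is a minimal Python sketch of the manifest-first logic this guide describes. It is illustrative only; the real implementation is the bash in releases/lib, and the top-level `artifacts` list key is an assumption about manifest.json's wrapper shape (the guide only shows individual entries with `package`, `target`, `bin`, `bin_path`, and `sha256`, and the real script also compares crate, lock, and toolchain identity).

```python
import hashlib
import json
import os

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_runs(run_a, run_b):
    """Layered check: manifest self-consistency first, then cross-run equality.

    Returns failure strings; an empty list means the overlapping artifacts
    reproduce. The real script stops per-artifact at the first failing layer;
    this sketch simply collects everything."""
    failures = []
    runs = []
    for run in (run_a, run_b):
        with open(os.path.join(run, "manifest.json")) as f:
            manifest = json.load(f)
        entries = {}
        # "artifacts" as the top-level list key is an assumption of this sketch.
        for art in manifest["artifacts"]:
            key = (art["package"], art["target"], art["bin"])
            disk = sha256_file(os.path.join(run, art["bin_path"]))
            if disk != art["sha256"]:
                failures.append(f"manifest sha256 does not match file: {key}")
            entries[key] = disk
        runs.append(entries)
    for key in runs[0].keys() & runs[1].keys():
        if runs[0][key] != runs[1][key]:
            failures.append(f"binary hash mismatch: {key}")
    return failures
```

The ordering matters: recomputing each run's hashes from disk first is what makes a later cross-run mismatch meaningful rather than a sign of a tampered run directory.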
One caveat: if someone can rewrite both artifacts and manifest in one untrusted run directory, that directory alone is not a reliable reference point. The check is strongest when at least one side of compare comes from somewhere independent, like an official release bundle or a separate build machine you control.
+The signed macOS and Windows metadata should not be described as "safe" in a broad sense. The concrete guarantee is that those bytes are bounded by platform file-format parsers, accepted by platform signing tools, and removed or normalized before every remaining executable or installer-payload byte is compared against the local reproducible build. They are not accepted as normal OS-loaded app or installer payload under this verifier model. This does not separately prove that arbitrary application code could never read its own signature metadata as data.
diff --git a/releases/digests.lock.env b/releases/digests.lock.env
index 73840d2..bcb3d0a 100644
--- a/releases/digests.lock.env
+++ b/releases/digests.lock.env
@@ -4,11 +4,20 @@ RUST_DIGEST__X86_64_APPLE_DARWIN=3f6e6f8d8725a65a2db964bb828850f888d430c68784d66
 RUST_DIGEST__AARCH64_APPLE_DARWIN=4c632e493dfa97f0fe014c3910d1690c149bba85ed8678d47d3563ec6f258ead
 # Host-macOS toolchain pins for deterministic deploy bundles.
+#
+# Deploy builds are not just a cargo build.
+# Tauri packaging pulls in host-side tooling from several layers: Rust/Cargo, Node/pnpm, the Tauri CLI, and Apple toolchain components such as clang and Xcode.
+# Any one of those moving underneath us can change the emitted bundle structure, metadata, or linker output even when the code is identical.
+#
+# We therefore lock every host dependency that affects packaging and record those versions here.
+# That way a compare failure can alert the operator to host-environment drift instead of raising a false reproducibility concern.
+#
+# TODO: Longer-term, the cleaner answer here is to run the macOS deploy path inside a pinned macOS VM image.
MACOS_HOST_RUSTC_VERSION=1.90.0 MACOS_HOST_CARGO_VERSION=1.90.0 MACOS_HOST_NODE_VERSION=22.9.0 MACOS_HOST_PNPM_VERSION=10.29.1 -MACOS_HOST_TAURI_CLI_VERSION=2.10.0 -MACOS_HOST_CLANG_VERSION=16.0.0 -MACOS_HOST_XCODE_VERSION=16.2 -MACOS_HOST_SDK_VERSION=15.2 +MACOS_HOST_TAURI_CLI_VERSION=2.10.1 +MACOS_HOST_CLANG_VERSION=21.0.0 +MACOS_HOST_XCODE_VERSION=26.4.1 +MACOS_HOST_SDK_VERSION=26.4 diff --git a/releases/expected_macos_entitlements.plist b/releases/expected_macos_entitlements.plist new file mode 100644 index 0000000..0c67376 --- /dev/null +++ b/releases/expected_macos_entitlements.plist @@ -0,0 +1,5 @@ + + + + + diff --git a/releases/lib/common.bash b/releases/lib/common.bash index 11f62f2..6eca02d 100644 --- a/releases/lib/common.bash +++ b/releases/lib/common.bash @@ -73,6 +73,96 @@ sha256_stdin() { fi } +normalized_macho_sha256_file() { + local file_path="$1" + + # This is intentionally not a general-purpose Mach-O canonicalizer. + # It is a (narrow) comparison helper for release verification, specifically for cases where the executable payload should match but Apple signing/related post-processing mutate some bytes in some (predictable) places. 
+ # + # Current normalization policy we use here: + # [1] zero LC_UUID because it is per-build identity metadata rather than payload + # [2] zero LC_CODE_SIGNATURE because signing rewrites that load command + # [3] zero selected __LINKEDIT size fields that move around with signing + # [4] remove only the EXACT LC_CODE_SIGNATURE blob, and ONLY when we can prove it occupies the tail of both __LINKEDIT and the file + perl -MDigest::SHA=sha256_hex -e ' + use strict; + use warnings; + + sub u32le { + return unpack("V", substr($_[0], $_[1], 4)); + } + + sub u64le { + return unpack("Q<", substr($_[0], $_[1], 8)); + } + + my $path = shift @ARGV; + open my $fh, "<", $path or die "open($path): $!"; + binmode $fh; + local $/; + my $data = <$fh>; + my $file_len = length($data); + + my $magic = u32le($data, 0); + die "unsupported Mach-O magic in $path\n" if $magic != 0xfeedfacf; + + # Thin 64-bit little-endian Mach-O header. Load commands start at byte 32. + my $ncmds = u32le($data, 16); + my $offset = 32; + my ($linkedit_fileoff, $linkedit_filesize); + my ($sig_dataoff, $sig_datasize); + + for (my $i = 0; $i < $ncmds; $i++) { + die "truncated Mach-O load commands in $path\n" if $offset + 8 > $file_len; + my $cmd = u32le($data, $offset); + my $cmdsize = u32le($data, $offset + 4); + die "invalid load command size in $path\n" if $cmdsize < 8 || $offset + $cmdsize > $file_len; + + if ($cmd == 0x19) { + # LC_SEGMENT_64. + my $segname = substr($data, $offset + 8, 16); + $segname =~ s/\0.*$//s; + if ($segname eq "__LINKEDIT") { + die "multiple __LINKEDIT segments in $path\n" if defined $linkedit_fileoff; + $linkedit_fileoff = u64le($data, $offset + 40); + $linkedit_filesize = u64le($data, $offset + 48); + # Signing perturbs LINKEDIT sizing bookkeeping, so drop those fields from the comparison view while keeping the segment placement itself. 
+ substr($data, $offset + 32, 8) = "\0" x 8; + substr($data, $offset + 48, 8) = "\0" x 8; + } + } elsif ($cmd == 0x1b) { + # LC_UUID is build-identity metadata rather than executable payload. + substr($data, $offset, $cmdsize) = "\0" x $cmdsize; + } elsif ($cmd == 0x1d) { + die "unexpected LC_CODE_SIGNATURE size in $path\n" if $cmdsize < 16; + die "multiple LC_CODE_SIGNATURE commands in $path\n" if defined $sig_dataoff; + $sig_dataoff = u32le($data, $offset + 8); + $sig_datasize = u32le($data, $offset + 12); + # Signing rewrites both the command metadata and the blob it points at. + substr($data, $offset, $cmdsize) = "\0" x $cmdsize; + } + + $offset += $cmdsize; + } + + die "missing __LINKEDIT segment in $path\n" if !defined $linkedit_fileoff; + die "missing LC_CODE_SIGNATURE in $path\n" if !defined $sig_dataoff; + die "empty LC_CODE_SIGNATURE in $path\n" if $sig_datasize == 0; + die "LC_CODE_SIGNATURE starts before __LINKEDIT in $path\n" + if $sig_dataoff < $linkedit_fileoff; + die "LC_CODE_SIGNATURE exceeds __LINKEDIT in $path\n" + if $sig_dataoff + $sig_datasize > $linkedit_fileoff + $linkedit_filesize; + die "LC_CODE_SIGNATURE is not the tail of __LINKEDIT in $path\n" + if $sig_dataoff + $sig_datasize != $linkedit_fileoff + $linkedit_filesize; + die "LC_CODE_SIGNATURE is not the tail of the file in $path\n" + if $sig_dataoff + $sig_datasize != $file_len; + + substr($data, $sig_dataoff, $sig_datasize) = ""; + + print sha256_hex($data); + ' "$file_path" +} + lookup_rust_digest() { local triple="$1" local key diff --git a/releases/lib/compare.bash b/releases/lib/compare.bash index 2c78a7d..6b31b47 100644 --- a/releases/lib/compare.bash +++ b/releases/lib/compare.bash @@ -13,7 +13,14 @@ # For each overlapping artifact key we validate: # 1) Build-input metadata (crate, version, lock digest, toolchain digest). # 2) Manifest hash presence / correctness against on-disk files. -# 3) Hash equality between the two runs. +# 3) Raw-byte hash equality between the two runs. 
+# +# Step 2 is intentionally raw-byte strict. +# A run directory is only trustworthy if the manifest sha256 still matches the file that is actually on disk. +# +# Step 3 is intentionally raw-byte strict. +# Generic run-vs-run reproducibility checks must compare the exact bytes that were built +# Signed-vs-unsigned macOS equivalence is handled separately by verify_macos_release.sh. # # This layered approach makes failures actionable because it distinguishes input # drift from output drift, instead of collapsing everything into some very generic @@ -159,9 +166,11 @@ compare_runs() { continue fi - local h1 h2 - h1="$(sha256_file "$p1")" - h2="$(sha256_file "$p2")" + # First prove that each run is internally self-consistent. + # We always compare the manifest's recorded raw sha256 against the real file bytes before any normalization logic is allowed to enter the picture. + local raw_h1 raw_h2 + raw_h1="$(sha256_file "$p1")" + raw_h2="$(sha256_file "$p2")" if [[ -z "$sha1" || -z "$sha2" ]]; then echo "FAIL: manifest missing sha256 for $pkg | $tgt | $bin" @@ -169,22 +178,28 @@ compare_runs() { continue fi - if [[ "$h1" != "$sha1" ]]; then + if [[ "$raw_h1" != "$sha1" ]]; then echo "FAIL: run1 manifest sha256 does not match file for $pkg | $tgt | $bin" echo " manifest: $sha1" - echo " file : $h1" + echo " file : $raw_h1" status=1 continue fi - if [[ "$h2" != "$sha2" ]]; then + if [[ "$raw_h2" != "$sha2" ]]; then echo "FAIL: run2 manifest sha256 does not match file for $pkg | $tgt | $bin" echo " manifest: $sha2" - echo " file : $h2" + echo " file : $raw_h2" status=1 continue fi + # Only after both run directories pass raw integrity checks do we compare cross-run output hashes. 
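The layered ordering these comments describe can be restated as a tiny standalone sketch. Everything below is illustrative only: `app.bin` and `manifest.sha256` are made-up stand-ins, not the real manifest schema used by compare mode — the point is just that each run directory must prove manifest-vs-file integrity before any cross-run hash comparison is allowed.

```shell
#!/usr/bin/env bash
# Toy sketch of the layered check in lib/compare.bash; "app.bin" and
# "manifest.sha256" are invented stand-ins for the real manifest schema.
set -euo pipefail

tmp="$(mktemp -d)"
for run in run1 run2; do
  mkdir -p "$tmp/$run"
  printf 'identical artifact bytes\n' > "$tmp/$run/app.bin"
  sha256sum "$tmp/$run/app.bin" | awk '{print $1}' > "$tmp/$run/manifest.sha256"
done

# Step 2 analogue: the recorded sha256 must still match the on-disk bytes.
check_run() {
  [[ "$(sha256sum "$1/app.bin" | awk '{print $1}')" == "$(cat "$1/manifest.sha256")" ]]
}
check_run "$tmp/run1"
check_run "$tmp/run2"

# Step 3 analogue: only after both runs pass do we compare raw hashes across runs.
status=FAILED
if [[ "$(cat "$tmp/run1/manifest.sha256")" == "$(cat "$tmp/run2/manifest.sha256")" ]]; then
  status=PASSED
fi
echo "Reproducibility check $status"
rm -rf "$tmp"
```

Collapsing the two steps into one comparison would lose the distinction between input drift and output drift that the real script reports.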
+ # This stays raw-byte strict so compare mode proves exact reproducibility + local h1 h2 + h1="$raw_h1" + h2="$raw_h2" + if [[ "$h1" != "$h2" ]]; then echo "DIFF: binary hash mismatch for $pkg | $tgt | $bin" echo " run1: $h1" diff --git a/releases/lib/deploy_helpers.bash b/releases/lib/deploy_helpers.bash index e24e11c..2d308a5 100644 --- a/releases/lib/deploy_helpers.bash +++ b/releases/lib/deploy_helpers.bash @@ -16,7 +16,8 @@ select_deploy_bundles_for_triple() { # Prefer app bundles for reproducibility checks; dmg container metadata can # vary between runs even when the app payload is identical. *apple-darwin) echo "app dmg" ;; - *linux*) echo "appimage deb rpm" ;; + # TODO: RPM reproducibility canonicalization is currently brittle + *linux*) echo "appimage deb" ;; *) echo "all" ;; esac } @@ -102,7 +103,7 @@ deploy_bundle_targets_json_for_triple() { if is_windows_triple "$triple"; then echo '["nsis"]' elif is_linux_triple "$triple"; then - echo '["appimage","deb","rpm"]' + echo '["appimage","deb"]' else echo '["dmg","app"]' fi diff --git a/releases/lib/deploy_pipeline.bash b/releases/lib/deploy_pipeline.bash index d11eecc..5eca422 100644 --- a/releases/lib/deploy_pipeline.bash +++ b/releases/lib/deploy_pipeline.bash @@ -119,12 +119,14 @@ record_macos_app_payload_artifacts() { local app_name app_name="$(basename "$app_dir")" - local exec_dir="$app_dir/Contents/MacOS" - [[ -d "$exec_dir" ]] || die "Missing macOS app executable directory: $exec_dir" + local contents_dir="$app_dir/Contents" + [[ -d "$contents_dir" ]] || die "Missing macOS app Contents directory: $contents_dir" # Capture deterministic app payload files instead of dmg container bytes. + # Keep the app bundle intact enough to launch locally by preserving resources such as icon.icns. + # Exclude signing metadata from comparisons. 
local copied_any=0 - local info_plist="$app_dir/Contents/Info.plist" + local info_plist="$contents_dir/Info.plist" if [[ -f "$info_plist" ]]; then copied_any=1 local info_rel="app/${app_name}/Contents/Info.plist" @@ -166,9 +168,16 @@ record_macos_app_payload_artifacts() { "$deploy_version" \ "$deploy_lock_sha" \ "$digest" - done < <(find "$exec_dir" -type f | LC_ALL=C sort) + done < <( + find "$contents_dir" -type f \ + ! -path '*/Info.plist' \ + ! -path '*/_CodeSignature/*' \ + ! -name 'CodeResources' \ + ! -name '.DS_Store' \ + | LC_ALL=C sort + ) - [[ "$copied_any" -eq 1 ]] || die "No executable payload files found under $exec_dir" + [[ "$copied_any" -eq 1 ]] || die "No macOS app payload files found under $contents_dir" } run_host_deploy_bundle_for_triple() { @@ -187,13 +196,6 @@ run_host_deploy_bundle_for_triple() { local deterministic_rustflags="${13}" local effective_rustflags="$deterministic_rustflags" - if is_apple_triple "$triple"; then - local no_uuid_flag="-C link-arg=-Wl,-no_uuid" - if [[ "$effective_rustflags" != *"$no_uuid_flag"* ]]; then - effective_rustflags="${effective_rustflags:+$effective_rustflags }$no_uuid_flag" - fi - fi - # Keep build-output path stable across run1/run2 so build-script-generated # absolute paths do not introduce per-run entropy into the final binary. 
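The stable-path rule above is easy to demonstrate in isolation. This toy sketch (hypothetical `fake_build` helper, made-up paths — not the real pipeline) embeds the output directory into the artifact the way a build script might, and shows that only a pinned, reused path keeps the bytes identical across runs.

```shell
#!/usr/bin/env bash
# Illustrative only: fake_build stands in for a build script that bakes its
# absolute output path into the artifact it produces.
set -euo pipefail

fake_build() {
  local target_dir="$1"
  mkdir -p "$target_dir"
  printf 'OUT_DIR=%s\n' "$target_dir" > "$target_dir/artifact"
  sha256sum "$target_dir/artifact" | awk '{print $1}'
}

work="$(mktemp -d)"

# Per-run target directories leak into the artifact, so the hashes drift apart.
h_run1_unpinned="$(fake_build "$work/run1/target")"
h_run2_unpinned="$(fake_build "$work/run2/target")"

# Rebuilding into one fixed path (cleared between runs) keeps the bytes identical.
h_run1_pinned="$(rm -rf "$work/target"; fake_build "$work/target")"
h_run2_pinned="$(rm -rf "$work/target"; fake_build "$work/target")"

[ "$h_run1_unpinned" != "$h_run2_unpinned" ]
[ "$h_run1_pinned" = "$h_run2_pinned" ]
echo "stable-path demo ok"
rm -rf "$work"
```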
rm -rf "$run_target_dir" diff --git a/releases/lib/docker_deploy/run_tauri_build.bash b/releases/lib/docker_deploy/run_tauri_build.bash index f478c98..d21be4e 100755 --- a/releases/lib/docker_deploy/run_tauri_build.bash +++ b/releases/lib/docker_deploy/run_tauri_build.bash @@ -248,6 +248,177 @@ normalize_tree_timestamps() { done < <(find "$root" -mindepth 0 -print0 2>/dev/null | LC_ALL=C sort -z) } +desktop_field_value() { + local field="$1" + local desktop_file="$2" + awk -F= -v k="$field" ' + $1 == k { + sub(/^[^=]*=/, "", $0) + print + exit + } + ' "$desktop_file" +} + +trim_matching_quotes() { + local value="$1" + if [[ "$value" == \"*\" && "$value" == *\" ]]; then + value="${value#\"}" + value="${value%\"}" + fi + printf '%s\n' "$value" +} + +extract_desktop_exec_command() { + local value="$1" + value="$(trim_matching_quotes "$value")" + value="${value#"${value%%[![:space:]]*}"}" + printf '%s\n' "${value%%[[:space:]]*}" +} + +slugify_linux_desktop_id() { + local value="$1" + value="$(trim_matching_quotes "$value")" + value="$(printf '%s' "$value" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//')" + printf '%s\n' "$value" +} + +rewrite_desktop_file_for_linux_bundle() { + local desktop_file="$1" + local display_name="$2" + local desktop_id="$3" + local tmp_file="${desktop_file}.tmp" + + awk -F= -v display_name="$display_name" -v desktop_id="$desktop_id" ' + BEGIN { + saw_name = 0 + saw_exec = 0 + saw_icon = 0 + saw_wmclass = 0 + } + /^Name=/ { + print "Name=" display_name + saw_name = 1 + next + } + /^Exec=/ { + print "Exec=" desktop_id + saw_exec = 1 + next + } + /^Icon=/ { + print "Icon=" desktop_id + saw_icon = 1 + next + } + /^StartupWMClass=/ { + print "StartupWMClass=" desktop_id + saw_wmclass = 1 + next + } + { print } + END { + if (!saw_name) print "Name=" display_name + if (!saw_exec) print "Exec=" desktop_id + if (!saw_icon) print "Icon=" desktop_id + if (!saw_wmclass) print "StartupWMClass=" desktop_id + } + ' "$desktop_file" 
> "$tmp_file" + mv -f "$tmp_file" "$desktop_file" +} + +normalize_linux_bundle_desktop_metadata() { + local root="$1" + local preferred_id="${2:-}" + local apps_dir="$root/usr/share/applications" + [[ -d "$apps_dir" ]] || return 0 + + local desktop_file="" + desktop_file="$(find "$apps_dir" -maxdepth 1 -type f -name '*.desktop' | LC_ALL=C sort | head -n 1)" + [[ -n "$desktop_file" && -f "$desktop_file" ]] || return 0 + + local display_name icon_name exec_value exec_command desktop_basename desktop_id + display_name="$(desktop_field_value "Name" "$desktop_file")" + icon_name="$(desktop_field_value "Icon" "$desktop_file")" + exec_value="$(desktop_field_value "Exec" "$desktop_file")" + exec_command="$(extract_desktop_exec_command "$exec_value")" + desktop_basename="$(basename "$desktop_file" .desktop)" + + desktop_id="$preferred_id" + [[ -n "$desktop_id" ]] || desktop_id="$icon_name" + [[ -n "$desktop_id" ]] || desktop_id="$exec_command" + [[ -n "$desktop_id" ]] || desktop_id="$desktop_basename" + [[ -n "$desktop_id" ]] || desktop_id="$display_name" + desktop_id="$(slugify_linux_desktop_id "$desktop_id")" + [[ -n "$desktop_id" ]] || return 0 + [[ -n "$display_name" ]] || display_name="$desktop_basename" + [[ -n "$display_name" ]] || display_name="$desktop_id" + + local bin_dir="$root/usr/bin" + if [[ -d "$bin_dir" ]]; then + local current_bin="" + if [[ -n "$exec_command" && -f "$bin_dir/$exec_command" ]]; then + current_bin="$bin_dir/$exec_command" + elif [[ -f "$bin_dir/$desktop_basename" ]]; then + current_bin="$bin_dir/$desktop_basename" + elif [[ -f "$bin_dir/$display_name" ]]; then + current_bin="$bin_dir/$display_name" + fi + + if [[ -n "$current_bin" && "$current_bin" != "$bin_dir/$desktop_id" && ! 
-e "$bin_dir/$desktop_id" ]]; then + mv -f "$current_bin" "$bin_dir/$desktop_id" + fi + fi + + local icon_dir candidate_base candidate_path target_path + while IFS= read -r icon_dir; do + [[ -n "$icon_dir" ]] || continue + for candidate_base in "$icon_name" "$desktop_basename" "$display_name"; do + [[ -n "$candidate_base" ]] || continue + candidate_path="$icon_dir/${candidate_base}.png" + target_path="$icon_dir/${desktop_id}.png" + if [[ -f "$candidate_path" && "$candidate_path" != "$target_path" && ! -e "$target_path" ]]; then + mv -f "$candidate_path" "$target_path" + fi + done + done < <(find "$root/usr/share/icons/hicolor" -type d -path '*/apps' 2>/dev/null | LC_ALL=C sort) + + rewrite_desktop_file_for_linux_bundle "$desktop_file" "$display_name" "$desktop_id" + + local normalized_desktop="$apps_dir/${desktop_id}.desktop" + if [[ "$desktop_file" != "$normalized_desktop" ]]; then + mv -f "$desktop_file" "$normalized_desktop" + desktop_file="$normalized_desktop" + fi + + if [[ -e "$root/AppRun" || -L "$root/.DirIcon" || -d "$root/apprun-hooks" ]]; then + find "$root" -maxdepth 1 -mindepth 1 \( -name '*.desktop' -o -name '*.png' \) -exec rm -f {} + + + cp -f "$desktop_file" "$root/${desktop_id}.desktop" + + local root_icon_source="" + local icon_size_dir + for icon_size_dir in 512x512 256x256@2 256x256 128x128 64x64 48x48 32x32 16x16; do + target_path="$root/usr/share/icons/hicolor/${icon_size_dir}/apps/${desktop_id}.png" + if [[ -f "$target_path" ]]; then + root_icon_source="$target_path" + break + fi + done + if [[ -z "$root_icon_source" ]]; then + while IFS= read -r target_path; do + root_icon_source="$target_path" + break + done < <(find "$root/usr/share/icons/hicolor" -type f -path "*/apps/${desktop_id}.png" 2>/dev/null | LC_ALL=C sort) + fi + + if [[ -n "$root_icon_source" && -f "$root_icon_source" ]]; then + cp -f "$root_icon_source" "$root/${desktop_id}.png" + ln -sfn "${desktop_id}.png" "$root/.DirIcon" + fi + fi +} + resolve_linuxdeploy_binary() { local 
linuxdeploy_bin=""
  linuxdeploy_bin="$(find /root/.cache/tauri -maxdepth 1 -type f -name 'linuxdeploy-*.AppImage' ! -name 'linuxdeploy-plugin-*' | LC_ALL=C sort | head -n 1)"
@@ -328,6 +499,11 @@ rebuild_deb_deterministically_in_place() {
     return 1
   }
 
+  local package_name=""
+  if [[ -f "$control_dir/control" ]]; then
+    package_name="$(control_field_value "Package" "$control_dir/control")"
+  fi
+  normalize_linux_bundle_desktop_metadata "$data_dir" "$package_name"
   normalize_tree_timestamps "$control_dir"
   normalize_tree_timestamps "$data_dir"
@@ -514,6 +690,41 @@ find_valid_appimage_squashfs_offset() {
   return 1
 }
 
+escape_mksquashfs_sort_path() {
+  local input="$1"
+  local output=""
+  local i ch
+
+  # mksquashfs parses each -sort line as "<path> <priority>". It also does not understand shell-style quoting,
+  # so a path containing spaces/tabs is split into multiple fields unless we escape the whitespace in the generated sort file ourselves.
+  #
+  # Keep -sort enabled on purpose instead of dropping it to "fix" paths with spaces.
+  # The sort file is what gives us a deterministic inode/packing order inside the squashfs payload.
+  # Removing it would make AppImage layout depend more heavily on traversal order and would weaken reproducibility.
+  # The right fix is therefore to preserve -sort and emit sort-file paths in the exact escaped format mksquashfs expects.
+ for (( i = 0; i < ${#input}; i++ )); do + ch="${input:i:1}" + case "$ch" in + [[:space:]]|\\) output+="\\$ch" ;; + *) output+="$ch" ;; + esac + done + + printf '%s' "$output" +} + +appimage_runtime_section_offset_hex() { + local runtime_path="$1" + local section_name="$2" + + readelf -SW "$runtime_path" 2>/dev/null | awk -v section_name="$section_name" ' + $2 == section_name { + print $5 + exit + } + ' +} + rebuild_appimage_deterministically_in_place() { local appimage_path="$1" local appdir="$2" @@ -525,25 +736,25 @@ rebuild_appimage_deterministically_in_place() { local tmp tmp="$(mktemp -d)" local staged_appdir="$tmp/appdir.staged" - local runtime_path="" - local apprun_name - apprun_name="$(linux_apprun_name_for_target || true)" - if [[ -n "$apprun_name" && -x "/root/.cache/tauri/${apprun_name}" ]]; then - runtime_path="/root/.cache/tauri/${apprun_name}" - fi - - if [[ -n "$runtime_path" ]]; then - cp -f "$runtime_path" "$tmp/runtime" + local fallback_offset="" + fallback_offset="$(find_valid_appimage_squashfs_offset "$appimage_path" || true)" + if [[ -n "$fallback_offset" ]]; then + dd if="$appimage_path" of="$tmp/runtime" bs=1 count="$fallback_offset" status=none else - local fallback_offset="" - fallback_offset="$(find_valid_appimage_squashfs_offset "$appimage_path" || true)" - [[ -n "$fallback_offset" ]] || { + local runtime_path="" + local apprun_name + apprun_name="$(linux_apprun_name_for_target || true)" + if [[ -n "$apprun_name" && -x "/root/.cache/tauri/${apprun_name}" ]]; then + runtime_path="/root/.cache/tauri/${apprun_name}" + fi + [[ -n "$runtime_path" ]] || { rm -rf "$tmp" return 1 } - dd if="$appimage_path" of="$tmp/runtime" bs=1 count="$fallback_offset" status=none + cp -f "$runtime_path" "$tmp/runtime" fi + normalize_linux_bundle_desktop_metadata "$appdir" # Lets re-stage AppDir via sorted tar stream so inode creation order is deterministic # across runs before we generate a new squashfs.... 
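The escaping rule can be exercised on its own. This is a toy restatement of the same loop (a standalone `escape_sort_path`, not the pipeline function) applied to a path containing a space, producing a sort-file line mksquashfs would parse as one path plus one priority.

```shell
#!/usr/bin/env bash
# Standalone restatement of the mksquashfs -sort escaping loop:
# backslash-escape whitespace and backslashes so each generated line
# parses as exactly one path field followed by one priority field.
set -euo pipefail

escape_sort_path() {
  local input="$1" output="" i ch
  for (( i = 0; i < ${#input}; i++ )); do
    ch="${input:i:1}"
    case "$ch" in
      [[:space:]]|\\) output+="\\$ch" ;;
      *) output+="$ch" ;;
    esac
  done
  printf '%s' "$output"
}

escaped="$(escape_sort_path 'usr/share/My App.png')"
# Emit a sort-file line in "<path> <priority>" form.
printf '%s %d\n' "$escaped" 32000
```

Without the escaping, the space in `My App.png` would split the path into two fields and the priority column would be misread.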
mkdir -p "$staged_appdir" @@ -564,10 +775,11 @@ rebuild_appimage_deterministically_in_place() { local squash_order="$tmp/squashfs.sort" local priority_base=32000 - local rel + local rel escaped_rel while IFS= read -r rel; do [[ -n "$rel" ]] || continue - printf '%s %d\n' "$rel" "$priority_base" >> "$squash_order" + escaped_rel="$(escape_mksquashfs_sort_path "$rel")" + printf '%s %d\n' "$escaped_rel" "$priority_base" >> "$squash_order" if (( priority_base > -32000 )); then priority_base=$((priority_base - 1)) fi @@ -603,12 +815,23 @@ rebuild_appimage_deterministically_in_place() { perl -e 'print pack("Q<", $ARGV[0]);' "$runtime_size" > "$tmp/runtime-offset.bin" dd if="$tmp/runtime-offset.bin" of="$tmp/runtime" bs=1 seek=8 conv=notrunc status=none - local squash_sha id_hex + local squash_md5 squash_sha id_hex + local digest_offset_hex="" squash_sha="$(sha256sum "$tmp/payload.squashfs" 2>/dev/null | awk '{print $1}' || true)" [[ -n "$squash_sha" ]] || { rm -rf "$tmp" return 1 } + squash_md5="$(md5sum "$tmp/payload.squashfs" 2>/dev/null | awk '{print $1}' || true)" + [[ -n "$squash_md5" ]] || { + rm -rf "$tmp" + return 1 + } + digest_offset_hex="$(appimage_runtime_section_offset_hex "$tmp/runtime" ".digest_md5" || true)" + if [[ -n "$digest_offset_hex" ]]; then + perl -e 'print pack("H*", $ARGV[0]);' "$squash_md5" > "$tmp/runtime-digest-md5.bin" + dd if="$tmp/runtime-digest-md5.bin" of="$tmp/runtime" bs=1 seek="$((16#$digest_offset_hex))" conv=notrunc status=none + fi id_hex="${squash_sha:0:32}" if [[ -n "$id_hex" && "$runtime_size" -ge 16 ]]; then perl -e 'print pack("H*", $ARGV[0]);' "$id_hex" > "$tmp/runtime-id.bin" diff --git a/releases/lib/plan.bash b/releases/lib/plan.bash index c1cdec3..122a817 100644 --- a/releases/lib/plan.bash +++ b/releases/lib/plan.bash @@ -60,18 +60,18 @@ resolve_build_plan() { PKGS=( "deploy_tool" ) case "$PROFILE" in all) + # Default deploy profiles only include artifacts we can exercise on hardware we have access to right now. 
+ # macOS x86_64 and Windows arm64 are buildable but intentionally omitted as we don't have a way to test them right now TRIPLES=( "x86_64-unknown-linux-gnu" "aarch64-unknown-linux-gnu" - "x86_64-apple-darwin" "aarch64-apple-darwin" "x86_64-pc-windows-msvc" - "aarch64-pc-windows-msvc" ) ;; linux) TRIPLES=( "x86_64-unknown-linux-gnu" "aarch64-unknown-linux-gnu" ) ;; - macos) TRIPLES=( "x86_64-apple-darwin" "aarch64-apple-darwin" ) ;; - windows) TRIPLES=( "x86_64-pc-windows-msvc" "aarch64-pc-windows-msvc" ) ;; + macos) TRIPLES=( "aarch64-apple-darwin" ) ;; + windows) TRIPLES=( "x86_64-pc-windows-msvc" ) ;; linux-x64) TRIPLES=( "x86_64-unknown-linux-gnu" ) ;; linux-arm64) TRIPLES=( "aarch64-unknown-linux-gnu" ) ;; macos-x64) TRIPLES=( "x86_64-apple-darwin" ) ;; diff --git a/releases/sign_macos_release.sh b/releases/sign_macos_release.sh new file mode 100755 index 0000000..0036c3d --- /dev/null +++ b/releases/sign_macos_release.sh @@ -0,0 +1,251 @@ +#!/usr/bin/env bash +# SPDX-License-Identifier: GPL-3.0-or-later +set -euo pipefail +IFS=$'\n\t' + +# Post-process a reproducible unsigned macOS app into a (release-shaped) artifact. +# +# This exists because macOS distribution requirements and reproducibility requirements go in different directions. +# Specifically, +# [1] Reproducible build comparison wants the app before Apple signing/notarization +# [2] End-user distribution wants the app after Apple signing/notarization +# +# We therefore keep this step out of build.sh. +# The reproducible pipeline produces the unsigned .app, compare verifies that payload, and only then do we copy that app here, sign it, notarize/staple it, and package the release zip. 
+# This keeps Apple-issued metadata from "polluting" the reproducible build outputs + +PROGRAM_NAME="$(basename "$0")" +RELEASES_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)" + +# shellcheck disable=SC1091 +source "${RELEASES_DIR}/lib/common.bash" + +SIGN_APP_PATH="" +SIGN_RUN_DIR="" +SIGN_TRIPLE="" +SIGN_IDENTITY="" +SIGN_NOTARY_PROFILE="" +SIGN_OUT_DIR="" +SIGN_ZIP_NAME="" +SIGN_TMP_DIR="" + +sign_usage() { + cat >&2 </dev/null || true + + local codesign_args=( + --force + --deep + --options runtime + --timestamp + --sign "$SIGN_IDENTITY" + "$work_app" + ) + + echo "Signing ${work_app}" + codesign "${codesign_args[@]}" + # Verify immediately after signing so we fail before notarization/packaging if the identity/entitlements/nested code/bundle structure are wrong. + codesign --verify --deep --strict --verbose=2 "$work_app" + + if [[ -n "$SIGN_NOTARY_PROFILE" ]]; then + # notarytool works on an archive submission. + # We submit a temporary zip for Apple's verdict, then staple the accepted ticket back onto the app bundle before copying the final release outputs into OUT_DIR. + # Some interesting info on this process located here: https://developer.apple.com/documentation/security/customizing-the-notarization-workflow#Staple-the-ticket-to-your-distribution + local submit_zip="${tmp_dir}/submit.zip" + # Avoid AppleDouble/resource-fork sidecars in the notarization upload + # Apple uses ditto for the notarization/distribution archive shape we need here (see cmd man ditto) + # ditto preserves HFS metadata by default; --norsrc turns that off so the upload zip stays limited to the signed app payload instead of picking up AppleDouble ._* sidecars or other host metadata. 
+ ditto -c -k --keepParent --norsrc "$work_app" "$submit_zip" + echo "Submitting for notarization with keychain profile ${SIGN_NOTARY_PROFILE}" + xcrun notarytool submit "$submit_zip" --keychain-profile "$SIGN_NOTARY_PROFILE" --wait + xcrun stapler staple -v "$work_app" + spctl --assess --type execute --verbose=4 "$work_app" + else + echo "Skipping notarization: no --notary-profile provided" + fi + + # Preserve the signed bundle exactly as-validated and avoid packaging AppleDouble/resource-fork sidecars that can invalidate the result app + xattr -cr "$work_app" 2>/dev/null || true + + # Write both the final .app bundle and a zip containing that exact bundle. + ditto "$work_app" "$final_app" + # The copy into OUT_DIR can pick up host-specific extended attributes such as com.apple.provenance. + # Clear them before the final verification/zip steps + # xattrs are filesystem metadata attached separately from the bundle contents. + # Clearing them here keeps the final artifact from inheriting host-specific metadata (that is outside the signed payload) + xattr -cr "$final_app" 2>/dev/null || true + # Re-verify after the final copy. + # Catches cases where moving the bundle into OUT_DIR changed enough metadata to invalidate the signature + # Apple treats post-signing bundle changes as signature-invalidating, so we verify again after the final copy rather than assuming from the earlier check + # --deep --strict makes this check closer to the way Apple validates nested signed content during release handling. 
+ codesign --verify --deep --strict --verbose=2 "$final_app" + # Zip without AppleDouble/resource-fork entries + # The same --norsrc rule applies to the release zip: package the signed app bundle itself (not filesystem metadata tha can change across machines) + ditto -c -k --keepParent --norsrc "$final_app" "$final_zip" + + local verify_tmp_dir verify_app + verify_tmp_dir="$(mktemp -d)" + ditto -x -k "$final_zip" "$verify_tmp_dir" + verify_app="${verify_tmp_dir}/$(basename "$source_app")" + # Verify the extracted payload (not just the source app/zip) + # Proves the actual artifact users download can be expanded back into a valid signed app. + codesign --verify --deep --strict --verbose=2 "$verify_app" + rm -rf "$verify_tmp_dir" + + local zip_sha + zip_sha="$(sha256_file "$final_zip")" + + echo "Signed macOS release output" + echo "- App: $final_app" + echo "- Zip: $final_zip" + echo "- SHA256: $zip_sha" + if [[ -n "$SIGN_NOTARY_PROFILE" ]]; then + echo "- Status: signed, notarized, stapled" + else + echo "- Status: signed only (not notarized)" + fi +} + +main "$@" diff --git a/releases/sign_windows_release.sh b/releases/sign_windows_release.sh new file mode 100644 index 0000000..f77d229 --- /dev/null +++ b/releases/sign_windows_release.sh @@ -0,0 +1,226 @@ +#!/usr/bin/env bash +# SPDX-License-Identifier: GPL-3.0-or-later +set -euo pipefail +IFS=$'\n\t' + +# Post-process reproducible unsigned Windows artifacts using Azure Artifact Signing. +# +# Mirrors the macOS split: reproducibility wants the unsigned build output, while end-user Windows distribution wants Authenticode-signed artifacts with a trusted timestamp. +# Therefore keep signing outside build.sh so the run dir remains the auditable unsigned source of truth. +# +# Must be run on Windows. 
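In run-dir mode, artifact discovery follows the same sorted-find pattern used throughout the pipeline. A portable sketch of just that discovery step (all directory and file names invented for the demo):

```shell
#!/usr/bin/env bash
# Toy version of the pre-signing artifact discovery: collect .exe/.msi files
# under a run's artifact directory in a stable, locale-independent order.
set -euo pipefail

artifact_dir="$(mktemp -d)"
touch "$artifact_dir/Secluso Deploy_1.0.0_x64-setup.exe" \
      "$artifact_dir/b.msi" \
      "$artifact_dir/notes.txt"

inputs=()
while IFS= read -r candidate; do
  [[ -n "$candidate" ]] || continue
  inputs+=( "$candidate" )
done < <(find "$artifact_dir" -type f \( -name '*.exe' -o -name '*.msi' \) | LC_ALL=C sort)

# Only the two signable artifacts are picked up; notes.txt is ignored.
printf 'found %d signable artifacts\n' "${#inputs[@]}"
rm -rf "$artifact_dir"
```

Pinning `LC_ALL=C` keeps the signing order independent of the host locale, mirroring the listing discipline used elsewhere in the release scripts.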
+ +PROGRAM_NAME="$(basename "$0")" +RELEASES_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)" + +# shellcheck disable=SC1091 +source "${RELEASES_DIR}/lib/common.bash" + +SIGN_FILE_PATH="" +SIGN_RUN_DIR="" +SIGN_TRIPLE="" +SIGN_OUT_DIR="" +SIGN_TIMESTAMP_URL="http://timestamp.acs.microsoft.com" +SIGN_DLIB_PATH="" +SIGN_METADATA_JSON="" +SIGN_SIGNTOOL_PATH="" +SIGN_TMP_DIR="" + +sign_usage() { + cat >&2 </dev/null 2>&1; then + SIGN_SIGNTOOL_PATH="$(command -v signtool)" + return + fi + + die "Microsoft signtool not found. Provide --signtool or run this on a Windows machine with the Windows SDK installed." +} + +ensure_sign_tools() { + find_signtool + init_sha256_tool +} + +validate_sign_inputs() { + [[ -n "$SIGN_OUT_DIR" ]] || die "--outdir is required" + [[ -n "$SIGN_DLIB_PATH" ]] || die "--dlib is required" + [[ -n "$SIGN_METADATA_JSON" ]] || die "--metadata-json is required" + [[ -f "$SIGN_DLIB_PATH" ]] || die "Azure CodeSigning dlib not found: $SIGN_DLIB_PATH" + [[ -f "$SIGN_METADATA_JSON" ]] || die "Artifact Signing metadata JSON not found: $SIGN_METADATA_JSON" + + if [[ -n "$SIGN_FILE_PATH" && -n "$SIGN_RUN_DIR" ]]; then + die "Provide either --file or --run-dir, not both" + fi + + if [[ -n "$SIGN_FILE_PATH" ]]; then + return + fi + + [[ -n "$SIGN_RUN_DIR" ]] || die "Provide either --file or --run-dir" + [[ -n "$SIGN_TRIPLE" ]] || die "--triple is required with --run-dir" +} + +is_supported_windows_artifact() { + case "$1" in + *.exe|*.msi) return 0 ;; + *) return 1 ;; + esac +} + +resolve_sign_inputs() { + if [[ -n "$SIGN_FILE_PATH" ]]; then + [[ -f "$SIGN_FILE_PATH" ]] || die "Artifact not found: $SIGN_FILE_PATH" + is_supported_windows_artifact "$SIGN_FILE_PATH" || die "Unsupported artifact type: $SIGN_FILE_PATH" + printf '%s\n' "$SIGN_FILE_PATH" + return + fi + + local artifact_dir="${SIGN_RUN_DIR}/artifacts/${SIGN_TRIPLE}" + [[ -d "$artifact_dir" ]] || die "Artifact directory not found: $artifact_dir" + + local found=0 + while IFS= read -r 
candidate; do + [[ -n "$candidate" ]] || continue + found=1 + printf '%s\n' "$candidate" + done < <(find "$artifact_dir" -type f \( -name '*.exe' -o -name '*.msi' \) | LC_ALL=C sort) + + [[ "$found" -eq 1 ]] || die "No Windows .exe/.msi artifacts found under: $artifact_dir" +} + +sign_one_artifact() { + local source_path="$1" + local out_dir="$2" + + local tmp_copy final_path + tmp_copy="${SIGN_TMP_DIR}/$(basename "$source_path")" + final_path="${out_dir}/$(basename "$source_path")" + + [[ ! -e "$final_path" ]] || die "Output already exists: $final_path" + + cp "$source_path" "$tmp_copy" + + echo "Signing ${tmp_copy} with Azure Artifact Signing" + "$SIGN_SIGNTOOL_PATH" sign \ + /v \ + /debug \ + /fd SHA256 \ + /tr "$SIGN_TIMESTAMP_URL" \ + /td SHA256 \ + /dlib "$SIGN_DLIB_PATH" \ + /dmdf "$SIGN_METADATA_JSON" \ + "$tmp_copy" + + # Verify the signed copy immediately after + "$SIGN_SIGNTOOL_PATH" verify /v /debug /pa "$tmp_copy" + + cp "$tmp_copy" "$final_path" + "$SIGN_SIGNTOOL_PATH" verify /v /debug /pa "$final_path" + + local sha256 + sha256="$(sha256_file "$final_path")" + echo "- Artifact: $final_path" + echo "- SHA256: $sha256" +} + +main() { + parse_sign_args "$@" + validate_sign_inputs + ensure_sign_tools + + mkdir -p "$SIGN_OUT_DIR" + + local tmp_dir + tmp_dir="$(mktemp -d)" + SIGN_TMP_DIR="$tmp_dir" + trap cleanup_sign_tmp_dir EXIT + + local inputs=() + while IFS= read -r path; do + [[ -n "$path" ]] || continue + inputs+=( "$path" ) + done < <(resolve_sign_inputs) + + [[ "${#inputs[@]}" -gt 0 ]] || die "No artifacts resolved for signing" + + echo "Signed Windows release output" + echo "- SignTool: $SIGN_SIGNTOOL_PATH" + echo "- Dlib: $SIGN_DLIB_PATH" + echo "- Metadata: $SIGN_METADATA_JSON" + for artifact in "${inputs[@]}"; do + sign_one_artifact "$artifact" "$SIGN_OUT_DIR" + done + echo "- Status: signed and timestamped via Azure Artifact Signing" +} + +main "$@" diff --git a/releases/verify_macos_release.sh b/releases/verify_macos_release.sh new file mode 
100755
index 0000000..fd9d5f3
--- /dev/null
+++ b/releases/verify_macos_release.sh
@@ -0,0 +1,791 @@
+#!/usr/bin/env bash
+# SPDX-License-Identifier: GPL-3.0-or-later
+set -euo pipefail
+IFS=$'\n\t'
+
+# Verify a macOS release directly against a local reproducible build.
+#
+# We do not use a published manifest.
+# The trust model is to build it yourself, normalize the shipped signed app, and compare the two app trees directly.
+#
+# The comparison is also not a raw zip/app byte diff.
+# Signed macOS releases pick up Apple-specific metadata that should differ from the unsigned reproducible build.
+# We therefore materialize copies, strip bundle-level release metadata, normalize Mach-O binaries, and then compare the normalized results.
+
+PROGRAM_NAME="$(basename "$0")"
+RELEASES_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)"
+
+# shellcheck disable=SC1091
+source "${RELEASES_DIR}/lib/common.bash"
+
+VERIFY_LOCAL_APP=""
+VERIFY_LOCAL_RUN=""
+VERIFY_TRIPLE=""
+VERIFY_RELEASE_PATH=""
+VERIFY_KEEP_TEMP=0
+VERIFY_TMP_DIR=""
+VERIFY_EXPECTED_TEAM_ID="${VERIFY_EXPECTED_TEAM_ID:-8PYH264TD9}"
+VERIFY_EXPECTED_ENTITLEMENTS_PLIST="${VERIFY_EXPECTED_ENTITLEMENTS_PLIST:-${RELEASES_DIR}/expected_macos_entitlements.plist}"
+
+verify_usage() {
+  cat >&2 <&1)" || die "Failed to inspect release signing metadata: $release_app"
+  release_identifier="$(awk -F= '/^Identifier=/{print $2; exit}' <<<"$release_meta")"
+  local_identifier="$(/usr/libexec/PlistBuddy -c 'Print :CFBundleIdentifier' "$local_app/Contents/Info.plist" 2>/dev/null || true)"
+
+  # The signed release should present the same bundle identifier as the local reproducible build rather than a differently labeled app.
+ [[ -n "$release_identifier" ]] || die "Release signing metadata is missing Identifier: $release_app" + [[ -n "$local_identifier" ]] || die "Local app Info.plist is missing CFBundleIdentifier: $local_app" + [[ "$release_identifier" == "$local_identifier" ]] || die "Release identifier mismatch: expected $local_identifier, got $release_identifier" + + release_team_id="$(awk -F= '/^TeamIdentifier=/{print $2; exit}' <<<"$release_meta")" + + # Pin the signing team so a validly signed app from some other developer account does not pass this check. + [[ -n "$release_team_id" ]] || die "Release signing metadata is missing TeamIdentifier: $release_app" + [[ "$release_team_id" == "$VERIFY_EXPECTED_TEAM_ID" ]] || die "Release TeamIdentifier mismatch: expected $VERIFY_EXPECTED_TEAM_ID, got $release_team_id" + + # Require the core distribution properties this release policy expects for outside-App-Store delivery (hardened runtime metadata, CMS signing metadata, and a stapled notarization ticket on the artifact being checked) + grep -q 'flags=0x10000(runtime)' <<<"$release_meta" || die "Release is missing hardened runtime flag: $release_app" + grep -q '^Runtime Version=' <<<"$release_meta" || die "Release signing metadata is missing Runtime Version: $release_app" + grep -q '^CMSDigest=' <<<"$release_meta" || die "Release signing metadata is missing CMSDigest: $release_app" + grep -q '^Notarization Ticket=stapled' <<<"$release_meta" || die "Release is missing a stapled notarization ticket: $release_app" + + # Stapler validates the ticket payload, while spctl exercises Apple's execution policy layer rather than only the embedded signature structure itself + xcrun stapler validate "$release_app" || die "Stapled notarization ticket validation failed: $release_app" + spctl --assess --type execute --verbose=4 "$release_app" || die "spctl execution-policy assessment failed: $release_app" +} + +write_empty_plist() { + local out_file="$1" + # codesign prints nothing when a binary has no 
entitlements at all.
+  # The verifier wants a policy artifact either way, so we synthesize an empty plist to compare in that case.
+  cat > "$out_file" <<'EOF'
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict/>
+</plist>
+EOF
+}
+
+app_main_executable_path() {
+  local app_dir="$1"
+  [[ -d "$app_dir" ]] || die "App bundle not found: $app_dir"
+
+  local executable_name executable_path
+  executable_name="$(/usr/libexec/PlistBuddy -c 'Print :CFBundleExecutable' "$app_dir/Contents/Info.plist" 2>/dev/null || true)"
+  [[ -n "$executable_name" ]] || die "App Info.plist is missing CFBundleExecutable: $app_dir"
+  executable_path="$app_dir/Contents/MacOS/$executable_name"
+  [[ -f "$executable_path" ]] || die "App executable not found: $executable_path"
+  printf '%s\n' "$executable_path"
+}
+
+release_main_executable_has_embedded_entitlements_blob() {
+  local executable_path="$1"
+  [[ -f "$executable_path" ]] || die "Executable not found for entitlements-blob check: $executable_path"
+
+  perl -e '
+    use strict;
+    use warnings;
+
+    sub u32le {
+      return unpack("V", substr($_[0], $_[1], 4));
+    }
+
+    sub u32be {
+      return unpack("N", substr($_[0], $_[1], 4));
+    }
+
+    my $path = shift @ARGV;
+    open my $fh, "<", $path or die "open($path): $!";
+    binmode $fh;
+    local $/;
+    my $data = <$fh>;
+    my $file_len = length($data);
+
+    die "unsupported Mach-O magic in $path\n" if u32le($data, 0) != 0xfeedfacf;
+
+    my $ncmds = u32le($data, 16);
+    my $offset = 32;
+    my ($sig_dataoff, $sig_datasize);
+
+    for (my $i = 0; $i < $ncmds; $i++) {
+      die "truncated Mach-O load commands in $path\n" if $offset + 8 > $file_len;
+      my $cmd = u32le($data, $offset);
+      my $cmdsize = u32le($data, $offset + 4);
+      die "invalid load command size in $path\n" if $cmdsize < 8 || $offset + $cmdsize > $file_len;
+
+      if ($cmd == 0x1d) {
+        die "unexpected LC_CODE_SIGNATURE size in $path\n" if $cmdsize < 16;
+        die "multiple LC_CODE_SIGNATURE commands in $path\n" if defined $sig_dataoff;
+        $sig_dataoff = u32le($data, $offset + 8);
+        $sig_datasize = u32le($data, $offset + 12);
+            }
+
+            $offset += $cmdsize;
+        }
+
+        die "missing LC_CODE_SIGNATURE in $path\n" if !defined $sig_dataoff;
+        die "empty LC_CODE_SIGNATURE in $path\n" if $sig_datasize == 0;
+        die "LC_CODE_SIGNATURE exceeds file in $path\n" if $sig_dataoff + $sig_datasize > $file_len;
+        die "LC_CODE_SIGNATURE too short for SuperBlob header in $path\n" if $sig_datasize < 12;
+
+        my $sig = substr($data, $sig_dataoff, $sig_datasize);
+        die "LC_CODE_SIGNATURE is not a SuperBlob in $path\n" if u32be($sig, 0) != 0xfade0cc0;
+
+        my $superblob_len = u32be($sig, 4);
+        my $count = u32be($sig, 8);
+        my $index_end = 12 + ($count * 8);
+
+        die "SuperBlob length too small in $path\n" if $superblob_len < $index_end;
+        die "SuperBlob length exceeds LC_CODE_SIGNATURE size in $path\n" if $superblob_len > $sig_datasize;
+
+        for (my $i = 0; $i < $count; $i++) {
+            my $entry_off = 12 + ($i * 8);
+            my $blob_off = u32be($sig, $entry_off + 4);
+            die "SuperBlob index points outside SuperBlob in $path\n" if $blob_off + 8 > $superblob_len;
+            my $blob_magic = u32be($sig, $blob_off);
+            if ($blob_magic == 0xfade7171 || $blob_magic == 0xfade7172) {
+                exit 0;
+            }
+        }
+
+        exit 1;
+    ' "$executable_path"
+}
+
+verify_release_entitlements_policy() {
+    local release_app="$1"
+    [[ -d "$release_app" ]] || die "Release app bundle not found for entitlements check: $release_app"
+    [[ -f "$VERIFY_EXPECTED_ENTITLEMENTS_PLIST" ]] || die "Expected entitlements plist not found: $VERIFY_EXPECTED_ENTITLEMENTS_PLIST"
+
+    local expected_xml actual_raw actual_xml release_main_executable
+    expected_xml="$(mktemp)"
+    actual_raw="$(mktemp)"
+    actual_xml="$(mktemp)"
+    release_main_executable="$(app_main_executable_path "$release_app")"
+
+    # Explicitly policy-check which capabilities the signed app requests from macOS at runtime, because entitlements live inside the signature blob that we do not compare byte-for-byte.
+    plutil -convert xml1 -o "$expected_xml" "$VERIFY_EXPECTED_ENTITLEMENTS_PLIST"
+    codesign -d --entitlements - "$release_app" >"$actual_raw"
2>/dev/null || die "Failed to extract entitlements from release app: $release_app"
+
+    if [[ ! -s "$actual_raw" ]]; then
+        # Do not treat empty extractor output as automatically safe.
+        # Only default to an empty plist when the signed Mach-O structure itself has no embedded entitlement blobs at all.
+        # If there is an entitlement blob but codesign does not give us a plist, fail.
+        if release_main_executable_has_embedded_entitlements_blob "$release_main_executable"; then
+            rm -f "$expected_xml" "$actual_raw" "$actual_xml"
+            die "Release executable has embedded entitlement blobs, but no entitlement plist could be extracted: $release_main_executable"
+        fi
+
+        # codesign produced no output and the executable embeds no entitlement blobs.
+        # Convert that state into the same empty plist as expected_macos_entitlements so a normal diff can enforce the expectation.
+        write_empty_plist "$actual_xml"
+    else
+        plutil -convert xml1 -o "$actual_xml" "$actual_raw"
+    fi
+
+    if ! diff -u "$expected_xml" "$actual_xml"; then
+        rm -f "$expected_xml" "$actual_raw" "$actual_xml"
+        die "Release entitlements do not match expected policy: $release_app"
+    fi
+
+    rm -f "$expected_xml" "$actual_raw" "$actual_xml"
+}
+
+verify_macho_signature_tail_matches_local() {
+    local local_path="$1"
+    local release_path="$2"
+    [[ -f "$local_path" ]] || die "Local Mach-O file not found for signature-tail check: $local_path"
+    [[ -f "$release_path" ]] || die "Release Mach-O file not found for signature-tail check: $release_path"
+
+    # The normalized Mach-O comparison removes the full LC_CODE_SIGNATURE region from the signed release view.
+    # Most of that region is Apple signing data, but in practice it also carries trailing bytes after the declared SuperBlob length.
+    # Those bytes are not executable code, but we still want them covered by an explicit check.
+ # + # So when the signed artifact has a tail beyond the parsed SuperBlob, require that tail to be inherited unchanged from the local build at the same file offsets and within the local build's own LC_CODE_SIGNATURE region. + if ! perl -e ' + use strict; + use warnings; + + sub u32le { + return unpack("V", substr($_[0], $_[1], 4)); + } + + sub u64le { + return unpack("Q<", substr($_[0], $_[1], 8)); + } + + sub u32be { + return unpack("N", substr($_[0], $_[1], 4)); + } + + sub parse_macho_signature_region { + my ($data, $path) = @_; + my $file_len = length($data); + die "unsupported Mach-O magic in $path\n" if u32le($data, 0) != 0xfeedfacf; + + my $ncmds = u32le($data, 16); + my $offset = 32; + my ($sig_dataoff, $sig_datasize); + + for (my $i = 0; $i < $ncmds; $i++) { + die "truncated Mach-O load commands in $path\n" if $offset + 8 > $file_len; + my $cmd = u32le($data, $offset); + my $cmdsize = u32le($data, $offset + 4); + die "invalid load command size in $path\n" if $cmdsize < 8 || $offset + $cmdsize > $file_len; + + if ($cmd == 0x1d) { + die "unexpected LC_CODE_SIGNATURE size in $path\n" if $cmdsize < 16; + die "multiple LC_CODE_SIGNATURE commands in $path\n" if defined $sig_dataoff; + $sig_dataoff = u32le($data, $offset + 8); + $sig_datasize = u32le($data, $offset + 12); + } + + $offset += $cmdsize; + } + + die "missing LC_CODE_SIGNATURE in $path\n" if !defined $sig_dataoff; + die "empty LC_CODE_SIGNATURE in $path\n" if $sig_datasize == 0; + die "LC_CODE_SIGNATURE exceeds file in $path\n" if $sig_dataoff + $sig_datasize > $file_len; + + return ($sig_dataoff, $sig_datasize, $file_len); + } + + my ($local_path, $release_path) = @ARGV; + + open my $local_fh, "<", $local_path or die "open($local_path): $!"; + open my $release_fh, "<", $release_path or die "open($release_path): $!"; + binmode $local_fh; + binmode $release_fh; + local $/; + my $local_data = <$local_fh>; + my $release_data = <$release_fh>; + + my ($local_sig_off, $local_sig_size) = 
parse_macho_signature_region($local_data, $local_path); + my ($release_sig_off, $release_sig_size) = parse_macho_signature_region($release_data, $release_path); + + die "release LC_CODE_SIGNATURE too short for SuperBlob header in $release_path\n" + if $release_sig_size < 12; + + my $superblob_magic = u32be($release_data, $release_sig_off); + die "release LC_CODE_SIGNATURE is not a SuperBlob in $release_path\n" + if $superblob_magic != 0xfade0cc0; + + my $superblob_len = u32be($release_data, $release_sig_off + 4); + my $superblob_count = u32be($release_data, $release_sig_off + 8); + my $index_len = 12 + ($superblob_count * 8); + + die "release SuperBlob length too small in $release_path\n" + if $superblob_len < $index_len; + die "release SuperBlob length exceeds LC_CODE_SIGNATURE size in $release_path\n" + if $superblob_len > $release_sig_size; + + my $tail_len = $release_sig_size - $superblob_len; + exit 0 if $tail_len == 0; + + my $tail_off = $release_sig_off + $superblob_len; + my $local_sig_end = $local_sig_off + $local_sig_size; + my $release_sig_end = $release_sig_off + $release_sig_size; + + die "release signature tail starts before local LC_CODE_SIGNATURE in $release_path\n" + if $tail_off < $local_sig_off; + die "release signature tail exceeds local LC_CODE_SIGNATURE in $release_path\n" + if $tail_off + $tail_len > $local_sig_end; + + my $release_tail = substr($release_data, $tail_off, $tail_len); + my $local_tail = substr($local_data, $tail_off, $tail_len); + + die "release signature tail differs from local build bytes in $release_path\n" + if $release_tail ne $local_tail; + ' "$local_path" "$release_path"; then + die "Mach-O signature tail check failed: $release_path" + fi +} + +verify_release_macho_signature_tails() { + local local_app="$1" + local release_app="$2" + [[ -d "$local_app" ]] || die "Local app bundle not found for Mach-O signature-tail check: $local_app" + [[ -d "$release_app" ]] || die "Release app bundle not found for Mach-O signature-tail 
check: $release_app"
+
+    while IFS= read -r release_path; do
+        [[ -n "$release_path" ]] || continue
+
+        if ! file -b "$release_path" | grep -q 'Mach-O'; then
+            continue
+        fi
+
+        local rel="${release_path#${release_app}/}"
+        local local_path="${local_app}/${rel}"
+        [[ -f "$local_path" ]] || die "Local Mach-O counterpart missing: $local_path"
+        file -b "$local_path" | grep -q 'Mach-O' || die "Local counterpart is not Mach-O: $local_path"
+        verify_macho_signature_tail_matches_local "$local_path" "$release_path"
+    done < <(find "$release_app" -type f | LC_ALL=C sort)
+}
+
+# codesign --verify only says the signed release is internally consistent:
+# it proves the CodeDirectory hashes match the bytes in that same signed app.
+# It does NOT prove those signed executable pages match the user's local source build.
+#
+# The normalized Mach-O compare in normalized_macho_sha256_file() is our project-specific equivalence view for the first-page/load-command region that Apple signing mutates.
+# This CodeDirectory check complements that comparison rather than replacing it.
+#
+# It is the Apple-native check for the stable pages AFTER slot 0.
+# We take Apple's own signed CodeDirectory page hashes from the release app and recompute them from the local build.
+# If those match, the bulk of the executable payload no longer relies solely on our custom normalization rules, while the normalized Mach-O compare remains responsible for slot 0.
+verify_macho_codedirectory_pages_match_local() {
+    local local_path="$1"
+    local release_path="$2"
+    [[ -f "$local_path" ]] || die "Local Mach-O file not found for CodeDirectory page check: $local_path"
+    [[ -f "$release_path" ]] || die "Release Mach-O file not found for CodeDirectory page check: $release_path"
+
+    # Use Apple's own signed page-hash view for the stable post-header pages.
+    if !
perl -MDigest::SHA=sha1,sha256,sha384 -e ' + use strict; + use warnings; + + sub u32le { + return unpack("V", substr($_[0], $_[1], 4)); + } + + sub u64le { + return unpack("Q<", substr($_[0], $_[1], 8)); + } + + sub u32be { + return unpack("N", substr($_[0], $_[1], 4)); + } + + sub u64be { + return unpack("Q>", substr($_[0], $_[1], 8)); + } + + sub parse_macho_signature_region { + my ($data, $path) = @_; + my $file_len = length($data); + die "unsupported Mach-O magic in $path\n" if u32le($data, 0) != 0xfeedfacf; + + my $ncmds = u32le($data, 16); + my $offset = 32; + my ($sig_dataoff, $sig_datasize); + + for (my $i = 0; $i < $ncmds; $i++) { + die "truncated Mach-O load commands in $path\n" if $offset + 8 > $file_len; + my $cmd = u32le($data, $offset); + my $cmdsize = u32le($data, $offset + 4); + die "invalid load command size in $path\n" if $cmdsize < 8 || $offset + $cmdsize > $file_len; + + if ($cmd == 0x1d) { + die "unexpected LC_CODE_SIGNATURE size in $path\n" if $cmdsize < 16; + die "multiple LC_CODE_SIGNATURE commands in $path\n" if defined $sig_dataoff; + $sig_dataoff = u32le($data, $offset + 8); + $sig_datasize = u32le($data, $offset + 12); + } + + $offset += $cmdsize; + } + + die "missing LC_CODE_SIGNATURE in $path\n" if !defined $sig_dataoff; + die "empty LC_CODE_SIGNATURE in $path\n" if $sig_datasize == 0; + die "LC_CODE_SIGNATURE exceeds file in $path\n" if $sig_dataoff + $sig_datasize > $file_len; + + return substr($data, $sig_dataoff, $sig_datasize); + } + + sub find_codedirectory_blob { + my ($sig, $path) = @_; + die "LC_CODE_SIGNATURE too short for SuperBlob header in $path\n" if length($sig) < 12; + die "LC_CODE_SIGNATURE is not a SuperBlob in $path\n" if u32be($sig, 0) != 0xfade0cc0; + + my $superblob_len = u32be($sig, 4); + my $count = u32be($sig, 8); + my $index_end = 12 + ($count * 8); + + die "SuperBlob length too small in $path\n" if $superblob_len < $index_end; + die "SuperBlob length exceeds LC_CODE_SIGNATURE size in $path\n" if 
$superblob_len > length($sig); + + for (my $i = 0; $i < $count; $i++) { + my $entry_off = 12 + ($i * 8); + my $slot_type = u32be($sig, $entry_off); + my $blob_off = u32be($sig, $entry_off + 4); + die "SuperBlob index points outside SuperBlob in $path\n" if $blob_off + 8 > $superblob_len; + my $blob_magic = u32be($sig, $blob_off); + my $blob_len = u32be($sig, $blob_off + 4); + die "SuperBlob entry exceeds SuperBlob in $path\n" if $blob_off + $blob_len > $superblob_len; + if ($slot_type == 0) { + die "slot 0 is not a CodeDirectory in $path\n" if $blob_magic != 0xfade0c02; + return substr($sig, $blob_off, $blob_len); + } + } + + die "missing CodeDirectory slot in $path\n"; + } + + sub hash_bytes { + my ($hash_type, $bytes) = @_; + if ($hash_type == 1) { + return sha1($bytes); + } + if ($hash_type == 2) { + return sha256($bytes); + } + if ($hash_type == 3) { + return substr(sha256($bytes), 0, 20); + } + if ($hash_type == 4) { + return sha384($bytes); + } + die "unsupported CodeDirectory hash type $hash_type\n"; + } + + my ($local_path, $release_path) = @ARGV; + + open my $local_fh, "<", $local_path or die "open($local_path): $!"; + open my $release_fh, "<", $release_path or die "open($release_path): $!"; + binmode $local_fh; + binmode $release_fh; + local $/; + my $local_data = <$local_fh>; + my $release_data = <$release_fh>; + + my $sig = parse_macho_signature_region($release_data, $release_path); + my $cd = find_codedirectory_blob($sig, $release_path); + die "CodeDirectory header truncated in $release_path\n" if length($cd) < 44; + + my $cd_len = u32be($cd, 4); + my $version = u32be($cd, 8); + my $hash_offset = u32be($cd, 16); + my $n_special = u32be($cd, 24); + my $n_code = u32be($cd, 28); + my $code_limit = u32be($cd, 32); + my $hash_size = ord(substr($cd, 36, 1)); + my $hash_type = ord(substr($cd, 37, 1)); + my $page_exp = ord(substr($cd, 39, 1)); + + die "CodeDirectory length mismatch in $release_path\n" if $cd_len != length($cd); + die "unsupported scattered 
CodeDirectory in $release_path\n" + if $version >= 0x20100 && u32be($cd, 44) != 0; + + if ($version >= 0x20300) { + die "CodeDirectory v0x20300 header truncated in $release_path\n" if length($cd) < 64; + my $code_limit64 = u64be($cd, 56); + $code_limit = $code_limit64 if $code_limit64 != 0; + } + + my $page_size = $page_exp == 0 ? 0 : (1 << $page_exp); + my $expected_slots = $page_size == 0 ? 1 : int(($code_limit + $page_size - 1) / $page_size); + die "unexpected CodeDirectory slot count in $release_path\n" if $n_code != $expected_slots; + die "local file shorter than signed CodeDirectory codeLimit in $local_path\n" + if length($local_data) < $code_limit; + die "unsupported CodeDirectory page layout in $release_path\n" if $page_size == 0; + + my $hash_base = $hash_offset - ($n_special * $hash_size); + die "CodeDirectory hash table starts before blob in $release_path\n" if $hash_base < 0; + die "CodeDirectory hash table exceeds blob in $release_path\n" + if $hash_offset + ($n_code * $hash_size) > length($cd); + + # Slot 0 covers the first page of the Mach-O, which includes the header and load commands that Apple signing mutates (UUID, the LC_CODE_SIGNATURE command, and LINKEDIT sizing bookkeeping) + # We keep relying on the existing normalized Mach-O compare for that page and use CodeDirectory slot parity for the remaining stable pages + for (my $slot = 1; $slot < $n_code; $slot++) { + my $start = $slot * $page_size; + my $length = $page_size; + my $remaining = $code_limit - $start; + $length = $remaining if $remaining < $length; + my $page = substr($local_data, $start, $length); + my $actual = hash_bytes($hash_type, $page); + die "unexpected CodeDirectory hash size in $release_path\n" if length($actual) != $hash_size; + my $expected = substr($cd, $hash_offset + ($slot * $hash_size), $hash_size); + die "CodeDirectory page hash mismatch at slot $slot in $release_path\n" if $actual ne $expected; + } + ' "$local_path" "$release_path"; then + die "Mach-O CodeDirectory 
page verification failed: $release_path"
+    fi
+}
+
+verify_release_macho_codedirectory_pages() {
+    local local_app="$1"
+    local release_app="$2"
+    [[ -d "$local_app" ]] || die "Local app bundle not found for CodeDirectory page check: $local_app"
+    [[ -d "$release_app" ]] || die "Release app bundle not found for CodeDirectory page check: $release_app"
+
+    while IFS= read -r release_path; do
+        [[ -n "$release_path" ]] || continue
+
+        if ! file -b "$release_path" | grep -q 'Mach-O'; then
+            continue
+        fi
+
+        local rel="${release_path#${release_app}/}"
+        local local_path="${local_app}/${rel}"
+        [[ -f "$local_path" ]] || die "Local Mach-O counterpart missing: $local_path"
+        file -b "$local_path" | grep -q 'Mach-O' || die "Local counterpart is not Mach-O: $local_path"
+        verify_macho_codedirectory_pages_match_local "$local_path" "$release_path"
+    done < <(find "$release_app" -type f | LC_ALL=C sort)
+}
+
+strip_bundle_signing() {
+    local app_dir="$1"
+    [[ -d "$app_dir" ]] || die "App bundle not found for normalization: $app_dir"
+
+    # Public macOS release apps are expected to differ from the reproducible local build in exactly the places Apple signing and distribution tooling touch:
+    # extended attributes, code signature directories, CodeResources, and optional provisioning metadata.
+    #
+    # Remove that bundle-level release noise here so the comparison answers the question we care about:
+    # does the shipped signed app reduce to the same underlying app payload as the reproducible unsigned build?
+    #
+    # The executable bytes themselves are handled separately below.
+    # Mach-O files still contain signing- and linkedit-related differences after bundle-level stripping,
+    # so normalized_file_hash() hashes a canonicalized Mach-O view instead of the raw bytes for those files only.
+    xattr -cr "$app_dir" 2>/dev/null || true
+    find "$app_dir" -name '.DS_Store' -type f -delete
+    find "$app_dir" -name 'CodeResources' -type f -delete
+    find "$app_dir" -name 'embedded.provisionprofile' -type f -delete
+
+    while IFS= read -r code_sig_dir; do
+        [[ -n "$code_sig_dir" ]] || continue
+        rm -rf "$code_sig_dir"
+    done < <(find "$app_dir" -type d -name '_CodeSignature' | LC_ALL=C sort)
+}
+
+normalized_file_hash() {
+    local path="$1"
+
+    if file -b "$path" | grep -q 'Mach-O'; then
+        # Compare on a *narrowly* normalized Mach-O representation so Apple-added signature metadata does not outweigh payload equivalence.
+        # Unsupported layouts fail inside normalized_macho_sha256_file().
+        normalized_macho_sha256_file "$path"
+        return
+    fi
+
+    sha256_file "$path"
+}
+
+write_app_inventory() {
+    local app_dir="$1"
+    local out_file="$2"
+    : > "$out_file"
+
+    # Each inventory line captures file type, mode, relative path, and either a symlink target or a normalized file hash.
+    # This keeps the eventual diff readable and avoids requiring byte-for-byte archive identity at the zip/container level, for the reasons covered in the normalization comments above.
+    while IFS= read -r path; do
+        [[ -n "$path" ]] || continue
+        local rel="${path#${app_dir}/}"
+        local mode
+        mode="$(stat -f '%p' "$path")"
+
+        if [[ -L "$path" ]]; then
+            local target
+            target="$(readlink "$path")"
+            printf 'L\t%s\t%s\t%s\n' "$mode" "$rel" "$target" >> "$out_file"
+            continue
+        fi
+
+        if [[ -f "$path" ]]; then
+            local hash
+            if !
hash="$(normalized_file_hash "$path")"; then + die "Failed to normalize file for release verification: $path" + fi + printf 'F\t%s\t%s\t%s\n' "$mode" "$rel" "$hash" >> "$out_file" + fi + done < <(find "$app_dir" \( -type f -o -type l \) | LC_ALL=C sort) +} + +main() { + parse_verify_args "$@" + [[ -n "$VERIFY_RELEASE_PATH" ]] || die "--release is required" + ensure_verify_tools + + # local side is expected to be the unsigned app produced by a reproducible build + # release side is the signed/notarized artifact someone downloaded + local local_source_app + local_source_app="$(resolve_local_app)" + + local tmp_dir + tmp_dir="$(mktemp -d)" + VERIFY_TMP_DIR="$tmp_dir" + if [[ "$VERIFY_KEEP_TEMP" -eq 0 ]]; then + trap cleanup_verify_tmp_dir EXIT + fi + + local local_root="${tmp_dir}/local" + local release_root="${tmp_dir}/release" + mkdir -p "$local_root" "$release_root" + + local local_app + local_app="$(materialize_app_copy "$local_source_app" "$local_root")" + local release_app + release_app="$(materialize_app_copy "$VERIFY_RELEASE_PATH" "$release_root")" + + verify_release_signing_policy "$release_app" "$local_app" + verify_release_entitlements_policy "$release_app" + verify_release_macho_codedirectory_pages "$local_app" "$release_app" + verify_release_macho_signature_tails "$local_app" "$release_app" + + # Strip bundle-level signing noise from both trees before inventorying them + # The Mach-O-specific normalization happens inside normalized_file_hash + strip_bundle_signing "$local_app" + strip_bundle_signing "$release_app" + + local local_inv="${tmp_dir}/local.inventory.txt" + local release_inv="${tmp_dir}/release.inventory.txt" + write_app_inventory "$local_app" "$local_inv" + write_app_inventory "$release_app" "$release_inv" + + if ! 
diff -u "$local_inv" "$release_inv"; then + echo "" + echo "macOS release verification FAILED" + echo "- Local unsigned app : $local_source_app" + echo "- Release input : $VERIFY_RELEASE_PATH" + if [[ "$VERIFY_KEEP_TEMP" -eq 1 ]]; then + echo "- Temp dir : $tmp_dir" + fi + exit 1 + fi + + echo "macOS release verification PASSED" + echo "- Local unsigned app : $local_source_app" + echo "- Release input : $VERIFY_RELEASE_PATH" + if [[ "$VERIFY_KEEP_TEMP" -eq 1 ]]; then + echo "- Temp dir : $tmp_dir" + fi +} + +main "$@" diff --git a/releases/verify_windows_release.sh b/releases/verify_windows_release.sh new file mode 100644 index 0000000..326333a --- /dev/null +++ b/releases/verify_windows_release.sh @@ -0,0 +1,503 @@ +#!/usr/bin/env bash +# SPDX-License-Identifier: GPL-3.0-or-later +set -euo pipefail +IFS=$'\n\t' + +# Verify a Windows release directly against a local reproducible build. +# +# We do not use a published manifest. +# The trust model is to build it yourself, normalize the shipped signed Windows artifact, and compare the two executable payloads directly. +# +# The comparison this script performs is not a raw exe byte diff. +# Signed Windows releases pick up Authenticode-specific metadata that should differ from the unsigned reproducible build. +# We therefore materialize normalized copies, strip only the exact PE signing fields/blob that Authenticode is allowed to touch, and then compare the resulting files byte-for-byte. 
+ +PROGRAM_NAME="$(basename "$0")" +RELEASES_DIR="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)" + +# shellcheck disable=SC1091 +source "${RELEASES_DIR}/lib/common.bash" + +VERIFY_LOCAL_FILE="" +VERIFY_LOCAL_RUN="" +VERIFY_TRIPLE="" +VERIFY_RELEASE_PATH="" +VERIFY_SIGNTOOL_PATH="${VERIFY_SIGNTOOL_PATH:-}" +VERIFY_POWERSHELL_PATH="${VERIFY_POWERSHELL_PATH:-}" +VERIFY_EXPECTED_SUBJECT="${VERIFY_EXPECTED_SUBJECT:-Secluso, Inc.}" + +VERIFY_EXPECTED_CERT_SHA1="${VERIFY_EXPECTED_CERT_SHA1:-8768A4ED8597B0DD6ED0800EDFBED9AD262C1CE4}" +VERIFY_KEEP_TEMP=0 +VERIFY_TMP_DIR="" + +verify_usage() { + cat >&2 </dev/null 2>&1; then + VERIFY_SIGNTOOL_PATH="$(command -v signtool)" + return + fi + + local candidate + for candidate in \ + "/c/Program Files (x86)/Windows Kits/10/bin"/*/x64/signtool.exe \ + "/c/Program Files (x86)/Windows Kits/10/bin"/*/x86/signtool.exe + do + if [[ -x "$candidate" ]]; then + VERIFY_SIGNTOOL_PATH="$candidate" + return + fi + done + + die "Microsoft signtool not found. Provide --signtool or run this on a Windows machine with the Windows SDK installed." +} + +find_powershell() { + if [[ -n "$VERIFY_POWERSHELL_PATH" ]]; then + [[ -x "$VERIFY_POWERSHELL_PATH" ]] || die "PowerShell not executable: $VERIFY_POWERSHELL_PATH" + return + fi + + local candidate + for candidate in powershell.exe powershell pwsh.exe pwsh; do + if command -v "$candidate" >/dev/null 2>&1; then + VERIFY_POWERSHELL_PATH="$(command -v "$candidate")" + return + fi + done + + die "PowerShell not found. It is required to extract the primary Authenticode signer certificate for certificate pinning." +} + +ensure_verify_tools() { + require_tool perl + require_tool cmp + require_tool diff + find_signtool + find_powershell + # Keep Git Bash/MSYS from rewriting signtool's slash-prefixed options into filesystem paths when it launches the native Windows executable. 
+ export MSYS2_ARG_CONV_EXCL="${MSYS2_ARG_CONV_EXCL:-};/v;/debug;/pa" + init_sha256_tool +} + +is_supported_windows_artifact() { + case "$1" in + *.exe) return 0 ;; + *) return 1 ;; + esac +} + +resolve_local_file() { + if [[ -n "$VERIFY_LOCAL_FILE" ]]; then + [[ -f "$VERIFY_LOCAL_FILE" ]] || die "Local unsigned artifact not found: $VERIFY_LOCAL_FILE" + is_supported_windows_artifact "$VERIFY_LOCAL_FILE" || die "Unsupported local artifact type: $VERIFY_LOCAL_FILE (expected .exe)" + printf '%s\n' "$VERIFY_LOCAL_FILE" + return + fi + + [[ -n "$VERIFY_LOCAL_RUN" ]] || die "Provide either --local-file or --local-run" + [[ -n "$VERIFY_TRIPLE" ]] || die "--triple is required with --local-run" + + local artifact_dir="${VERIFY_LOCAL_RUN}/artifacts/${VERIFY_TRIPLE}" + [[ -d "$artifact_dir" ]] || die "Artifact directory not found: $artifact_dir" + + local local_path="" + while IFS= read -r candidate; do + [[ -n "$candidate" ]] || continue + if [[ -n "$local_path" ]]; then + die "More than one Windows .exe artifact found under: $artifact_dir" + fi + local_path="$candidate" + done < <(find "$artifact_dir" -type f -name '*.exe' | LC_ALL=C sort) + + [[ -n "$local_path" ]] || die "No Windows .exe artifacts found under: $artifact_dir" + printf '%s\n' "$local_path" +} + +normalize_cert_sha1() { + local value="$1" + printf '%s' "$value" | tr '[:lower:]' '[:upper:]' | tr -cd 'A-F0-9' +} + +absolute_path() { + local path="$1" + local dir base + dir="$(cd -- "$(dirname -- "$path")" && pwd -P)" || return 1 + base="$(basename -- "$path")" + printf '%s/%s\n' "$dir" "$base" +} + +path_for_powershell() { + local path="$1" + local abs_path + abs_path="$(absolute_path "$path")" || return 1 + + if command -v cygpath >/dev/null 2>&1; then + cygpath -w "$abs_path" + return + fi + + printf '%s\n' "$abs_path" +} + +primary_signer_certificate_info() { + local release_path="$1" + local ps_path ps_output + ps_path="$(path_for_powershell "$release_path")" || die "Failed to resolve release path for 
PowerShell: $release_path" + + if ! ps_output="$(VERIFY_RELEASE_PATH_PS="$ps_path" "$VERIFY_POWERSHELL_PATH" -NoProfile -NonInteractive -Command ' + $ErrorActionPreference = "Stop" + $sig = Get-AuthenticodeSignature -LiteralPath $env:VERIFY_RELEASE_PATH_PS + if ($null -eq $sig.SignerCertificate) { + throw "No primary signer certificate was returned by Get-AuthenticodeSignature" + } + $cert = $sig.SignerCertificate + Write-Output ("status`t{0}" -f $sig.Status) + Write-Output ("subject`t{0}" -f $cert.Subject) + Write-Output ("simple_name`t{0}" -f $cert.GetNameInfo([System.Security.Cryptography.X509Certificates.X509NameType]::SimpleName, $false)) + Write-Output ("thumbprint`t{0}" -f $cert.Thumbprint) + ' 2>&1)"; then + printf '%s\n' "$ps_output" >&2 + die "Failed to inspect primary Authenticode signer certificate: $release_path" + fi + + printf '%s\n' "$ps_output" +} + +verify_primary_signer_certificate_pin() { + local release_path="$1" + local cert_info simple_name thumbprint expected_sha1 + cert_info="$(primary_signer_certificate_info "$release_path")" + simple_name="$(awk -F'\t' '$1 == "simple_name" { print $2; exit }' <<<"$cert_info")" + thumbprint="$(awk -F'\t' '$1 == "thumbprint" { print $2; exit }' <<<"$cert_info")" + expected_sha1="$(normalize_cert_sha1 "$VERIFY_EXPECTED_CERT_SHA1")" + thumbprint="$(normalize_cert_sha1 "$thumbprint")" + + [[ -n "$simple_name" ]] || die "Primary signer certificate is missing a subject common name: $release_path" + [[ "$simple_name" == "$VERIFY_EXPECTED_SUBJECT" ]] || die "Release signer mismatch: expected ${VERIFY_EXPECTED_SUBJECT}, got ${simple_name}" + [[ -n "$thumbprint" ]] || die "Primary signer certificate is missing a thumbprint: $release_path" + [[ "$thumbprint" == "$expected_sha1" ]] || die "Release signer certificate thumbprint mismatch: expected ${expected_sha1}, got ${thumbprint}" +} + +verify_release_signing_policy() { + local release_path="$1" + [[ -f "$release_path" ]] || die "Release artifact not found: 
$release_path"
+    is_supported_windows_artifact "$release_path" || die "Unsupported release artifact type: $release_path (expected .exe)"
+
+    local verify_output
+    if ! verify_output="$("$VERIFY_SIGNTOOL_PATH" verify /v /debug /pa "$release_path" 2>&1)"; then
+        printf '%s\n' "$verify_output" >&2
+        die "Authenticode verification failed: $release_path"
+    fi
+
+    # Enforce the Windows-side release policy before any signed-vs-unsigned equivalence work:
+    # is the downloaded release still a valid Authenticode-signed artifact with the identity and timestamp properties we expect?
+    grep -q 'Successfully verified:' <<<"$verify_output" || die "signtool did not report successful verification: $release_path"
+    grep -q 'Signature Index: 0 (Primary Signature)' <<<"$verify_output" || die "Release is missing a primary Authenticode signature: $release_path"
+    grep -q 'The signature is timestamped:' <<<"$verify_output" || die "Release signature is missing a trusted timestamp: $release_path"
+    grep -q 'Timestamp Verified by:' <<<"$verify_output" || die "Release timestamp chain was not verified: $release_path"
+
+    # Pin the primary signer certificate structurally through the Windows Authenticode API, rather than by grepping signtool text output.
+    verify_primary_signer_certificate_pin "$release_path"
+
+    printf '%s\n' "$verify_output"
+}
+
+normalize_pe_file() {
+    local file_path="$1"
+    local out_file="$2"
+    local info_file="$3"
+    local expected_signature_state="$4"
+
+    [[ -f "$file_path" ]] || die "PE file not found for normalization: $file_path"
+
+    # Public Windows release installers are expected to differ from the reproducible local build in exactly the places Authenticode signing touches:
+    # the PE checksum, the PE security directory, optional alignment padding immediately before the certificate table, and the WIN_CERTIFICATE blob itself.
+    # Everything else is executable/installer payload and must survive normalization unchanged.
+ # + # This is a (narrow) comparison helper for release verification, and it fails on layouts we do not explicitly understand. + perl -e ' + use strict; + use warnings; + + sub u16le { return unpack("v", substr($_[0], $_[1], 2)); } + sub u32le { return unpack("V", substr($_[0], $_[1], 4)); } + sub put_u32le { substr($_[0], $_[1], 4) = pack("V", $_[2]); } + + my ($path, $out_path, $info_path, $expected_signature_state) = @ARGV; + open my $fh, "<", $path or die "open($path): $!"; + binmode $fh; + local $/; + my $data = <$fh>; + my $file_len = length($data); + + die "file too small for MZ header: $path\n" if $file_len < 0x40; + die "missing MZ header: $path\n" if substr($data, 0, 2) ne "MZ"; + + my $pe_off = u32le($data, 0x3c); + die "invalid PE header offset in $path\n" if $pe_off + 24 > $file_len; + die "missing PE signature: $path\n" if substr($data, $pe_off, 4) ne "PE\0\0"; + + my $coff_off = $pe_off + 4; + my $optional_size = u16le($data, $coff_off + 16); + my $optional_off = $coff_off + 20; + die "truncated optional header in $path\n" if $optional_off + $optional_size > $file_len; + + my $magic = u16le($data, $optional_off); + my ($checksum_off, $number_rva_off); + if ($magic == 0x10b) { + $checksum_off = $optional_off + 64; + $number_rva_off = $optional_off + 92; + } elsif ($magic == 0x20b) { + $checksum_off = $optional_off + 64; + $number_rva_off = $optional_off + 108; + } else { + die "unsupported PE optional-header magic in $path\n"; + } + + die "truncated PE data directories in $path\n" if $number_rva_off + 4 > $optional_off + $optional_size; + my $number_rva = u32le($data, $number_rva_off); + die "PE has no certificate directory in $path\n" if $number_rva < 5; + + my $cert_dir_off = $number_rva_off + 4 + (4 * 8); + die "truncated certificate directory in $path\n" if $cert_dir_off + 8 > $optional_off + $optional_size; + + my $checksum = u32le($data, $checksum_off); + my $cert_file_off = u32le($data, $cert_dir_off); + my $cert_size = u32le($data, 
$cert_dir_off + 4); + my $cert_pad_size = 0; + + # Authenticode rewrites the PE checksum and points the security directory at the appended WIN_CERTIFICATE. + # Zero those bookkeeping fields in both views so they do not outweigh the payload comparison. + put_u32le($data, $checksum_off, 0); + put_u32le($data, $cert_dir_off, 0); + put_u32le($data, $cert_dir_off + 4, 0); + + if ($cert_file_off != 0 || $cert_size != 0) { + die "expected unsigned PE but found certificate table in $path\n" + if $expected_signature_state eq "unsigned"; + die "invalid certificate table bounds in $path\n" + if $cert_file_off <= 0 || $cert_size <= 0 || $cert_file_off + $cert_size > $file_len; + die "certificate table is not at end of file in $path\n" + if $cert_file_off + $cert_size != $file_len; + die "certificate table is not 8-byte aligned in $path\n" + if ($cert_file_off % 8) != 0; + + # The certificate table is intentionally a file offset rather than an RVA. + # Require it to be the final structure in the file so there is no unchecked overlay after the signature. + substr($data, $cert_file_off, $cert_size) = ""; + + # Authenticode places WIN_CERTIFICATE on an 8-byte boundary. + # Some installers therefore gain a small NUL pad immediately before the certificate table. + # Drop at most that alignment pad, and only when it is literally NUL bytes at the end of the remaining file. + for (my $i = 0; $i < 7 && length($data) > 0; $i++) { + last if substr($data, length($data) - 1, 1) ne "\0"; + substr($data, length($data) - 1, 1) = ""; + $cert_pad_size++; + } + } elsif ($expected_signature_state eq "signed") { + die "expected signed PE but certificate table is empty in $path\n"; + } + + open my $out, ">", $out_path or die "open($out_path): $!"; + binmode $out; + print {$out} $data; + close $out or die "close($out_path): $!"; + + # Keep the normalization facts as an audit artifact. 
+ # If comparison fails, these lines make it obvious whether the difference was in normal payload bytes or in the PE signing envelope.
+ open my $info, ">", $info_path or die "open($info_path): $!";
+ print {$info} "path\t$path\n";
+ print {$info} "file_size\t$file_len\n";
+ print {$info} "pe_header_offset\t$pe_off\n";
+ print {$info} "optional_header_magic\t", sprintf("0x%x", $magic), "\n";
+ print {$info} "checksum_offset\t$checksum_off\n";
+ print {$info} "checksum_value\t$checksum\n";
+ print {$info} "cert_directory_offset\t$cert_dir_off\n";
+ print {$info} "cert_file_offset\t$cert_file_off\n";
+ print {$info} "cert_size\t$cert_size\n";
+ print {$info} "cert_alignment_pad_size\t$cert_pad_size\n";
+ print {$info} "normalized_size\t", length($data), "\n";
+ close $info or die "close($info_path): $!";
+ ' "$file_path" "$out_file" "$info_file" "$expected_signature_state"
+}
+
+write_artifact_inventory() {
+ local label="$1"
+ local file_path="$2"
+ local normalized_path="$3"
+ local pe_info_path="$4"
+ local out_file="$5"
+
+ local raw_sha normalized_sha size
+ # GNU sha256sum prefixes the hash with a backslash when the file name needed escaping; strip it so hashes compare cleanly.
+ raw_sha="$(sha256_file "$file_path")"
+ raw_sha="${raw_sha#\\}"
+ normalized_sha="$(sha256_file "$normalized_path")"
+ normalized_sha="${normalized_sha#\\}"
+ size="$(wc -c < "$file_path" | awk '{print $1}')"
+
+ # The inventory makes the failure diff readable; cmp remains the actual byte-for-byte payload check.
+ {
+ printf 'label\t%s\n' "$label"
+ printf 'basename\t%s\n' "$(basename "$file_path")"
+ printf 'size\t%s\n' "$size"
+ printf 'raw_sha256\t%s\n' "$raw_sha"
+ printf 'normalized_pe_sha256\t%s\n' "$normalized_sha"
+ cat "$pe_info_path"
+ } > "$out_file"
+}
+
+compare_normalized_payloads() {
+ local local_normalized="$1"
+ local release_normalized="$2"
+ local diff_out="$3"
+
+ # The normalization above removes only the explicit Authenticode envelope.
+ # This is the part that checks every remaining byte from the shipped release against the local reproducible build. 
+ if cmp -s "$local_normalized" "$release_normalized"; then
+ return 0
+ fi
+
+ {
+ echo "First differing normalized bytes:"
+ # cmp reports 1-based byte offsets and octal byte values.
+ # Keeping the first handful of differences is enough to see whether the mismatch is header-adjacent or deep in the installer payload.
+ cmp -l "$local_normalized" "$release_normalized" | head -n 40
+ } > "$diff_out" || true
+ return 1
+}
+
+main() {
+ parse_verify_args "$@"
+ [[ -n "$VERIFY_RELEASE_PATH" ]] || die "--release is required"
+ ensure_verify_tools
+
+ local local_file
+ local_file="$(resolve_local_file)"
+
+ local tmp_dir
+ tmp_dir="$(mktemp -d)"
+ VERIFY_TMP_DIR="$tmp_dir"
+ if [[ "$VERIFY_KEEP_TEMP" -eq 0 ]]; then
+ trap cleanup_verify_tmp_dir EXIT
+ fi
+
+ # The release side is the signed, timestamped artifact someone downloaded;
+ # the local side is the unsigned artifact produced by a reproducible build.
+ local signing_report="${tmp_dir}/signtool-verify.txt"
+ verify_release_signing_policy "$VERIFY_RELEASE_PATH" > "$signing_report"
+
+ local local_normalized_file="${tmp_dir}/local.normalized.exe"
+ local release_normalized_file="${tmp_dir}/release.normalized.exe"
+ local local_pe_info="${tmp_dir}/local.pe-info.txt"
+ local release_pe_info="${tmp_dir}/release.pe-info.txt"
+ # Materialize normalized files before comparison instead of relying only on a hash.
+ # That keeps the byte-for-byte claim concrete, and --keep-temp lets an auditor inspect the exact files compared. 
+ normalize_pe_file "$local_file" "$local_normalized_file" "$local_pe_info" "unsigned" + normalize_pe_file "$VERIFY_RELEASE_PATH" "$release_normalized_file" "$release_pe_info" "signed" + + local local_inv="${tmp_dir}/local.inventory.txt" + local release_inv="${tmp_dir}/release.inventory.txt" + write_artifact_inventory "local" "$local_file" "$local_normalized_file" "$local_pe_info" "$local_inv" + write_artifact_inventory "release" "$VERIFY_RELEASE_PATH" "$release_normalized_file" "$release_pe_info" "$release_inv" + + local normalized_diff="${tmp_dir}/normalized-byte-diff.txt" + if ! compare_normalized_payloads "$local_normalized_file" "$release_normalized_file" "$normalized_diff"; then + diff -u "$local_inv" "$release_inv" || true + cat "$normalized_diff" >&2 + echo "" + echo "Windows release verification FAILED" + echo "- Local unsigned artifact : $local_file" + echo "- Release input : $VERIFY_RELEASE_PATH" + if [[ "$VERIFY_KEEP_TEMP" -eq 1 ]]; then + echo "- Temp dir : $tmp_dir" + fi + exit 1 + fi + + local release_sha release_normalized + release_sha="$(awk -F'\t' '$1 == "raw_sha256" { print $2; exit }' "$release_inv")" + release_normalized="$(awk -F'\t' '$1 == "normalized_pe_sha256" { print $2; exit }' "$release_inv")" + + echo "Windows release verification PASSED" + echo "- Local unsigned artifact : $local_file" + echo "- Release input : $VERIFY_RELEASE_PATH" + echo "- Signer : $VERIFY_EXPECTED_SUBJECT" + echo "- Signer cert SHA1 : $VERIFY_EXPECTED_CERT_SHA1" + echo "- Release SHA256 : $release_sha" + echo "- Normalized PE SHA256 : $release_normalized" + echo "- Byte comparison : all normalized bytes match" + if [[ "$VERIFY_KEEP_TEMP" -eq 1 ]]; then + echo "- Temp dir : $tmp_dir" + fi +} + +main "$@"
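+
+# Example invocation (the download path below is hypothetical; --release is required by
+# main above, and --keep-temp preserves the temp dir so the normalized files and the
+# signtool report can be audited afterwards):
+#
+#   ./verify_windows_release.sh --release ~/Downloads/secluso-deploy-setup.exe
+#   ./verify_windows_release.sh --release ~/Downloads/secluso-deploy-setup.exe --keep-temp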