macOS Fleet Inventory with Jamf Pro and Azure Log Analytics

Introduction

When I joined one of my previous teams, I was working through my onboarding backlog in Jira and one ticket immediately stood out. On paper it looked like a small bugfix, but it was actually part of an internal EDR/endpoint bootstrap pipeline that tied Jamf Pro, Azure Log Analytics, and our SOC 2 reporting together. The engineer who originally owned the macOS inventory piece had already left, so the context walked out the door with them.

“Our in house macOS script posts device posture data into Azure for SOC 2 reporting. It’s breaking and needs to be fixed before ($COMPANY) SOC 2 audit. Connect with auditing/security to get the rundown.”

In my head, this automatically translated to a quick cleanup: fix a couple of paths, rename some variables, wire it into a Jamf policy, rerun it, and call it done.

What I actually inherited was a dev-only script that worked on a subset of Macs, silently failed on the rest, and sat in the middle of a device posture pipeline that security and auditors already assumed was reliable.

This write-up walks through how I turned that script into a production ready, architecture-aware reporting workflow that keeps our Jamf-managed macOS fleet visible inside Azure Log Analytics alongside our Windows devices, with consistent schema, predictable behavior, and enough observability to trust it during an audit.

Problem Statement and Constraints

The business requirement was straightforward. Security and auditors needed a unified device inventory view in Azure that included both Windows and macOS. For each endpoint, they cared about attributes like OS version, hardware model, serial number, and a small set of compliance related fields.

A natural question here is: why not just lean on an EDR or a heavyweight endpoint platform for this data? Tools like CrowdStrike, Defender for Endpoint, Intune device compliance, or other full EDR stacks can absolutely push rich telemetry into SIEMs and data lakes. In our case, I learned those conversations were either still in flight, already scoped for a different use case, or not financially realistic for the scale of what we were trying to do. We needed something that worked with the stack we had that quarter: Jamf Pro, Azure, and an existing Logic App and schema, without waiting on a new platform evaluation or license cycle.

At this point in the project, tools like Workbrew were also on the table. Workbrew sits on top of Homebrew and turns it into a secure software delivery platform, with an agent and console that let you standardize, audit, and remotely manage brew packages across a fleet. It is basically a control plane for Homebrew at enterprise scale, with policy enforcement, inventory, and remote execution built in, and a no brainer when it comes to dealing with Homebrew in your org. We spun up a sandbox instance and tested it in a dev environment, and it fit nicely with how we were already thinking about developer tooling and MDM. From a management perspective, using something like Workbrew to keep Homebrew and Homebrew packages in line would have been the cleanest option.

Budget constraints took that path off the table for the moment, though. Instead of buying a managed Homebrew control plane, I had to treat this as an engineering problem. The constraint became the design: finish and build a reporting focused automation that we fully own, that runs from Jamf Pro, and that gives Azure the same quality of device inventory signal without adding a new platform to the bill.

Windows was already covered. Agents on that side of the house sent structured data into Azure using an existing schema. macOS was the missing half. The inherited script was supposed to close that gap by running from Jamf Pro and POSTing JSON into an Azure Logic App that wrote to Log Analytics.

When I picked it up, the script had some clear issues. It effectively supported a single architecture, so part of the fleet never reported in at all. It produced JSON that mostly matched the schema, but types and null handling were inconsistent. It also never surfaced the final payload anywhere operators actually look when debugging, which made it hard to trust.

All of this was happening while we were already in SOC 2 conversations. That meant I had to finish what someone else started.

Requirements and Design Goals

Before changing anything, I reframed the work into explicit goals.

Functionally, we needed:

  - Every Jamf managed Mac, Intel and Apple Silicon alike, reporting into the same Log Analytics table as the Windows fleet.
  - Payloads that match the existing Azure schema, with stable types and explicit fallbacks for missing values.
  - A run cadence aligned with Jamf inventory check in, so the data stays reasonably fresh.

From an engineering side, I wanted:

  - Explicit failures instead of silent ones, surfaced in the Jamf policy status.
  - The final payload logged where operators actually look when debugging.
  - Dependency handling that is safe and minimal, so new machines join the pipeline without manual prep.

Non goals for this first iteration were equally important. I was not trying to build a full telemetry pipeline, a real time health monitor, or a remediation engine. The scope was inventory and compliance relevant attributes, delivered reliably.

High Level Architecture

The final design keeps responsibilities simple.

Jamf Pro is the orchestrator. A policy runs on a schedule that roughly aligns with inventory check in. That policy executes the reporting script locally on each scoped macOS device.

On the device, the script:

  1. Detects hardware architecture and chooses the correct tool paths.

  2. Verifies that Homebrew exists in that path, and verifies or installs jo.

  3. Collects a defined set of system attributes.

  4. Normalizes those attributes into predictable types and formats.

  5. Builds a JSON object that matches the Azure schema.

  6. Logs the JSON payload into the Jamf policy output.

  7. Sends the payload via HTTP POST to an Azure Logic App endpoint.

In Azure, the Logic App:

  - Receives the HTTP POST from each device.
  - Checks that the payload matches the expected schema.
  - Writes the record into the Log Analytics workspace, alongside the Windows data.

For consumers, there is one source of truth: a Log Analytics table where both Windows and macOS devices show up with the same core shape, ready to be queried with KQL or exported to CSV during audits to share with the right team members.

Key Implementation Details

The schema already existed on the Azure side, so part of this project was matching it cleanly instead of inventing a new one.

The bash script focuses on a small but intentional set of fields, including:

  - Identity: hostname, logged in user, email, and serial number.
  - Hardware: model, manufacturer, RAM, CPU manufacturer, name, physical cores, logical cores, and total and free storage.
  - Software and state: OS name and version, BIOS version and date, uptime, last boot, and last contact.
  - Management context: ManagedBy and JoinType, so consumers know the record came through Jamf.

Timestamps use a consistent representation: ISO 8601 in UTC. This applies to both LastBoot and LastContact. If boot time cannot be determined, the script sends an explicit fallback string instead of an empty value.
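
As a sketch of that normalization, assuming `sysctl kern.boottime` as the boot time source (which is how macOS exposes it); the exact parsing here is illustrative, not lifted from the original script:

```shell
# kern.boottime output looks like: { sec = 1700000000, usec = 0 } Thu Nov ...
# Pull the epoch seconds out, then format in ISO 8601 UTC.
boot_epoch=$(sysctl -n kern.boottime 2>/dev/null | awk '{print $4}' | tr -d ',')

if [[ "$boot_epoch" =~ ^[0-9]+$ ]]; then
    # BSD date on macOS: -r takes an epoch, -u forces UTC
    boot_time_date=$(date -u -r "$boot_epoch" +"%Y-%m-%dT%H:%M:%SZ")
else
    # Explicit fallback string rather than an empty value
    boot_time_date="Unknown"
fi

# LastContact is simply "now", in the same ISO 8601 UTC shape
LastContact=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

echo "LastBoot=${boot_time_date} LastContact=${LastContact}"
```

The fallback branch is what keeps the Azure side predictable: a consumer can filter on the literal string instead of guessing what an empty field means.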

The rule of thumb is that every field that Azure expects has a stable type and a defined behavior for edge cases. That makes KQL queries and dashboards far more reliable, especially under audit conditions.

Here’s an example:

bash
Data=$($JO_PATH \
  Endpoint="$(hostname)" \
  UserName="$realname" \
  Email="${loggedInUser}@COMPANY.com" \
  ManagedBy="Jamf Pro" \
  JoinType="Jamf Connect" \
  Model="$Model" \
  Manufacturer="$Manufacturer" \
  UpTime="$UpTime" \
  LastBoot="$boot_time_date" \
  LastContact="$LastContact" \
  InstallDate="-" \
  Serial="$Serial" \
  BiosVersion="$BiosVersion" \
  BiosDate="$BiosDate" \
  RAM="$RAM" \
  OSVersion="$OSVersion" \
  OSName="macOS" \
  CPUManufacturer="$CPUManu" \
  CPUName="$CPUName" \
  CPUCores="$CPUCore" \
  CPULogical="$CPULogical" \
  StorageTotal="$StorageTotal" \
  StorageFree="$StorageFree")

If you have not used jo before, it is a small utility that builds JSON from the command line:

Examples:

jo name="MacBook Pro" serial="C02XXXXXXX" OSVersion="14.5"

Outputs:

{
  "name": "MacBook Pro",
  "serial": "C02XXXXXXX",
  "OSVersion": "14.5"
}

API Payloads:

jo endpoint="/api/v1/users" method="POST" body=$(jo user=$(jo name=Alice role=admin))

Outputs:

{"endpoint":"/api/v1/users","method":"POST","body":{"user":{"name":"Alice","role":"admin"}}}

Configuration Files:

jo logging=$(jo level=info file="/var/log/app.log") database=$(jo host=localhost port=5432) > config.json

Terraform External Data Sources:

jo compute_instances=$(jo -a $(jo name=web size=t3.medium) $(jo name=db size=r5.large))

You get the idea. The consistency and reliability of jo make it superior to manual JSON construction, especially in shell scripts, where string escaping errors are common.

Behavior on the Endpoint

Most of the interesting technical work happens on the device, inside the script. Even without dropping code here, it is useful to think of it as a small agent with a clear lifecycle.

Architecture detection

The first step is to detect whether the machine is Intel or Apple Silicon. This drives two things. The path to Homebrew and jo, and the commands used to retrieve CPU details.

bash
ARCHITECTURE=$(uname -m)

if [[ "$ARCHITECTURE" == "arm64" ]]; then
    BREW_PATH="/opt/homebrew/bin/brew"
    JO_PATH="/opt/homebrew/bin/jo"
elif [[ "$ARCHITECTURE" == "x86_64" ]]; then
    BREW_PATH="/usr/local/bin/brew"
    JO_PATH="/usr/local/bin/jo"
else
    echo "Unsupported architecture found. Exiting script.."
    exit 1
fi

The script uses a single architecture flag to choose the correct paths and later to choose the right system commands for CPU fields. Any unknown architecture results in an explicit exit, not a best effort guess.

Dependency handling

Once the architecture is known, the script checks for Homebrew at the expected path. If it is missing, that is treated as a provisioning issue and the script exits. This avoids silently trying to install or alter the system in ways that are out of scope.

bash
if [[ ! -x "$BREW_PATH" ]]; then
    echo "Exiting.. Homebrew was not found at $BREW_PATH"
    exit 1
fi

if [[ ! -x "$JO_PATH" ]]; then
    echo "jo is not found at $JO_PATH. Installing jo..."
    "$BREW_PATH" install jo
else
    echo "jo is already installed on the device. Continuing with the script..."
fi

If Homebrew exists but jo is missing, the script installs jo and continues. That gives the system a useful self healing behavior for a small, safe dependency. New machines do not require manual prep work to join the reporting pipeline.

Data collection and normalization

With tools in place, the script collects the data model described earlier. It uses standard macOS utilities to read things like OS version, model identifier, serial number, memory, storage, uptime, and boot time.

The architecture flag is used again when extracting CPU information, because Apple Silicon and Intel expose some details differently. The end result is a consistent set of CPU fields regardless of which platform produced them.
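
A hedged sketch of that branching (the sysctl keys are real macOS keys, but the fallback handling is illustrative, not the original script's):

```shell
# CPU fields, branched on architecture. Apple Silicon does not expose
# machdep.cpu.vendor, so the manufacturer is set directly; the "|| echo -"
# fallbacks keep every field populated with a stable value.
ARCHITECTURE=$(uname -m)

if [[ "$ARCHITECTURE" == "arm64" ]]; then
    CPUManu="Apple"
else
    CPUManu=$(sysctl -n machdep.cpu.vendor 2>/dev/null || echo "-")   # e.g. GenuineIntel
fi

CPUName=$(sysctl -n machdep.cpu.brand_string 2>/dev/null || echo "-")
CPUCore=$(sysctl -n hw.physicalcpu 2>/dev/null || echo "-")
CPULogical=$(sysctl -n hw.logicalcpu 2>/dev/null || echo "-")

echo "CPU: ${CPUManu} ${CPUName} (${CPUCore} cores, ${CPULogical} logical)"
```

Whichever branch runs, the payload ends up with the same four CPU fields, which is what keeps the KQL side architecture agnostic.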

After values are collected, the script normalizes them. That includes:

  - Converting timestamps to ISO 8601 in UTC.
  - Casting numeric fields like RAM, core counts, and storage into stable formats.
  - Substituting explicit fallback strings for values that cannot be determined, instead of sending empty or missing fields.
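
For instance, memory normalization might look like this. hw.memsize is the macOS key for physical memory in bytes; the GB formatting is an illustrative choice, not necessarily the original script's:

```shell
# Normalize raw memory bytes into a stable, human readable field.
ram_bytes=$(sysctl -n hw.memsize 2>/dev/null || echo 0)
ram_bytes=${ram_bytes:-0}
RAM="$(( ram_bytes / 1024 / 1024 / 1024 )) GB"
echo "RAM=${RAM}"
```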

Logging and HTTP POST

Before sending anything to Azure, the script logs the final JSON payload into the Jamf policy output. This is intentional. It allows operators to reconstruct exactly what was sent from the Jamf console alone, without instrumenting the script further.

After logging, the script performs an HTTP POST to the Azure Logic App endpoint with the JSON as the body. Any non zero exit status can be surfaced in Jamf so the policy shows up as failed rather than silently passing.
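
A minimal sketch of that final step, assuming curl and a placeholder Logic App URL (the function name and error messages are hypothetical; the real endpoint was configured in the script):

```shell
# Echo lands the JSON in the Jamf policy log, then curl POSTs it.
# --fail turns HTTP errors into a non-zero exit, which Jamf surfaces
# as a failed policy instead of a silent pass.
post_payload() {
    local payload="$1"
    local url="$2"

    echo "Payload being sent to Azure: ${payload}"

    if ! curl --fail --silent --show-error \
            --header "Content-Type: application/json" \
            --data "${payload}" \
            "${url}"; then
        echo "POST to Logic App failed." >&2
        return 1
    fi
}

# Example (placeholder URL):
# post_payload "$Data" "https://prod-00.westus.logic.azure.com/workflows/..."
```

Logging before sending matters for ordering: even if the POST fails, the payload that was attempted is already in the Jamf policy output.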

Deployment with Jamf Pro

From a deployment point of view, the script is just another Jamf policy payload. The important pieces are scope, timing, and rollout strategy.

The policy is configured to run on a regular cadence, aligned with inventory check in. That keeps the device inventory view in Azure reasonably fresh without hammering the Logic App unnecessarily.

For rollout, I used a phased approach:

  1. A small pilot group of known test machines, watched closely in Jamf logs and the Logic App run history.
  2. A wider ring that covered both Intel and Apple Silicon hardware.
  3. The full scoped fleet, once the data landing in Log Analytics looked correct and complete.

This gave a safe path to production without surprising the fleet or the audit team.

Security and Compliance Considerations

All communication from devices to Azure uses HTTPS. The payload focuses on device inventory and identity level attributes, not secrets or highly sensitive user content.

The Log Analytics table schema is treated as a contract. Changes to the field set or types are versioned and reviewed before being rolled out. The script does not attempt to dynamically shape its payload based on environment conditions.

Access to the resulting data is controlled in Azure. The script assumes that the Logic App endpoint and Log Analytics workspace are configured with proper IAM and least privilege on the cloud side.

In a future iteration, endpoint configuration such as the Logic App URL will come from a secret management solution instead of being hard coded into the script, which would further tighten the story around credentials and configuration.

Observability and Operations

To operate this in production, I treat it like a small distributed service.

On the macOS side, Jamf policy logs act as the primary local observability surface. Every execution logs the JSON payload that was about to be sent. If something looks off in Azure, I can go back to the policy logs on a specific machine and see exactly what it attempted to send.

In Azure, the Logic App provides an execution history. Failures, throttling, or schema errors show up there. The Log Analytics workspace itself is the ultimate signal for success, because it is where device records actually land.

Operationally, I pay attention to:

  - The number of devices reporting into Log Analytics versus the number scoped in Jamf.
  - Freshness of records, so devices that stop reporting stand out.
  - Logic App failure and throttling rates.
  - Schema mismatches that surface as malformed requests.

When something breaks, the usual debugging path is:

  1. Check Jamf logs for the payload and any error messages from the script.
  2. Check the Logic App run history for failed or malformed requests.
  3. Use KQL in Log Analytics to search for specific devices by serial, hostname, or email.
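
For step 3, a lookup might be driven from the az CLI. The table and column names below (DeviceInventory_CL, Serial_s, and friends) are placeholders for whatever your Logic App actually writes; custom Log Analytics tables get the _CL and _s suffixes automatically:

```shell
# Build a KQL query for one device and print the az CLI invocation.
# WORKSPACE_ID and all table/column names are assumptions, not the
# original environment's values.
serial="C02XXXXXXX"

query="DeviceInventory_CL
| where Serial_s == '${serial}'
| project TimeGenerated, Endpoint_s, OSVersion_s, LastContact_s
| top 5 by TimeGenerated desc"

echo "az monitor log-analytics query --workspace \"\$WORKSPACE_ID\" --analytics-query \"${query}\""
```

The same query pasted into the Log Analytics UI works just as well; the CLI form is handy when pulling CSV exports for auditors.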

This keeps troubleshooting grounded in actual signals instead of guesswork.

Full Script:

Future Iterations & Closing out

One direction is to enrich the payload with hardware health metrics. GPU details, battery cycles, and battery health are all easy candidates that feed into capacity planning, refresh strategy, and device lifecycle decisions.

Another direction is to fold in Apple Business Manager data. With the newer Apple School & Business Manager API endpoints, you can pull ABM attributes (enrollment status, device assignment, ownership details) and join them to this telemetry. That would give you a unified view that spans both the physical device state on macOS and its record of truth in ABM.

Similarly, the script could incorporate more security posture signals. Fields such as FileVault status, secure boot configuration, or EDR agent presence could be added to the same schema and surfaced in the same Log Analytics workspace, which would give security a more complete view without new tools. Building on that, a Slack alerting channel could ingest failures for added visibility.

There is also room to improve configuration management of the endpoint itself by pulling URLs and shared secrets from a secret store rather than embedding them in the script.

None of these require a rewrite. They all reuse the same pattern. Each macOS device emits a small, well defined JSON event, and the cloud treats that event as a first class signal for reporting and automation.

What started as a fragile, one off script became a small, dependable piece of the compliance pipeline. The biggest shift was treating it less like a utility and more like a service that happens to run on endpoints. Once architecture, dependencies, schema, and observability were treated as first class concerns, the rest of the work fell into place. The end result is simple from the outside. Security and auditors query a single Log Analytics table and see Windows and macOS devices side by side.