Giving OpenClaw Secure Access to Cloud Services Without Sharing Your Password

Device Code Flow has been built into OAuth2 for years, originally designed for TVs and game consoles. It works just as well for a Docker container. It requires no credentials stored on the agent machine. It gives you narrow, revocable access that the agent cannot exceed.

I have OpenClaw deployed in a sandboxed Docker container on a dedicated host. I communicate with it via a Telegram bot, in a locked-down chat.

I wired the device-code authentication pattern up with OpenClaw and Microsoft (Personal/Consumer) services, but the approach works with any app and any command-line tool you want to run in a headless environment, including custom APIs. Device code flow plus Entra ID app registrations gives you a zero-stored-credentials gateway to any HTTP API you can build, not just Microsoft Graph.

What Is Device Code Flow?

OAuth2 device code flow (RFC 8628) was designed for “input-constrained devices” — things without a keyboard or a browser. Think of how you sign in to Netflix on a smart TV: a short code appears on screen, you visit a URL on your phone, you type the code in, and the TV logs in. You never type your password on the TV.

The official name for that pattern is the device authorization grant. A Docker container is, from the protocol’s perspective, exactly the same kind of device. It has no browser. It cannot perform an interactive redirect. But it can make HTTP requests, and that is all it needs.

The flow has three steps:

1. The app posts to the identity provider’s device code endpoint. It gets back a short user code (like “WDJB-MJHT”), a verification URL, and a polling device code.

2. The user visits the URL on any browser, on any device — their phone, laptop, anything — and enters the short code. They sign in normally, with their usual credentials and MFA if it is enabled.

3. The app polls the token endpoint in the background. Once the user finishes signing in, the poll returns a real access token and a refresh token. The app stores these and uses them for API calls going forward.

The app never sees the password. The password goes directly from the user’s browser to the identity provider. The app only ever handles two things: a short temporary code that it sends to the user, and a token that the identity provider gives back once the user has authenticated.
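To make step 1 concrete, the device code endpoint returns a small JSON document. The fields below follow RFC 8628 and Microsoft's implementation; the values are illustrative only:

```python
# Illustrative shape of the device code response (values are made up,
# apart from Microsoft's real verification URL).
device_flow = {
    "device_code": "GmRhm...",                      # opaque; the app keeps this for polling
    "user_code": "WDJB-MJHT",                       # the short code shown to the user
    "verification_uri": "https://microsoft.com/devicelogin",
    "expires_in": 900,                              # seconds until both codes expire
    "interval": 5,                                  # minimum seconds between polls
    "message": "To sign in, use a web browser to open "
               "https://microsoft.com/devicelogin and enter the code WDJB-MJHT.",
}
```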

OpenClaw running in a Docker container fits this model exactly.

Which Services Support This?

Device code flow is not a Microsoft-only feature. It is a standard OAuth2 extension (RFC 8628) and most major identity providers support it — including Microsoft, Google, GitHub, and AWS. If a service uses Okta or Auth0 for identity, those support it too.

The Security Architecture

The agent never sees your credentials. Your password is typed in your browser, on your device, to your identity provider’s servers. It never touches the container. The agent only handles a short temporary code to give you, and a token issued by the provider once you have authenticated.

The device code is one-time and short-lived. After the user authenticates, the code is permanently invalidated. An intercepted code is useless without the user’s credentials and MFA.

Scopes define the ceiling. You configure the OAuth app or app registration to request only specific permissions. The agent cannot exceed those scopes. If you configure read-only access to email, the token cannot be used to send or delete email.
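For example, with a Microsoft Graph app registration the scope string might look like the following. This is an illustration, not the configuration from this post; substitute whatever permissions your provider exposes:

```python
# Read-only mail and calendar access. "offline_access" asks for a refresh
# token so the agent keeps working after the access token expires.
# "Mail.Send" is deliberately absent: the resulting token cannot send email.
OAUTH_SCOPES = "offline_access Mail.Read Calendars.Read"
```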

This is the authentication pattern that many of the consumer devices you already own use to access your accounts on your behalf. Your TV’s YouTube app, your smart home hub’s Google integration, the GitHub CLI you use on your workstation — these all use device flow. You have been trusting it for years without knowing what it was called.

How the Implementation Works

There are three components: two run as separate Docker services (the gateway reaches into the sidecar through the mounted Docker socket), and the third is a small script that lives inside the sidecar.

The app sidecar is your application container. It runs your CLI tool and your auth logic. It does nothing on its own. Its only job is to hold the authenticated token cache and execute commands on demand when OpenClaw calls into it.

OpenClaw is the AI agent container. It connects to Telegram, understands natural language, and knows which skills to run for which requests. It does not know anything about OAuth or your specific app. It just runs the shell commands you specify and reports the results back to you.

The Telegram auth bridge is a small script that lives inside the sidecar. It registers a device code callback, requests an auth flow, and uses the Telegram Bot API to forward the code to you — then confirms when authentication completes.

The compose file looks roughly like this:

```yaml
services:
  myapp-sidecar:
    image: ghcr.io/youruser/myapp:latest
    command: ["sleep", "infinity"]
    environment:
      - XDG_DATA_HOME=/data
      - TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
      - TELEGRAM_CHAT_ID=${TELEGRAM_CHAT_ID}
    volumes:
      - ${APP_DATA_DIR:-./app-data}:/data
    restart: unless-stopped

  openclaw-gateway:
    build:
      context: .
      dockerfile: Dockerfile.openclaw
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./skills:/app/skills
    group_add:
      - "${DOCKER_GID:-999}"
    environment:
      - TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
    ports:
      - "18789:18789"
    restart: unless-stopped
```

The Callback Pattern

The auth manager has a method called set_device_code_callback. You pass it a function, and when the device code is ready, the auth manager calls your function with the code and the verification URL rather than trying to open a browser.

```python
import asyncio
import time
import webbrowser

import httpx

# OAUTH_SCOPES and AuthError are defined elsewhere in the module.

class AuthManager:
    def __init__(self, client_id: str, authority: str) -> None:
        self._client_id = client_id
        self._authority = authority.rstrip("/")
        self._on_device_code = None
        self._tokens = self._load_cache()

    def set_device_code_callback(self, fn) -> None:
        self._on_device_code = fn

    async def _device_code_auth(self) -> None:
        async with httpx.AsyncClient() as client:
            resp = await client.post(
                f"{self._authority}/oauth2/v2.0/devicecode",
                data={"client_id": self._client_id, "scope": OAUTH_SCOPES},
                timeout=30,
            )
        flow = resp.json()
        user_code = flow["user_code"]
        verification_uri = flow["verification_uri"]
        device_code = flow["device_code"]
        interval = flow.get("interval", 5)
        expires_in = flow.get("expires_in", 300)

        if self._on_device_code:
            self._on_device_code({
                "user_code": user_code,
                "verification_uri": verification_uri,
                "message": flow.get("message", ""),
            })
        else:
            try:
                webbrowser.open(verification_uri)
            except Exception:
                pass

        deadline = time.time() + expires_in
        while time.time() < deadline:
            await asyncio.sleep(interval)
            async with httpx.AsyncClient() as client:
                resp = await client.post(
                    f"{self._authority}/oauth2/v2.0/token",
                    data={
                        "client_id": self._client_id,
                        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
                        "device_code": device_code,
                    },
                    timeout=30,
                )
            body = resp.json()
            if resp.status_code == 200 and "access_token" in body:
                self._store_tokens(body)
                return
            error = body.get("error", "")
            if error == "authorization_pending":
                continue
            elif error == "slow_down":
                interval += 5
            elif error in ("authorization_declined", "expired_token"):
                raise AuthError(error)
            else:
                raise AuthError(body.get("error_description", "Auth failed"))
        raise AuthError("expired_token")  # the device code expired before sign-in completed
```

The polling loop handles the three error codes the spec defines: authorization_pending (keep waiting), slow_down (back off and increase the interval by 5 seconds), and the terminal errors that stop polling.
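Access tokens are short-lived (roughly an hour for Microsoft), so the cache also holds the refresh token, and the manager renews silently before falling back to a new device code prompt. A minimal sketch of that renewal, assuming the same token endpoint and the `_store_tokens` / `_tokens` cache used above:

```python
    async def _refresh_tokens(self) -> bool:
        """Renew the access token silently; return False if a fresh
        device code sign-in is required."""
        refresh_token = self._tokens.get("refresh_token")
        if not refresh_token:
            return False
        async with httpx.AsyncClient() as client:
            resp = await client.post(
                f"{self._authority}/oauth2/v2.0/token",
                data={
                    "client_id": self._client_id,
                    "grant_type": "refresh_token",
                    "refresh_token": refresh_token,
                    "scope": OAUTH_SCOPES,
                },
                timeout=30,
            )
        body = resp.json()
        if resp.status_code == 200 and "access_token" in body:
            # Microsoft rotates refresh tokens, so persist the new pair.
            self._store_tokens(body)
            return True
        return False
```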

The Telegram Bridge

With the callback mechanism in place, the Telegram bridge is just a script that registers a callback and uses the Telegram Bot API to deliver the code to you:

```python
import asyncio, os, sys, traceback
import httpx
from your_app.auth import AuthManager
from your_app.config import Settings

TELEGRAM_BOT_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]
TELEGRAM_CHAT_ID = os.environ["TELEGRAM_CHAT_ID"]

async def send_telegram(text: str) -> None:
    url = f"https://api.telegram.org/bot{TELEGRAM_BOT_TOKEN}/sendMessage"
    async with httpx.AsyncClient() as client:
        await client.post(url, json={
            "chat_id": TELEGRAM_CHAT_ID,
            "text": text,
            "parse_mode": "Markdown",
        }, timeout=30)

async def main() -> int:
    settings = Settings()
    auth = AuthManager(settings.client_id, settings.authority)

    if auth.is_signed_in():
        await send_telegram("Already authenticated.")
        return 0

    def on_device_code(info: dict) -> None:
        msg = (
            "*Auth Required*\n\n"
            f"Go to: {info['verification_uri']}\n"
            f"Enter code: `{info['user_code']}`\n\n"
            "Complete sign-in in your browser and I'll confirm when done."
        )
        try:
            asyncio.get_running_loop().create_task(send_telegram(msg))
        except RuntimeError:
            import requests
            requests.post(
                f"https://api.telegram.org/bot{TELEGRAM_BOT_TOKEN}/sendMessage",
                json={"chat_id": TELEGRAM_CHAT_ID, "text": msg, "parse_mode": "Markdown"},
                timeout=10,
            )

    auth.set_device_code_callback(on_device_code)
    try:
        user_info = await auth.sign_in()
        await send_telegram(f"*Signed in* as {user_info['name']} ({user_info['email']})")
        return 0
    except Exception as exc:
        await send_telegram(f"*Auth failed:* {exc}\n```{traceback.format_exc()[:800]}```")
        return 2

if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
```

The OpenClaw Skill

On the OpenClaw side, a “skill” is a markdown file. When OpenClaw sees a trigger phrase in Telegram, it runs the associated shell command. The markdown file looks like this:


---
name: myapp-auth
description: Authenticate with the cloud service via device code flow, coordinated through Telegram.
---

## myapp-auth

Use this skill when the user sends “/auth”, “authenticate”, or “sign in”.

Run the following command and wait up to 360 seconds for it to complete.
The script sends the device code to the user via Telegram and confirms when done.

    docker exec myapp-sidecar-1 python /app/scripts/telegram_auth.py

Summary

Device code flow is a mature, widely supported OAuth2 pattern that maps naturally onto OpenClaw running in a Docker container. With this pattern, OpenClaw never handles your credentials. It gets scoped, revocable tokens from the identity provider, after you have authenticated in your own browser. A Telegram bot is all you need to coordinate the handoff of the temporary code.

The approach works for many use-cases across Microsoft accounts, Google, GitHub, AWS, Auth0, Okta and others. It supports exactly the kinds of personal automations that make an OpenClaw genuinely useful in daily life.

SPCHR: Speech To Text App for Windows

I’m excited to announce the release of SPCHR. Pronounced as “speaker”. I know, I know. My apologies, it was too late at night and that’s the best I could come up with 😉

  • It is a Windows desktop application for speech-to-text transcription.
  • You can use it to “add” voice input to applications that don’t natively support it. It works anywhere you can type or paste text.
  • You can use it entirely locally on your PC if you like, so it is private and secure.

Key Features

  • Flexible: SPCHR can use either Azure Speech Services or a local OpenAI Whisper model, giving you the flexibility to choose between cloud-based and local transcription.
  • Zero Configuration: The application works immediately with local Whisper processing – no account setup required!
  • Global Hotkey: Start and stop recording from any application with Ctrl+Alt+L.
  • Seamless Integration: Transcribed text is automatically pasted into your active window.

Getting Started

  • Clone the repository
  • Build the solution using Visual Studio and run it
  • Press Ctrl+Alt+L and start speaking
  • Watch as your words appear in your active window
  • For those wanting cloud-based transcription, simply add your Azure Speech Services credentials to the configuration file.

Open Source

SPCHR is released under the MIT License, and I welcome contributions from the community. Whether you’re interested in adding features, fixing bugs, or improving documentation, your help is appreciated.

Check out the GitHub repository.

What’s next

This is just the beginning for SPCHR. Several improvements are planned:

  • Additional language support
  • Customizable hotkeys
  • Installer
  • UI Enhancements
  • AI Enhanced Features

Try it out

Ready to transform how you interact with your computer? Visit the GitHub repository to get started with SPCHR today.

I’m excited to see how you’ll use SPCHR in your workflow and look forward to your feedback!

Kusto Queries on AKS Clusters

The Kusto query language can be used to get insights into Azure Kubernetes Service (AKS) clusters. Container insights, if enabled for a cluster, collects data from AKS clusters and forwards it to a Log Analytics workspace. This data is then available for querying in Azure Monitor. Here is an example of how you can query for pods that are not in the Running state in specific namespaces.

KubePodInventory 
| where Namespace in ("dv","test","prod") 
| where ContainerStatus != "Running" 
| where ContainerStatusReason !in ("", "Completed") 
| distinct Namespace, Name 
(Screenshot: the query in Azure Monitor Logs; the results list non-running pods by Namespace and Name, e.g. pod app2 in the dv and prod namespaces.)

The following query includes the name of the AKS cluster and renders the output as a stacked bar chart.

KubePodInventory 
| where Namespace in ("dv","test","prod") 
| where ContainerStatus != "Running" 
| where ContainerStatusReason !in ("", "Completed") 
| distinct ClusterName, Namespace, Name 
| summarize dcount(Name) by ClusterName, Namespace 
| render columnchart kind=stacked100 
(Screenshot: the query in Azure Monitor Logs rendered as a 100% stacked column chart of distinct pod counts by ClusterName and Namespace.)

You can include multiple AKS clusters in the scope of this query by clicking the [Select scope] hyperlink.


You can create an Azure dashboard panel with this output by clicking the [Pin to dashboard] button.

(Screenshot: the chart pinned as a panel on an Azure dashboard.)

You can also execute this Kusto query directly using PowerShell.

$workspaceName = "DefaultWorkspace-6637b095-xxxx-xxxx-xxxx-xxxxxxxxxxx-EUS" 
$workspaceRG = "defaultresourcegroup-eus" 
$WorkspaceID = (Get-AzOperationalInsightsWorkspace -Name $workspaceName -ResourceGroupName $workspaceRG).CustomerID 

$query = 'KubePodInventory | where Namespace in ("dv","test","prod") | where ContainerStatus != "Running" | where ContainerStatusReason !in ("", "Completed") | distinct ClusterName, Namespace, Name | summarize dcount(Name) by ClusterName, Namespace' 

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $WorkspaceID -Query $query -Timespan (New-TimeSpan -days 1) 
$result.results 

This allows you to include the results of your custom Kusto queries in any reports you might run using Azure Automation Runbooks.
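Azure Automation also supports Python runbooks, so the same query can be run from Python with the azure-monitor-query package. A rough sketch, assuming a credential with reader access to the workspace (the package and class names are from the current Azure SDK, not from this post):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

query = """KubePodInventory
| where Namespace in ("dv","test","prod")
| where ContainerStatus != "Running"
| where ContainerStatusReason !in ("", "Completed")
| distinct ClusterName, Namespace, Name
| summarize dcount(Name) by ClusterName, Namespace"""

response = client.query_workspace(
    workspace_id="<Log Analytics workspace ID>",  # same value as $WorkspaceID above
    query=query,
    timespan=timedelta(days=1),
)
if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(dict(zip(table.columns, row)))
```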

(Screenshot: the commands above run in Azure Cloud Shell; $result.results lists ClusterName, Namespace and dcount_Name, e.g. aksdemo1/dv/1, aksdemo1/prod/1 and aksdemo2/test/1.)

AKS – Adding SSH Keys to VMSS Nodes

You can connect to Azure Kubernetes Service (AKS) nodes using SSH. It is documented here: Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting.

I needed to access nodes on the System node pool for collecting some logs recently, but the process documented above was not working for me. It turns out that there were two different issues, both related to adding your SSH keys to the nodes in a virtual machine scale set (VMSS).

This is the az cli command that adds the ssh keys to the VMSS:

az vmss extension set  \
    --resource-group $CLUSTER_RESOURCE_GROUP \
    --vmss-name $SCALE_SET_NAME \
    --name VMAccessForLinux \
    --publisher Microsoft.OSTCExtensions \
    --version 1.4 \
    --protected-settings "{\"username\":\"azureuser\", \"ssh_key\":\"$(cat ~/.ssh/id_rsa.pub)\"}"

I was running this command from PowerShell on a Windows host, so the first modification I needed was to escape the double quotes by doubling them. You can also use backticks instead of doubling the double quotes.

az vmss extension set  \
    --resource-group $CLUSTER_RESOURCE_GROUP \
    --vmss-name $SCALE_SET_NAME \
    --name VMAccessForLinux \
    --publisher Microsoft.OSTCExtensions \
    --version 1.4 \
    --protected-settings "{\""username\"":\""azureuser\"", \""ssh_key\"":\""$(cat ~/.ssh/id_rsa.pub)\""}"

The second issue was due to the fact that ssh-keygen adds a comment to the id_rsa.pub file. By default, it adds username@hostname as the comment at the end of the file. Normally this comment wouldn’t cause any problems, but in this case it was causing an error: “Invalid escape sequence \uXXXX”.

The problem was that my domain user name starts with the character ‘u’, so “domain\userid1” in the comment was being interpreted as containing the unicode escape “\useri”, which is not valid (the four characters after “\u” must be hex digits). The workaround is either to delete the comment from the id_rsa.pub file (the key works just fine without it) or to override the default comment when generating the SSH key by specifying your own via the -C command line option.

ssh-keygen -C mycomment
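For the curious, the root cause is JSON parsing: the protected settings are sent as a JSON document, and in JSON a backslash followed by `u` starts a `\uXXXX` escape that must be four hex digits. A small Python sketch reproduces the same class of error (the key material is elided):

```python
import json

# The ssh_key value ends with the default ssh-keygen comment "domain\userid1".
json.loads(r'{"username": "azureuser", "ssh_key": "ssh-rsa AAAA... domain\userid1"}')
# -> json.decoder.JSONDecodeError: Invalid \uXXXX escape
```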

Finally, there is another way to execute the az vmss extension set command that gets around both of these issues: create a JSON file and pass it as the value of the --protected-settings argument. Here is how:

  • Create a json file, protected_settings.json, with the following content:
{
	"username" : "azureuser",
	"ssh_key" : "REPLACE_THIS_WITH_CONTENT_OF_ID_RSA_PUB_FILE"
}
  • Pass this file as the command line argument:
az vmss extension set  \
    --resource-group $CLUSTER_RESOURCE_GROUP \
    --vmss-name $SCALE_SET_NAME \
    --name VMAccessForLinux \
    --publisher Microsoft.OSTCExtensions \
    --version 1.4 \
    --protected-settings protected_settings.json

Works like a charm!

AKS Supported Kubernetes Versions

Azure Kubernetes Service (AKS) supports specific versions of Kubernetes.
It is necessary to regularly monitor the release of new versions and upgrade your AKS clusters to supported versions in order to remain in compliance with the AKS Kubernetes Version Support Policy.

AKS announces the planned date of a new minor version release and corresponding old version deprecation via AKS Release notes at least 30 days prior to removal. An email notification is sent to the subscription administrators with the planned version removal dates. You get 30 days from version removal to upgrade to a supported minor version release. Patch versions can be released anytime and you get 30 days from the removal date to upgrade to a supported patch version.

You should test new target Kubernetes versions and upgrade your AKS clusters in a timely manner. For that, it is necessary to proactively monitor the AKS release notes, which can easily become a chore in a large enterprise environment. Here is a PowerShell script that makes it easier to stay on top of Kubernetes version releases in AKS and to publish/share the results with others: Get-AksSupportedVersions.

$aksVersionsJson = az aks get-versions --location eastus

$aksVersions = $aksVersionsJson | ConvertFrom-Json
$aksVersions.orchestrators.upgrades.orchestratorVersion 

$data = $aksVersions.orchestrators | Select-Object `
    -Property @{Name="Version";Expression={$_.orchestratorVersion}} `
            , @{Name="Default";Expression={$_.default}} `
            , @{Name="Preview";Expression={$_.isPreview}} `
            , @{Name="Upgrades";Expression={$_.upgrades.orchestratorVersion -join ", "}}

$versionTable = $data | ConvertTo-Html -Fragment 
$versionTableString = $versionTable | Out-String
$html = New-Object -ComObject "HTMLFile"
$html.IHTMLDocument2_write($versionTableString)
$tables = $html.body.getElementsByTagName("table")

$rptString = ""
ForEach($table in $tables){
    ForEach($row in $table.rows){
            $cellCount = 0
            ForEach($cell in $row.cells){
                $cellCount++
                if(($cellCount -eq 2) -and ($cell.innertext -eq 'True'))
                {
                    $row.className = "OkStatus"
                }
                if(($cellCount -eq 3) -and ($cell.innertext -eq 'True'))
                {
                    $row.className = "WarningStatus"
                }
        }
    }
    $rptString += $table.outerHTML
}

$reportDate = $(get-date -DisplayHint DateTime) | Out-String
$fileTS = $(get-date -Format "yyyyMMdd") 
$fileName = "aks-versions-$fileTS.html"

$rptTitle = "<h1>AKS Kubernetes Versions</h1><p>$reportDate</p>"
# $header holds the CSS for the report (example styling; the OkStatus and
# WarningStatus classes are the ones assigned to table rows above).
$header = @"
<style>
table { border-collapse: collapse; }
th, td { border: 1px solid #999999; padding: 4px 8px; }
tr.OkStatus { background-color: #c6efce; }
tr.WarningStatus { background-color: #ffeb9c; }
</style>
"@
$report = ConvertTo-Html -Title "AKS Versions" -Body "$rptTitle $rptString" -Head $header
$report | Out-File "$fileName"

It creates an HTML report of currently supported Kubernetes versions in AKS along with their respective upgrade paths. Here is an example of the report generated by the script referenced above :

AKS Kubernetes Versions

Sunday, November 8, 2020 8:47:42 PM

| Version | Default | Preview | Upgrades |
|---------|---------|---------|----------|
| 1.16.13 |         |         | 1.16.15, 1.17.9, 1.17.11 |
| 1.16.15 |         |         | 1.17.9, 1.17.11 |
| 1.17.9  |         |         | 1.17.11, 1.18.6, 1.18.8 |
| 1.17.11 | True    |         | 1.18.6, 1.18.8 |
| 1.18.6  |         |         | 1.18.8, 1.19.0 |
| 1.18.8  |         |         | 1.19.0 |
| 1.19.0  |         | True    | |

Reference: How To Create An HTML Report With PowerShell

Ethereum Blockchain-As-A-Service in Azure Cloud

Ethereum Blockchain as a Service (EBaaS), provided by Microsoft Azure and ConsenSys, allows enterprise customers and partners to play, learn, and fail fast at a low cost in a ready-made dev/test/production environment. It allows them to create private, public, and consortium-based blockchain environments very quickly in Azure. In this session, you will learn how to get started prototyping the building blocks of a decentralized application using EBaaS in Azure.

Venue: Triangle Azure User Group

Slides: download.