Changelog

Full release history for the Bosch Smart Home Camera HA integration.

Newest first. The README only highlights the most recent release — for older versions see this file or the GitHub Releases page (each release page mirrors the same notes plus downloadable assets).


v10.7.0

Event recordings now appear in HA's Media Browser — both local and NAS. New media_source provider exposes downloaded events under Media → Bosch SHC Camera, with two backends auto-detected from existing options:

  • Local — when Events automatically download is enabled with a download_path. Tree: Camera → Date → Event.
  • NAS / SMB — when SMB upload is enabled (default for users who don't want to fill HA's small disk). Tree: Year → Month → Day → Event; matches the on-disk layout, all cameras share a day folder. Files are streamed on-demand via smbprotocol with HTTP Range support so MP4 seeking works.
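The Range handling the NAS backend needs for MP4 seeking can be sketched as a single-range header parser (a minimal sketch with hypothetical names; the real view additionally emits the 206 status and Content-Range header):

```python
def parse_range(header: str, size: int):
    """Parse "Range: bytes=start-end" into inclusive (start, end) offsets.

    Returns None when no usable range is present (serve the full body).
    """
    if not header or not header.startswith("bytes="):
        return None
    spec = header[len("bytes="):].split(",")[0].strip()  # first range only
    start_s, _, end_s = spec.partition("-")
    if start_s == "":                        # suffix form: bytes=-500
        length = int(end_s)
        return (max(size - length, 0), size - 1)
    start = int(start_s)
    end = int(end_s) if end_s else size - 1  # open-ended: bytes=4000-
    return (start, min(end, size - 1))

print(parse_range("bytes=0-1023", 5000))   # (0, 1023)
print(parse_range("bytes=4000-", 5000))    # (4000, 4999)
print(parse_range("bytes=-500", 5000))     # (4500, 4999)
```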

Each event title shows time, type, and camera (e.g. 09:15:23 — MOVEMENT (Garten)). MP4 clips play inline; JPEG snapshots double as thumbnails for the matching clip. macOS resource-fork files (._*) are filtered out — relevant for FRITZ.NAS / Time Machine targets.

When only one backend is configured, the source-chooser is hidden so the tree opens straight at the meaningful content. With both backends enabled the user picks Lokal ("local") vs NAS at the entry root.

Manual filter — Media Browser source option. New options-flow dropdown overrides the auto-detect when needed: Auto (default — show every backend with data), Nur Lokal (local only), Nur NAS (NAS only), Deaktiviert (disabled — hide the Media Browser entry entirely). Useful when both download_path and SMB upload are active but only one of them should appear in the browser.

Files are served by an authenticated /api/bosch_shc_camera/event/… view; path-traversal is blocked, only image/jpeg and video/mp4 are returned. Forum thread context: simon42 community post #14 — same UX as Reolink's Media → Reolink entry.
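The traversal/content-type guard on that view can be sketched like this (function and constant names are hypothetical; the real view also checks HA authentication):

```python
from pathlib import Path

ALLOWED = {".jpg": "image/jpeg", ".mp4": "video/mp4"}

def resolve_event_file(base_dir: str, requested: str):
    """Resolve a requested event file, rejecting anything that escapes
    base_dir or is not one of the two served content types."""
    base = Path(base_dir).resolve()
    target = (base / requested).resolve()
    if base not in target.parents and target != base:
        return None                      # path traversal (e.g. ../../etc/passwd)
    mime = ALLOWED.get(target.suffix.lower())
    if mime is None:
        return None                      # only image/jpeg and video/mp4
    return target, mime
```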

v10.6.2

Branding fix — switched to the right Bosch app icon. v10.6.1 mistakenly used the blue Bosch Smart Home hub icon. v10.6.2 uses the red Bosch Smart Camera app icon (Robert Bosch GmbH, sourced from the official iOS App Store listing) — that's the camera-specific Bosch branding which matches what this integration actually does. Pure asset swap.

v10.6.1

Branding refresh. The integration's icon files (brand/icon.png, icon@2x.png, dark_icon.png, dark_icon@2x.png) now use the official Bosch Smart Home brand mark — same icon HA Core's bundled bosch_shc integration uses (sourced from the Home Assistant Brands repository, CC BY 4.0). Replaces the previous custom red camera icon for visual consistency with the rest of the Bosch Smart Home ecosystem in HA. Pure asset swap — no code change. (Superseded by v10.6.2 — wrong icon variant.)

v10.6.0

Image rotation 180° for ceiling-mounted indoor cameras. New per-camera switch switch.bosch_<cam>_bild_180deg_drehen that rotates the camera image by 180° for upside-down (ceiling) mounting. Indoor-only — outdoor cameras have a fixed mounting orientation and don't get the switch. Three layers of effect, all client-side (Bosch firmware does not expose any image-rotation API):

  • Lovelace card applies a CSS transform: rotate(180deg) to the <video> and <img> elements. Zero CPU, zero latency, GPU-composited — the toggle is instant with no stream restart and no re-encode.
  • Snapshot path rotates the JPEG via PIL before serving it through camera.async_camera_image(), so push notifications, NAS clip uploads, and any other consumer that reads the camera entity also see the right-way-up image. ~15-30 ms per snapshot.
  • PTZ pan inversion — for the Gen1 360 camera, BoschPanNumber automatically inverts the slider sign when the rotation switch is on, so "right" on the slider stays "right" on the user's screen even when the camera is upside-down.

State persists across HA restarts via RestoreEntity. Default OFF. Available on Gen1 360 Innenkamera and Gen2 Eyes Innenkamera II.
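On decoded pixel data the 180° transform is just a double reversal — a pure-Python sketch of what the snapshot path does (the integration itself applies PIL's rotation to the decoded JPEG):

```python
def rotate_180(pixels):
    """Rotate a row-major pixel grid by 180 degrees: reverse the row
    order, then reverse each row."""
    return [row[::-1] for row in pixels[::-1]]

grid = [[1, 2, 3],
        [4, 5, 6]]
assert rotate_180(grid) == [[6, 5, 4],
                            [3, 2, 1]]
# Applying it twice is the identity — toggling the switch back restores
# the original orientation with no accumulated loss.
assert rotate_180(rotate_180(grid)) == grid
```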

Card v2.11.1 ships alongside.

v10.5.4

Stream switch unblocked when prior session has expired upstream. When a previous live session had its underlying URL invalidated (e.g. the relay-side lifetime cap was reached while the switch was still ON), HA's Stream.stop() could block waiting for a stuck FFmpeg reconnect-loop to exit. Both teardown paths (_tear_down_live_stream shared exit, fresh-toggle stale-Stream invalidation in _try_live_connection_inner) now wrap the call in asyncio.wait_for(timeout=5) and force-detach on timeout. Without this, a single hung stream.stop() held the per-camera setup lock for >5 minutes and every subsequent switch-ON returned try_live_connection: already in progress for ... — skipping.

REMOTE session lifetime watchdog. Mirror of the existing LOCAL keepalive task: when a stream opens against the cloud relay, a generation-tracked terminator is scheduled for max_session_duration - 60 s and tears the session down cleanly before the relay drops the RTSP TCP with a hard reset. Without this, the URL goes stale silently — switch shows "streaming" but FFmpeg is in a reconnect-loop, the next consumer sees a 2-minute HLS spinner. Generation counter shared with _auto_renew_local_session; OFF→ON cycles cancel the watchdog automatically.

AUTO-mode REMOTE-fallback now self-heals. Three independent fixes reduce the "permanently pinned to Cloud" failure mode that occurred after a transient LAN issue saturated the LOCAL error counter:

  • record_stream_error skips the increment when the active connection is REMOTE — Cloud-side hiccups no longer count against the LAN's health budget.
  • The error counter time-decays in _try_live_connection_inner's AUTO branch: 5 minutes if the camera's TCP-ping cache says LAN is currently reachable, 30 minutes otherwise. Modeled after the existing _LOCAL_RESCUE_TTL_SEC decay for cred-rotation rescues.
  • The status-loop's TCP-ping fast-path actively clears the fallback flag the moment LAN becomes reachable again — the next stream-on attempts LOCAL first instead of going straight to REMOTE. Only fires when stream_connection_type == "auto" and a fallback was actually in effect.
  • During a currently running REMOTE-fallback stream, the same trigger additionally schedules a try_live_connection(is_renewal=True) so the live HLS session migrates Cloud → LAN via Stream.update_source() without waiting for a re-toggle. Brief (~2-3 s) re-buffer during the swap; LAN failure simply lands back on REMOTE. 5-minute cooldown prevents ping-pong if LAN flaps.
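The time-decay in the second fix can be sketched as a pure function (a hypothetical simplification; the real counter lives on the coordinator):

```python
import time
from typing import Optional

LAN_OK_DECAY_SEC = 5 * 60     # TCP-ping cache says LAN is reachable
LAN_DOWN_DECAY_SEC = 30 * 60  # LAN unreachable: forget errors more slowly

def effective_error_count(count: int, last_error_ts: float,
                          lan_reachable: bool,
                          now: Optional[float] = None) -> int:
    """Return the LOCAL error count after decay: once the last error is
    older than the applicable window, the budget resets to zero."""
    now = time.time() if now is None else now
    window = LAN_OK_DECAY_SEC if lan_reachable else LAN_DOWN_DECAY_SEC
    return 0 if now - last_error_ts >= window else count

# Errors from 6 minutes ago no longer count while LAN pings fine...
assert effective_error_count(4, last_error_ts=0, lan_reachable=True, now=360) == 0
# ...but still count against the budget while LAN is down.
assert effective_error_count(4, last_error_ts=0, lan_reachable=False, now=360) == 4
```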

max_stream_errors raised — per-model thresholds. With self-heal in place a false fallback now recovers automatically, so the gradual-counter path can give LOCAL a fairer chance before giving up. Default bumped from 3 → 5 (indoor / INDOOR, HOME_Eyes_Indoor, default unknown), explicit override 10 for outdoor models (OUTDOOR / CAMERA_EYES, HOME_Eyes_Outdoor / CAMERA_OUTDOOR_GEN2) where real WLAN flap + slower encoder init produce more transient bursts. The watchdog's hard 120 s "no healthy HLS output" path is unchanged — it still forces REMOTE fallback regardless of this counter.
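The per-model lookup amounts to a small table (thresholds as stated above; the helper itself and the exact model-string keys are a sketch):

```python
# Outdoor models tolerate more transient error bursts (WLAN flap, slower
# encoder init) before AUTO gives up on LOCAL.
_MAX_STREAM_ERRORS = {
    "CAMERA_EYES": 10,           # HOME_Eyes_Outdoor
    "CAMERA_OUTDOOR_GEN2": 10,
}
_DEFAULT_MAX_STREAM_ERRORS = 5   # indoor models and unknown defaults

def max_stream_errors(model: str) -> int:
    return _MAX_STREAM_ERRORS.get(model, _DEFAULT_MAX_STREAM_ERRORS)
```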


v10.5.3

mark_events_read default flipped to OFF. v10.5.2 introduced the option but kept the previous behaviour as default — events were still being marked as read on the Bosch cloud after HA processed them, which silently consumed the "new event" highlight in the Bosch app for users who only use HA for live streaming. simon42 forum (Topic 81743 / Post 366006) confirmed the default-ON path was the wrong choice for the typical user. The default in OPTIONS_DEFAULTS, in the Configure dialog, and in all five gating call sites (__init__.py startup-poll / per-event tick / auto-download cycle, fcm.py push handler / clip handler) now resolves to False — fresh installs and existing installs that never explicitly toggled the option both stop firing PUT /v11/events {isRead: true}. The Bosch app keeps treating new events as unread regardless of whether HA already saw them. Users who prefer the previous behaviour (HA as primary client, no stale "new event" badges in the app) can enable it via Integration → Configure → Mark Bosch cloud events as read. English/German option-help text updated to describe the new default.
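All five call sites resolve the option through the same gate; sketched with the documented key (surrounding names are hypothetical):

```python
OPTIONS_DEFAULTS = {"mark_events_read": False}   # flipped from True in v10.5.3

def should_mark_read(options: dict) -> bool:
    """Gate shared by the startup poll, per-event tick, auto-download
    cycle, and both FCM handlers."""
    return bool(options.get("mark_events_read",
                            OPTIONS_DEFAULTS["mark_events_read"]))

assert should_mark_read({}) is False                         # fresh install
assert should_mark_read({"mark_events_read": True}) is True  # explicit opt-in
```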


v10.5.2

New option mark_events_read (default ON). The integration calls PUT /v11/events {id, isRead: true} after processing each motion/audio event from five different code paths (startup poll, per-event coordinator tick, auto-download cycle, FCM push handler, FCM clip handler). Side effect: motion events appear as already viewed in the Bosch app, even if the user only consumes them via HA's live stream and never opens an automation. Reported by xDraGGi on the simon42 forum (Topic 81743 / Post 364079). New option mark_events_read in Integration → Configure gates all five call sites — default True preserves backwards-compatible behaviour, set to False to keep events flagged as unread in the Bosch app while still receiving them in HA. Local dedup via _last_event_ids is unaffected (lives independent of the cloud isRead flag).

Sensor renamed: Event Detection → FCM Push Status. The diagnostic sensor BoschFcmPushStatusSensor was named "Event Detection" in entity translations, which suggested that a disabled state meant no event detection at all. In reality the sensor only reflects the FCM-push pipeline (states: fcm_push / polling / disabled) — normal coordinator polling continues regardless. The unique_id (bosch_shc_camera_fcm_push_status) was already correct and is unchanged, so historical state preserves cleanly across the rename.

FCM Push Mode dropdown gated on master switch. The per-integration select.fcm_push_mode entity is now available=False whenever enable_fcm_push is OFF in Integration → Configure. Previously the dropdown was fully interactive on the device page even though changing it had no effect until the master switch was enabled — discovered via simon42-forum PN where geotie reported Event Detection: disabled while showing FCM Push Mode: Auto in the same screenshot.


v10.5.1 (patch)

iOS native HLS direct-path (Card v2.10.20). On iOS/WKWebView, WebRTC over Cloudflare Tunnel fails after a 5 s ICE timeout (UDP cannot traverse the HTTP tunnel) — the card then fell back to HLS, but the combined delay caused AVFoundation timeouts resulting in a ~1 minute black screen. Fix: iOS is detected via !window.MediaSource && canPlayType("application/vnd.apple.mpegurl") and the WebRTC attempt is skipped entirely; native HLS starts immediately via video.src. Desktop browsers (Chrome/Firefox) continue to use WebRTC as before. An info banner is shown while streaming on iOS: "ℹ HLS (kein WebRTC auf iOS) — wird automatisch neu gestartet" ("HLS, no WebRTC on iOS; restarts automatically").


v10.5.1

Stream Status Sensor. New sensor.bosch_{name}_stream_status entity per camera with states idle / warming_up / connecting / streaming / streaming_remote. device_class: enum with _attr_options so HA's more-info popup shows all possible states and a categorical state-history timeline. The card reads this sensor on every hass update — cold-open fix: opening a dashboard while the backend is already pre-warming shows the correct overlay and snapshot background without requiring a toggle click (_awaitingFresh guard prevents duplicate snapshot fetches). New snapshot_during_warmup card config option (default true).

Full entity translation and documentation for all platforms. Added _attr_translation_key + _attr_has_entity_name = True on _BoschSensorBase — entity names now render as "Bosch {Camera} {Sensor Name}" via translations instead of falling back to the device name. Removed conflicting _attr_name assignments from sensor.py. All 7 enum-like sensors now have SensorDeviceClass.ENUM + _attr_options. Full entity.* translation blocks in en.json and de.json covering all platforms: 26 sensors, 22 switches, 16 number entities, 5 selects (with per-state labels), 3 binary sensors, 2 buttons, 1 update, 3 lights. _attr_entity_category (CONFIG / DIAGNOSTIC / none) set on all entities across all platforms.

Event type fixes. BoschLastEventTypeSensor.native_value kept underscores in API values (trouble_disconnect not trouble disconnect) — options and translations expanded to cover audio_alarm, trouble_disconnect, trouble_reconnect. BoschAlarmStateSensor options expanded with SYSTEM_MANAGED_ARMED / DISARMED, ARMED_AWAY, ARMED_STAY, DISARMED after SYSTEM_MANAGED_DISARMED caused a ValueError at runtime on the Gen2 Indoor II.

Cloudflare-Tunnel HLS-Buffering Workaround (cf_unbuffer.py, runtime monkey-patch). Diagnosed 2026-04-29 from a remote-over-Cloudflare-tunnel session: cloudflared buffers HTTP responses by default per its connection.shouldFlush(headers) source — only Content-Type: text/event-stream / application/grpc / application/x-ndjson, no Content-Length, or Transfer-Encoding: chunked triggers streaming mode. HA's HLS endpoints (/api/hls/<token>/*.m3u8 and *.m4s segments) hit none of those — application/vnd.apple.mpegurl / video/mp4 with Content-Length set, no chunked. Cloudflared collected each segment in full at the edge before forwarding; iOS WKWebView on cellular gave up before the buffer flushed (visible in the cloudflared add-on log as Incoming request ended abruptly: context canceled). Two-prong runtime monkey-patch of HA's view classes (homeassistant.components.stream.hls): (1) HlsMasterPlaylistView + HlsPlaylistView get their Content-Type rewritten to text/event-stream; x-actual=application/vnd.apple.mpegurl — cloudflared HasPrefix-matches Branch (C) → flush. (2) HlsInitView + HlsPartView + HlsSegmentView get their web.Response re-emitted as a chunked web.StreamResponse (no Content-Length) — cloudflared shouldFlush() Branch (B) → flush. Verify with curl -sI https://your-ha.example.com/api/hls/<token>/segment/0.m4s — must show Transfer-Encoding: chunked and no Content-Length.
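The decision logic of cloudflared's shouldFlush(headers), as described above, reimplemented in Python to show why the two prongs work (a simplified sketch; the real Go source may differ in detail):

```python
STREAMING_PREFIXES = ("text/event-stream", "application/grpc",
                      "application/x-ndjson")

def should_flush(headers: dict) -> bool:
    """True when cloudflared forwards the response in streaming mode
    instead of buffering it in full at the edge."""
    ct = headers.get("Content-Type", "")
    if any(ct.startswith(p) for p in STREAMING_PREFIXES):
        return True                                    # Branch (C)
    if "chunked" in headers.get("Transfer-Encoding", "").lower():
        return True                                    # Branch (B)
    if "Content-Length" not in headers:
        return True
    return False

# Stock HA HLS response: buffered at the edge.
assert not should_flush({"Content-Type": "video/mp4", "Content-Length": "1234"})
# Prong (1): playlist Content-Type rewritten -> Branch (C).
assert should_flush({"Content-Type":
                     "text/event-stream; x-actual=application/vnd.apple.mpegurl",
                     "Content-Length": "512"})
# Prong (2): chunked segment, no Content-Length -> Branch (B).
assert should_flush({"Content-Type": "video/mp4",
                     "Transfer-Encoding": "chunked"})
```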

iOS Companion App livestream fix (Card v2.10.14). Root cause: _startLiveVideo called _loadHlsJs() unconditionally before the native-HLS fallback — WKWebView's stricter CDN policy caused hls.js load to throw, aborting the init before the video.src native path was tried. Fix: wrap _loadHlsJs() in its own try/catch; fall through to native playback when Hls is null or Hls.isSupported() is false (= iOS).

Motion / Person / Audio binary sensor reliability fix. Two compounding issues: (1) FCM push wrote to _cached_events but not to coordinator.data[cam_id]["events"] — sensors read stale event list on the same async_update_listeners() cycle. Fixed by mirroring immediately. (2) EVENT_ACTIVE_WINDOW raised from 30 s to 90 s to cover the full scan_interval in the polling-only path.


v10.5.0

FTP upload backend for NAS uploads + correctness fixes for the NAS settings. (1) FTP as alternative to SMB for event uploads. The FRITZ!Box NAS (and several other consumer-grade NAS devices) handles SMB metadata operations very poorly — and on macOS Sequoia 15.x the smbfs client is also known to hang on cross-directory rename() for minutes at a time (multiple Apple-Discussions threads, plus AVM and PC-WELT documenting the FritzOS-CPU bottleneck on USB storage). Real measurement on a FRITZ!Box 7590 with ~3300 small files (JPG + MP4): SMB rename via macOS-mounted share blocked for 9+ minutes without a single completed move, while FTP RNFR/RNTO against the same hardware completed all 3117 moves in 42 seconds (~74 file/s). The integration now exposes a new upload_protocol option (SMB / FTP, default SMB for backwards compatibility). FTP reuses the existing smb_server / smb_username / smb_password / smb_base_path / smb_folder_pattern / smb_file_pattern fields — only the smb_share field is unused under FTP because FTP has no shares (the base path is taken relative to the FTP root, e.g. FILES/Bosch-Kameras instead of just Bosch-Kameras on a FRITZ!Box). All three sync paths are protocol-aware: the periodic event upload, the daily retention cleanup (FTP uses MDTM for accurate mtimes), and the disk-free check (skipped silently under FTP because there's no portable RPC for it). Implementation uses Python's stdlib ftplib so no new requirement is added to the manifest. (2) NAS folder-pattern docs corrected. The settings descriptions for smb_folder_pattern / smb_file_pattern listed placeholders as [year], [month], [day], etc. — but the code actually uses Python str.format() with {year}, {month}, {day}, … so anyone who copy-pasted the documented pattern into the field got a KeyError on the next upload. 
All three translation files (strings.json, translations/en.json, translations/de.json) are now consistent with the code, plus the alert-storage path was corrected from /media/bosch_alerts/ (wrong) to www/bosch_alerts/ (the actual on-disk location served at /local/bosch_alerts/). (3) Default folder pattern now {year}/{month}/{day}. Previously {year}/{month} — for cameras that fire many motion events per day this produces folders with a thousand files inside them, which is hostile to both browsing and SMB performance. Existing custom patterns are untouched; only the default for new installs (and for upgraders who have not yet customised the field) changes. (4) Translation cleanup. German and English option screens are now 100 % key-aligned (no more cases of an option being labelled in one language but missing the description in the other). Previously-missing entries restored: enable_go2rtc description was missing in both languages, debug_logging description was missing in German. The alert_notify_service description in English now matches the German one and the actual code behaviour: this field is the fallback when per-type fields are empty, not a fan-out destination as it used to read. (5) Helper script migrate_smb_day_folders.sh ships at the repo root for users who want to migrate an existing flat {year}/{month}/ layout into the new {year}/{month}/{day}/ layout. Default dry-run, parses the day from filename, and runs against any mounted share.
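Since the pattern fields are filled via str.format(), the placeholders must be the curly-brace form — a minimal sketch with the documented field names (the surrounding code that formats the timestamps is hypothetical):

```python
from datetime import datetime

# New default folder pattern since v10.5.0: one folder per day.
pattern = "{year}/{month}/{day}"

ts = datetime(2026, 4, 28, 9, 15, 23)
folder = pattern.format(year=ts.strftime("%Y"),
                        month=ts.strftime("%m"),
                        day=ts.strftime("%d"))
assert folder == "2026/04/28"
```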


v10.4.10

Three resilience fixes for stream stability + WAN-outage handling. (1) Stream stays on LAN after idle reconnect (Bosch session-cred rotation). Symptom: AUTO mode pre-warms LOCAL successfully and runs cleanly for ~14 min, then — when the HLS consumer disconnects (browser tab closed) and HA's stream-worker later reconnects — the camera answers HTTP 401 on the same TLS proxy (Bosch silently rotated the per-session digest creds during the RTSP idle gap). After 3 consecutive Error from stream worker: 401 Unauthorized errors, AUTO fell back to REMOTE even though the LAN was perfectly reachable. Reactive 401 rescue: when _handle_stream_worker_error sees a 401 / "Unauthorized" / "authorization failed" message on a LOCAL session, issue one fresh PUT /connection LOCAL to obtain new creds before falling through to the REMOTE path. Gated by a per-camera _local_rescue_attempts counter (max 1 per failure burst) with a 5-minute time-decay so the counter doesn't stick at 1 after the first rescue: record_stream_success never fires when no HLS consumer is connected, so without time decay the next legitimate 401 burst (typically 8–14 min later) would skip straight to REMOTE. Proactive cred refresh in heartbeat: capture analysis (see captures/api-findings.md §1) showed the Bosch iOS app fires PUT /connection LOCAL at ~5 Hz during live view and consumes the fresh digest user/password from each response; the active RTSP connection is unaffected because Bosch only invalidates the rotated creds for new connects. Our heartbeat now mirrors this behaviour: each successful heartbeat parses the response, caches user/password into _live_connections[cam_id], rebuilds the cached rtspsUrl with fresh creds, and calls Stream.update_source(). The running stream-worker is not disturbed (HA's update_source only changes the source for the next worker restart) — but when the worker eventually restarts after an idle gap, it picks up fresh creds and avoids the 401 in the first place. 
(2) FCM noise filter for WAN outages. Real-world finding 2026-04-28: when the home router rebooted, firebase_messaging.fcmpushclient._listen re-entered itself recursively on every retry, and each ERROR log line carried a ~3000-frame stack trace. With the 30 s reconnect cadence that produced ~200 log lines/s, ~12 500 lines/min, and the HA MainThread became wedged in stack-trace formatting and disk I/O — CPU rose from 30 % to 85 %, the bosch-shc-camera coordinator stopped firing entirely (no "Finished fetching" line for 4 min), and other integrations slowed too. New _FCMNoiseFilter (in fcm.py) attaches once to the firebase_messaging.fcmpushclient logger when FCM is set up: it strips exc_info/exc_text from "Unexpected exception during read" records (the recursive trace adds zero diagnostic value — we already know FCM disconnected) and rate-limits to one pass-through per 60 s. Reconnect behaviour is unchanged; the library still retries normally and recovers when WAN comes back, but the log volume drops from ~200 lines/s to ~1 line/min and the MainThread stays free. Library issue sdb9696/firebase-messaging#33 covers the abort-on-error angle but not the recursive trace itself, so a client-side filter is the right place. (3) Same-camera stream-source race protection (carried over from earlier work in this version): try_live_connection: already in progress for X — skipping is now the warning we see when two parallel start attempts collide; the first one always wins, the second exits cleanly without leaving a half-built TLS proxy or stale cache entry. (4) Hardware-privacy auto-teardown. When the camera's physical privacy button is pressed (or someone toggles privacy in the Bosch app), the cloud reports privacyMode=ON but our BoschPrivacyModeSwitch.async_turn_on — the only path that calls _tear_down_live_stream — never runs. 
Result before this fix: stuck state: streaming, the live-stream switch frozen on on, and the TLS proxy entering an endless reconnect loop against the now-gone camera (Errno 113 Host unreachable, observed in production at 06:25 on 2026-04-28 when a household member pressed the indoor cam's privacy button). New code path: in _async_update_data, when the privacy cache transitions OFF→ON outside the user-write lock and a live session is active, schedule the same teardown as the user-toggle path. (5) TLS-proxy connect-failure circuit breaker. When the camera goes physically offline (privacy button, power cut, Wi-Fi drop), HA's stream worker keeps opening new client connections every few seconds, and each one triggered a 10 s connect-timeout against the gone camera — burning CPU on a hopeless loop. After 5 consecutive connect failures within 30 s the proxy now closes its server socket; the coordinator (privacy-aware) decides whether to rebuild the session or stay torn-down. (6) does not support play stream service log filter. During the ~25 s LOCAL pre-warm window (PUT /connection → TLS proxy → encoder warm-up → rtspsUrl set) any consumer that calls the camera/stream WS API gets stream_source()==None and HA's camera component logs an ERROR. Real captures show 9 such lines in 15 s for a single stream start (multiple Lovelace tabs + Companion app + the card's own HLS-fallback path all polling around the same time). New _StreamSupportNoiseFilter keeps one ERROR per 30 s per bosch_* entity so a real "stream truly broken" issue still surfaces, but the pre-warm-window burst is collapsed to a single line. Other camera integrations are not touched. (7) Overview card use_bosch_sort option. New per-card opt-in flag for custom:bosch-camera-overview-card (Card v2.10.12 / Overview v1.1.0): when set, sorts cameras inside each tier (live → privacy → offline) by the Bosch-app priority instead of alphabetically. 
The priority is read from the new bosch_priority attribute on each Bosch camera entity, which mirrors the float priority field returned by GET /v11/video_inputs (settable via PUT /v11/video_inputs/order from the Bosch app). Default false preserves the old alphabetic ordering. YAML: use_bosch_sort: true. (8) Card stale-state guard against accidental toggles (Card v2.10.13). Diagnosed live 2026-04-28 14:00: a Live-Stream switch flipped to off from a system-admin user_id (iOS Companion App) with parent_id: null (= direct service call, not an automation) — but the user reported they didn't tap it. Root cause: when the HA-Companion-App suspends its WebSocket on backgrounding (Mobile/WLAN switch, app put away for a while), the local hass.states cache can briefly disagree with the server until the next WS push arrives. A user tap on the card's stream button during that window fires the wrong-direction toggle, because the card was reading a stale state. Fix in bosch-camera-card.js: (a) _toggleStream is now async and pulls the authoritative state via GET /api/states/<switch> immediately before callService — if the freshly-fetched state disagrees with what the card was showing, the toggle is aborted, the optimistic state is cleared, and the view is re-rendered (the user has to tap again with the now-correct state); (b) _onVisibilityChange (already wired to the Page Visibility API) now also pulls fresh REST states for the four primary toggle switches (live_stream, privacy_mode, audio, camera_light) when the page returns to the foreground, so a backgrounded card resyncs immediately rather than waiting for the next WS push. Behaviour unchanged when the card was already in sync; the REST round-trip adds <100 ms before the existing optimistic flip in the common path.
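The noise-filter idea from fix (2) can be sketched with a stdlib logging.Filter (class and parameter names are hypothetical, not the integration's actual _FCMNoiseFilter):

```python
import logging
import time

class RateLimitedNoiseFilter(logging.Filter):
    """Strip the useless recursive traceback from matching records and
    let at most one such record through per interval."""

    def __init__(self, needle: str, interval: float = 60.0):
        super().__init__()
        self._needle = needle
        self._interval = interval
        self._last = float("-inf")

    def filter(self, record: logging.LogRecord) -> bool:
        if self._needle not in record.getMessage():
            return True                      # unrelated records pass untouched
        record.exc_info = None               # drop the ~3000-frame trace:
        record.exc_text = None               # we already know FCM disconnected
        now = time.monotonic()
        if now - self._last < self._interval:
            return False                     # rate-limited: suppress
        self._last = now
        return True

logging.getLogger("firebase_messaging.fcmpushclient").addFilter(
    RateLimitedNoiseFilter("Unexpected exception during read"))
```

Reconnect behaviour is untouched — the filter only shapes what reaches the log handlers.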


v10.4.9

Revert of v10.4.8 part 2 — privacy-mode RCP override was based on a wrong byte mapping. A/B testing 2026-04-27 showed that RCP 0x0d00 byte[1] stays 1 regardless of the user-facing privacy-mode toggle (verified by toggling privacy ON↔OFF in HA and reading 0x0d00 before and after — no change). That byte therefore does not represent the privacy mode; rcp_findings.txt's "PRIVACY MASK state" label refers to a separate static configuration. The Bosch cloud /v11/video_inputs.privacyMode field was never the lie I claimed in v10.4.8 — it was the correct source of truth all along. Removed: the override block in _async_update_data, the mismatch override in _refresh_rcp_state, the async_update_listeners() trigger, the camera-entity attributes rcp_privacy_mode / rcp_led_dimmer / rcp_state_age / rcp_state_source (since the underlying cache is no longer populated for those keys), and the helper functions parse_privacy_state / parse_led_dimmer_percent from local_rcp.py. Kept: the generic rcp_read_local_sync / rcp_read_remote_sync helpers (correct), the _rcp_state_cache dict scaffolding, and the post-stream-start _refresh_rcp_state hook (now a marker, ready for future verified RCP+ reads). The lesson: never ship a feature that overrides authoritative state from one source with another, without first confirming via a controlled toggle that the new source actually reflects the toggled value.


v10.4.8

Local RCP+ READ via the ad-hoc cbs-…-user from PUT /connection + Bosch Cloud privacyMode correction. Two parts: (1) RCP+ reads. New module local_rcp.py issues HTTP Digest reads against https://<cam>:443/rcp.xml (LOCAL session) and HTTP Basic-empty against https://proxy-XX:42090/{hash}/rcp.xml (REMOTE session — Cloud-Proxy fallback when HA is not on the LAN). Verified on Gen2 Outdoor FW 9.40.25: 10 reads/10 s did not rotate creds or kill the running stream — only PUT /connection rotates, normal RCP reads are safe. Two fields pulled opportunistically after each successful stream start: rcp_privacy_mode (from 0x0d00 P_OCTET, byte[1]==1 means ON) and rcp_led_dimmer (from 0x0c22 T_WORD, 0–100 %). Exposed as camera entity diagnostic attributes plus rcp_state_age (seconds since last read) and rcp_state_source (local / remote). (2) Privacy-mode correction. Diagnosed live 2026-04-27: Bosch Cloud /v11/video_inputs.privacyMode returned 'OFF' for the Terrasse (Gen2 Outdoor, ONLINE, physically in privacy) while every offline camera and the camera's own RCP read correctly returned ON. The HA switch.bosch_<cam>_privacy_mode entity, the BoschLiveStreamSwitch.available gate, the snapshot-fetch short-circuit, and try_live_connection's privacy guard all read _shc_state_cache.privacy_mode — so the cloud lie propagated everywhere. Fix: RCP+ now refines the SHC cache aggressively when (a) SHC is None (unconfigured, was already the v10.4.8-part-1 behavior), or (b) SHC and RCP disagree and no user-write lock is active — RCP wins because it reads camera hardware directly. Two override sites: _refresh_rcp_state corrects on each stream start, and the Cloud-Coordinator-Tick re-checks the RCP cache (≤120 s old) and re-corrects after every cloud refresh, so the cloud lie cannot resurface. async_update_listeners() is fired on each correction so the privacy switch flips immediately, without waiting for the next 60 s tick. 
The local /rcp.xml endpoint returns XML (not the binary TLV the Cloud-Proxy uses on the same path), so the parser is XML-based. Read-only — writes still need the service-account credentials Bosch will release with the Sommer 2026 local-user feature.


v10.4.7

New option: HLS player buffer profile (live_buffer_mode). Adds an integration-options dropdown to choose how aggressively the Lovelace card pre-buffers video before showing it. Three modes: Latency (~4-6 s lag, may stutter on flaky Wi-Fi), Balanced (~8-10 s lag, default — robust against typical Wi-Fi hiccups), Stable (~12-15 s lag, smooth even on weak links). Mapping is hardcoded client-side in the card: each mode sets liveSyncDurationCount, liveMaxLatencyDurationCount, maxBufferLength, maxMaxBufferLength, and lowLatencyMode on the hls.js instance. The previous values (3 / 6 / 10 / 20 / true) corresponded roughly to "Latency"; the new default is "Balanced" (4 / 8 / 14 / 22 / false), which is why existing users may see slightly more lag (~2 s) but fewer stutters out of the box. The maxBufferLength cap stays well below HA's 30 s OUTPUT_IDLE_TIMEOUT for all three modes, so FFmpeg is never killed by the idle watchdog. Audio quality is higher than the official Bosch app — the mobile app downsamples audio for cellular bandwidth, while this integration delivers the unmodified AAC-LC stream. Also fixed a UX confusion: the card's "Reaktion" info field now has a tooltip clarifying that the 500 ms / 1000 ms value shown is the Bosch-API response hint (bufferingTime from PUT /connection), not the player buffer — the latter is now controlled by the new live_buffer_mode option in integration settings.
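The mode → hls.js mapping for the two profiles whose values these notes state, expressed as a Python table for clarity ("Stable" omitted because its numbers aren't listed here; the real mapping is hardcoded in the card's JavaScript):

```python
# Keys are hls.js config fields; values as documented above.
LIVE_BUFFER_PROFILES = {
    "latency":  dict(liveSyncDurationCount=3, liveMaxLatencyDurationCount=6,
                     maxBufferLength=10, maxMaxBufferLength=20,
                     lowLatencyMode=True),    # the pre-v10.4.7 behaviour
    "balanced": dict(liveSyncDurationCount=4, liveMaxLatencyDurationCount=8,
                     maxBufferLength=14, maxMaxBufferLength=22,
                     lowLatencyMode=False),   # new default
}

# Every profile keeps maxBufferLength well below HA's 30 s
# OUTPUT_IDLE_TIMEOUT, so FFmpeg is never killed by the idle watchdog.
assert all(p["maxBufferLength"] < 30 for p in LIVE_BUFFER_PROFILES.values())
```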


v10.4.6

Three hardening changes. (1) Privacy enforcement — stream cannot be started when Privacy Mode is ON. Four bypass paths existed: BoschLiveStreamSwitch.available returned True while privacy was active (entity appeared clickable); async_turn_on used a fragile string comparison (str(…).upper() in ("ON", "TRUE", "1")) and issued a persistent_notification on the old code path; BoschAudioSwitch._apply_audio_change called try_live_connection without checking privacy; and coordinator.try_live_connection() had no guard at all. Fixes: available now gates on bool(_shc_state_cache.get(cam_id, {}).get("privacy_mode")) (entity greys out); async_turn_on raises ServiceValidationError (HA toast in UI, clean exception — no more persistent notification); _apply_audio_change logs a warning and returns early; try_live_connection has an early-exit guard (fail-open when cache not yet populated at boot). (2) Icon — no changes needed. Legal assessment confirmed the current SVG does not reproduce the Bosch trademark (uses Bosch red as a color only, not the circular wordmark). (3) Translation fixes (EN + DE). DE: standardised formality to informal "du" throughout (user.description heading); added missing debug_logging label (was in EN, absent in DE); corrected alert_save_snapshots path /www/bosch_alerts/ → /media/bosch_alerts/. EN: already consistent, no changes.


v10.4.5

Two fixes. (1) Fix: LOCAL snapshot was 6–10 s; now matches REMOTE speed (~1 s). The imageUrlScheme field from PUT /connection LOCAL response defaults to https://{url}/snap.jpg with no resolution parameter. Without a ?JpegSize= parameter, the camera triggers a full-resolution on-demand capture from the sensor — slow (~8 s when idle). The REMOTE path already hardcodes ?JpegSize=1206. Fix: append ?JpegSize=1206 to the LOCAL proxyUrl when no JpegSize= is already present. One-line change in __init__.py. Probe-confirmed: adding any JpegSize parameter on the LAN path cuts snapshot latency from 8 s to ~1.4 s (7×) when the camera is idle; with an active stream the latency was already <100 ms regardless. (2) Fix: TROUBLE_CONNECT / TROUBLE_DISCONNECT alerts now route to alert_notify_system instead of the information path. Previously, camera connectivity events (camera going offline or back online) were dispatched via "information" — same path as motion/person events — so they landed on the video clip service (or the fallback service) instead of the configured system notification service. Fix in fcm.py: detect TROUBLE events at dispatch time and route the text notification through _notify_type("system", …). Steps 2 (snapshot) and 3 (clip) are skipped entirely since connectivity events carry no media. Also fixes an edge case where the early-return guard blocked TROUBLE events when no alert_notify_information service was configured.
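The one-line JpegSize fix can be sketched as a helper (function name is hypothetical; only the append-when-missing logic is what shipped):

```python
def ensure_jpeg_size(url: str, size: int = 1206) -> str:
    """Append ?JpegSize= to a snapshot URL unless one is already present,
    so the LOCAL path avoids the slow full-resolution on-demand capture."""
    if "JpegSize=" in url:
        return url
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}JpegSize={size}"

assert ensure_jpeg_size("https://cam.local/snap.jpg") == \
    "https://cam.local/snap.jpg?JpegSize=1206"
# Idempotent: an existing parameter is left alone.
assert ensure_jpeg_size("https://cam.local/snap.jpg?JpegSize=1206").count(
    "JpegSize") == 1
```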


v10.4.4

Hotfix for v10.4.3: the privacy short-circuit accessed self._camera_status_extra directly — but that dict isn't allocated until the first successful coordinator tick. During the boot/integration-load window (and on any HA restart), async_camera_image raised AttributeError: 'BoschCameraCoordinator' object has no attribute '_camera_status_extra', which the v10.4.2 wrapper caught and served the placeholder JPEG — but every snapshot-refresh background task also failed with the same error in _async_trigger_image_refresh, so cameras showed only the placeholder until the cache had warmed up. Fix: getattr(self, "_camera_status_extra", {}).get(cam_id, {}) — falls through to normal fetch when the cache isn't ready yet, identical pre-v10.4.3 behavior. v10.4.3 was live ~10 minutes before this regression was caught in the post-deploy log scan; rolled forward rather than reverted because v10.4.4 keeps the network-call optimization once the cache is warm.
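
The boot-safe pattern, reduced to its essentials (class and method names are illustrative — the real code lives on the coordinator alongside the fetch paths):

```python
class BoschCameraCoordinator:
    """Sketch: _camera_status_extra is deliberately NOT set in __init__,
    mirroring the pre-first-tick state that raised AttributeError."""

    def _privacy_mode_cached(self, cam_id: str) -> bool:
        # getattr with a default survives the boot window; once the first
        # coordinator tick allocates the dict, the cached value takes over.
        extra = getattr(self, "_camera_status_extra", {})
        return bool(extra.get(cam_id, {}).get("privacy_mode"))

    def fetch_snapshot(self, cam_id: str):
        if self._privacy_mode_cached(cam_id):
            return None  # shutter closed — skip the network round-trip
        return b"\xff\xd8"  # placeholder for the real JPEG fetch
```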


v10.4.3

Optimization: skip snapshot fetches when Privacy Mode is ON. Both async_fetch_live_snapshot (REMOTE Cloud-proxy path) and async_fetch_live_snapshot_local (LAN HTTPDigest path) now short-circuit and return None immediately when the cached privacy_mode flag is True for the camera. Before: every coordinator tick (~1/min) would issue a PUT /connection REMOTE → snap.jpg request, get HTTP 200 with 0 bytes (Bosch backend behavior when the privacy shutter is closed), and log a debug line "empty response (privacy mode ON?)". With 4 cameras and one in privacy, that's ~4-8 wasted PUT/connection cycles per minute plus the same number of debug log lines, even though we already know the answer from the cached privacyMode field in the same /v11/video_inputs response we'd just fetched. The privacy state is read from _camera_status_extra[cam_id]["privacy_mode"] (populated at coordinator init line 1386), so no extra request needed for the check. The camera entity async_camera_image() falls through to its placeholder/cached path on None, identical to what happened before the short-circuit. No user-visible behavior change — pure log-noise + network-call reduction.


v10.4.2

Two robustness fixes — Gen1 cameras only. Diagnosed live 2026-04-27 with Innenbereich + Terrasse + Kamera (Gen1 360 Indoor) + Eingang/Garten (Gen1 Eyes Outdoor) all toggled simultaneously. Fix 1 — async_camera_image no longer 500s on transient pre-warm state. During the pre-warm window for Gen1 cams, an unhandled exception path produced HTTP 500 from HA's camera proxy. The Lovelace <img> element rendered the literal "500: Internal Server Error" 26-byte text body as a brown error frame on every Gen1 card — looking like cross-camera bleed even though the underlying streams were correct. Wrapped async_camera_image in a top-level try/except that always returns at least the placeholder 1×1 black JPEG (renamed the existing implementation to _async_camera_image_impl); CancelledError still propagates cleanly. Net effect: any future regression in the snapshot path becomes a debug log entry instead of a visible error frame. Fix 2 — is_stream_warming clears stuck flags more aggressively. Observed during the same 4-camera test: Gen1 cams stayed at stream_status="warming_up" with live_rtsps=null for >7 minutes while keepalive was already running (gen=2, 480s into session) — the existing auto-clear (added 2026-04-11) only handled the case where _live_connections[cam_id] was missing entirely, but not the case where the entry exists with _connection_type and _bufferingTime but no rtspsUrl (race in _try_live_connection_inner where the warming flag wasn't discarded on some exit path). Added two more clear-conditions: (a) flag set but rtspsUrl already populated → race, clear; (b) flag set for >300 s → hard timeout, clear. New _stream_warming_started: dict[str, float] tracks per-camera start time. Also unblocks privacy toggles on stuck cameras (which were previously gated on is_stream_warming returning False).
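
The two new clear-conditions can be sketched like this (tracker isolated from the coordinator for illustration; names and the exact connection-dict shape are simplifications):

```python
import time


class WarmingTracker:
    """Sketch of the stuck-flag clearing logic from Fix 2."""

    HARD_TIMEOUT_S = 300

    def __init__(self):
        self._warming = set()
        self._started = {}  # per-camera warming start time (monotonic)

    def mark_warming(self, cam_id):
        self._warming.add(cam_id)
        self._started[cam_id] = time.monotonic()

    def is_stream_warming(self, cam_id, conn):
        """conn mimics the _live_connections entry (may be None or partial)."""
        if cam_id not in self._warming:
            return False
        # (a) flag set but rtspsUrl already populated -> race, clear it
        if conn and conn.get("rtspsUrl"):
            self._clear(cam_id)
            return False
        # (b) flag set for longer than the hard timeout -> clear it
        if time.monotonic() - self._started.get(cam_id, 0.0) > self.HARD_TIMEOUT_S:
            self._clear(cam_id)
            return False
        return True

    def _clear(self, cam_id):
        self._warming.discard(cam_id)
        self._started.pop(cam_id, None)
```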


v10.4.1

Fix: stream cross-talk between two cameras streaming simultaneously. Reproduced live 2026-04-27 with Innenbereich (Gen2 Indoor) and Terrasse (Gen2 Outdoor) both active: the dashboard would render the same video on both camera cards — whichever camera was toggled most recently became the source for both. The HLS playlists at HA's /api/hls/<token>/master_playlist.m3u8 returned different tokens per camera and the image() snapshot endpoint returned the correct distinct frame for each — but the live HLS playback served the same content. Root cause: _try_live_connection_inner only invalidated the existing cam_entity.stream object on is_renewal=True (added in v10.3.10 for credential rotation). On a fresh user-toggle, a stale Stream object from a prior session could survive — update_source(new_url) then re-pointed it but HA's internal stream worker cache could still serve buffered segments tagged with the previous camera's source URL, producing the cross-camera bleed. Fix in __init__.py: always stop+null cam_entity.stream before pre-warm, regardless of is_renewal. Adds one cold FFmpeg start per stream-on (negligible — the pre-warm already dominates the 25–35 s activation window). User credit: hypothesis ("alte Streams nicht beendet → bei Stream-Start fixen Stream zuordnen" — old streams not stopped → assign a dedicated stream at stream start) came directly from the live observation.
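
In sketch form (FakeStream/FakeCamera stand in for HA's Stream and Camera objects; the real teardown runs inside _try_live_connection_inner):

```python
import asyncio


class FakeStream:
    def __init__(self):
        self.stopped = False

    async def stop(self):
        self.stopped = True


class FakeCamera:
    def __init__(self):
        self.stream = FakeStream()  # stale Stream from a prior session


async def prepare_live(cam, is_renewal: bool) -> None:
    # v10.4.1: stop + null unconditionally — is_renewal no longer gates the
    # teardown, so buffered segments from a previous session can never be
    # replayed under the new source URL.
    if cam.stream is not None:
        await cam.stream.stop()
        cam.stream = None
```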


v10.4.0

Fix: stream health watchdog no longer triggers REMOTE fallback when no HLS consumer is connected. Diagnosed live 2026-04-27 on Innenbereich (HOME_Eyes_Indoor, FW 9.40.25): user enabled Live Stream switch via dashboard but the Lovelace card was not actively rendering the video element (e.g. tab in background or Picture card not yet mounted), so HA's Stream object was never instantiated by the frontend. The v10.3.x watchdog read cam_entity.stream as None and treated that as "stream unhealthy" — it tore the LOCAL session down, restarted, hit None again on the next 60 s tick, and after 2 consecutive failures escalated to REMOTE. Net effect: cameras silently demoted to Cloud streaming whenever the user toggled the switch from a non-rendering context, even though LAN was perfectly reachable and the LOCAL session was up. Root cause: _is_stream_healthy() collapsed three distinct states ("no consumer yet", "healthy", "FFmpeg crashed") into a single boolean, so the absence of a consumer was indistinguishable from a real failure. Fix in switch.py: replaced with _stream_health_state() returning "no_consumer" / "healthy" / "unhealthy". The watchdog now exits cleanly when no consumer is connected — leaves the LOCAL session up so a future browser tab gets the stream instantly. Restart-and-fallback path only triggers when a Stream object exists but isn't producing output (real FFmpeg failure). Also adds a debug log line so future false-positive cases are diagnosable. No behavior change when an HLS client is actively reading. Knowledge base added: knowledge-base/ folder with ha-stream-component.md (HA Core Stream lifecycle + .available semantics), go2rtc-races.md (Lazy-Registration race + producer-drop), and local-stream-failure-modes.md (3 prioritised hypotheses for the broader class of "RTSP-OK, no frames" failures with verification tests).
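
The tri-state split can be sketched as follows (a simplification: `stream.available` stands in for "the worker is producing output" — the real check inspects HA's Stream internals):

```python
from enum import Enum


class StreamHealth(Enum):
    NO_CONSUMER = "no_consumer"
    HEALTHY = "healthy"
    UNHEALTHY = "unhealthy"


def stream_health_state(stream) -> StreamHealth:
    # No Stream object at all: the frontend never requested HLS — not a
    # failure, so the watchdog leaves the LOCAL session up and exits.
    if stream is None:
        return StreamHealth.NO_CONSUMER
    # Stream exists: distinguish "producing output" from "FFmpeg crashed".
    return StreamHealth.HEALTHY if stream.available else StreamHealth.UNHEALTHY
```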


v10.3.29

Fix: snapshot occasionally missing from motion alerts (Step 2 silently skipped). Diagnosed live 2026-04-26 from a back-to-back pair of Innenbereich movement events: 05:13:49 received Step 1 (text) but no snapshot/clip notification, while 05:20:16 (~6 min later) sent the full text + snapshot + 4.7 MB clip sequence. Root cause in fcm.py:614-635: the FCM push sometimes arrives before the Bosch cloud has populated imageUrl on the corresponding /v11/events row — eventually consistent backend. The single re-fetch attempt at +5s gave up immediately when imageUrl was still empty, dropping Step 2 with no warning (the JPG eventually appeared ~90s later via the SMB upload path, but the Signal screenshot notification was already lost). v10.3.29 replaces the single attempt with a 3-attempt retry loop at cumulative +3 / +10 / +25 s — covers warm-cloud (succeeds on attempt 1) and slow-cloud cases (attempt 2 or 3) without delaying the common path. Adds an explicit "still empty after 3 retries" debug line so future skips are diagnosable. No behavior change when imageUrl was already in the FCM payload.
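
The shape of the retry loop (helper name and delays-as-parameter are illustrative; the real code sleeps 3, then 7, then 15 s to hit the cumulative +3 / +10 / +25 s schedule):

```python
import asyncio


async def refetch_image_url(fetch_event, delays=(3, 7, 15)):
    """fetch_event re-reads the /v11/events row; returns imageUrl or None."""
    for delay in delays:
        await asyncio.sleep(delay)  # cumulative: +3 s, +10 s, +25 s
        url = await fetch_event()
        if url:
            return url  # warm cloud: attempt 1; slow cloud: attempt 2 or 3
    return None  # still empty after 3 retries — caller logs and skips Step 2
```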


v10.3.28

Card v2.10.10 — quiet expected WebRTC race-window rejects. Follow-up to v10.3.27. The card spammed console.warn on every WebRTC offer reject during the ~3 s race-window between stream-feature-flip and HA's async_refresh_providers wiring up the WebRTC provider. The retry loop succeeds within seconds and the user gets WebRTC anyway — but the visible warn-level noise during that window looked alarming ("Text und Logs sind komisch" — "the text and logs look odd"). Fix: classify the rejection. The "Camera does not support WebRTC, frontend_stream_types={HLS}" message is the expected race-window response — logged at console.debug only. Real WebRTC failures (timeout, ICE failure, transport error) still log at console.warn so they're visible during diagnosis. Net effect: clean console during normal stream activation; noisy console only when something actually breaks.


v10.3.27

Fix: WebRTC race condition (caps stale at stream-start) + always-attempt-WebRTC card path. Even with v10.3.24's watchdog, the card's camera/capabilities query at stream-start would race against HA's async_refresh_providers (which itself awaits stream_source() and runs out-of-band ~4s after supported_features flips to STREAM). Result: caps returned ['hls'] at the moment the card asked → card cached HLS for the whole session even though web_rtc would appear in caps a few seconds later. Two-part fix: (1) Coordinator _ensure_go2rtc_schemes_fresh() now does a direct refresh — re-fetches _supported_schemes on the existing WebRTCProvider instance via provider._rest_client.schemes.list() and pushes await cam.async_refresh_providers() to all streaming cameras. Cheaper and more reliable than a full config-entry reload, and bypasses the timing where reload happens but cam's cached _webrtc_provider = None from earlier doesn't get re-evaluated. Called pre-flight in try_live_connection() and from the post-stream watchdog as first-line recovery before falling back to the heavier reload. (2) Card v2.10.9 — drop the frontend_stream_types.includes('web_rtc') gate in _startLiveVideo. Always send the WebRTC offer; if HA's require_webrtc_support decorator rejects (caps haven't propagated yet, or genuine HLS-only camera), the offer fails fast in <100 ms and the existing HLS fallback kicks in unaffected. Also adds explicit pc-cleanup on WebRTC failure (was leaking a stuck-in-have-local-offer peer connection that confused diagnostic snippets). End-to-end verified live 2026-04-25 on Innenbereich Cloud: card v2.10.9 + _webrtcPc.connectionState='connected', no HLS fallback engaged.


v10.3.26

Card v2.10.8 — fix: loading-overlay flicker during stream startup. User report: "Loading erscheint 2-3 mal" ("the loading indicator appears 2-3 times") — the overlay text would change rapidly between progressive messages ("Verbindung wird aufgebaut…" → "Stream wird gestartet…" → "Encoder wird aufgewärmt…" → "HLS wird geladen…") because three independent code paths (_toggleStream 9-message timeline, _update() periodic re-render, _waitForStreamReady polling) all called _setLoadingOverlay() independently — and a snapshot-load completing mid-startup would hide the overlay via _onImageLoaded only for it to reappear on the next stream-state poll, producing a visible spinner-on-off-on flicker. Fix in _setLoadingOverlay(): when any of _streamConnecting / _waitingForStream / _startingLiveVideo is active, refuse to hide the overlay (snapshot-load callbacks no longer interfere with stream-start UX), and refuse to overwrite a connecting-timeline message with the default "Bild wird geladen…". Net effect: one continuous spinner with progressive text from the moment the user taps Stream until the video plays — no bounces, no message flickering.


v10.3.25

Fix: Bug B — Cloud (REMOTE) WebRTC cert-mismatch. The Bosch Cloud RTSPS proxy serves session URLs on hosts like proxy-NN.live.cbs.boschsecurity.com:443 but the TLS cert SAN list only covers *.residential.connect.boschsecurity.com. go2rtc's Go RTSP client (used at WebRTC offer time) refuses the mismatch with tls: failed to verify certificate, leaving the card stuck on HLS (~20 s Cloud delay). Until v10.3.24 the integration worked around this with a rtspx:// rewrite at go2rtc-pre-registration time, but HA's homeassistant/components/go2rtc:_update_stream_source overwrites that URL with whatever stream_source() returns at offer time — re-introducing the cert error. v10.3.25 ports the existing LOCAL TLS-proxy approach to REMOTE: the integration starts a per-camera in-process Python TLS terminator (verify_mode=CERT_NONE, check_hostname=False), the cloud RTSPS bytes get unwrapped on 127.0.0.1, and stream_source() returns plain rtsp://127.0.0.1:N/<HASH>/rtsp_tunnel?... for both LOCAL and REMOTE. Both FFmpeg (HLS path) and go2rtc (WebRTC path) consume without scheme tricks. The rtspx:// rewrite from v10.3.21–v10.3.24 stays as fallback for the case where TLS-proxy startup fails (graceful degradation back to v10.3.24 behavior). Sub-millisecond latency penalty (in-process socket forwarding on the same host); no extra bandwidth cost (TLS tunnel terminates locally). Verified live 2026-04-25 on Innenbereich (Gen2 Indoor, REMOTE mode): WebRTC offer now returns session_id + answer + ICE candidates without cert error.
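
The core of such a terminator can be sketched in ~30 lines of stdlib Python — heavily simplified (no shutdown handling, no per-camera bookkeeping, function name invented), but it shows the verify_mode=CERT_NONE unwrap-and-forward idea:

```python
import socket
import ssl
import threading


def start_tls_terminator(upstream_host: str, upstream_port: int = 443) -> int:
    """Accept plain TCP on 127.0.0.1, open an *unverified* TLS connection to
    the upstream proxy, and pump bytes both ways. Returns the local port."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # Bosch cert SAN doesn't match the host
    ctx.verify_mode = ssl.CERT_NONE

    server = socket.create_server(("127.0.0.1", 0))
    port = server.getsockname()[1]

    def pump(src, dst):
        try:
            while data := src.recv(65536):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def accept_loop():
        while True:
            try:
                client, _ = server.accept()
            except OSError:
                return  # server socket closed
            upstream = ctx.wrap_socket(
                socket.create_connection((upstream_host, upstream_port)),
                server_hostname=upstream_host,
            )
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return port
```

With this in place, stream_source() can hand both FFmpeg and go2rtc a plain rtsp://127.0.0.1:PORT/... URL and neither ever sees the mismatched certificate.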


v10.3.24

Fix: WebRTC capability auto-recover from HA Core's stale-schemes bug. HA's bundled go2rtc integration runs WebRTCProvider.initialize() exactly once at config-entry-setup, caching _supported_schemes from the go2rtc REST API. The bundled go2rtc binary is occasionally respawned by HA's own watchdog (go2rtc/server.py) when its API stops responding — the Python provider instance keeps running, but if the initial initialize() call ever raced and returned an empty set, the cached schemes stay empty forever. Symptom: frontend_stream_types: ['hls'] only, no WebRTC, even though the go2rtc binary is healthy and reports rtsp/rtsps/rtspx in /api/schemes. Manifests as silently degraded performance — the card falls back to HLS (~8-10 s LAN, ~20 s Cloud) instead of using WebRTC (~2-3 s). Reproduced live 2026-04-25 on Innenbereich (Gen2): attempt 1: ['hls'] → reload go2rtc entry → attempt 2: ['web_rtc', 'hls']. Recovery: 4 s after every successful stream activation, the integration probes camera_capabilities.frontend_stream_types. If STREAM is in supported_features but WEB_RTC is missing, the bundled go2rtc config entry is reloaded — which re-runs provider.initialize() and refreshes the schemes set. Throttled to one reload per hour per integration entry to avoid loops if go2rtc is actually broken. No effect on already-working installations (the check returns early when WebRTC is already advertised). Upstream HA Core issue not yet filed; reload-after-empty-init is undocumented behavior we're depending on but it works.
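
The throttled recover-check, sketched with injected callables (probe_caps / reload_go2rtc stand in for the camera_capabilities query and the go2rtc config-entry reload):

```python
import time


class WebRTCSchemeRecovery:
    """Sketch of the reload-on-missing-WebRTC watchdog step."""

    THROTTLE_S = 3600  # at most one reload per hour per integration entry

    def __init__(self, probe_caps, reload_go2rtc):
        self._probe = probe_caps
        self._reload = reload_go2rtc
        self._last_reload = None

    def check_after_stream_start(self) -> bool:
        if "web_rtc" in self._probe():
            return False  # WebRTC already advertised — nothing to do
        now = time.monotonic()
        if self._last_reload is not None and now - self._last_reload < self.THROTTLE_S:
            return False  # throttled — go2rtc may be genuinely broken
        self._last_reload = now
        self._reload()  # re-runs provider.initialize(), refreshing schemes
        return True
```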


v10.3.23

Three changes. 1) Fix: Gen1 Outdoor independent front-light / wallwasher control. The Bosch Cloud lighting_override endpoint rejects any request that includes frontIlluminatorIntensity while frontLightOn is false, with HTTP 400 "frontIlluminatorIntensity must not be set if frontLightOn is false". Our integration always sent the intensity field, so toggling front-light off while wallwasher on was silently rejected — UI showed front=on indefinitely until the user also turned off the wallwasher. Diagnosed live on Gen1 Outdoor (Eyes Außenkamera) on 2026-04-25 by capturing the API response body. Fix: omit frontIlluminatorIntensity from the PUT body when frontLightOn is false. Both directions now work independently — front-on/wall-off, front-off/wall-on, both-on, both-off all pass. Verified via 30 s observation: after front OFF: front=off wall=on (was front=on wall=on before). No behavior change on Gen2 (different endpoint structure). 2) experimental_go2rtc_rtspx flag removed — rtspx:// is now the unconditional default for Bosch Cloud RTSPS routing through go2rtc. The flag was Beta in v10.3.21, default ON in v10.3.22, and after a week of testing on Gen2 Outdoor II + Gen1 Outdoor with no regressions, it graduates to permanent behavior. The option no longer appears in the integration UI. The rewrite (rtsps://…boschsecurity.com/… → rtspx://…) is required to skip TLS verification for the Bosch cert/hostname mismatch — without it go2rtc rejects the producer with tls: failed to verify certificate. Existing config entries with the option set are silently ignored on load. 3) README cleanup: stale OAuth migration banner removed (now ~17 months old since v8.0.5; users on the legacy client see the auto-Reconfigure flow), added an Architecture section with two Mermaid diagrams (component overview + LOCAL stream activation sequence + REMOTE differences) so new users can grasp the LOCAL/REMOTE/HLS/WebRTC/TLS-proxy/go2rtc topology without reading the source.
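
The request-body rule from change 1) in sketch form (helper name invented; field names as captured from the API error):

```python
def build_lighting_body(front_on, intensity=None):
    """Bosch 400s if frontIlluminatorIntensity is sent while frontLightOn
    is false — so the intensity field is omitted in that case."""
    body = {"frontLightOn": front_on}
    if front_on and intensity is not None:
        body["frontIlluminatorIntensity"] = intensity
    return body
```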


v10.3.22

Four bundled changes. 1) FCM push listener hardening — the firebase-messaging library defaults to shutting its listener down after 3 sequential connection errors (e.g. a brief WAN blip) and does not self-restart, leaving the integration silently in "subscribed but no pushes arriving" state until the next HA restart. v10.3.22 passes FcmPushClientConfig(abort_on_sequential_error_count=None) so the library keeps reconnecting, and adds a watchdog in the coordinator tick that calls FcmPushClient.is_started() — if the listener terminates for any reason, sensor.bosch_camera_event_detection flips from fcm_push to polling, making silent death visible on the dashboard. Guarded by ImportError for older firebase-messaging installs. Ref: sdb9696/firebase-messaging#33. 2) experimental_go2rtc_rtspx now ON by default (was Beta-OFF in v10.3.21). After a week of testing on Gen2 Eyes Outdoor II with no regressions, the Cloud-RTSPS → go2rtc rtspx:// path becomes the new default. Option stays available as an opt-out escape hatch; label + description updated to drop Beta wording. 3) Card v2.10.7 — loading overlay sub-hint. The card now shows a secondary hint line under the progressive status message during stream startup: "Cloud-Stream — ca. 30–45 s bis erstes Bild, danach stabil" for REMOTE, "LAN-Stream — ca. 25–35 s bis erstes Bild" for LOCAL. Addresses user feedback that the ~30–45 s HLS initial-buffer-fill phase on Cloud streams feels broken without context — the hint sets realistic expectations. The actual stream startup time is unchanged (physics of HLS segment generation + Bosch cloud proxy first-frame latency). 4) README: Step 3 rewritten to reflect that the Lovelace resource is auto-registered since v10.3.19 — no manual "Add resource" step needed. Added a one-line note that the old www/bosch-camera-card.js file in /config/www/ is intentionally left in place on upgrade (the integration doesn't modify user files) and can be deleted manually if desired.
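
The ImportError-guarded config from change 1) can be sketched like so (kwarg name as described above; older library versions simply fall back to defaults):

```python
def build_fcm_config():
    """Return an FcmPushClientConfig that never gives up reconnecting,
    or None when the installed firebase-messaging is too old."""
    try:
        from firebase_messaging import FcmPushClientConfig
    except ImportError:
        return None  # older firebase-messaging — library defaults apply
    # None disables the give-up-after-3-sequential-errors behavior, so the
    # listener keeps reconnecting across brief WAN blips.
    return FcmPushClientConfig(abort_on_sequential_error_count=None)
```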


v10.3.21

Beta: route Bosch Cloud streams through go2rtc via the rtspx:// scheme. New Options toggle "Beta: lower cloud stream lag (go2rtc rtspx://)" (default OFF). Scope: only affects WebRTC and snapshot playback paths — HA's HLS path continues via FFmpeg-direct and is unaffected. Root cause: the Bosch cloud RTSPS proxy serves session URLs on hosts like proxy-NN.live.cbs.boschsecurity.com but its certificate only covers *.residential.connect.boschsecurity.com. When the integration registers the stream in go2rtc with rtsps://, go2rtc's Go RTSP client rejects the cert mismatch (tls: failed to verify certificate) — the registration succeeds but any WebRTC/snapshot consumer request 500s and HA silently falls back to built-in behavior. With this flag ON, the integration registers with rtspx:// (go2rtc's documented scheme for skipping TLS verification, originally added for Ubiquiti UniFi), and the stream name is aligned with camera.entity_id so HA's bundled go2rtc provider (homeassistant/components/go2rtc/) picks up our pre-registration on WebRTC/snapshot requests. LOCAL (LAN) streams are unaffected — they go through the integration's own TLS proxy and use plain rtsp://127.0.0.1:…. Additional fix in the same release: _register_go2rtc_stream now accepts HTTP 400 with a yaml: body as soft-success (bundled go2rtc returns that when its in-memory stream registration succeeds but YAML persistence to /config/go2rtc.yaml fails — verified via GET /api/streams?src=<name>). Sources: go2rtc rtspx:// — RTSP README, go2rtc pkg/tcp/dial.go — InsecureSkipVerify for rtspx, go2rtc #343 — insecure HTTPS client request, go2rtc #1386 — 400 on successful POST /api/streams.
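
The soft-success rule for _register_go2rtc_stream, sketched as a standalone predicate (helper name invented):

```python
def go2rtc_register_ok(status: int, body: str) -> bool:
    """Treat HTTP 400 with a yaml: error body as soft-success."""
    if status in (200, 201):
        return True
    # Bundled go2rtc: in-memory stream registration succeeded, but
    # persisting /config/go2rtc.yaml failed -> HTTP 400 with "yaml:" body.
    return status == 400 and body.lstrip().startswith("yaml:")
```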


v10.3.20

CI compliance: Add .github/workflows/validate.yml (HACS action + Hassfest) running on push/PR/daily. manifest.json cleanup — drop invalid homeassistant key (belongs in hacs.json), add http to dependencies (used but undeclared), sort keys per Hassfest rule (domain, name, then alphabetical). Remove bare URLs from data_description fields in strings.json + translations/en.json (Hassfest disallows URLs there). No user-visible changes.


Earlier history

For v10.3.19 and below see GitHub Releases.