
[AutoPR- Security] Patch rabbitmq-server for CVE-2026-7790, CVE-2026-43968 [MEDIUM] #17251

Open
azurelinux-security wants to merge 1 commit into microsoft:3.0-dev from azurelinux-security:azure-autosec/rabbitmq-server/3.0/1118726

Conversation

@azurelinux-security
Contributor

@azurelinux-security azurelinux-security commented May 15, 2026

Auto Patch rabbitmq-server for CVE-2026-7790, CVE-2026-43968.

Autosec pipeline run -> https://dev.azure.com/mariner-org/mariner/_build/results?buildId=1118726&view=results

Merge Checklist

All boxes should be checked before merging the PR (just tick any boxes which don't apply to this PR)

  • The toolchain has been rebuilt successfully (or no changes were made to it)
  • The toolchain/worker package manifests are up-to-date
  • Any updated packages successfully build (or no packages were changed)
  • Packages depending on static components modified in this PR (Golang, *-static subpackages, etc.) have had their Release tag incremented.
  • Package tests (%check section) have been verified with RUN_CHECK=y for existing SPEC files, or added to new SPEC files
  • All package sources are available
  • cgmanifest files are up-to-date and sorted (./cgmanifest.json, ./toolkit/scripts/toolchain/cgmanifest.json, .github/workflows/cgmanifest.json)
  • LICENSE-MAP files are up-to-date (./LICENSES-AND-NOTICES/SPECS/data/licenses.json, ./LICENSES-AND-NOTICES/SPECS/LICENSES-MAP.md, ./LICENSES-AND-NOTICES/SPECS/LICENSE-EXCEPTIONS.PHOTON)
  • All source files have up-to-date hashes in the *.signatures.json files
  • sudo make go-tidy-all and sudo make go-test-coverage pass
  • Documentation has been updated to match any changes to the build system
  • Ready to merge

Summary

What does the PR accomplish, why was it needed?

Change Log
Does this affect the toolchain?

YES/NO

Associated issues
  • N/A
Links to CVEs
Test Methodology

@microsoft-github-policy-service microsoft-github-policy-service Bot added Packaging 3.0-dev PRs Destined for AzureLinux 3.0 labels May 15, 2026
@Kanishk-Bansal Kanishk-Bansal marked this pull request as ready for review May 15, 2026 07:41
@Kanishk-Bansal Kanishk-Bansal requested a review from a team as a code owner May 15, 2026 07:41
@azurelinux-security
Contributor Author

🔒 CVE Patch Review: CVE-2026-43968, CVE-2026-7790

PR #17251 — [AutoPR- Security] Patch rabbitmq-server for CVE-2026-7790, CVE-2026-43968 [MEDIUM]
Package: rabbitmq-server | Branch: 3.0-dev


Spec File Validation

  • Release bump: Release bumped 3 → 4
  • Patch entries: CVE-2026-43968.patch and CVE-2026-7790.patch added (covering CVE-2026-43968 and CVE-2026-7790)
  • Patch application: %autosetup/%autopatch found in full spec — patches applied automatically
  • Changelog: Changelog entry looks good
  • Signatures: No source tarball changes — signatures N/A
  • Manifests: Not a toolchain PR — manifests N/A
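The checks above correspond to the usual shape of a patch-only CVE fix in an Azure Linux spec. As a hypothetical illustration (the Patch0/Patch1 numbering and the exact changelog wording are assumptions, not taken from the real rabbitmq-server.spec), the relevant fragments would look roughly like:

```spec
Release:        4%{?dist}

# Patch entries; %autosetup -p1 in %prep applies them in listed order.
Patch0:         CVE-2026-43968.patch
Patch1:         CVE-2026-7790.patch

%prep
%autosetup -p1

%changelog
* Thu May 15 2026 Azure Linux Security Servicing Account <azurelinux-security@microsoft.com> - 3.13.7-4
- Patch CVE-2026-43968, CVE-2026-7790
```

Because %autosetup applies all PatchN entries automatically, no explicit %patch lines are needed, which is what the "Patch application" check verifies.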

Build Verification

  • Build status: ❌ FAILED
  • Artifact downloaded:
  • CVE applied during build:
  • Errors (61):
    • L168: time="2026-05-15T07:45:56Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L196: time="2026-05-15T07:45:56Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L325: time="2026-05-15T07:46:05Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L345: time="2026-05-15T07:46:05Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L371: time="2026-05-15T07:46:05Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L528: time="2026-05-15T07:46:08Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L554: time="2026-05-15T07:46:09Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L593: time="2026-05-15T07:46:09Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L619: time="2026-05-15T07:46:09Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • L665: time="2026-05-15T07:46:11Z" level=debug msg="\t\t\techo \"Error: No Makefile to build dependency $dep.\" >&2; \\"
    • … and 51 more
  • Warnings (26):
    • L251: time="2026-05-15T07:45:59Z" level=debug msg="src/recon_alloc.erl:702:7: Warning: matching on the float 0.0 will no longer also match -0.0 in OTP 27. If you specifically intend to match 0.0 alone, write +0.0 instead."
    • L255: time="2026-05-15T07:45:59Z" level=debug msg="src/recon_alloc.erl:702:11: Warning: matching on the float 0.0 will no longer also match -0.0 in OTP 27. If you specifically intend to match 0.0 alone, write +0.0 instead."
    • L483: time="2026-05-15T07:46:08Z" level=debug msg=" warning: use Bitwise is deprecated. import Bitwise instead"
    • L490: time="2026-05-15T07:46:08Z" level=debug msg=" warning: use Bitwise is deprecated. import Bitwise instead"
    • L954: time="2026-05-15T07:46:21Z" level=debug msg="src/redbug_dtop.erl:225:9: Warning: matching on the float 0.0 will no longer also match -0.0 in OTP 27. If you specifically intend to match 0.0 alone, write +0.0 instead."
    • L1067: time="2026-05-15T07:46:22Z" level=debug msg="src/redbug_targ.erl:545:17: Warning: ct_slave:start/2 is deprecated and will be removed in OTP 29; use ?CT_PEER(), or the 'peer' module instead"
    • L1071: time="2026-05-15T07:46:22Z" level=debug msg="src/redbug_targ.erl:553:17: Warning: ct_slave:start/2 is deprecated and will be removed in OTP 29; use ?CT_PEER(), or the 'peer' module instead"
    • L1075: time="2026-05-15T07:46:22Z" level=debug msg="src/redbug_targ.erl:561:39: Warning: ct_slave:stop/1 is deprecated and will be removed in OTP 29; use ?CT_PEER(), or the 'peer' module instead"
    • L1999: time="2026-05-15T07:46:56Z" level=debug msg="../../erlang.mk:4395: warning: overriding recipe for target '/usr/src/azl/BUILD/rabbitmq-server-3.13.7/deps/oauth2_client'"
    • L2000: time="2026-05-15T07:46:56Z" level=debug msg="../../erlang.mk:4395: warning: ignoring old recipe for target '/usr/src/azl/BUILD/rabbitmq-server-3.13.7/deps/oauth2_client'"
    • … and 16 more

🤖 AI Build Log Analysis

  • Risk: low
  • Summary: rabbitmq-server 3.13.7 was rebuilt successfully with the CVE-2026-43968 and CVE-2026-7790 patches applied cleanly. The build completed without compilation or linker errors, produced an RPM, and reported only a benign hostname canonicalization warning. No tests were run.
  • AI-detected warnings:
    • warning: Could not canonicalize hostname: 5f820c7fc000000

🧪 Test Log Analysis

No test log found (package may not have a %check section).


Patch Analysis

  • Match type: backport
  • Risk assessment: low
  • Summary:
    • CVE-2026-43968 (cow_sse.erl): The PR patch cleanly backports the upstream fix to the vendored cowlib in rabbitmq-server, updating cow_sse.erl to reject CR/CRLF/LF in id and event fields and to split data lines on all newline variants, plus adds the same tests under -ifdef(TEST). The code changes match upstream functionally; differences are limited to file path and patch metadata.
    • CVE-2026-7790 (cow_http_te.erl): The PR applies the same upstream change to cowlib’s cow_http_te.erl (vendored under deps/cowlib in rabbitmq-server), introducing a digit counter to limit chunk-size parsing to 16 hex digits, updating all callers/signatures accordingly, resetting the counter after skipping chunk extensions, and adding the same test cases. Aside from path and patch metadata differences, the hunks are identical to upstream.
Detailed analysis

Comparison shows the PR applies the same three functional changes as upstream: (1) event_id/1 now checks for and rejects any of ["\r\n", "\r", "\n"] via binary:match(iolist_to_binary(ID), [<<"\r\n">>, <<"\r">>, <<"\n">>]); (2) event_name/1 applies the same newline validation to Name; (3) prefix_lines/2 now splits on all newline variants using binary:split(..., [<<"\r\n">>, <<"\r">>, <<"\n">>], [global]).

The PR also adds the same tests: extra cases in event_test/0 for CRLF/CR/LF in data; event_error_test/0 to assert exceptions on invalid id/event values containing newline sequences; and identity_test_/0 with helper functions do_identity_build_parse/1 and do_identity_result/1 to verify build/parse identity across a variety of events.

The only differences are contextual: the file path is deps/cowlib/src/cow_sse.erl instead of src/cow_sse.erl (vendored dependency), and the patch headers carry a different commit ID and add Signed-off-by and Upstream-reference lines. No functional hunks are missing versus upstream.

This change reduces the risk of newline injection in SSE event fields and normalizes data line handling per the spec; potential regressions are limited to previously non-compliant inputs containing CR/CRLF in id/event now being rejected, which is intended. Overall risk is low.
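To make the SSE behavior concrete, here is a small Python model of the patched logic — a sketch only, not the cowlib code: field names and function names are illustrative, but the validation and splitting rules (reject any CR/CRLF/LF in id/event, split data on all three newline variants) mirror the Erlang hunks above.

```python
import re

# All three newline variants the patch now handles; order matters so
# that "\r\n" is consumed as one separator, not two.
NEWLINES = re.compile(r"\r\n|\r|\n")

def validate_field(value: str) -> str:
    # Models `nomatch = binary:match(..., [<<"\r\n">>, <<"\r">>, <<"\n">>])`:
    # any newline sequence in an id/event field is an error.
    if NEWLINES.search(value):
        raise ValueError(f"newline in SSE field: {value!r}")
    return value

def prefix_lines(data: str, prefix: str) -> str:
    # Models prefix_lines/2: split on every newline variant, then emit
    # one "data: <line>\n" per resulting line.
    return "".join(f"{prefix}: {line}\n" for line in NEWLINES.split(data))

def build_event(event=None, id=None, data=None) -> str:
    out = []
    if id is not None:
        out.append(f"id: {validate_field(id)}\n")
    if event is not None:
        out.append(f"event: {validate_field(event)}\n")
    if data is not None:
        out.append(prefix_lines(data, "data"))
    out.append("\n")  # blank line terminates the event
    return "".join(out)
```

Under this model, data containing "\r\n" is serialized as two data: lines (spec-compliant), while a CR in an id raises, matching the new event_error_test/0 expectations.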

Core change: Both patches modify stream_chunked to call chunked_len(Data, Streamed, Acc, 0, 0) and rewrite all chunked_len clauses to include a new parameter D that counts hex digits, guarded by `when D < 16`, preventing overlong chunk-size fields. They also update the chunk-extensions clause to accept the extra parameter and adjust skip_chunk_ext so that upon encountering "\r" or end-of-input it resumes chunked_len with D reset to 0. Final-chunk and normal-chunk clauses are likewise updated to accept and pass through the extra parameter (using _ where appropriate).

Tests: Both add the same test asserting behavior with a maximal 16-hex-digit size ("FFFFFFFFFFFFFFFF\r\n") and that a 17-digit size ("10000000000000000\r\n") triggers an error.

Differences: The PR applies the change to deps/cowlib/src/cow_http_te.erl (vendored cowlib) and carries different file indices and a packaging-style patch header, but the code hunks are line-for-line identical to upstream. No functional hunks are missing.

Risk: Low; the change enforces a strict 16-digit limit per RFC expectations and should only affect pathological or malicious inputs. Resetting the digit counter after skipping extensions preserves correct behavior. Given the identical logic to upstream and the inclusion of tests, the risk of regression is minimal.
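A simplified Python model of the digit-capped chunk-size parser may help; this is an illustration of the technique, not the Erlang code: chunk-extension skipping is reduced to a break (so no counter reset is needed here), and the function parses only the size token, not the surrounding stream state.

```python
MAX_HEX_DIGITS = 16  # mirrors the `when D < 16` guards added by the patch

def parse_chunk_size(line: bytes) -> int:
    """Parse a hex chunk-size token (everything before CRLF), rejecting
    sizes longer than 16 hex digits, as the patched chunked_len does."""
    size = 0
    digits = 0
    for byte in line:
        c = chr(byte)
        if c in "0123456789abcdefABCDEF":
            if digits >= MAX_HEX_DIGITS:
                raise ValueError("chunk size exceeds 16 hex digits")
            size = size * 16 + int(c, 16)
            digits += 1
        elif c in " \t;":
            break  # chunk extensions begin; the size token is complete
        else:
            raise ValueError(f"invalid chunk-size byte: {c!r}")
    if digits == 0:
        raise ValueError("empty chunk size")
    return size
```

With this cap, "FFFFFFFFFFFFFFFF" (the maximal 16-digit value) still parses, while a 17-digit "10000000000000000" raises — the same boundary the added stream_chunked tests assert.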

Raw diff (upstream vs PR)
--- upstream
+++ pr
@@ -1,105 +1,117 @@
-From 6165fc40efa159ba1cceee7e7981e790acba5d9c Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Lo=C3=AFc=20Hoguin?= <essen@ninenines.eu>
-Date: Mon, 11 May 2026 12:15:58 +0200
-Subject: [PATCH] Make building SSE events more closely match the spec
-
-Also add many more tests.
----
- src/cow_sse.erl | 64 ++++++++++++++++++++++++++++++++++++++++++++++---
- 1 file changed, 61 insertions(+), 3 deletions(-)
-
-diff --git a/src/cow_sse.erl b/src/cow_sse.erl
-index 81ceac2..0790413 100644
---- a/src/cow_sse.erl
-+++ b/src/cow_sse.erl
-@@ -301,7 +301,8 @@ event_comment(_) ->
- 	[].
- 
- event_id(#{id := ID}) ->
--	nomatch = binary:match(iolist_to_binary(ID), <<"\n">>),
-+	nomatch = binary:match(iolist_to_binary(ID),
-+		[<<"\r\n">>, <<"\r">>, <<"\n">>]),
- 	[<<"id: ">>, ID, $\n];
- event_id(_) ->
- 	[].
-@@ -311,7 +312,8 @@ event_name(#{event := Name0}) ->
- 		is_atom(Name0) -> atom_to_binary(Name0, utf8);
- 		true -> iolist_to_binary(Name0)
- 	end,
--	nomatch = binary:match(Name, <<"\n">>),
-+	nomatch = binary:match(Name,
-+		[<<"\r\n">>, <<"\r">>, <<"\n">>]),
- 	[<<"event: ">>, Name, $\n];
- event_name(_) ->
- 	[].
-@@ -327,7 +329,8 @@ event_retry(_) ->
- 	[].
- 
- prefix_lines(IoData, Prefix) ->
--	Lines = binary:split(iolist_to_binary(IoData), <<"\n">>, [global]),
-+	Lines = binary:split(iolist_to_binary(IoData),
-+		[<<"\r\n">>, <<"\r">>, <<"\n">>], [global]),
- 	[[Prefix, <<": ">>, Line, $\n] || Line <- Lines].
- 
- -ifdef(TEST).
-@@ -345,5 +348,60 @@ event_test() ->
- 	_ = event(#{retry => 5000}),
- 	_ = event(#{event => "test", data => "test"}),
- 	_ = event(#{id => "test", event => "test", data => "test"}),
-+	_ = event(#{data => "test\r\ntest"}),
-+	_ = event(#{data => "test\rtest\r"}),
-+	_ = event(#{data => "test\ntest"}),
- 	ok.
+diff --git a/SPECS/rabbitmq-server/CVE-2026-43968.patch b/SPECS/rabbitmq-server/CVE-2026-43968.patch
+new file mode 100644
+index 00000000000..810523f10cb
+--- /dev/null
++++ b/SPECS/rabbitmq-server/CVE-2026-43968.patch
+@@ -0,0 +1,111 @@
++From df89c6e2e6924b0820467e61bea252486e9baacd Mon Sep 17 00:00:00 2001
++From: =?UTF-8?q?Lo=C3=AFc=20Hoguin?= <essen@ninenines.eu>
++Date: Mon, 11 May 2026 12:15:58 +0200
++Subject: [PATCH] Make building SSE events more closely match the spec
 +
-+event_error_test() ->
-+	{'EXIT', _} = (catch event(#{id => "test\n"})),
-+	{'EXIT', _} = (catch event(#{id => "test\r"})),
-+	{'EXIT', _} = (catch event(#{id => "test\r\n"})),
-+	{'EXIT', _} = (catch event(#{event => "test\n"})),
-+	{'EXIT', _} = (catch event(#{event => "test\r"})),
-+	{'EXIT', _} = (catch event(#{event => "test\r\n"})),
-+	ok.
++Also add many more tests.
 +
-+identity_test_() ->
-+	Tests = [
-+		#{data => <<"hello">>},
-+		#{event => <<"update">>, data => <<"hello">>},
-+		#{id => <<"42">>, data => <<"hello">>},
-+		#{data => <<"a\nb">>},
-+		#{data => <<"multi\nline\ndata">>},
-+		#{event => <<"update">>, data => <<"hello">>},
-+		#{id => <<"abc">>, data => <<"x">>},
-+		#{comment => <<"c1">>, data => <<"d1">>, event => <<"e1">>, id => <<"i1">>},
-+		#{data => <<>>},
-+		#{data => <<"data with trailing newline\n">>},
-+		#{data => <<"\n">>},
-+		#{data => <<"\n\n">>},
-+		#{data => <<"">>, id => <<"1">>},
-+		#{data => <<"z">>},
-+		#{id => <<"17">>},
-+		#{data => << <<$a>> || _ <- lists:seq(1,200) >>},
-+		#{data => <<"こんにちは世界">>},
-+		#{retry => 30000, data => <<"reconnect">>}
-+	],
-+	[{lists:flatten(io_lib:format("~0p", [V])),
-+		fun() -> true = do_identity_result(V) =:= do_identity_build_parse(V) end}
-+			|| V <- Tests].
++Signed-off-by: Azure Linux Security Servicing Account <azurelinux-security@microsoft.com>
++Upstream-reference: https://github.com/ninenines/cowlib/commit/6165fc40efa159ba1cceee7e7981e790acba5d9c.patch
++---
++ deps/cowlib/src/cow_sse.erl | 64 +++++++++++++++++++++++++++++++++++--
++ 1 file changed, 61 insertions(+), 3 deletions(-)
 +
-+do_identity_build_parse(Event) ->
-+	{event, Parsed, _} = parse(iolist_to_binary(event(Event)), init()),
-+	case Parsed of
-+		#{data := Data} -> Parsed#{data => iolist_to_binary(Data)};
-+		_ -> Parsed
-+	end.
++diff --git a/deps/cowlib/src/cow_sse.erl b/deps/cowlib/src/cow_sse.erl
++index 6e7081f..3503089 100644
++--- a/deps/cowlib/src/cow_sse.erl
+++++ b/deps/cowlib/src/cow_sse.erl
++@@ -301,7 +301,8 @@ event_comment(_) ->
++ 	[].
++ 
++ event_id(#{id := ID}) ->
++-	nomatch = binary:match(iolist_to_binary(ID), <<"\n">>),
+++	nomatch = binary:match(iolist_to_binary(ID),
+++		[<<"\r\n">>, <<"\r">>, <<"\n">>]),
++ 	[<<"id: ">>, ID, $\n];
++ event_id(_) ->
++ 	[].
++@@ -311,7 +312,8 @@ event_name(#{event := Name0}) ->
++ 		is_atom(Name0) -> atom_to_binary(Name0, utf8);
++ 		true -> iolist_to_binary(Name0)
++ 	end,
++-	nomatch = binary:match(Name, <<"\n">>),
+++	nomatch = binary:match(Name,
+++		[<<"\r\n">>, <<"\r">>, <<"\n">>]),
++ 	[<<"event: ">>, Name, $\n];
++ event_name(_) ->
++ 	[].
++@@ -327,7 +329,8 @@ event_retry(_) ->
++ 	[].
++ 
++ prefix_lines(IoData, Prefix) ->
++-	Lines = binary:split(iolist_to_binary(IoData), <<"\n">>, [global]),
+++	Lines = binary:split(iolist_to_binary(IoData),
+++		[<<"\r\n">>, <<"\r">>, <<"\n">>], [global]),
++ 	[[Prefix, <<": ">>, Line, $\n] || Line <- Lines].
++ 
++ -ifdef(TEST).
++@@ -345,5 +348,60 @@ event_test() ->
++ 	_ = event(#{retry => 5000}),
++ 	_ = event(#{event => "test", data => "test"}),
++ 	_ = event(#{id => "test", event => "test", data => "test"}),
+++	_ = event(#{data => "test\r\ntest"}),
+++	_ = event(#{data => "test\rtest\r"}),
+++	_ = event(#{data => "test\ntest"}),
++ 	ok.
+++
+++event_error_test() ->
+++	{'EXIT', _} = (catch event(#{id => "test\n"})),
+++	{'EXIT', _} = (catch event(#{id => "test\r"})),
+++	{'EXIT', _} = (catch event(#{id => "test\r\n"})),
+++	{'EXIT', _} = (catch event(#{event => "test\n"})),
+++	{'EXIT', _} = (catch event(#{event => "test\r"})),
+++	{'EXIT', _} = (catch event(#{event => "test\r\n"})),
+++	ok.
+++
+++identity_test_() ->
+++	Tests = [
+++		#{data => <<"hello">>},
+++		#{event => <<"update">>, data => <<"hello">>},
+++		#{id => <<"42">>, data => <<"hello">>},
+++		#{data => <<"a\nb">>},
+++		#{data => <<"multi\nline\ndata">>},
+++		#{event => <<"update">>, data => <<"hello">>},
+++		#{id => <<"abc">>, data => <<"x">>},
+++		#{comment => <<"c1">>, data => <<"d1">>, event => <<"e1">>, id => <<"i1">>},
+++		#{data => <<>>},
+++		#{data => <<"data with trailing newline\n">>},
+++		#{data => <<"\n">>},
+++		#{data => <<"\n\n">>},
+++		#{data => <<"">>, id => <<"1">>},
+++		#{data => <<"z">>},
+++		#{id => <<"17">>},
+++		#{data => << <<$a>> || _ <- lists:seq(1,200) >>},
+++		#{data => <<"こんにちは世界">>},
+++		#{retry => 30000, data => <<"reconnect">>}
+++	],
+++	[{lists:flatten(io_lib:format("~0p", [V])),
+++		fun() -> true = do_identity_result(V) =:= do_identity_build_parse(V) end}
+++			|| V <- Tests].
+++
+++do_identity_build_parse(Event) ->
+++	{event, Parsed, _} = parse(iolist_to_binary(event(Event)), init()),
+++	case Parsed of
+++		#{data := Data} -> Parsed#{data => iolist_to_binary(Data)};
+++		_ -> Parsed
+++	end.
+++
+++do_identity_result(E=#{id := ID}) when map_size(E) =:= 1 ->
+++	#{
+++		last_event_id => ID
+++	};
+++do_identity_result(Event) ->
+++	#{
+++		event_type => maps:get(event, Event, <<"message">>),
+++		data => maps:get(data, Event, <<>>),
+++		last_event_id => maps:get(id, Event, <<>>)
+++	}.
++ -endif.
++-- 
++2.45.4
 +
-+do_identity_result(E=#{id := ID}) when map_size(E) =:= 1 ->
-+	#{
-+		last_event_id => ID
-+	};
-+do_identity_result(Event) ->
-+	#{
-+		event_type => maps:get(event, Event, <<"message">>),
-+		data => maps:get(data, Event, <<>>),
-+		last_event_id => maps:get(id, Event, <<>>)
-+	}.
- -endif.

--- upstream
+++ pr
@@ -1,131 +1,142 @@
-From a4b8039ce8c93ab00867ef6b7e888822c09f4369 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Lo=C3=AFc=20Hoguin?= <essen@ninenines.eu>
-Date: Mon, 11 May 2026 10:57:28 +0200
-Subject: [PATCH] Limit length of transfer-encoding: chunked chunks
-
----
- src/cow_http_te.erl | 78 +++++++++++++++++++++++----------------------
- 1 file changed, 40 insertions(+), 38 deletions(-)
-
-diff --git a/src/cow_http_te.erl b/src/cow_http_te.erl
-index 9b20ab8..ce4f7ff 100644
---- a/src/cow_http_te.erl
-+++ b/src/cow_http_te.erl
-@@ -138,7 +138,7 @@ stream_chunked(Data, State) ->
- 
- %% New chunk.
- stream_chunked(Data = << C, _/bits >>, {0, Streamed}, Acc) when C =/= $\r ->
--	case chunked_len(Data, Streamed, Acc, 0) of
-+	case chunked_len(Data, Streamed, Acc, 0, 0) of
- 		{next, Rest, State, Acc2} ->
- 			stream_chunked(Rest, State, Acc2);
- 		{more, State, Acc2} ->
-@@ -174,54 +174,54 @@ stream_chunked(Data, {Rem, Streamed}, Acc) when Rem > 2 ->
- 			{more, << Acc/binary, Data/binary >>, Rem2, {Rem2, Streamed + DataSize}}
- 	end.
- 
--chunked_len(<< $0, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16);
--chunked_len(<< $1, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 1);
--chunked_len(<< $2, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 2);
--chunked_len(<< $3, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 3);
--chunked_len(<< $4, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 4);
--chunked_len(<< $5, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 5);
--chunked_len(<< $6, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 6);
--chunked_len(<< $7, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 7);
--chunked_len(<< $8, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 8);
--chunked_len(<< $9, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 9);
--chunked_len(<< $A, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 10);
--chunked_len(<< $B, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 11);
--chunked_len(<< $C, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 12);
--chunked_len(<< $D, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 13);
--chunked_len(<< $E, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 14);
--chunked_len(<< $F, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 15);
--chunked_len(<< $a, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 10);
--chunked_len(<< $b, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 11);
--chunked_len(<< $c, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 12);
--chunked_len(<< $d, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 13);
--chunked_len(<< $e, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 14);
--chunked_len(<< $f, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 15);
-+chunked_len(<< $0, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16, D + 1);
-+chunked_len(<< $1, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 1, D + 1);
-+chunked_len(<< $2, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 2, D + 1);
-+chunked_len(<< $3, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 3, D + 1);
-+chunked_len(<< $4, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 4, D + 1);
-+chunked_len(<< $5, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 5, D + 1);
-+chunked_len(<< $6, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 6, D + 1);
-+chunked_len(<< $7, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 7, D + 1);
-+chunked_len(<< $8, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 8, D + 1);
-+chunked_len(<< $9, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 9, D + 1);
-+chunked_len(<< $A, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 10, D + 1);
-+chunked_len(<< $B, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 11, D + 1);
-+chunked_len(<< $C, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 12, D + 1);
-+chunked_len(<< $D, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 13, D + 1);
-+chunked_len(<< $E, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 14, D + 1);
-+chunked_len(<< $F, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 15, D + 1);
-+chunked_len(<< $a, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 10, D + 1);
-+chunked_len(<< $b, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 11, D + 1);
-+chunked_len(<< $c, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 12, D + 1);
-+chunked_len(<< $d, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 13, D + 1);
-+chunked_len(<< $e, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 14, D + 1);
-+chunked_len(<< $f, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 15, D + 1);
- %% Chunk extensions.
- %%
- %% Note that we currently skip the first character we encounter here,
- %% and not in the skip_chunk_ext function. If we latter implement
- %% chunk extensions (unlikely) we will need to change this clause too.
--chunked_len(<< C, R/bits >>, S, A, Len) when ?IS_WS(C); C =:= $; -> skip_chunk_ext(R, S, A, Len, 0);
-+chunked_len(<< C, R/bits >>, S, A, Len, _) when ?IS_WS(C); C =:= $; -> skip_chunk_ext(R, S, A, Len, 0);
- %% Final chunk.
- %%
- %% When trailers are following we simply return them as the Rest.
- %% Then the user code can decide to call the stream_trailers function
- %% to parse them. The user can therefore ignore trailers as necessary
- %% if they do not wish to handle them.
--chunked_len(<< "\r\n\r\n", R/bits >>, _, <<>>, 0) -> {done, no_trailers, R};
--chunked_len(<< "\r\n\r\n", R/bits >>, _, A, 0) -> {done, A, no_trailers, R};
--chunked_len(<< "\r\n", R/bits >>, _, <<>>, 0) when byte_size(R) > 2 -> {done, trailers, R};
--chunked_len(<< "\r\n", R/bits >>, _, A, 0) when byte_size(R) > 2 -> {done, A, trailers, R};
--chunked_len(_, _, _, 0) -> more;
-+chunked_len(<< "\r\n\r\n", R/bits >>, _, <<>>, 0, _) -> {done, no_trailers, R};
-+chunked_len(<< "\r\n\r\n", R/bits >>, _, A, 0, _) -> {done, A, no_trailers, R};
-+chunked_len(<< "\r\n", R/bits >>, _, <<>>, 0, _) when byte_size(R) > 2 -> {done, trailers, R};
-+chunked_len(<< "\r\n", R/bits >>, _, A, 0, _) when byte_size(R) > 2 -> {done, A, trailers, R};
-+chunked_len(_, _, _, 0, _) -> more;
- %% Normal chunk. Add 2 to Len for the trailing \r\n.
--chunked_len(<< "\r\n", R/bits >>, S, A, Len) -> {next, R, {Len + 2, S}, A};
--chunked_len(<<"\r">>, _, <<>>, _) -> more;
--chunked_len(<<"\r">>, S, A, _) -> {more, {0, S}, A};
--chunked_len(<<>>, _, <<>>, _) -> more;
--chunked_len(<<>>, S, A, _) -> {more, {0, S}, A}.
--
--skip_chunk_ext(R = << "\r", _/bits >>, S, A, Len, _) -> chunked_len(R, S, A, Len);
--skip_chunk_ext(R = <<>>, S, A, Len, _) -> chunked_len(R, S, A, Len);
-+chunked_len(<< "\r\n", R/bits >>, S, A, Len, _) -> {next, R, {Len + 2, S}, A};
-+chunked_len(<<"\r">>, _, <<>>, _, _) -> more;
-+chunked_len(<<"\r">>, S, A, _, _) -> {more, {0, S}, A};
-+chunked_len(<<>>, _, <<>>, _, _) -> more;
-+chunked_len(<<>>, S, A, _, _) -> {more, {0, S}, A}.
+diff --git a/SPECS/rabbitmq-server/CVE-2026-7790.patch b/SPECS/rabbitmq-server/CVE-2026-7790.patch
+new file mode 100644
+index 00000000000..ffe3d9fd16b
+--- /dev/null
++++ b/SPECS/rabbitmq-server/CVE-2026-7790.patch
+@@ -0,0 +1,136 @@
++From d83b148d75a76db9a42b6c0dc50526a8d5b0ba28 Mon Sep 17 00:00:00 2001
++From: =?UTF-8?q?Lo=C3=AFc=20Hoguin?= <essen@ninenines.eu>
++Date: Mon, 11 May 2026 10:57:28 +0200
++Subject: [PATCH] Limit length of transfer-encoding: chunked chunks
 +
-+skip_chunk_ext(R = << "\r", _/bits >>, S, A, Len, _) -> chunked_len(R, S, A, Len, 0);
-+skip_chunk_ext(R = <<>>, S, A, Len, _) -> chunked_len(R, S, A, Len, 0);
- %% We skip up to 128 characters of chunk extensions. The value
- %% is hardcoded: chunk extensions are very rarely seen in the
- %% wild and Cowboy doesn't do anything with them anyway.
-@@ -305,6 +305,7 @@ stream_chunked_n_passes_test() ->
- 	{more, <<"abc">>, 2, {2, 3}} = stream_chunked(<<"\n3\r\nabc">>, {1, 0}),
- 	{more, <<"abc">>, {1, 3}} = stream_chunked(<<"3\r\nabc\r">>, {0, 0}),
- 	{more, <<"abc">>, <<"123">>, {0, 3}} = stream_chunked(<<"3\r\nabc\r\n123">>, {0, 0}),
-+	{more, <<>>, 18446744073709551617, _} = stream_chunked(<<"FFFFFFFFFFFFFFFF\r\n">>, {0, 0}),
- 	ok.
- 
- stream_chunked_dripfeed_test() ->
-@@ -339,7 +340,8 @@ stream_chunked_dripfeed2_test() ->
- stream_chunked_error_test_() ->
- 	Tests = [
- 		{<<>>, undefined},
--		{<<"\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">>, {2, 0}}
-+		{<<"\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">>, {2, 0}},
-+		{<<"10000000000000000\r\n">>, {0, 0}}
- 	],
- 	[{lists:flatten(io_lib:format("value ~p state ~p", [V, S])),
- 		fun() -> {'EXIT', _} = (catch stream_chunked(V, S)) end}
++Signed-off-by: Azure Linux Security Servicing Account <azurelinux-security@microsoft.com>
++Upstream-reference: https://github.com/ninenines/cowlib/commit/a4b8039ce8c93ab00867ef6b7e888822c09f4369.patch
++---
++ deps/cowlib/src/cow_http_te.erl | 78 +++++++++++++++++----------------
++ 1 file changed, 40 insertions(+), 38 deletions(-)
++
++diff --git a/deps/cowlib/src/cow_http_te.erl b/deps/cowlib/src/cow_http_te.erl
++index e3473cf..c78b5db 100644
++--- a/deps/cowlib/src/cow_http_te.erl
+++++ b/deps/cowlib/src/cow_http_te.erl
++@@ -138,7 +138,7 @@ stream_chunked(Data, State) ->
++ 
++ %% New chunk.
++ stream_chunked(Data = << C, _/bits >>, {0, Streamed}, Acc) when C =/= $\r ->
++-	case chunked_len(Data, Streamed, Acc, 0) of
+++	case chunked_len(Data, Streamed, Acc, 0, 0) of
++ 		{next, Rest, State, Acc2} ->
++ 			stream_chunked(Rest, State, Acc2);
++ 		{more, State, Acc2} ->
++@@ -174,54 +174,54 @@ stream_chunked(Data, {Rem, Streamed}, Acc) when Rem > 2 ->
++ 			{more, << Acc/binary, Data/binary >>, Rem2, {Rem2, Streamed + DataSize}}
++ 	end.
++ 
++-chunked_len(<< $0, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16);
++-chunked_len(<< $1, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 1);
++-chunked_len(<< $2, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 2);
++-chunked_len(<< $3, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 3);
++-chunked_len(<< $4, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 4);
++-chunked_len(<< $5, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 5);
++-chunked_len(<< $6, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 6);
++-chunked_len(<< $7, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 7);
++-chunked_len(<< $8, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 8);
++-chunked_len(<< $9, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 9);
++-chunked_len(<< $A, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 10);
++-chunked_len(<< $B, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 11);
++-chunked_len(<< $C, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 12);
++-chunked_len(<< $D, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 13);
++-chunked_len(<< $E, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 14);
++-chunked_len(<< $F, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 15);
++-chunked_len(<< $a, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 10);
++-chunked_len(<< $b, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 11);
++-chunked_len(<< $c, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 12);
++-chunked_len(<< $d, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 13);
++-chunked_len(<< $e, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 14);
++-chunked_len(<< $f, R/bits >>, S, A, Len) -> chunked_len(R, S, A, Len * 16 + 15);
+++chunked_len(<< $0, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16, D + 1);
+++chunked_len(<< $1, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 1, D + 1);
+++chunked_len(<< $2, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 2, D + 1);
+++chunked_len(<< $3, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 3, D + 1);
+++chunked_len(<< $4, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 4, D + 1);
+++chunked_len(<< $5, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 5, D + 1);
+++chunked_len(<< $6, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 6, D + 1);
+++chunked_len(<< $7, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 7, D + 1);
+++chunked_len(<< $8, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 8, D + 1);
+++chunked_len(<< $9, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 9, D + 1);
+++chunked_len(<< $A, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 10, D + 1);
+++chunked_len(<< $B, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 11, D + 1);
+++chunked_len(<< $C, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 12, D + 1);
+++chunked_len(<< $D, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 13, D + 1);
+++chunked_len(<< $E, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 14, D + 1);
+++chunked_len(<< $F, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 15, D + 1);
+++chunked_len(<< $a, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 10, D + 1);
+++chunked_len(<< $b, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 11, D + 1);
+++chunked_len(<< $c, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 12, D + 1);
+++chunked_len(<< $d, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 13, D + 1);
+++chunked_len(<< $e, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 14, D + 1);
+++chunked_len(<< $f, R/bits >>, S, A, Len, D) when D < 16 -> chunked_len(R, S, A, Len * 16 + 15, D + 1);
++ %% Chunk extensions.
++ %%
++ %% Note that we currently skip the first character we encounter here,
++ %% and not in the skip_chunk_ext function. If we latter implement
++ %% chunk extensions (unlikely) we will need to change this clause too.
++-chunked_len(<< C, R/bits >>, S, A, Len) when ?IS_WS(C); C =:= $; -> skip_chunk_ext(R, S, A, Len, 0);
+++chunked_len(<< C, R/bits >>, S, A, Len, _) when ?IS_WS(C); C =:= $; -> skip_chunk_ext(R, S, A, Len, 0);
++ %% Final chunk.
++ %%
++ %% When trailers are following we simply return them as the Rest.
++ %% Then the user code can decide to call the stream_trailers function
++ %% to parse them. The user can therefore ignore trailers as necessary
++ %% if they do not wish to handle them.
++-chunked_len(<< "\r\n\r\n", R/bits >>, _, <<>>, 0) -> {done, no_trailers, R};
++-chunked_len(<< "\r\n\r\n", R/bits >>, _, A, 0) -> {done, A, no_trailers, R};
++-chunked_len(<< "\r\n", R/bits >>, _, <<>>, 0) when byte_size(R) > 2 -> {done, trailers, R};
++-chunked_len(<< "\r\n", R/bits >>, _, A, 0) when byte_size(R) > 2 -> {done, A, trailers, R};
++-chunked_len(_, _, _, 0) -> more;
+++chunked_len(<< "\r\n\r\n", R/bits >>, _, <<>>, 0, _) -> {done, no_trailers, R};
+++chunked_len(<< "\r\n\r\n", R/bits >>, _, A, 0, _) -> {done, A, no_trailers, R};
+++chunked_len(<< "\r\n", R/bits >>, _, <<>>, 0, _) when byte_size(R) > 2 -> {done, trailers, R};
+++chunked_len(<< "\r\n", R/bits >>, _, A, 0, _) when byte_size(R) > 2 -> {done, A, trailers, R};
+++chunked_len(_, _, _, 0, _) -> more;
++ %% Normal chunk. Add 2 to Len for the trailing \r\n.
++-chunked_len(<< "\r\n", R/bits >>, S, A, Len) -> {next, R, {Len + 2, S}, A};
++-chunked_len(<<"\r">>, _, <<>>, _) -> more;
++-chunked_len(<<"\r">>, S, A, _) -> {more, {0, S}, A};
++-chunked_len(<<>>, _, <<>>, _) -> more;
++-chunked_len(<<>>, S, A, _) -> {more, {0, S}, A}.
++-
++-skip_chunk_ext(R = << "\r", _/bits >>, S, A, Len, _) -> chunked_len(R, S, A, Len);
++-skip_chunk_ext(R = <<>>, S, A, Len, _) -> chunked_len(R, S, A, Len);
+++chunked_len(<< "\r\n", R/bits >>, S, A, Len, _) -> {next, R, {Len + 2, S}, A};
+++chunked_len(<<"\r">>, _, <<>>, _, _) -> more;
+++chunked_len(<<"\r">>, S, A, _, _) -> {more, {0, S}, A};
+++chunked_len(<<>>, _, <<>>, _, _) -> more;
+++chunked_len(<<>>, S, A, _, _) -> {more, {0, S}, A}.
+++
+++skip_chunk_ext(R = << "\r", _/bits >>, S, A, Len, _) -> chunked_len(R, S, A, Len, 0);
+++skip_chunk_ext(R = <<>>, S, A, Len, _) -> chunked_len(R, S, A, Len, 0);
++ %% We skip up to 128 characters of chunk extensions. The value
++ %% is hardcoded: chunk extensions are very rarely seen in the
++ %% wild and Cowboy doesn't do anything with them anyway.
++@@ -305,6 +305,7 @@ stream_chunked_n_passes_test() ->
++ 	{more, <<"abc">>, 2, {2, 3}} = stream_chunked(<<"\n3\r\nabc">>, {1, 0}),
++ 	{more, <<"abc">>, {1, 3}} = stream_chunked(<<"3\r\nabc\r">>, {0, 0}),
++ 	{more, <<"abc">>, <<"123">>, {0, 3}} = stream_chunked(<<"3\r\nabc\r\n123">>, {0, 0}),
+++	{more, <<>>, 18446744073709551617, _} = stream_chunked(<<"FFFFFFFFFFFFFFFF\r\n">>, {0, 0}),
++ 	ok.
++ 
++ stream_chunked_dripfeed_test() ->
++@@ -339,7 +340,8 @@ stream_chunked_dripfeed2_test() ->
++ stream_chunked_error_test_() ->
++ 	Tests = [
++ 		{<<>>, undefined},
++-		{<<"\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">>, {2, 0}}
+++		{<<"\n\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa">>, {2, 0}},
+++		{<<"10000000000000000\r\n">>, {0, 0}}
++ 	],
++ 	[{lists:flatten(io_lib:format("value ~p state ~p", [V, S])),
++ 		fun() -> {'EXIT', _} = (catch stream_chunked(V, S)) end}
++-- 
++2.45.4
++
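The core of the quoted patch is a hardening of Cowboy's chunked transfer-encoding parser: `chunked_len/5` now carries a digit counter and its `when D < 16` guards reject any chunk-size line longer than 16 hex digits, so a malicious peer cannot feed an effectively unbounded length value. A minimal sketch of that guard in Python (not the shipped Erlang; `parse_chunk_size` and `MAX_CHUNK_SIZE_DIGITS` are hypothetical names for illustration):

```python
MAX_CHUNK_SIZE_DIGITS = 16  # mirrors the `D < 16` guard in the patch


def parse_chunk_size(data: bytes) -> int:
    """Parse the hex chunk-size line of an HTTP/1.1 chunked body.

    Rejects sizes longer than MAX_CHUNK_SIZE_DIGITS hex digits, so a
    16-digit size like FFFFFFFFFFFFFFFF is still accepted but a
    17-digit size raises, matching the patched test cases above.
    """
    size = 0
    digits = 0
    for i, byte in enumerate(data):
        char = chr(byte)
        if char in "0123456789abcdefABCDEF":
            if digits >= MAX_CHUNK_SIZE_DIGITS:
                raise ValueError("chunk size has too many hex digits")
            size = size * 16 + int(char, 16)
            digits += 1
        elif data[i:i + 2] == b"\r\n":
            return size
        else:
            raise ValueError(f"unexpected byte in chunk-size line: {char!r}")
    raise ValueError("incomplete chunk-size line")
```

This matches the behavior the patched tests assert: `FFFFFFFFFFFFFFFF\r\n` (16 digits) parses, while `10000000000000000\r\n` (17 digits) is rejected.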

Verdict

CHANGES REQUESTED — Please address the issues flagged above.

Contributor

@Kanishk-Bansal Kanishk-Bansal left a comment


Patch Analysis (Both patches match upstream; nothing actionable from AI test analysis.)

  • Buddy Build 
  • patch applied during the build (check rpm.log)
  • patch includes an upstream reference
  • PR has security tag


2 participants