diff --git a/.github/skills/find-package-skill/SKILL.md b/.github/skills/find-package-skill/SKILL.md
index 0306b1b7be5b..3d682f386d07 100644
--- a/.github/skills/find-package-skill/SKILL.md
+++ b/.github/skills/find-package-skill/SKILL.md
@@ -22,4 +22,6 @@ you already know the package well.
| Package | Path |
| -------------------------- | ------------------------------------------------------------------------------------------------- |
+| `azure-ai-agents` | `sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/SKILL.md` |
+| `azure-ai-projects` | `sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/SKILL.md` |
| `azure-search-documents` | `sdk/search/azure-search-documents/.github/skills/azure-search-documents/SKILL.md` |
diff --git a/sdk/ai/.github/skills/api-diff/SKILL.md b/sdk/ai/.github/skills/api-diff/SKILL.md
new file mode 100644
index 000000000000..7dfbcd0c7140
--- /dev/null
+++ b/sdk/ai/.github/skills/api-diff/SKILL.md
@@ -0,0 +1,171 @@
+---
+name: api-diff
+description: 'Diff the Java SDK to identify new API features added between commits or branches. Buckets new additions by functionality (e.g. agents, toolboxes, sessions, memory, skills). WHEN: what is new in the API; diff API changes; compare API between branches; see what changed in the SDK; new API additions.'
+---
+
+# API Diff — Identify New Features
+
+Diff the generated Java SDK source to identify new public API additions (methods, models, clients) and bucket them by functionality area.
+
+## Preconditions
+
+- You must be in `sdk/ai/azure-ai-agents` or `sdk/ai/azure-ai-projects`.
+- The user must provide a **base reference** to diff against: a commit hash, branch name, or tag. If not provided, ask for it (e.g. `main`, `HEAD~1`, a specific commit).
+
+## Workflow
+
+### 1. Determine the diff range
+
+Ask for or infer:
+- **Base**: the starting point (e.g. `main`, a commit hash, a tag)
+- **Head**: the current state (defaults to working tree / `HEAD`)
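+
+The defaulting can be sketched as a tiny wrapper (argument handling is illustrative, not part of the skill):
+
+```bash
+# Resolve the diff range; base defaults to main, head to HEAD (assumptions).
+BASE="${1:-main}"
+HEAD_REF="${2:-HEAD}"
+echo "Diffing ${BASE}..${HEAD_REF}"
+```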
+
+### 2. Get the raw diff of public API files
+
+Diff only the public-facing source files (exclude implementation, tests, samples):
+
+```bash
+# For azure-ai-agents
+git diff <base> -- \
+ src/main/java/com/azure/ai/agents/*Client.java \
+ src/main/java/com/azure/ai/agents/*AsyncClient.java \
+ src/main/java/com/azure/ai/agents/models/
+
+# For azure-ai-projects
+git diff <base> -- \
+ src/main/java/com/azure/ai/projects/*Client.java \
+ src/main/java/com/azure/ai/projects/*AsyncClient.java \
+ src/main/java/com/azure/ai/projects/models/
+```
+
+To see only new files:
+```bash
+git diff <base> --name-status --diff-filter=A -- src/main/java/
+```
+
+To see only modified files:
+```bash
+git diff <base> --name-status --diff-filter=M -- src/main/java/
+```
+
+### 3. Extract new public methods
+
+For each client class, find newly added public methods:
+
+```bash
+git diff <base> -- src/main/java/com/azure/ai/agents/*Client.java | grep "^+" | grep "public "
+```
+
+Focus on convenience methods (skip `*WithResponse` protocol methods unless they have no convenience equivalent).
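+
+The filtering step can be sketched as follows (method names are made up for illustration):
+
+```bash
+# Drop *WithResponse protocol methods from a list of added method names.
+added_methods='createAgent
+createAgentWithResponse
+getAgent
+getAgentWithResponse'
+convenience=$(printf '%s\n' "$added_methods" | grep -v 'WithResponse$')
+printf '%s\n' "$convenience"
+```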
+
+### 4. Extract new models
+
+Find newly added model classes:
+
+```bash
+git diff <base> --name-status --diff-filter=A -- src/main/java/com/azure/ai/agents/models/
+```
+
+For modified models, find new fields/getters:
+
+```bash
+git diff <base> -- src/main/java/com/azure/ai/agents/models/<ModelName>.java | grep "^+" | grep "public \|private "
+```
+
+### 5. Bucket by functionality
+
+Categorize each new addition into a functionality area. Use **two sources** for bucket names: the known-buckets table below and dynamic discovery from the diff itself.
+
+#### 5a. Discover buckets dynamically
+
+Scan the current client classes to find all feature areas — don't rely solely on the table:
+
+```bash
+# List all client class names (each maps to a bucket)
+ls src/main/java/com/azure/ai/agents/*Client.java 2>/dev/null | sed 's/.*\///' | sed 's/Client.java//' | sort -u
+ls src/main/java/com/azure/ai/projects/*Client.java 2>/dev/null | sed 's/.*\///' | sed 's/Client.java//' | sort -u
+```
+
+Each `*Client.java` defines a top-level bucket. Methods within `AgentsClient` that share a resource prefix (e.g., `createSession`, `getSession`, `deleteSession`) form sub-buckets.
+
+For new models without a clear client, bucket by the model's package or the client method that references it.
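+
+The sub-bucket grouping can be sketched by stripping the leading CRUD verb from each method name (verb list and method names are illustrative):
+
+```bash
+# Group methods into sub-buckets by resource noun (strip the CRUD verb).
+methods='createSession
+getSession
+deleteSession
+createAgent'
+buckets=$(printf '%s\n' "$methods" \
+  | sed -E 's/^(create|get|delete|list|update|patch)//' \
+  | sort | uniq -c | sort -rn)
+printf '%s\n' "$buckets"
+```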
+
+#### 5b. Known buckets (reference, may be incomplete)
+
+This table is a **starting point** — new feature areas may exist that aren't listed here. If you discover a new bucket during the diff, add it to this table and note it in your output.
+
+| Bucket | Client class | Method/model indicators |
+|--------|-------------|------------------------|
+| **Agents** | `AgentsClient` | `createAgent`, `deleteAgent`, `getAgent`, `listAgents`, `AgentVersionDetails`, `PromptAgentDefinition` |
+| **Hosted Agents** | `AgentsClient` | `HostedAgentDefinition`, `AgentProtocol`, `ProtocolVersionRecord`, methods with `HOSTED_AGENTS_V1_PREVIEW` |
+| **Sessions** | `AgentsClient` | `createSession`, `getSession`, `deleteSession`, `listSessions`, `AgentSessionResource` |
+| **Agent Endpoints** | `AgentsClient` | `patchAgentObject`, `AgentEndpoint`, `VersionSelector`, methods with `AGENT_ENDPOINT_V1_PREVIEW` |
+| **Toolboxes** | `ToolboxesClient` | `createToolboxVersion`, `getToolbox`, `updateToolbox`, `deleteToolbox`, `ToolboxDetails`, `ToolboxVersionDetails` |
+| **Memory** | `MemoryStoresClient` | `createMemoryStore`, `getMemoryStore`, `deleteMemoryStore`, `MemoryStoreDetails` |
+| **Conversations** | `AgentsClient` | `createConversation`, `getConversation`, `deleteConversation`, `ConversationDetails` |
+| **Responses** | `ResponsesClient` | `createResponse`, response-related models |
+| **Session Files** | `AgentSessionFilesClient` | `uploadFile`, `listFiles`, `getFile`, `deleteFile` |
+| **Tools** | models package | `McpTool`, `CodeInterpreterTool`, `FileSearchTool`, `AzureAISearchTool`, `OpenApiTool`, `Tool` subclasses |
+| **Skills** | `SkillsClient` | `createSkill`, `getSkill`, `deleteSkill`, `listSkills`, `SkillDetails` |
+| **Connections** | `ConnectionsClient` | `getConnection`, `listConnections`, `ConnectionDetails` |
+| **Datasets** | `DatasetsClient` | dataset operations |
+| **Deployments** | `DeploymentsClient` | deployment operations |
+| **Indexes** | `IndexesClient` | index operations |
+| **Evaluations** | `EvaluatorsClient`, `EvaluationRulesClient` | evaluation operations |
+
+#### 5c. Keeping this table current
+
+If you created a new bucket during this diff, **update this SKILL.md** to add it to the table above. This keeps the table useful for future runs. Add a row with the bucket name, the client class, and a few representative method/model indicators.
+
+```bash
+# Path to this skill file (for self-update)
+# sdk/ai/.github/skills/api-diff/SKILL.md
+```
+
+### 6. Output the summary
+
+Present findings as a structured summary:
+
+```
+## New API Additions (<base> → <head>)
+
+### Agents
+- New method: `AgentsClient.createAgentFromManifest(...)`
+- New model: `CreateAgentFromManifestInput`
+
+### Toolboxes
+- New client: `ToolboxesClient` (entirely new)
+- New methods: createToolboxVersion, getToolbox, updateToolbox, ...
+- New models: ToolboxDetails, ToolboxVersionDetails, ToolboxPolicies
+
+### Memory
+- New method: `MemoryStoresClient.searchMemoryStore(...)`
+- New model: `MemorySearchResult`
+
+### Models (cross-cutting)
+- New field on `AgentVersionDetails`: `containerProtocolVersions`
+- Modified: `FixedRatioVersionSelectionRule.trafficPercentage` type changed int → Integer
+```
+
+## Tips
+
+- Use `git diff --stat` for a quick overview of what changed
+- Use `git log --oneline <base>..HEAD -- src/main/java/` to see commits that touched the source
+- For large diffs, focus on client classes first (they define the public API surface), then drill into models
+- New clients (entirely new `*Client.java` files) indicate a major new feature area
+- New `*OptInKeys` or `Foundry-Features` values indicate preview features
+
+## Example Usage
+
+User: "What's new since the last release?"
+```bash
+# Find the last release tag
+git tag --list "azure-ai-agents_*" --sort=-version:refname | head -1
+# Diff against it
+git diff <tag> -- src/main/java/com/azure/ai/agents/
+```
+
+User: "What changed in this PR branch?"
+```bash
+git diff main -- src/main/java/
+```
diff --git a/sdk/ai/.github/skills/codegen-survival-rules/SKILL.md b/sdk/ai/.github/skills/codegen-survival-rules/SKILL.md
new file mode 100644
index 000000000000..2c50cec8600a
--- /dev/null
+++ b/sdk/ai/.github/skills/codegen-survival-rules/SKILL.md
@@ -0,0 +1,50 @@
+---
+name: codegen-survival-rules
+description: 'Rules for making manual edits survive TypeSpec Java codegen re-generation. Covers @Generated removal, marker comment placement, and javadoc preservation. WHEN: edit generated code; survive codegen; @Generated annotation; marker comment placement; manual edits to generated files.'
+---
+
+# Codegen Survival Rules
+
+The TypeSpec Java codegen (`tsp-client update` / `tsp-client generate`) re-generates files on every run. Methods **without** `@Generated` are preserved (body and javadoc intact), but plain comments **above** the method signature are rewritten on each run. Follow these rules so your manual edits survive.
+
+## Rules
+
+1. **Remove `@Generated`** from any method you modify. The codegen will not overwrite the method body.
+2. **Place marker comments inside the method body**, not above the signature. The codegen rewrites the javadoc block above the signature but does not touch the body.
+3. **Place javadoc above the method** normally. Since the method lacks `@Generated`, the codegen preserves the javadoc you wrote.
+4. **For field declarations**, place marker comments on the same line (trailing), not on the line above. The codegen regenerates the comment block above the field.
+
+## Examples
+
+```java
+// ✅ SURVIVES codegen: javadoc above, marker inside body
+/**
+ * Gets the reasoning configuration.
+ * @return the reasoning, or null if not set.
+ */
+public com.openai.models.Reasoning getReasoning() {
+ // AI Tooling: openai-java de-dup ← inside body, survives
+ return this.reasoning;
+}
+
+// ❌ WIPED by codegen: marker above signature
+// AI Tooling: openai-java de-dup ← above signature, gets wiped
+public com.openai.models.Reasoning getReasoning() {
+ return this.reasoning;
+}
+
+// ✅ SURVIVES codegen: field marker on same line
+private com.openai.models.Reasoning reasoning; // AI Tooling: openai-java de-dup
+
+// ❌ WIPED by codegen: field marker on line above
+// AI Tooling: openai-java de-dup
+private com.openai.models.Reasoning reasoning;
+```
+
+## When to apply
+
+These rules apply whenever you hand-edit a generated model class, including:
+- De-duplicating against openai-java types (see `dedup-openai` skill)
+- Overriding TypeSpec types with Java-native types (see `tsp-type-override` skill)
+- Adding typed union wrappers over `BinaryData` properties (see `union-type-wrappers` skill)
+- Any other manual customization of generated `toJson`/`fromJson` methods
diff --git a/sdk/ai/.github/skills/codegen/SKILL.md b/sdk/ai/.github/skills/codegen/SKILL.md
new file mode 100644
index 000000000000..955b436ad86b
--- /dev/null
+++ b/sdk/ai/.github/skills/codegen/SKILL.md
@@ -0,0 +1,61 @@
+---
+name: codegen
+description: 'Generate code from TypeSpec via tsp-client (update, sync, generate). Requires a tsp-location.yaml in the current working directory. Supports updating the commit hash before running. WHEN: generate code from TypeSpec; run tsp-client; update TypeSpec commit; sync TypeSpec; regenerate SDK.'
+---
+
+# TypeSpec Code Generation (tsp-client)
+
+Use this skill to run `tsp-client` workflows for projects that include a `tsp-location.yaml` file.
+
+## Preconditions
+- You must be in the directory that contains `tsp-location.yaml`.
+- If the file is missing, warn the user and ask for the correct directory (do not run commands).
+
+## Commit hash update
+If the user provides a commit hash, update the `commit:` field in `tsp-location.yaml` **before** running tsp-client.
+- Read the file and locate the `commit:` line.
+- Replace the value with the provided hash (keep the same key name and formatting).
+- Example:
+ - Before: `commit: 6267b6...`
+  - After: `commit: <provided-hash>`
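+
+The in-place edit can be sketched with `sed` (the sample file contents and hash below are made up):
+
+```bash
+# Create a sample tsp-location.yaml, then swap the commit value in place.
+cat > tsp-location.yaml <<'EOF'
+directory: specification/ai/Foundry
+commit: deadbeefdeadbeef
+repo: Azure/azure-rest-api-specs
+EOF
+
+NEW_HASH="0123456789abcdef"   # the hash the user supplied
+sed -i.bak -E "s|^commit: .*$|commit: ${NEW_HASH}|" tsp-location.yaml
+grep '^commit:' tsp-location.yaml
+```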
+
+## Commands
+
+### `tsp-client update`
+Pull the latest codegen tooling or definitions (default action when the user is vague).
+```bash
+tsp-client update
+```
+
+### `tsp-client sync`
+Fetch/sync TypeSpec inputs for the project.
+```bash
+tsp-client sync
+```
+
+### `tsp-client generate`
+Generate code from TypeSpec inputs.
+```bash
+tsp-client generate
+```
+
+Keep the synced TypeSpec inputs:
+```bash
+tsp-client generate --save-inputs
+```
+
+## Steps
+1. Verify `tsp-location.yaml` exists in the current directory. If not, stop and ask for the correct location.
+2. If the user provided a commit hash, update the `commit:` value in `tsp-location.yaml`.
+3. Determine the user intent:
+ - **Refresh/update/ingest changes from a commit**: run `tsp-client update`.
+ - **Fetch/sync spec from the current commit**: run `tsp-client sync`.
+ - **Generate from fetched spec**: run `tsp-client generate` (use `--save-inputs` only if the user asks to keep inputs).
+ - **Generate (no fetch requested)**: run `tsp-client generate`.
+4. If the user doesn’t specify, default to `tsp-client update`.
+5. If the project defines or creates a `TempTypeSpecFiles` folder and the user wants code generation, run `tsp-client generate` (with `--save-inputs` if requested).
+6. If a tsp-client command fails, report the error output and suggest checking the TypeSpec repo/commit referenced in `tsp-location.yaml`. Build a GitHub URL from `repo:` and `directory:` (and include the `commit:` as the ref), e.g.:
+ - Repo: `Azure/azure-rest-api-specs`
+ - Commit: `6267b6...`
+ - Directory: `specification/cognitiveservices/OpenAI.Inference`
+ - URL: `https://github.com/Azure/azure-rest-api-specs/tree/6267b6.../specification/cognitiveservices/OpenAI.Inference`
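+
+Extracting the three fields and assembling the URL can be sketched as (sample values match the example above):
+
+```bash
+# Build the GitHub browse URL from tsp-location.yaml fields.
+cat > tsp-location.yaml <<'EOF'
+directory: specification/cognitiveservices/OpenAI.Inference
+commit: 6267b6a
+repo: Azure/azure-rest-api-specs
+EOF
+
+repo=$(awk '/^repo:/ {print $2}' tsp-location.yaml)
+commit=$(awk '/^commit:/ {print $2}' tsp-location.yaml)
+dir=$(awk '/^directory:/ {print $2}' tsp-location.yaml)
+url="https://github.com/${repo}/tree/${commit}/${dir}"
+echo "$url"
+```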
diff --git a/sdk/ai/.github/skills/dedup-openai/SKILL.md b/sdk/ai/.github/skills/dedup-openai/SKILL.md
new file mode 100644
index 000000000000..77756339dcbb
--- /dev/null
+++ b/sdk/ai/.github/skills/dedup-openai/SKILL.md
@@ -0,0 +1,226 @@
+---
+name: dedup-openai
+description: 'Suppress generated Java classes that duplicate openai-java models, using @@alternateType in TypeSpec and manual serialization bridges. Use after dup-classes has identified actionable duplicates. WHEN: suppress duplicate openai-java classes; dedup generated models; remove duplicate openai models; alternateType for openai.'
+---
+
+# De-duplicate Generated Classes Against openai-java
+
+Use this skill **after** the `dup-classes` skill has identified actionable duplicates. This skill suppresses the generated classes and bridges to the openai-java equivalents.
+
+## Preconditions
+- A `tsp-location.yaml` must exist in the current directory.
+- TypeSpec must be synced (`tsp-client sync`) so `TempTypeSpecFiles/` exists.
+- The `openai-java` dependency must be in the project's `pom.xml`.
+- You must know which classes to suppress (use `dup-classes` skill first).
+
+## Integration with api-diff
+
+After a codegen update, use the **api-diff** skill to identify newly added models, then use the **dup-classes** skill to check only those new models for duplicates. This avoids re-evaluating the entire model set and focuses dedup work on new additions.
+
+## Key concepts
+
+### What can be suppressed
+Only **standalone models** that don't participate in a discriminator hierarchy. A model is standalone if:
+- It does NOT extend `Tool`, `TextResponseFormatConfiguration`, or another base class with a `fromJson` discriminator
+- It is NOT a subtype dispatched by a parent's `fromJson` method
+
+### What cannot be suppressed
+**Structural equivalents** — classes that extend a base type in a discriminator hierarchy (e.g., `FunctionTool extends Tool`). The SDK's polymorphic serialization requires these. They produce identical JSON but are not actionable. Do NOT attempt to suppress them.
+
+### Two suppression mechanisms
+
+| Mechanism | When to use | Effect |
+|-----------|-------------|--------|
+| `@@alternateType(OpenAI.X, { identity: "com.openai.models.X" }, "java")` | The model is referenced as a field type or union member in other models | Codegen replaces the type with the openai-java class. The generated class is NOT emitted at all. |
+| `@@access(OpenAI.X, Access.internal, "java")` | The model is NOT referenced by any public model | Codegen moves the class to `implementation.models`. |
+
+**Prefer `@@alternateType`** — it fully prevents emission and is the cleanest approach. Use `@@access(internal)` only as a supplement when a model isn't reachable through the type graph but is still emitted.
+
+### Why `@@access(internal)` alone may not work
+If a model is referenced by a union or property in a public model, `@@access(internal)` will NOT move it. The codegen keeps it public because removing it would break the type graph. Example: `ComparisonFilter` is a member of the `Filters` union used by `FileSearchTool.filters` — `@@access(internal)` has no effect, but `@@alternateType` prevents emission entirely.
+
+## Steps
+
+### 1. Edit the TypeSpec client file
+
+Locate the `client.tsp` (or `client.java.tsp`) in `TempTypeSpecFiles/sdk-*/`:
+
+```bash
+find TempTypeSpecFiles -name "client*.tsp" -path "*/sdk-*"
+```
+
+Add `@@alternateType` directives for each actionable duplicate:
+
+```tsp
+// De-dup: map to openai-java equivalents
+@@alternateType(OpenAI.ComparisonFilter, { identity: "com.openai.models.ComparisonFilter" }, "java");
+@@alternateType(OpenAI.Reasoning, { identity: "com.openai.models.Reasoning" }, "java");
+```
+
+**Finding the correct model name:** The TypeSpec models are in the `OpenAI` namespace. Search the openai-typespec package:
+
+```bash
+grep -rn "^model " TempTypeSpecFiles/node_modules/@azure-tools/openai-typespec/src/ --include="*.tsp"
+```
+
+### 2. Regenerate and verify
+
+```bash
+tsp-client generate
+```
+
+After generation, verify the suppressed classes are gone:
+
+```bash
+# Should NOT exist:
+ls src/main/java/com/azure/ai/agents/models/<SuppressedClass>.java
+# Should NOT exist in implementation either (with @@alternateType):
+ls src/main/java/com/azure/ai/agents/implementation/models/<SuppressedClass>.java
+```
+
+### 3. Fix serialization in parent models
+
+When a model's property type changes from a generated `JsonSerializable` class to an openai-java class, the `toJson`/`fromJson` methods in parent models will break because the openai-java type doesn't implement `JsonSerializable`.
+
+**Pattern for `toJson`** — use `OpenAIJsonHelper.toBinaryData()`:
+```java
+// Before (generated, won't compile):
+jsonWriter.writeJsonField("reasoning", this.reasoning);
+
+// After:
+if (this.reasoning != null) {
+ jsonWriter.writeFieldName("reasoning");
+ OpenAIJsonHelper.toBinaryData(this.reasoning).writeTo(jsonWriter);
+}
+```
+
+**Pattern for `fromJson`** — read as BinaryData, convert with `OpenAIJsonHelper.fromBinaryData()`:
+```java
+// Before (generated, won't compile):
+reasoning = Reasoning.fromJson(reader);
+
+// After:
+BinaryData reasoningData
+ = reader.getNullable(nonNullReader -> BinaryData.fromObject(nonNullReader.readUntyped()));
+reasoning = OpenAIJsonHelper.fromBinaryData(reasoningData, com.openai.models.Reasoning.class);
+```
+
+**Pattern for getter/setter** — use the openai-java type directly, with javadoc above and marker comment inside the body:
+```java
+// Field stores the openai-java type directly (no BinaryData indirection)
+private com.openai.models.Reasoning reasoning; // AI Tooling: openai-java de-dup
+
+/**
+ * Gets the reasoning configuration.
+ * @return the reasoning, or null if not set.
+ */
+public com.openai.models.Reasoning getReasoning() {
+ // AI Tooling: openai-java de-dup
+ return this.reasoning;
+}
+
+/**
+ * Sets the reasoning configuration.
+ * @param reasoning the reasoning to set.
+ * @return this object.
+ */
+public PromptAgentDefinition setReasoning(com.openai.models.Reasoning reasoning) {
+ // AI Tooling: openai-java de-dup
+ this.reasoning = reasoning;
+ return this;
+}
+```
+
+Remove `@Generated` from any method you modify so the codegen preserves your changes on re-generation. See the `codegen-survival-rules` skill for comment/javadoc placement.
+
+### 4. Add typed convenience setters (for BinaryData fields)
+
+When a property is already `BinaryData` (e.g., because it's a union type), add **distinctly named** setter methods for the openai-java types. Do NOT overload `setX` with different parameter types — this causes null-ambiguity. Use descriptive names instead:
+
+```java
+/**
+ * Sets the filters using an openai-java ComparisonFilter.
+ * @param filter the filter to apply, or null to clear.
+ * @return this object.
+ */
+public FileSearchTool setComparisonFilter(com.openai.models.ComparisonFilter filter) {
+ // AI Tooling: openai-java de-dup
+ this.filters = OpenAIJsonHelper.toBinaryData(filter);
+ return this;
+}
+
+/**
+ * Sets the filters using an openai-java CompoundFilter.
+ * @param filter the filter to apply, or null to clear.
+ * @return this object.
+ */
+public FileSearchTool setCompoundFilter(com.openai.models.CompoundFilter filter) {
+ // AI Tooling: openai-java de-dup
+ this.filters = OpenAIJsonHelper.toBinaryData(filter);
+ return this;
+}
+```
+
+### 5. Add `OpenAIJsonHelper` methods if needed
+
+The `OpenAIJsonHelper` class in `com.azure.ai.agents.implementation` may need two bridge methods:
+
+```java
+// Serialize openai-java object → BinaryData (writes as JSON object, not quoted string)
+public static BinaryData toBinaryData(Object openAIObject)
+
+// Deserialize BinaryData → openai-java type
+public static <T> T fromBinaryData(BinaryData data, Class<T> type)
+```
+
+These use the openai-java `ObjectMappers.jsonMapper()` (which handles Kotlin internals correctly). Do NOT use `BinaryData.fromObject()` or `BinaryData.toObject()` with openai-java types — the default Jackson ObjectMapper cannot serialize Kotlin `SynchronizedLazyImpl` fields.
+
+### 6. Write serialization tests
+
+Write round-trip tests verifying the JSON shape is preserved. Test pattern:
+
+```java
+@Test
+public void testRoundTrip() throws IOException {
+ // Build with openai-java type
+ Reasoning reasoning = Reasoning.builder().effort(ReasoningEffort.HIGH).build();
+ PromptAgentDefinition original = new PromptAgentDefinition("gpt-4o").setReasoning(reasoning);
+
+ // Serialize
+ String json = serialize(original);
+ assertTrue(json.contains("\"effort\":\"high\""));
+
+ // Deserialize
+ PromptAgentDefinition deserialized = deserialize(json);
+ assertEquals(ReasoningEffort.HIGH, deserialized.getReasoning().effort().get());
+
+ // Re-serialize and compare
+ assertEquals(json, serialize(deserialized));
+}
+```
+
+Cover: all enum values, null/absent fields, combined with other fields, polymorphic deserialization via parent `fromJson`.
+
+### 7. Apply changes to the spec repo
+
+If a local checkout of `Azure/azure-rest-api-specs` is available, apply the same `client.tsp` edits there. Derive the path from `tsp-location.yaml`:
+
+```
+<spec-repo-root>/<directory>/client.tsp
+```
+
+## Codegen survival rules
+
+See the `codegen-survival-rules` skill for the full guide on making manual edits survive re-generation (`@Generated` removal, marker comment placement, javadoc rules).
+
+## Common pitfalls
+
+| Problem | Cause | Fix |
+|---------|-------|-----|
+| Class stays public despite `@@access(internal)` | Referenced by a union or property in a public model | Use `@@alternateType` instead |
+| `BinaryData.fromObject(openAIObj)` throws `SynchronizedLazyImpl` error | Default Jackson can't serialize Kotlin internals | Use `OpenAIJsonHelper.toBinaryData()` which uses `ObjectMappers.jsonMapper()` |
+| `BinaryData.fromString(json).writeTo(writer)` writes quoted string | `fromString` creates text content, not JSON | Use `BinaryData.fromObject(reader.readUntyped())` to store as a JSON object |
+| Getter/setter bridge through BinaryData on every call | Unnecessary indirection | Store the openai-java type directly in the field; bridge only in `toJson`/`fromJson` |
+| Tried to suppress a `Tool` subclass | Structural equivalent, not an actionable duplicate | Don't suppress — it's needed for polymorphic deserialization |
+| Javadoc/comments above method wiped after codegen | Codegen rewrites everything above non-`@Generated` method signatures | Place marker comments inside the method body; javadoc survives if `@Generated` is removed (see `codegen-survival-rules` skill) |
+| Overloaded setters cause null ambiguity | `setFilters(null)` matches `BinaryData`, `ComparisonFilter`, and `CompoundFilter` | Use distinct method names: `setComparisonFilter()`, `setCompoundFilter()` |
diff --git a/sdk/ai/.github/skills/dup-classes/SKILL.md b/sdk/ai/.github/skills/dup-classes/SKILL.md
new file mode 100644
index 000000000000..9b8bb65bb05e
--- /dev/null
+++ b/sdk/ai/.github/skills/dup-classes/SKILL.md
@@ -0,0 +1,90 @@
+---
+name: dup-classes
+description: 'Verify whether generated Java classes duplicate openai-java models by comparing fields/types (names may differ). WHEN: check for duplicate models; compare generated vs openai-java classes; find duplicate model coverage; verify openai-java duplicates.'
+---
+
+# Duplicate Class Verification (Generated vs openai-java)
+
+Use this skill to compare generated Java models against the `openai-java` dependency. The goal is **field-by-field** comparison of model shapes, even when class or field names differ.
+
+## Inputs to confirm
+- Generated source root (e.g., `src/main/java/...`)
+- The relevant `pom.xml` (module) to resolve the `openai-java` dependency version
+- Optional: package or class name hints to narrow the search
+
+## Integration with api-diff
+
+When checking for duplicates after a codegen update, use the **api-diff** skill first to identify newly added models. Focus duplicate checking on those new models rather than scanning the entire source tree — existing models have already been evaluated.
+
+## Steps
+1. **Locate the pom.xml** in the current directory tree (`find . -name pom.xml`). If multiple, ask which module to use.
+2. **Resolve openai-java**:
+ - Search the chosen pom for `openai-java` (or an explicit group/artifact provided by the user).
+ - Resolve the version (including properties) and locate the JAR in `~/.m2/repository`.
+ - The model classes live in `openai-java-core`, not the top-level `openai-java` artifact.
+3. **List candidate classes**:
+ - Generated classes: scan the source root for `class` and `record` declarations.
+   - openai-java classes: `jar tf <openai-java-core.jar> | grep '\.class$'` (filter by package hints if provided).
+4. **Extract field signatures** (names may differ; compare shape):
+ - **Generated source**:
+ - For `record`, use the component list in the `record` declaration.
+ - For `class`, extract non-static field declarations (type + count) and note any `@JsonProperty` names.
+ - Check `toJson`/`fromJson` methods for the actual JSON keys used in serialization.
+ - **openai-java JAR**:
+     - Use `javap -classpath <jar> -p <fully.qualified.ClassName>` to list fields (ignore `static`, `validated`, `hashCode$delegate`, `additionalProperties`).
+ - Extract `@JsonProperty` keys from the sources JAR (`*-sources.jar`) for JSON key comparison.
+5. **Compare shapes**:
+ - Compare **field count** and **field types** (order-independent).
+ - Compare **JSON keys** from `@JsonProperty` (openai-java) vs `toJson`/`fromJson` string literals (generated).
+ - Compare **enum/union values** when field types are enums or string unions.
+   - Follow type hierarchy: `BinaryData` ↔ `JsonValue` (both represent untyped JSON), `Map<String, BinaryData>` ↔ `Map<String, JsonValue>`, Java enum ↔ Kotlin string enum.
+6. **Categorize results** (do NOT treat all matches the same):
+
+ **Actionable duplicates** — standalone models not in any type hierarchy. These can potentially be suppressed and replaced with the openai-java equivalent. Examples: `ComparisonFilter`, `Reasoning`.
+
+ **Structural equivalents** — classes that produce identical JSON but participate in a discriminator hierarchy (e.g., `extends Tool`, `extends TextResponseFormatConfiguration`). The SDK's polymorphic serialization (`Tool.fromJson()` dispatches to `FunctionTool.fromJson()`, etc.) requires these to exist. They are NOT actionable duplicates. Examples: `FunctionTool`, `FileSearchTool`, `ComputerUsePreviewTool`.
+
+ **Partial matches** — classes with most fields matching but extra Azure-specific fields. Note the extra fields. Examples: `CodeInterpreterTool` (extra `container`), `WebSearchTool` (extra `custom_search_configuration`).
+
+7. **Report**:
+ - Provide a table with: generated class → openai-java class, field count, matching fields, category.
+ - Clearly separate actionable duplicates from structural equivalents.
+ - For actionable duplicates, note whether `@@alternateType` or `@@access(internal)` would be the right suppression mechanism (see `dedup-openai` skill).
+
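+The order-independent shape comparison in step 5 can be sketched with `comm` (the field lists here are illustrative):
+
+```bash
+# Compare two sorted field-signature lists; lines unique to either side
+# are shape mismatches.
+printf '%s\n' 'String key' 'String type' 'BinaryData value' | sort > generated.txt
+printf '%s\n' 'String key' 'String type' 'JsonValue value'  | sort > openai.txt
+mismatches=$(comm -3 generated.txt openai.txt | tr -d '\t')
+printf '%s\n' "$mismatches"
+```
+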
+## Useful commands
+
+### List generated class names
+```bash
+rg -n "^(public\s+)?(final\s+)?(class|record)\s+" <source-root>
+```
+
+### Extract field lines from source (classes)
+```bash
+grep -E '^\s+private\s+' <Model>.java | grep -v 'static\s'
+```
+
+### Extract JSON keys from generated toJson/fromJson
+```bash
+grep -E 'jsonWriter\.write|"[a-z_]+"' <Model>.java | grep -v '//'
+```
+
+### Inspect fields in a JAR class
+```bash
+javap -classpath <jar> -p <fully.qualified.ClassName>
+```
+
+### Extract @JsonProperty from openai-java sources JAR
+```bash
+jar xf <openai-java-core-sources.jar> main/com/openai/models/<Model>.kt
+grep '@JsonProperty' main/com/openai/models/<Model>.kt
+```
+
+### Check if a class participates in a hierarchy
+```bash
+grep 'extends\s' <Model>.java  # If it extends Tool, TextResponseFormatConfiguration, etc. → structural
+```
+
+## Notes
+- See the `search-m2` skill for locating dependency versions and JAR paths.
+- If the user provides only a vague class hint, narrow candidates by package or field count first.
+- The openai-java classes are Kotlin and use Jackson; generated classes use azure-json (`JsonSerializable`). Compare at the JSON wire level, not at the Java API level.
diff --git a/sdk/ai/.github/skills/github/SKILL.md b/sdk/ai/.github/skills/github/SKILL.md
new file mode 100644
index 000000000000..0f1aeee5bfe4
--- /dev/null
+++ b/sdk/ai/.github/skills/github/SKILL.md
@@ -0,0 +1,156 @@
+---
+name: github
+description: 'Interact with GitHub using the gh CLI. Use gh issue, gh pr, gh run, and gh api for issues, PRs, CI runs, and advanced queries. WHEN: list PRs; view issue; check CI status; run GitHub CLI; gh api query.'
+---
+
+# GitHub Skill
+
+Use the `gh` CLI to interact with GitHub. Always pass `--repo owner/repo` when not inside a cloned git directory.
+
+## Pull Requests
+
+List open PRs:
+```bash
+gh pr list --repo owner/repo
+```
+
+View a specific PR (summary, checks, comments):
+```bash
+gh pr view 55 --repo owner/repo
+```
+
+Check CI status on a PR:
+```bash
+gh pr checks 55 --repo owner/repo
+```
+
+## CI / Workflow Runs
+
+List recent runs:
+```bash
+gh run list --repo owner/repo --limit 10
+```
+
+View a run summary (steps, status):
+```bash
+gh run view --repo owner/repo
+```
+
+View logs for failed steps only:
+```bash
+gh run view <run-id> --repo owner/repo --log-failed
+```
+
+Re-run failed jobs:
+```bash
+gh run rerun <run-id> --repo owner/repo --failed
+```
+
+## Issues
+
+List open issues (optionally filter by label):
+```bash
+gh issue list --repo owner/repo
+gh issue list --repo owner/repo --label bug
+```
+
+View a specific issue:
+```bash
+gh issue view 42 --repo owner/repo
+```
+
+Create an issue:
+```bash
+gh issue create --repo owner/repo --title "Title" --body "Description" --label bug
+```
+
+## JSON Output & Filtering
+
+Most commands support `--json` with `--jq` for structured output:
+
+```bash
+# List PR numbers and titles
+gh pr list --repo owner/repo --json number,title --jq '.[] | "\(.number): \(.title)"'
+
+# List issues with assignees
+gh issue list --repo owner/repo --json number,title,assignees \
+ --jq '.[] | "\(.number): \(.title) → \(.assignees[].login // "unassigned")"'
+```
+
+## Advanced: `gh api`
+
+Use `gh api` for data or actions not covered by other subcommands.
+
+Fetch a PR with specific fields:
+```bash
+gh api repos/owner/repo/pulls/55 --jq '.title, .state, .user.login'
+```
+
+List check runs for a commit:
+```bash
+gh api repos/owner/repo/commits/<sha>/check-runs \
+ --jq '.check_runs[] | "\(.name): \(.conclusion)"'
+```
+
+Paginate results (e.g., all issues):
+```bash
+gh api --paginate repos/owner/repo/issues --jq '.[].title'
+```
+
+## Steps
+
+1. Check if `gh` is installed by running `gh --version`.
+ - If the command is **not found**, install it (see [Installation](#installation) below).
+2. Check if `gh` is authenticated by running `gh auth status`.
+ - If not authenticated, run `gh auth login`.
+3. If `owner/repo` is not provided, check if there is a `.git` directory and infer the remote via `gh repo view --json nameWithOwner`. Otherwise ask the user for the repo.
+4. Choose the appropriate subcommand (`pr`, `issue`, `run`, `api`) based on the user's request.
+5. Prefer structured subcommands (`gh pr`, `gh issue`, `gh run`) over raw `gh api` when they cover the use case.
+6. Use `--json` + `--jq` when the user needs specific fields or wants to pipe output into further processing.
+7. If a workflow run is failing, start with `gh pr checks` for a quick overview, then `gh run view --log-failed` for detailed output.
+8. Report results clearly; if output is large, summarize and highlight the relevant parts.
+
+## Installation
+
+If `gh` is missing, install it using the recommended method for the current OS.
+Detect the OS first, then run the matching command.
+
+### Windows
+```powershell
+winget install --id GitHub.cli
+```
+> Note: open a **new terminal window** after installation for PATH changes to take effect.
+
+### macOS
+```shell
+brew install gh
+```
+
+### Linux (Debian / Ubuntu)
+```bash
+(type -p wget >/dev/null || (sudo apt update && sudo apt install wget -y)) \
+ && sudo mkdir -p -m 755 /etc/apt/keyrings \
+ && out=$(mktemp) && wget -nv -O$out https://cli.github.com/packages/githubcli-archive-keyring.gpg \
+ && cat $out | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
+ && sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
+ && sudo mkdir -p -m 755 /etc/apt/sources.list.d \
+ && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
+ && sudo apt update \
+ && sudo apt install gh -y
+```
+
+### Linux (Fedora / RHEL / CentOS)
+```bash
+sudo dnf install 'dnf-command(config-manager)'
+sudo dnf config-manager --add-repo https://cli.github.com/packages/rpm/gh-cli.repo
+sudo dnf install gh --repo gh-cli
+```
+
+After installation, verify with `gh --version`, then authenticate with `gh auth login` if needed.
+
+## Notes
+
+- `gh` must be authenticated (`gh auth status`). If not, run `gh auth login` first.
+- `--repo` accepts both `owner/repo` shorthand and full HTTPS URLs.
+- For `gh api`, use `--method POST/PATCH/DELETE` for write operations and pass body fields with `-f field=value` (string) or `-F field=value` (typed: numbers, booleans, `@file` to read a value from a file).
+- `gh run list` defaults to the current branch when run inside a git repo; pass `--branch <branch>` to target a specific branch.
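+
+As an aside, the URL-to-shorthand normalization that `--repo` accepts can be sketched in shell (the helper name is illustrative; `gh` accepts both forms directly):
+
+```bash
+# to_shorthand: reduce a GitHub HTTPS URL to owner/repo form.
+to_shorthand() {
+  r=${1#https://github.com/}   # drop the scheme and host
+  r=${r%/}                     # drop a trailing slash, if any
+  r=${r%.git}                  # drop a trailing .git, if any
+  printf '%s\n' "$r"
+}
+
+to_shorthand "https://github.com/Azure/azure-sdk-for-java.git"
+# → Azure/azure-sdk-for-java
+```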
diff --git a/sdk/ai/.github/skills/release-notes/SKILL.md b/sdk/ai/.github/skills/release-notes/SKILL.md
new file mode 100644
index 000000000000..e6fc44aa212a
--- /dev/null
+++ b/sdk/ai/.github/skills/release-notes/SKILL.md
@@ -0,0 +1,206 @@
+---
+name: release-notes
+description: 'Update CHANGELOG.md and README.md for an Azure SDK for Java package based on a GitHub PR. WHEN: write release notes; update changelog; update readme from PR; write changelog entries; document PR changes.'
+---
+
+# Release Notes Skill
+
+Update `CHANGELOG.md` and/or `README.md` for an Azure SDK for Java package using a GitHub PR as the source of truth.
+
+## Prerequisites
+
+- `gh` CLI authenticated (`gh auth status`).
+- The current working directory must be the package root (where `CHANGELOG.md` and `README.md` live).
+
+## Inputs
+
+Ask the user for any missing inputs before proceeding:
+
+| Input | Required | Description |
+|-------|----------|-------------|
+| PR URL or number | **Yes** | GitHub PR to use as the source of changes. |
+| Package directory | No | Defaults to `cwd`. Override if the user specifies a different package. |
+| Scope | No | `changelog`, `readme`, or `both` (default: `both`). |
+
+## Step 1 — Gather PR information
+
+Use `gh` to collect the data you need. The diff may be too large for `gh pr diff`; fall back to the files API.
+
+Alternatively, if the user wants release notes based on a diff range (not a PR), use the **api-diff** skill to identify new API additions bucketed by functionality, then use those buckets to structure the changelog entries.
+
+```bash
+# PR metadata
+gh pr view <pr-number> --json title,body
+
+# File list with status (added/modified/removed/renamed)
+gh api repos/{owner}/{repo}/pulls/<pr-number>/files --paginate \
+ --jq '.[] | .status + " " + .filename'
+
+# Renamed files (old → new)
+gh api repos/{owner}/{repo}/pulls/<pr-number>/files --paginate \
+ --jq '.[] | select(.status == "renamed") | "\(.previous_filename) -> \(.filename)"'
+
+# Patch for a specific file (when you need detail)
+gh api "repos/{owner}/{repo}/pulls/<pr-number>/files?per_page=100" \
+  --jq '.[] | select(.filename | test("<pattern>")) | .patch'
+```
+
+Collect:
+- Added, removed, and renamed model/enum classes.
+- Changes to client classes (method renames, new methods, removed methods).
+- Changes to `*ServiceVersion.java` (version string changes).
+- Changes to `*ClientBuilder.java` (base URL, authentication, new builder methods).
+- Changes to `module-info.java` (transitive exports, new requires).
+- Changes to customization files.
+- New or modified samples.
+
+## Step 2 — Check for existing entries
+
+Before writing anything, read the current `CHANGELOG.md` and `README.md` (if in scope) **in full** and compare their content against the changes you collected in Step 1.
+
+### 2a. Identify overlapping entries
+
+For each change you plan to document, check whether an entry already covers it:
+
+- **Exact match** — an existing bullet describes the same rename, addition, or removal using the same class/method names.
+- **Topical overlap** — an existing bullet covers the same area (e.g., "tool renames" or "new sub-client") but with different detail, wording, or scope.
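+
+A quick first pass for exact matches can be sketched in shell (the helper name is illustrative; the class names you would grep for come from Step 1):
+
+```bash
+# overlaps <file> <name>: look for an existing bullet that code-formats
+# the given class or method name in backticks.
+overlaps() { grep -n "\`$2\`" "$1"; }
+
+# Hypothetical usage:
+# overlaps CHANGELOG.md FooClient || echo "no existing entry for FooClient"
+```
+
+Topical overlap still needs a human read of the surrounding sections; this only catches exact name matches.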
+
+### 2b. Report findings to the user
+
+If **any** overlap is found, **stop and consult the user before editing**. Present a summary like:
+
+> The following changes from PR #NNN already appear to be covered in the current files:
+>
+> **CHANGELOG.md**
+> - _Features Added_ already mentions `FooClient` addition (line …).
+> - _Breaking Changes_ already has a bullet about tool renames that partially overlaps the renames in this PR.
+>
+> **README.md**
+> - The "Key concepts" section already lists the `BarClient` sub-client.
+>
+> Would you like me to:
+> 1. Skip the entries that are already covered and only add the new ones?
+> 2. Merge/update the overlapping entries (tell me how you'd like them worded)?
+> 3. Proceed anyway and add everything as new entries?
+
+Wait for the user's response before continuing to Step 3 or Step 4.
+
+### 2c. No overlap
+
+If there is **no** overlap at all, inform the user briefly (e.g., "No existing entries overlap with this PR — proceeding to update.") and continue.
+
+## Step 3 — Update CHANGELOG.md
+
+### Format rules (CI-enforced)
+
+The CHANGELOG structure is **strict**. Every version section must contain exactly these headings in this order:
+
+```markdown
+## <version> (date or Unreleased)
+
+### Features Added
+
+### Breaking Changes
+
+### Bugs Fixed
+
+### Other Changes
+```
+
+- **Do not** add, remove, rename, or reorder these headings.
+- **Do not** delete or modify existing entries — all changes are **additive only**. Append new bullets below existing ones.
+- If Step 2 identified a topical overlap and the user chose to merge/update an existing entry, that is the **only** case where you may edit an existing bullet — and only as the user directed.
+- Only modify the target version section (usually the `(Unreleased)` one).
+- Each entry is a markdown list item starting with `- `.
+
+### What goes where
+
+| Heading | Content |
+|---------|---------|
+| **Features Added** | New public classes, methods, enums, samples, tools, client capabilities. |
+| **Breaking Changes** | Renamed/removed classes, renamed methods, changed enum values, changed method signatures, service version changes. |
+| **Bugs Fixed** | Fixes to incorrect behavior (e.g., URL construction, serialization bugs). |
+| **Other Changes** | Dependency updates, spec regeneration, module-info changes, internal refactors. |
+
+### Writing guidelines
+
+1. **Summarize, don't enumerate.** Group related changes under a single bullet when there is an overarching pattern (e.g., "Methods across sub-clients were renamed to include the resource name" with sub-bullets for each client). Don't list every file touched.
+2. **Consumer perspective.** Only mention changes that matter to a user of the library. Internal implementation model renames that are not in the public API can be omitted.
+3. **Use code formatting** for class names, method names, and enum values: `` `ClassName` ``.
+4. **Show before → after** for renames: `` `OldName` → `NewName` `` or `` `OldName` renamed to `NewName` ``.
+5. **Group repetitive renames** by pattern. For example, if 10 tool classes were renamed from `*AgentTool` to `*Tool`, write one bullet with representative examples rather than 10 bullets.
+6. **Mention service version changes** (e.g., date-based to `v1`) in Breaking Changes.
+7. **Don't over-list new models.** If the PR adds dozens of generated models, mention only the notable ones (new tool types, new feature-area models) and say "and related types" or similar.
+8. **Omit trivial internal changes** like parameter reordering in generated `@HostParam`/`@QueryParam` annotations, checkstyle suppression updates, or whitespace.
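+
+For instance, a grouped rename bullet following guidelines 4 and 5 might look like this (the class names are hypothetical):
+
+```markdown
+### Breaking Changes
+
+- Tool classes were renamed to drop the `Agent` infix, e.g. `CodeInterpreterAgentTool` → `CodeInterpreterTool` and `FileSearchAgentTool` → `FileSearchTool`, along with related types.
+```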
+
+## Step 4 — Update README.md
+
+### Format rules (CI-enforced)
+
+The README structure is also checked by CI. Follow the existing heading hierarchy exactly:
+
+```
+# <package name> client library for Java
+## Documentation
+## Getting started
+### Prerequisites
+### Adding the package to your product
+### Authentication
+## Key concepts
+### <concept>
+## Examples
+### <example>
+### Service API versions
+#### Select a service API version
+## Troubleshooting
+## Next steps
+## Contributing
+
+```
+
+- **Do not** remove or reorder the top-level headings.
+- **Do not** change the `[//]: #` version-update markers.
+- **Do not** delete or rewrite existing prose or snippets unless they reference renamed APIs from this PR and Step 2 confirmed no conflict (or the user approved the change).
+- You **may** add new `###` subsections under `## Key concepts` or `## Examples`.
+- Keep existing code snippets intact unless they reference renamed APIs.
+
+### What to update
+
+1. **Package description** (opening paragraph): mention the REST API version if it changed (e.g., "targets the **v1** REST API").
+2. **Code snippets**: update any code that references renamed methods, classes, or builder patterns. Keep the `java com.azure...` snippet tags intact.
+3. **Sub-client lists**: if new sub-clients were added, add them. Mark preview sub-clients with **(preview)**.
+4. **Preview tools / features**: if the package defines `Tool` subclasses, document which are GA and which are preview (look for `Preview` in the class name or discriminator value). Use a table.
+5. **Opt-in flags / experimental features**: if the package uses `FoundryFeaturesOptInKeys`, `AgentDefinitionFeatureKeys`, or `Foundry-Features` headers, document:
+ - Which sub-clients auto-set the header (check `*Impl.java` for hardcoded `foundryFeatures` strings).
+ - Which accept it as an optional parameter.
+ - List known flag values.
+6. **OpenAI direct-usage snippets**: if the builder URL construction changed (e.g., `/openai` → `/openai/v1`), update the snippet and surrounding prose. Remove references to removed imports like `AzureUrlPathMode` or `AzureOpenAIServiceVersion` if they no longer apply.
+
+### Determining preview tools
+
+For the `azure-ai-agents` package, look at classes extending `com.azure.ai.agents.models.Tool`:
+
+```bash
+# Read the Tool.java discriminator to find all subtypes
+grep -A1 'equals(discriminatorValue)' src/main/java/com/azure/ai/agents/models/Tool.java
+```
+
+Tools whose discriminator value or class name contains `preview` are preview tools. All others are GA.
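+
+That substring check can be sketched as follows (the helper name is illustrative):
+
+```bash
+# is_preview: classify a tool by whether its name or discriminator
+# value contains "preview" or "Preview".
+is_preview() {
+  case "$1" in
+    *[Pp]review*) return 0 ;;
+    *) return 1 ;;
+  esac
+}
+
+is_preview "BrowserAutomationPreviewTool" && echo "preview"
+is_preview "CodeInterpreterTool" || echo "GA"
+```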
+
+### Determining preview operation groups
+
+Check which `*Impl.java` files hardcode a `foundryFeatures` value:
+
+```bash
+grep -rl "final String foundryFeatures" src/main/java/*/implementation/*.java
+```
+
+Those operation groups are preview and auto-opt-in. Also check convenience client classes for `FoundryFeaturesOptInKeys` parameters — those are opt-in by caller.
+
+## Notes
+
+- **All edits are additive.** Never remove or rewrite existing content unless the user explicitly approves it after being consulted in Step 2.
+- When the PR diff is too large for `gh pr diff` (HTTP 406), use `gh api .../pulls/<pr-number>/files --paginate` instead.
+- Paginate with `--paginate` and page with `?per_page=100&page=N` as needed.
+- Always read the existing CHANGELOG and README **before** editing to avoid duplicating entries or breaking structure.
+- If the PR title/body provides a summary, use it as a starting point but verify against the actual file changes.
diff --git a/sdk/ai/.github/skills/run-tests/SKILL.md b/sdk/ai/.github/skills/run-tests/SKILL.md
new file mode 100644
index 000000000000..a18432e2eef5
--- /dev/null
+++ b/sdk/ai/.github/skills/run-tests/SKILL.md
@@ -0,0 +1,65 @@
+---
+name: run-tests
+description: 'Run project tests using Maven (mvn). WHEN: run tests; execute tests; mvn test; run specific test class; run tests with secrets.'
+---
+
+# Run Tests (Maven)
+
+Use Maven (`mvn`) to run tests. Confirm you are in a directory with a `pom.xml` (project root or module root).
+
+## Common commands
+
+### All tests (default)
+```bash
+mvn test
+```
+
+### Specific module (multi-module)
+```bash
+mvn -pl <module> -am test
+```
+
+### Specific test class or method (Surefire)
+```bash
+mvn -Dtest=MyTest test
+mvn -Dtest=MyTest#myMethod test
+```
+
+## Test modes (AZURE_TEST_MODE)
+If the user asks for live/record/playback, set the env var for the command:
+```powershell
+$env:AZURE_TEST_MODE = "LIVE"
+mvn test
+```
+
+## Running tests with secrets (wr-load)
+
+Use `wr-load` to load env vars from KeyVault before running tests. Combine in a single command so env vars persist:
+
+```bash
+wr-load -Resource <resource>; AZURE_TEST_MODE=LIVE mvn "-Dtest=<TestClass>" test
+```
+
+### Important: minimize wr-load calls
+`wr-load` fetches secrets from Azure KeyVault over the network. **Run it once**, capture the values, and reuse them. Do NOT call `wr-load` repeatedly across tool invocations. If you already know the env var values from a prior call, set them directly instead of calling `wr-load` again.
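+
+The "run once and reuse" idea can be sketched generically (the helper name and cache path are illustrative, and the `wr-load` invocation is assumed to print `export NAME=value` lines):
+
+```bash
+# load_once <cache-file> <loader-command...>
+# Run the (expensive) loader only when no cached copy exists, then
+# source the cache so the variables persist in this shell.
+load_once() {
+  cache=$1; shift
+  [ -f "$cache" ] || "$@" > "$cache"
+  . "$cache"
+}
+
+# Hypothetical usage:
+# load_once /tmp/secrets.env wr-load -Resource <resource>
+```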
+
+## Steps
+1. Ensure you're in the correct Maven project directory (contains `pom.xml`). If not, ask for the correct path.
+2. Start simple: `mvn "-Dtest=<TestClass>" test`. Only add flags if something fails.
+3. If the user provides a test name, use `-Dtest=<name>`.
+4. If the user specifies a module, use `-pl <module> -am test`.
+5. If the user specifies a test mode (LIVE/RECORD/PLAYBACK), set the env var.
+6. If tests require secrets, use `wr-load` as shown above — **once**.
+7. If the command fails, report the error output and ask how they want to proceed.
+
+## Troubleshooting (only add flags when needed)
+
+Apply these **only** if the simple `mvn test` command fails with the specific error described:
+
+- **Samples fail to compile on Java 8 base-testCompile**: add `-Dbuildhelper.addtestsource.skip=true -Dbuildhelper.addtestresource.skip=true`
+- **JPMS/module-path errors** (e.g., `okio` module issues): add `-Dsurefire.useModulePath=false`
+- **Build plugins block the run**: add skip flags as needed, e.g. `-Denforcer.skip=true -Dcodesnippet.skip=true -Dcheckstyle.skip=true`
+- **Reactor blocking errors in async tests** (Netty thread): set the environment variable `AZURE_TEST_HTTP_CLIENTS=okhttp` (PowerShell: `$env:AZURE_TEST_HTTP_CLIENTS = "okhttp"`)
+- **SSL handshake / PKIX path building failed** (`SSLHandshakeException`, `unable to find valid certification path to requested target`): the JVM's trust store is missing the corporate root CA. Add `-Djavax.net.ssl.trustStoreType=WINDOWS-ROOT` to use the Windows certificate store instead. This is common on corporate networks with proxy/firewall TLS interception.
+
+Do NOT preemptively add all these flags. Start simple and escalate only on failure.
diff --git a/sdk/ai/.github/skills/samples/SKILL.md b/sdk/ai/.github/skills/samples/SKILL.md
new file mode 100644
index 000000000000..66c9aea6eac8
--- /dev/null
+++ b/sdk/ai/.github/skills/samples/SKILL.md
@@ -0,0 +1,203 @@
+---
+name: samples
+description: 'Generate Java samples for azure-ai-agents or azure-ai-projects by referencing existing Java samples for format and equivalent Python samples for CRUD flows. WHEN: write samples for a feature area; generate Java samples; create sample code; write agent samples; new feature samples.'
+---
+
+# Generate Java Samples
+
+Write Java sample files for the Azure AI Agents SDK (`azure-ai-agents`) or Azure AI Projects SDK (`azure-ai-projects`), following the established format and referencing equivalent Python samples for feature coverage.
+
+## Preconditions
+
+- You must be working within `sdk/ai/azure-ai-agents` or `sdk/ai/azure-ai-projects`.
+- The user must specify the **feature area** (e.g. toolboxes, hosted agents, sessions, skills, conversations, memory), OR ask to write samples for "new features" — in which case, use the **api-diff** skill first to identify what functionality areas have new API additions that need samples.
+- The user must provide a **live service endpoint** for testing.
+
+## Integration with api-diff
+
+When the user asks to write samples for new or changed functionality without specifying a feature area:
+
+1. Use the **api-diff** skill to identify new API additions and their functionality buckets.
+2. Cross-reference the bucketed results against existing sample directories:
+ ```bash
+ ls src/samples/java/com/azure/ai/agents/
+ ```
+3. Write samples for any new functionality bucket that has **no corresponding sample directory or file**.
+4. For existing buckets with new methods, add samples covering the new operations.
+
+## Reference Materials
+
+### Existing Java samples (format reference)
+
+| Package | Samples location |
+|---------|-----------------|
+| `azure-ai-agents` | `sdk/ai/azure-ai-agents/src/samples/java/com/azure/ai/agents/` |
+| `azure-ai-projects` | `sdk/ai/azure-ai-projects/src/samples/java/com/azure/ai/projects/` |
+
+Two styles exist:
+
+**Style A — One operation per file** (used in `azure-ai-agents`):
+- Separate file per CRUD operation (e.g. `CreateAgent.java`, `GetAgent.java`, `DeleteAgent.java`)
+- Organized in subdirectories by feature (e.g. `agents/`, `toolboxes/`, `hostedagents/`, `conversations/`)
+- `public static void main(String[] args)` entry point
+- Javadoc class comment explaining the sample
+
+**Style B — All operations in one file** (used in `azure-ai-projects`):
+- Single file per feature with multiple methods (e.g. `SkillsSample.java`)
+- Methods wrapped with `// BEGIN:` / `// END:` codesnippet markers
+- Client constructed as a class-level field
+
+Use whichever style matches the target package.
+
+### Python samples (feature/flow reference)
+
+Before writing a sample for a specific operation, search the Python SDK for an equivalent sample demonstrating the same functionality.
+
+#### Locating the Python SDK
+
+Try these approaches in order:
+
+1. **Ask the user** for the path to their local `azure-sdk-for-python` checkout.
+2. **Check common locations**:
+ ```bash
+ # Sibling to the Java SDK repo
+ ls -d $(dirname $(git rev-parse --show-toplevel))/azure-sdk-for-python 2>/dev/null
+ # Home directory
+ ls -d ~/azure-sdk-for-python 2>/dev/null
+ # Search for it
+ find /home -maxdepth 3 -name "azure-sdk-for-python" -type d 2>/dev/null | head -3
+ ```
+3. **Browse the web** if no local checkout is available — check the GitHub repository at `https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-projects/samples/` for equivalent Python samples.
+
+#### Searching for equivalent samples
+
+Once the Python SDK is located (locally or via web):
+
+```bash
+find <python-sdk-path>/sdk/ai -name "sample_*.py" | xargs grep -l "<keyword>"
+```
+
+Use matching Python samples to determine:
+- What operations to cover
+- The expected CRUD flow and ordering
+- What fields/parameters to demonstrate
+- Cleanup/teardown patterns
+
+If no Python sample exists for the operation, write the Java sample based on the API surface alone.
+
+## Java Sample Format Rules
+
+1. **License header**: `// Copyright (c) Microsoft Corporation. All rights reserved.` + `// Licensed under the MIT License.`
+2. **Package**: matches the directory (e.g. `package com.azure.ai.agents.toolboxes;`)
+3. **Endpoint from env**: `Configuration.getGlobalConfiguration().get("FOUNDRY_PROJECT_ENDPOINT")`
+4. **Auth**: `new DefaultAzureCredentialBuilder().build()`
+5. **Client construction**: `new AgentsClientBuilder().credential(...).endpoint(...).buildClient()`
+6. **No hardcoded endpoints or secrets**
+7. **Print results**: use `System.out.println` to show key fields from response objects
+8. **Imports**: only import what's used
+
+## Workflow
+
+### 1. Identify the feature area and target package
+
+Determine which package (`azure-ai-agents` or `azure-ai-projects`) and which feature subdirectory to use.
+
+### 2. Find equivalent Python samples
+
+Locate the Python SDK (ask the user, check common paths, or browse GitHub — see "Locating the Python SDK" above). Then search for samples covering the same feature:
+
+```bash
+find <python-sdk-path>/sdk/ai -name "sample_*.py" | xargs grep -l "<keyword>"
+```
+
+If no local checkout is available, check `https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-projects/samples/` for equivalent samples.
+
+Read matching Python samples to understand the intended CRUD flow and what operations to demonstrate.
+
+### 3. Discover available Java API
+
+Search the generated client classes for available methods:
+
+```bash
+grep -n "public.*<methodName>" src/main/java/com/azure/ai/agents/*Client.java
+```
+
+Check the models package for request/response types:
+
+```bash
+find src/main/java -name "*FeatureName*"
+```
+
+### 4. Check for preview feature flags
+
+Some operations require `Foundry-Features` headers. Look for `AgentDefinitionOptInKeys` parameters or check the TypeSpec routes for `required_previews`:
+
+```bash
+grep -n "foundryFeatures\|OptInKeys\|Foundry-Features" src/main/java/com/azure/ai/agents/*Client.java
+```
+
+If the convenience method doesn't have a `foundryFeatures` parameter, use the protocol method with `RequestOptions`:
+
+```java
+RequestOptions requestOptions = new RequestOptions()
+ .setHeader(HttpHeaderName.fromString("Foundry-Features"), "HostedAgents=V1Preview");
+```
+
+Known preview headers:
+- `HostedAgents=V1Preview` — sessions, hosted agent operations
+- `AgentEndpoints=V1Preview` — agent endpoint configuration
+- `ContainerAgents=V1Preview` — container agent operations
+
+### 5. Write the samples
+
+Create sample files following the appropriate style (A or B). Each sample should:
+- Be self-contained (can be copy-pasted and run)
+- Use placeholder values for resource names (e.g. `"my-toolbox-name"`) with comments indicating what to replace
+- Handle both convenience methods and protocol methods as needed
+- Mirror the flow demonstrated in the equivalent Python sample (if one exists)
+
+### 6. Verify compilation
+
+```bash
+cd <repo-root> && mvn compile -pl sdk/ai/<package> -am \
+ -DskipTests -Dcheckstyle.skip -Dspotbugs.skip -Drevapi.skip -Djacoco.skip \
+ -Denforcer.skip -Dcodesnippet.skip -Dcompile.samples=true -T 4
+```
+
+### 7. Test against live service
+
+Build the package and run each sample:
+
+```bash
+# Build without tests
+mvn package -pl sdk/ai/<package> -am \
+ -DskipTests -Dmaven.test.skip=true -Dmaven.javadoc.skip=true -Dsource.skip=true \
+ -Dcheckstyle.skip -Dspotbugs.skip -Drevapi.skip -Djacoco.skip -Denforcer.skip \
+ -Dgpg.skip -Dcodesnippet.skip -T 4
+
+# Compile samples against the packaged jar
+AGENTS_JAR="sdk/ai/<package>/target/<artifact>-<version>.jar"
+CP=$(mvn -pl sdk/ai/<package> dependency:build-classpath -Dmdep.outputFile=/dev/stdout -q)
+javac -cp "$AGENTS_JAR:$CP" -d /tmp/samples src/samples/java/com/azure/ai/agents/<feature>/*.java
+
+# Run with endpoint set
+export FOUNDRY_PROJECT_ENDPOINT="<endpoint>"
+java -cp "/tmp/samples:$AGENTS_JAR:$CP" com.azure.ai.agents.<feature>.<SampleClass>
+```
+
+### 8. Handle errors
+
+Common issues:
+- **403 `preview_feature_required`**: Add the required `Foundry-Features` header
+- **404 `not_found`**: Resource doesn't exist yet (expected for Get/Delete samples with placeholder IDs)
+- **424 `session_not_ready`**: Container image doesn't implement `/readiness` endpoint (expected for non-production images)
+- **Compilation errors with modular JARs**: Use `mvn package` and compile against the packaged jar (not `target/classes`)
+
+## Troubleshooting
+
+| Symptom | Cause | Fix |
+|---------|-------|-----|
+| Samples fail `mvn compile` | codesnippet plugin scans `TempTypeSpecFiles/node_modules` | Add `-Dcodesnippet.skip` |
+| `module-info.class` errors with `javac` | Modular JARs on classpath | Use packaged jar approach (step 7) |
+| Java 8 base-testCompile overwrites classes | Multi-release compilation | Compile samples against packaged jar instead of `target/test-classes` |
+| Method not found on client | API might be protocol-only | Check for `*WithResponse` methods that take `BinaryData` + `RequestOptions` |
diff --git a/sdk/ai/.github/skills/search-m2/SKILL.md b/sdk/ai/.github/skills/search-m2/SKILL.md
new file mode 100644
index 000000000000..b5598ae06f17
--- /dev/null
+++ b/sdk/ai/.github/skills/search-m2/SKILL.md
@@ -0,0 +1,49 @@
+---
+name: search-m2
+description: 'Search for Java classes inside Maven dependencies in ~/.m2. Cross-reference pom.xml files in the current directory to resolve dependency names/versions. WHEN: find class in Maven dependency; locate JAR in m2; inspect dependency classes; search Maven local repo.'
+---
+
+# Search Maven Local Repository (search-m2)
+
+Use this skill to find which dependency JAR contains a class and to inspect JAR contents in `~/.m2/repository`.
+
+## Preconditions
+- Work from the user's current directory.
+- Look for `pom.xml` files in the current directory tree and use them to resolve dependency names and versions.
+
+## Key paths
+- Maven local repo: `~/.m2/repository`
+- Dependency JAR path pattern:
+  `~/.m2/repository/<group-path>/<artifact>/<version>/<artifact>-<version>.jar` (where `<group-path>` is the groupId with dots replaced by slashes)
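+
+The coordinate-to-path mapping can be expressed as a small helper (the name is illustrative):
+
+```bash
+# m2_jar_path <groupId> <artifactId> <version>
+# Dots in the groupId become path segments under ~/.m2/repository.
+m2_jar_path() {
+  group_path=$(printf '%s' "$1" | tr '.' '/')
+  printf '%s\n' "$HOME/.m2/repository/$group_path/$2/$3/$2-$3.jar"
+}
+
+m2_jar_path com.azure azure-core 1.45.0
+# → $HOME/.m2/repository/com/azure/azure-core/1.45.0/azure-core-1.45.0.jar
+```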
+
+## Steps
+1. **Discover pom files**: run `find . -name pom.xml` from the current directory.
+ - If multiple poms exist, prefer the closest one to the current directory or ask the user which project/module to use.
+2. **Extract dependency coordinates**:
+ - If the user provides a group/artifact, search the poms for that dependency and read its version.
+ - If versions are defined via properties (e.g., `${foo.version}`), resolve the property from the same pom (or parent if obvious).
+3. **Resolve the JAR path** using the groupId/artifactId/version mapping above.
+   - If the version is not found, list available versions in `~/.m2/repository/<group-path>/<artifact>/` and ask the user which to inspect.
+4. **Search for classes**:
+   - List classes: `jar tf <jar> | rg '\.class$'`
+   - Find a specific class: `jar tf <jar> | rg '<ClassName>(\.class)?$'`
+5. **Inspect class details** (if requested):
+   - `javap -classpath <jar> <fully.qualified.ClassName>`
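+
+Step 2's property indirection can be sketched like this (a minimal sketch: it assumes the `<version>` tag appears within a few lines of the matching `<artifactId>` and follows only one level of `${property}` defined in the same pom):
+
+```bash
+# resolve_version <pom.xml> <artifactId>
+resolve_version() {
+  pom=$1; artifact=$2
+  # Grab the <version> near the dependency's <artifactId>.
+  v=$(grep -A3 "<artifactId>$artifact</artifactId>" "$pom" \
+    | sed -n 's/.*<version>\(.*\)<\/version>.*/\1/p' | head -1)
+  case "$v" in
+    '${'*'}')
+      # Strip ${ and } to get the property name, then look it up.
+      p=${v#'${'}; p=${p%\}}
+      v=$(sed -n "s/.*<$p>\(.*\)<\/$p>.*/\1/p" "$pom" | head -1)
+      ;;
+  esac
+  printf '%s\n' "$v"
+}
+```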
+
+## Useful commands
+
+### Find a class across all jars (fallback)
+```bash
+rg -g '*.jar' --files ~/.m2/repository | while read -r jar; do
+ jar tf "$jar" | rg -q 'com/example/MyClass.class' && echo "$jar"
+done
+```
+
+### Resolve versions for a dependency
+```bash
+ls ~/.m2/repository/<group-path>/<artifact>/
+```
+
+## Notes
+- If the user is vague, start by identifying the relevant pom.xml and extracting dependency coordinates.
+- If the dependency is transitive and not in the pom, check `mvn -q -DskipTests dependency:tree` (only if needed and the user agrees).
diff --git a/sdk/ai/.github/skills/test-proxy/SKILL.md b/sdk/ai/.github/skills/test-proxy/SKILL.md
new file mode 100644
index 000000000000..9927fa05408a
--- /dev/null
+++ b/sdk/ai/.github/skills/test-proxy/SKILL.md
@@ -0,0 +1,36 @@
+---
+name: test-proxy
+description: 'Push test-proxy recordings/assets using the test-proxy CLI (e.g., test-proxy push -a assets.json). WHEN: push test recordings; publish test-proxy assets; test-proxy push; upload recordings.'
+---
+
+# Test Proxy Recordings
+
+Use this skill to publish recordings to the test-proxy assets repo.
+
+## Command
+```bash
+test-proxy push -a assets.json
+```
+
+## Steps
+1. Check if `test-proxy` is installed by running `test-proxy --version`.
+ - If the command is **not found**, install it (see [Installation](#installation) below).
+2. Confirm the assets file path (default: `assets.json` in the current directory).
+3. If the file location is unclear, search for it or ask the user.
+4. Run `test-proxy push -a <path-to-assets.json>`.
+5. Report success or any errors.
+
+## Installation
+
+If `test-proxy` is missing, install it as a .NET global tool.
+
+### Prerequisites
+.NET 8.0 (LTS) or later must be installed. Verify with `dotnet --version`.
+If missing, ask the user to install the .NET SDK from https://dotnet.microsoft.com/download.
+
+### Install command
+```powershell
+dotnet tool update azure.sdk.tools.testproxy --global --prerelease --add-source https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-net/nuget/v3/index.json --ignore-failed-sources
+```
+
+After installation, verify with `test-proxy --version`.
diff --git a/sdk/ai/.github/skills/tsp-naming-collision/SKILL.md b/sdk/ai/.github/skills/tsp-naming-collision/SKILL.md
new file mode 100644
index 000000000000..1612bdea0a19
--- /dev/null
+++ b/sdk/ai/.github/skills/tsp-naming-collision/SKILL.md
@@ -0,0 +1,149 @@
+---
+name: tsp-naming-collision
+description: 'Fix Java codegen parameter names that end with a numeric suffix (e.g. createAgentRequest1) caused by TypeSpec model names colliding with synthetic body type names. WHEN: fix parameter name ending in 1; fix naming collision; clientName override; TypeSpec model name collision.'
+---
+
+# Fix TypeSpec Naming Collisions in Java Codegen
+
+Fix parameter and implementation-model names that end with a numeric suffix (e.g. `createAgentRequest1`) in generated Java client code. This happens when a TypeSpec **named `model`** collides with the **synthetic body type** the Java codegen creates for an operation that spreads that model.
+
+## Root Cause
+
+When a TypeSpec route **spreads** a named model into an operation's parameters (via `...ModelName`), the Java codegen creates a synthetic body type to hold the body properties. It names this type `{OperationName}Request` (PascalCase of the operation name + `Request`). If a TypeSpec `model` already has that exact name, the codegen resolves the collision by appending `1`.
+
+Example collision:
+- TypeSpec model: `CreateAgentRequest`
+- Operation: `createAgent` spreads `...CreateAgentRequest`
+- Codegen synthetic body: wants name `CreateAgentRequest` → collision → `CreateAgentRequest1`
+- Result: parameter `createAgentRequest1`, implementation class `CreateAgentRequest1.java`
+
+> **Why aliases don't collide:** TypeSpec `alias` declarations don't occupy a name in the type namespace, so there is no collision. If the upstream spec used `alias CreateAgentRequest = { ... }` instead of `model CreateAgentRequest { ... }`, no fix would be needed.
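+
+A minimal illustration of the colliding shape (the names follow the example above; the snippet is illustrative, not copied from the actual spec):
+
+```typespec
+// A named model occupies the name "CreateAgentRequest"...
+model CreateAgentRequest {
+  name: string;
+}
+
+// ...and spreading it makes the Java codegen want a synthetic body type
+// with that same name, so it falls back to CreateAgentRequest1.
+@post
+op createAgent(...CreateAgentRequest): Agent;
+
+// An alias would not occupy the name, so no collision:
+// alias CreateAgentRequest = { name: string };
+```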
+
+## Preconditions
+
+- You must be in the directory that contains `tsp-location.yaml`.
+- The TypeSpec must already be synced locally into `TempTypeSpecFiles/`. If not, run `tsp-client sync` first.
+- Identify the `client.tsp` file inside `TempTypeSpecFiles/` (usually under a subdirectory like `sdk-agents/`). This is the customization file where fixes are applied.
+
+## Integration with api-diff
+
+After a codegen update, use the **api-diff** skill to identify newly added methods. Check those new methods for `1`-suffixed parameter names — existing methods have already been fixed.
+
+## Important: TempTypeSpecFiles is volatile
+
+`TempTypeSpecFiles/` is regenerated on every `tsp-client sync` or `tsp-client update`. **Changes made only in `TempTypeSpecFiles/` will be lost.** Always apply the same edits to the corresponding `client.tsp` in a local checkout of `Azure/azure-rest-api-specs` (if available) so the changes can be committed to a PR.
+
+## Workflow
+
+### 1. Identify affected operations
+
+Search the generated Java client classes for parameter names ending with `1`:
+
+```bash
+grep -n 'Request1[,)]' src/main/java/com/azure/ai/agents/*Client.java
+```
+
+Also check for implementation model classes with the `1` suffix:
+
+```bash
+find . -name "*Request1.java" -path "*/implementation/models/*"
+```
+
+Collect the list of affected names (e.g. `createAgentRequest1`, `updateAgentRequest1`).
+
+### 2. Trace back to the TypeSpec models
+
+For each affected parameter, find the TypeSpec model that causes the collision. The model name matches the parameter name (PascalCase, without the `1`).
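+
+As a sketch (a hypothetical helper, not an SDK tool), the mapping from a suffixed Java parameter name back to the TypeSpec model name is just "strip trailing digits, then capitalize the first letter":
+
+```java
+public class NameTrace {
+    // createAgentRequest1 -> CreateAgentRequest
+    static String toModelName(String parameterName) {
+        String base = parameterName.replaceAll("\\d+$", ""); // drop the numeric suffix
+        return Character.toUpperCase(base.charAt(0)) + base.substring(1); // PascalCase
+    }
+
+    public static void main(String[] args) {
+        System.out.println(toModelName("createAgentRequest1")); // CreateAgentRequest
+        System.out.println(toModelName("updateAgentRequest1")); // UpdateAgentRequest
+    }
+}
+```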
+
+Search the `.tsp` files:
+
+```bash
+grep -rn "model CreateAgentRequest\|model UpdateAgentRequest" TempTypeSpecFiles/ --include="*.tsp"
+```
+
+Confirm the model is used via spread (`...ModelName`) in the route definitions:
+
+```bash
+grep -rn "CreateAgentRequest\|UpdateAgentRequest" TempTypeSpecFiles/ --include="*.tsp"
+```
+
+Verify the model is a `model` (not an `alias`). Only named `model` types cause collisions.
+
+### 3. Add `@@clientName` overrides
+
+Edit the `client.tsp` customization file in `TempTypeSpecFiles/`. Add a `@@clientName` directive for each colliding model to give it a different client-side name. This frees the original name for the codegen's synthetic body type.
+
+Use a consistent naming convention. Recommended: rename `*Request` → `*Input`:
+
+```tsp
+// Rename request models to avoid collision with synthetic body types generated
+// by the Java codegen. The codegen names synthetic bodies as {OperationName}Request,
+// which clashes with the identically-named TypeSpec models, causing a "1" suffix.
+@@clientName(CreateAgentRequest, "CreateAgentInput");
+@@clientName(UpdateAgentRequest, "UpdateAgentInput");
+```
+
+> **Note:** These models are typically not emitted as public Java classes — they only exist to be spread into operations. The rename is purely internal and does not affect the public API surface.
+
+### 4. Regenerate and verify
+
+Generate with `--save-inputs` to preserve the edited TypeSpec files:
+
+```bash
+tsp-client generate --save-inputs
+```
+
+Verify the `1` suffix is gone:
+
+```bash
+# Should return zero matches
+grep -c "Request1" src/main/java/com/azure/ai/agents/*Client.java
+
+# Old *Request1.java files should no longer exist
+find src -name "*Request1.java" -path "*/implementation/models/*"
+
+# New clean-named files should exist
+find src -name "*Request.java" -path "*/implementation/models/*"
+```
+
+Compile to confirm no breakage:
+
+```bash
+mvn compile -Denforcer.skip=true -Dcodesnippet.skip=true -Dcheckstyle.skip=true \
+ -Dspotbugs.skip=true -Dspotless.skip=true -Drevapi.skip=true -Djacoco.skip=true \
+ -Dmaven.javadoc.skip=true -Dshade.skip=true -Danimal.sniffer.skip=true
+```
+
+### 5. Apply changes to the local spec repo (if available)
+
+If the user has a local checkout of `Azure/azure-rest-api-specs`, apply the **same `client.tsp` edits** there. Derive the file path from `tsp-location.yaml`:
+
+- `directory` field gives the relative spec path (e.g. `specification/ai-foundry/data-plane/Foundry/src/sdk-agents`)
+- The file to edit is `client.tsp` inside that directory
+
+For example, if the local repo is at `~/code/azure_repos/azure-rest-api-specs`:
+
+```
+~/code/azure_repos/azure-rest-api-specs/specification/ai-foundry/data-plane/Foundry/src/sdk-agents/client.tsp
+```
+
+Verify the file exists before editing. If it doesn't, warn the user and print the expected path.
+
+### 6. Full round-trip from the remote spec (optional)
+
+If the user wants to validate the fix end-to-end from the remote repo:
+
+1. Commit and push the `client.tsp` changes in the spec repo
+2. Get the new commit hash
+3. Update the `commit:` field in `tsp-location.yaml` with the new hash
+4. Run `tsp-client update` to sync and regenerate from the remote
+5. Verify the `1` suffix is gone and the build compiles
+
+## Troubleshooting
+
+| Symptom | Cause | Fix |
+|---------|-------|-----|
+| `1` suffix persists after adding `@@clientName` | The `@@clientName` target doesn't match the TypeSpec model name exactly | Double-check the model name is the TypeSpec name (not the Java name); names are case-sensitive |
+| New suffix appears (e.g. `2`) | Multiple models collide with the same synthetic name | Ensure every colliding model has a unique `@@clientName` |
+| Build fails after regeneration | Handwritten code references the old `*1` names | Update any manual references in custom client code, tests, or samples |
+| Changes lost after `tsp-client sync` | `TempTypeSpecFiles/` was overwritten | Apply changes to the spec repo `client.tsp` (see step 5) |
diff --git a/sdk/ai/.github/skills/tsp-type-override/SKILL.md b/sdk/ai/.github/skills/tsp-type-override/SKILL.md
new file mode 100644
index 000000000000..900e29b89595
--- /dev/null
+++ b/sdk/ai/.github/skills/tsp-type-override/SKILL.md
@@ -0,0 +1,239 @@
+---
+name: tsp-type-override
+description: 'Override TypeSpec types with Java-native types (e.g. OffsetDateTime, DayOfWeek) using @@alternateType in a client.java.tsp file. WHEN: override TypeSpec type; alternateType for Java type; fix incorrect TypeSpec type; map TypeSpec to OffsetDateTime.'
+---
+
+# TypeSpec Type Override for Java
+
+Override types in generated Java code by adding `@@alternateType` decorators to the `client.java.tsp` file.
+
+## Preconditions
+- You must be in the directory that contains `tsp-location.yaml`.
+- The TypeSpec must already be synced locally into `TempTypeSpecFiles/`. If not, run `tsp-client sync` first.
+- Identify the `client.java.tsp` file inside `TempTypeSpecFiles/`. This is the **only** file you should edit.
+
+## Important: TempTypeSpecFiles is volatile
+
+`TempTypeSpecFiles/` is a **transient working directory** managed by `tsp-client`. It is regenerated on every `tsp-client sync` or `tsp-client update` and is typically gitignored. **Any changes made only in `TempTypeSpecFiles/` will be lost** on the next sync/update cycle.
+
+If the user provides a **local checkout of `Azure/azure-rest-api-specs`**, always apply the same `client.java.tsp` edits there so the changes are preserved and can be committed to a PR. The path to the spec file inside that repo can be derived from `tsp-location.yaml`:
+- `directory` field gives the relative path (e.g. `specification/ai-foundry/data-plane/Foundry`)
+- The file to edit is `client.java.tsp` inside that directory
+
+## Workflow
+
+### 1. Locate the TypeSpec model and field
+
+Search the `.tsp` files under `TempTypeSpecFiles/` for the model and field the user wants to override:
+
+```bash
+grep -rn "\|" TempTypeSpecFiles/ --include="*.tsp"
+```
+
+Read the model definition to confirm the current type of the target field (e.g. `string`, `int32`, a union, etc.).
+
+### 2. Determine the correct decorator form
+
+There are **two forms** of `@@alternateType`. Choose based on the target type:
+
+#### Form A — TypeSpec built-in type (preferred when possible)
+
+Use when there is a TypeSpec scalar that the Java emitter already maps to the desired Java type.
+
+| Desired Java type | TypeSpec alternate |
+|-----------------------|------------------------|
+| `OffsetDateTime` | `utcDateTime` |
+| `Duration` | `duration` |
+| `byte[]` | `bytes` |
+| `long` / `Long` | `int64` |
+| `double` / `Double` | `float64` |
+
+Syntax (applied to a **model property**, scoped to Java):
+
+```tsp
+@@alternateType(ModelName.fieldName, utcDateTime, "java");
+```
+
+This form is **fully supported** on model properties.
+
+#### Form B — External Java type via identity (on a type definition)
+
+Use when no TypeSpec scalar maps to the desired Java type (e.g. `java.time.DayOfWeek`), or when you want to **prevent emission of a model entirely** by mapping it to an existing external class (e.g. an openai-java type).
+
+Syntax (applied to the **type definition itself**, not a property):
+
+```tsp
+@@alternateType(TypeName, { identity: "fully.qualified.ClassName" }, "java");
+```
+
+> **Important constraints for external types:**
+> - External types (`{ identity: ... }`) **cannot** be applied to model properties — they must target the type definition (Model, Enum, Union, Scalar).
+> - A `scope` parameter (e.g. `"java"`) is **required** for external types.
+> - **Known limitation (as of typespec-java 0.39.x):** The Java emitter does not fully support external types on Enum/Union definitions. It will still generate the class instead of referencing the JDK type. This is tracked as a bug. Only use Form B for Model types until the emitter is fixed.
+
+> **De-duplication use case:** Form B can suppress emission of generated models that duplicate an external dependency. For example, `@@alternateType(OpenAI.Reasoning, { identity: "com.openai.models.Reasoning" }, "java")` prevents the codegen from emitting its own `Reasoning` class — any property typed as `OpenAI.Reasoning` will use `com.openai.models.Reasoning` directly. This works for Model types that are members of unions too (e.g. `ComparisonFilter` inside a `Filters` union). See the `dedup-openai` skill for the full workflow including serialization fixes.
+
+#### Form C — External Java type on a single property (model indirection)
+
+Use when you need to override a **single property** to an external Java type, but Form B cannot be used because it would change the type globally, and `{ identity: ... }` cannot be applied directly to properties.
+
+The workaround is a **two-step indirection**: define a dummy model annotated with the external identity, then use `@@alternateType` on the property pointing to that model.
+
+```tsp
+// Step 1: Define a dummy model with the external Java type identity
+@alternateType({ identity: "java.util.TimeZone" }, "java")
+model TimeZoneType {}
+
+// Step 2: Override the property type to use the dummy model
+@@alternateType(OpenAI.ApproximateLocation.timezone, TimeZoneType, "java");
+```
+
+**Why this works:** `@alternateType({identity: ...})` is supported on Model definitions (Form B). The `@@alternateType` on a property accepts any TypeSpec type as the alternate (Form A). By combining both, the property override resolves through the model to the external Java class.
+
+**What does NOT work (and why this form exists):**
+- `{ identity: ... }` directly on a property → compiler error / silently ignored.
+- `@alternateType({identity: ...})` on a `scalar` → the Java emitter ignores the identity and falls back to `BinaryData` (as of typespec-java 0.40.x).
+- `@alternateType({identity: ...})` on a `scalar extends string` → the emitter ignores the identity entirely and uses `String`.
+
+**Important:** The Java emitter generates `writeJsonField` / `TypeName.fromJson(reader)` calls for the overridden property, which will **not compile** because the external Java type (e.g. `java.util.TimeZone`) does not implement `JsonSerializable`. You **must** fix `toJson`/`fromJson` manually — see step 5 below.
+
+### 3. Apply the override
+
+Edit the `client.java.tsp` file inside `TempTypeSpecFiles/`. Add the decorator(s) under the type-replacement section (usually at the bottom of the file):
+
+```tsp
+// EvaluatorVersion datetime fields are typed as string in the spec
+@@alternateType(EvaluatorVersion.created_at, utcDateTime, "java");
+```
+
+### 4. Generate and verify
+
+Always generate with `--save-inputs` so the edited TypeSpec files are preserved:
+
+```bash
+tsp-client generate --save-inputs
+```
+
+After generation, verify the Java source uses the expected type:
+
+```bash
+grep -n "OffsetDateTime\|DayOfWeek\|" src/main/java/com/azure/ai/projects/models/.java
+```
+
+Check that:
+- The field type changed (e.g. `private OffsetDateTime createdAt;`)
+- The getter return type changed (e.g. `public OffsetDateTime getCreatedAt()`)
+- The JSON deserialization uses the correct parser (e.g. `CoreUtils.parseBestOffsetDateTime`)
+- No spurious files were generated (e.g. under `src/main/java/java/`)
+
+### 5. Write unit tests for serialization/deserialization
+
+After generation, **always** write a unit test that verifies the generated model serializes and deserializes the overridden type to the **same wire-format values** defined in the original TypeSpec. This is critical because the emitter may generate serialization code that does not match the API wire format (e.g. `java.time.DayOfWeek.name()` produces `"MONDAY"` but the TypeSpec union defined `"Monday"`).
+
+Place the test class under `src/test/java/` in the model's package (e.g. `com.azure.ai.projects.models`).
+
+The test must cover three scenarios:
+
+1. **Serialization** — Construct the model with the Java type, serialize to JSON, and assert the JSON string values match the TSP-defined wire format (e.g. PascalCase `"Monday"`, not UPPER_CASE `"MONDAY"`).
+2. **Deserialization** — Parse a JSON string using the TSP-defined wire-format values and assert the Java type is correctly populated.
+3. **Round-trip** — Serialize → deserialize and assert the original values are preserved.
+
+Example test skeleton:
+
+```java
+@Test
+void serializationProducesWireFormatValues() throws IOException {
+ // Build model with the Java type
+ var schedule = new WeeklyRecurrenceSchedule(Arrays.asList(DayOfWeek.MONDAY, DayOfWeek.FRIDAY));
+ String json = toJsonString(schedule);
+ // Assert the wire values match the TSP union/enum values, NOT the Java enum constant names
+ String expected = "{\"daysOfWeek\":[\"Monday\",\"Friday\"],\"type\":\"Weekly\"}";
+ assertEquals(expected, json);
+}
+
+@Test
+void deserializationParsesWireFormatValues() throws IOException {
+ // Use TSP-defined wire-format values
+ String json = "{\"daysOfWeek\":[\"Monday\",\"Wednesday\"],\"type\":\"Weekly\"}";
+ WeeklyRecurrenceSchedule schedule;
+ try (JsonReader reader = JsonProviders.createReader(json)) {
+ schedule = WeeklyRecurrenceSchedule.fromJson(reader);
+ }
+ assertEquals(Arrays.asList(DayOfWeek.MONDAY, DayOfWeek.WEDNESDAY), schedule.getDaysOfWeek());
+}
+
+@Test
+void roundTripPreservesValues() throws IOException {
+ var original = new WeeklyRecurrenceSchedule(Arrays.asList(DayOfWeek.SUNDAY, DayOfWeek.SATURDAY));
+ String json = toJsonString(original);
+ WeeklyRecurrenceSchedule deserialized;
+ try (JsonReader reader = JsonProviders.createReader(json)) {
+ deserialized = WeeklyRecurrenceSchedule.fromJson(reader);
+ }
+ assertEquals(original.getDaysOfWeek(), deserialized.getDaysOfWeek());
+}
+```
+
+#### If the tests fail: customize toJson/fromJson
+
+When the emitter generates incorrect serialization (e.g. `element.name()` instead of PascalCase), you must manually fix the `toJson` and `fromJson` methods in the generated model class:
+
+1. **Remove `@Generated` and follow the `codegen-survival-rules` skill** — remove `@Generated` from `toJson` and `fromJson`, place marker comments inside method bodies, not above signatures.
+2. Fix the serialization logic to convert between the Java type and the TSP wire format. For example, for `java.time.DayOfWeek`:
+ - **`toJson`**: convert `DayOfWeek.MONDAY` → `"Monday"` (PascalCase) using a helper like:
+ ```java
+ private static String toPascalCase(DayOfWeek day) {
+ String name = day.name();
+ return name.charAt(0) + name.substring(1).toLowerCase(Locale.ROOT);
+ }
+ ```
+ - **`fromJson`**: convert `"Monday"` → `DayOfWeek.MONDAY` by uppercasing before `valueOf()`:
+ ```java
+ DayOfWeek.valueOf(reader.getString().toUpperCase(Locale.ROOT))
+ ```
+3. Re-run the unit tests and confirm all three scenarios pass.
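+
+The conversion pair above can be verified in isolation with a stdlib-only round trip (same logic as the helpers in step 2, shown runnable):
+
+```java
+import java.time.DayOfWeek;
+import java.util.Locale;
+
+public class WireFormatDemo {
+    // Java enum constant -> PascalCase wire value, e.g. MONDAY -> "Monday"
+    static String toPascalCase(DayOfWeek day) {
+        String name = day.name();
+        return name.charAt(0) + name.substring(1).toLowerCase(Locale.ROOT);
+    }
+
+    // PascalCase wire value -> Java enum constant, e.g. "Monday" -> MONDAY
+    static DayOfWeek fromWire(String wire) {
+        return DayOfWeek.valueOf(wire.toUpperCase(Locale.ROOT));
+    }
+
+    public static void main(String[] args) {
+        for (DayOfWeek day : DayOfWeek.values()) {
+            if (fromWire(toPascalCase(day)) != day) {
+                throw new AssertionError("round trip failed for " + day);
+            }
+        }
+        System.out.println(toPascalCase(DayOfWeek.MONDAY)); // Monday
+    }
+}
+```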
+
+**Form C serialization fixes:** When using the model indirection (Form C), the emitter generates `writeJsonField("field", this.field)` and `ExternalType.fromJson(reader)` — both will fail to compile because the external Java type does not implement `JsonSerializable`. Fix by:
+ - **`toJson`**: replace `writeJsonField` with the correct writer method (e.g. `writeStringField("timezone", this.timezone != null ? this.timezone.getID() : null)`)
+ - **`fromJson`**: replace `ExternalType.fromJson(reader)` with the correct factory (e.g. `TimeZone.getTimeZone(reader.getString())`)
+
+### 6. Apply changes to the local spec repo (if provided)
+
+If the user supplied a local checkout path for `Azure/azure-rest-api-specs`, apply the **same edits** to the `client.java.tsp` there. Derive the file path from `tsp-location.yaml`:
+
+```
+<local-spec-repo-root>/<directory>/client.java.tsp
+```
+
+For example, if `directory: specification/ai-foundry/data-plane/Foundry` and the local repo is at `~/code/azure-rest-api-specs`:
+
+```
+~/code/azure-rest-api-specs/specification/ai-foundry/data-plane/Foundry/client.java.tsp
+```
+
+Verify the file exists before editing. If it doesn't, warn the user and print the expected path.
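+
+A minimal sketch of the path derivation and existence check (hypothetical helper; the repo root comes from the user and the `directory` value from `tsp-location.yaml`):
+
+```java
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+public class SpecPathCheck {
+    // Join: <local repo root> / <directory from tsp-location.yaml> / client.java.tsp
+    static Path specFile(String localRepoRoot, String directory) {
+        return Paths.get(localRepoRoot, directory, "client.java.tsp");
+    }
+
+    public static void main(String[] args) {
+        Path path = specFile(System.getProperty("user.home") + "/code/azure-rest-api-specs",
+            "specification/ai-foundry/data-plane/Foundry");
+        if (!Files.exists(path)) {
+            System.err.println("Expected spec file not found: " + path);
+        }
+    }
+}
+```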
+
+### 7. Remind the user about the spec PR
+
+After confirming the generated code is correct, remind the user:
+
+> The `@@alternateType` changes in `client.java.tsp` are local overrides in `TempTypeSpecFiles/`.
+> For these to persist across future code generations, the same changes must be contributed to the
+> **Azure/azure-rest-api-specs** repository via a pull request targeting the corresponding
+> `client.java.tsp` file under the `specification/` directory.
+>
+> Build the PR URL from `tsp-location.yaml`:
+> - Repo: `repo` field (e.g. `Azure/azure-rest-api-specs`)
+> - Directory: `directory` field (e.g. `specification/ai-foundry/data-plane/Foundry`)
+> - File: `client.java.tsp` in that directory
+
+## Troubleshooting
+
+| Symptom | Cause | Fix |
+|---------|-------|-----|
+| Generated class still uses `String` | Decorator not picked up | Verify the model/field names match exactly (case-sensitive, use the TypeSpec name, not the Java name) |
+| File generated under `src/main/java/java/time/...` | External type identity used on an Enum/Union | Remove the decorator — this is the known emitter bug for Enum/Union external types |
+| Compiler error on `@@alternateType` | Wrong target kind | External types must target type definitions, not properties. TypeSpec built-ins can target properties. |
+| Warning: `external-type-on-model-property` | External type `{ identity: ... }` applied to a property | Move the decorator to the type definition instead |
+| Property becomes `BinaryData` instead of external type | `@alternateType({identity: ...})` used on a `scalar` | Scalars don't support external identity resolution. Use Form C (model indirection) instead. |
+| `writeJsonField` / `fromJson` compile errors after Form C | Emitter treats external type as `JsonSerializable` | Remove `@Generated` from `toJson`/`fromJson` and fix serialization manually (see step 5). |
diff --git a/sdk/ai/.github/skills/union-type-wrappers/SKILL.md b/sdk/ai/.github/skills/union-type-wrappers/SKILL.md
new file mode 100644
index 000000000000..dd3e708c6adf
--- /dev/null
+++ b/sdk/ai/.github/skills/union-type-wrappers/SKILL.md
@@ -0,0 +1,340 @@
+---
+name: union-type-wrappers
+description: 'Add typed getters and setters over BinaryData properties that represent TypeSpec union types in generated Java models. WHEN: wrap BinaryData union type; add typed accessors for union; replace BinaryData with typed API; union type wrapper.'
+---
+
+# Union Type Wrappers for Generated Java Models
+
+When the Java codegen encounters a TypeSpec union type (e.g. `string | SomeModel`), it emits the property as `BinaryData`. This skill replaces public `BinaryData` accessors with typed setters and getters for each union variant, following the same pattern established for `PromptAgentDefinition.toolChoice`.
+
+## Preconditions
+- You must be in a Java SDK module directory containing `tsp-location.yaml` and `pom.xml`.
+- TypeSpec sources must be available in `TempTypeSpecFiles/`. If missing, run `tsp-client sync` first.
+- The project must compile before starting.
+
+## Integration with api-diff
+
+After a codegen update, use the **api-diff** skill to identify newly added or modified models. Focus union-type wrapper work on those new models that expose `BinaryData` fields, rather than scanning the entire source tree.
+
+## Important: BinaryData.writeTo(JsonWriter) semantics
+
+Understanding how `BinaryData` writes to JSON is critical for choosing the correct factory method:
+
+| Factory method | Content type | `writeTo(JsonWriter)` calls | JSON output |
+|---|---|---|---|
+| `BinaryData.fromString("auto")` | `StringContent` | `jsonWriter.writeString("auto")` | `"auto"` (quoted) |
+| `BinaryData.fromObject("auto")` | `SerializableContent` | `jsonWriter.writeRawValue(...)` | `"auto"` (quoted via Jackson) |
+| `BinaryData.fromObject(42.0)` | `SerializableContent` | `jsonWriter.writeRawValue(...)` | `42.0` (raw) |
+| `BinaryData.fromObject(true)` | `SerializableContent` | `jsonWriter.writeRawValue(...)` | `true` (raw) |
+| `BinaryData.fromObject(jsonSerializable)` | `SerializableContent` | `jsonWriter.writeRawValue(...)` | `{...}` (JSON object via JacksonAdapter) |
+
+Key: `JacksonAdapter` has special handling for `JsonSerializable` types — `BinaryData.fromObject()` and `BinaryData.toObject()` both work correctly with Azure `JsonSerializable` models.
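+
+The quoted-vs-raw distinction in the table can be sketched stdlib-only (this mimics the observable behavior; it is NOT the azure-core implementation):
+
+```java
+public class WireTokenDemo {
+    // String content is emitted as a quoted JSON string; primitives pass
+    // through as raw JSON tokens.
+    static String tokenFor(Object value) {
+        if (value instanceof String) {
+            return "\"" + value + "\""; // like jsonWriter.writeString(...)
+        }
+        return String.valueOf(value); // like jsonWriter.writeRawValue(...)
+    }
+
+    public static void main(String[] args) {
+        System.out.println(tokenFor("auto")); // "auto"
+        System.out.println(tokenFor(42.0));   // 42.0
+        System.out.println(tokenFor(true));   // true
+    }
+}
+```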
+
+### Setter factory method rules
+
+| Union variant | Factory method | Reason |
+|---|---|---|
+| String value (ID, enum token) | `BinaryData.fromString(value)` | Writes as JSON string via `writeString()` |
+| Numeric / boolean primitive | `BinaryData.fromObject(value)` | Writes as raw JSON value |
+| Azure `JsonSerializable` model | `BinaryData.fromObject(value)` | JacksonAdapter handles serialization |
+| Stainless (openai-java) type | `BinaryData.fromObject(value)` | Jackson handles serialization |
+
+### Getter deserialization rules
+
+| Union variant | Deserialization | Notes |
+|---|---|---|
+| String | `this.field.toObject(String.class)` | Consistent regardless of how BinaryData was created (`fromString` vs `fromObject` during deserialization) |
+| Primitive (Number, Boolean) | `this.field.toObject(Double.class)` etc. | Jackson deserializes raw JSON values |
+| Azure `JsonSerializable` model | `this.field.toObject(ModelClass.class)` | JacksonAdapter calls `fromJson()` |
+| Stainless type | `this.field.toObject(StainlessType.class)` | Jackson deserializes natively |
+
+## Workflow
+
+### 1. Scan for BinaryData properties
+
+Find all `BinaryData` fields in generated model classes:
+
+```bash
+grep -rn "private\s\+\(final\s\+\)\?BinaryData\s\+" src/main/java/ --include="*.java"
+```
+
+Exclude `List<BinaryData>` and `Map<..., BinaryData>` fields from this pass — those are collection-of-union patterns that require separate handling.
+
+### 2. Cross-reference against TypeSpec
+
+For each `BinaryData` property found, determine whether it comes from a **union type** or an **`unknown` type**:
+
+```bash
+# Search for the property name (use the wire name, e.g. tool_choice, not toolChoice)
+grep -rn "" TempTypeSpecFiles/ --include="*.tsp"
+```
+
+- **Union type** (`type_a | type_b`): Proceed with this skill.
+- **`unknown` type**: Leave as `BinaryData` — this is the correct representation.
+
+Also check `client.tsp` for any `@@changePropertyType` overrides that may have already flattened the union to `string` (like `tool_choice`).
+
+### 3. Identify the union variants
+
+For each union type, determine what types the property can hold. Sources:
+
+1. **Local TSP files** — look at the type definition in `TempTypeSpecFiles/`.
+2. **Stainless SDK JAR** — if the union comes from `OpenAI.*`, inspect the Stainless types:
+ ```bash
+   jar tf ~/.m2/repository/com/openai/openai-java-core/<version>/openai-java-core-<version>.jar | grep "<TypeName>"
+   javap -cp <path-to-jar> "com.openai.models.responses.<TypeName>\$<Variant>"
+ ```
+3. **Generated Azure models** — check if the Azure SDK already generates the variant model classes (e.g. `AutoCodeInterpreterToolParam`, `McpToolFilter`).
+
+### 4. Apply changes to the model class
+
+For each union-typed `BinaryData` property, apply the following pattern:
+
+#### 4a. Leave the property field as-is
+
+Do **not** modify the property declaration. Keep `@Generated`, the block comment, and the visibility exactly as the codegen produced them. The field is already `private` — there is nothing to change.
+
+```java
+/*
+ * Original generated block comment.
+ */
+@Generated
+private BinaryData myField;
+```
+
+#### 4b. Make the existing getter and setter package-private
+
+- Remove `@Generated` and follow the `codegen-survival-rules` skill for marker comment placement.
+- Change visibility to **package-private** (no access modifier). This keeps them hidden from SDK consumers but accessible to unit tests in the same package.
+- **Keep the original method name** — do NOT rename to `*Internal`.
+- **Keep the original javadoc intact.**
+- Add `// AI Tooling: union type` as the **first line inside the method body**.
+
+```java
+/**
+ * Get the myField property: original description.
+ *
+ * @return the myField value.
+ */
+BinaryData getMyField() {
+ // AI Tooling: union type
+ return this.myField;
+}
+
+/**
+ * Set the myField property: original description.
+ *
+ * @param myField the myField value to set.
+ * @return the MyClass object itself.
+ */
+MyClass setMyField(BinaryData myField) {
+ // AI Tooling: union type
+ this.myField = myField;
+ return this;
+}
+```
+
+**For `@Immutable` classes (value is a constructor param):**
+
+- Change the `BinaryData` constructor visibility to package-private (keep for `fromJson` deserialization).
+- Add public constructor overloads for each variant.
+- Make the `BinaryData` getter private.
+
+```java
+/**
+ * Creates an instance of MyFilter class.
+ *
+ * @param type the type value to set.
+ * @param key the key value to set.
+ * @param value the value value to set.
+ */
+MyFilter(MyFilterType type, String key, BinaryData value) {
+ // AI Tooling: union type
+ this.type = type;
+ this.key = key;
+ this.value = value;
+}
+
+public MyFilter(MyFilterType type, String key, String value) {
+ this.type = type;
+ this.key = key;
+ this.value = BinaryData.fromObject(value);
+}
+```
+
+#### 4c. Add typed setters (one per union variant)
+
+Naming convention: **`set<FieldName>(<VariantType> value)`** — use method overloading.
+
+Copy the javadoc from the original generated setter, adapting the `@param` description to the specific variant type. Add `// AI Tooling: union type` as the first line inside the method body.
+
+```java
+/**
+ * Set the myField property: original description.
+ *
+ * @param myField the string value to set.
+ * @return the MyClass object itself.
+ */
+public MyClass setMyField(String myField) {
+ // AI Tooling: union type
+ this.myField = BinaryData.fromString(myField);
+ return this;
+}
+
+/**
+ * Set the myField property: original description.
+ *
+ * @param myField the SomeModel value to set.
+ * @return the MyClass object itself.
+ */
+public MyClass setMyField(SomeModel myField) {
+ // AI Tooling: union type
+ this.myField = BinaryData.fromObject(myField);
+ return this;
+}
+```
+
+When overloading isn't possible (e.g. two union variants that both map to `String`, which would collide as overloads), disambiguate with distinct method names and javadoc.
+
+For `List` variants:
+```java
+/**
+ * Set the allowedTools property: original description.
+ *
+ * @param allowedTools the list of tool name strings to set.
+ * @return the McpTool object itself.
+ */
+public McpTool setAllowedTools(List<String> allowedTools) {
+ // AI Tooling: union type
+ this.allowedTools = BinaryData.fromObject(allowedTools);
+ return this;
+}
+```
+
+#### 4d. Add typed getters (one per union variant)
+
+Naming convention: **`getAs()`**
+
+Copy the javadoc from the original generated getter, adapting the `@return` description. Add `// AI Tooling: union type` as the first line inside the method body.
+
+```java
+/**
+ * Get the myField property as a String: original description.
+ *
+ * @return the myField value as a String.
+ */
+public String getMyFieldAsString() {
+ // AI Tooling: union type
+ if (this.myField == null) {
+ return null;
+ }
+ return this.myField.toObject(String.class);
+}
+
+/**
+ * Get the myField property as a {@link SomeModel}: original description.
+ *
+ * @return the myField value as a SomeModel.
+ */
+public SomeModel getMyFieldAsSomeModel() {
+ // AI Tooling: union type
+ if (this.myField == null) {
+ return null;
+ }
+ return this.myField.toObject(SomeModel.class);
+}
+```
+
+For `List<String>` variants:
+```java
+/**
+ * Get the allowedTools property as a list of tool name strings: original description.
+ *
+ * @return the allowedTools value as a list of Strings.
+ */
+@SuppressWarnings("unchecked")
+public List<String> getAllowedToolsAsStringList() {
+ // AI Tooling: union type
+ if (this.allowedTools == null) {
+ return null;
+ }
+ return this.allowedTools.toObject(List.class);
+}
+```
+
+### 5. Update callers
+
+Search for existing code that uses the old `BinaryData` API:
+
+```bash
+grep -rn "\.setMyField(BinaryData\|\.getMyField()" src/ --include="*.java"
+```
+
+Update samples, tests, and internal code to use the new typed API. Remove unused `BinaryData` imports where applicable.
+
+### 6. Write unit tests
+
+Create a test class per model under `src/test/java/.../models/<ModelName>SerializationTests.java`.
+
+Each test class must include:
+
+1. **Serialization tests** — one per union variant. Construct the model with the typed setter, serialize to JSON, assert the JSON contains the expected field and value.
+2. **Deserialization tests** — one per union variant. Parse a JSON string, assert the typed getter returns the correct value.
+3. **Null/absent tests** — verify getters return `null` when the field is not set or absent from JSON.
+4. **Round-trip tests** — serialize → deserialize → assert values match.
+
+Use these helpers:
+
+```java
+private String serializeToJson(MyModel model) throws IOException {
+ ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
+ try (JsonWriter jsonWriter = JsonProviders.createWriter(outputStream)) {
+ model.toJson(jsonWriter);
+ }
+ return outputStream.toString("UTF-8");
+}
+
+private MyModel deserializeFromJson(String json) throws IOException {
+ try (JsonReader jsonReader = JsonProviders.createReader(json)) {
+ return MyModel.fromJson(jsonReader);
+ }
+}
+```
+
+### 7. Compile and run tests
+
+```bash
+mvn compile -Dbuildhelper.addtestsource.skip=true -Dbuildhelper.addtestresource.skip=true \
+ -Dcodesnippet.skip=true -Dcheckstyle.skip=true -Dspotless.check.skip=true
+
+mvn "-Dtest=*SerializationTests" test \
+ -Dcodesnippet.skip=true -Dcheckstyle.skip=true -Dspotless.check.skip=true
+```
+
+All tests must pass before finishing.
+
+## Checklist
+
+Before reporting completion, verify:
+
+- [ ] Every `BinaryData` property was classified as **union** or **unknown**
+- [ ] Property fields left exactly as generated (no modifications)
+- [ ] Original `BinaryData` getter/setter made package-private, name kept, `@Generated` removed, javadoc preserved
+- [ ] `// AI Tooling: union type` placed inside the body of every modified or added getter/setter
+- [ ] Typed setters added for each union variant with javadoc copied from original
+- [ ] Typed getters added for each union variant (`get*As*()`) with javadoc copied from original
+- [ ] All callers (samples, tests, internal code) updated to use new API
+- [ ] Unused `BinaryData` imports removed from callers
+- [ ] Unit tests written and passing for serialization, deserialization, null, and round-trip
+- [ ] Full compilation succeeds (main + test + samples)
+
+## Troubleshooting
+
+| Symptom | Cause | Fix |
+|---------|-------|-----|
+| `BinaryData.fromObject(jsonSerializable)` produces wrong JSON | JacksonAdapter not on classpath | Verify `azure-core` dependency includes `JacksonAdapter` |
+| `toObject(AzureModel.class)` fails | JacksonAdapter doesn't find `fromJson` | Use `BinaryData.toObject()` which delegates to `JacksonAdapter.deserialize()` — confirm azure-core ≥ 1.51 |
+| Setter creates `StringContent` but expects raw JSON | Wrong factory method | Use `fromString()` only for string tokens; use `fromObject()` for primitives and objects |
+| Test fails on deserialized value comparison | Asymmetry between `fromString`/`fromObject` for string values | Deserialization always uses `fromObject(readUntyped())`, producing `SerializableContent`. Use `toObject(String.class)` in string getters — never `toString()` — to normalize both paths. |
+| Compilation error: cannot find `List` | Missing import after adding `List` setter | Add `import java.util.List;` |
+| `@SuppressWarnings` needed | Unchecked cast on `toObject(List.class)` | Add `@SuppressWarnings("unchecked")` to the method |
diff --git a/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/SKILL.md b/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/SKILL.md
new file mode 100644
index 000000000000..11345917a848
--- /dev/null
+++ b/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/SKILL.md
@@ -0,0 +1,51 @@
+---
+name: azure-ai-agents
+description: 'Post-regeneration guide for azure-ai-agents SDK. Covers openai-java dedup, bridge class maintenance, polling customizations, and codegen survival. WHEN: regenerate azure-ai-agents; fix azure-ai-agents build errors; azure-ai-agents tsp-client update; add azure-ai-agents feature; modify azure-ai-agents.'
+---
+
+# azure-ai-agents — Package Skill
+
+> This skill activates after standard generation tools (`tsp-client update`, `azsdk_customized_code_update`) have been tried. It covers what they can't solve: openai-java dedup decisions, bridge class maintenance, and package-specific error diagnosis.
+
+## Common Pitfalls
+
+- **Always check for new openai-java duplicates after codegen.** Use `api-diff` → `dup-classes` → `dedup-openai` (shared skills) on every regeneration. New models may duplicate openai-java types and need suppression.
+- **Never serialize openai-java types with `BinaryData.fromObject()`.** The default Jackson ObjectMapper cannot handle Kotlin `SynchronizedLazyImpl` fields. Use `OpenAIJsonHelper.toBinaryData()` which uses `ObjectMappers.jsonMapper()`.
+- **Follow codegen survival rules for ALL manual edits.** Remove `@Generated`, place marker comments inside method bodies, not above signatures. See `codegen-survival-rules` shared skill.
+- **Check `AgentsCustomizations.java` FIRST when generated files have errors.** The polling strategy and enum rename customizations can break if the generated code structure changes.
+- **Do not suppress discriminator hierarchy types.** Classes extending `Tool`, `TextResponseFormatConfiguration`, or other polymorphic base types are structural equivalents, not actionable duplicates — the SDK's `fromJson` dispatch requires them.
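+
+For the serialization pitfall, the safe pattern looks like this (sketch; `openAiTool` stands in for any openai-java model instance):
+
+```java
+// BinaryData unsafe = BinaryData.fromObject(openAiTool);  // WRONG: default Jackson
+// mapper trips on Kotlin SynchronizedLazyImpl fields
+BinaryData safe = OpenAIJsonHelper.toBinaryData(openAiTool); // uses ObjectMappers.jsonMapper()
+```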
+
+## Architecture
+
+5 sync + 5 async clients (Agents, AgentSessionFiles, MemoryStores, Responses, Toolboxes), 185 generated models, 7 hand-written bridge classes. Heavy openai-java integration (v4.14.0).
+
+See [references/architecture.md](references/architecture.md) for source layout and bridge class inventory.
+
+## After Regeneration
+
+| Error location | What it means | Where to fix |
+|---|---|---|
+| Generated file, `@Generated` method | Customization produced broken output | Fix `AgentsCustomizations.java` |
+| Generated file, method WITHOUT `@Generated` | Hand-written wrapper references changed types | Fix the hand-written method to match new generated signatures |
+| Hand-written bridge class (OpenAIJsonHelper, etc.) | References removed/renamed generated types | Fix the bridge class |
+| Model with openai-java type (toJson/fromJson) | Dedup serialization bridge broken by new fields | Update the `toJson`/`fromJson` bridge — see `dedup-openai` shared skill |
+
+**Post-regen workflow:** `tsp-naming-collision` → `api-diff` → `dup-classes` → `dedup-openai` → `union-type-wrappers` → `codegen-survival-rules` (all shared skills under `sdk/ai/.github/skills/`).
+
+## Post-Regeneration Customizations
+
+See [references/customizations.md](references/customizations.md) for per-method documentation.
+
+## Testing Notes
+
+- Test base: `ClientTestBase extends TestProxyTestBase` with RECORD/PLAYBACK/LIVE modes
+- Recordings: `assets.json` → `Azure/azure-sdk-assets` (tag prefix: `java/ai/azure-ai-agents`)
+- Coverage: 0% enforced (pre-GA)
+- JPMS workaround: `--add-opens com.azure.core/com.azure.core.implementation.util=ALL-UNNAMED`
+
+## References
+
+| File | Contents |
+|---|---|
+| [references/architecture.md](references/architecture.md) | Source layout, client inventory, bridge classes, module-info |
+| [references/customizations.md](references/customizations.md) | AgentsCustomizations.java method docs, update triggers |
diff --git a/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/references/architecture.md b/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/references/architecture.md
new file mode 100644
index 000000000000..9554b38c4861
--- /dev/null
+++ b/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/references/architecture.md
@@ -0,0 +1,92 @@
+# azure-ai-agents — Architecture Reference
+
+## Source Layout
+
+```
+sdk/ai/azure-ai-agents/
+├── tsp-location.yaml # TypeSpec spec reference
+├── customizations/src/main/java/
+│ └── AgentsCustomizations.java # Post-gen AST customizations
+├── src/main/java/com/azure/ai/agents/
+│ ├── AgentsClient.java # Generated — main client
+│ ├── AgentsAsyncClient.java # Generated — async main client
+│ ├── AgentSessionFilesClient.java # Generated
+│ ├── MemoryStoresClient.java # Generated
+│ ├── ResponsesClient.java # Generated
+│ ├── ToolboxesClient.java # Generated
+│ ├── AgentsClientBuilder.java # Generated — multi-service builder
+│ ├── AgentsServiceVersion.java # Generated — service version enum
+│ ├── models/ # ~185 generated model classes
+│ └── implementation/
+│ ├── AgentsClientImpl.java # Generated — HTTP operations
+│ ├── OpenAIJsonHelper.java # HAND-WRITTEN — openai-java bridge
+│ ├── AgentsServicePollUtils.java # HAND-WRITTEN — polling helpers
+│ ├── StreamingUtils.java # HAND-WRITTEN — reactive streaming
+│ ├── TokenUtils.java # HAND-WRITTEN — token auth bridge
+│ ├── HttpClientHelper.java # HAND-WRITTEN — HTTP pipeline adapter
+│ ├── AzureHttpResponseAdapter.java # HAND-WRITTEN — response adapter
+│ └── JsonMergePatchHelper.java # HAND-WRITTEN — JSON patch utils
+├── src/samples/java/ # ~85 sample files
+└── src/test/java/ # ~25 test files
+```
+
+## Client Inventory
+
+| Sync Client | Async Client | Purpose |
+|---|---|---|
+| `AgentsClient` | `AgentsAsyncClient` | Agent CRUD, runs, threads, messages, sessions, conversations |
+| `AgentSessionFilesClient` | `AgentSessionFilesAsyncClient` | File upload/download within sessions |
+| `MemoryStoresClient` | `MemoryStoresAsyncClient` | Memory store CRUD and search |
+| `ResponsesClient` | `ResponsesAsyncClient` | Response creation and management |
+| `ToolboxesClient` | `ToolboxesAsyncClient` | Toolbox version CRUD |
+
+All clients are generated. The builder `AgentsClientBuilder` constructs all of them.
+
+## Bridge Classes (Hand-Written)
+
+These do NOT have the generated header. They survive codegen but may need updates if generated types change.
+
+| Class | Purpose | Update trigger |
+|---|---|---|
+| `OpenAIJsonHelper` | Serialization bridge between openai-java (Jackson/Kotlin) and Azure SDK (azure-json). Provides `toBinaryData()` / `fromBinaryData()` for safe interop. | New openai-java types used in dedup |
+| `AgentsServicePollUtils` | Adds `Foundry-Features` header to polling requests. Remaps terminal poll states (`completed`, `superseded`). | Polling behavior changes or new Foundry features |
+| `StreamingUtils` | Reactive streaming utilities for server-sent events | Streaming format changes |
+| `TokenUtils` | Bridges `TokenCredential` to openai-java token supplier | Auth mechanism changes |
+| `HttpClientHelper` | Adapts Azure `HttpPipeline` to openai-java `HttpClient` | openai-java HTTP contract changes |
+| `AzureHttpResponseAdapter` | Exposes Azure `HttpResponse` as openai-java `HttpResponse` | openai-java response contract changes |
+| `JsonMergePatchHelper` | JSON merge-patch utilities for PATCH operations | New PATCH-able models |
+
+## Module Descriptor (module-info.java)
+
+Generated. Key transitive exports:
+
+```java
+requires transitive com.azure.core;
+requires transitive openai.java.client.okhttp;
+requires transitive openai.java.core;
+
+exports com.azure.ai.agents;
+exports com.azure.ai.agents.models;
+opens com.azure.ai.agents.models to com.azure.core;
+opens com.azure.ai.agents.implementation.models to com.azure.core;
+```
+
+## Key Dependencies
+
+| Dependency | Version | Notes |
+|---|---|---|
+| `com.openai:openai-java` | 4.14.0 | External; enforcer bans other versions |
+| `com.azure:azure-core` | 1.58.0-beta.1 | Core library |
+| `com.azure:azure-core-http-netty` | 1.16.3 | Default HTTP client |
+
+## Shared Skills Pipeline
+
+After regeneration, apply these shared skills from `sdk/ai/.github/skills/` in order:
+
+1. **`tsp-naming-collision`** — Fix `*Request1` parameter suffixes
+2. **`api-diff`** — Identify new API additions, bucket by feature area
+3. **`dup-classes`** — Check new models against openai-java for duplicates
+4. **`dedup-openai`** — Suppress actionable duplicates via `@@alternateType`
+5. **`union-type-wrappers`** — Add typed getters/setters for `BinaryData` union properties
+6. **`codegen-survival-rules`** — Ensure manual edits survive next regen
+7. **`tsp-type-override`** — Override TypeSpec types with Java-native types if needed
diff --git a/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/references/customizations.md b/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/references/customizations.md
new file mode 100644
index 000000000000..00834eed1c39
--- /dev/null
+++ b/sdk/ai/azure-ai-agents/.github/skills/azure-ai-agents/references/customizations.md
@@ -0,0 +1,53 @@
+# azure-ai-agents — Customizations Reference
+
+**File:** `customizations/src/main/java/AgentsCustomizations.java`
+
+This file applies AST transformations during `tsp-client update` / `tsp-client generate`. It runs after code generation and modifies the emitted Java source.
+
+## Customization Methods
+
+### `renameImageGenToolSize()`
+
+**Problem:** The generated `ImageGenToolSize` enum entries use raw numeric names derived from the TypeSpec values (e.g., `ONE_ZERO_TWO_FOURX_ONE_ZERO_TWO_FOUR` for `1024x1024`), which are unreadable.
+
+**Solution:** Renames enum constants to descriptive names:
+
+| Generated Name | Renamed To |
+|---|---|
+| `ONE_ZERO_TWO_FOURX_ONE_ZERO_TWO_FOUR` | `RESOLUTION_1024_X_1024` |
+| `ONE_ZERO_TWO_FOURX_ONE_FIVE_THREE_SIX` | `RESOLUTION_1024_X_1536` |
+| `ONE_FIVE_THREE_SIXX_ONE_ZERO_TWO_FOUR` | `RESOLUTION_1536_X_1024` |
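+
+After the rename, call sites read naturally (sketch; the `setSize` accessor is an illustrative name):
+
+```java
+ImageGenTool tool = new ImageGenTool()
+    .setSize(ImageGenToolSize.RESOLUTION_1024_X_1024); // hypothetical setter
+```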
+
+**When to update:** If the TypeSpec adds new image sizes or changes the naming scheme.
+
+**Could this be a TypeSpec customization?** Yes — `@@clientName` in `client.tsp` could rename enum values. Consider migrating.
+
+### `modifyPollingStrategies()`
+
+**Problem:** The generated `OperationLocationPollingStrategy` and `SyncOperationLocationPollingStrategy` need two additions: (1) Foundry-Features headers on polling requests, and (2) remapping of custom terminal status values (`completed`, `superseded`).
+
+**Solution:**
+- Modifies the constructor to wrap the pipeline context with `AgentsServicePollUtils.withFoundryFeatures()`
+- Overrides the `poll()` method to call `AgentsServicePollUtils::remapStatus` on the poll response
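+
+A minimal sketch of the remapping half (assumes `remapStatus` receives the service's raw status string; the actual signature in `AgentsServicePollUtils` may differ):
+
+```java
+static LongRunningOperationStatus remapStatus(String rawStatus) {
+    switch (rawStatus) {
+        case "completed":
+        case "superseded":
+            // Custom terminal states the generated strategy does not recognize
+            return LongRunningOperationStatus.fromString(rawStatus, true);
+        default:
+            return LongRunningOperationStatus.fromString(rawStatus, false);
+    }
+}
+```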
+
+**When to update:**
+- If the generated polling strategy class names or constructors change
+- If new Foundry-Features values are needed for polling
+- If the service adds new terminal poll states
+
+**Could this be a TypeSpec customization?** Partially — the Foundry-Features header could potentially be set via TypeSpec decorators, but the status remapping is Java-specific behavior.
+
+## Adding a New Customization
+
+1. Add a new private method to `AgentsCustomizations.java`
+2. Call it from `customize(LibraryCustomization, Logger)`
+3. Run `tsp-client update` to verify the customization applies cleanly
+4. Test the generated code compiles: `mvn compile -Dcheckstyle.skip -Dspotbugs.skip`
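+
+Steps 1 and 2 together look roughly like this (sketch; the target class, the transformation, and the helper-method signatures are all illustrative):
+
+```java
+@Override
+public void customize(LibraryCustomization customization, Logger logger) {
+    renameImageGenToolSize(customization);
+    modifyPollingStrategies(customization);
+    myNewCustomization(customization); // step 2: wire in the new method
+}
+
+private void myNewCustomization(LibraryCustomization customization) {
+    customization.getPackage("com.azure.ai.agents.models")
+        .getClass("SomeGeneratedModel") // hypothetical generated class
+        .customizeAst(ast -> ast.getClassByName("SomeGeneratedModel")
+            .ifPresent(clazz -> clazz.addMarkerAnnotation("Deprecated")));
+}
+```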
+
+## Troubleshooting
+
+| Symptom | Cause | Fix |
+|---|---|---|
+| Customization silently does nothing | AST query doesn't match generated code structure | Update the JavaParser selectors to match the new generated code |
+| Build fails in generated file with `@Generated` | Customization produced broken output | Fix the customization method's AST manipulation |
+| Customization applies but creates duplicate code | Generated code already includes what the customization adds | Remove or guard the customization |
diff --git a/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/SKILL.md b/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/SKILL.md
new file mode 100644
index 000000000000..d7bb9f40c129
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/SKILL.md
@@ -0,0 +1,47 @@
+---
+name: azure-ai-projects
+description: 'Post-regeneration guide for azure-ai-projects SDK. Covers bridge class maintenance, openai-java pipeline adapter, and multi-service client architecture. WHEN: regenerate azure-ai-projects; fix azure-ai-projects build errors; azure-ai-projects tsp-client update; add azure-ai-projects feature; modify azure-ai-projects.'
+---
+
+# azure-ai-projects — Package Skill
+
+> This skill activates after standard generation tools (`tsp-client update`, `azsdk_customized_code_update`) have been tried. It covers what they can't solve: HTTP pipeline bridge maintenance, multi-service client patterns, and package-specific error diagnosis.
+
+## Common Pitfalls
+
+- **This package depends on `azure-ai-agents`.** Changes to the agents package (especially model renames or module-info changes) can break this package. Always build both together.
+- **Never serialize openai-java types with `BinaryData.fromObject()`.** Use `OpenAIJsonHelper.toBinaryData()` from the agents package — the default Jackson ObjectMapper cannot handle Kotlin internals.
+- **The HTTP pipeline bridge is hand-written and fragile.** `HttpClientHelper` and `AzureHttpResponseAdapter` adapt Azure's HTTP stack to openai-java's contract. Interface signature changes surface as compile failures, but behavioral changes in either SDK's HTTP stack break these silently at runtime.
+- **Customizations are currently inactive.** `ProjectsCustomizations.java` has a commented-out method. Check it after regen to see if it needs re-enabling.
+- **Follow codegen survival rules for ALL manual edits.** See `codegen-survival-rules` shared skill.
+
+## Architecture
+
+11 sync + 11 async clients (Connections, Datasets, Deployments, EvaluationRules, EvaluationTaxonomies, Evaluators, Indexes, Insights, RedTeams, Schedules, Skills), 113 generated models, 3 hand-written bridge classes. Depends on `azure-ai-agents` and `azure-storage-blob`.
+
+See [references/architecture.md](references/architecture.md) for source layout and bridge class inventory.
+
+## After Regeneration
+
+| Error location | What it means | Where to fix |
+|---|---|---|
+| Generated file, `@Generated` method | Customization produced broken output | Check `ProjectsCustomizations.java` (mostly inactive — may need re-enabling) |
+| Generated file, method WITHOUT `@Generated` | Hand-written wrapper references changed types | Fix the hand-written method |
+| Hand-written bridge class | References removed/renamed types in Azure core or openai-java | Fix the bridge class (TokenUtils, HttpClientHelper, AzureHttpResponseAdapter) |
+| Module-info compilation error | Missing requires/exports after new dependency added | Update `module-info.java` |
+
+**Post-regen workflow:** `tsp-naming-collision` → `tsp-type-override` → compile → fix bridge classes (all shared skills under `sdk/ai/.github/skills/`).
+
+## Testing Notes
+
+- Test base: `ClientTestBase extends TestProxyTestBase` with RECORD/PLAYBACK/LIVE modes
+- Recordings: `assets.json` → `Azure/azure-sdk-assets` (tag prefix: `java/ai/azure-ai-projects`)
+- Custom sanitizers for sensitive data, custom matchers excluding Stainless metadata headers
+- Serialization tests exist for type-overridden models (e.g., `WeeklyRecurrenceScheduleSerializationTest`)
+
+## References
+
+| File | Contents |
+|---|---|
+| [references/architecture.md](references/architecture.md) | Source layout, client inventory, bridge classes, module-info |
+| [references/customizations.md](references/customizations.md) | ProjectsCustomizations.java status, update triggers |
diff --git a/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/references/architecture.md b/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/references/architecture.md
new file mode 100644
index 000000000000..f8ff15bf989c
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/references/architecture.md
@@ -0,0 +1,100 @@
+# azure-ai-projects — Architecture Reference
+
+## Source Layout
+
+```
+sdk/ai/azure-ai-projects/
+├── tsp-location.yaml # TypeSpec spec reference
+├── customizations/src/main/java/
+│ └── ProjectsCustomizations.java # Post-gen AST customizations (mostly inactive)
+├── src/main/java/com/azure/ai/projects/
+│ ├── AIProjectClientBuilder.java # Generated — multi-service builder
+│ ├── AIProjectsServiceVersion.java # Generated — service version enum
+│ ├── ConnectionsClient.java # Generated
+│ ├── DatasetsClient.java # Generated
+│ ├── DeploymentsClient.java # Generated
+│ ├── EvaluationRulesClient.java # Generated
+│ ├── EvaluationTaxonomiesClient.java # Generated
+│ ├── EvaluatorsClient.java # Generated
+│ ├── IndexesClient.java # Generated
+│ ├── InsightsClient.java # Generated
+│ ├── RedTeamsClient.java # Generated
+│ ├── SchedulesClient.java # Generated
+│ ├── SkillsClient.java # Generated
+│ ├── models/ # ~113 generated model classes
+│ └── implementation/
+│ ├── AIProjectClientImpl.java # Generated — HTTP operations
+│ ├── TokenUtils.java # HAND-WRITTEN — token auth bridge
+│ └── http/
+│ ├── HttpClientHelper.java # HAND-WRITTEN — HTTP pipeline adapter
+│ └── AzureHttpResponseAdapter.java # HAND-WRITTEN — response adapter
+├── src/samples/java/ # ~14 sample files
+└── src/test/java/ # ~13 test files
+```
+
+## Client Inventory
+
+| Sync Client | Async Client | Purpose |
+|---|---|---|
+| `ConnectionsClient` | `ConnectionsAsyncClient` | Workspace connection management |
+| `DatasetsClient` | `DatasetsAsyncClient` | Dataset CRUD |
+| `DeploymentsClient` | `DeploymentsAsyncClient` | Model deployment management |
+| `EvaluationRulesClient` | `EvaluationRulesAsyncClient` | Evaluation rule configuration |
+| `EvaluationTaxonomiesClient` | `EvaluationTaxonomiesAsyncClient` | Evaluation taxonomy management |
+| `EvaluatorsClient` | `EvaluatorsAsyncClient` | Evaluator CRUD and runs |
+| `IndexesClient` | `IndexesAsyncClient` | Index management |
+| `InsightsClient` | `InsightsAsyncClient` | Insights and metrics |
+| `RedTeamsClient` | `RedTeamsAsyncClient` | Red team evaluation |
+| `SchedulesClient` | `SchedulesAsyncClient` | Schedule management |
+| `SkillsClient` | `SkillsAsyncClient` | Skill CRUD |
+
+All clients are fully generated. No hand-written convenience wrappers mixed in.
+
+## Bridge Classes (Hand-Written)
+
+These do NOT have the generated header. They survive codegen but may need updates if SDK contracts change.
+
+| Class | Purpose | Update trigger |
+|---|---|---|
+| `TokenUtils` | Creates a lazy `BearerTokenSupplier` from Azure `TokenCredential` for openai-java auth | Auth mechanism changes in either SDK |
+| `HttpClientHelper` | Adapts Azure `HttpPipeline` to openai-java `HttpClient` interface. Handles sync/async request/response translation, maps Azure exceptions to openai-java exceptions. | HTTP contract changes in openai-java or azure-core |
+| `AzureHttpResponseAdapter` | Exposes Azure `HttpResponse` as openai-java `HttpResponse`. Converts header formats. | Response contract changes in openai-java |
+
+**Key risk:** These bridge classes implement openai-java interfaces, so changes to openai-java's HTTP contracts (e.g., new methods on `HttpClient` or `HttpResponse`) will cause compile failures.
+
+## Module Descriptor (module-info.java)
+
+Generated. Notable transitive dependencies:
+
+```java
+requires transitive com.azure.core;
+requires com.azure.storage.blob;
+requires transitive openai.java.core;
+requires transitive openai.java.client.okhttp;
+requires com.azure.ai.agents;
+
+exports com.azure.ai.projects;
+exports com.azure.ai.projects.models;
+opens com.azure.ai.projects.models to com.azure.core;
+opens com.azure.ai.projects.implementation.models to com.azure.core;
+```
+
+**Cross-package dependency:** This module requires `com.azure.ai.agents`. Changes to the agents module's exports can affect this package.
+
+## Key Dependencies
+
+| Dependency | Version | Notes |
+|---|---|---|
+| `com.openai:openai-java` | 4.14.0 | External; enforcer bans other versions |
+| `com.azure:azure-core` | 1.58.0-beta.1 | Core library |
+| `com.azure:azure-ai-agents` | 2.1.0 | Sibling package dependency |
+| `com.azure:azure-storage-blob` | 12.33.3 | File storage operations |
+
+## Relationship to azure-ai-agents
+
+This package depends on `azure-ai-agents` at both the Maven and JPMS level. The implications:
+
+- **Build order matters:** Always build agents before projects
+- **Model sharing:** Some shared types come from the agents package
+- **Bridge class reuse:** `OpenAIJsonHelper` lives in agents — projects uses it transitively
+- **Version lockstep:** Both packages share the same version (2.1.0) and should be released together
diff --git a/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/references/customizations.md b/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/references/customizations.md
new file mode 100644
index 000000000000..f616dcb9381a
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects/references/customizations.md
@@ -0,0 +1,35 @@
+# azure-ai-projects — Customizations Reference
+
+**File:** `customizations/src/main/java/ProjectsCustomizations.java`
+
+## Current Status: Mostly Inactive
+
+The customizations file exists and overrides `customize(LibraryCustomization, Logger)`, but the only internal method — `removeConversationsClientBuilder()` — is **commented out**.
+
+This means the file is effectively a no-op placeholder. It is invoked during `tsp-client update` but applies no transformations.
+
+## Inactive Method: `removeConversationsClientBuilder()`
+
+**Purpose:** Would remove the `ConversationsClientBuilder` class file from the generated output.
+
+**Why commented out:** The ConversationsClientBuilder may have been removed from the TypeSpec, or the removal is being handled differently. Check whether `ConversationsClientBuilder.java` still exists in the generated output — if it doesn't, this customization is no longer needed.
+
+**When to re-enable:** If a new codegen run re-introduces a class that should be suppressed.
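+
+If re-enabled, the method body is likely a raw-editor file removal along these lines (sketch; the exact path is illustrative):
+
+```java
+private void removeConversationsClientBuilder(LibraryCustomization customization) {
+    customization.getRawEditor()
+        .removeFile("src/main/java/com/azure/ai/projects/ConversationsClientBuilder.java");
+}
+```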
+
+## Adding a New Customization
+
+1. Add a new private method to `ProjectsCustomizations.java`
+2. Call it from `customize(LibraryCustomization, Logger)`
+3. Run `tsp-client update` to verify the customization applies cleanly
+4. Test the generated code compiles: `mvn compile -Dcheckstyle.skip -Dspotbugs.skip`
+
+## Comparison with azure-ai-agents
+
+The agents package has heavier customizations (enum renames, polling strategy modifications). The projects package relies more on TypeSpec-level customizations (`client.tsp` / `client.java.tsp`) and hand-written bridge classes instead of AST post-processing. This is the preferred approach — consider migrating agents customizations to TypeSpec decorators where possible.
+
+## Troubleshooting
+
+| Symptom | Cause | Fix |
+|---|---|---|
+| Unwanted class appears in generated output | Customization that removes it is commented out | Re-enable the removal method |
+| Build fails after regen with no code changes | Inactive customization may need re-activation | Check if a previously-suppressed class reappeared |