Is your feature request related to a problem? Please describe.
To build confidence in the JB Manager architecture, we need to stress test it.
Describe the solution you'd like
We need to mock the channel to generate a volume of 10 / 100 / 1000 messages per minute and measure the delivery time of each message.
Scenarios:
- Keep the FSM very simple -- echo the message back to the user -- no LLM
- Using the above FSM, send the message in voice format -- convert speech to text and translate from Hindi to English, but no LLM
- Add an LLM call to the FSM, but use GPT-3.5-Turbo
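The first scenario can be sketched as below. This is a minimal illustration of the intended behavior (echo the input back with no state transitions and no LLM call), not JB Manager's actual FSM base class or hook names, which will differ:

```python
class EchoFSM:
    """Single-state machine: any incoming message is echoed back unchanged.

    Hypothetical sketch -- JB Manager's real FSM API is assumed to wrap
    something equivalent. With no LLM or translation step, the measured
    latency reflects pure pipeline overhead.
    """

    def __init__(self) -> None:
        self.state = "idle"  # never changes; one state is enough for echo

    def handle_message(self, user_id: str, text: str) -> str:
        # No transitions, no external calls -- just return the input.
        return text


if __name__ == "__main__":
    fsm = EchoFSM()
    print(fsm.handle_message("919900000000", "hello"))  # prints "hello"
```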
Steps:
- Make an API request to generate a message. Generate a unique mobile number / user id as part of the payload.
- Change the channel API endpoint to point to a new server. Add a server that simply logs the incoming message and returns 200.
- Compare the delay between input and output based on the unique mobile number / user id.
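The steps above could be sketched as a small measurement harness: build payloads with unique ids, send them at a fixed rate, and later join sent and received timestamps on the unique id. All names here (`make_payload`, `send_at_rate`, the payload fields) are hypothetical; networking is stubbed out via a `send_fn` callback, and in the real test the "received" timestamps would come from the logging server's log:

```python
import time
import uuid


def make_payload(text: str) -> dict:
    """Build one test message with a unique mobile number / user id.

    The unique id is what lets us pair each input with its echoed output.
    The 91-prefixed number is a made-up Indian-style mobile number, not a
    real JB Manager field name.
    """
    return {
        "user_id": uuid.uuid4().hex,
        "mobile": f"91{uuid.uuid4().int % 10**10:010d}",
        "text": text,
    }


def send_at_rate(rate_per_minute: int, total: int, send_fn) -> dict:
    """Fire `total` messages, evenly spaced to hit the target rate.

    Returns {user_id: send_timestamp}. In the real harness, `send_fn`
    would POST the payload to the channel API endpoint.
    """
    interval = 60.0 / rate_per_minute
    sent = {}
    for _ in range(total):
        payload = make_payload("ping")
        sent[payload["user_id"]] = time.monotonic()
        send_fn(payload)
        time.sleep(interval)
    return sent


def compute_delays(sent: dict, received: dict) -> dict:
    """Match outputs to inputs on user_id; delay = received - sent."""
    return {uid: received[uid] - sent[uid] for uid in sent if uid in received}
```

Missing ids in `received` show up as dropped messages (present in `sent`, absent from the delays), which is itself a useful signal at 1000 messages per minute.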
Additional context
No response