Code of Conduct
AI Policy
Is your feature request related to a problem? Please describe.
Sometimes there is information that can only be retrieved from the ReqLLM data directly but is discarded by AshAI.
For example, if I set a token limit, I may want to check whether the LLM generation was cut off by inspecting the ReqLLM metadata field finish_reason.
Describe the solution you'd like
I'm not sure where this data should be stored (maybe in the context?), but regardless of where, giving the user a way to retrieve it would be great. This could be done either by returning it alongside the action's return value, or, if that is not possible, by providing some way to handle it inside the action itself. In the latter case, it would also be great if we could update the input data and request a retry. For example, in the max-tokens scenario above, I could check finish_reason, and, if generation was cut off for lack of tokens, increase the token limit and retry.
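To illustrate the retry behavior I have in mind, here is a minimal sketch. This is purely hypothetical: AshAI does not expose such a hook today, and the on_llm_response callback name, the shape of the metadata map, and the {:retry, opts} return contract are all made up for illustration. Only the finish_reason field comes from ReqLLM.

```elixir
# Hypothetical sketch only -- this hook does not exist in AshAI today.
# Assumes a made-up callback that receives the ReqLLM response metadata
# and the options used for the call, and may request a retry.
defmodule MyApp.RetryOnLength do
  # If generation stopped because the token limit was hit (:length),
  # double max_tokens and ask for a retry; otherwise accept the result.
  def on_llm_response(%{finish_reason: :length}, opts) do
    {:retry, Keyword.update(opts, :max_tokens, 2048, &(&1 * 2))}
  end

  def on_llm_response(_metadata, _opts), do: :ok
end
```

The key point is not this particular API shape but that the action author gets access to finish_reason (and similar ReqLLM metadata) at a point where they can still influence the outcome.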
Describe alternatives you've considered
No response
Additional context
No response