Issue Description
llama-server's /models API returns a 503 error while the model is still loading. ramalama run handles this, but ramalama chat currently does not. For interactive use this isn't a major issue, but it would be good to fix.
Worked around this for CI in #2342
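One way ramalama chat could handle this is to poll the server until the 503 clears before sending the first request. The sketch below is illustrative only: the URL, timeout, and interval are assumptions, not ramalama's actual defaults, and wait_for_server is a hypothetical helper, not an existing ramalama function.

```python
import time
import urllib.error
import urllib.request


def wait_for_server(url="http://127.0.0.1:8080/v1/models",
                    timeout=60, interval=1.0):
    """Poll llama-server's /models endpoint until the model is loaded.

    Returns True once the endpoint answers 200, False if the timeout
    expires first. A 503 means the model is still loading, so we retry;
    any other HTTP error is unexpected and re-raised.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return True  # model finished loading
        except urllib.error.HTTPError as e:
            if e.code != 503:  # 503 = still loading; anything else is fatal
                raise
        except urllib.error.URLError:
            pass  # server not accepting connections yet
        time.sleep(interval)
    return False
```

With something like this, ramalama chat could call wait_for_server before opening the interactive session, mirroring what ramalama run already does.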
Steps to reproduce the issue
Run ramalama chat soon after ramalama serve --detach
Describe the results you received
ramalama chat fails
Describe the results you expected
ramalama chat succeeds
ramalama info output
N/A
Upstream Latest Release
Yes
Additional environment details
No response
Additional information
No response