fix: add 60s timeout to OpenAI-compatible HTTP client preventing LLM deadlock

reqwest::Client::new() has no timeout — when external APIs (NVIDIA,
Groq, etc.) hang or throttle, the request blocks forever, freezing the
entire response pipeline for the user.

Also add std::time::Duration import to llm/mod.rs.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
commit 3ec72f6121
parent 25d6d2fd57
Author: Rodrigo Rodriguez (Pragmatismo)
Date: 2026-04-13 23:31:12 -03:00


@@ -3,6 +3,7 @@ use futures::StreamExt;
 use log::{error, info};
 use serde_json::Value;
 use std::sync::Arc;
+use std::time::Duration;
 use tokio::sync::{mpsc, RwLock};
 pub mod cache;
@@ -198,7 +199,10 @@ impl OpenAIClient {
         };
         Self {
-            client: reqwest::Client::new(),
+            client: reqwest::Client::builder()
+                .timeout(Duration::from_secs(60))
+                .build()
+                .unwrap_or_else(|_| reqwest::Client::new()),
             base_url: base,
             endpoint_path: endpoint,
             rate_limiter: Arc::new(rate_limiter),
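Note the error handling in the diff: `reqwest::ClientBuilder::build()` returns a `Result` (construction can fail, e.g. during TLS backend setup), and `unwrap_or_else` falls back to a default, timeout-less client instead of panicking. A minimal std-only sketch of that fallback pattern, with a hypothetical `Client` struct standing in for `reqwest::Client`:

```rust
use std::time::Duration;

// Hypothetical stand-in for reqwest::Client; reqwest itself is not used here.
#[derive(Debug, PartialEq)]
struct Client {
    timeout: Option<Duration>,
}

// Models reqwest::ClientBuilder::build(), which is fallible because
// client construction (e.g. TLS backend initialization) can fail.
fn build_with_timeout(timeout: Duration) -> Result<Client, String> {
    Ok(Client { timeout: Some(timeout) })
}

fn main() {
    // Same shape as the commit: try the configured builder,
    // fall back to a default client if construction fails.
    let client = build_with_timeout(Duration::from_secs(60))
        .unwrap_or_else(|_| Client { timeout: None }); // default has no timeout
    assert_eq!(client.timeout, Some(Duration::from_secs(60)));
    println!("timeout: {:?}", client.timeout);
}
```

One caveat of the real fallback branch: if the builder ever does fail, `reqwest::Client::new()` silently reintroduces the no-timeout behavior this commit fixes, so logging a warning there would make the degradation visible.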