Analyze Docker logs with LLMs. Local (Ollama) or Cloud (OpenAI).
pip install dockai
dockai analyze my-container --since 15m --tail 3000
LLM-driven summaries & root-cause fixes.
Instant CPU & Mem stats (`--instant-perf`), windowed with `--perf`.
Ollama (`qwen2.5:7b-instruct`) or OpenAI (`gpt-4o-mini`).
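This excerpt doesn't show how dockai chooses between the two backends, so the sketch below covers only the standard backend-side setup it would sit on top of: pulling the local model with the stock Ollama CLI, and exporting OpenAI's conventional `OPENAI_API_KEY` variable. Both steps are assumptions about the environment, not dockai commands or flags.

```bash
# Local backend: pull the default local model with the standard Ollama CLI
# (assumption: you run Ollama yourself; this is not a dockai command)
ollama pull qwen2.5:7b-instruct

# Cloud backend: OpenAI's conventional API-key environment variable
# (assumption: dockai reads it from the environment; not confirmed by this excerpt)
export OPENAI_API_KEY="sk-..."
```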
pip install dockai
Then run: `dockai analyze <container> --since 1h --tail 10000`
Use `--instant-perf` for quick one-shot metrics, or `--perf 60` to sample a time window.
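A minimal usage sketch combining the flags above; the container name `my-container` is reused from the quick-start line, and pairing `--instant-perf`/`--perf` with `analyze` is assumed from the descriptions above.

```bash
# One-shot CPU & memory snapshot alongside the log analysis
dockai analyze my-container --since 1h --tail 10000 --instant-perf

# Sample CPU & memory over a 60-second window instead
dockai analyze my-container --since 1h --tail 10000 --perf 60
```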