Get started in 3 simple steps

1. Install the package: `npm i use-every-llm`
2. Configure your provider key(s)
3. Call `useLLM({ model: "any model", prompt: "..." })`
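Provider keys are typically supplied via environment variables. A minimal sketch of a `.env` file is below; the variable names are assumptions for illustration, so check the package's documentation for the names it actually reads.

```shell
# .env — example only; the variable names here are assumptions,
# confirm the exact names in the use-every-llm docs.
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
ANTHROPIC_API_KEY=...
```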
From simple text generation to complex multimodal AI interactions
```javascript
import useLLM from 'use-every-llm';

// Basic text generation
const result = await useLLM({
  model: "gemini-2.0-flash",
  prompt: "What model is it?",
});

console.log(result.text);
```
```javascript
import useLLM from 'use-every-llm';

// Streaming: iterate over chunks as they arrive
const result = await useLLM({
  model: "gemini-2.0-flash",
  prompt: "What model is it?",
  streamingResponse: true,
});

for await (const chunk of result) {
  console.log(chunk.text);
}
```
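If you need the full response rather than printing chunks, the stream can be accumulated into a single string. This is a self-contained sketch: `simulateStream` is a stand-in (not part of the library) for the async iterable returned when `streamingResponse` is true, assuming the `{ text }` chunk shape shown above.

```javascript
// Stand-in for the streaming result: an async iterable of { text } chunks.
async function* simulateStream() {
  for (const text of ["Hello", ", ", "world"]) yield { text };
}

// Accumulate every chunk's text into one string.
async function collect(stream) {
  let full = "";
  for await (const chunk of stream) full += chunk.text;
  return full;
}

collect(simulateStream()).then((full) => console.log(full)); // "Hello, world"
```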
```javascript
import useLLM from 'use-every-llm';

// Image input
const result = await useLLM({
  model: "gemini-1.5-flash",
  prompt: "What image is it?",
  image: "image.png",
});

console.log(result.text);
```
```javascript
import useLLM from 'use-every-llm';

// Video input with a system prompt
const result = await useLLM({
  model: "gemini-2.0-flash",
  prompt: "What video is it?",
  systemPrompt: "You are a video describer",
  video: "My Movie.mp4",
});

console.log(result.text);
```
No analytics by default. No key collection. All requests go straight to the provider you choose.
MIT licensed for maximum flexibility and peace of mind.
Keys are read from environment variables, and server-only usage is recommended so they never reach the client.