Artificial Intelligence (AI) is no longer confined to web apps and cloud dashboards — it’s rapidly making its way into developer tools, automations, and even command-line utilities. A lightweight AI-powered CLI can save time, generate ideas, explain code, or automate tasks directly from your terminal without switching context.
In this tutorial, we’ll build a command-line tool in Rust that connects to an AI text-generation API. With it, you’ll be able to type:
ai-cli ask "Explain Rust ownership in simple terms"
…and instantly get an AI-generated response.
Along the way, you’ll learn how to:
- Design and build a CLI with Clap for argument parsing.
- Perform async HTTP requests using Tokio and Reqwest.
- Parse JSON responses using Serde.
- Securely handle API keys via environment variables.
- Add features like conversation history, colored output, and error handling.
- Package and distribute your CLI tool.
Why Rust? Rust is ideal for CLI applications because it combines performance, safety, and portability. Your AI-powered assistant will run blazingly fast and compile to a single binary that works across platforms.
By the end of this guide, you’ll have a fully functional AI CLI assistant and the skills to extend it further — whether you want to integrate other APIs, add a terminal UI, or distribute it as an open-source tool.
Project Setup & Dependencies
Before we start coding, we’ll set up a fresh Rust project and add the dependencies we’ll need to build our AI-powered CLI tool.
Step 1: Create a new Rust binary project
Open your terminal and run:
cargo new ai-cli --bin
cd ai-cli
This creates a new folder called ai-cli with the following structure:
ai-cli/
├─ Cargo.toml
└─ src/
└─ main.rs
The --bin flag tells Cargo to generate a binary project (executable) instead of a library.
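The generated src/main.rs contains the usual Hello World stub, which we’ll replace in the next steps:
fn main() {
    println!("Hello, world!");
}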
Step 2: Add dependencies
Open the Cargo.toml file and update it with the following dependencies:
[package]
name = "ai-cli"
version = "0.1.0"
edition = "2024"
[dependencies]
# CLI argument parsing
clap = { version = "4.2", features = ["derive"] }
# Async runtime
tokio = { version = "1.36", features = ["rt-multi-thread", "macros"] }
# HTTP client
reqwest = { version = "0.12.23", features = ["json"] }
# JSON serialization/deserialization
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
# Error handling
anyhow = "1.0"
# Environment variable management
dotenvy = "0.15"
# Logging and tracing
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["fmt"] }
Here’s what each dependency does:
- clap → Makes it easy to define CLI commands, arguments, and flags.
- tokio → Async runtime required by Reqwest.
- reqwest → HTTP client to make API calls to the AI provider.
- serde / serde_json → Serialize request payloads and parse JSON responses.
- anyhow → Simplified error handling with context.
- dotenvy → Loads environment variables from a .env file.
- tracing + tracing-subscriber → For structured logging and debugging.
Step 3: Verify dependencies
Run the following command to make sure everything compiles:
cargo build
If successful, you’re ready to move on.
✅ At this point, we have a Rust project set up with all the dependencies needed for building our AI-powered CLI.
CLI Design with Clap
Now that our project is set up, it’s time to design the command-line interface. We’ll use the Clap crate, which provides a clean way to define commands, subcommands, arguments, and options.
Step 1: Decide on commands
For our AI-powered CLI, we’ll start with these commands:
- ask → Send a single prompt and get a response.
- chat (optional, for later) → Start an interactive session with conversation history.
- config (optional) → Configure settings like API key or default model.
In this section, we’ll implement the basic ask command.
Step 2: Define the CLI structure
Open src/main.rs and replace its contents with the following:
use clap::{Parser, Subcommand};
/// AI-powered CLI tool built in Rust
#[derive(Parser)]
#[command(name = "ai-cli")]
#[command(version = "0.1.0")]
#[command(about = "A small AI-powered CLI tool", long_about = None)]
struct Cli {
#[command(subcommand)]
command: Commands,
}
#[derive(Subcommand)]
enum Commands {
/// Ask a single question and get an AI-generated reply
Ask {
/// The prompt or question to send
prompt: String,
/// Model name (optional, defaults to "default-model")
#[arg(short, long, default_value = "default-model")]
model: String,
/// Max tokens (response length)
        #[arg(short = 'n', long, default_value_t = 150)]
max_tokens: u32,
/// Temperature (controls randomness)
        #[arg(short = 'T', long, default_value_t = 0.7)]
temperature: f32,
},
}
fn main() {
let cli = Cli::parse();
match &cli.command {
Commands::Ask { prompt, model, max_tokens, temperature } => {
println!("Prompt: {}", prompt);
println!("Model: {}", model);
println!("Max tokens: {}", max_tokens);
println!("Temperature: {}", temperature);
}
}
}
Step 3: Try it out
Run the CLI with a sample command:
cargo run -- ask "Explain Rust ownership in simple terms" -m gpt-4 -n 200 -T 0.8
Expected output:
Prompt: Explain Rust ownership in simple terms
Model: gpt-4
Max tokens: 200
Temperature: 0.8
Right now, the CLI just prints arguments — in the next section, we’ll hook this up to an AI API call.
✅ With this step, we have a working CLI structure that can accept commands, arguments, and options.
Async HTTP Client & Calling the AI API
Now that our CLI is parsing commands correctly, it’s time to connect it to an AI API. We’ll use Reqwest for making HTTP requests, Serde for JSON parsing, and Tokio as the async runtime.
Step 1: Prepare your API key
Most AI providers require an API key (e.g., OpenAI, Anthropic, etc.). For security, we’ll store it in an environment variable.
Create a .env file in your project root:
AI_API_KEY=your_api_key_here
⚠️ Important: Never commit .env or API keys to version control. Add it to .gitignore.
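For example, append it from the shell:
echo ".env" >> .gitignore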
Step 2: Define request & response structs
Open src/main.rs and add the following above fn main:
use serde::{Deserialize, Serialize};
use anyhow::{Context, Result};
use reqwest::Client;
use std::env;
#[derive(Serialize)]
struct ChatMessage<'a> {
role: &'a str,
content: &'a str,
}
#[derive(Serialize)]
struct ChatRequest<'a> {
model: &'a str,
messages: Vec<ChatMessage<'a>>,
max_tokens: u32,
temperature: f32,
}
#[derive(Deserialize, Debug)]
struct ChatChoice {
message: ChatMessageOwned,
}
#[derive(Deserialize, Debug)]
struct ChatMessageOwned {
content: String,
}
#[derive(Deserialize, Debug)]
struct ChatResponse {
choices: Vec<ChatChoice>,
}
These map our Rust structs to the typical request/response JSON shape of an AI completion API.
(Adjust field names if your provider uses a different format.)
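If you want to sanity-check the mapping without calling the API, a small unit test can deserialize a hand-written OpenAI-style body (Serde ignores unknown fields like role by default). Run it with cargo test:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_openai_style_response() {
        // Hand-written sample body; fields we didn't model are simply ignored
        let body = r#"{
            "choices": [
                { "message": { "role": "assistant", "content": "Hello!" } }
            ]
        }"#;
        let parsed: ChatResponse = serde_json::from_str(body).expect("valid JSON");
        assert_eq!(parsed.choices[0].message.content, "Hello!");
    }
}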
Step 3: Implement the ask function
Still in src/main.rs, add this async function:
async fn ask(
client: &Client,
api_key: &str,
model: &str,
prompt: &str,
max_tokens: u32,
temperature: f32
) -> Result<String> {
let req = ChatRequest {
model,
messages: vec![ChatMessage {
role: "user",
content: prompt,
}],
max_tokens,
temperature,
};
let url = "https://api.openai.com/v1/chat/completions";
let res = client
.post(url)
.bearer_auth(api_key)
.json(&req)
.send().await
.context("Failed to send request")?;
if !res.status().is_success() {
let status = res.status();
let body = res.text().await.unwrap_or_default();
anyhow::bail!("API error: {} - {}", status, body);
}
let completion: ChatResponse = res.json().await.context("Failed to parse response")?;
let reply = completion.choices
.get(0)
.map(|c| c.message.content.clone())
.unwrap_or_else(|| "No reply found".to_string());
Ok(reply.trim().to_string())
}
Step 4: Update main to use it
Modify fn main to run asynchronously and call the ask function:
#[tokio::main]
async fn main() -> Result<()> {
dotenvy::dotenv().ok();
let cli = Cli::parse();
let api_key = env::var("AI_API_KEY")
.context("Please set AI_API_KEY in your environment or .env file")?;
let client = Client::new();
match &cli.command {
Commands::Ask {
prompt,
model,
max_tokens,
temperature,
} => {
match ask(
&client,
&api_key,
model,
prompt,
*max_tokens,
*temperature,
)
.await
{
Ok(reply) => println!("\nAI reply:\n{}\n", reply),
Err(e) => eprintln!("Error: {:?}", e),
}
}
}
Ok(())
}
Step 5: Test the CLI
Run:
cargo run -- ask "Explain Rust ownership in one paragraph" -m gpt-3.5-turbo -n 150 -T 0.7
If everything is set up correctly and your API key is valid, you should get an AI-generated reply right in your terminal 🎉.
✅ At this point, you’ve built a fully working AI CLI that accepts a prompt, sends it to an API, and prints the response.
Basic AI Response Flow (Ask Command)
In this section, we’ll connect everything so that when a user runs the CLI with the ask command, it will:
- Read the prompt from the command line.
- Build a JSON request for the AI API.
- Send it via reqwest.
- Parse the response JSON.
- Print out the AI-generated reply.
1. Create a Response Struct for JSON Deserialization
We’ll model only the relevant fields from the OpenAI-style API response. (These structs replace the ChatResponse/ChatChoice ones from the previous section; the rest of the tutorial refers to ApiResponse.)
use serde::Deserialize;
#[derive(Debug, Deserialize)]
struct Choice {
message: Message,
}
#[derive(Debug, Deserialize)]
struct Message {
role: String,
content: String,
}
#[derive(Debug, Deserialize)]
struct ApiResponse {
choices: Vec<Choice>,
}
2. Update main.rs to handle the Ask Command
We add logic to send the user’s prompt and print the AI’s reply.
use clap::{Parser, Subcommand};
use reqwest::Client;
use serde_json::json;
use std::env;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    dotenvy::dotenv().ok(); // Load .env so AI_API_KEY is available, as before
    let cli = Cli::parse();
match &cli.command {
Commands::Ask {
prompt,
model,
max_tokens,
temperature,
} => {
// Load API key
let api_key = env::var("AI_API_KEY")
.expect("AI_API_KEY environment variable not set");
// Prepare request payload
let payload = json!({
"model": model,
"messages": [
{ "role": "user", "content": prompt }
],
"max_tokens": max_tokens,
"temperature": temperature
});
// Send request
let client = Client::new();
let res = client
.post("https://api.openai.com/v1/chat/completions")
.bearer_auth(api_key)
.json(&payload)
.send()
.await?;
            if !res.status().is_success() {
                // Capture the status before consuming the body, otherwise `res` is moved
                let status = res.status();
                let err_text = res.text().await?;
                anyhow::bail!("API error: {} - {}", status, err_text);
}
// Parse response
let api_response: ApiResponse = res.json().await?;
if let Some(choice) = api_response.choices.first() {
println!("\n🤖 AI Response:\n{}\n", choice.message.content);
} else {
println!("No response received from AI.");
}
}
}
Ok(())
}
3. Try It Out
Run the CLI:
cargo run -- ask "Explain Rust ownership in simple terms" -m gpt-4o-mini -n 200 -T 0.7
Expected output (varies depending on model):
🤖 AI Response:
Rust ownership is a system that makes sure data is managed safely without needing a garbage collector. Each piece of data has a single owner, and when the owner goes out of scope, the data is cleaned up automatically...
🎯 What We’ve Achieved
- The ask command now sends prompts to the AI.
- Responses are parsed and displayed cleanly.
- Errors from the API are handled gracefully.
Improving User Experience (Colors, Formatting, Multi-line Prompts)
A CLI tool doesn’t just need to work — it should also be pleasant to use. We’ll enhance the output by:
- Adding colors and styling using the colored crate.
- Formatting output with clear headers and separation lines.
- Supporting multi-line prompts so users can write bigger queries.
1. Add the colored Dependency
In Cargo.toml:
colored = "3.0.0"
2. Enhance Output with Colors
Update main.rs where we print the AI response:
use colored::*;
And replace:
println!("\n🤖 AI Response:\n{}\n", choice.message.content);
with something more polished:
println!("{}", "================ AI Response ================".green().bold());
println!("{}", choice.message.content.white());
println!("{}", "============================================".green().bold());
Now the response stands out and looks professional.
3. Multi-Line Prompt Input (Optional but Handy)
Sometimes you want to paste a paragraph as a prompt.
We can allow stdin input when prompt is missing.
Modify the Ask command struct in Commands:
Ask {
/// The prompt or question to send
#[arg()]
prompt: Option<String>,
/// Model name (optional, defaults to gpt-4o-mini)
#[arg(short, long, default_value = "gpt-4o-mini")]
model: String,
/// Max tokens (response length)
#[arg(short = 'n', long, default_value_t = 150)]
max_tokens: u32,
/// Temperature (controls randomness)
#[arg(short = 'T', long, default_value_t = 0.7)]
temperature: f32,
},
And in main.rs:
use std::io::{self, Read};
match &cli.command {
Commands::Ask {
prompt,
model,
max_tokens,
temperature,
} => {
// Get prompt either from arg or stdin
let user_prompt = if let Some(p) = prompt {
p.clone()
} else {
println!("{}", "Enter your prompt (Ctrl+D to finish):".blue());
let mut buffer = String::new();
io::stdin().read_to_string(&mut buffer)?;
buffer
};
// ... then use `user_prompt` in the JSON payload instead of `prompt`
}
}
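For completeness, the payload from earlier then references the resolved user_prompt; everything else stays the same:
// Same payload as before, but built from `user_prompt` (arg or stdin)
let payload = json!({
    "model": model,
    "messages": [
        { "role": "user", "content": user_prompt }
    ],
    "max_tokens": max_tokens,
    "temperature": temperature
});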
4. Try It Out
- Single-line prompt:
  cargo run -- ask "Give me 5 project ideas in Rust"
- Multi-line prompt (leave out the argument and type directly):
  cargo run -- ask
Then type (end with Ctrl+D on macOS/Linux or Ctrl+Z on Windows):
Write a haiku about Rust programming.
Make it funny but clear.
Output will look like:
================ AI Response ================
Rust borrows the code,
Ownership keeps bugs away,
Safe and sound it runs.
=============================================
🎯 What We’ve Achieved
- Added colored output for clarity.
- Improved readability with headers and formatting.
- Supported multi-line prompts for long-form queries.
Adding More Commands (Summarize, Translate, Chat Mode)
A good CLI tool should offer multiple entry points for common AI use cases. We’ll extend our CLI with three new subcommands:
- summarize → Summarize long text into a shorter output.
- translate → Translate text into a target language.
- chat → Start an interactive chat session with the AI.
1. Extend the Clap CLI with New Commands
Update the Commands enum:
#[derive(Subcommand)]
enum Commands {
/// Ask a single question and get an AI-generated reply
Ask {
#[arg()]
prompt: Option<String>,
#[arg(short, long, default_value = "gpt-4o-mini")]
model: String,
#[arg(short = 'n', long, default_value_t = 150)]
max_tokens: u32,
#[arg(short = 'T', long, default_value_t = 0.7)]
temperature: f32,
},
/// Summarize text input
Summarize {
#[arg()]
text: Option<String>,
},
/// Translate text into another language
Translate {
#[arg()]
text: Option<String>,
/// Target language (e.g., "fr", "es", "id")
#[arg(short, long, default_value = "en")]
to: String,
},
/// Start an interactive chat session
Chat {
#[arg(short, long, default_value = "gpt-4o-mini")]
model: String,
},
}
2. Implement summarize and translate
Inside main.rs, extend the match block:
async fn send_ai_request(user_input: &str, task: &str, model: &str) -> anyhow::Result<()> {
let api_key = std::env::var("AI_API_KEY").expect("AI_API_KEY not set");
let client = reqwest::Client::new();
// Build full prompt
let full_prompt = format!("{}\n\n{}", task, user_input);
let payload =
serde_json::json!({
"model": model,
"messages": [
{ "role": "user", "content": full_prompt }
],
"max_tokens": 200,
"temperature": 0.7
});
let res = client
.post("https://api.openai.com/v1/chat/completions")
.bearer_auth(api_key)
.json(&payload)
.send().await?;
let status = res.status();
if !status.is_success() {
let err_text = res.text().await?;
anyhow::bail!("API error: {} - {}", status, err_text);
}
let api_response: ApiResponse = res.json().await?;
if let Some(choice) = api_response.choices.first() {
println!("{}", "================ AI Response ================".green().bold());
println!("{}", choice.message.content.white());
println!("{}", "============================================".green().bold());
} else {
println!("No response received from AI.");
}
Ok(())
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    dotenvy::dotenv().ok(); // Load .env as in the earlier sections
    let cli = Cli::parse();
match &cli.command {
Commands::Summarize { text } => {
let input_text = get_input_or_stdin(
text,
"Paste text to summarize (Ctrl+D to finish):"
)?;
send_ai_request(
&input_text,
"Summarize the following text briefly:",
"gpt-4o-mini"
).await?;
}
Commands::Translate { text, to } => {
let input_text = get_input_or_stdin(
text,
"Paste text to translate (Ctrl+D to finish):"
)?;
let prompt = format!("Translate the following text into {}:", to);
send_ai_request(&input_text, &prompt, "gpt-4o-mini").await?;
}
Commands::Chat { model } => {
println!("{}", "Starting interactive chat (type 'exit' to quit)".cyan().bold());
let mut history = vec![];
loop {
print!("{}", "You: ".blue().bold());
use std::io::Write;
std::io::stdout().flush()?;
let mut input = String::new();
std::io::stdin().read_line(&mut input)?;
let input = input.trim();
if input.eq_ignore_ascii_case("exit") {
break;
}
history.push(json!({ "role": "user", "content": input }));
let payload =
json!({
"model": model,
"messages": history,
"max_tokens": 200,
"temperature": 0.7
});
let api_key = std::env::var("AI_API_KEY").expect("AI_API_KEY not set");
let client = reqwest::Client::new();
let res = client
.post("https://api.openai.com/v1/chat/completions")
.bearer_auth(api_key)
.json(&payload)
.send().await?;
let api_response: ApiResponse = res.json().await?;
if let Some(choice) = api_response.choices.first() {
println!("{}", format!("AI: {}", choice.message.content).green());
history.push(
json!({
"role": "assistant",
"content": choice.message.content
})
);
}
}
}
        // The Ask branch from the previous sections stays here unchanged
        _ => {}
}
Ok(())
}
3. Implement chat Mode (Interactive Loop)
This is the same Chat branch already included in the match above, shown here in isolation. It keeps the full conversation history and resends it on every turn, which is what allows back-and-forth with the AI:
Commands::Chat { model } => {
println!("{}", "Starting interactive chat (type 'exit' to quit)".cyan().bold());
let mut history = vec![];
loop {
print!("{}", "You: ".blue().bold());
use std::io::Write;
std::io::stdout().flush()?;
let mut input = String::new();
std::io::stdin().read_line(&mut input)?;
let input = input.trim();
if input.eq_ignore_ascii_case("exit") {
break;
}
history.push(json!({ "role": "user", "content": input }));
let payload = json!({
"model": model,
"messages": history,
"max_tokens": 200,
"temperature": 0.7
});
let api_key = std::env::var("AI_API_KEY").expect("AI_API_KEY not set");
let client = reqwest::Client::new();
let res = client
.post("https://api.openai.com/v1/chat/completions")
.bearer_auth(api_key)
.json(&payload)
.send()
.await?;
let api_response: ApiResponse = res.json().await?;
if let Some(choice) = api_response.choices.first() {
println!("{}", format!("AI: {}", choice.message.content).green());
history.push(json!({
"role": "assistant",
"content": choice.message.content
}));
}
}
}
4. Helper Function for Input
To avoid repeating stdin logic:
fn get_input_or_stdin(opt: &Option<String>, prompt: &str) -> anyhow::Result<String> {
if let Some(text) = opt {
Ok(text.clone())
} else {
println!("{}", prompt.blue());
let mut buffer = String::new();
std::io::stdin().read_to_string(&mut buffer)?;
Ok(buffer)
}
}
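Because the helper falls back to reading stdin, you can also pipe text straight in (article.txt is just a placeholder file name):
# Summarize a file without pasting it by hand; stdin is used because no argument is given
cat article.txt | cargo run -- summarize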
5. Try It Out
- Summarize:
  cargo run -- summarize "Rust is a systems programming language..."
- Translate to Indonesian:
  cargo run -- translate "Hello, how are you?" --to id
- Chat mode:
  cargo run -- chat
Example:
You: Hello AI
AI: Hi there! How can I help you today?
You: Tell me a Rust joke
AI: Why did the Rustacean bring a borrow checker to the party? To make sure no one used references after they were dropped!
🎯 What We’ve Achieved
- Added summarize, translate, and chat commands.
- Built a reusable stdin helper.
- Enabled interactive chat mode with conversation history.
Enabled interactive chat mode with conversation history.
Error Handling & Logging
A good CLI tool should:
- Show clear, user-friendly errors (colored, consistent formatting).
- Allow verbose debugging logs when needed (via an env var like RUST_LOG).
We’ll use:
- anyhow (already in use) → easy error propagation with ?.
- thiserror → structured custom errors (optional).
- env_logger + log → debug and trace logging.
1. Add Dependencies
In Cargo.toml (these sit alongside the tracing crates from the setup section; this section uses the simpler log facade):
log = "0.4"
env_logger = "0.11"
thiserror = "1.0"
2. Initialize Logging
In main.rs, at the start of main:
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init(); // Enable logging (controlled by RUST_LOG)
let cli = Cli::parse();
// ...
Ok(())
}
Now you can run with RUST_LOG=debug cargo run -- ask "Hello" to see debug logs.
3. Add Structured Error Types (Optional but Clean)
Define a custom error type for common cases:
use thiserror::Error;
#[derive(Error, Debug)]
pub enum CliError {
#[error("API key not set. Please export AI_API_KEY before running.")]
MissingApiKey,
#[error("Network request failed: {0}")]
NetworkError(String),
#[error("API returned error: {0}")]
ApiError(String),
}
4. Improve send_ai_request Error Handling
Update send_ai_request to use friendly errors and logs:
async fn send_ai_request(
user_input: &str,
task: &str,
model: &str,
) -> anyhow::Result<()> {
let api_key = std::env::var("AI_API_KEY")
.map_err(|_| CliError::MissingApiKey)?;
log::debug!("Sending request with model {}", model);
let client = reqwest::Client::new();
let full_prompt = format!("{}\n\n{}", task, user_input);
let payload = serde_json::json!({
"model": model,
"messages": [
{ "role": "user", "content": full_prompt }
],
"max_tokens": 200,
"temperature": 0.7
});
let res = client
.post("https://api.openai.com/v1/chat/completions")
.bearer_auth(&api_key)
.json(&payload)
.send()
.await
.map_err(|e| CliError::NetworkError(e.to_string()))?;
let status = res.status();
if !status.is_success() {
let err_text = res.text().await.unwrap_or_default();
return Err(CliError::ApiError(format!("{} - {}", status, err_text)).into());
}
let api_response: ApiResponse = res.json().await?;
if let Some(choice) = api_response.choices.first() {
println!("{}", "================ AI Response ================".green().bold());
println!("{}", choice.message.content.white());
println!("{}", "============================================".green().bold());
} else {
println!("{}", "⚠️ No response received from AI.".yellow());
}
Ok(())
}
5. Friendlier Errors at Runtime
Try running without an API key:
cargo run -- ask "Hello"
Output:
Error: API key not set. Please export AI_API_KEY before running.
Try with RUST_LOG=debug:
RUST_LOG=debug cargo run -- ask "Hello"
Output:
DEBUG main: Sending request with model gpt-4o-mini
Error: API key not set. Please export AI_API_KEY before running.
🎯 What We Achieved
- Errors are now clear and user-friendly.
- Debug logs are available with RUST_LOG.
- Common issues (missing API key, network failure, API error) have structured messages.
Packaging & Distribution
Right now, our AI CLI runs via cargo run, but real-world users should be able to install it globally and run it with a single command (ai-cli). In this section, we’ll:
- Configure the project for installation.
- Show how to build release binaries.
- Optionally prepare for publishing to crates.io.
1. Set a Binary Name
Open Cargo.toml and add a [[bin]] section:
[package]
name = "ai-cli"
version = "0.1.0"
edition = "2024"
description = "A simple Rust AI CLI assistant powered by OpenAI API"
license = "MIT"
repository = "https://github.com/yourusername/ai-cli"
[[bin]]
name = "ai-cli"
path = "src/main.rs"
This makes the binary name explicit. (Cargo already names the default binary after the package, so you would get ai-cli either way, but the [[bin]] section documents it and lets you rename or add binaries later.)
2. Build & Install Locally
Run:
cargo build --release
This creates an optimized binary in:
target/release/ai-cli
To install globally (so you can just type ai-cli anywhere):
cargo install --path .
✅ Now you can run:
ai-cli ask "Hello from anywhere!"
3. Add a --version & --help Check
Thanks to Clap, this already works:
ai-cli --help
ai-cli --version
Output example (generated by Clap from our command definitions):
A small AI-powered CLI tool

Usage: ai-cli <COMMAND>

Commands:
  ask        Ask a single question and get an AI-generated reply
  summarize  Summarize text input
  translate  Translate text into another language
  chat       Start an interactive chat session
  help       Print this message or the help of the given subcommand(s)

Options:
  -h, --help     Print help
  -V, --version  Print version
4. Distribute to Others
Option A: Share Binary
Zip and share target/release/ai-cli. Users just download and place it in their PATH.
Option B: GitHub Release
Create a GitHub repo, push your code, and add release binaries for Linux, macOS, and Windows.
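If you want to build for other targets from your own machine, one common approach is rustup’s target support; the target triple below is just an example, and crates that use native TLS may need extra setup (for example, switching Reqwest to rustls):
# Add a target and build an optimized binary for it
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
# The binary ends up in target/x86_64-unknown-linux-musl/release/ai-cli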
Option C: Publish to crates.io
Make your crate public so others can install via:
cargo install ai-cli
Steps:
- Ensure Cargo.toml has metadata:
  description = "A Rust AI CLI Assistant"
  license = "MIT"
  repository = "https://github.com/yourusername/ai-cli"
  readme = "README.md"
  categories = ["command-line-utilities"]
  keywords = ["cli", "ai", "openai", "chatgpt"]
- Log in to crates.io:
  cargo login <API_KEY>
- Publish (a dry run first is shown below):
  cargo publish
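Before the real publish, you can let Cargo package and verify everything without uploading anything:
cargo publish --dry-run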
5. Add a README.md
A clear README.md helps users understand usage. Example:
# AI CLI 🤖
A simple Rust-based CLI to interact with AI models from your terminal.

## Install
```bash
cargo install --path .
```

## Usage
```bash
ai-cli ask "What is Rust?"
ai-cli summarize "Rust is a systems programming language..."
ai-cli translate "Hello, how are you?" --to id
```

## Configuration
Set your API key:
```bash
export AI_API_KEY="your_api_key_here"
```

## License
MIT
🎯 What We Achieved
- Your CLI now builds and installs globally.
- Packaged cleanly with metadata for crates.io.
- Ready to **share with the world**!
👉 That completes the Rust AI CLI Tutorial ✅
Conclusion & Next Steps
In this tutorial, we built an AI-powered CLI tool in Rust step by step:
- ✅ Set up a Rust project with the right dependencies (tokio, reqwest, clap, serde).
- ✅ Designed a clean CLI interface using Clap with multiple subcommands.
- ✅ Connected to the AI API with async HTTP requests.
- ✅ Implemented the ask command for natural Q&A.
- ✅ Enhanced UX with formatted output (colors, headers, multi-line prompts).
- ✅ Added extra commands like summarize, translate, and chat to extend functionality.
- ✅ Improved error handling and logging for real-world reliability.
- ✅ Learned how to package, install, and distribute the CLI for others.
With this foundation, you now have a production-ready CLI skeleton that you can adapt for any AI-powered workflow.
🚀 Next Steps: Where to Go From Here
This project can evolve in many directions:
- Streaming Responses
  - Use the AI API’s streaming endpoint for real-time token output.
  - Great for interactive chat-style use.
- Conversation Memory
  - Store previous prompts/responses in a local file or SQLite database.
  - Useful for multi-turn conversations.
- Config File Support
  - Instead of only relying on environment variables, add ~/.ai-cli/config.toml.
  - Let users customize default model, temperature, and tokens.
- Multiple AI Providers
  - Abstract the API client to support OpenAI, Anthropic, Gemini, Ollama (a sketch of one way to do this follows this list).
  - Allow users to pick via --provider openai.
- Improved Output Options
  - Add a --json flag for machine-readable results.
  - Add clipboard copy (--copy) for quick usage.
- Publishing to crates.io
  - Share your tool with the Rust ecosystem so others can cargo install ai-cli.
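As a starting point for the multi-provider idea, here is a minimal sketch of what a provider abstraction could look like. The AiProvider trait, AiRequest struct, and OpenAiProvider type are hypothetical names (not part of the tutorial code), and async-trait is an assumed extra dependency; each provider would translate the generic request into its own HTTP call:
use async_trait::async_trait; // assumed extra dependency: async-trait = "0.1"

/// Provider-agnostic request; field names are illustrative only.
pub struct AiRequest {
    pub model: String,
    pub prompt: String,
    pub max_tokens: u32,
    pub temperature: f32,
}

#[async_trait]
pub trait AiProvider {
    async fn complete(&self, req: &AiRequest) -> anyhow::Result<String>;
}

/// Example backed by the OpenAI-style endpoint used throughout this tutorial.
pub struct OpenAiProvider {
    pub api_key: String,
    pub client: reqwest::Client,
}

#[async_trait]
impl AiProvider for OpenAiProvider {
    async fn complete(&self, req: &AiRequest) -> anyhow::Result<String> {
        let payload = serde_json::json!({
            "model": req.model,
            "messages": [{ "role": "user", "content": req.prompt }],
            "max_tokens": req.max_tokens,
            "temperature": req.temperature
        });
        let res = self.client
            .post("https://api.openai.com/v1/chat/completions")
            .bearer_auth(&self.api_key)
            .json(&payload)
            .send()
            .await?;
        let body: serde_json::Value = res.json().await?;
        // Pull out the first choice's content; providers with other response
        // shapes would differ only in this parsing step.
        Ok(body["choices"][0]["message"]["content"]
            .as_str()
            .unwrap_or("")
            .to_string())
    }
}
A --provider flag could then select which implementation to construct behind a Box<dyn AiProvider>.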
🎯 Final Thoughts
Rust may not be the first language that comes to mind for AI integration, but its speed, reliability, and great ecosystem make it perfect for building tools like this. With just a few crates, you created a robust CLI assistant that feels polished and professional.
This project is just the beginning — from here, you can make it your personal AI Swiss Army knife 🛠️🤖.
You can get the full source code on our GitHub.
These are just the basics. If you want to dig deeper into the Rust language and its frameworks, you can take one of the following affordable courses:
- Rust Programming Language: The Complete Course
- Rust Crash Course for Absolute Beginners 2025
- Hands-On Data Structures and Algorithms in Rust
- Master Rust: Ownership, Traits & Memory Safety in 8 Hours
- Web3 Academy Masterclass: Rust
- Creating Botnet in Rust
- Rust Backend Development INTERMEDIATE to ADVANCED [2024]
Happy coding, and may your terminal always have AI superpowers! ⚡