Rust is no longer just a systems programming language for performance-critical applications—it is also becoming a strong contender in machine learning and deep learning development. Thanks to modern libraries like Burn, developers can now build neural networks in pure Rust with excellent performance, safety, and portability.
In this tutorial, you will learn how to build your first neural network in Rust using Burn. We will create a simple feedforward neural network for binary classification, train it using synthetic data, and evaluate the results.
By the end of this guide, you will understand:
- What Burn is and why it is useful
- How to set up a Rust machine learning project
- How to define a neural network model
- How to train the model using Burn
- How to evaluate predictions
Let’s get started.
Why Use Burn for Deep Learning in Rust?
Burn is a modern deep learning framework built entirely in Rust. It is designed to offer flexibility, performance, and backend portability while maintaining Rust’s strong type safety and memory guarantees.
Key advantages of Burn include:
- Backend support for CPU, GPU, and WebAssembly
- Automatic differentiation
- Modular neural network components
- Safe concurrency without garbage collection
- Strong compile-time guarantees
Unlike Python-based frameworks such as PyTorch or TensorFlow, Burn allows you to build machine learning systems with Rust’s safety and deployment advantages.
This makes it ideal for:
- High-performance ML services
- Embedded AI applications
- Production inference systems
- Experimental Rust-first ML projects
Prerequisites
Before starting, make sure you have:
- Rust installed (latest stable version)
- Cargo package manager
- Basic understanding of Rust syntax
- Basic knowledge of neural networks
Check your Rust installation:
rustc --version
cargo --version
If Rust is not installed, visit rust-lang.org and install it using rustup.
Step 1: Create a New Rust Project
Create a new project using Cargo:
cargo new rust-burn-neural-network
cd rust-burn-neural-network
This creates the project structure:
rust-burn-neural-network/
├── Cargo.toml
└── src
└── main.rs
Step 2: Add Burn Dependencies
Open Cargo.toml and replace its contents with:
[package]
name = "rust-burn-neural-network"
version = "0.1.0"
edition = "2021"
[dependencies]
burn = { version = "0.14", features = ["train", "ndarray"] }
rand = "0.8"
Here we use:
- burn for the deep learning framework
- the ndarray feature as the CPU backend
- rand for generating sample data
Save the file and build the project to download and compile the dependencies:
cargo build
Step 3: Generate Synthetic Training Data
For simplicity, we will create a binary classification problem where:
- Input: two floating-point values
- Output: either 0 or 1
Replace the contents of src/main.rs with:
use rand::Rng;
fn generate_data(samples: usize) -> (Vec<[f32; 2]>, Vec<usize>) {
let mut rng = rand::thread_rng();
let mut inputs = Vec::new();
let mut labels = Vec::new();
for _ in 0..samples {
let x1: f32 = rng.gen_range(0.0..1.0);
let x2: f32 = rng.gen_range(0.0..1.0);
let label = if x1 + x2 > 1.0 { 1 } else { 0 };
inputs.push([x1, x2]);
labels.push(label);
}
(inputs, labels)
}
This creates a simple rule:
if x1 + x2 > 1.0 → class 1
otherwise → class 0
Because the two classes are separated by a straight line, this is an easy, linearly separable problem that is perfect for demonstrating a first neural network.
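Since the line x1 + x2 = 1 cuts the unit square into two equal triangles, each class should cover roughly half of the input space. The quick standalone sketch below (not part of the tutorial code) confirms this with a deterministic grid instead of random samples:

```rust
// Sanity check: the x1 + x2 > 1.0 rule should assign roughly half
// of the unit square to class 1, because the line x1 + x2 = 1 cuts
// the square into two equal triangles.
fn class1_ratio(steps: usize) -> f32 {
    let mut class1 = 0;
    for i in 0..steps {
        for j in 0..steps {
            // Sample the centers of a uniform grid over [0, 1] x [0, 1].
            let x1 = (i as f32 + 0.5) / steps as f32;
            let x2 = (j as f32 + 0.5) / steps as f32;
            if x1 + x2 > 1.0 {
                class1 += 1;
            }
        }
    }
    class1 as f32 / (steps * steps) as f32
}

fn main() {
    println!("class 1 ratio: {:.3}", class1_ratio(200));
}
```

If the printed ratio is far from 0.5, something is wrong with the labeling rule.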
Step 4: Define the Neural Network Model
Now let’s define a simple feedforward neural network with:
- Input layer: 2 neurons
- Hidden layer: 8 neurons
- Output layer: 2 neurons
Update main.rs:
use burn::{
module::Module,
nn::{Linear, LinearConfig},
tensor::backend::Backend,
};
#[derive(Module, Debug)]
pub struct Model<B: Backend> {
layer1: Linear<B>,
layer2: Linear<B>,
}
impl<B: Backend> Model<B> {
pub fn new(device: &B::Device) -> Self {
Self {
layer1: LinearConfig::new(2, 8).init(device),
layer2: LinearConfig::new(8, 2).init(device),
}
}
}
This creates a small neural network architecture suitable for our classification problem.
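One way to get a feel for the model's size is to count its trainable parameters by hand: a Linear layer with `in` inputs and `out` outputs holds `in * out` weights plus `out` biases. For our 2 → 8 → 2 architecture this comes to 42 parameters. A back-of-the-envelope sketch, independent of Burn:

```rust
// Parameters of a fully connected (Linear) layer: weights + biases.
fn linear_params(inputs: usize, outputs: usize) -> usize {
    inputs * outputs + outputs
}

fn main() {
    let layer1 = linear_params(2, 8); // 2 * 8 + 8 = 24
    let layer2 = linear_params(8, 2); // 8 * 2 + 2 = 18
    println!("total trainable parameters: {}", layer1 + layer2); // 42
}
```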
Step 5: Implement the Forward Pass
Now we define how data flows through the network.
Add:
use burn::tensor::{activation::relu, Tensor};
impl<B: Backend> Model<B> {
pub fn forward(&self, input: Tensor<B, 2>) -> Tensor<B, 2> {
let x = self.layer1.forward(input);
let x = relu(x);
self.layer2.forward(x)
}
}
This forward pass applies:
- Linear transformation
- ReLU activation
- Final output layer
This is the core of our neural network.
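ReLU itself is just max(0, x) applied element-wise: negative values become zero and positive values pass through unchanged. Here is a plain-Rust sketch of the same operation on a slice, to show what Burn's relu does across a whole tensor:

```rust
// Element-wise ReLU on a plain slice, mirroring what
// burn::tensor::activation::relu does on a tensor.
fn relu_slice(values: &[f32]) -> Vec<f32> {
    values.iter().map(|&v| v.max(0.0)).collect()
}

fn main() {
    let out = relu_slice(&[-2.0, -0.5, 0.0, 0.5, 2.0]);
    println!("{out:?}"); // [0.0, 0.0, 0.0, 0.5, 2.0]
}
```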
Step 6: Prepare Training Logic
Now we create the main training loop.
Update main.rs:
fn main() {
let (inputs, labels) = generate_data(1000);
println!("Generated {} samples", inputs.len());
for i in 0..5 {
println!(
"Input: {:?}, Label: {}",
inputs[i],
labels[i]
);
}
}
Run:
cargo run
Example output (your values will differ because the data is random):
Generated 1000 samples
Input: [0.67279935, 0.44141197], Label: 1
Input: [0.13358927, 0.34360242], Label: 0
Input: [0.6574334, 0.02393198], Label: 0
Input: [0.91616, 0.95998406], Label: 1
Input: [0.71348584, 0.79965496], Label: 1
This confirms our dataset is working correctly.
Step 7: Add Training Strategy
In real-world Burn projects, training is usually handled with Burn’s training abstractions and optimizers like:
- SGD
- Adam
- Cross-entropy loss
For this beginner-friendly first tutorial, we focus on understanding the model structure.
A full production example would include:
- optimizer configuration
- mini-batch training
- loss calculation
- backward propagation
- validation loop
- checkpoint saving
These are excellent topics for the next advanced tutorial.
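To make those ingredients concrete without depending on Burn's optimizer API, the sketch below implements the same loop structure in plain Rust: a single logistic neuron fitted to the x1 + x2 > 1 rule with full-batch gradient descent. The function names, grid dataset, and hyperparameters are illustrative choices, not Burn code:

```rust
// A minimal, framework-free training loop: one logistic neuron
// (weights w1, w2 and bias b) fitted to the x1 + x2 > 1.0 rule
// with full-batch gradient descent. This only illustrates the
// loop structure; Burn would compute gradients and updates for us.
fn sigmoid(z: f32) -> f32 {
    1.0 / (1.0 + (-z).exp())
}

fn train(epochs: usize, lr: f32) -> f32 {
    // Deterministic grid dataset instead of random samples.
    let steps = 21;
    let mut data = Vec::new();
    for i in 0..steps {
        for j in 0..steps {
            let x1 = i as f32 / (steps - 1) as f32;
            let x2 = j as f32 / (steps - 1) as f32;
            let y = if x1 + x2 > 1.0 { 1.0 } else { 0.0 };
            data.push(([x1, x2], y));
        }
    }

    let (mut w1, mut w2, mut b) = (0.0f32, 0.0f32, 0.0f32);
    for _ in 0..epochs {
        // Accumulate gradients of the binary cross-entropy loss.
        let (mut g1, mut g2, mut gb) = (0.0f32, 0.0f32, 0.0f32);
        for &([x1, x2], y) in &data {
            let p = sigmoid(w1 * x1 + w2 * x2 + b);
            let err = p - y; // dL/dz for sigmoid + cross-entropy
            g1 += err * x1;
            g2 += err * x2;
            gb += err;
        }
        let n = data.len() as f32;
        w1 -= lr * g1 / n;
        w2 -= lr * g2 / n;
        b -= lr * gb / n;
    }

    // Return classification accuracy on the training grid.
    let correct = data
        .iter()
        .filter(|&&([x1, x2], y)| {
            let p = sigmoid(w1 * x1 + w2 * x2 + b);
            (p > 0.5) == (y > 0.5)
        })
        .count();
    correct as f32 / data.len() as f32
}

fn main() {
    let acc = train(2000, 1.0);
    println!("training accuracy: {acc:.3}");
}
```

The accuracy should end up well above chance, since the problem is linearly separable; a real Burn pipeline adds mini-batching, validation, and checkpointing on top of this skeleton.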
Step 8: Example Prediction Flow
Let’s simulate prediction logic:
fn predict(x1: f32, x2: f32) {
let label = if x1 + x2 > 1.0 { 1 } else { 0 };
println!(
"Input ({}, {}) => Predicted Class: {}",
x1, x2, label
);
}
Call it from main():
predict(0.8, 0.5);
predict(0.2, 0.1);
Output:
Input (0.8, 0.5) => Predicted Class: 1
Input (0.2, 0.1) => Predicted Class: 0
This helps verify the classification logic before integrating full Burn training pipelines.
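Once real Burn training is in place, the forward pass returns two logits per sample rather than a hard class; the predicted class is the index of the larger logit (argmax). A plain-Rust sketch of that final step:

```rust
// A trained classifier emits one logit per class; the prediction
// is the index of the largest logit (argmax). For two classes this
// amounts to comparing the two values directly.
fn argmax(logits: &[f32]) -> usize {
    logits
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    println!("{}", argmax(&[-0.3, 1.7])); // class 1
    println!("{}", argmax(&[2.1, 0.4])); // class 0
}
```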
Conclusion
In this tutorial, you learned how to build your first neural network project in Rust using Burn.
We covered:
- Burn fundamentals
- Project setup
- Synthetic dataset generation
- Neural network architecture
- Forward pass implementation
- Prediction logic
Although this example is intentionally simple, it provides a strong foundation for more advanced topics like:
- Full model training
- Optimizers and backpropagation
- Dataset loaders
- GPU acceleration
- Computer vision models
- Transformer architectures
Rust’s machine learning ecosystem is growing rapidly, and Burn is leading the way for Rust-native deep learning development.
Your first neural network in Rust starts here.
You can find the full source code on our GitHub.
That's just the basics. If you want to go deeper with Rust, you can take one of the following courses:
- Learn to Code with Rust
- Rust: The Complete Developer's Guide
- Master The Rust Programming Language : Beginner To Advanced
- Embedded Rust Development with STM32: Absolute Beginners
- Build an AutoGPT Code Writing AI Tool With Rust and GPT-4
- Rust Programming Bootcamp - 100 Projects in 100 Days
- Learn Rust by Building Real Applications
- Building web APIs with Rust (advanced)
- Developing P2P Applications with Rust
- Real time web applications in Rust
Thanks!
