Oliver Jumpertz

How to Read Files in Rust


Reading files is one of the most common operations you come across in software development. Loading configuration files, processing data files, and similar tasks are often part of the software you build.

As in any other programming language, there are multiple ways to read files in Rust, and each has its advantages and disadvantages. This is why it is crucial to understand which method to use in which case.

In this article, you will learn about Rust’s most common ways to read files.




Reading an entire File into a String

Reading an entire file into a String has its advantages. You don’t need to worry about anything other than handling the file and processing its content. This is an excellent choice for files that:

  1. Contain String content
  2. Can be processed as a whole at once

On the other hand, this method also has its drawbacks:

  1. Reading files that are too large might have a severe performance impact
  2. The larger the file, the larger the memory consumption of your program
  3. Files that contain binary rather than String content can’t be processed this way

The following example shows how to read a whole file into a String:

use std::fs;

fn read_file_content_as_string(path: &str) -> Result<String, Box<dyn std::error::Error>> {
    let string_content = fs::read_to_string(path)?;
    Ok(string_content)
}
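
As a quick usage sketch (the file path here is just a placeholder), you could call this helper from any function that also returns a Result and propagate errors with the ? operator:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // "config.txt" is a placeholder path for this sketch.
    let content = read_file_content_as_string("config.txt")?;
    println!("Read {} characters", content.len());
    Ok(())
}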

Reading an entire File into a Byte Vector

Reading an entire file into a byte vector is the way to go if you don’t necessarily deal with String content but with some binary format you need to process. This method still works for String content, though: instead of directly receiving a String from the method call, you instantiate it yourself from the bytes, as shown further below.

This method is a great choice for files that:

  1. Contain any form of content
  2. Can be processed as a whole file at once

Still, some of the same drawbacks that affect reading an entire file into a String apply here as well, namely:

  1. Reading files that are too large might have a severe performance impact
  2. The larger the file, the larger the memory consumption of your program

The example below demonstrates how to read a whole file into a byte vector:

use std::fs;

fn read_file_as_bytes(path: &str) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    let byte_content = fs::read(path)?;
    Ok(byte_content)
}

If you still want to convert the byte vector into a String yourself, you can do it as follows:

use std::fs;
use std::str;

fn read_file_as_bytes(path: &str) -> Result<String, Box<dyn std::error::Error>> {
    let byte_content = fs::read(path)?;
    let string_content = str::from_utf8(&byte_content)?;
    Ok(string_content.to_string())
}
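
As a small follow-up (the function name here is only illustrative), String::from_utf8 from the standard library takes ownership of the byte vector and converts it in place, which avoids the extra copy that to_string() makes on top of str::from_utf8:

use std::fs;

fn read_file_as_owned_string(path: &str) -> Result<String, Box<dyn std::error::Error>> {
    let byte_content = fs::read(path)?;
    // String::from_utf8 consumes the Vec<u8>, so no additional copy is made.
    let string_content = String::from_utf8(byte_content)?;
    Ok(string_content)
}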

Reading a File Line by Line

As stated above, reading an entire file at once can lead to problems if you deal with large files. In cases like this, it’s best to process these files with a line-by-line approach. This is, of course, something mostly applicable to files with String content.

Fortunately, Rust has a convenient struct in its standard library, called BufReader, that takes away some of the lower-level details. This method is a solid choice for files that:

  1. Contain String content
  2. Are too large to be processed at once

This approach, however, also has a few drawbacks that need to be mentioned, and these are:

  1. It only works for files with String content
  2. Implementations can quickly become more complex
  3. Depending on the formatting of the file, you might have to buffer lines yourself if the content you want to process is not all placed on the same line (a sketch of this follows the example below)

The following example shows how to read a file line by line:

use std::fs::File;
use std::io::{BufReader, BufRead};

fn read_file_line_by_line(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open(path)?;
    let reader = BufReader::new(file);

    for line in reader.lines() {
        match line {
            // line is a String; process_line and handle_error
            // are placeholders for your own logic.
            Ok(line) => process_line(line),
            Err(err) => handle_error(err),
        }
    }

    Ok(())
}
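
To illustrate the buffering drawback mentioned above, here is a minimal sketch, assuming a hypothetical format in which a blank line separates records that span multiple lines (process_record is a placeholder for your own logic):

use std::fs::File;
use std::io::{BufReader, BufRead};

fn read_file_record_by_record(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open(path)?;
    let reader = BufReader::new(file);
    // Collects lines until a complete record has been read.
    let mut record: Vec<String> = Vec::new();

    for line in reader.lines() {
        let line = line?;
        if line.is_empty() {
            // Assumption for this sketch: a blank line ends a record.
            if !record.is_empty() {
                process_record(&record);
                record.clear();
            }
        } else {
            record.push(line);
        }
    }

    // Handle a trailing record if the file does not end with a blank line.
    if !record.is_empty() {
        process_record(&record);
    }

    Ok(())
}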

Reading a File in Single Byte-Steps

While the previous approach allowed you to read a file line by line, this one allows you to read individual bytes from the file you want to process, again with a BufReader. It is one of the most basic approaches, as you give up nearly all the guard rails the standard library gives you. In return, it gives you a lot of flexibility.

Use this approach if you:

  1. Need complete control over what happens with the content of a file
  2. Are perfectly fine with implementing a lot of the content handling yourself
  3. Have to deal with large files that would make your memory consumption explode if read all at once

Even though some of them have already been hinted at, it is still a good idea to spell out the drawbacks of this method. They include:

  1. You have to work with raw data. In this case, it’s even single raw bytes
  2. You probably still need a buffer to temporarily save single bytes until you can merge several of them into something more meaningful (see the sketch after the example below)

The following example demonstrates how to read a file in single byte-steps:

use std::fs::File;
use std::io::{BufReader, Read};

fn read_file_as_single_bytes(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open(path)?;
    let reader = BufReader::new(file);

    for byte in reader.bytes() {
        match byte {
            // byte is exactly one byte
            Ok(byte) => process_byte(byte),
            Err(err) => handle_error(err),
        }
    }

    Ok(())
}
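
As a minimal sketch of the buffering point from the drawbacks above, assuming a hypothetical format in which a newline byte separates values (process_value is a placeholder for your own logic), you could collect bytes until the delimiter appears and only then interpret them:

use std::fs::File;
use std::io::{BufReader, Read};

fn read_newline_delimited_values(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open(path)?;
    let reader = BufReader::new(file);
    // Collects bytes until a delimiter is found.
    let mut value: Vec<u8> = Vec::new();

    for byte in reader.bytes() {
        let byte = byte?;
        if byte == b'\n' {
            // Assumption for this sketch: a newline byte ends one value.
            process_value(&value);
            value.clear();
        } else {
            value.push(byte);
        }
    }

    // Handle any trailing bytes after the last delimiter.
    if !value.is_empty() {
        process_value(&value);
    }

    Ok(())
}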

Reading a File in Byte Chunks

You can use BufReader to read chunks from a file if you want more flexibility. To be completely honest, BufReader also does optimizations under the hood and doesn’t read each byte individually when you use its .bytes() method. It reads them in chunks and then returns single bytes from the Iterator.

That doesn’t help much when you want to process chunks yourself, though. You can, of course, buffer the bytes manually when using bytes(), or you can simply use the approach shown in this section.

Reading the content of a file in byte chunks has advantages and disadvantages like any other method. Its advantages are:

  1. You gain complete control over how you deal with the contents of a file
  2. It gives you the most flexibility, as you can adjust the chunk size dynamically and react to specific circumstances
  3. You can use it if you have to deal with large files that would make your memory consumption explode if read all at once

Of course, there are once again a few known drawbacks that apply to this method:

  1. You have to work with raw data. All decoding and processing is up to you
  2. It might take a few attempts to optimize the buffer size for specific scenarios
  3. If you make the chunk size too small, you might actually hurt the overall performance of your program (too many system calls)

The example below shows how to read a file in byte chunks:

use std::fs::File;
use std::io::{BufReader, BufRead};

const BUFFER_SIZE: usize = 512;

fn read_file_in_byte_chunks(path: &str) -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open(path)?;
    let mut reader = BufReader::with_capacity(BUFFER_SIZE, file);

    loop {
        let buffer = reader.fill_buf()?;
        let buffer_length = buffer.len();

        // BufRead could not read any bytes.
        // The file must have been read completely.
        if buffer_length == 0 {
            break;
        }

        do_something_with(buffer);

        // All bytes consumed from the buffer
        // should not be read again.
        reader.consume(buffer_length);
    }

    Ok(())
}

Summary

Reading files is a common operation when developing software. Like any other programming language, Rust offers several ways to deal with it. This guide covered five common ways to read files (both as Strings and in raw binary format) in Rust.

All methods presented have advantages and drawbacks, and you need to choose the one appropriate for your specific situation and use case.

Reading an entire file into a String is a great choice if you have small files and deal with String content. On the other hand, the method is not the best if your files become larger or you don’t deal with String content at all.

Reading an entire file into a byte vector is a good choice if you have small files and deal with arbitrary raw content. It is lacking if your files become larger and you have memory constraints, though.

Reading files line by line is an excellent choice if you deal with String content and don’t want your memory consumption to grow too much. The method falls short if you don’t deal with String content, or if the content you want to process at once is spread over multiple lines, which requires you to buffer lines yourself.

Reading files in single byte-steps is one of the most basic methods. It’s a great choice if you want flexibility and need a lot of control. On the other hand, you need to work with raw data and probably have to buffer data yourself if you need to merge multiple bytes into something more meaningful.

Lastly, reading a file in byte chunks is a little more flexible than reading each byte individually. It offers full control over the processing of the data and can also be adjusted dynamically. But once again, you need to work with raw data, and it might take some time to fine-tune the chunking.

