Summary
The Rust ecosystem heavily integrates with std's Read and Write traits. It would be helpful to interoperate with these traits, especially in the context of the aead and hash modules.
Example
Hashing
use orion::hash::{self, Digest};
use std::io::Cursor;
let mut data = Cursor::new(vec![0, 1, 2, 3, 4]);
let digest: Digest = hash::from_reader(&mut data)?;
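And since the argument is just any Read, the same call would work for things like stdin as well; for example (same proposed from_reader as above, error handling assumed):
use orion::hash::{self, Digest};
use std::io;
// Any `Read` source works, e.g. data piped in from another process.
let digest: Digest = hash::from_reader(io::stdin())?;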
AEAD
use orion::aead;
use std::io::Cursor;
let mut data = Cursor::new(vec![0, 1, 2, 3, 4]);
let secret_key = aead::SecretKey::default();
// Encrypt from a `Read` type.
let mut encrypted_data = Vec::new();
aead::seal_copy(&secret_key, &mut data, &mut encrypted_data)?;
// Decrypt into a `Write` type.
let mut decrypted_data = Vec::new();
aead::open_copy(&secret_key, &mut encrypted_data.as_slice(), &mut decrypted_data)?;
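With those copy functions in place, encrypting a large file straight into another file would just be a copy between two handles. A quick sketch using the same proposed seal_copy signature (the file names are only for illustration):
use orion::aead;
use std::fs::File;
let secret_key = aead::SecretKey::default();
let mut plaintext = File::open("backup.tar")?;
let mut ciphertext = File::create("backup.tar.enc")?;
// Nothing here needs to buffer the whole file in memory.
aead::seal_copy(&secret_key, &mut plaintext, &mut ciphertext)?;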
I'm definitely open to suggestions on the API described here. In particular, I'm not sure if we'd want convenience wrappers around the *_copy functions for the AEAD case. The copy functions are fairly general and let people allocate buffers however they want, but there's probably a case to be made for providing a simpler type that just outputs a Vec of encrypted/decrypted data.
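For example, such a wrapper could be a thin layer over the copy function. The seal_to_vec name below is just a placeholder, and the error type is an open question (the copy functions also have IO errors to report somehow):
use orion::aead::{self, SecretKey};
use orion::errors::UnknownCryptoError;
use std::io::Read;
// Hypothetical convenience wrapper: read everything from `reader`, encrypt it,
// and return the ciphertext in a freshly allocated `Vec`.
pub fn seal_to_vec<R: Read>(secret_key: &SecretKey, reader: &mut R) -> Result<Vec<u8>, UnknownCryptoError> {
    let mut ciphertext = Vec::new();
    // `Vec<u8>` implements `Write`, so it can serve as the output sink directly.
    aead::seal_copy(secret_key, reader, &mut ciphertext)?;
    Ok(ciphertext)
}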
In any case, the real advantage of implementing these APIs is that someone could do, for example, the following:
use std::fs::File;
use orion::hash;
let file = File::open("filename.txt").unwrap(); // may want to use a BufReader (?)
let digest = hash::from_reader(file).unwrap();
And it would work as expected. The big deal here is that large files should be read and hashed in pieces, which currently requires reaching into the hazardous module and using the "streaming interface" (the update method). This looks something like the following.
use orion::hazardous::hash::blake2b::Blake2b;
use std::fs::File;
use std::io::Read;
// Figuring out the "right" size isn't something users should have to do.
let mut state = Blake2b::new(None, 32)?;
let mut reader = File::open("filename.txt")?;
// Read the file in fixed-size chunks, feeding each chunk to the hash state.
let mut buf = [0u8; 4096];
loop {
    let bytes_read = reader.read(&mut buf)?;
    if bytes_read == 0 {
        break;
    }
    state.update(&buf[..bytes_read])?;
}
let digest = state.finalize()?;
So it's longer, has a little more room for user error, and in general we probably just want to support the existing IO traits.
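Another way to lean on the existing IO traits (just a thought, not something I've committed to): expose a small writer around the streaming state so std::io::copy can drive the hashing. The DigestWriter name and shape here are made up for illustration:
use orion::hazardous::hash::blake2b::Blake2b;
use std::io::{self, Write};
// Hypothetical adapter so a hash state can be used anywhere a `Write` is expected.
pub struct DigestWriter(pub Blake2b);
impl Write for DigestWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        // Feed the bytes into the hash state and report them all as written.
        self.0
            .update(buf)
            .map_err(|_| io::Error::new(io::ErrorKind::Other, "hashing failed"))?;
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}
With something like that, io::copy from an open File into the writer gets the chunked reading for free from the standard library.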
I already started the implementation, and I'll post a draft PR soon.
Also, I considered making AsyncRead and AsyncWrite part of this issue/PR, but that seems like it should be its own thing. There's talk about integrating those traits into the standard library some time soon, so maybe we should hold off until then.