Secure IO

Go implementation of the Data At Rest Encryption (DARE) format.

Introduction

It is a common problem to store data securely - especially on untrusted remote storage. One solution to this problem is cryptography. Before data is stored it is encrypted to ensure that the data is confidential. Unfortunately encrypting data is not enough to prevent more sophisticated attacks. Anyone who has access to the stored data can try to manipulate the data - even if the data is encrypted.

To prevent these kinds of attacks the data must be encrypted in a tamper-resistant way. This means an attacker should not be able to:

  • Read the stored data - this is achieved by modern encryption algorithms.
  • Modify the data by changing parts of the encrypted data.
  • Rearrange or reorder parts of the encrypted data.

Authenticated encryption schemes (AE) - like AES-GCM or ChaCha20-Poly1305 - encrypt and authenticate data. Any modification to the encrypted data (ciphertext) is detected while decrypting the data. But even an AE scheme alone is not sufficient to prevent all kinds of data manipulation.

All modern AE schemes produce an authentication tag which is verified after the ciphertext is decrypted. If a large amount of data is decrypted it is not always possible to buffer all decrypted data until the authentication tag is verified. Returning unauthenticated data has the same issues as encrypting data without authentication.

Splitting the data into small chunks fixes the problem of deferred authentication checks but introduces a new one. The chunks can be reordered - e.g. exchanging chunk 1 and 2 - because every chunk is encrypted separately. Therefore the order of the chunks must be encoded somehow into the chunks themselves so that rearranging any number of chunks can be detected.

This project specifies a format for en/decrypting an arbitrary data stream and gives some recommendations about how to use and implement data at rest encryption (DARE). Additionally this project provides a reference implementation in Go.

Applications

DARE is designed with simplicity and efficiency in mind. It combines modern AE schemes with a very simple reorder protection mechanism to build a tamper-resistant encryption scheme. DARE can be used to encrypt files, backups and even large object storage systems.

Its main properties are:

  • Security and high performance by relying on modern AEAD ciphers
  • Small overhead - encryption increases the amount of data by ~0.05%
  • Support for long data streams - up to 256 TB under the same key
  • Random access - arbitrary sequences / ranges can be decrypted independently
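
The ~0.05% overhead figure follows directly from the package layout: each package carries up to 64 KB of plaintext plus a 16 byte header and a 16 byte authentication tag. A minimal, illustrative calculation (not part of the sio API):

    package main

    import "fmt"

    func main() {
    	// DARE packages carry up to 64 KiB of plaintext plus a fixed
    	// 16 byte header and a 16 byte authentication tag.
    	const payload = 64 * 1024
    	const overhead = 16 + 16
    	fmt.Printf("overhead: %.3f%%\n", 100*float64(overhead)/float64(payload)) // ~0.049%
    }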

Install: go get -u github.com/minio/sio

DARE and github.com/minio/sio are finalized and can be used in production.
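
A minimal usage sketch (error handling kept short; in real code the key would be derived from a secret via a KDF rather than generated and thrown away):

    package main

    import (
    	"crypto/rand"
    	"io"
    	"log"
    	"os"

    	"github.com/minio/sio"
    )

    func main() {
    	key := make([]byte, 32) // 256-bit secret key for this data stream
    	if _, err := io.ReadFull(rand.Reader, key); err != nil {
    		log.Fatalf("failed to generate key: %v", err)
    	}
    	// Encrypt everything read from stdin and write the DARE stream to stdout.
    	if _, err := sio.Encrypt(os.Stdout, os.Stdin, sio.Config{Key: key}); err != nil {
    		log.Fatalf("failed to encrypt data: %v", err)
    	}
    }

Decryption is symmetric: sio.Decrypt(dst, src, sio.Config{Key: key}) with the same key.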

We also provide a CLI tool to en/decrypt arbitrary data streams directly from your command line:

Install ncrypt: go get -u github.com/minio/sio/cmd/ncrypt && ncrypt -h

Performance

Cipher            |   8 KB   |   64 KB   |  512 KB   |   1 MB
AES_256_GCM       |  90 MB/s | 1.96 GB/s | 2.64 GB/s | 2.83 GB/s
CHACHA20_POLY1305 |  97 MB/s | 1.23 GB/s | 1.54 GB/s | 1.57 GB/s

On i7-6500U 2 x 2.5 GHz | Linux 4.10.0-32-generic | Go 1.8.3 | AES-NI & AVX2

Issues
  • Why not AES-CTR + HMAC?

    What issue(s) is sio solving by implementing the DARE format using the more complex custom mixture of GCM (+ chunk order integrity) instead of simply using CTR + HMAC across the whole stream like xeodou/aesf or the popular odeke-em/drive client (wiki)?

    question 
    opened by Xeoncross 10
  • Implement DARE 2.0

    This change adds a DARE 2.0 implementation for en/decrypting io.Reader and io.Writer. Further it adds a generic decrypted reader/writer to handle 1.0/2.0 compatibility.

    • The default for encrypting io.Reader/io.Writer is DARE 2.0
    • The default for decrypting io.Reader/io.Writer is DARE 1.0 and 2.0. (backward compatible)

    This change also separates the DARE implementations from the io.Reader/io.Writer implementations to make reasoning about the code easier. As part of this separation this change adds new test vectors for DARE 1.0 and 2.0.

    Fixes #16
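
    A short sketch of how the version defaults described above can be overridden via Config (assuming the usual imports, io and github.com/minio/sio; sio.Version10/sio.Version20 are the exported version constants):

    // decryptV2Only rejects anything that is not a DARE 2.0 stream,
    // instead of the backward-compatible 1.0-and-2.0 default.
    func decryptV2Only(dst io.Writer, src io.Reader, key []byte) error {
    	cfg := sio.Config{Key: key, MinVersion: sio.Version20, MaxVersion: sio.Version20}
    	_, err := sio.Decrypt(dst, src, cfg)
    	return err
    }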

    opened by aead 9
  • Question: Random access supported by specification?

    The Readme states that random access is a property of this encryption format - however, the specification at https://github.com/minio/sio/blob/master/DARE.md is not very clear about this.

    For random access to work, the payload size in each package of an encrypted file must be constant (except for the last one, which may be smaller). However, this does not seem to be required by the specification. While the sio implementation uses a fixed payload size (which can be configured at the beginning) to generate the packages, other implementations may use different package sizes within the same package sequence. That would appear to still be specification compliant, but it defeats the goal of efficient random access.
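
    To illustrate why this matters: locating the package that holds a given plaintext offset only works if every package (except possibly the last) carries exactly payloadSize plaintext bytes. A hedged sketch of that arithmetic (payloadSize is whatever was configured at encryption time; 32 is the per-package header plus tag overhead):

    // offsetOfPackage returns where, inside the encrypted stream, the package
    // containing plaintext offset off begins - valid only for a fixed payload size.
    func offsetOfPackage(off, payloadSize int64) int64 {
    	const packageOverhead = 32 // 16 byte header + 16 byte authentication tag
    	k := off / payloadSize     // index of the package containing off
    	return k * (payloadSize + packageOverhead)
    }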

    question 
    opened by donatello 8
  • Questions/suggestions

    Hi,

    This looks very interesting.

    I am thinking this could be a useful format for syncthing to implement encryption, yet there seem to be a few caveats I'd have to work around, and as it stands it seems this would not be possible, hence I am raising this for clarification.

    Syncthing operates on 128k blocks, hence data plus whatever overhead needs to be exactly 128k. It seems the block size is fixed, which would make that impossible.

    Furthermore, the front page talks about Random access - arbitrary sequences / ranges can be decrypted independently, yet it's not obvious to me how to achieve that given you do not support io.WriteAt or io.ReadAt. I assume random access encryption should be supported as well?

    Syncthing downloads content block by block, yet the blocks might be downloaded in a random order - we might write to the end of the file, then to the front, and so on - so it's not obvious that this could be achieved with this package.

    Thanks.

    question 
    opened by AudriusButkevicius 7
  • Why not adding nonce/salt to file header?

    Maybe I misunderstood something from the announcement but, if I encrypt 1000 files, I need to create 1000 nonces and then I somehow need to keep track of each nonce so I can decrypt each file?

    Keeping track of each nonce seems like the kind of requirement where a user may either end up reusing the same nonce or misplace it, resulting in not being able to decrypt files any more.

    Is there a reason why your format doesn't include the nonce in the file itself, similar to how, with bcrypt, the salt is prefixed to the ciphertext?
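
    For reference, a common pattern (and roughly what cmd/ncrypt does) is to prefix a random salt to the output and derive the DARE key from a master key plus that salt. A hedged sketch - encryptWithPrefixedSalt is a hypothetical helper name, and the assumed imports are crypto/rand, crypto/sha256, io, golang.org/x/crypto/hkdf and github.com/minio/sio:

    // encryptWithPrefixedSalt stores a random salt in the clear before the
    // ciphertext, so only the master key has to be remembered for decryption.
    func encryptWithPrefixedSalt(w io.Writer, r io.Reader, masterKey []byte) error {
    	salt := make([]byte, 32)
    	if _, err := io.ReadFull(rand.Reader, salt); err != nil {
    		return err
    	}
    	if _, err := w.Write(salt); err != nil { // salt is written unencrypted, up front
    		return err
    	}
    	var key [32]byte
    	kdf := hkdf.New(sha256.New, masterKey, salt, nil)
    	if _, err := io.ReadFull(kdf, key[:]); err != nil {
    		return err
    	}
    	_, err := sio.Encrypt(w, r, sio.Config{Key: key[:]})
    	return err
    }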

    question 
    opened by fmpwizard 7
  • ncrypt: read password from file and fix return code when failed

    This PR contains two commits:

    1. The first one fixes the return code when reading the password fails
    2. ~~The second one adds the -password-file flag to support reading the password from a file.~~
    3. The second one adds the -p flag to support specifying the password.
    blocked 
    opened by pjw91 5
  • v1 interfaces should contain warning about flawed crypto

    DARE 1.0 contained a flaw that reused a nonce multiple times for the same key, which invalidates the security guarantees of the underlying primitives.

    This is changed in DARE 2.0 so that the nonce contains a 32-bit stream-local sequence counter, thus ensuring that multiple packets in the same stream will not contain the same nonce as long as the 32-bit counter does not overflow. minio/sio implements both DARE 1.0 and DARE 2.0 (supposedly for compatibility).

    Looking in the readme, the documentation and the code, I found no statement that DARE 1.0 is insecure and that it should only be used to decrypt legacy content if the lack of security is acceptable. It is of utmost importance to make such a flaw clear and to state it in all three of those locations at the very least - preferably at the very top of the README until DARE 2.0 has gotten some age.

    As a side-note, the DARE.md file still only mentions DARE 1.0. An up to date spec would be nice.

    enhancement documentation 
    opened by kennylevinsen 5
  • Compatibility with Raspberry Pi

    Hello. I'm building a tool in Go that uses this to create encrypted backups of data. In the source code there is an aesSuported flag, which checks CPU features. However, if GOARCH is ARM, what would happen then?

    question 
    opened by eacp 4
  • Prevent cutting-off attacks in version 2.0

    Cutting-off attack

    Currently version 1.0 is vulnerable to a quite obvious attack:

    Let's assume an encrypted sequence S consists of n packages and the encryption key K is unique. An adversary can modify the sequence by removing the last k packages of S. This cutting-off-attack is not detected during decryption if n (or the length of S) is not known because no package (esp. the last package) depends on n / the length of S.

    Why is this attack not prevented in 1.0:

    It was assumed that multipart-PUT operations (see S3 multipart) require support for extending a sequence of packages. The basic idea was:

    S_part_1 = E(K, part_1) with Config.SequenceNumber = 0
    S_part_2 = E(K, part_2) with Config.SequenceNumber = |S_part_1| / (32+L)
    S_part_3 = E(K, part_3) with Config.SequenceNumber = |S_part_1| / (32+L) + |S_part_2| / (32+L)

    However it is not required to build one valid sequence from k smaller sequences. Instead each sequence can be encrypted separately:

    S_part_1 = E(K_part_1, part_1)
    S_part_2 = E(K_part_2, part_2)
    S_part_3 = E(K_part_3, part_3)

    Proposal Version 1.1

    Version 1.1 can prevent the attack by introducing a final flag:

    header = version | cipher | payload_length | final_flag | random
             1 byte  | 1 byte |     2 byte     |   1 byte   | 11 byte
    

    Encryption is defined as C, T = E(K, P, N, AAD) where:

    • AAD = version | cipher | payload_length | Ls (Ls is the current length of all payloads - Ls,0 = 0, Ls,1 = payload_length, Ls,2 = 2*payload_length)
    • N = (final_flag | random) ⊕ i (i is the index of the package - 0 for the first package, 1 for the second package, a.s.o)

    Adding the current length of the sequence to AAD makes exchanging of packages slightly harder (if an encryption key is reused). Xoring the 11 byte random value with the 'sequence number' provides a slightly lower collision probability compared to the sequence number | random construction.

    Version 1.1:

    H = V | C | L | F | R | P/C | T
    AAD = V | C | L | Ls
    N = (F | R) ⊕ i
    C, T = E(K, P, N, AAD)
    P, T = D(K, C, N, AAD)

    SIV

    I also thought about using a deterministic synthetic-IV scheme to provide integrity and confidentiality even if an encryption key is reused. For example using a PRF to derive a unique encryption key per package from the encryption key (which would then be a master key) and the payload plaintext. This was discarded because of:

    • Even if a fast PRF is used (SipHash / HighwayHash) the en/decryption speed decreases (2 passes over the data)
    • For every package a new AEAD cipher instance must be created (performance)
    • The header would increase (from 16 to ~32 bytes) - this would require a major version update.

    However, a SIV scheme should be taken into account for a major version update.

    proposal specification security 
    opened by aead 4
  • Export errors

    It would be useful to have the opportunity to compare errors in some cases. For example,

    _, err = sio.Decrypt(w, f, sio.Config{Key: key})
    if err != nil {
    	if err == sio.ErrUnsupportedVersion {
    		io.Copy(w, f) // file wasn't encrypted
    	} else {
    		log.Println(err)
    	}
    }
    
    question working-as-expected 
    opened by ShoshinNikita 3
  • Use with golang webdav implementation

    Hello,

    I tried to use sio in a middleware over the golang webdav server (using EncryptReader for incoming PUT requests and DecryptWriter for recovering files with GET requests). But my files (like images) get cropped (though they can still be opened). It would seem that golang webdav writes the data in 32 kB chunks - could that be related?

    My code:

    // Encrypt enables body encryption (to be used with webdav PUT requests)
    func Encrypt(next http.Handler, key []byte) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		config := sio.Config{Key: key}
    		encBody, err := sio.EncryptReader(r.Body, config)
    		if err != nil {
    			http.Error(w, err.Error(), 500)
    			return
    		}
    		r.Body = ioutil.NopCloser(encBody)
    		next.ServeHTTP(w, r)
    	})
    }
    
    // Decrypt enables body decryption (to be used with webdav GET requests)
    func Decrypt(next http.Handler, key []byte) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		config := sio.Config{Key: key}
    		decryptWriter, err := newDecryptWriter(w, config)
    		if err != nil {
    			http.Error(w, err.Error(), 500)
    			return
    		}
    		next.ServeHTTP(decryptWriter, r)
    	})
    }
    
    type decryptWriter struct {
    	writer    http.ResponseWriter
    	encWriter io.Writer
    }
    
    func newDecryptWriter(w http.ResponseWriter, config sio.Config) (*decryptWriter, error) {
    	encWriter, err := sio.DecryptWriter(w, config)
    	if err != nil {
    		return nil, err
    	}
    	return &decryptWriter{
    		writer:    w,
    		encWriter: encWriter,
    	}, err
    }
    
    func (r *decryptWriter) Header() http.Header {
    	return r.writer.Header()
    }
    
    func (r *decryptWriter) Write(p []byte) (int, error) {
    	return r.encWriter.Write(p)
    }
    
    func (r *decryptWriter) WriteHeader(status int) {
    	r.writer.WriteHeader(status)
    }
    

    Thanks, Best regards.

    opened by nicolaspernoud 2
  • salt offset with ReaderAt

    It's common practice to prefix the encrypted output with a salt/nonce. Decrypting this is normally easy: first read 32 bytes, then use the current reader position for the decrypter (like in cmd/ncrypt).

    But if you want to use the DecryptReaderAt() API, it can be very tricky, because reading the salt doesn't affect the current position of the ReaderAt. The implementation of DecryptReaderAt always tries to read the version byte at position 0, but with the salt it is at position 32.

    My solution for this is to use my own section reader which always adds the size of the salt to the offset. I want to share my solution and discuss whether it's possible to make this easier by extending the existing API.

    type sectionReaderAt struct {
    	r             io.ReaderAt
    	addThisOffset int64 // should be set to 32
    }
    
    func (r *sectionReaderAt) ReadAt(p []byte, off int64) (n int, err error) {
    	return r.r.ReadAt(p, off+r.addThisOffset)
    }
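
    A similar effect can be had with the standard library alone, since io.SectionReader implements io.ReaderAt with a fixed offset applied. A sketch, where encSize is assumed to be the total size of the encrypted file including the 32 byte salt:

    // skipSalt returns an io.ReaderAt whose position 0 maps to byte 32 of f.
    func skipSalt(f io.ReaderAt, encSize int64) io.ReaderAt {
    	return io.NewSectionReader(f, 32, encSize-32)
    }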
    
    opened by mschneider82 0
  • Unsupported Version error

    I am trying to encrypt a file stream and add it to IPFS. When I try to decrypt the stream, reading from IPFS, I get this error: Malformed encrypted data: sio: unsupported version. I have tried setting the MinVersion and MaxVersion to Version20 and Version10 as well while encrypting and decrypting, but that does not seem to fix the issue.

    Encryption

    func EncryptStream(w io.Writer, r io.Reader, masterkey, nonce []byte) error {
    	var key [32]byte
    	kdf := hkdf.New(sha256.New, masterkey, nonce, nil)
    	if _, err := io.ReadFull(kdf, key[:]); err != nil {
    		fmt.Printf("Failed to derive encryption key: %v", err)
    		return err
    	}

    	cfg := sio.Config{Key: key[:], MinVersion: sio.Version10, MaxVersion: sio.Version10}
    	if _, err := sio.Encrypt(w, r, cfg); err != nil {
    		fmt.Printf("Failed to encrypt data: %v", err)
    		return err
    	}
    	return nil
    }

    Decryption

    func DecryptStream(w io.Writer, r io.Reader, masterkey, nonce []byte) error {
    	var key [32]byte
    	kdf := hkdf.New(sha256.New, masterkey, nonce, nil)
    	if _, err := io.ReadFull(kdf, key[:]); err != nil {
    		fmt.Printf("Failed to derive encryption key: %v", err)
    		return err
    	}
    
    	cfg := sio.Config{Key: key[:], MinVersion: sio.Version10, MaxVersion: sio.Version10}
    	log.Println("SIO Min Version: ", cfg.MinVersion, " Max Version: ", cfg.MaxVersion)
    	if _, err := sio.Decrypt(w, r, cfg); err != nil {
    		if _, ok := err.(sio.Error); ok {
    			fmt.Printf("Malformed encrypted data: %v", err)
    			return err
    		}
    		fmt.Printf("Failed to decrypt data: %v", err)
    		return err
    	}
    
    	return nil
    }
    
    opened by cvhariharan 2
  • Prevent analysis of encrypted streams

    Currently the DARE format adds a 16 byte header and a 16 byte tailer to every encrypted payload - which builds an encrypted package: header || payload || tailer. The header provides the metadata for the encrypted package (version number, payload size a.s.o.) and is (partially) deterministic. Consequently it's possible to:

    1. Determine whether a binary blob is encrypted with the DARE format.
    2. Do some "traffic analysis" of this encrypted stream. For example count the number of packages, determine the size and the cipher suite of every package a.s.o.

    Sealed header and tailer

    One possible countermeasure against this analysis would be a sealed header and tailer, i.e. each header and tailer is encrypted. One way would be:

    K := encryption key
    B := block cipher (block size = 128 bit)
    
    sealed_header := B (K, header)
    sealed_tailer := B (K, tailer)  
    

    Let n be the number of packages within one sequence of packages. For all indices i,j of one valid encrypted sequence of packages the following properties hold:

    • If i != j then header[i] != header[j]. The reason is that the sequence number (which is part of the header) is a monotonically increasing number. Further the probability that two headers H and H' are the same across at least two different sequences is ~ 1 / 2**(64 / 2) (birthday paradox). However if an encryption key is reused the tamper-proof property is gone anyway.

    • If i != j then P(tailer[i] = tailer[j]) <= n / 2**128. The reason is that if we find a pair T and T' (T, T' := tailer[i], tailer[j]) with i != j but T = T', we've found two messages m and m' - representing the different packages (i and j) - which fulfill: c, t := AEAD(K, nonce, m, data) and c', t := AEAD(K, nonce', m', data). This would violate the security properties provided by the AEAD scheme. Further if we assume that the encryption key is unique per encrypted stream the probability that two tailers within different streams are equal is <= ~ 1 / 2**64 (birthday paradox).

    In addition to the 'standard' security requirements of a block cipher, the used block cipher must provide the following property:

    • The encrypted block must be indistinguishable from true randomness if the tuple (key, input block) is unique.

    Luckily such block ciphers (block size = 128) exist: AES, Serpent, Twofish.
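
    As an illustration of the sealing step: a 16 byte header is exactly one block of such a cipher, so sealing is a single raw block encryption. A minimal sketch assuming AES is chosen as B (sealHeader is a hypothetical helper, not part of sio; it assumes the crypto/aes import):

    // sealHeader encrypts one 16 byte header with a single AES block operation.
    func sealHeader(sealingKey, header []byte) ([]byte, error) {
    	block, err := aes.NewCipher(sealingKey) // 16, 24 or 32 byte key
    	if err != nil {
    		return nil, err
    	}
    	sealed := make([]byte, aes.BlockSize) // 16 bytes = one block
    	block.Encrypt(sealed, header)
    	return sealed, nil
    }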

    Implications

    • If an adversary can break the block cipher in a way that the encryption key is revealed and the same key is used to seal header/tailer and encrypt the packages the security of the whole DARE encryption scheme is gone!
    • The block cipher (and the sealing key) must be known in advance. It's not possible to negotiate the block cipher (e.g. by a cipher id) because this would reveal (some) metadata again.
    feature proposal specification 
    opened by aead 0
  • Provide helper function to verify encrypted streams

    Currently the verification of an encrypted stream can be done by:

    if _, err := sio.Decrypt(ioutil.Discard, reader, sio.Config{Key: key}); err != nil {
     ....
    }
    

    That may look a bit odd to users. A simpler approach would be an API function: func Verify(src io.Reader, config Config) error which implements the solution shown above. Verification of encrypted streams is useful if someone wants to verify that the received data stream(s) are valid without decrypting and using the data (yet).
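
    A sketch of such a helper, implemented exactly as the work-around above (assuming it lives outside the sio package, with io/ioutil and github.com/minio/sio imported):

    // Verify checks the integrity of an encrypted stream without keeping the plaintext.
    func Verify(src io.Reader, config sio.Config) error {
    	_, err := sio.Decrypt(ioutil.Discard, src, config)
    	return err
    }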

    proposal 
    opened by aead 0
  • Implement io.Seeker for decryption using random access

    Decrypting (and maybe also encrypting) data streams using random access is an important use case. Therefore the decrypting io.Reader and io.Writer (and maybe also the encrypting counterparts) should implement io.Seeker.

    Seeking can be done from the beginning of the stream (io.SeekStart) or from the current position (io.SeekCurrent) but not from the end (io.SeekEnd) because the size of the stream is usually not known.

    Seeking within an encrypted stream from io.SeekStart requires:

    • Moving the position of the underlying io.Reader to the k-th package, where k = offset / payload_size and the package starts at byte offset k * (payload_size + 32) within the ciphertext (see the sketch after these lists).
    • Adjusting the expected sequence_number to k. The k-th package is decrypted, so the expected sequence_number must be equal to k.
    • If the seeking offset is not a multiple of the payload_size, the next mod := offset % payload_size bytes must be extracted and discarded (but verified).

    Seeking within an encrypted stream from io.SeekCurrent requires:

    • Moving the position of the underlying io.Reader to the k-th package, where k = sequence_number + offset / payload_size and the package starts at byte offset k * (payload_size + 32) within the ciphertext.
    • Adjusting the expected sequence_number to k. The k-th package is decrypted, so the expected sequence_number must be equal to k.
    • If the seeking offset is not a multiple of the payload_size, the next mod := current_offset + (offset % payload_size) bytes must be extracted and discarded (but verified). The current_offset is the offset of the reader within the current package. An important corner case is that mod can be negative; in this case k = k-1 and mod = payload_size - |mod|.
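
    A sketch of the io.SeekStart arithmetic from the first list above (payloadSize must equal the payload size used during encryption; 32 is the per-package header plus tag overhead):

    // seekStartTarget maps a plaintext offset to the target package index, the byte
    // offset of that package within the ciphertext, and the number of decrypted
    // bytes that must be discarded (after verification) inside that package.
    func seekStartTarget(offset, payloadSize int64) (pkg, cipherOff, discard int64) {
    	const overhead = 32                        // 16 byte header + 16 byte tag
    	pkg = offset / payloadSize                 // k: index of the target package
    	cipherOff = pkg * (payloadSize + overhead) // where package k starts
    	discard = offset % payloadSize             // bytes to decrypt and drop
    	return pkg, cipherOff, discard
    }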

    Seeking from the user perspective

    The most important rule for users of the sio library is: Seeking within an encrypted stream is only valid if all packages (except for the last one) have the same package size.

    This means that the length of all packages - except the length of the last package - must be the same. Further, if a custom package size is used (the Config.PayloadSize field is set manually for encryption), it is required that Config.PayloadSize is set to the same value for decryption. Notice that this is not required to decrypt a stream but to compute the 'correct' package and sequence number. In all cases where the packages (except for the last one) do not have the same package size or the wrong payload size is used, the seeking may succeed but the decryption will always fail.

    It is not possible to seek within an encrypted stream (using io.Seeker) if the size of the packages within one stream is not the same - e.g. if the payload size of package 1 is 32 KB, package 2 is 33 KB, package 3 is 55 KB, and so on. In this case the user must set the correct configuration values and implement the seeking functionality on top of the sio library manually.

    feature proposal 
    opened by aead 13
Owner

Object Storage for the Era of the Hybrid Cloud
MinIO's high performance, Kubernetes-native object storage suite is built for the demands of the hybrid cloud.