init the encryption storage algo #4
Conversation
Reviewer's Guide by Sourcery

This pull request initializes the encryption storage algorithm and sets up the necessary infrastructure for secure algorithm storage and execution. It introduces a secure encryption service that handles algorithm storage, validation, encryption, and decryption operations within isolated Docker containers. The service uses AES encryption for secure storage of algorithms and supports various programming languages through Docker images. The update also includes database migration to PostgreSQL, directory creation for key and algorithm storage, and compression/decompression utilities.

Sequence diagram for algorithm storage and execution

sequenceDiagram
participant Client
participant Service as SecureEncryptionService
participant Docker
participant Storage
Client->>Service: Store Algorithm(files, metadata)
Service->>Service: Validate Algorithm
Service->>Docker: Create Container
Service->>Docker: Copy Files
Service->>Docker: Run Tests
Docker-->>Service: Test Results
Service->>Service: Encrypt Algorithm
Service->>Storage: Store Encrypted Files
Service-->>Client: Success Response
Note over Client,Storage: Later: Algorithm Execution
Client->>Service: Execute Algorithm(data)
Service->>Storage: Load Algorithm
Service->>Service: Decrypt Algorithm
Service->>Docker: Run in Container
Docker-->>Service: Results
Service-->>Client: Processed Data

Entity relationship diagram for database schema

erDiagram
USERS {
bigint id PK
timestamp created_at
timestamp updated_at
timestamp deleted_at
text name
text email
smallint age
timestamp birthday
text member_number
timestamp activated_at
}
PRODUCTS {
bigint id PK
timestamp created_at
timestamp updated_at
timestamp deleted_at
text code
bigint price
}

Class diagram for secure encryption service

classDiagram
class SecureEncryptionService {
-docker *client.Client
-config SecurityConfig
-storageDir string
-keyStorage *KeyStorage
+StoreAlgorithm(ctx, userID, files, metadata)
+ValidateAlgorithm(ctx, files, metadata)
+Encrypt(ctx, userID, data)
+Decrypt(ctx, userID, data)
}
class SecurityConfig {
+MaxCPUs int64
+MaxMemoryMB int64
+MaxExecTime int
+WorkingDir string
+StorageKey []byte
}
class KeyStorage {
-storageKey []byte
-storageDir string
}
class AlgorithmMetadata {
+Language string
+BuildCmd string
+RunCmd string
+EntryPoints map[string]string
}
SecureEncryptionService --> SecurityConfig
SecureEncryptionService --> KeyStorage
SecureEncryptionService ..> AlgorithmMetadata
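
The guide above notes that the service uses AES to encrypt stored algorithms. For reference, below is a minimal sketch of what an AES-256-GCM encrypt/decrypt pair can look like in Go; the function names and the way the key is supplied are illustrative assumptions, not the PR's actual implementation.

```go
package crypto

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"errors"
	"io"
)

// encryptAESGCM seals plaintext with AES-GCM; the random nonce is prepended to the ciphertext.
// key must be 16, 24, or 32 bytes (32 for AES-256).
func encryptAESGCM(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decryptAESGCM reverses encryptAESGCM by splitting off the nonce and opening the ciphertext.
func decryptAESGCM(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(data) < gcm.NonceSize() {
		return nil, errors.New("ciphertext too short")
	}
	nonce, ciphertext := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}
```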
Hey @DoniLite - I've reviewed your changes and found some issues that need to be addressed.
Blocking issues:
- Avoid hardcoding database credentials in source code (link)
- Hardcoded database credentials in atlas.hcl (link)
- Potentially sensitive path in services/init_service.go (link)
Overall Comments:
- Database credentials should not be hardcoded in db.go. Use environment variables or a secure configuration management system instead.
- Consider moving Docker configuration (MaxCPUs, MaxMemoryMB, etc.) to environment variables rather than hardcoding in DefaultServiceOptions()
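
On the second point, here is a sketch of how DefaultServiceOptions could read its limits from the environment with fallbacks; the variable names, default values, and the assumption that it returns a SecurityConfig are illustrative, not taken from the PR.

```go
package services

import (
	"os"
	"strconv"
)

// SecurityConfig mirrors the fields shown in the class diagram above.
type SecurityConfig struct {
	MaxCPUs     int64
	MaxMemoryMB int64
	MaxExecTime int
	WorkingDir  string
	StorageKey  []byte
}

// envInt64 reads an integer from the environment, falling back to def when unset or invalid.
func envInt64(key string, def int64) int64 {
	if v := os.Getenv(key); v != "" {
		if n, err := strconv.ParseInt(v, 10, 64); err == nil {
			return n
		}
	}
	return def
}

// DefaultServiceOptions builds resource limits from environment variables instead of hardcoded constants.
func DefaultServiceOptions() SecurityConfig {
	return SecurityConfig{
		MaxCPUs:     envInt64("ENC_MAX_CPUS", 2),
		MaxMemoryMB: envInt64("ENC_MAX_MEMORY_MB", 512),
		MaxExecTime: int(envInt64("ENC_MAX_EXEC_TIME_SEC", 30)),
		WorkingDir:  os.Getenv("ENC_WORKING_DIR"),
	}
}
```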
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🔴 Security: 3 blocking issues, 4 other issues
- 🟡 Testing: 2 issues found
- 🟢 Complexity: all looks good
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
}
// Saving the keys
- privateFile, err := os.Create(user + "-private.pem")
+ privateFile, err := os.Create(path.Join(keyDir, user + "-private.pem"))
🚨 suggestion (security): Consider setting restrictive file permissions for the private key file
Use os.OpenFile with mode 0600 to ensure the private key file has appropriate permissions. The public key file should also have controlled permissions, though they can be more permissive (e.g. 0644).
Suggested implementation:
privateFile, err := os.OpenFile(path.Join(keyDir, user + "-private.pem"), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
If there is also code creating a public key file in this same function (which I can't see in the provided snippet), it should also be modified to use os.OpenFile with 0644 permissions like this:
publicFile, err := os.OpenFile(path.Join(keyDir, user + "-public.pem"), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
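
For context, here is a self-contained sketch of writing both key files with those permissions; keyDir, the 2048-bit key size, and the PKCS#1 PEM encoding are assumptions based on the snippet above, not necessarily what the PR does.

```go
package utils

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"
	"path"
)

// writeKeyPair generates an RSA key pair for user and writes it under keyDir,
// keeping the private key readable only by its owner (0600).
func writeKeyPair(keyDir, user string) (*rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}

	privBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pubBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PUBLIC KEY",
		Bytes: x509.MarshalPKCS1PublicKey(&key.PublicKey),
	})

	// Private key: owner read/write only.
	if err := os.WriteFile(path.Join(keyDir, user+"-private.pem"), privBytes, 0600); err != nil {
		return nil, err
	}
	// Public key: world-readable is acceptable.
	if err := os.WriteFile(path.Join(keyDir, user+"-public.pem"), pubBytes, 0644); err != nil {
		return nil, err
	}
	return key, nil
}
```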
}

func CreateDirectories(dirs []string) { |
🚨 suggestion (security): Add path validation to prevent directory traversal
Consider validating that all directory paths are within an expected root directory to prevent potential directory traversal issues.
Suggested implementation:
// rootDir defines the base directory for all created directories
const rootDir = "."

// isValidPath checks if a directory path stays within the root directory
func isValidPath(dir string) bool {
	// Clean the path to resolve any .. or . components
	cleanPath := path.Clean(dir)
	// Reject absolute paths and paths that escape the root directory
	if path.IsAbs(cleanPath) || strings.HasPrefix(cleanPath, "..") {
		return false
	}
	return true
}

func CreateDirectories(dirs []string) error {
	defaultDirs := []string{
		"rsa",
		"rsa/server",
		"rsa/user",
		"client",
		"crypto",
	}
	if len(dirs) == 0 {
		dirs = defaultDirs
	}
	for _, dir := range dirs {
		if !isValidPath(dir) {
			return fmt.Errorf("invalid directory path: %s", dir)
		}
		fullPath := path.Join(rootDir, dir)
		if _, err := os.Stat(fullPath); os.IsNotExist(err) {
			if err := os.MkdirAll(fullPath, os.ModePerm); err != nil {
				return fmt.Errorf("failed to create directory %s: %v", fullPath, err)
			}
		}
	}
	return nil
}
You'll need to:
- Add "fmt" and "strings" to the imports if they're not already present
- Update any code that calls CreateDirectories to handle the returned error
- Update any code that depends on the old relative paths (../rsa, etc) to use the new paths relative to rootDir
}

func DecompressData(data []byte) ([]byte, error) { |
🚨 suggestion (security): Add size limit for decompressed data to prevent memory exhaustion
Consider implementing a maximum size limit for decompressed data and using io.LimitReader to prevent potential denial of service attacks through zip bombs.
Suggested implementation:
"bytes"
"compress/gzip"
"errors"
"io"
)
// MaxDecompressedSize defines the maximum allowed size for decompressed data (100MB)
const MaxDecompressedSize = 100 * 1024 * 1024
// ErrDecompressedSizeExceeded is returned when decompressed data exceeds the size limit
var ErrDecompressedSizeExceeded = errors.New("decompressed data exceeds maximum allowed size")
limitReader := io.LimitReader(gz, MaxDecompressedSize+1)
decompressed, err := io.ReadAll(limitReader)
if err != nil {
return nil, err
}
if len(decompressed) > MaxDecompressedSize {
return nil, ErrDecompressedSizeExceeded
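
Pulled together, a complete DecompressData with that size guard might look like the sketch below; it assumes the existing implementation is gzip-based, which the snippet above implies.

```go
package utils

import (
	"bytes"
	"compress/gzip"
	"errors"
	"io"
)

// MaxDecompressedSize caps decompressed output at 100 MB to mitigate zip bombs.
const MaxDecompressedSize = 100 * 1024 * 1024

// ErrDecompressedSizeExceeded is returned when output would exceed MaxDecompressedSize.
var ErrDecompressedSizeExceeded = errors.New("decompressed data exceeds maximum allowed size")

// DecompressData gunzips data while enforcing the size limit.
func DecompressData(data []byte) ([]byte, error) {
	gz, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer gz.Close()

	// Read at most one byte past the limit so an oversized payload can be detected.
	decompressed, err := io.ReadAll(io.LimitReader(gz, MaxDecompressedSize+1))
	if err != nil {
		return nil, err
	}
	if len(decompressed) > MaxDecompressedSize {
		return nil, ErrDecompressedSizeExceeded
	}
	return decompressed, nil
}
```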
func init() {

- db, err := gorm.Open(sqlite.Open("./db/test.db"), &gorm.Config{})
+ dsn := "host=localhost user=doni password=DoniLite13 dbname=anexis port=5432 sslmode=disable"
🚨 issue (security): Avoid hardcoding database credentials in source code
Consider using environment variables or a secure configuration management system to handle sensitive credentials.
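
One common pattern, shown here only as a sketch, is to assemble the DSN from environment variables; the variable names and the gorm.io/driver/postgres import reflect assumptions about the setup, not the PR's code.

```go
package db

import (
	"fmt"
	"os"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

// openDB builds the Postgres DSN from environment variables instead of hardcoding credentials.
func openDB() (*gorm.DB, error) {
	dsn := fmt.Sprintf(
		"host=%s user=%s password=%s dbname=%s port=%s sslmode=disable",
		os.Getenv("DB_HOST"),
		os.Getenv("DB_USER"),
		os.Getenv("DB_PASSWORD"),
		os.Getenv("DB_NAME"),
		os.Getenv("DB_PORT"),
	)
	return gorm.Open(postgres.Open(dsn), &gorm.Config{})
}
```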
}

// Encrypt securely encrypts the data
func (s *SecureEncryptionService) Encrypt(ctx context.Context, userID string, data []byte) ([]byte, error) { |
🚨 suggestion (security): Consider implementing rate limiting for encryption operations
To prevent potential abuse, consider adding rate limiting per user or IP address for encryption and decryption operations.
Suggested implementation:
// Encrypt securely encrypts the data
func (s *SecureEncryptionService) Encrypt(ctx context.Context, userID string, data []byte) ([]byte, error) {
	// Check rate limit for the user
	if err := s.rateLimiter.Allow(userID); err != nil {
		return nil, fmt.Errorf("rate limit exceeded for user %s: %w", userID, err)
	}
	return s.runSecureOperation(ctx, userID, "encrypt", data)
}

// RateLimiter defines the interface for rate limiting operations
type RateLimiter interface {
	Allow(key string) error
}

type SecureEncryptionService struct {
	rateLimiter RateLimiter
	// ... other fields
}

type AlgorithmMetadata struct {
To complete this implementation, you'll also need to:
- Create a rate limiter implementation (e.g., using golang.org/x/time/rate or a similar package)
- Update the SecureEncryptionService constructor to accept and initialize the rate limiter
- Apply the same rate limiting to the Decrypt method if it exists
- Consider adding rate limit configuration parameters (like requests per minute) to the service configuration
Example rate limiter implementation could use:
import "golang.org/x/time/rate"
The constructor would need to be updated like:
func NewSecureEncryptionService(config Config) *SecureEncryptionService {
	return &SecureEncryptionService{
		rateLimiter: NewRateLimiter(config.RateLimit),
		// ... other fields
	}
}
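
To make that concrete, here is a sketch of a per-user limiter that satisfies the RateLimiter interface above, built on golang.org/x/time/rate; the requests-per-second and burst values are placeholders.

```go
package services

import (
	"fmt"
	"sync"

	"golang.org/x/time/rate"
)

// PerUserRateLimiter keeps one token bucket per user key.
type PerUserRateLimiter struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
	rps      rate.Limit
	burst    int
}

func NewPerUserRateLimiter(rps rate.Limit, burst int) *PerUserRateLimiter {
	return &PerUserRateLimiter{
		limiters: make(map[string]*rate.Limiter),
		rps:      rps,
		burst:    burst,
	}
}

// Allow returns an error once the caller has exhausted its token bucket.
func (l *PerUserRateLimiter) Allow(key string) error {
	l.mu.Lock()
	lim, ok := l.limiters[key]
	if !ok {
		lim = rate.NewLimiter(l.rps, l.burst)
		l.limiters[key] = lim
	}
	l.mu.Unlock()

	if !lim.Allow() {
		return fmt.Errorf("too many requests for %q", key)
	}
	return nil
}
```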
@@ -0,0 +1,50 @@
package services_test |
issue (testing): Missing tests for SecureEncryptionService
This PR introduces the SecureEncryptionService but doesn't include any corresponding tests. It's crucial to add tests for StoreAlgorithm, ValidateAlgorithm, Encrypt, and Decrypt to ensure the service functions correctly and securely. Consider testing various scenarios, including valid and invalid algorithm inputs, different data sizes, and edge cases like empty data or invalid user IDs.
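
A possible starting point is an encrypt/decrypt round-trip test; the constructor call, the services import path, and the error return are assumptions about the service's API rather than code from the PR.

```go
package services_test

import (
	"bytes"
	"context"
	"testing"

	"cloudbeast.doni/m/services" // assumed module path, mirroring the utils import used elsewhere
)

// TestEncryptDecryptRoundTrip checks that data survives an encrypt/decrypt cycle.
func TestEncryptDecryptRoundTrip(t *testing.T) {
	// NewSecureEncryptionService and DefaultServiceOptions are assumed constructors.
	svc, err := services.NewSecureEncryptionService(services.DefaultServiceOptions())
	if err != nil {
		t.Fatalf("failed to create service: %v", err)
	}

	ctx := context.Background()
	plaintext := []byte("sensitive payload")

	ciphertext, err := svc.Encrypt(ctx, "test-user", plaintext)
	if err != nil {
		t.Fatalf("Encrypt() error = %v", err)
	}
	if bytes.Equal(ciphertext, plaintext) {
		t.Error("ciphertext should differ from plaintext")
	}

	decrypted, err := svc.Decrypt(ctx, "test-user", ciphertext)
	if err != nil {
		t.Fatalf("Decrypt() error = %v", err)
	}
	if !bytes.Equal(decrypted, plaintext) {
		t.Errorf("Decrypt() = %q, want %q", decrypted, plaintext)
	}
}
```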
var privateKey *rsa.PrivateKey
const USER = "doni"

func TestGenerator(t *testing.T) { |
suggestion (testing): Add edge case tests for GenerateRSAKeys
The test for GenerateRSAKeys should cover edge cases, such as invalid user inputs (e.g., empty string, special characters) and potential errors during file creation. This ensures the function is robust and handles unexpected situations gracefully.
Suggested implementation:
import (
	"crypto/rsa"
	"strings"
	"testing"

	"cloudbeast.doni/m/utils"
)

var privateKey *rsa.PrivateKey

const USER = "doni"

func TestGenerator(t *testing.T) {
	tests := []struct {
		name    string
		user    string
		wantErr bool
	}{
		{
			name:    "Valid username",
			user:    USER,
			wantErr: false,
		},
		{
			name:    "Empty username",
			user:    "",
			wantErr: true,
		},
		{
			name:    "Username with special characters",
			user:    "user@#$%^",
			wantErr: true,
		},
		{
			name:    "Very long username",
			user:    strings.Repeat("a", 256),
			wantErr: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			key, err := utils.GenerateRSAKeys(tt.user)
			if (err != nil) != tt.wantErr {
				t.Errorf("GenerateRSAKeys() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
			if !tt.wantErr {
				if key == nil {
					t.Error("GenerateRSAKeys() returned nil key for valid input")
				}
				privateKey = key
			}
		})
	}
}
Note that these changes assume that the GenerateRSAKeys function in the utils package should validate its input and return appropriate errors for invalid cases. You may need to update the GenerateRSAKeys implementation to:
- Add input validation for empty usernames
- Add validation for usernames with special characters
- Add validation for maximum username length
- Handle file creation errors appropriately
The exact error conditions should match your application's requirements for valid usernames.
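
As an illustration, the input validation described above could sit at the top of GenerateRSAKeys; the allowed character set and length cap below are assumed policy, not requirements stated in the PR.

```go
package utils

import (
	"fmt"
	"regexp"
)

// usernamePattern allows letters, digits, dashes and underscores, 1-64 characters (assumed policy).
var usernamePattern = regexp.MustCompile(`^[a-zA-Z0-9_-]{1,64}$`)

// validateUser rejects usernames that would produce unsafe or invalid key file names.
func validateUser(user string) error {
	if user == "" {
		return fmt.Errorf("username must not be empty")
	}
	if !usernamePattern.MatchString(user) {
		return fmt.Errorf("username %q contains invalid characters or is too long", user)
	}
	return nil
}
```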
env "gorm" { | ||
src = data.external_schema.gorm.url | ||
dev = "sqlite://db/test.db" | ||
dev = "postgresql://doni:DoniLite13@localhost:5432/anexis" |
🚨 issue (security): Hardcoded database credentials in atlas.hcl.
The database credentials doni:DoniLite13 are hardcoded in atlas.hcl. This is a security risk and should be stored securely, such as in environment variables or a secrets management service.
Opts = DefaultServiceOptions()

// Customize the working directory
workDir := "../.encryption-service"
🚨 issue (security): Potentially sensitive path in services/init_service.go.
The path ../.encryption-service is hardcoded. Ensure this doesn't expose sensitive information and consider using environment variables or configuration files for flexibility.
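
A small sketch of making that path configurable; the ENCRYPTION_SERVICE_WORKDIR variable name is an assumption.

```go
package services

import "os"

// workingDir returns the configured working directory, falling back to the current default.
func workingDir() string {
	if dir := os.Getenv("ENCRYPTION_SERVICE_WORKDIR"); dir != "" {
		return dir
	}
	return "../.encryption-service" // default used by the PR
}
```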
Summary by Sourcery
Update Go version, introduce Postgres as the database, implement secure algorithm storage, and add a CI/CD pipeline.