WASM Security and Sandboxing: Production-Ready Isolation Strategies
WebAssembly's security model represents a paradigm shift from traditional security approaches. Built from the ground up with security in mind, WASM provides capability-based access control, memory safety, and strong isolation. This comprehensive guide explores how to leverage WASM's security features for production deployments while mitigating potential vulnerabilities.
Table of Contents
- WASM Security Fundamentals
- Capability-Based Security Model
- Memory Safety and Isolation
- WASI Security Considerations
- Production Hardening Strategies
- Vulnerability Assessment and Management
- Secure Development Practices
- Multi-Tenant Isolation
- Security Monitoring and Auditing
- Compliance and Regulatory Considerations
WASM Security Fundamentals
Security-by-Design Architecture
WASM Security Layers:
┌─────────────────────────────────────────────────┐
│ Application Logic │
├─────────────────────────────────────────────────┤
│ Capability Interface │
│ (Explicit Permissions) │
├─────────────────────────────────────────────────┤
│ WASM Module │
│ (Memory-Safe Code) │
├─────────────────────────────────────────────────┤
│ WASM Runtime │
│ (Sandboxed Execution) │
├─────────────────────────────────────────────────┤
│ Host Environment │
│ (Controlled Access) │
└─────────────────────────────────────────────────┘
Core Security Principles
🔒 Principle of Least Privilege
- WASM modules have no access by default
- Explicit capability grants required
- Fine-grained permission control
- Runtime-enforced access controls
🛡️ Defense in Depth
- Multiple security layers
- Memory safety guarantees
- Runtime sandboxing
- Host environment controls
🎯 Zero Trust Architecture
- No implicit trust relationships
- Continuous verification
- Capability-based access
- Cryptographic validation
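The principles above translate directly into host code. The following sketch (names are illustrative and not tied to any particular runtime API) shows a deny-by-default grant table: a module starts with an empty capability set and every host call is checked against an explicit grant.
use std::collections::HashSet;
/// Capabilities a host may grant to a module (illustrative).
#[derive(Hash, PartialEq, Eq)]
enum Capability {
    ReadPath(String),
    Connect { host: String, port: u16 },
}
/// Deny-by-default grant table: empty until the host adds explicit grants.
struct ModuleGrants {
    granted: HashSet<Capability>,
}
impl ModuleGrants {
    fn new() -> Self {
        Self { granted: HashSet::new() } // least privilege: no access by default
    }
    fn grant(&mut self, cap: Capability) {
        self.granted.insert(cap);
    }
    fn check(&self, cap: &Capability) -> Result<(), &'static str> {
        // Zero trust: every call is verified; there is no implicit allow
        if self.granted.contains(cap) { Ok(()) } else { Err("capability not granted") }
    }
}
A real host would call check on every import invocation; the CapabilityManager shown later in this guide extends the same idea with expiry and usage limits.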
Security Comparison
Security Aspect | Traditional Containers | WASM Modules |
---|---|---|
Memory Safety | Process isolation | Language-level safety |
Attack Surface | Large (OS, libraries) | Minimal (runtime only) |
Privilege Model | User/group based | Capability based |
Resource Access | Filesystem namespaces | Explicit capabilities |
Network Access | iptables/namespaces | Import-based permissions |
Code Injection | Possible via exploits | Prevented by design |
Side Channels | Many vectors | Limited exposure |
Capability-Based Security Model
Understanding Capabilities
// Capability-based security in WASM
use wasi::{
fd_prestat_get, fd_prestat_dir_name,
path_open, fd_read, fd_write,
};
// Capabilities are explicitly imported
#[derive(Debug)]
struct FileCapability {
fd: wasi::Fd,
path: String,
permissions: FilePermissions,
}
#[derive(Debug)]
struct FilePermissions {
read: bool,
write: bool,
create: bool,
delete: bool,
}
impl FileCapability {
fn new(path: &str, permissions: FilePermissions) -> Result<Self, SecurityError> {
// Capability must be granted by the host via a preopened directory;
// WASI has no ambient AT_FDCWD, so fd 3 (conventionally the first preopen) is used here.
let dir_fd: wasi::Fd = 3;
let fd = match unsafe {
path_open(
dir_fd,
0, // lookupflags
path,
wasi::OFLAGS_CREAT,
Self::permissions_to_rights(&permissions),
0, // fs_rights_inheriting
0, // fdflags
)
} {
Ok(fd) => fd,
Err(_) => return Err(SecurityError::CapabilityDenied),
};
Ok(FileCapability {
fd,
path: path.to_string(),
permissions,
})
}
fn read(&self, buffer: &mut [u8]) -> Result<usize, SecurityError> {
if !self.permissions.read {
return Err(SecurityError::InsufficientPrivileges);
}
let iovs = [wasi::Iovec {
buf: buffer.as_mut_ptr(),
buf_len: buffer.len(),
}];
// Raw WASI calls are unsafe; the iovec bounds were set up above
match unsafe { fd_read(self.fd, &iovs) } {
Ok(bytes_read) => Ok(bytes_read),
Err(_) => Err(SecurityError::IOError),
}
}
fn write(&self, data: &[u8]) -> Result<usize, SecurityError> {
if !self.permissions.write {
return Err(SecurityError::InsufficientPrivileges);
}
let iovs = [wasi::Ciovec {
buf: data.as_ptr(),
buf_len: data.len(),
}];
match unsafe { fd_write(self.fd, &iovs) } {
Ok(bytes_written) => Ok(bytes_written),
Err(_) => Err(SecurityError::IOError),
}
}
fn permissions_to_rights(permissions: &FilePermissions) -> wasi::Rights {
let mut rights = 0;
if permissions.read {
rights |= wasi::RIGHTS_FD_READ | wasi::RIGHTS_FD_READDIR;
}
if permissions.write {
rights |= wasi::RIGHTS_FD_WRITE;
}
if permissions.create {
rights |= wasi::RIGHTS_PATH_CREATE_FILE | wasi::RIGHTS_PATH_CREATE_DIRECTORY;
}
if permissions.delete {
rights |= wasi::RIGHTS_PATH_UNLINK_FILE | wasi::RIGHTS_PATH_REMOVE_DIRECTORY;
}
rights
}
}
#[derive(Debug)]
enum SecurityError {
CapabilityDenied,
InsufficientPrivileges,
IOError,
}
// Network capability example
struct NetworkCapability {
allowed_hosts: Vec<String>,
allowed_ports: Vec<u16>,
max_connections: usize,
}
impl NetworkCapability {
fn connect(&self, host: &str, port: u16) -> Result<Connection, SecurityError> {
// Validate against capability constraints
if !self.allowed_hosts.contains(&host.to_string()) {
return Err(SecurityError::CapabilityDenied);
}
if !self.allowed_ports.contains(&port) {
return Err(SecurityError::CapabilityDenied);
}
// Implementation would create actual connection
Ok(Connection::new(host, port))
}
}
struct Connection {
host: String,
port: u16,
}
impl Connection {
fn new(host: &str, port: u16) -> Self {
Self {
host: host.to_string(),
port,
}
}
}
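Guest code would use these wrappers roughly as follows. The helper and path are hypothetical, and the example assumes it lives in the same module as the types above; the point is that a read-only grant makes any later write fail inside the module before it ever reaches the host.
// Hypothetical guest-side usage: request only the access that is needed.
fn load_config() -> Result<Vec<u8>, SecurityError> {
    let read_only = FilePermissions { read: true, write: false, create: false, delete: false };
    let config_file = FileCapability::new("config/app.toml", read_only)?;
    let mut buffer = vec![0u8; 4096];
    let n = config_file.read(&mut buffer)?;
    buffer.truncate(n);
    // config_file.write(b"...") would return SecurityError::InsufficientPrivileges here
    Ok(buffer)
}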
Host-Side Capability Management
// Host implementation for capability management
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
pub struct CapabilityManager {
file_capabilities: Arc<Mutex<HashMap<String, FileCapabilityGrant>>>,
network_capabilities: Arc<Mutex<HashMap<String, NetworkCapabilityGrant>>>,
resource_limits: ResourceLimits,
}
#[derive(Clone)]
pub struct FileCapabilityGrant {
path: String,
permissions: FilePermissions,
expiry: Option<std::time::SystemTime>,
usage_count: usize,
max_usage: Option<usize>,
}
#[derive(Clone)]
pub struct NetworkCapabilityGrant {
allowed_hosts: Vec<String>,
allowed_ports: Vec<u16>,
max_bandwidth: Option<u64>,
connection_limit: usize,
}
pub struct ResourceLimits {
max_memory: usize,
max_cpu_time: std::time::Duration,
max_file_descriptors: usize,
max_network_connections: usize,
}
impl CapabilityManager {
pub fn new() -> Self {
Self {
file_capabilities: Arc::new(Mutex::new(HashMap::new())),
network_capabilities: Arc::new(Mutex::new(HashMap::new())),
resource_limits: ResourceLimits {
max_memory: 100 * 1024 * 1024, // 100MB
max_cpu_time: std::time::Duration::from_secs(30),
max_file_descriptors: 64,
max_network_connections: 10,
},
}
}
pub fn grant_file_capability(
&self,
module_id: &str,
path: &str,
permissions: FilePermissions,
) -> Result<(), SecurityError> {
// Validate path is within allowed bounds
if !self.validate_file_path(path) {
return Err(SecurityError::CapabilityDenied);
}
let grant = FileCapabilityGrant {
path: path.to_string(),
permissions,
expiry: Some(std::time::SystemTime::now() + std::time::Duration::from_secs(3600)),
usage_count: 0,
max_usage: Some(1000),
};
let mut capabilities = self.file_capabilities.lock().unwrap();
capabilities.insert(format!("{}:{}", module_id, path), grant);
Ok(())
}
pub fn check_file_access(
&self,
module_id: &str,
path: &str,
operation: FileOperation,
) -> Result<(), SecurityError> {
let key = format!("{}:{}", module_id, path);
let mut capabilities = self.file_capabilities.lock().unwrap();
if let Some(grant) = capabilities.get_mut(&key) {
// Check expiry
if let Some(expiry) = grant.expiry {
if std::time::SystemTime::now() > expiry {
capabilities.remove(&key);
return Err(SecurityError::CapabilityDenied);
}
}
// Check usage limits
if let Some(max_usage) = grant.max_usage {
if grant.usage_count >= max_usage {
return Err(SecurityError::CapabilityDenied);
}
}
// Check permissions
let allowed = match operation {
FileOperation::Read => grant.permissions.read,
FileOperation::Write => grant.permissions.write,
FileOperation::Create => grant.permissions.create,
FileOperation::Delete => grant.permissions.delete,
};
if !allowed {
return Err(SecurityError::InsufficientPrivileges);
}
// Update usage count
grant.usage_count += 1;
Ok(())
} else {
Err(SecurityError::CapabilityDenied)
}
}
fn validate_file_path(&self, path: &str) -> bool {
// Implement path validation logic
// - No parent directory access (..)
// - Within allowed directories
// - No special files (/dev, /proc, etc.)
if path.contains("..") {
return false;
}
let allowed_prefixes = ["/tmp/wasm/", "/data/", "/config/"];
allowed_prefixes.iter().any(|prefix| path.starts_with(prefix))
}
}
#[derive(Debug)]
pub enum FileOperation {
Read,
Write,
Create,
Delete,
}
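Wiring the manager into a host call path might look like the sketch below; the module identifier and file path are illustrative, and the code assumes it sits alongside the types above.
// Sketch: host-side grant at deploy time, then per-request checks.
fn grant_and_check() -> Result<(), SecurityError> {
    let manager = CapabilityManager::new();
    // The grant is established once, e.g. when the module is deployed
    manager.grant_file_capability(
        "tenant-a/report-module",
        "/data/reports.csv",
        FilePermissions { read: true, write: false, create: false, delete: false },
    )?;
    // Every file request coming out of the module is then checked (and counted)
    manager.check_file_access("tenant-a/report-module", "/data/reports.csv", FileOperation::Read)?;
    // A write attempt against the same grant is rejected with InsufficientPrivileges
    assert!(manager
        .check_file_access("tenant-a/report-module", "/data/reports.csv", FileOperation::Write)
        .is_err());
    Ok(())
}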
Memory Safety and Isolation
Linear Memory Model
// Memory safety enforcement in WASM
use std::ptr;
// WASM linear memory with bounds checking
pub struct LinearMemory {
data: Vec<u8>,
size: usize,
max_size: usize,
}
impl LinearMemory {
pub fn new(initial_size: usize, max_size: usize) -> Self {
let mut data = Vec::with_capacity(max_size);
data.resize(initial_size, 0);
Self {
data,
size: initial_size,
max_size,
}
}
pub fn read(&self, offset: u32, size: u32) -> Result<&[u8], MemoryError> {
let start = offset as usize;
let end = start + size as usize;
if end > self.size {
return Err(MemoryError::OutOfBounds);
}
Ok(&self.data[start..end])
}
pub fn write(&mut self, offset: u32, data: &[u8]) -> Result<(), MemoryError> {
let start = offset as usize;
let end = start + data.len();
if end > self.size {
return Err(MemoryError::OutOfBounds);
}
self.data[start..end].copy_from_slice(data);
Ok(())
}
pub fn grow(&mut self, pages: u32) -> Result<u32, MemoryError> {
const PAGE_SIZE: usize = 64 * 1024; // 64KB
let new_size = self.size + (pages as usize * PAGE_SIZE);
if new_size > self.max_size {
return Err(MemoryError::GrowthLimitExceeded);
}
let old_pages = self.size / PAGE_SIZE;
self.data.resize(new_size, 0);
self.size = new_size;
Ok(old_pages as u32)
}
// Safe pointer creation for WASM module
pub fn create_pointer(&self, offset: u32) -> Result<*const u8, MemoryError> {
if offset as usize >= self.size {
return Err(MemoryError::OutOfBounds);
}
Ok(self.data.as_ptr().wrapping_add(offset as usize))
}
// Validate all memory accesses
pub fn validate_access(&self, offset: u32, size: u32) -> Result<(), MemoryError> {
let start = offset as usize;
let end = start + size as usize;
if start >= self.size || end > self.size || end < start {
return Err(MemoryError::OutOfBounds);
}
Ok(())
}
}
#[derive(Debug)]
pub enum MemoryError {
OutOfBounds,
GrowthLimitExceeded,
InvalidAlignment,
}
// Stack overflow protection
pub struct StackGuard {
stack_base: *const u8,
stack_size: usize,
guard_size: usize,
}
impl StackGuard {
pub fn new(stack_size: usize) -> Self {
let guard_size = 4096; // 4KB guard page
let total_size = stack_size + guard_size;
// Allocate stack with guard page
let stack_base = unsafe {
libc::mmap(
ptr::null_mut(),
total_size,
libc::PROT_READ | libc::PROT_WRITE,
libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
-1,
0,
) as *const u8
};
// Protect guard page
unsafe {
libc::mprotect(
stack_base as *mut libc::c_void,
guard_size,
libc::PROT_NONE,
);
}
Self {
stack_base,
stack_size,
guard_size,
}
}
pub fn check_stack_overflow(&self, current_sp: *const u8) -> Result<(), MemoryError> {
let stack_start = unsafe { self.stack_base.add(self.guard_size) };
let stack_end = unsafe { stack_start.add(self.stack_size) };
if current_sp < stack_start || current_sp >= stack_end {
return Err(MemoryError::OutOfBounds);
}
Ok(())
}
}
// Memory encryption for sensitive data
pub struct EncryptedMemory {
key: [u8; 32],
nonce_counter: u64,
}
impl EncryptedMemory {
pub fn new() -> Self {
let mut key = [0u8; 32];
// Generate random key (simplified)
for i in 0..32 {
key[i] = (i * 7 + 13) as u8; // Simplified key generation
}
Self {
key,
nonce_counter: 0,
}
}
pub fn encrypt_data(&mut self, data: &[u8]) -> Vec<u8> {
let mut encrypted = Vec::with_capacity(data.len());
self.nonce_counter += 1;
// Simple XOR encryption (use proper crypto in production)
for (i, &byte) in data.iter().enumerate() {
let key_byte = self.key[i % 32];
let nonce_byte = (self.nonce_counter >> (i % 8)) as u8;
encrypted.push(byte ^ key_byte ^ nonce_byte);
}
encrypted
}
pub fn decrypt_data(&self, encrypted: &[u8], nonce: u64) -> Vec<u8> {
let mut decrypted = Vec::with_capacity(encrypted.len());
for (i, &byte) in encrypted.iter().enumerate() {
let key_byte = self.key[i % 32];
let nonce_byte = (nonce >> (i % 8)) as u8;
decrypted.push(byte ^ key_byte ^ nonce_byte);
}
decrypted
}
}
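A short usage sketch of LinearMemory makes the guarantees concrete: every read, write, and growth request goes through the same bounds checks.
// Sketch: bounds checking in action on the LinearMemory type above.
fn linear_memory_demo() -> Result<(), MemoryError> {
    const PAGE: usize = 64 * 1024;
    let mut mem = LinearMemory::new(PAGE, 4 * PAGE); // one page now, four pages maximum
    mem.write(0, b"hello")?; // in bounds
    assert!(mem.write((PAGE - 2) as u32, b"hello").is_err()); // straddles the end: rejected
    mem.grow(1)?; // now two pages
    mem.validate_access(PAGE as u32, 4)?; // the new page is addressable
    assert!(mem.grow(10).is_err()); // would exceed max_size: GrowthLimitExceeded
    Ok(())
}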
Side-Channel Attack Mitigation
// Constant-time operations to prevent timing attacks
use std::ptr;
pub struct ConstantTimeOps;
impl ConstantTimeOps {
// Constant-time comparison
pub fn secure_compare(a: &[u8], b: &[u8]) -> bool {
if a.len() != b.len() {
return false;
}
let mut result = 0u8;
for (x, y) in a.iter().zip(b.iter()) {
result |= x ^ y;
}
result == 0
}
// Constant-time conditional select
pub fn conditional_select(condition: bool, a: u8, b: u8) -> u8 {
let mask = if condition { 0xFF } else { 0x00 };
(a & mask) | (b & !mask)
}
// Constant-time array lookup
pub fn secure_lookup(array: &[u8], index: usize) -> u8 {
let mut result = 0u8;
for (i, &value) in array.iter().enumerate() {
let mask = ((i == index) as u8).wrapping_neg(); // branch-free mask
result |= value & mask;
}
result
}
// Memory scrubbing to prevent data leakage
pub fn secure_zero(data: &mut [u8]) {
// Use volatile write to prevent optimization
for byte in data.iter_mut() {
unsafe {
ptr::write_volatile(byte, 0);
}
}
}
}
// Cache-resistant operations
pub struct CacheResistantOps {
dummy_memory: Vec<u8>,
}
impl CacheResistantOps {
pub fn new() -> Self {
Self {
dummy_memory: vec![0; 4096], // Dummy memory for cache obfuscation
}
}
// Access pattern obfuscation
pub fn obfuscated_read(&mut self, data: &[u8], real_index: usize) -> u8 {
let mut result = 0u8;
// Access all elements to hide real access pattern
for (i, &value) in data.iter().enumerate() {
let is_target = i == real_index;
let mask = (is_target as u8).wrapping_neg(); // branch-free mask
result |= value & mask;
// Add dummy cache access
let dummy_index = (i * 7) % self.dummy_memory.len();
self.dummy_memory[dummy_index] = self.dummy_memory[dummy_index].wrapping_add(1);
}
result
}
}
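As a usage example, comparing a caller-supplied token against a stored secret with secure_compare keeps the running time independent of where the first mismatching byte occurs, unlike a plain slice comparison that can short-circuit.
// Sketch: token check that does not leak the position of the first mismatch.
fn token_is_valid(supplied: &[u8], expected: &[u8]) -> bool {
    // A naive `supplied == expected` may return early on the first differing byte;
    // the constant-time version examines every byte regardless.
    ConstantTimeOps::secure_compare(supplied, expected)
}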
WASI Security Considerations
Secure WASI Implementation
// Secure WASI host implementation
use std::collections::HashMap;
use std::path::PathBuf;
pub struct SecureWASIHost {
file_permissions: HashMap<String, WASIFilePermissions>,
environment_whitelist: Vec<String>,
network_policy: NetworkPolicy,
resource_limits: WASIResourceLimits,
}
#[derive(Clone)]
pub struct WASIFilePermissions {
allowed_paths: Vec<PathBuf>,
read_only: bool,
max_file_size: u64,
max_files_open: usize,
}
pub struct NetworkPolicy {
allowed_domains: Vec<String>,
allowed_ports: Vec<u16>,
blocked_private_networks: bool,
max_connections: usize,
}
pub struct WASIResourceLimits {
max_memory: usize,
max_cpu_time: std::time::Duration,
max_file_descriptors: usize,
max_processes: usize,
}
impl SecureWASIHost {
pub fn new() -> Self {
Self {
file_permissions: HashMap::new(),
environment_whitelist: vec![
"PATH".to_string(),
"LANG".to_string(),
"TZ".to_string(),
],
network_policy: NetworkPolicy {
allowed_domains: vec!["api.trusted.com".to_string()],
allowed_ports: vec![80, 443, 8080],
blocked_private_networks: true,
max_connections: 10,
},
resource_limits: WASIResourceLimits {
max_memory: 64 * 1024 * 1024, // 64MB
max_cpu_time: std::time::Duration::from_secs(30),
max_file_descriptors: 32,
max_processes: 1,
},
}
}
pub fn configure_file_access(&mut self, module_id: &str, config: WASIFilePermissions) {
self.file_permissions.insert(module_id.to_string(), config);
}
// Secure path resolution with validation
pub fn resolve_path(&self, module_id: &str, path: &str) -> Result<PathBuf, WASIError> {
let config = self.file_permissions.get(module_id)
.ok_or(WASIError::PermissionDenied)?;
let resolved_path = PathBuf::from(path);
// Canonicalize path to prevent directory traversal
let canonical_path = resolved_path.canonicalize()
.map_err(|_| WASIError::InvalidPath)?;
// Check if path is within allowed directories
let allowed = config.allowed_paths.iter().any(|allowed_path| {
canonical_path.starts_with(allowed_path)
});
if !allowed {
return Err(WASIError::PermissionDenied);
}
Ok(canonical_path)
}
// Secure environment variable access
pub fn get_environment_variable(&self, key: &str) -> Result<Option<String>, WASIError> {
if !self.environment_whitelist.contains(&key.to_string()) {
return Err(WASIError::PermissionDenied);
}
Ok(std::env::var(key).ok())
}
// Network access validation
pub fn validate_network_access(&self, host: &str, port: u16) -> Result<(), WASIError> {
// Check allowed domains
// Exact match or dot-delimited subdomain; a bare ends_with would also
// accept names like "evilapi.trusted.com" for the allow-listed "api.trusted.com"
let domain_allowed = self.network_policy.allowed_domains.iter()
.any(|domain| host == domain || host.ends_with(&format!(".{}", domain)));
if !domain_allowed {
return Err(WASIError::PermissionDenied);
}
// Check allowed ports
if !self.network_policy.allowed_ports.contains(&port) {
return Err(WASIError::PermissionDenied);
}
// Block private networks if configured
if self.network_policy.blocked_private_networks {
if self.is_private_ip(host) {
return Err(WASIError::PermissionDenied);
}
}
Ok(())
}
fn is_private_ip(&self, host: &str) -> bool {
// Check for private IP ranges
if let Ok(ip) = host.parse::<std::net::IpAddr>() {
match ip {
std::net::IpAddr::V4(ipv4) => {
let octets = ipv4.octets();
// Check RFC 1918 private ranges
matches!(
octets,
[10, _, _, _] |
[172, 16..=31, _, _] |
[192, 168, _, _] |
[127, _, _, _] // localhost
)
}
std::net::IpAddr::V6(_) => {
// Simplified IPv6 private check
host.starts_with("::1") || host.starts_with("fc") || host.starts_with("fd")
}
}
} else {
false
}
}
}
#[derive(Debug)]
pub enum WASIError {
PermissionDenied,
InvalidPath,
ResourceExhausted,
NetworkError,
}
// Secure random number generation for WASI
pub struct SecureRandom {
entropy_pool: Vec<u8>,
pool_index: usize,
}
impl SecureRandom {
pub fn new() -> Self {
let mut entropy_pool = vec![0u8; 4096];
// Initialize with system entropy
#[cfg(unix)]
{
use std::fs::File;
use std::io::Read;
if let Ok(mut urandom) = File::open("/dev/urandom") {
let _ = urandom.read_exact(&mut entropy_pool);
}
}
Self {
entropy_pool,
pool_index: 0,
}
}
pub fn get_random_bytes(&mut self, buffer: &mut [u8]) -> Result<(), WASIError> {
if buffer.len() > self.entropy_pool.len() - self.pool_index {
return Err(WASIError::ResourceExhausted);
}
buffer.copy_from_slice(
&self.entropy_pool[self.pool_index..self.pool_index + buffer.len()]
);
self.pool_index += buffer.len();
// Refresh entropy pool when low
if self.pool_index > self.entropy_pool.len() / 2 {
self.refresh_entropy();
}
Ok(())
}
fn refresh_entropy(&mut self) {
// Mix existing entropy with new entropy
for (i, byte) in self.entropy_pool.iter_mut().enumerate() {
*byte ^= (i as u8).wrapping_mul(179);
}
self.pool_index = 0;
}
}
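A short sketch of how the pieces fit together, assuming it lives alongside the types above; the module ID, paths, and hosts are illustrative.
// Sketch: configure the host for one module, then validate a file and a network request.
fn configure_and_check() -> Result<(), WASIError> {
    let mut host = SecureWASIHost::new();
    host.configure_file_access("report-module", WASIFilePermissions {
        allowed_paths: vec![PathBuf::from("/data/readonly")],
        read_only: true,
        max_file_size: 1024 * 1024,
        max_files_open: 8,
    });
    // Paths are canonicalized and confined to the allowed directories
    // (resolve_path also fails if the file does not exist on the host).
    let _ = host.resolve_path("report-module", "/data/readonly/report.csv");
    // Network requests must match both the domain allow-list and the port allow-list
    host.validate_network_access("api.trusted.com", 443)?;
    assert!(host.validate_network_access("169.254.169.254", 80).is_err());
    Ok(())
}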
Production Hardening Strategies
Runtime Security Configuration
# Secure WASM runtime configuration
apiVersion: v1
kind: ConfigMap
metadata:
name: wasm-security-config
data:
wasmtime-config.toml: |
[security]
# Enable all security features
enable_simd = false # Disable SIMD for security
enable_bulk_memory = false # Disable bulk memory operations
enable_reference_types = false # Disable reference types
# Memory limits
max_wasm_stack = 524288 # 512KB stack limit
max_memory_size = 67108864 # 64MB memory limit
# Execution limits
max_instances = 100
max_tables = 10
max_memories = 1
max_globals = 1000
# File system restrictions
[security.filesystem]
allowed_dirs = ["/tmp/wasm", "/data/readonly"]
readonly_dirs = ["/data/readonly", "/config"]
max_open_files = 32
max_file_size = 10485760 # 10MB
# Network restrictions
[security.network]
allowed_hosts = ["api.trusted.com", "cdn.example.com"]
allowed_ports = [80, 443, 8080]
block_private_networks = true
max_connections = 10
connection_timeout = 30 # seconds
# Environment variable whitelist
[security.environment]
allowed_vars = ["PATH", "LANG", "TZ", "APP_CONFIG"]
# Resource limits
[security.resources]
max_cpu_time = 30 # seconds
max_memory_growth = 33554432 # 32MB
enable_fuel = true
fuel_limit = 1000000
---
# Pod Security Policy for WASM workloads
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: wasm-security-policy
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
rule: 'MustRunAsNonRoot'
supplementalGroups:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
readOnlyRootFilesystem: true
seLinux:
rule: RunAsAny
seccompProfile:
type: RuntimeDefault
---
# Network Policy for WASM workloads
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: wasm-network-policy
spec:
podSelector:
matchLabels:
runtime: wasm
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: gateway
ports:
- protocol: TCP
port: 8080
egress:
# Allow DNS
- to: []
ports:
- protocol: UDP
port: 53
# Allow specific external APIs
- to: []
ports:
- protocol: TCP
port: 443
# Block all other egress
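The same limits can be enforced programmatically when embedding a runtime. The sketch below uses the Wasmtime embedding API roughly as it existed in the 10.x to 13.x series (fuel metering plus a StoreLimits resource limiter); method names shift between versions, so treat it as a starting point rather than exact configuration, and add WASI to the Linker if the module imports it.
// Sketch: memory, instance, and CPU (fuel) limits with Wasmtime (version-dependent API).
use wasmtime::{Config, Engine, Linker, Module, Store, StoreLimits, StoreLimitsBuilder};
struct HostState {
    limits: StoreLimits,
}
fn run_with_limits(wasm: &[u8]) -> anyhow::Result<()> {
    let mut config = Config::new();
    config.consume_fuel(true); // meter execution so runaway loops are stopped
    let engine = Engine::new(&config)?;
    let module = Module::new(&engine, wasm)?;
    let limits = StoreLimitsBuilder::new()
        .memory_size(64 * 1024 * 1024) // 64MB linear-memory cap
        .instances(1)
        .tables(10)
        .build();
    let mut store = Store::new(&engine, HostState { limits });
    store.limiter(|state| &mut state.limits);
    store.add_fuel(1_000_000)?; // roughly the fuel_limit from the config above
    let linker = Linker::new(&engine); // assumes a module with no imports
    let instance = linker.instantiate(&mut store, &module)?;
    instance.get_typed_func::<(), ()>(&mut store, "_start")?.call(&mut store, ())?;
    Ok(())
}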
Secure Deployment Pipeline
# Security-focused CI/CD pipeline
name: Secure WASM Deployment
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
security-scan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: WASM Security Scan
run: |
# Install WASM security tools
cargo install --locked cargo-audit
cargo install --locked cargo-deny
# Vulnerability scan
cargo audit
# License and dependency check
cargo deny check
# WASM-specific security checks
./scripts/wasm-security-check.sh
- name: Static Analysis
run: |
# Install security linters (clippy ships as a rustup component, not a crate)
rustup component add clippy
# Run security-focused lints
cargo clippy -- -D warnings -D unsafe_code
# Check for common security issues
cargo clippy -- -W clippy::mem_forget \
-W clippy::mem_replace_with_uninit \
-W clippy::transmute_ptr_to_ptr
- name: Fuzzing Tests
run: |
# Install cargo-fuzz
cargo install cargo-fuzz
# Run fuzz tests for critical functions
cargo fuzz run fuzz_input_validation -- -max_total_time=300
cargo fuzz run fuzz_memory_operations -- -max_total_time=300
build-secure:
needs: security-scan
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Secure Build
run: |
# Build with security optimizations
RUSTFLAGS="-C control-flow-guard=yes -C overflow-checks=yes" \
cargo build --target wasm32-wasi --release
# Strip debug information
wasm-strip target/wasm32-wasi/release/app.wasm
# Optimize for security (not just size)
wasm-opt --strip-debug --strip-producers \
--remove-unused-names \
-Os target/wasm32-wasi/release/app.wasm \
-o app-secure.wasm
- name: Binary Analysis
run: |
# Analyze WASM binary for security issues
wasm-objdump -x app-secure.wasm
# Check for dangerous imports
./scripts/check-wasm-imports.sh app-secure.wasm
# Verify no debug information leaked
./scripts/verify-no-debug-info.sh app-secure.wasm
- name: Sign Binary
run: |
# Sign WASM binary for integrity (cosign uses sign-blob for plain files)
cosign sign-blob --key cosign.key --output-signature app-secure.wasm.sig app-secure.wasm
# Generate SBOM
syft app-secure.wasm -o spdx-json > app-sbom.json
# Attest build provenance
cosign attest-blob --predicate app-sbom.json --key cosign.key app-secure.wasm
deploy-secure:
needs: build-secure
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/main'
steps:
- name: Secure Deployment
run: |
# Deploy with security policies
kubectl apply -f k8s/security-policies/
kubectl apply -f k8s/network-policies/
kubectl apply -f k8s/deployment-secure.yaml
# Verify deployment security
kubectl get pods -l app=wasm-secure -o yaml | \
yq eval '.items[].spec.securityContext'
# Check runtime security
kubectl exec -it deployment/wasm-secure -- \
/security-check.sh
- name: Security Monitoring
run: |
# Set up security monitoring
kubectl apply -f monitoring/falco-rules-wasm.yaml
kubectl apply -f monitoring/security-dashboard.yaml
# Configure alerts
kubectl apply -f monitoring/security-alerts.yaml
Vulnerability Assessment and Management
WASM-Specific Vulnerability Scanner
// WASM vulnerability scanner
use std::collections::HashMap;
use wasmparser::{Parser, Payload};
pub struct WASMVulnerabilityScanner {
vulnerability_db: HashMap<String, VulnerabilityInfo>,
scan_rules: Vec<ScanRule>,
}
#[derive(Debug, Clone)]
pub struct VulnerabilityInfo {
id: String,
severity: Severity,
description: String,
affected_imports: Vec<String>,
mitigation: String,
}
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub enum Severity {
Low,
Medium,
High,
Critical,
}
pub struct ScanRule {
name: String,
check: Box<dyn Fn(&WASMModule) -> Vec<Finding>>,
}
#[derive(Debug)]
pub struct Finding {
rule_name: String,
severity: Severity,
message: String,
location: Option<String>,
recommendation: String,
}
pub struct WASMModule {
imports: Vec<Import>,
exports: Vec<Export>,
functions: Vec<Function>,
memory: Option<Memory>,
globals: Vec<Global>,
}
#[derive(Debug)]
pub struct Import {
module: String,
name: String,
typ: ImportType,
}
#[derive(Debug)]
pub enum ImportType {
Function,
Memory,
Global,
Table,
}
#[derive(Debug)]
pub struct Export {
name: String,
typ: ExportType,
}
#[derive(Debug)]
pub enum ExportType {
Function,
Memory,
Global,
Table,
}
#[derive(Debug)]
pub struct Function {
name: Option<String>,
params: Vec<String>,
returns: Vec<String>,
locals: Vec<String>,
}
#[derive(Debug)]
pub struct Memory {
initial: u32,
maximum: Option<u32>,
}
#[derive(Debug)]
pub struct Global {
name: Option<String>,
typ: String,
mutable: bool,
}
impl WASMVulnerabilityScanner {
pub fn new() -> Self {
let mut scanner = Self {
vulnerability_db: HashMap::new(),
scan_rules: Vec::new(),
};
scanner.initialize_vulnerability_db();
scanner.initialize_scan_rules();
scanner
}
fn initialize_vulnerability_db(&mut self) {
// Load known vulnerabilities
self.vulnerability_db.insert(
"WASM-001".to_string(),
VulnerabilityInfo {
id: "WASM-001".to_string(),
severity: Severity::High,
description: "Unrestricted memory growth".to_string(),
affected_imports: vec!["memory.grow".to_string()],
mitigation: "Set maximum memory limits".to_string(),
},
);
self.vulnerability_db.insert(
"WASM-002".to_string(),
VulnerabilityInfo {
id: "WASM-002".to_string(),
severity: Severity::Medium,
description: "Exposed debugging functions".to_string(),
affected_imports: vec!["debug.print".to_string(), "console.log".to_string()],
mitigation: "Remove debug exports in production".to_string(),
},
);
self.vulnerability_db.insert(
"WASM-003".to_string(),
VulnerabilityInfo {
id: "WASM-003".to_string(),
severity: Severity::Critical,
description: "Unrestricted file system access".to_string(),
affected_imports: vec!["wasi_snapshot_preview1.path_open".to_string()],
mitigation: "Implement capability-based file access".to_string(),
},
);
}
fn initialize_scan_rules(&mut self) {
// Rule: Check for unrestricted memory growth
self.scan_rules.push(ScanRule {
name: "unrestricted_memory_growth".to_string(),
check: Box::new(|module| {
let mut findings = Vec::new();
if let Some(memory) = &module.memory {
if memory.maximum.is_none() {
findings.push(Finding {
rule_name: "unrestricted_memory_growth".to_string(),
severity: Severity::High,
message: "Memory has no maximum limit".to_string(),
location: Some("memory section".to_string()),
recommendation: "Set a reasonable maximum memory limit".to_string(),
});
}
}
findings
}),
});
// Rule: Check for dangerous imports
self.scan_rules.push(ScanRule {
name: "dangerous_imports".to_string(),
check: Box::new(|module| {
let mut findings = Vec::new();
let dangerous_imports = [
"wasi_snapshot_preview1.proc_exit",
"wasi_snapshot_preview1.proc_raise",
"wasi_snapshot_preview1.fd_close",
];
for import in &module.imports {
let import_name = format!("{}.{}", import.module, import.name);
if dangerous_imports.contains(&import_name.as_str()) {
findings.push(Finding {
rule_name: "dangerous_imports".to_string(),
severity: Severity::Medium,
message: format!("Potentially dangerous import: {}", import_name),
location: Some("import section".to_string()),
recommendation: "Review necessity of this import".to_string(),
});
}
}
findings
}),
});
// Rule: Check for exposed internal functions
self.scan_rules.push(ScanRule {
name: "exposed_internals".to_string(),
check: Box::new(|module| {
let mut findings = Vec::new();
let internal_prefixes = ["_", "debug_", "test_", "internal_"];
for export in &module.exports {
if internal_prefixes.iter().any(|prefix| export.name.starts_with(prefix)) {
findings.push(Finding {
rule_name: "exposed_internals".to_string(),
severity: Severity::Low,
message: format!("Internal function exposed: {}", export.name),
location: Some("export section".to_string()),
recommendation: "Remove internal exports from production builds".to_string(),
});
}
}
findings
}),
});
}
pub fn scan_module(&self, wasm_bytes: &[u8]) -> Result<ScanResult, ScanError> {
let module = self.parse_wasm_module(wasm_bytes)?;
let mut findings = Vec::new();
// Run all scan rules
for rule in &self.scan_rules {
let rule_findings = (rule.check)(&module);
findings.extend(rule_findings);
}
// Check against vulnerability database
for import in &module.imports {
let import_name = format!("{}.{}", import.module, import.name);
for vuln in self.vulnerability_db.values() {
if vuln.affected_imports.contains(&import_name) {
findings.push(Finding {
rule_name: vuln.id.clone(),
severity: vuln.severity.clone(),
message: vuln.description.clone(),
location: Some("import section".to_string()),
recommendation: vuln.mitigation.clone(),
});
}
}
}
Ok(ScanResult {
module_info: module,
findings,
scan_time: std::time::SystemTime::now(),
})
}
fn parse_wasm_module(&self, wasm_bytes: &[u8]) -> Result<WASMModule, ScanError> {
let mut module = WASMModule {
imports: Vec::new(),
exports: Vec::new(),
functions: Vec::new(),
memory: None,
globals: Vec::new(),
};
let parser = Parser::new(0);
for payload in parser.parse_all(wasm_bytes) {
match payload.map_err(|_| ScanError::ParseError)? {
Payload::ImportSection(reader) => {
for import in reader {
let import = import.map_err(|_| ScanError::ParseError)?;
module.imports.push(Import {
module: import.module.to_string(),
name: import.name.to_string(),
typ: ImportType::Function, // Simplified
});
}
}
Payload::ExportSection(reader) => {
for export in reader {
let export = export.map_err(|_| ScanError::ParseError)?;
module.exports.push(Export {
name: export.name.to_string(),
typ: ExportType::Function, // Simplified
});
}
}
Payload::MemorySection(reader) => {
for memory in reader {
let memory = memory.map_err(|_| ScanError::ParseError)?;
module.memory = Some(Memory {
// wasmparser reports 64-bit sizes; narrow to u32 for this simplified model
initial: memory.initial as u32,
maximum: memory.maximum.map(|m| m as u32),
});
break; // Only handle first memory
}
}
_ => {} // Skip other sections for now
}
}
Ok(module)
}
}
#[derive(Debug)]
pub struct ScanResult {
pub module_info: WASMModule,
pub findings: Vec<Finding>,
pub scan_time: std::time::SystemTime,
}
#[derive(Debug)]
pub enum ScanError {
ParseError,
InvalidModule,
}
impl ScanResult {
pub fn get_critical_findings(&self) -> Vec<&Finding> {
self.findings.iter()
.filter(|f| f.severity == Severity::Critical)
.collect()
}
pub fn get_high_findings(&self) -> Vec<&Finding> {
self.findings.iter()
.filter(|f| f.severity == Severity::High)
.collect()
}
pub fn has_security_issues(&self) -> bool {
self.findings.iter()
.any(|f| matches!(f.severity, Severity::High | Severity::Critical))
}
pub fn generate_report(&self) -> String {
let mut report = String::new();
report.push_str("WASM Security Scan Report\n");
report.push_str("=========================\n\n");
// Summary
let critical_count = self.get_critical_findings().len();
let high_count = self.get_high_findings().len();
report.push_str(&format!("Critical Issues: {}\n", critical_count));
report.push_str(&format!("High Issues: {}\n", high_count));
report.push_str(&format!("Total Findings: {}\n\n", self.findings.len()));
// Detailed findings
for finding in &self.findings {
report.push_str(&format!("Severity: {:?}\n", finding.severity));
report.push_str(&format!("Rule: {}\n", finding.rule_name));
report.push_str(&format!("Message: {}\n", finding.message));
if let Some(location) = &finding.location {
report.push_str(&format!("Location: {}\n", location));
}
report.push_str(&format!("Recommendation: {}\n", finding.recommendation));
report.push_str("\n---\n\n");
}
report
}
}
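In a CI step the scanner can gate the build: generate the report, then fail when any High or Critical finding is present.
// Sketch: refuse to ship a module that has high or critical findings.
fn scan_or_fail(wasm_bytes: &[u8]) -> Result<(), String> {
    let scanner = WASMVulnerabilityScanner::new();
    let result = scanner
        .scan_module(wasm_bytes)
        .map_err(|e| format!("scan failed: {:?}", e))?;
    println!("{}", result.generate_report());
    if result.has_security_issues() {
        let blocking = result.get_critical_findings().len() + result.get_high_findings().len();
        return Err(format!("{} high/critical findings, refusing to deploy", blocking));
    }
    Ok(())
}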
Secure Development Practices
Security-First Development Workflow
// Secure WASM development template
#![deny(unsafe_code)]
#![warn(
missing_docs,
clippy::all,
clippy::pedantic,
clippy::cargo,
clippy::nursery
)]
//! Secure WASM application template with built-in security best practices
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
/// Main application context with security controls
pub struct SecureApp {
config: AppConfig,
security_context: SecurityContext,
audit_log: AuditLog,
}
/// Application configuration with security validation
#[derive(Debug, Deserialize)]
pub struct AppConfig {
#[serde(deserialize_with = "validate_max_memory")]
max_memory: usize,
#[serde(deserialize_with = "validate_timeout")]
timeout_seconds: u64,
#[serde(deserialize_with = "validate_allowed_origins")]
allowed_origins: Vec<String>,
#[serde(default = "default_rate_limit")]
rate_limit: u32,
}
/// Security context for request validation
#[derive(Debug)]
pub struct SecurityContext {
request_count: u32,
last_request_time: std::time::SystemTime,
trusted_origins: Vec<String>,
security_headers: HashMap<String, String>,
}
/// Audit logging for security events
pub struct AuditLog {
events: Vec<SecurityEvent>,
max_events: usize,
}
#[derive(Debug, Serialize)]
pub struct SecurityEvent {
timestamp: u64,
event_type: SecurityEventType,
details: HashMap<String, String>,
risk_score: u8,
}
#[derive(Debug, Serialize)]
pub enum SecurityEventType {
AuthenticationAttempt,
AuthorizationFailure,
RateLimitExceeded,
InvalidInput,
SuspiciousActivity,
ConfigurationChange,
}
impl SecureApp {
/// Create new secure application instance
pub fn new(config: AppConfig) -> Result<Self, SecurityError> {
let security_context = SecurityContext {
request_count: 0,
last_request_time: std::time::SystemTime::now(),
trusted_origins: config.allowed_origins.clone(),
security_headers: Self::default_security_headers(),
};
let audit_log = AuditLog {
events: Vec::new(),
max_events: 1000,
};
Ok(Self {
config,
security_context,
audit_log,
})
}
/// Process request with security validation
pub fn process_request(&mut self, request: &Request) -> Result<Response, SecurityError> {
// Rate limiting
self.check_rate_limit()?;
// Origin validation
self.validate_origin(&request.origin)?;
// Input validation
self.validate_input(&request.data)?;
// Audit logging
self.log_security_event(SecurityEventType::AuthenticationAttempt, &[
("origin".to_string(), request.origin.clone()),
("user_agent".to_string(), request.user_agent.clone()),
]);
// Process the request
let response = self.handle_request(request)?;
// Add security headers
Ok(self.add_security_headers(response))
}
fn check_rate_limit(&mut self) -> Result<(), SecurityError> {
let now = std::time::SystemTime::now();
let time_diff = now.duration_since(self.security_context.last_request_time)
.unwrap_or_default();
// Reset counter if enough time has passed
if time_diff.as_secs() >= 60 {
self.security_context.request_count = 0;
self.security_context.last_request_time = now;
}
self.security_context.request_count += 1;
if self.security_context.request_count > self.config.rate_limit {
self.log_security_event(SecurityEventType::RateLimitExceeded, &[
("request_count".to_string(), self.security_context.request_count.to_string()),
("time_window".to_string(), "60".to_string()),
]);
return Err(SecurityError::RateLimitExceeded);
}
Ok(())
}
// Takes &mut self because failed checks are written to the audit log
fn validate_origin(&mut self, origin: &str) -> Result<(), SecurityError> {
if self.security_context.trusted_origins.is_empty() {
return Ok(()); // No restrictions
}
// Exact match or dot-delimited subdomain, so "eviltrusted.com" does not pass for "trusted.com"
let is_trusted = self.security_context.trusted_origins.iter()
.any(|trusted| origin == trusted || origin.ends_with(&format!(".{}", trusted)));
if !is_trusted {
self.log_security_event(SecurityEventType::AuthorizationFailure, &[
("origin".to_string(), origin.to_string()),
("reason".to_string(), "untrusted_origin".to_string()),
]);
return Err(SecurityError::UntrustedOrigin);
}
Ok(())
}
// Takes &mut self because suspicious input is recorded in the audit log
fn validate_input(&mut self, data: &[u8]) -> Result<(), SecurityError> {
// Size validation
if data.len() > 1024 * 1024 { // 1MB limit
return Err(SecurityError::InputTooLarge);
}
// Content validation
if data.is_empty() {
return Err(SecurityError::EmptyInput);
}
// Check for suspicious patterns
if self.contains_suspicious_patterns(data) {
self.log_security_event(SecurityEventType::SuspiciousActivity, &[
("data_size".to_string(), data.len().to_string()),
("reason".to_string(), "suspicious_patterns".to_string()),
]);
return Err(SecurityError::SuspiciousInput);
}
Ok(())
}
fn contains_suspicious_patterns(&self, data: &[u8]) -> bool {
let suspicious_patterns: [&[u8]; 7] = [
b"<script",
b"javascript:",
b"eval(",
b"exec(",
b"system(",
b"../",
b"..\\",
];
let data_lower = data.to_ascii_lowercase();
suspicious_patterns.iter().any(|pattern| {
data_lower.windows(pattern.len()).any(|window| window == *pattern)
})
}
fn handle_request(&self, request: &Request) -> Result<Response, SecurityError> {
// Sanitize and process the request
let sanitized_data = self.sanitize_input(&request.data)?;
// Process the sanitized data
let result = self.process_data(&sanitized_data)?;
Ok(Response {
status: 200,
data: result,
headers: HashMap::new(),
})
}
fn sanitize_input(&self, data: &[u8]) -> Result<Vec<u8>, SecurityError> {
let mut sanitized = Vec::with_capacity(data.len());
for &byte in data {
// Remove or escape dangerous characters
match byte {
b'<' => sanitized.extend_from_slice(b"&lt;"),
b'>' => sanitized.extend_from_slice(b"&gt;"),
b'&' => sanitized.extend_from_slice(b"&amp;"),
b'"' => sanitized.extend_from_slice(b"&quot;"),
b'\'' => sanitized.extend_from_slice(b"&#x27;"),
// Allow printable ASCII
32..=126 => sanitized.push(byte),
// Skip other characters
_ => {}
}
}
Ok(sanitized)
}
fn process_data(&self, data: &[u8]) -> Result<Vec<u8>, SecurityError> {
// Secure data processing logic
let processed = data.iter()
.map(|&b| b.wrapping_add(1)) // Simple transformation
.collect();
Ok(processed)
}
fn add_security_headers(&self, mut response: Response) -> Response {
for (key, value) in &self.security_context.security_headers {
response.headers.insert(key.clone(), value.clone());
}
response
}
fn default_security_headers() -> HashMap<String, String> {
let mut headers = HashMap::new();
headers.insert("X-Content-Type-Options".to_string(), "nosniff".to_string());
headers.insert("X-Frame-Options".to_string(), "DENY".to_string());
headers.insert("X-XSS-Protection".to_string(), "1; mode=block".to_string());
headers.insert("Strict-Transport-Security".to_string(),
"max-age=31536000; includeSubDomains".to_string());
headers.insert("Content-Security-Policy".to_string(),
"default-src 'none'".to_string());
headers
}
fn log_security_event(&mut self, event_type: SecurityEventType, details: &[(String, String)]) {
// Compute the risk score before event_type is moved into the struct
let risk_score = self.calculate_risk_score(&event_type);
let event = SecurityEvent {
timestamp: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_secs(),
event_type,
details: details.iter().cloned().collect(),
risk_score,
};
self.audit_log.events.push(event);
// Maintain log size
if self.audit_log.events.len() > self.audit_log.max_events {
self.audit_log.events.remove(0);
}
}
fn calculate_risk_score(&self, event_type: &SecurityEventType) -> u8 {
match event_type {
SecurityEventType::AuthenticationAttempt => 1,
SecurityEventType::AuthorizationFailure => 5,
SecurityEventType::RateLimitExceeded => 3,
SecurityEventType::InvalidInput => 2,
SecurityEventType::SuspiciousActivity => 8,
SecurityEventType::ConfigurationChange => 4,
}
}
}
// Input validation functions
fn validate_max_memory<'de, D>(deserializer: D) -> Result<usize, D::Error>
where
D: serde::Deserializer<'de>,
{
let value = usize::deserialize(deserializer)?;
if value > 100 * 1024 * 1024 { // 100MB limit
return Err(serde::de::Error::custom("Memory limit too high"));
}
Ok(value)
}
fn validate_timeout<'de, D>(deserializer: D) -> Result<u64, D::Error>
where
D: serde::Deserializer<'de>,
{
let value = u64::deserialize(deserializer)?;
if value > 300 { // 5 minute limit
return Err(serde::de::Error::custom("Timeout too long"));
}
Ok(value)
}
fn validate_allowed_origins<'de, D>(deserializer: D) -> Result<Vec<String>, D::Error>
where
D: serde::Deserializer<'de>,
{
let origins = Vec::<String>::deserialize(deserializer)?;
for origin in &origins {
if origin.contains("..") || origin.contains("//") {
return Err(serde::de::Error::custom("Invalid origin format"));
}
}
Ok(origins)
}
fn default_rate_limit() -> u32 {
60 // 60 requests per minute
}
// Data structures
#[derive(Debug)]
pub struct Request {
pub origin: String,
pub user_agent: String,
pub data: Vec<u8>,
}
#[derive(Debug)]
pub struct Response {
pub status: u16,
pub data: Vec<u8>,
pub headers: HashMap<String, String>,
}
#[derive(Debug)]
pub enum SecurityError {
RateLimitExceeded,
UntrustedOrigin,
InputTooLarge,
EmptyInput,
SuspiciousInput,
ProcessingError,
}
// WASM exports
#[no_mangle]
pub extern "C" fn process_secure_request(
config_ptr: *const u8,
config_len: usize,
request_ptr: *const u8,
request_len: usize,
response_ptr: *mut u8,
response_len: *mut usize,
) -> i32 {
// Implementation for WASM export
// This would deserialize config and request, process securely, and return response
0 // Success
}
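Putting the template to work, assuming serde_json is available for the config: deserialization runs the validating helpers (so an out-of-range memory limit is rejected before the app starts), and every request then flows through process_request.
// Sketch: bootstrap SecureApp from validated JSON config and serve one request.
fn demo() -> Result<(), String> {
    let config: AppConfig = serde_json::from_str(
        r#"{"max_memory": 33554432, "timeout_seconds": 30, "allowed_origins": ["example.com"]}"#,
    )
    .map_err(|e| e.to_string())?; // the validators above reject out-of-range values here
    let mut app = SecureApp::new(config).map_err(|e| format!("{:?}", e))?;
    let request = Request {
        origin: "app.example.com".to_string(),
        user_agent: "demo-client/1.0".to_string(),
        data: b"order=42".to_vec(),
    };
    match app.process_request(&request) {
        Ok(response) => println!("status {}", response.status),
        Err(err) => eprintln!("request blocked: {:?}", err),
    }
    Ok(())
}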
Multi-Tenant Isolation
Tenant Isolation Architecture
// Multi-tenant WASM isolation system
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
pub struct MultiTenantWASMHost {
tenants: Arc<Mutex<HashMap<String, TenantContext>>>,
global_limits: GlobalLimits,
isolation_manager: IsolationManager,
}
#[derive(Clone)]
pub struct TenantContext {
tenant_id: String,
resource_limits: TenantResourceLimits,
security_policy: TenantSecurityPolicy,
active_instances: usize,
resource_usage: ResourceUsage,
isolation_level: IsolationLevel,
}
#[derive(Clone)]
pub struct TenantResourceLimits {
max_memory_per_instance: usize,
max_instances: usize,
max_cpu_time: std::time::Duration,
max_network_connections: usize,
max_file_descriptors: usize,
storage_quota: u64,
}
#[derive(Clone)]
pub struct TenantSecurityPolicy {
allowed_imports: Vec<String>,
allowed_file_paths: Vec<String>,
allowed_network_hosts: Vec<String>,
require_signed_modules: bool,
sandbox_level: SandboxLevel,
}
#[derive(Clone, Debug)]
pub enum IsolationLevel {
Process, // Each tenant in separate process
Runtime, // Separate WASM runtime per tenant
Instance, // Shared runtime, separate instances
}
#[derive(Clone, Debug)]
pub enum SandboxLevel {
Strict, // Maximum isolation
Standard, // Default isolation
Relaxed, // Minimal isolation for trusted tenants
}
#[derive(Clone, Default)]
pub struct ResourceUsage {
memory_used: usize,
cpu_time_used: std::time::Duration,
instances_created: usize,
network_connections: usize,
files_opened: usize,
storage_used: u64,
}
pub struct GlobalLimits {
max_total_memory: usize,
max_total_instances: usize,
max_tenants: usize,
}
pub struct IsolationManager {
process_pool: ProcessPool,
runtime_pool: RuntimePool,
}
pub struct ProcessPool {
processes: HashMap<String, ProcessHandle>,
max_processes: usize,
}
#[derive(Clone)]
pub struct ProcessHandle {
pid: u32,
tenant_id: String,
created_at: std::time::SystemTime,
}
pub struct RuntimePool {
runtimes: HashMap<String, RuntimeHandle>,
max_runtimes: usize,
}
#[derive(Clone)]
pub struct RuntimeHandle {
runtime_id: String,
tenant_id: String,
instance_count: usize,
}
impl MultiTenantWASMHost {
pub fn new() -> Self {
Self {
tenants: Arc::new(Mutex::new(HashMap::new())),
global_limits: GlobalLimits {
max_total_memory: 8 * 1024 * 1024 * 1024, // 8GB
max_total_instances: 10000,
max_tenants: 1000,
},
isolation_manager: IsolationManager {
process_pool: ProcessPool {
processes: HashMap::new(),
max_processes: 100,
},
runtime_pool: RuntimePool {
runtimes: HashMap::new(),
max_runtimes: 500,
},
},
}
}
pub fn create_tenant(
&mut self,
tenant_id: String,
limits: TenantResourceLimits,
policy: TenantSecurityPolicy,
isolation_level: IsolationLevel,
) -> Result<(), TenantError> {
let mut tenants = self.tenants.lock().unwrap();
// Check global limits
if tenants.len() >= self.global_limits.max_tenants {
return Err(TenantError::GlobalLimitExceeded);
}
// Validate resource limits
self.validate_resource_limits(&limits)?;
let tenant_context = TenantContext {
tenant_id: tenant_id.clone(),
resource_limits: limits,
security_policy: policy,
active_instances: 0,
resource_usage: ResourceUsage::default(),
isolation_level,
};
tenants.insert(tenant_id, tenant_context);
Ok(())
}
pub fn create_instance(
&mut self,
tenant_id: &str,
module_bytes: &[u8],
) -> Result<InstanceHandle, TenantError> {
// Clone the Arc so the lock guard borrows the local Arc rather than `self`,
// leaving `self` free for the &mut helper calls below
let tenants_arc = Arc::clone(&self.tenants);
let mut tenants = tenants_arc.lock().unwrap();
let tenant = tenants.get_mut(tenant_id)
.ok_or(TenantError::TenantNotFound)?;
// Check tenant limits
if tenant.active_instances >= tenant.resource_limits.max_instances {
return Err(TenantError::InstanceLimitExceeded);
}
// Validate module against security policy
self.validate_module(module_bytes, &tenant.security_policy)?;
// Create isolated instance based on isolation level
let instance_handle = match tenant.isolation_level {
IsolationLevel::Process => {
self.create_process_isolated_instance(tenant_id, module_bytes)?
}
IsolationLevel::Runtime => {
self.create_runtime_isolated_instance(tenant_id, module_bytes)?
}
IsolationLevel::Instance => {
self.create_instance_isolated_instance(tenant_id, module_bytes)?
}
};
tenant.active_instances += 1;
Ok(instance_handle)
}
fn validate_resource_limits(&self, limits: &TenantResourceLimits) -> Result<(), TenantError> {
// Validate against global limits
if limits.max_memory_per_instance > 1024 * 1024 * 1024 { // 1GB
return Err(TenantError::InvalidLimits);
}
if limits.max_instances > 1000 {
return Err(TenantError::InvalidLimits);
}
Ok(())
}
fn validate_module(
&self,
module_bytes: &[u8],
policy: &TenantSecurityPolicy,
) -> Result<(), TenantError> {
// Parse module and validate against security policy
let module_info = self.parse_module_info(module_bytes)?;
// Check imports against allowed list
for import in &module_info.imports {
let import_name = format!("{}.{}", import.module, import.name);
if !policy.allowed_imports.contains(&import_name) {
return Err(TenantError::UnauthorizedImport);
}
}
// Check if module signature is required
if policy.require_signed_modules {
self.verify_module_signature(module_bytes)?;
}
Ok(())
}
fn parse_module_info(&self, _module_bytes: &[u8]) -> Result<ModuleInfo, TenantError> {
// Simplified module parsing
Ok(ModuleInfo {
imports: vec![
ImportInfo {
module: "wasi_snapshot_preview1".to_string(),
name: "fd_read".to_string(),
},
],
})
}
fn verify_module_signature(&self, _module_bytes: &[u8]) -> Result<(), TenantError> {
// Implement digital signature verification
// This would use cryptographic libraries to verify signatures
Ok(())
}
fn create_process_isolated_instance(
&mut self,
tenant_id: &str,
module_bytes: &[u8],
) -> Result<InstanceHandle, TenantError> {
// Create new process for complete isolation
let process_handle = self.spawn_isolated_process(tenant_id, module_bytes)?;
Ok(InstanceHandle {
tenant_id: tenant_id.to_string(),
instance_id: format!("proc-{}", process_handle.pid),
isolation_type: IsolationType::Process,
created_at: std::time::SystemTime::now(),
})
}
fn create_runtime_isolated_instance(
&mut self,
tenant_id: &str,
module_bytes: &[u8],
) -> Result<InstanceHandle, TenantError> {
// Create dedicated runtime instance
let runtime_handle = self.create_dedicated_runtime(tenant_id, module_bytes)?;
Ok(InstanceHandle {
tenant_id: tenant_id.to_string(),
instance_id: runtime_handle.runtime_id,
isolation_type: IsolationType::Runtime,
created_at: std::time::SystemTime::now(),
})
}
fn create_instance_isolated_instance(
&mut self,
tenant_id: &str,
module_bytes: &[u8],
) -> Result<InstanceHandle, TenantError> {
// Create instance in shared runtime with namespace isolation
let instance_id = format!("inst-{}-{}", tenant_id, uuid::Uuid::new_v4());
// This would create a WASM instance with tenant-specific limits
// and capability restrictions
Ok(InstanceHandle {
tenant_id: tenant_id.to_string(),
instance_id,
isolation_type: IsolationType::Instance,
created_at: std::time::SystemTime::now(),
})
}
fn spawn_isolated_process(
&mut self,
tenant_id: &str,
_module_bytes: &[u8],
) -> Result<ProcessHandle, TenantError> {
// Spawn new process with restricted capabilities
use std::process::{Command, Stdio};
let child = Command::new("wasm-runner")
.arg("--tenant")
.arg(tenant_id)
.arg("--restricted")
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.map_err(|_| TenantError::ProcessCreationFailed)?;
let pid = child.id();
let process_handle = ProcessHandle {
pid,
tenant_id: tenant_id.to_string(),
created_at: std::time::SystemTime::now(),
};
self.isolation_manager.process_pool.processes
.insert(pid.to_string(), process_handle.clone());
Ok(process_handle)
}
fn create_dedicated_runtime(
&mut self,
tenant_id: &str,
_module_bytes: &[u8],
) -> Result<RuntimeHandle, TenantError> {
let runtime_id = format!("runtime-{}-{}", tenant_id, uuid::Uuid::new_v4());
// This would create a dedicated WASM runtime with tenant-specific
// configuration and resource limits
let runtime_handle = RuntimeHandle {
runtime_id: runtime_id.clone(),
tenant_id: tenant_id.to_string(),
instance_count: 1,
};
self.isolation_manager.runtime_pool.runtimes
.insert(runtime_id, runtime_handle.clone());
Ok(runtime_handle)
}
}
#[derive(Debug)]
pub struct InstanceHandle {
pub tenant_id: String,
pub instance_id: String,
pub isolation_type: IsolationType,
pub created_at: std::time::SystemTime,
}
#[derive(Debug)]
pub enum IsolationType {
Process,
Runtime,
Instance,
}
#[derive(Debug)]
pub struct ModuleInfo {
pub imports: Vec<ImportInfo>,
}
#[derive(Debug)]
pub struct ImportInfo {
pub module: String,
pub name: String,
}
#[derive(Debug)]
pub enum TenantError {
TenantNotFound,
GlobalLimitExceeded,
InstanceLimitExceeded,
InvalidLimits,
UnauthorizedImport,
ProcessCreationFailed,
RuntimeCreationFailed,
ModuleValidationFailed,
}
// UUID generation (simplified)
mod uuid {
pub struct Uuid;
impl Uuid {
pub fn new_v4() -> String {
format!("{:x}", rand::random::<u64>())
}
}
}
mod rand {
pub fn random<T>() -> T
where
T: Default,
{
T::default()
}
}
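End to end, onboarding a tenant and creating a runtime-isolated instance looks roughly like this; the module bytes, limits, and import allow-list are placeholders.
// Sketch: register a tenant, then create an isolated instance under its policy.
fn onboard_tenant(host: &mut MultiTenantWASMHost, wasm: &[u8]) -> Result<InstanceHandle, TenantError> {
    host.create_tenant(
        "tenant-a".to_string(),
        TenantResourceLimits {
            max_memory_per_instance: 64 * 1024 * 1024,
            max_instances: 10,
            max_cpu_time: std::time::Duration::from_secs(10),
            max_network_connections: 4,
            max_file_descriptors: 16,
            storage_quota: 100 * 1024 * 1024,
        },
        TenantSecurityPolicy {
            allowed_imports: vec!["wasi_snapshot_preview1.fd_read".to_string()],
            allowed_file_paths: vec!["/data/tenant-a".to_string()],
            allowed_network_hosts: vec![],
            require_signed_modules: true,
            sandbox_level: SandboxLevel::Strict,
        },
        IsolationLevel::Runtime,
    )?;
    // Module validation runs against the tenant's policy before anything is instantiated
    host.create_instance("tenant-a", wasm)
}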
Security Monitoring and Auditing
Real-time Security Monitoring
// Security monitoring and alerting system
use serde::{Deserialize, Serialize};
use std::collections::{HashMap, VecDeque};
use std::sync::{Arc, Mutex};
use std::time::{Duration, SystemTime};
pub struct SecurityMonitor {
alerts: Arc<Mutex<VecDeque<SecurityAlert>>>,
metrics: Arc<Mutex<SecurityMetrics>>,
rules: Vec<AlertRule>,
anomaly_detector: AnomalyDetector,
}
#[derive(Debug, Clone, Serialize)]
pub struct SecurityAlert {
id: String,
timestamp: SystemTime,
severity: AlertSeverity,
alert_type: AlertType,
source: String,
description: String,
details: HashMap<String, String>,
resolved: bool,
}
#[derive(Debug, Clone, Serialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum AlertSeverity {
Info,
Warning,
Critical,
}
#[derive(Debug, Clone, Serialize)]
pub enum AlertType {
MemoryViolation,
UnauthorizedAccess,
RateLimitExceeded,
AnomalousActivity,
ModuleIntegrityFailure,
ResourceExhaustion,
SandboxEscape,
}
#[derive(Debug, Default)]
pub struct SecurityMetrics {
total_requests: u64,
blocked_requests: u64,
memory_violations: u64,
unauthorized_access_attempts: u64,
failed_authentications: u64,
anomaly_detections: u64,
last_updated: Option<SystemTime>, // Option because SystemTime has no Default impl
}
pub struct AlertRule {
name: String,
condition: Box<dyn Fn(&SecurityMetrics, &HashMap<String, f64>) -> bool + Send + Sync>,
severity: AlertSeverity,
description: String,
}
pub struct AnomalyDetector {
baseline_metrics: HashMap<String, f64>,
historical_data: VecDeque<MetricSnapshot>,
sensitivity: f64,
}
#[derive(Debug, Clone)]
pub struct MetricSnapshot {
timestamp: SystemTime,
metrics: HashMap<String, f64>,
}
impl SecurityMonitor {
pub fn new() -> Self {
let mut monitor = Self {
alerts: Arc::new(Mutex::new(VecDeque::new())),
metrics: Arc::new(Mutex::new(SecurityMetrics::default())),
rules: Vec::new(),
anomaly_detector: AnomalyDetector {
baseline_metrics: HashMap::new(),
historical_data: VecDeque::new(),
sensitivity: 2.0, // 2 standard deviations
},
};
monitor.initialize_alert_rules();
monitor
}
fn initialize_alert_rules(&mut self) {
// Memory violation rule
self.rules.push(AlertRule {
name: "memory_violation".to_string(),
condition: Box::new(|metrics, _| {
metrics.memory_violations > 0
}),
severity: AlertSeverity::Critical,
description: "WASM module exceeded memory limits".to_string(),
});
// High failure rate rule
self.rules.push(AlertRule {
name: "high_failure_rate".to_string(),
condition: Box::new(|metrics, _| {
if metrics.total_requests == 0 {
return false;
}
let failure_rate = metrics.blocked_requests as f64 / metrics.total_requests as f64;
failure_rate > 0.1 // 10% failure rate
}),
severity: AlertSeverity::Warning,
description: "High request failure rate detected".to_string(),
});
// Anomaly detection rule
self.rules.push(AlertRule {
name: "anomalous_activity".to_string(),
condition: Box::new(|metrics, current_metrics| {
current_metrics.get("anomaly_score").unwrap_or(&0.0) > &0.8
}),
severity: AlertSeverity::Warning,
description: "Anomalous activity pattern detected".to_string(),
});
// Resource exhaustion rule
self.rules.push(AlertRule {
name: "resource_exhaustion".to_string(),
condition: Box::new(|_, current_metrics| {
let memory_usage = current_metrics.get("memory_usage_percent").unwrap_or(&0.0);
let cpu_usage = current_metrics.get("cpu_usage_percent").unwrap_or(&0.0);
*memory_usage > 90.0 || *cpu_usage > 95.0
}),
severity: AlertSeverity::Critical,
description: "System resources nearly exhausted".to_string(),
});
}
pub fn record_event(&mut self, event: SecurityEvent) {
// Clone the Arc so the guard does not borrow `self` while the &mut self helpers run below
let metrics_arc = Arc::clone(&self.metrics);
let mut metrics = metrics_arc.lock().unwrap();
// Update metrics based on event
match event.event_type {
SecurityEventType::MemoryViolation => metrics.memory_violations += 1,
SecurityEventType::UnauthorizedAccess => metrics.unauthorized_access_attempts += 1,
SecurityEventType::AuthenticationFailure => metrics.failed_authentications += 1,
SecurityEventType::AnomalousActivity => metrics.anomaly_detections += 1,
_ => {}
}
metrics.total_requests += 1;
if event.blocked {
metrics.blocked_requests += 1;
}
metrics.last_updated = Some(SystemTime::now());
// Check alert rules
self.check_alert_rules(&metrics);
// Update anomaly detector
self.update_anomaly_detection(&metrics);
}
fn check_alert_rules(&mut self, metrics: &SecurityMetrics) {
let current_metrics = self.collect_current_metrics();
// Collect matching alerts first, then trigger them, so the mutable borrow
// needed by trigger_alert does not overlap with the iteration over self.rules
let mut triggered = Vec::new();
for rule in &self.rules {
if (rule.condition)(metrics, &current_metrics) {
triggered.push(SecurityAlert {
id: uuid::Uuid::new_v4().to_string(),
timestamp: SystemTime::now(),
severity: rule.severity.clone(),
alert_type: self.rule_name_to_alert_type(&rule.name),
source: "security_monitor".to_string(),
description: rule.description.clone(),
details: current_metrics.iter()
.map(|(k, v)| (k.clone(), v.to_string()))
.collect(),
resolved: false,
});
}
}
for alert in triggered {
self.trigger_alert(alert);
}
}
fn collect_current_metrics(&self) -> HashMap<String, f64> {
let mut current_metrics = HashMap::new();
// Collect system metrics
current_metrics.insert("memory_usage_percent".to_string(), self.get_memory_usage());
current_metrics.insert("cpu_usage_percent".to_string(), self.get_cpu_usage());
current_metrics.insert("active_instances".to_string(), self.get_active_instances() as f64);
// Add anomaly score
current_metrics.insert("anomaly_score".to_string(), self.calculate_anomaly_score());
current_metrics
}
fn get_memory_usage(&self) -> f64 {
// Get actual memory usage (simplified)
75.0 // Placeholder
}
fn get_cpu_usage(&self) -> f64 {
// Get actual CPU usage (simplified)
45.0 // Placeholder
}
fn get_active_instances(&self) -> u32 {
// Get number of active WASM instances (simplified)
25 // Placeholder
}
fn calculate_anomaly_score(&self) -> f64 {
// Calculate anomaly score based on current vs historical patterns
let current_metrics = HashMap::new(); // Simplified
self.anomaly_detector.detect_anomaly(&current_metrics)
}
fn update_anomaly_detection(&mut self, metrics: &SecurityMetrics) {
let snapshot = MetricSnapshot {
timestamp: SystemTime::now(),
metrics: self.collect_current_metrics(),
};
self.anomaly_detector.historical_data.push_back(snapshot);
// Keep only last 1000 snapshots
if self.anomaly_detector.historical_data.len() > 1000 {
self.anomaly_detector.historical_data.pop_front();
}
// Update baseline periodically
if self.anomaly_detector.historical_data.len() % 100 == 0 {
self.anomaly_detector.update_baseline();
}
}
fn rule_name_to_alert_type(&self, rule_name: &str) -> AlertType {
match rule_name {
"memory_violation" => AlertType::MemoryViolation,
"high_failure_rate" => AlertType::UnauthorizedAccess,
"anomalous_activity" => AlertType::AnomalousActivity,
"resource_exhaustion" => AlertType::ResourceExhaustion,
_ => AlertType::AnomalousActivity,
}
}
fn trigger_alert(&mut self, alert: SecurityAlert) {
println!("🚨 SECURITY ALERT: {:?} - {}", alert.severity, alert.description);
// Store alert
let mut alerts = self.alerts.lock().unwrap();
alerts.push_back(alert.clone());
// Keep only last 10000 alerts
if alerts.len() > 10000 {
alerts.pop_front();
}
// Send notifications based on severity
match alert.severity {
AlertSeverity::Critical => {
self.send_immediate_notification(&alert);
self.auto_remediate(&alert);
}
AlertSeverity::Warning => {
self.send_notification(&alert);
}
AlertSeverity::Info => {
// Just log
}
}
}
fn send_immediate_notification(&self, alert: &SecurityAlert) {
// Send to on-call team immediately
println!("📱 IMMEDIATE ALERT: {}", alert.description);
// Implementation would integrate with PagerDuty, Slack, etc.
}
fn send_notification(&self, alert: &SecurityAlert) {
// Send to security team
println!("📧 Security Alert: {}", alert.description);
// Implementation would send email/Slack notification
}
fn auto_remediate(&self, alert: &SecurityAlert) {
match alert.alert_type {
AlertType::MemoryViolation => {
println!("🔧 Auto-remediation: Restarting affected instances");
// Restart instances that violated memory limits
}
AlertType::SandboxEscape => {
println!("🔧 Auto-remediation: Terminating suspicious instances");
// Immediately terminate suspicious instances
}
AlertType::ResourceExhaustion => {
println!("🔧 Auto-remediation: Scaling resources");
// Trigger auto-scaling or resource cleanup
}
_ => {
// No auto-remediation for other alert types
}
}
}
pub fn get_security_dashboard(&self) -> SecurityDashboard {
let alerts = self.alerts.lock().unwrap();
let metrics = self.metrics.lock().unwrap();
let critical_alerts = alerts.iter()
.filter(|a| a.severity == AlertSeverity::Critical && !a.resolved)
.count();
let warning_alerts = alerts.iter()
.filter(|a| a.severity == AlertSeverity::Warning && !a.resolved)
.count();
SecurityDashboard {
critical_alerts,
warning_alerts,
total_requests: metrics.total_requests,
blocked_requests: metrics.blocked_requests,
success_rate: if metrics.total_requests > 0 {
1.0 - (metrics.blocked_requests as f64 / metrics.total_requests as f64)
} else {
1.0
},
recent_alerts: alerts.iter().rev().take(10).cloned().collect(),
system_health: self.calculate_system_health(),
}
}
fn calculate_system_health(&self) -> SystemHealth {
let current_metrics = self.collect_current_metrics();
let memory_usage = current_metrics.get("memory_usage_percent").unwrap_or(&0.0);
let cpu_usage = current_metrics.get("cpu_usage_percent").unwrap_or(&0.0);
let anomaly_score = current_metrics.get("anomaly_score").unwrap_or(&0.0);
if *memory_usage > 90.0 || *cpu_usage > 95.0 || *anomaly_score > 0.9 {
SystemHealth::Critical
} else if *memory_usage > 80.0 || *cpu_usage > 85.0 || *anomaly_score > 0.7 {
SystemHealth::Warning
} else {
SystemHealth::Healthy
}
}
}
impl AnomalyDetector {
fn detect_anomaly(&self, current_metrics: &HashMap<String, f64>) -> f64 {
if self.baseline_metrics.is_empty() {
return 0.0;
}
let mut anomaly_score = 0.0;
let mut metric_count = 0;
        for (metric_name, &current_value) in current_metrics {
if let Some(&baseline_value) = self.baseline_metrics.get(metric_name) {
let variance = self.calculate_variance(metric_name);
if variance > 0.0 {
let z_score = (current_value - baseline_value).abs() / variance.sqrt();
anomaly_score += (z_score / self.sensitivity).min(1.0);
metric_count += 1;
}
}
}
if metric_count > 0 {
anomaly_score / metric_count as f64
} else {
0.0
}
}
fn calculate_variance(&self, metric_name: &str) -> f64 {
let values: Vec<f64> = self.historical_data.iter()
.filter_map(|snapshot| snapshot.metrics.get(metric_name))
.copied()
.collect();
if values.len() < 2 {
return 1.0; // Default variance
}
let mean = values.iter().sum::<f64>() / values.len() as f64;
let variance = values.iter()
.map(|x| (x - mean).powi(2))
.sum::<f64>() / values.len() as f64;
variance.max(0.01) // Minimum variance to avoid division by zero
}
fn update_baseline(&mut self) {
if self.historical_data.len() < 10 {
return;
}
// Calculate new baseline from recent data
let recent_data: Vec<&MetricSnapshot> = self.historical_data.iter()
.rev()
.take(100)
.collect();
let mut new_baseline = HashMap::new();
// Get all metric names
let metric_names: std::collections::HashSet<String> = recent_data.iter()
.flat_map(|snapshot| snapshot.metrics.keys())
.cloned()
.collect();
for metric_name in metric_names {
let values: Vec<f64> = recent_data.iter()
.filter_map(|snapshot| snapshot.metrics.get(&metric_name))
.copied()
.collect();
if !values.is_empty() {
let mean = values.iter().sum::<f64>() / values.len() as f64;
new_baseline.insert(metric_name, mean);
}
}
self.baseline_metrics = new_baseline;
}
}
#[derive(Debug, Serialize)]
pub struct SecurityDashboard {
pub critical_alerts: usize,
pub warning_alerts: usize,
pub total_requests: u64,
pub blocked_requests: u64,
pub success_rate: f64,
pub recent_alerts: Vec<SecurityAlert>,
pub system_health: SystemHealth,
}
#[derive(Debug, Serialize)]
pub enum SystemHealth {
Healthy,
Warning,
Critical,
}
#[derive(Debug)]
pub struct SecurityEvent {
pub event_type: SecurityEventType,
pub blocked: bool,
}
#[derive(Debug)]
pub enum SecurityEventType {
MemoryViolation,
UnauthorizedAccess,
AuthenticationFailure,
AnomalousActivity,
}
// UUID generation stub (simplified). A real deployment would use the uuid crate;
// this placeholder derives a pseudo-unique value from the current time.
mod uuid {
    use std::time::{SystemTime, UNIX_EPOCH};
    pub struct Uuid(u128);
    impl Uuid {
        pub fn new_v4() -> Self {
            let nanos = SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .map(|d| d.as_nanos())
                .unwrap_or(0);
            Self(nanos)
        }
        pub fn to_string(&self) -> String {
            format!("uuid-{:x}", self.0)
        }
    }
}
// Stand-in for the rand crate (simplified): always returns the type's default
// value, so it must not be relied on for real randomness.
mod rand {
    pub fn random<T>() -> T
    where
        T: Default,
    {
        T::default()
    }
}
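Putting the monitor to work from the host side takes only a few lines. Here is a minimal usage sketch; it assumes a SecurityMonitor::new() constructor is available and feeds one blocked event through the pipeline before rendering the dashboard:
// Minimal usage sketch (assumes a SecurityMonitor::new() constructor exists).
fn main() {
    let mut monitor = SecurityMonitor::new();

    // A host function reported and blocked an unauthorized capability access.
    monitor.record_event(SecurityEvent {
        event_type: SecurityEventType::UnauthorizedAccess,
        blocked: true,
    });

    // Surface the aggregated view to operators, e.g. from a dashboard endpoint.
    let dashboard = monitor.get_security_dashboard();
    println!(
        "health: {:?}, critical: {}, warnings: {}, success rate: {:.1}%",
        dashboard.system_health,
        dashboard.critical_alerts,
        dashboard.warning_alerts,
        dashboard.success_rate * 100.0
    );
}
In practice the record_event call sits on the hot path of every host-function invocation, so the metrics lock should be held as briefly as the snapshot above shows.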
Compliance and Regulatory Considerations
Compliance Framework Implementation
# Compliance configuration for WASM deployments
apiVersion: v1
kind: ConfigMap
metadata:
name: compliance-config
labels:
compliance.framework: "sox-gdpr-hipaa"
data:
compliance.yaml: |
frameworks:
- name: "SOX"
requirements:
- audit_logging: true
- data_retention: "7_years"
- access_controls: "strict"
- change_management: "required"
- segregation_of_duties: true
- name: "GDPR"
requirements:
- data_encryption: "required"
- data_minimization: true
- consent_tracking: true
- data_portability: true
- right_to_erasure: true
- privacy_by_design: true
- name: "HIPAA"
requirements:
- phi_encryption: "required"
- access_logging: "detailed"
- minimum_necessary: true
- business_associate: "required"
- breach_notification: true
audit_settings:
log_level: "detailed"
retention_period: "7_years"
encryption: "aes256"
immutable_storage: true
real_time_monitoring: true
access_controls:
mfa_required: true
session_timeout: 1800 # 30 minutes
password_policy: "complex"
role_based_access: true
privilege_escalation_audit: true
data_protection:
encryption_at_rest: "aes256"
encryption_in_transit: "tls13"
key_rotation: "quarterly"
data_classification: true
data_loss_prevention: true
---
# Compliance monitoring deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: compliance-monitor
labels:
component: compliance
spec:
replicas: 2
selector:
matchLabels:
app: compliance-monitor
template:
metadata:
labels:
app: compliance-monitor
spec:
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 2000
containers:
- name: monitor
image: compliance-monitor:latest
env:
- name: AUDIT_LOG_LEVEL
value: "DETAILED"
- name: RETENTION_PERIOD
value: "7_YEARS"
- name: COMPLIANCE_FRAMEWORKS
value: "SOX,GDPR,HIPAA"
volumeMounts:
- name: audit-logs
mountPath: /var/log/audit
- name: compliance-config
mountPath: /etc/compliance
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL
resources:
requests:
memory: "256Mi"
cpu: "200m"
limits:
memory: "512Mi"
cpu: "500m"
volumes:
- name: audit-logs
persistentVolumeClaim:
claimName: audit-logs-pvc
- name: compliance-config
configMap:
name: compliance-config
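To turn the audit_settings above into something enforceable inside a WASM host, every capability grant or denial can be written as an append-only, structured audit record. The sketch below is illustrative only: the AuditRecord shape, the log path, and the use of serde/serde_json are assumptions, not part of any standard WASI or runtime API.
use serde::Serialize;
use std::fs::OpenOptions;
use std::io::Write;
use std::time::{SystemTime, UNIX_EPOCH};

// Illustrative audit record aligned with the compliance.yaml settings above.
#[derive(Serialize)]
struct AuditRecord<'a> {
    timestamp_unix_ms: u128,
    tenant_id: &'a str,
    module_hash: &'a str,
    capability: &'a str,       // e.g. "path_open:/data/reports"
    decision: &'a str,         // "granted" or "denied"
    frameworks: &'a [&'a str], // e.g. ["SOX", "GDPR", "HIPAA"]
}

// Append-only JSON-lines sink; production deployments would ship these records
// to immutable, encrypted storage that satisfies the 7-year retention policy.
fn write_audit_record(record: &AuditRecord) -> std::io::Result<()> {
    let line = serde_json::to_string(record)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))?;
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("/var/log/audit/wasm-audit.jsonl")?; // path is an assumption
    writeln!(file, "{}", line)
}

fn main() -> std::io::Result<()> {
    write_audit_record(&AuditRecord {
        timestamp_unix_ms: SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_millis())
            .unwrap_or(0),
        tenant_id: "tenant-a",
        module_hash: "sha256:...",
        capability: "path_open:/data/reports",
        decision: "denied",
        frameworks: &["SOX", "GDPR", "HIPAA"],
    })
}
The important design point is that the record is written before the denial is returned to the guest, so a misbehaving module cannot complete an access attempt without leaving an audit trail.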
Conclusion
WebAssembly's security model represents a fundamental advancement in secure computing. By combining capability-based security, memory safety, and strong sandboxing, WASM provides unprecedented security guarantees for cloud-native applications.
Key Security Benefits
- ✅ Memory Safety by Design - Prevents buffer overflows and memory corruption
- ✅ Capability-Based Access - Explicit permission model with no ambient authority
- ✅ Strong Sandboxing - Isolated execution environment with controlled system access
- ✅ Multi-Tenant Isolation - Secure isolation between different tenants and workloads
- ✅ Vulnerability Mitigation - Reduced attack surface and comprehensive security monitoring
Production Security Checklist
1. Implement Capability-Based Security
   - Define minimal required capabilities
   - Use explicit permission grants
   - Regular capability audits
2. Enable Comprehensive Monitoring
   - Real-time security event detection
   - Anomaly detection and alerting
   - Audit logging for compliance
3. Harden Runtime Configuration
   - Set resource limits and timeouts
   - Enable security features
   - Restrict dangerous operations
4. Validate and Scan Modules (see the allowlist sketch after this checklist)
   - Implement vulnerability scanning
   - Verify digital signatures
   - Check against security policies
5. Plan for Multi-Tenant Isolation
   - Choose appropriate isolation level
   - Implement tenant resource limits
   - Monitor cross-tenant access
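For item 4, the simplest enforceable policy is to refuse to instantiate any module whose digest is not on a pre-approved allowlist. The sketch below assumes the sha2 crate and a pre-distributed set of known-good digests; verifying the signature on the allowlist itself (for example with ed25519) is deliberately left out for brevity.
use sha2::{Digest, Sha256};
use std::collections::HashSet;
use std::fs;

// Returns true only if the file is a WASM binary whose SHA-256 digest appears
// on the approved allowlist. Call this before handing the bytes to the runtime.
fn module_is_approved(wasm_path: &str, allowlist: &HashSet<String>) -> std::io::Result<bool> {
    let bytes = fs::read(wasm_path)?;

    // Reject anything that is not even a WASM binary (magic bytes "\0asm").
    if !bytes.starts_with(b"\0asm") {
        return Ok(false);
    }

    let digest = Sha256::digest(&bytes);
    Ok(allowlist.contains(&format!("sha256:{:x}", digest)))
}
Gating instantiation on this check pairs naturally with item 5: unapproved modules never get an instance, so they never consume a tenant's resource quota.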
Security Best Practices
- Never trust user input - Validate and sanitize all data
- Implement defense in depth - Multiple security layers
- Monitor continuously - Real-time security monitoring
- Plan for incidents - Incident response procedures
- Regular security audits - Periodic security assessments
WebAssembly's security model enables new architectures that were previously impossible, providing both security and performance for modern cloud-native applications.
Resources
- WebAssembly Security Model
- WASI Security Design
- OWASP WASM Security Guide
- NIST Container Security Guide
Ready to explore multi-language WASM development? Check out our next article on building WASM modules with different programming languages! 🛡️