Mirror of https://github.com/trailofbits/algo.git (synced 2025-09-03 02:23:39 +02:00)
* Fix VPN routing by adding output interface to NAT rules

The NAT rules were missing the output interface specification (-o eth0), which caused routing failures on multi-homed systems (servers with multiple network interfaces). Without specifying the output interface, packets might not be NAT'd correctly.

Changes:
- Added -o {{ ansible_default_ipv4['interface'] }} to all NAT rules
- Updated both IPv4 and IPv6 templates
- Updated tests to verify output interface is present
- Added ansible_default_ipv4/ipv6 to test fixtures

This fixes the issue where VPN clients could connect but not route traffic to the internet on servers with multiple network interfaces (like DigitalOcean droplets with private networking enabled).

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix VPN routing by adding output interface to NAT rules

On multi-homed systems (servers with multiple network interfaces or multiple IPs on one interface), MASQUERADE rules need to specify which interface to use for NAT. Without the output interface specification, packets may not be routed correctly.

This fix adds the output interface to all NAT rules:

    -A POSTROUTING -s [vpn_subnet] -o eth0 -j MASQUERADE

Changes:
- Modified roles/common/templates/rules.v4.j2 to include output interface
- Modified roles/common/templates/rules.v6.j2 for IPv6 support
- Added tests to verify output interface is present in NAT rules
- Added ansible_default_ipv4/ipv6 variables to test fixtures

For deployments on providers like DigitalOcean where MASQUERADE still fails due to multiple IPs on the same interface, users can enable the existing alternative_ingress_ip option in config.cfg to use explicit SNAT.

Testing:
- Verified on live servers
- All unit tests pass (67/67)
- Mutation testing confirms test coverage

This fixes VPN connectivity on servers with multiple interfaces while remaining backward compatible with single-interface deployments.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix dnscrypt-proxy not listening on VPN service IPs

Problem: dnscrypt-proxy on Ubuntu uses systemd socket activation by default, which overrides the configured listen_addresses in dnscrypt-proxy.toml. The socket only listens on 127.0.2.1:53, preventing VPN clients from resolving DNS queries through the configured service IPs.

Solution: Disable and mask the dnscrypt-proxy.socket unit to allow dnscrypt-proxy to bind directly to the VPN service IPs specified in its configuration file.

This fixes DNS resolution for VPN clients on Ubuntu 20.04+ systems.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Apply Python linting and formatting

- Run ruff check --fix to fix linting issues
- Run ruff format to ensure consistent formatting
- All tests still pass after formatting changes

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Restrict DNS access to VPN clients only

Security fix: The firewall rule for DNS was accepting traffic from any source (0.0.0.0/0) to the local DNS resolver. While the service IP is on the loopback interface (which normally isn't routable externally), this could be a security risk if misconfigured.

Changed firewall rules to only accept DNS traffic from VPN subnets:
- INPUT rule now includes -s {{ subnets }} to restrict source IPs
- Applied to both IPv4 and IPv6 rules
- Added test to verify DNS is properly restricted

This ensures the DNS resolver is only accessible to connected VPN clients, not the entire internet.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix dnscrypt-proxy service startup with masked socket

Problem: dnscrypt-proxy.service has a dependency on dnscrypt-proxy.socket through the TriggeredBy directive. When we mask the socket before starting the service, systemd fails with "Unit dnscrypt-proxy.socket is masked."

Solution:
1. Override the service to remove socket dependency (TriggeredBy=)
2. Reload systemd daemon immediately after override changes
3. Start the service (which now doesn't require the socket)
4. Only then disable and mask the socket

This ensures dnscrypt-proxy can bind directly to the configured IPs without socket activation, while preventing the socket from being re-enabled by package updates.

Changes:
- Added TriggeredBy= override to remove socket dependency
- Added explicit daemon reload after service overrides
- Moved socket masking to after service start in main.yml
- Fixed YAML formatting issues

Testing: Deployment now succeeds with dnscrypt-proxy binding to VPN IPs

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix dnscrypt-proxy by not masking the socket

Problem: Masking dnscrypt-proxy.socket prevents the service from starting because the service has a Requires=dnscrypt-proxy.socket dependency.

Solution: Simply stop and disable the socket without masking it. This prevents socket activation while allowing the service to start and bind directly to the configured IPs.

Changes:
- Removed socket masking (just disable it)
- Moved socket disabling before service start
- Removed invalid systemd directives from override

Testing: Confirmed dnscrypt-proxy now listens on VPN service IPs

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Use systemd socket activation properly for dnscrypt-proxy

Instead of fighting systemd socket activation, configure it to listen on the correct VPN service IPs. This is more systemd-native and reliable.

Changes:
- Create socket override to listen on VPN IPs instead of localhost
- Clear default listeners and add VPN service IPs
- Use empty listen_addresses in dnscrypt-proxy.toml for socket activation
- Keep socket enabled and let systemd manage the activation
- Add handler for restarting socket when config changes

Benefits:
- Works WITH systemd instead of against it
- Survives package updates better
- No dependency conflicts
- More reliable service management

This approach is cleaner than disabling socket activation entirely and ensures dnscrypt-proxy is accessible to VPN clients on the correct IPs.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Document debugging lessons learned in CLAUDE.md

Added comprehensive debugging guidance based on our troubleshooting session:
- VPN connectivity troubleshooting order (DNS first!)
- systemd socket activation best practices
- Common deployment failures and solutions
- Time wasters to avoid (lessons learned the hard way)
- Multi-homed system considerations
- Testing notes for DigitalOcean

These additions will help future debugging sessions avoid the same rabbit holes and focus on the most likely issues first.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix DNS resolution for VPN clients by enabling route_localnet

The issue was that dnscrypt-proxy listens on a special loopback IP (randomly generated in the 172.16.0.0/12 range) which wasn't accessible from VPN clients.

This fix:
1. Enables the net.ipv4.conf.all.route_localnet sysctl to allow routing to loopback IPs from other interfaces
2. Ensures the dnscrypt-proxy socket is properly restarted when its configuration changes
3. Adds proper handler flushing after socket configuration updates

This allows VPN clients to reach the DNS resolver at the local_service_ip address configured on the loopback interface.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Improve security by using interface-specific route_localnet

Instead of enabling route_localnet globally (net.ipv4.conf.all.route_localnet), this change enables it only on the specific interfaces that need it:
- WireGuard interface (wg0) for WireGuard VPN clients
- Main network interface (eth0/etc.) for IPsec VPN clients

This minimizes the security impact by restricting loopback routing to only the VPN interfaces, preventing other interfaces from being able to route to loopback addresses.

The interface-specific approach provides the same functionality (allowing VPN clients to reach the DNS resolver on the local_service_ip) while reducing the potential attack surface.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Revert to global route_localnet to fix deployment failure

The interface-specific route_localnet approach failed because:
- The WireGuard interface (wg0) doesn't exist until the service starts
- We were trying to set the sysctl before the interface was created
- This caused deployment failures with "No such file or directory"

Reverting to the global setting (net.ipv4.conf.all.route_localnet=1) because:
- It always works regardless of interface creation timing
- VPN users are trusted (they have our credentials)
- Firewall rules still restrict access to only port 53
- The security benefit of interface-specific settings is minimal
- The added complexity isn't worth the marginal security improvement

This ensures reliable deployments while maintaining the DNS resolution fix.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix dnscrypt-proxy socket restart and remove problematic BPF hardening

Two important fixes:

1. Fix dnscrypt-proxy socket not restarting with new configuration
   - The socket wasn't properly restarting when its override config changed
   - This caused DNS to listen on the wrong IP (127.0.2.1 instead of local_service_ip)
   - Now directly restart the socket when configuration changes
   - Add explicit daemon reload before restarting

2. Remove BPF JIT hardening that causes deployment errors
   - The net.core.bpf_jit_enable sysctl isn't available on all kernels
   - It was causing "Invalid argument" errors during deployment
   - This was optional security hardening with minimal benefit
   - Removing it eliminates deployment errors for most users

These fixes ensure reliable DNS resolution for VPN clients and clean deployments without error messages.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Update CLAUDE.md with comprehensive debugging lessons learned

Based on our extensive debugging session, this update adds critical documentation:

## DNS Architecture and Troubleshooting
- Explained the local_service_ip design and why it requires route_localnet
- Added detailed DNS debugging methodology with exact steps in order
- Documented systemd socket activation complexities and common mistakes
- Added specific commands to verify DNS is working correctly

## Architectural Decisions
- Added new section explaining trade-offs in Algo's design choices
- Documented why local_service_ip uses loopback instead of alternatives
- Explained iptables-legacy vs iptables-nft backend choice

## Enhanced Debugging Guidance
- Expanded troubleshooting with exact commands and expected outputs
- Added warnings about configuration changes that need restarts
- Documented socket activation override requirements in detail
- Added common pitfalls like interface-specific sysctls

## Time Wasters Section
- Added new lessons learned from this debugging session
- Interface-specific route_localnet (fails before interface exists)
- DNAT for loopback addresses (doesn't work)
- BPF JIT hardening (causes errors on many kernels)

This documentation will help future maintainers avoid the same debugging rabbit holes and understand why things are designed the way they are.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
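The firewall changes described above (output interface on MASQUERADE, VPN-only DNS access) lend themselves to the same kind of assertion-style checks as the test file below. The following is an illustrative sketch only: the rendered rule strings, subnet, and resolver IP are made-up placeholders, not output from Algo's actual rules.v4.j2 template.

```python
import re

# Hypothetical rendered snippet of rules.v4 (placeholder values, not real template output).
RENDERED_RULES_V4 = """
*nat
-A POSTROUTING -s 10.49.0.0/16 -o eth0 -j MASQUERADE
COMMIT
*filter
-A INPUT -s 10.49.0.0/16 -d 172.24.117.23 -p udp --dport 53 -j ACCEPT
COMMIT
"""


def check_nat_output_interface(rules):
    """Every MASQUERADE rule should pin an output interface (-o <iface>)."""
    for line in rules.splitlines():
        if "MASQUERADE" in line:
            assert re.search(r"-o\s+\S+", line), f"NAT rule missing output interface: {line}"


def check_dns_restricted_to_vpn(rules, vpn_subnet):
    """DNS ACCEPT rules should carry a VPN source restriction (-s <subnet>)."""
    for line in rules.splitlines():
        if "--dport 53" in line and "ACCEPT" in line:
            assert f"-s {vpn_subnet}" in line, f"DNS rule open to any source: {line}"


check_nat_output_interface(RENDERED_RULES_V4)
check_dns_restricted_to_vpn(RENDERED_RULES_V4, "10.49.0.0/16")
print("✓ firewall rule sketch checks passed")
```

The DNS-side fixes (socket activation bound to the VPN service IP, route_localnet enabled) can be spot-checked on a deployed server. Again a hedged sketch, assuming the deployment's local_service_ip is known (the address below is invented) and that `ss` is available on the host:

```python
import subprocess

LOCAL_SERVICE_IP = "172.24.117.23"  # placeholder; Algo generates this per deployment


def route_localnet_enabled():
    """route_localnet must be 1 for VPN clients to reach a loopback-bound resolver."""
    with open("/proc/sys/net/ipv4/conf/all/route_localnet") as f:
        return f.read().strip() == "1"


def resolver_listening(ip=LOCAL_SERVICE_IP):
    """Check via `ss -uln` that something is bound to local_service_ip:53."""
    out = subprocess.run(["ss", "-uln"], capture_output=True, text=True, check=True).stdout
    return f"{ip}:53" in out


if __name__ == "__main__":
    print("route_localnet:", route_localnet_enabled())
    print("resolver listening:", resolver_listening())
```

On a healthy deployment both checks return True; if route_localnet is 0, VPN clients can reach the server but their queries to the loopback-hosted resolver are dropped, which matches the failure mode described in the commits above.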
204 lines
6.1 KiB
Python
#!/usr/bin/env python3
"""
Test user management functionality without deployment

Based on issues #14745, #14746, #14738, #14726
"""

import os
import re
import sys
import tempfile

import yaml


def test_user_list_parsing():
    """Test that user lists in config.cfg are parsed correctly"""
    test_config = """
users:
  - alice
  - bob
  - charlie
  - user-with-dash
  - user_with_underscore
"""

    config = yaml.safe_load(test_config)
    users = config.get("users", [])

    assert len(users) == 5, f"Expected 5 users, got {len(users)}"
    assert "alice" in users, "Missing user 'alice'"
    assert "user-with-dash" in users, "Dash in username not handled"
    assert "user_with_underscore" in users, "Underscore in username not handled"

    # Test that usernames are valid
    username_pattern = re.compile(r"^[a-zA-Z0-9_-]+$")
    for user in users:
        assert username_pattern.match(user), f"Invalid username format: {user}"

    print("✓ User list parsing test passed")


def test_server_selection_format():
    """Test server selection string parsing (issue #14727)"""
    # Test various server display formats
    test_cases = [
        {"display": "1. 192.168.1.100 (algo-server)", "expected_ip": "192.168.1.100", "expected_name": "algo-server"},
        {"display": "2. 10.0.0.1 (production-vpn)", "expected_ip": "10.0.0.1", "expected_name": "production-vpn"},
        {
            "display": "3. vpn.example.com (example-server)",
            "expected_ip": "vpn.example.com",
            "expected_name": "example-server",
        },
    ]

    # Pattern to extract IP and name from display string
    pattern = re.compile(r"^\d+\.\s+([^\s]+)\s+\(([^)]+)\)$")

    for case in test_cases:
        match = pattern.match(case["display"])
        assert match, f"Failed to parse: {case['display']}"

        ip_or_host = match.group(1)
        name = match.group(2)

        assert ip_or_host == case["expected_ip"], f"Wrong IP extracted: {ip_or_host}"
        assert name == case["expected_name"], f"Wrong name extracted: {name}"

    print("✓ Server selection format test passed")


def test_ssh_key_preservation():
    """Test that SSH keys aren't regenerated unnecessarily"""
    with tempfile.TemporaryDirectory() as tmpdir:
        ssh_key_path = os.path.join(tmpdir, "test_key")

        # Simulate existing SSH key
        with open(ssh_key_path, "w") as f:
            f.write("EXISTING_SSH_KEY_CONTENT")
        with open(f"{ssh_key_path}.pub", "w") as f:
            f.write("ssh-rsa EXISTING_PUBLIC_KEY")

        # Record original content
        with open(ssh_key_path) as f:
            original_content = f.read()

        # Test that key is preserved when it already exists
        assert os.path.exists(ssh_key_path), "SSH key should exist"
        assert os.path.exists(f"{ssh_key_path}.pub"), "SSH public key should exist"

        # Verify content hasn't changed
        with open(ssh_key_path) as f:
            current_content = f.read()
        assert current_content == original_content, "SSH key was modified"

    print("✓ SSH key preservation test passed")


def test_ca_password_handling():
    """Test CA password validation and handling"""
    # Test password requirements
    valid_passwords = ["SecurePassword123!", "Algo-VPN-2024", "Complex#Pass@Word999"]

    invalid_passwords = [
        "",  # Empty
        "short",  # Too short
        "password with spaces",  # Spaces not allowed in some contexts
    ]

    # Basic password validation
    for pwd in valid_passwords:
        assert len(pwd) >= 12, f"Password too short: {pwd}"
        assert " " not in pwd, f"Password contains spaces: {pwd}"

    for pwd in invalid_passwords:
        issues = []
        if len(pwd) < 12:
            issues.append("too short")
        if " " in pwd:
            issues.append("contains spaces")
        if not pwd:
            issues.append("empty")
        assert issues, f"Expected validation issues for: {pwd}"

    print("✓ CA password handling test passed")


def test_user_config_generation():
    """Test that user configs would be generated correctly"""
    users = ["alice", "bob", "charlie"]
    server_name = "test-server"

    # Simulate config file structure
    for user in users:
        # Test WireGuard config path
        wg_path = f"configs/{server_name}/wireguard/{user}.conf"
        assert user in wg_path, "Username not in WireGuard config path"

        # Test IPsec config path
        ipsec_path = f"configs/{server_name}/ipsec/{user}.p12"
        assert user in ipsec_path, "Username not in IPsec config path"

        # Test SSH tunnel config path
        ssh_path = f"configs/{server_name}/ssh-tunnel/{user}.pem"
        assert user in ssh_path, "Username not in SSH config path"

    print("✓ User config generation test passed")


def test_duplicate_user_handling():
    """Test handling of duplicate usernames"""
    test_config = """
users:
  - alice
  - bob
  - alice
  - charlie
"""

    config = yaml.safe_load(test_config)
    users = config.get("users", [])

    # Check for duplicates
    unique_users = list(set(users))
    assert len(unique_users) < len(users), "Duplicates should be detected"

    # Test that duplicates can be identified
    seen = set()
    duplicates = []
    for user in users:
        if user in seen:
            duplicates.append(user)
        seen.add(user)

    assert "alice" in duplicates, "Duplicate 'alice' not detected"

    print("✓ Duplicate user handling test passed")


if __name__ == "__main__":
    tests = [
        test_user_list_parsing,
        test_server_selection_format,
        test_ssh_key_preservation,
        test_ca_password_handling,
        test_user_config_generation,
        test_duplicate_user_handling,
    ]

    failed = 0
    for test in tests:
        try:
            test()
        except AssertionError as e:
            print(f"✗ {test.__name__} failed: {e}")
            failed += 1
        except Exception as e:
            print(f"✗ {test.__name__} error: {e}")
            failed += 1

    if failed > 0:
        print(f"\n{failed} tests failed")
        sys.exit(1)
    else:
        print(f"\nAll {len(tests)} tests passed!")