# Replication

Implement multi-region DNS replication strategies for global availability.

## Replication Models

### Hub-and-Spoke

One central primary, multiple regional secondaries:
```mermaid
graph TB
    primary["Primary (us-east-1)"]
    sec1["Secondary<br/>(us-west)"]
    sec2["Secondary<br/>(eu-west)"]
    sec3["Secondary<br/>(ap-south)"]

    primary --> sec1
    primary --> sec2
    primary --> sec3

    style primary fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
    style sec1 fill:#e1f5ff,stroke:#01579b,stroke-width:2px
    style sec2 fill:#e1f5ff,stroke:#01579b,stroke-width:2px
    style sec3 fill:#e1f5ff,stroke:#01579b,stroke-width:2px
```
**Pros**: Simple, clear source of truth

**Cons**: Single point of failure, latency for distant regions
### Multi-Primary

Multiple primaries in different regions:
```mermaid
graph TB
    primaryA["Primary A<br/>(us-east)"]
    primaryB["Primary B<br/>(eu-west)"]
    sec1["Secondary<br/>(us-west)"]
    sec2["Secondary<br/>(ap-south)"]

    primaryA <-->|Sync| primaryB
    primaryA --> sec1
    primaryB --> sec2

    style primaryA fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
    style primaryB fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
    style sec1 fill:#e1f5ff,stroke:#01579b,stroke-width:2px
    style sec2 fill:#e1f5ff,stroke:#01579b,stroke-width:2px
```
**Pros**: Regional updates, better latency

**Cons**: Complex synchronization, conflict resolution
### Hierarchical

Tiered replication structure:
```mermaid
graph TB
    global["Global Primary"]
    reg1["Regional<br/>Primary"]
    reg2["Regional<br/>Primary"]
    reg3["Regional<br/>Primary"]
    local1["Local<br/>Secondary"]
    local2["Local<br/>Secondary"]
    local3["Local<br/>Secondary"]

    global --> reg1
    global --> reg2
    global --> reg3
    reg1 --> local1
    reg2 --> local2
    reg3 --> local3

    style global fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
    style reg1 fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
    style reg2 fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
    style reg3 fill:#e8f5e9,stroke:#1b5e20,stroke-width:2px
    style local1 fill:#e1f5ff,stroke:#01579b,stroke-width:2px
    style local2 fill:#e1f5ff,stroke:#01579b,stroke-width:2px
    style local3 fill:#e1f5ff,stroke:#01579b,stroke-width:2px
```
**Pros**: Scales well, reduces global load

**Cons**: More complex, longer propagation time
## Configuration Examples

### Hub-and-Spoke Setup
```yaml
# Central Primary (us-east-1)
apiVersion: bindy.firestoned.io/v1alpha1
kind: Bind9Instance
metadata:
  name: global-primary
  labels:
    dns-role: primary
    region: us-east-1
spec:
  replicas: 3
  config:
    allowTransfer:
      - "10.0.0.0/8"  # Allow all regional networks
---
# Regional Secondaries
apiVersion: bindy.firestoned.io/v1alpha1
kind: Bind9Instance
metadata:
  name: secondary-us-west
  labels:
    dns-role: secondary
    region: us-west-2
spec:
  replicas: 2
---
apiVersion: bindy.firestoned.io/v1alpha1
kind: Bind9Instance
metadata:
  name: secondary-eu-west
  labels:
    dns-role: secondary
    region: eu-west-1
spec:
  replicas: 2
```
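Once these manifests are applied, a label query confirms the topology. A quick check (the `bind9instances` resource name is an assumption based on the `Bind9Instance` kind):

```bash
# Verify the primary and secondaries registered with their expected roles
kubectl get bind9instances -A -l dns-role=primary
kubectl get bind9instances -A -l dns-role=secondary
```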
## Replication Latency

### Measuring Propagation Time
```bash
# Update record on primary
kubectl apply -f new-record.yaml

# Check serial on primary
PRIMARY_SERIAL=$(kubectl exec -n dns-system deployment/global-primary -- \
  dig @localhost example.com SOA +short | awk '{print $3}')

# Wait for the transfer, then check the secondary
sleep 30
SECONDARY_SERIAL=$(kubectl exec -n dns-system deployment/secondary-eu-west -- \
  dig @localhost example.com SOA +short | awk '{print $3}')

# Compare serials to see whether the change has propagated
echo "Primary: $PRIMARY_SERIAL, Secondary: $SECONDARY_SERIAL"
```
### Optimizing Propagation
- **Reduce refresh interval** - More frequent checks (verify with the SOA check below)
- **Enable NOTIFY** - Immediate notification of changes
- **Use IXFR** - Faster incremental transfers
- **Optimize network** - Low-latency connections between regions
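The refresh interval and related timers live in the zone's SOA record, so you can inspect what the secondaries will honor directly on the primary:

```bash
# Show the SOA timers (serial, refresh, retry, expire, minimum) in labeled form
kubectl exec -n dns-system deployment/global-primary -- \
  dig @localhost example.com SOA +multiline +noall +answer
```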
## Automatic Zone Transfer Configuration

**New in v0.1.0**: Bindy automatically configures zone transfers between primary and secondary instances.
When you create a `DNSZone` resource, Bindy automatically:

- **Discovers secondary instances** - Finds all `Bind9Instance` resources labeled with `role=secondary` in the cluster
- **Configures zone transfers** - Adds `also-notify` and `allow-transfer` directives with secondary IP addresses
- **Tracks secondary IPs** - Stores current secondary IPs in `DNSZone.status.secondaryIps`
- **Detects IP changes** - Monitors for secondary pod IP changes (due to restarts, rescheduling, scaling)
- **Auto-updates zones** - Automatically reconfigures zones when secondary IPs change
Example:

```bash
# Check automatically configured secondary IPs
kubectl get dnszone example-com -n dns-system -o jsonpath='{.status.secondaryIps}'
# Output: ["10.244.1.5","10.244.2.8"]

# Verify zone configuration on primary
kubectl exec -n dns-system deployment/primary-dns -- \
  curl -s localhost:8080/api/zones/example.com | jq '.alsoNotify, .allowTransfer'
```
**Self-Healing**: When secondary pods are rescheduled and get new IPs:
- Detection happens within 5-10 minutes (next reconciliation cycle)
- Zones are automatically updated with new secondary IPs
- Zone transfers resume automatically with no manual intervention
**No manual configuration needed!** The old approach of manually configuring `allowTransfer` networks is no longer required for Kubernetes-managed instances.
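To watch the self-healing in action, delete a secondary pod and re-read the status once the reconciler has run. A sketch, assuming the `dns-role: secondary` label from the manifests above propagates to the pods:

```bash
# Force a secondary onto a new IP by deleting one of its pods
kubectl delete pod -n dns-system -l dns-role=secondary --wait=false

# After the next reconciliation cycle (5-10 minutes), the new IP appears here
kubectl get dnszone example-com -n dns-system -o jsonpath='{.status.secondaryIps}'
```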
## Conflict Resolution

When using multi-primary setups, handle conflicts:

### Prevention
- Separate zones per primary
- Use different subdomains per region
- Implement locking mechanism
### Detection
```bash
# Compare zone files between primaries
diff <(kubectl exec -n dns-system deployment/primary-us -- cat /var/lib/bind/zones/example.com.zone) \
     <(kubectl exec -n dns-system deployment/primary-eu -- cat /var/lib/bind/zones/example.com.zone)
```
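Raw file diffs can flag harmless differences (comments, record order). Comparing canonicalized zone contents over AXFR is more robust; a sketch, assuming transfers are permitted from localhost:

```bash
# Compare zone contents via AXFR, sorted so record order doesn't matter
diff <(kubectl exec -n dns-system deployment/primary-us -- \
         dig @localhost example.com AXFR +noall +answer | sort) \
     <(kubectl exec -n dns-system deployment/primary-eu -- \
         dig @localhost example.com AXFR +noall +answer | sort)
```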
## Monitoring Replication

### Replication Dashboard

Monitor:

- Serial number sync status (see the sketch after this list)
- Replication lag per region
- Transfer success/failure rate
- Zone size and growth
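Most of these signals can be derived from SOA serials alone. A small sketch that prints the serial held by each instance, using the deployment names from earlier:

```bash
# Print the SOA serial for each instance; mismatches indicate sync drift
for deploy in global-primary secondary-us-west secondary-eu-west; do
  serial=$(kubectl exec -n dns-system deployment/"$deploy" -- \
    dig @localhost example.com SOA +short | awk '{print $3}')
  echo "$deploy: serial=$serial"
done
```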
### Alerts

Set up alerts for:

- Serial number drift > threshold (a minimal probe sketch follows this list)
- Failed zone transfers
- Replication lag > SLA
- Network connectivity issues
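As a starting point before wiring these into a full metrics stack, a simple probe can exit non-zero whenever a secondary falls behind. A sketch, reusing the instance names above:

```bash
#!/usr/bin/env bash
# Exit non-zero if any secondary's SOA serial differs from the primary's;
# suitable as a cron job feeding an existing alerting pipeline.
NS=dns-system
ZONE=example.com

primary=$(kubectl exec -n "$NS" deployment/global-primary -- \
  dig @localhost "$ZONE" SOA +short | awk '{print $3}')

status=0
for deploy in secondary-us-west secondary-eu-west; do
  serial=$(kubectl exec -n "$NS" deployment/"$deploy" -- \
    dig @localhost "$ZONE" SOA +short | awk '{print $3}')
  if [ "$serial" != "$primary" ]; then
    echo "ALERT: $deploy at serial $serial, primary at $primary"
    status=1
  fi
done
exit $status
```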
## Best Practices

- **Document topology** - Clear replication map
- **Monitor lag** - Track propagation time
- **Test failover** - Regular DR drills
- **Use consistent serials** - YYYYMMDDnn format (see the sketch after this list)
- **Automate updates** - GitOps for all regions
- **Capacity planning** - Account for replication traffic
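For the YYYYMMDDnn convention, generating the date portion mechanically avoids typos and keeps serials monotonically increasing. A minimal sketch:

```bash
# Build a YYYYMMDDnn serial; bump REVISION for multiple changes in one day
REVISION=01
SERIAL="$(date -u +%Y%m%d)${REVISION}"
echo "$SERIAL"   # e.g. 2025010301
```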
## Next Steps

- **High Availability** - HA architecture
- **Zone Transfers** - Transfer configuration
- **Performance** - Optimize replication performance