# 🔥 Kubernetes Resource Usage Report 2026/01/06

## 💾 TOP 10 MEMORY CONSUMERS

| Rank | Namespace | Pod Name | Memory |
|------|-----------|----------|--------|
| 1 | jenkins | jenkins-6fc54b66d5-dg6h9 | **1361Mi** 🔥 |
| 2 | monitoring | prometheus-k8s-monitoring-kube-promet-prometheus-0 | **1011Mi** |
| 3 | longhorn-system | instance-manager-d615ed1c3a0e53d6a8ab21533bc5d628 | **600Mi** |
| 4 | longhorn-system | instance-manager-30e7dd49f715bfdb9ae030c0e2b45bbf | **599Mi** |
| 5 | longhorn-system | instance-manager-adf5b54f2ad6e50eb3987b8dc9bd1d52 | **559Mi** |
| 6 | calibre-web | calibre-web-58b6bc49fd-msmgz | **317Mi** |
| 7 | monitoring | k8s-monitoring-grafana-b4c85bb7c-b4v8x | **313Mi** |
| 8 | argocd | argocd-application-controller-0 | **210Mi** |
| 9 | longhorn-system | longhorn-manager-l4sj6 | **178Mi** |
| 10 | longhorn-system | longhorn-manager-sbqhd | **176Mi** |

---
## 🔥 TOP 10 CPU CONSUMERS

| Rank | Namespace | Pod Name | CPU |
|------|-----------|----------|-----|
| 1 | monitoring | prometheus-k8s-monitoring-kube-promet-prometheus-0 | **199m** 🔥 |
| 2 | longhorn-system | instance-manager-d615ed1c3a0e53d6a8ab21533bc5d628 | **125m** |
| 3 | longhorn-system | instance-manager-30e7dd49f715bfdb9ae030c0e2b45bbf | **83m** |
| 4 | longhorn-system | instance-manager-adf5b54f2ad6e50eb3987b8dc9bd1d52 | **65m** |
| 5 | argocd | argocd-application-controller-0 | **38m** |
| 6 | longhorn-system | longhorn-manager-sbqhd | **28m** |
| 7 | longhorn-system | longhorn-manager-mgpgj | **26m** |
| 8 | loki | promtail-qh9n9 | **26m** |
| 9 | loki | promtail-7fl7h | **25m** |
| 10 | gitea | gitea-valkey-cluster-2 | **24m** |

---
## 📊 SUMMARY BY NAMESPACE

| Namespace | Pods | Total Memory | Avg Memory per Pod |
|-----------|------|--------------|--------------------|
| jenkins | 1 | **1361Mi** 🔥 | 1361Mi |
| monitoring | 5 | **1391Mi** | 278Mi |
| longhorn-system | 32 | **2577Mi** | 81Mi |
| argocd | 7 | **371Mi** | 53Mi |
| gitea | 4 | **167Mi** | 42Mi |
| calibre-web | 1 | **317Mi** | 317Mi |
| loki | 4 | **296Mi** | 74Mi |
| default | 5 | **238Mi** | 48Mi |

---
## 🎯 KEY INSIGHTS

### Jenkins (Biggest Consumer!)

```
Pod: jenkins-6fc54b66d5-dg6h9
Memory: 1361Mi (1.3 GB!)
CPU: 3m (low)

💡 Why:
- Jenkins keeps build history in memory
- Loaded plugins
- Workspace cache
```

### Prometheus (Second Biggest)

```
Pod: prometheus-k8s-monitoring-kube-promet-prometheus-0
Memory: 1011Mi (1 GB)
CPU: 199m (highest CPU usage!)

💡 Why:
- Stores metrics in memory
- Time-series database
- Retention period
```

### Longhorn (Distributed Storage)

```
Total Memory: ~2.5GB across 32 pods
Average: 81Mi per pod

💡 Why:
- 3 instance managers (~600Mi each)
- Storage management overhead
- Replica data
```

---
## ⚠️ RECOMMENDATIONS

### 1. Jenkins Memory Optimization

**Current:** 1361Mi
**Recommended:** Set requests and limits

```yaml
# apps/jenkins/deployment.yaml
resources:
  limits:
    memory: 2Gi
    cpu: 1000m
  requests:
    memory: 1536Mi
    cpu: 500m
```
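Beyond pod-level limits, if the Jenkins controller is managed with Jenkins Configuration as Code (an assumption about this setup), the actions listed below can also be captured declaratively. A minimal, hedged sketch; the exact key names (especially the build discarder part) depend on the installed Jenkins and JCasC versions, so verify against a JCasC export of this instance before applying:

```yaml
# jenkins-casc.yaml — hedged sketch, not taken from this cluster's config;
# field names and values are assumptions to verify for your Jenkins version.
jenkins:
  numExecutors: 2                  # cap concurrent builds on the controller
  globalBuildDiscarders:
    - simpleBuildDiscarder:
        discarder:
          logRotator:
            numToKeepStr: "20"     # keep at most 20 builds per job
            daysToKeepStr: "30"    # and nothing older than 30 days
```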
**Actions:**
- Configure max build history
- Clean old workspaces
- Limit concurrent builds

---
### 2. Prometheus Memory Optimization

**Current:** 1011Mi
**Recommended:** Adjust retention

```yaml
# Reduce retention period
prometheus:
  retention: 7d          # down from 15d
  retentionSize: 10GB
```
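If this Prometheus is deployed by the kube-prometheus-stack Helm chart (the pod name `prometheus-k8s-monitoring-kube-promet-prometheus-0` suggests a release called `k8s-monitoring`, but this is an assumption), the retention settings live under `prometheusSpec` in the release values. A hedged sketch of the values override:

```yaml
# values override for the k8s-monitoring release — a sketch, not the cluster's
# actual values; verify the chart and values path before applying.
prometheus:
  prometheusSpec:
    retention: 7d
    retentionSize: 10GB
```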
---

### 3. Longhorn Optimization

**Current:** ~600Mi per instance manager
**Status:** Normal for distributed storage

No action needed - this is expected for Longhorn.

---
## 📈 MONITORING COMMANDS

### Watch top consumers:

```bash
watch -n 5 kubectl top pods --all-namespaces --sort-by=memory
```

### Check specific namespace:

```bash
kubectl top pods -n jenkins
kubectl top pods -n monitoring
kubectl top pods -n longhorn-system
```

### Check nodes:

```bash
kubectl top nodes
```

### Get detailed metrics:

```bash
# Pod metrics
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/jenkins/pods/jenkins-6fc54b66d5-dg6h9 | jq

# Node metrics
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq
```
---

## 🎯 QUICK WINS

1. ✅ **Add resource limits** to Jenkins
2. ✅ **Reduce Prometheus retention** if needed
3. ✅ **Monitor trends** in Grafana
4. ⏳ **Consider HPA** for auto-scaling (see the HPA sketch below)
5. ⏳ **Add alerts** for high memory usage (see the PrometheusRule sketch below)

---
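For quick win 4, a minimal sketch of what an HPA could look like for one of the stateless workloads. The target name, namespace, and thresholds are placeholders, not taken from this cluster, and the Jenkins controller itself is stateful and generally not a good HPA target:

```yaml
# hpa-example.yaml — hypothetical target; adjust name, namespace, and thresholds
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app              # placeholder deployment
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # scale out when avg memory exceeds 80% of requests
```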
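For quick win 5, with the Prometheus operator already running in the monitoring namespace, a memory alert can be added as a PrometheusRule. A hedged sketch: the rule name, the ~1.5 GB threshold, and the assumption that the operator picks up rules in this namespace without extra labels are all mine to verify:

```yaml
# pod-memory-alert.yaml — threshold and label selectors are assumptions, tune to taste
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-high-memory
  namespace: monitoring
spec:
  groups:
    - name: pod-memory
      rules:
        - alert: PodHighMemoryUsage
          # cAdvisor working-set memory per pod, summed over containers
          expr: sum(container_memory_working_set_bytes{container!=""}) by (namespace, pod) > 1.5e9
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: "{{ $labels.namespace }}/{{ $labels.pod }} is using more than ~1.5 GB of memory"
```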
## 📊 CURRENT CLUSTER CAPACITY

Run `kubectl top nodes` to see:

```
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master1   ???m         ??%    ????Mi          ??%
master2   ???m         ??%    ????Mi          ??%
master3   ???m         ??%    ????Mi          ??%
```
---

**Jenkins is your biggest memory consumer!** 🔥
Consider adding resource limits and cleanup policies.