Introduction: The Importance of Security Monitoring

In network security, there's a saying: "You can't defend what you can't see." No matter how powerful your firewalls and intrusion detection systems are, you cannot effectively respond to security incidents without knowing what's happening in real-time. Security monitoring and log analysis are essential capabilities for maintaining an organization's security posture and responding quickly to threats.

In Part 9, we'll explore the major types of logs, build a centralized log management system, configure a log analysis platform with the ELK Stack, examine SIEM (Security Information and Event Management) concepts and usage, and look at anomaly detection and ways to put threat intelligence to work.

1. Types and Importance of Logs

1.1 System Logs

System logs record all events occurring at the operating system level. On Linux systems, they are stored in the /var/log/ directory, while on Windows, they can be viewed through Event Viewer.

  • /var/log/syslog (or /var/log/messages): System-wide messages and events
  • /var/log/auth.log: Authentication-related events (login attempts, sudo usage, etc.)
  • /var/log/kern.log: Kernel-level messages
  • /var/log/dmesg: Hardware and driver messages during boot
  • /var/log/cron: Cron job execution records

# Real-time log monitoring
tail -f /var/log/auth.log

# Search for specific patterns
grep "Failed password" /var/log/auth.log

# Check recent failed login attempts
lastb | head -20
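The grep output above becomes more useful when aggregated by source address. A minimal sketch, run against an inline sample because reading the live /var/log/auth.log requires root (the sample lines and /tmp path are illustrative):

```shell
# Count failed SSH logins per source IP (sample data stands in for
# /var/log/auth.log; the format matches typical sshd entries).
cat > /tmp/auth.sample <<'EOF'
Jan 22 09:00:01 host sshd[101]: Failed password for root from 203.0.113.5 port 40022 ssh2
Jan 22 09:00:03 host sshd[102]: Failed password for admin from 203.0.113.5 port 40023 ssh2
Jan 22 09:00:05 host sshd[103]: Failed password for invalid user test from 198.51.100.9 port 40101 ssh2
EOF

grep "Failed password" /tmp/auth.sample \
  | grep -oE "from [0-9.]+" | awk '{print $2}' \
  | sort | uniq -c | sort -rn
```

An IP repeatedly at the top of this list is a prime candidate for blocking or further investigation.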

1.2 Application Logs

Application logs are generated by each service or application. Web servers, databases, mail servers, and other services each have their own unique log formats.

  • Apache/Nginx access logs: Web request records
  • Apache/Nginx error logs: Error and warning messages
  • MySQL/PostgreSQL logs: Query execution and errors
  • Application-specific logs: Business logic-related events

# Nginx access log format example
# 192.168.1.100 - - [22/Jan/2026:10:15:32 +0900] "GET /api/users HTTP/1.1" 200 1234 "-" "Mozilla/5.0..."

# Check IPs with highest access frequency
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10

# Statistics by HTTP status code
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn
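Running the pipelines above against a small inline sample makes the field positions concrete ($1 = client IP, $9 = status code in the default combined format); the sample file and its contents are fabricated for illustration:

```shell
# Three fake combined-format requests
cat > /tmp/nginx.sample <<'EOF'
192.168.1.100 - - [22/Jan/2026:10:15:32 +0900] "GET /api/users HTTP/1.1" 200 1234 "-" "Mozilla/5.0"
192.168.1.100 - - [22/Jan/2026:10:15:33 +0900] "GET /admin HTTP/1.1" 404 153 "-" "Mozilla/5.0"
10.0.0.5 - - [22/Jan/2026:10:15:34 +0900] "GET / HTTP/1.1" 200 512 "-" "curl/8.0"
EOF

# Status code distribution: two 200s, one 404
awk '{print $9}' /tmp/nginx.sample | sort | uniq -c | sort -rn
```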

1.3 Security Logs

Security logs are dedicated records of security-related events, generated by security devices and software such as firewalls, IDS/IPS, and antivirus tools.

  • Firewall logs: Allowed/blocked connection records
  • IDS/IPS logs: Detected attack attempts
  • Audit logs: System change tracking
  • VPN logs: Remote access records

# Check Linux auditd logs
ausearch -m USER_LOGIN -ts today

# Check iptables logs (when logging is configured)
grep "iptables" /var/log/syslog

# Check SELinux denial logs
ausearch -m avc -ts recent
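For iptables logs specifically, the LOG target's --log-prefix makes filtering and aggregation straightforward. The sketch below counts blocked connections by source address; the log lines and the "IPT-DROP:" prefix are assumptions standing in for whatever prefix your rules actually set:

```shell
# Fabricated kernel log lines from an iptables LOG rule
cat > /tmp/kern.sample <<'EOF'
Jan 22 10:00:01 fw kernel: IPT-DROP: IN=eth0 OUT= SRC=203.0.113.7 DST=198.51.100.2 PROTO=TCP DPT=22
Jan 22 10:00:02 fw kernel: IPT-DROP: IN=eth0 OUT= SRC=203.0.113.7 DST=198.51.100.2 PROTO=TCP DPT=23
Jan 22 10:00:03 fw kernel: IPT-DROP: IN=eth0 OUT= SRC=192.0.2.44 DST=198.51.100.2 PROTO=UDP DPT=53
EOF

# Top blocked source addresses
grep -o 'SRC=[0-9.]*' /tmp/kern.sample | sort | uniq -c | sort -rn
```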

1.4 Network Logs

Network logs record network traffic and connection information. They are generated by routers, switches, proxy servers, etc.

  • NetFlow/sFlow: Network traffic flow data
  • DNS query logs: Domain lookup records
  • DHCP logs: IP allocation records
  • Proxy logs: Web traffic records
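DNS query logs in particular reward simple frequency analysis, since malware often generates lookups for domains no one else queries. A sketch against hypothetical dnsmasq-style entries (this format requires dnsmasq's log-queries option; the lines below are fabricated):

```shell
cat > /tmp/dns.sample <<'EOF'
Jan 22 10:01:01 dns dnsmasq[812]: query[A] example.com from 192.168.1.50
Jan 22 10:01:02 dns dnsmasq[812]: query[A] evil.example.net from 192.168.1.51
Jan 22 10:01:03 dns dnsmasq[812]: query[A] example.com from 192.168.1.52
EOF

# Most frequently queried domains ($7 is the domain in this layout)
awk '/query\[/ {print $7}' /tmp/dns.sample | sort | uniq -c | sort -rn
```

Rarely queried domains at the bottom of such a list are often more interesting than the frequent ones at the top.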

2. Centralized Log Management System

2.1 Centralized Logging with rsyslog

rsyslog is the most widely used system logging daemon on Linux systems. It provides remote log collection and filtering capabilities.

# Central log server configuration (/etc/rsyslog.conf)
# Receive remote logs via UDP port 514
$ModLoad imudp
$UDPServerRun 514

# Receive remote logs via TCP port 514 (more reliable)
$ModLoad imtcp
$InputTCPServerRun 514

# Store logs separately by remote host
$template RemoteLogs,"/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs

# Client configuration (/etc/rsyslog.conf)
# Send all logs to remote server (TCP)
*.* @@192.168.1.10:514

# Send only specific facilities
auth,authpriv.* @@192.168.1.10:514
kern.* @@192.168.1.10:514
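On rsyslog 7 and later, the same server setup is usually written in the RainerScript syntax rather than the legacy $-directives. A sketch of an equivalent server configuration (check it against your distribution's rsyslog version before deploying):

```
# Equivalent server configuration in RainerScript syntax
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

template(name="RemoteLogs" type="string"
         string="/var/log/remote/%HOSTNAME%/%PROGRAMNAME%.log")
*.* action(type="omfile" dynaFile="RemoteLogs")
```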

2.2 Advanced Log Management with syslog-ng

syslog-ng provides more flexible configuration and filtering capabilities than rsyslog. It's suitable for environments requiring complex log routing and parsing.

# syslog-ng server configuration (/etc/syslog-ng/syslog-ng.conf)
@version: 3.35

source s_network {
    tcp(ip("0.0.0.0") port(514));
    udp(ip("0.0.0.0") port(514));
};

destination d_hosts {
    file("/var/log/remote/$HOST/$PROGRAM.log"
        create-dirs(yes)
        dir-perm(0755)
        perm(0644));
};

filter f_security {
    facility(auth, authpriv) or
    match("attack" value("MESSAGE")) or
    match("failed" value("MESSAGE"));
};

log {
    source(s_network);
    filter(f_security);
    destination(d_hosts);
};

2.3 Log Retention Policy

Log retention policies are important for compliance and storage management. They can be automated using logrotate.

# /etc/logrotate.d/security-logs
/var/log/remote/*/*.log {
    daily
    rotate 90
    compress
    delaycompress
    missingok
    notifempty
    create 0644 root root
    sharedscripts
    postrotate
        /usr/bin/systemctl reload rsyslog > /dev/null 2>&1 || true
    endscript
}

3. Building the ELK Stack

3.1 ELK Stack Overview

The ELK Stack combines Elasticsearch, Logstash, and Kibana into a powerful platform for collecting, storing, analyzing, and visualizing large-scale log data. With the addition of Beats, the suite is now officially known as the Elastic Stack.

  • Elasticsearch: Distributed search and analytics engine, log data storage
  • Logstash: Data collection, parsing, and transformation pipeline
  • Kibana: Data visualization and dashboards
  • Beats: Lightweight data collectors (Filebeat, Metricbeat, etc.)

3.2 Installing and Configuring Elasticsearch

# Installing Elasticsearch (Ubuntu/Debian; note that apt-key is deprecated
# on newer releases, where a signed-by keyring is preferred)
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update && sudo apt install elasticsearch

# Basic configuration (/etc/elasticsearch/elasticsearch.yml)
cluster.name: security-cluster
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node

# Security settings (Elasticsearch 8.x)
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

# JVM heap memory settings (/etc/elasticsearch/jvm.options.d/heap.options)
-Xms4g
-Xmx4g

# Start service
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

# Check status (with xpack security enabled, use HTTPS and authenticate)
curl -k -u elastic "https://localhost:9200/_cluster/health?pretty"

3.3 Configuring Logstash Pipeline

# Install Logstash
sudo apt install logstash

# Pipeline configuration (/etc/logstash/conf.d/security-logs.conf)
input {
    beats {
        port => 5044
    }
    syslog {
        port => 5514
        type => "syslog"
    }
}

filter {
    if [type] == "syslog" {
        grok {
            match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
        }
        date {
            match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
    }

    # SSH login failure detection
    if [syslog_program] == "sshd" and [syslog_message] =~ /Failed password/ {
        grok {
            match => { "syslog_message" => "Failed password for %{USER:ssh_user} from %{IP:src_ip}" }
        }
        mutate {
            add_tag => ["ssh_failed_login"]
        }
    }

    # Add GeoIP information
    if [src_ip] {
        geoip {
            source => "src_ip"
            target => "geoip"
        }
    }
}

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "security-logs-%{+YYYY.MM.dd}"
    }
}
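As a quick sanity check of what the sshd grok expression captures, here is a rough shell approximation of the same two extractions (%{USER:ssh_user} and %{IP:src_ip}); the sample line is fabricated, and for real pipelines Kibana's Grok Debugger is the better tool:

```shell
line='Jan 22 09:00:01 web1 sshd[2044]: Failed password for admin from 203.0.113.5 port 40022 ssh2'

# Mirror the grok captures with sed back-references
ssh_user=$(echo "$line" | sed -n 's/.*Failed password for \([^ ]*\) from .*/\1/p')
src_ip=$(echo "$line" | sed -n 's/.*from \([0-9.]*\) port.*/\1/p')
echo "ssh_user=$ssh_user src_ip=$src_ip"
# → ssh_user=admin src_ip=203.0.113.5
```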

3.4 Log Collection with Filebeat

# Filebeat configuration (/etc/filebeat/filebeat.yml)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/auth.log
      - /var/log/syslog
    fields:
      log_type: system

  - type: log
    enabled: true
    paths:
      - /var/log/nginx/access.log
    fields:
      log_type: nginx_access

  - type: log
    enabled: true
    paths:
      - /var/log/nginx/error.log
    fields:
      log_type: nginx_error

output.logstash:
  hosts: ["localhost:5044"]

# Or send directly to Elasticsearch
# output.elasticsearch:
#   hosts: ["localhost:9200"]
#   index: "filebeat-%{+yyyy.MM.dd}"

3.5 Configuring Kibana Dashboards

# Install and configure Kibana
sudo apt install kibana

# Configuration (/etc/kibana/kibana.yml)
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]

# Start service
sudo systemctl enable kibana
sudo systemctl start kibana

When configuring security dashboards in Kibana, it's recommended to include the following visualizations:

  • Login failure trends: Graph of failed login attempts by time
  • Top attack source IPs: List of IPs with the most failed attempts
  • Geographic distribution: Map showing geographic locations of attack attempts
  • Event type distribution: Pie chart showing event types
  • Real-time log stream: List of latest security events

4. SIEM Concepts and Usage

4.1 What is SIEM?

SIEM (Security Information and Event Management) is a solution that combines Security Information Management (SIM) and Security Event Management (SEM). It collects logs from various sources and detects security threats through correlation analysis.

  • Log collection and normalization: Standardize logs in various formats
  • Real-time monitoring: Real-time surveillance of security events
  • Correlation analysis: Connect multiple events to identify attack patterns
  • Alerting and response: Automatic alert notification upon threat detection
  • Compliance reporting: Generate audit and compliance reports

4.2 Open-Source SIEM: Wazuh

Wazuh is an open-source SIEM solution based on OSSEC that can be integrated with the ELK Stack.

# Wazuh server installation (single node)
curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
sudo bash wazuh-install.sh -a

# Wazuh agent installation (client; RPM package shown, use the matching
# .deb package on Debian/Ubuntu)
curl -sO https://packages.wazuh.com/4.7/wazuh-agent-4.7.0-1.x86_64.rpm
sudo rpm -ivh wazuh-agent-4.7.0-1.x86_64.rpm

# Agent configuration
sudo sed -i 's/MANAGER_IP/192.168.1.10/' /var/ossec/etc/ossec.conf
sudo systemctl enable wazuh-agent
sudo systemctl start wazuh-agent

4.3 Writing SIEM Rules

<!-- Wazuh custom rules (/var/ossec/etc/rules/local_rules.xml) -->
<group name="custom_rules">

    <!-- Detect multiple SSH failures in short time -->
    <rule id="100001" level="10" frequency="5" timeframe="60">
        <if_matched_sid>5710</if_matched_sid>
        <same_source_ip />
        <description>SSH brute force attack detected: 5+ failures in 1 minute</description>
        <group>authentication_failures,pci_dss_10.2.4,</group>
    </rule>

    <!-- Detect login at unusual hours -->
    <rule id="100002" level="8">
        <if_sid>5501</if_sid>
        <time>12am - 6am</time>
        <description>Successful login at unusual hours</description>
    </rule>

    <!-- Detect critical file changes -->
    <rule id="100003" level="12">
        <if_sid>550</if_sid>
        <match>/etc/passwd|/etc/shadow|/etc/sudoers</match>
        <description>Critical system file modification detected</description>
    </rule>

</group>

5. Anomaly Detection

5.1 Establishing Baselines

To detect anomalies, you first need to understand normal activity patterns. Baselines can be set by time, day of week, and user.

  • Network traffic: Average bandwidth usage, protocol distribution
  • Login patterns: Normal login times, locations
  • Process activity: List of normally running processes
  • File access: Normal file access patterns
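As a toy illustration of a volume baseline, the sketch below counts events per hour and flags any hour exceeding twice the mean. The input format (hour-of-day plus event name) and the 2x threshold are illustrative choices, not a recommendation:

```shell
# Fabricated event log: hour-of-day in column 1
cat > /tmp/events.sample <<'EOF'
09 login
09 login
10 login
10 login
10 login
10 login
10 login
10 login
10 login
11 login
EOF

awk '{ if (!($1 in c)) hours++; c[$1]++; n++ }
     END {
       avg = n / hours                     # mean events per observed hour
       for (h in c)
         if (c[h] > 2 * avg)
           printf "hour %s: %d events (avg %.1f) ANOMALY\n", h, c[h], avg
     }' /tmp/events.sample
```

Here hour 10 (7 events against a mean of about 3.3) is flagged; in practice baselines should be computed over weeks of data, per weekday, before thresholds mean anything.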

5.2 Types of Anomalies

  • Volume anomalies: Abnormally high or low traffic/log volume
  • Time anomalies: Activity at unusual hours
  • Geographic anomalies: Access from unexpected locations
  • Behavioral anomalies: User behavior patterns different from usual
  • Protocol anomalies: Unusual protocol usage or port access

5.3 Machine Learning-Based Anomaly Detection

Elasticsearch's ML features can automatically detect anomalies.

// Elasticsearch ML job creation example
PUT _ml/anomaly_detectors/security_login_anomaly
{
  "description": "Login anomaly detection",
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "high_count",
        "field_name": "event.action",
        "by_field_name": "source.ip",
        "detector_description": "Abnormally high login attempts by IP"
      }
    ],
    "influencers": ["source.ip", "user.name"]
  },
  "data_description": {
    "time_field": "@timestamp"
  },
  "datafeed_config": {
    "indices": ["security-logs-*"],
    "query": {
      "bool": {
        "filter": [
          {"term": {"event.category": "authentication"}}
        ]
      }
    }
  }
}

6. Utilizing Threat Intelligence

6.1 What is Threat Intelligence?

Threat Intelligence refers to collected and analyzed information about cyber threats. Using this information enables proactive detection and response to known threats.

  • Tactical intelligence: IOCs (Indicators of Compromise), malicious IPs, domain lists
  • Operational intelligence: Attacker TTPs (Tactics, Techniques, Procedures)
  • Strategic intelligence: Threat trends, attack group analysis

6.2 Integrating Threat Feeds

# Download a malicious IP list and convert it to the key,value CSV format
# that the Logstash translate filter expects (it loads YAML, JSON, or CSV
# dictionaries; a flat list of bare IPs will not parse)
wget -O /tmp/threat-ips.txt https://example.com/threat-feeds/ips.txt
awk '{print $1",malicious"}' /tmp/threat-ips.txt > /etc/threat-intel/malicious-ips.csv

# Utilize threat intelligence in Logstash
filter {
    translate {
        field => "src_ip"
        destination => "threat_match"
        dictionary_path => "/etc/threat-intel/malicious-ips.csv"
        fallback => "clean"
    }

    if [threat_match] != "clean" {
        mutate {
            add_tag => ["threat_detected"]
            add_field => { "threat_source" => "ip_blocklist" }
        }
    }
}

6.3 MISP (Malware Information Sharing Platform)

MISP is an open-source platform for sharing and managing threat intelligence.

# IOC lookup using MISP API
from pymisp import PyMISP

misp = PyMISP('https://misp.example.com', 'your-api-key', ssl=True)

# Query specific IP
result = misp.search(controller='attributes', value='192.168.1.100')

# Query recent events
events = misp.search(controller='events', timestamp='7d')

# Export IOCs
iocs = misp.search(controller='attributes', type_attribute='ip-dst', to_ids=True)

7. Security Dashboard Configuration Examples

7.1 Key Security Indicators (KPIs)

For effective security monitoring, include the following metrics in your dashboard:

  • MTTD (Mean Time To Detect): Average time to detect threats
  • MTTR (Mean Time To Respond): Average time to respond
  • Daily security event count: Classified by severity
  • Blocked attack count: Attempts blocked by firewall/IPS
  • Vulnerability status: Number of unpatched systems

7.2 Real-Time Alert Configuration

# Elasticsearch Watcher alert configuration
PUT _watcher/watch/ssh_bruteforce_alert
{
  "trigger": {
    "schedule": { "interval": "1m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": ["security-logs-*"],
        "body": {
          "query": {
            "bool": {
              "must": [
                { "match": { "tags": "ssh_failed_login" }},
                { "range": { "@timestamp": { "gte": "now-5m" }}}
              ]
            }
          },
          "aggs": {
            "by_ip": {
              "terms": { "field": "src_ip", "min_doc_count": 10 }
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.aggregations.by_ip.buckets.0.doc_count": { "gte": 10 }}
  },
  "actions": {
    "send_email": {
      "email": {
        "to": "security@example.com",
        "subject": "[ALERT] SSH Brute Force Attack Detected",
        "body": "IP {{ctx.payload.aggregations.by_ip.buckets.0.key}} had {{ctx.payload.aggregations.by_ip.buckets.0.doc_count}} SSH login failures in 5 minutes."
      }
    },
    "webhook": {
      "webhook": {
        "method": "POST",
        "url": "https://hooks.slack.com/services/xxx/yyy/zzz",
        "body": "{\"text\": \"SSH Brute Force Attack Detected: {{ctx.payload.aggregations.by_ip.buckets.0.key}}\"}"
      }
    }
  }
}

8. Practical Log Analysis Scenarios

8.1 Web Server Intrusion Investigation

# Search for suspicious request patterns
# Search for suspicious request patterns (dots escaped so "\.\./" matches a
# literal "../" traversal rather than any two characters)
grep -Ei "(union.*select|<script|\.\./|cmd=|exec\()" /var/log/nginx/access.log

# Track all activity from specific IP
grep "192.168.1.100" /var/log/nginx/access.log | less

# Abnormally large response sizes (suspected data exfiltration)
awk '$10 > 1000000 {print $1, $7, $10}' /var/log/nginx/access.log

# Multiple 404 errors (suspected scanning)
awk '$9 == 404 {print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head
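To see the suspicious-pattern search in action, here is a variant with escaped dots run against fabricated sample requests; the IPs, paths, and user agents below are invented for illustration:

```shell
# Fabricated access-log lines: one SQL injection probe, one normal request,
# one path traversal attempt.
cat > /tmp/access2.sample <<'EOF'
203.0.113.9 - - [22/Jan/2026:11:00:01 +0900] "GET /search?q=1%27+union+select+password+from+users HTTP/1.1" 200 512 "-" "sqlmap/1.7"
198.51.100.3 - - [22/Jan/2026:11:00:02 +0900] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"
203.0.113.9 - - [22/Jan/2026:11:00:03 +0900] "GET /../../etc/passwd HTTP/1.1" 400 153 "-" "curl/8.0"
EOF

# Matches the injection probe and the traversal attempt, not the normal request
grep -Ei "(union.*select|<script|\.\./|cmd=|exec\()" /tmp/access2.sample
```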

8.2 Privilege Escalation Attempt Detection

# Analyze sudo usage records
grep "sudo:" /var/log/auth.log | grep -v "session opened\|session closed"

# Failed sudo attempts
grep "sudo:.*authentication failure" /var/log/auth.log

# su command usage records
grep "su\[" /var/log/auth.log

# Check suspicious cron jobs
grep "CRON" /var/log/syslog | grep -v "session"

Conclusion

Security monitoring and log analysis are core activities for maintaining an organization's security posture. Here's a summary of what we covered:

  • Types of logs: Characteristics and usage of system, application, security, and network logs
  • Centralized log management: Log centralization using rsyslog and syslog-ng
  • ELK Stack: Log analysis platform using Elasticsearch, Logstash, and Kibana
  • SIEM: Security event correlation analysis and automated threat detection
  • Anomaly detection: Baseline establishment and machine learning-based detection
  • Threat intelligence: Proactive detection using external threat information

Effective security monitoring requires more than just tools. Understanding your organization's environment and business, continuously improving detection rules, and managing alert fatigue are all crucial.

In Part 10, we'll learn how to respond when security incidents occur and how to investigate them through digital forensics, concluding the series with incident response processes and practical forensic techniques.