Centralized Logging and Management: Deploying Elasticsearch, Kibana, and Fluentd on GCP
Introduction
Effective log management is critical for monitoring, troubleshooting, and analyzing applications in real time. In this post, we’re kicking off a series on logging and management by setting up a centralized logging solution using Elasticsearch and Kibana with X-Pack security enabled, which gives us controlled access through user authentication.
We’ll deploy Elasticsearch and Kibana as Docker containers on a Google Cloud Platform (GCP) VM instance. Once the centralized logging system is set up, we’ll configure Fluentd to ship NGINX access logs to Elasticsearch for centralized monitoring.
Part 1: Setting Up the Centralized Server
Step 1: Create a GCP VM Instance
Log in to your GCP Console and navigate to Compute Engine > VM Instances.
Create a new instance with the following specifications:
Name: logging-server
Machine type: e2-medium (or higher, depending on traffic)
Boot disk: Ubuntu 20.04 LTS
Firewall: Allow HTTP and HTTPS traffic
SSH into the instance once it’s up and running.
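Note that the HTTP/HTTPS checkboxes above only open ports 80 and 443; Elasticsearch (9200) and Kibana (5601) need their own firewall rule. Here is a sketch using gcloud, where the rule name and the <YOUR_IP> placeholder are ours to fill in (restrict the source range to a trusted address rather than opening it to the world):
# Open the Elasticsearch and Kibana ports to a trusted address only
gcloud compute firewall-rules create allow-logging-stack \
  --allow=tcp:9200,tcp:5601 \
  --source-ranges=<YOUR_IP>/32 \
  --description="Elasticsearch and Kibana access for the logging server"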
Step 2: Install Docker on the VM
Update the VM and install Docker:
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io
sudo systemctl enable --now docker
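Optionally, add your user to the docker group so you can run docker commands without sudo (log out and back in for the group change to take effect):
sudo usermod -aG docker $USER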
Step 3: Deploy Elasticsearch with X-Pack Security
Create a directory for Elasticsearch data:
mkdir -p ~/elasticsearch_data
sudo chmod 777 ~/elasticsearch_data
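The chmod 777 route is quick but leaves the directory world-writable. A tighter alternative, assuming the official image runs Elasticsearch as uid 1000 with gid 0 (as Elastic’s Docker docs describe), is to hand the directory to that user instead:
# Give the directory to the container's elasticsearch user (uid 1000, gid 0)
sudo chown -R 1000:0 ~/elasticsearch_data
sudo chmod -R 770 ~/elasticsearch_data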
Run the Elasticsearch container with X-Pack security enabled. Elasticsearch 8.x can auto-configure TLS on its HTTP layer, so we also disable HTTP TLS explicitly to keep this walkthrough on the plain-HTTP URLs used below (for production, leave TLS on):
docker run -d --name elasticsearch \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=true" \
  -e "xpack.security.http.ssl.enabled=false" \
  -e "ELASTIC_PASSWORD=elastic123" \
  -v ~/elasticsearch_data:/usr/share/elasticsearch/data \
  -p 9200:9200 -p 9300:9300 \
  docker.elastic.co/elasticsearch/elasticsearch:8.10.1
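If the container exits right away, check docker logs elasticsearch. A common culprit is the kernel’s vm.max_map_count limit, which Elasticsearch requires to be at least 262144:
# Raise the mmap count limit required by Elasticsearch
sudo sysctl -w vm.max_map_count=262144
# Persist the setting across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf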
Verify Elasticsearch is running:
curl -u elastic:elastic123 http://<VM_EXTERNAL_IP>:9200
Replace <VM_EXTERNAL_IP> with your instance’s external IP address. You should see a JSON response confirming Elasticsearch is active.
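For a closer look than the banner response, the cluster health endpoint reports status, node count, and shard state:
curl -u elastic:elastic123 "http://<VM_EXTERNAL_IP>:9200/_cluster/health?pretty"
A status of green or yellow is expected here; yellow is normal on a single-node cluster, since replica shards have nowhere to be allocated.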
Step 4: Deploy Kibana
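One catch before launching Kibana: since 8.x, Kibana refuses to start when its Elasticsearch connection is configured with the elastic superuser, so we use the built-in kibana_system account instead. Set a password for it first via the security API (kibana123 is our placeholder; choose your own):
curl -u elastic:elastic123 -X POST "http://<VM_EXTERNAL_IP>:9200/_security/user/kibana_system/_password" \
  -H "Content-Type: application/json" \
  -d '{"password":"kibana123"}'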
Run the Kibana container:
docker run -d --name kibana \
  -e "ELASTICSEARCH_HOSTS=http://<VM_EXTERNAL_IP>:9200" \
  -e "ELASTICSEARCH_USERNAME=kibana_system" \
  -e "ELASTICSEARCH_PASSWORD=kibana123" \
  -p 5601:5601 \
  docker.elastic.co/kibana/kibana:8.10.1
Access Kibana in your browser at http://<VM_EXTERNAL_IP>:5601.
Log in with the elastic username and the elastic123 password; the kibana_system account is only for Kibana’s own connection to Elasticsearch, while you still sign in to the UI as elastic.
Part 2: Shipping Logs with Fluentd
Step 1: Install Fluentd
On the server running NGINX (this can be the same VM or a separate server):
Add the Fluentd repository:
curl -fsSL https://packages.treasuredata.com/GPG-KEY-td-agent | sudo apt-key add -
echo "deb http://packages.treasuredata.com/4/ubuntu/focal/ focal contrib" | sudo tee /etc/apt/sources.list.d/td-agent.list
sudo apt update
Install Fluentd:
sudo apt install -y td-agent
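The package installs a systemd unit; enable and start it, then confirm it is running:
sudo systemctl enable --now td-agent
sudo systemctl status td-agent --no-pager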
Step 2: Configure Fluentd for NGINX Logs
Install the Elasticsearch plugin for Fluentd:
sudo /usr/sbin/td-agent-gem install fluent-plugin-elasticsearch
Update Fluentd’s configuration to ship logs to Elasticsearch. Open the configuration file:
sudo nano /etc/td-agent/td-agent.conf
Add the following configuration:
<source>
  @type tail
  path /var/log/nginx/access.log
  pos_file /var/log/td-agent/nginx-access.log.pos
  tag nginx.access
  format nginx
</source>

<match nginx.access>
  @type elasticsearch
  host <VM_EXTERNAL_IP>
  port 9200
  scheme http
  logstash_format true
  user elastic
  password elastic123
</match>
Restart Fluentd:
sudo systemctl restart td-agent
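If no logs arrive, check Fluentd’s own log at /var/log/td-agent/td-agent.log. On Ubuntu, NGINX log files are typically group-owned by adm and not world-readable, so a permission-denied error there is common; adding the td-agent user to the adm group is one fix:
# Inspect Fluentd's log for tail or connection errors
sudo tail -n 50 /var/log/td-agent/td-agent.log
# Let the td-agent user read NGINX logs, then restart
sudo usermod -aG adm td-agent
sudo systemctl restart td-agent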
Step 3: Verify Logs in Kibana
Generate some logs by accessing your NGINX server:
curl http://<NGINX_SERVER_IP>
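Before switching to Kibana, you can confirm from the shell that a daily logstash-* index was created (the name comes from logstash_format true in the Fluentd config). Fluentd buffers its output, so allow a minute or so for the first flush:
curl -u elastic:elastic123 "http://<VM_EXTERNAL_IP>:9200/_cat/indices/logstash-*?v"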
In Kibana, go to Discover and create a data view (called an index pattern in older versions) matching logstash-*. You should see the NGINX logs appearing in near real time, ready to filter and query.
Best Practices
Secure Access: Ensure your Elasticsearch and Kibana instances are behind a firewall or restricted to trusted IP ranges, and replace the sample elastic123/kibana123 passwords with strong, unique ones.
Persistent Data: Use persistent disks for Elasticsearch data to prevent data loss on container restarts.
Resource Management: Monitor resource usage and scale the VM as needed to handle increased log volumes.
Log Rotation: Set up log rotation on the Fluentd sources (e.g., the NGINX logs) to prevent disk exhaustion; a sample logrotate policy follows below.
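As a sketch of that last point, Ubuntu’s NGINX package usually ships a policy in /etc/logrotate.d/nginx already; a configuration along these lines keeps 14 compressed days and signals NGINX to reopen its log files. Fluentd’s tail input follows rotation via its pos_file, so rotation and log shipping coexist fine:
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        [ -s /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
    endscript
}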
Conclusion
This setup provides a robust centralized logging system with Elasticsearch and Kibana, secured by X-Pack authentication. By using Fluentd to ship logs, you can monitor and analyze NGINX logs or any other application logs from a single interface.
In the next post, we’ll expand this setup to handle high-volume production traffic and discuss advanced index management techniques. Stay tuned!
Have questions or insights? Share them in the comments below or reach out on LinkedIn!