Set up an IPSEC VPN site-to-site between Palo Alto on-prem and Microsoft Azure

This is a diagram that I have used for the lab.

Create a new virtual network named Azure-PA.

Rename the default subnet to PrivateSubnet (10.0.1.0).

The subnet address range is 10.0.1.0/24.

Click Create.

Create a new subnet.

The subnet address range is 10.0.0.0/24.

Go to “Virtual network gateway” to create a new virtual network gateway.

Virtual network: Azure-PA.

Subnet: GatewaySubnet 10.0.0.0/24

Public IP address name: VPNIP

Click Create.

Wait around 20 to 30 minutes for the deployment to complete.

Go to “Local network gateway” and create a new local network gateway.

The IP address is the public IP address of the Palo Alto firewall.

The address space is the Palo Alto LAN subnets.

Click Create.

Go to “Virtual network gateways”, and select the virtual network gateway that we created in the previous step.

Go to “Connections” – Add.

Enter a shared key (PSK) for VPN site-to-site.

Take note of the IP address of Azure VPN.

On Palo Alto on-prem.

Create a tunnel interface (tunnel.1).

Create an IKE Crypto profile.

Create an IPSEC Crypto profile.

According to Azure, we will use 27000 seconds for the key lifetime.

https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-compliance-crypto

  • IKEv2 Main Mode SA lifetime is fixed at 28,800 seconds on the Azure VPN gateways.
  • QM SA Lifetimes are optional parameters. If none was specified, default values of 27,000 seconds (7.5 hrs) and 102400000 KBytes (102GB) are used.
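For reference, a minimal PAN-OS CLI sketch of matching crypto profiles, assuming hypothetical profile names Azure-IKE and Azure-IPSEC (the algorithm choices are illustrative and must match your Azure connection policy):

set network ike crypto-profiles ike-crypto-profiles Azure-IKE encryption aes-256-cbc
set network ike crypto-profiles ike-crypto-profiles Azure-IKE hash sha256
set network ike crypto-profiles ike-crypto-profiles Azure-IKE dh-group group2
set network ike crypto-profiles ike-crypto-profiles Azure-IKE lifetime seconds 28800
set network ike crypto-profiles ipsec-crypto-profiles Azure-IPSEC esp encryption aes-256-cbc
set network ike crypto-profiles ipsec-crypto-profiles Azure-IPSEC esp authentication sha256
set network ike crypto-profiles ipsec-crypto-profiles Azure-IPSEC dh-group group2
set network ike crypto-profiles ipsec-crypto-profiles Azure-IPSEC lifetime seconds 27000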

Create a new IKE Gateway.

The peer address is the public IP address of the Azure VPN gateway.

Create an IPSEC tunnel.

Create a static route to the PrivateSubnet on Azure.

Create access rules in both directions to allow traffic between the PA LAN subnets and the private subnet on Azure.

Click “Commit”.

Back to Azure, the VPN site-to-site connection is still not connected.

Create a new Windows 2016 virtual machine (Size is B1s).

Get the Public IP address of the Windows 2016 virtual machine.

Download RDP file.

Disable Windows Firewall, and ping the IP address of the Palo Alto LAN subnets.

Back to VPN2S, we can see the VPN connection status is “Connected”.

Microsoft Azure does not seem to support as many customer gateway devices as Amazon AWS.

On the Kali machine, ping the Windows 2016 VM on Azure.

The IPSEC VPN site-to-site tunnel is up as well in Palo Alto.
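Two standard PAN-OS operational commands to confirm the tunnel from the CLI:

show vpn ike-sa
show vpn ipsec-sa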

Implement WordPress load balancing with Multi-AZs deployment for Amazon RDS on AWS

This is a diagram that I have used for the lab.

There are a couple of main steps that I have used in the lab.

  • Create both private subnets on both AZs
  • Create a WordPress instance on the first AZ.
  • Create a new MySQL instance on Multi-AZ deployments.
  • Create an AMI image for the WordPress instance.
  • Create a Launch configuration
  • Set up an Auto Scaling Group with your launch configuration
  • Create a CNAME record on your DNS zone for the Amazon domain name
  • Test an Application Load Balancer for WordPress on multiple AZs with MySQL instance on Multi-AZ deployments

Create a new VPC with CIDR 10.0.0.0/16

Create 4 private subnets: 10.0.0.0/24 on us-east-1a, 10.0.1.0/24 on us-east-1b, 10.0.2.0/24 on us-east-1c, and 10.0.3.0/24 on us-east-1d.

Launch a new Linux instance to run WordPress on AZ1.

Copy the script below into the User data setting.

#!/bin/bash
yum update -y
# Install Apache web service
yum install httpd -y
# Download WordPress 
wget https://wordpress.org/latest.tar.gz
tar -zxf latest.tar.gz
# Install php7.4
amazon-linux-extras install php7.4 -y

On Security Group, allow SSH, HTTP, HTTPS, and MySQL/Aurora from 0.0.0.0/0.

SSH to Linux instance.

Check that httpd and php are installed on the machine.

rpm -qa | grep httpd
rpm -qa | grep php
sudo yum install php -y
sudo systemctl start httpd
sudo systemctl enable httpd
netstat -antp

Copy all files in the WordPress directory to /var/www/html.

cd wordpress
sudo cp -r * /var/www/html
cd /var/www/html

Create an ip.php file in the /var/www/html directory.

#sudo nano ip.php
<?php
echo "Local IP address: "; echo $_SERVER['SERVER_ADDR'];
echo "<br>";
echo "Public IP address: "; echo $_SERVER['SERVER_NAME']
?>
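To confirm the file renders before testing from a browser (assuming httpd is already running on the instance):

curl http://localhost/ip.php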

Create a new AutoScaling-Security-Group-1 Security Group.

Go to Amazon RDS, and create subnet groups.

Then create a database.

Choose MySQL, then choose the Dev/Test template and the Multi-AZ DB instance deployment option.

Enter wordpress for the database/user/password settings. Then, select “Burstable classes” as in the following screenshot.

Unselect “Enable storage autoscaling” in this lab.

Security Group is AutoScaling-Security-Group.

Enter wordpress as the initial database name and uncheck “Enable automatic backups”.

Then click “create database”.

The MySQL instance has been successfully deployed across both Availability Zones.

Set up WordPress on Linux instance. Copy the public IP address of the Linux instance and paste it into your web browser (http://44.203.24.125).

Copy RDS information that we have configured in previous steps.

The database host is an RDS instance on multiple AZs.

Create a new wp-config.php file under the /var/www/html directory and copy the information in the screenshot below into this file.
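The screenshot boils down to the standard wp-config.php database settings; a minimal sketch, assuming the wordpress database/user/password configured above (the RDS endpoint below is a placeholder for your own):

define( 'DB_NAME', 'wordpress' );
define( 'DB_USER', 'wordpress' );
define( 'DB_PASSWORD', 'wordpress' );
define( 'DB_HOST', 'wordpress.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com' ); // placeholder RDS endpoint

The rest of the file content is generated by the WordPress installer.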

Log in to WP.

Check local and public IP addresses of WP instance (http://44.203.24.125/ip.php)

Check the IP address of the MySQL RDS instance.

Now, we move on to create an image for this WordPress instance. Click Actions – Image and templates – Create image.

Next, we create a “Launch configuration” with this image.

Click “Advanced details” and select “Assign a public IP address to every instance”

Choose the Security group, and click “Create launch configuration”.

Now, create the “Auto Scaling group” for the WP instance.

Select “Attach to a new load balancer”.

Desired capacity: 2

Minimum capacity: 1

Maximum capacity: 4

Create “Auto Scaling Group”

On the Load Balancer page, copy the DNS name of the ALB.

On Target group.

Go to Instances; it can be seen that both new WP instances have been automatically created by the Auto Scaling group.

Go to your DNS zone settings on GoDaddy and add the Amazon domain name as a CNAME record, as in the screenshot below.
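The record reduces to something like this (the ALB DNS name is a placeholder; use the one copied from the Load Balancer page):

Type: CNAME
Name: alb
Value: WP-ALB-1234567890.us-east-1.elb.amazonaws.com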

Check first WP instance.

Second WP instance.

Go to alb.tungle.ca. It can be seen that WordPress Application Load Balancing has been successfully deployed on AWS.

We can see that both WP instances have accessed the primary MySQL RDS instance.

Terminate a WP instance; a new WP instance will immediately be created on AWS.

There is no downtime when terminating the WP instance.

Edit the WordPress-SG security group so that only the AutoScaling-Security-Group is allowed to access the MySQL instance.

Deploying WordPress and MySQL with Kubernetes on AWS

This is a diagram that I have used for this lab.

+ Create an Ubuntu Linux instance with 2GB RAM and 30GB storage for Kubernetes.

+ Create a MySQL deployment file.

#mysql-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

+ Create a WordPress deployment file

#wordpress-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim

+ Download the MySQL deployment configuration file.

sudo curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml

+ Download the WordPress configuration file.

sudo curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml

+ Create a kustomization.yaml.

secretGenerator:
- name: mysql-pass
  literals:
  - password=YOUR_PASSWORD
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml

+ Enable the dns, dashboard, and storage add-ons.

microk8s enable dns dashboard storage

+ Apply and verify

microk8s kubectl apply -k ./

+ Verify that the PersistentVolumeClaims (PVCs) were dynamically provisioned.

microk8s kubectl get pvc

+ Verify the Pods are running.

microk8s kubectl get pods

+ Check that Kubernetes is running.

microk8s kubectl get all --all-namespaces

+ Expose port 80 via the external IP address (10.0.0.10) of the Kubernetes instance on AWS. This allows access to WordPress from the Internet.

microk8s kubectl patch svc wordpress -n default -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.0.0.10"]}}'

Check that port 80 is listening on the Kubernetes host.
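For example, with the same check used elsewhere in these labs:

netstat -antp | grep 80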

+ Verify the WordPress service is running.
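A quick check, assuming the default namespace:

microk8s kubectl get svc wordpress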

Access the WordPress on Kubernetes. (http://54.165.173.81)

Using Kubernetes to deploy PHP Guestbook application with Redis in AWS

This is a diagram that I have used for this lab.

I have used the following link to deploy the PHP Guestbook application with Redis (https://kubernetes.io/docs/tutorials/stateless-application/guestbook/)

Set up an Ubuntu Linux with 2GB RAM and 30GB storage for the Kubernetes host. Allow SSH and HTTP from anywhere to the Linux instance on Security Group.

+ Deploy the Redis database.

Creating the Redis Deployment

#nano redis-leader-deployment.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "docker.io/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

#Apply the Redis deployment.

microk8s kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml

# Check Redis Pod is running

microk8s kubectl get pods

# View logs from the Redis leader Pod

microk8s kubectl logs -f deployment/redis-leader

+ Creating the Redis leader Service

#Apply a service to proxy the traffic to the Redis Pod.

#nano redis-leader-service.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: leader
    tier: backend

#Apply the Redis Service with the deployment file.

microk8s kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml

#verify that the Redis service is running

microk8s kubectl get service

+ Set up Redis followers to make the Redis tier highly available.

#nano redis-follower-deployment.yaml

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: gcr.io/google_samples/gb-redis-follower:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

#Apply the Redis Deployment for the redis-follower-deployment.yaml file.

microk8s kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml

# Verify two Redis follower replicas are running. 

microk8s kubectl get pods

+ Creating the Redis follower service.

#nano redis-follower-service.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    app: redis
    role: follower
    tier: backend

#Apply the Redis Service with the deployment file.

microk8s kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml

# verify that the Redis service is running.

microk8s kubectl get service

+ Set up and Expose the Guestbook Frontend.

#Creating Guestbook Frontend Deployment.

#nano frontend-deployment.yaml

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
        app: guestbook
        tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v5
        env:
        - name: GET_HOSTS_FROM
          value: "dns"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

#Apply the frontend Deployment file

microk8s kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml

#Verify three frontend replicas are running.

microk8s kubectl get pods -l app=guestbook -l tier=frontend

+ Creating the Frontend Service.

#nano frontend-service.yaml

# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend

#Apply the frontend Service with the deployment file.

microk8s kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml

#Verify the frontend Service is running.

microk8s kubectl get services

#Check the Frontend Service via LoadBalancer

microk8s kubectl get service frontend

+ Scale the Web Frontend.

microk8s kubectl scale deployment frontend --replicas=5
microk8s kubectl get pods
microk8s kubectl scale deployment frontend --replicas=2
microk8s kubectl get pods

#Check the frontend service.
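For example, re-run the earlier command:

microk8s kubectl get service frontend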

+ Expose port 80 via the external IP address (10.0.0.10) of the Kubernetes host. This allows access to the PHP Guestbook from the Internet.

microk8s kubectl patch svc frontend -n default -p '{"spec": {"type": "LoadBalancer", "externalIPs":["10.0.0.10"]}}'

Access PHP Guestbook from the Internet (http://54.165.173.81).

Building a customized Docker image using Docker compose on AWS

This is a diagram that I have used for this lab.

I have explained how to build a customized Docker image using Docker compose on-prem (https://tungle.ca/?p=2486). In this post, I will build a customized Docker image using Docker compose on AWS, then deploy WordPress via this docker container.

+ Create a new Debian Linux instance on AWS. Then, SSH to the instance and check Debian’s host version.

lsb_release -a

+ Create an index.php file with your customized information.

nano index.php

<?php
$yourname = "Tung Blog!";
$yourstudentnumber = "A123456789";
$image="tung.jpg"; // this must be included and uploaded as yourpic.jpg in your docker image (Dockerfile)
$uname=php_uname();
$all_your_output = <<<HTML
<html>
<head>
<meta charset="utf-8"/>
<title>$yourname - $yourstudentnumber</title>
</head>
<body>
<h1>$yourname - $yourstudentnumber</h1>
<img src="/$image">
<div>$uname</div>
</body>
</html>
HTML;
echo $all_your_output;
?>

Download a free .jpg image from the Internet and rename it to tung.jpg.
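For example (the URL is a placeholder; any .jpg will do):

wget https://example.com/photo.jpg -O tung.jpg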

Create your sample Dockerfile.

This tells Docker to:
– Build an image starting with the Debian 10 image.
– Label the container with your email address.
– Install Apache web service and PHP module.
– Remove the default index.html on the Apache web server document root directory.
– Copy a new index.php file and your customized image to the Apache document root directory on the docker container.
– Run the hostname command and apachectl -D FOREGROUND so Apache runs in the foreground.
– Expose port 80 to document that the container listens on it.

nano Dockerfile
FROM debian:10
LABEL maintainer="xyz@my.bcit.ca"
#COPY index.php /usr/local/apache2/htdocs
#COPY index.php /var/www/html
#RUN apt-get update && apt-get -y install apache2
RUN apt update && apt -y install apt-utils systemd && apt-get -y install libapache2-mod-php
RUN rm /var/www/html/index.html
COPY index.php /var/www/html
COPY tung.jpg /var/www/html
#CMD apachectl -D FOREGROUND
CMD hostname TungA012345678 && apachectl -D FOREGROUND
EXPOSE 80

+ Build your image with Docker.

docker build -t tung-a0123456789 .

Run your container with Docker.

docker run -d -p 80:80 --cap-add sys_admin -dit tung-a0123456789
---
-- -d runs the container detached, in the background.
-- -p 80:80 maps port 80 on the host to port 80 in the container.
-- --cap-add sys_admin grants the SYS_ADMIN capability (essentially root-level access to the host).
-- -it allocates an interactive terminal inside the container; the image in this example is tung-a0123456789.

Check that port 80 is running on the docker container.

netstat -antp | grep 80

+ Check that your application is running in a Docker container.

docker container ps -a

Connect to the Apache website with the PHP module on the docker container (http://3.239.117.185)

A few commands to use for checking the docker container.

docker ps -a
docker images
docker container ps -a
docker stop "Container ID"
docker rm "Container ID"
docker image rm "ImageID"

Implement WordPress load balancing on multiple Availability Zones with one MySQL RDS instance on AWS

This is a diagram that I have used to deploy this lab.

There are a couple of main steps that I have used in the lab.

  • Create both private subnets on both AZs
  • Create a WordPress instance on the first AZ.
  • Create a new MySQL instance.
  • Create an AMI for the WordPress instance.
  • Launch a new WordPress instance 2 on the second AZ by using your customized AMI.
  • Create an Application Load Balancer for WordPress on multiple AZs
  • Set up a checkhealth.html file to test the Application Load Balancing

Create a new VPC with CIDR 10.0.0.0/16

Create both private subnets: 10.0.0.0/24 on us-east-1a and 10.0.1.0/24 on us-east-1b.

Create a new Internet Gateway and attach this to your VPC.

Add a route 0.0.0.0/0 pointing to your Internet gateway.

Launch a new Linux instance to run WordPress on AZ1.

Copy the script below into the User data setting.

#!/bin/bash
yum update -y
# Install Apache web service
yum install httpd -y
# Download WordPress 
wget https://wordpress.org/latest.tar.gz
tar -zxf latest.tar.gz
# Install php7.4
amazon-linux-extras install php7.4 -y

On Security Group, allow SSH, HTTP, HTTPS, and MySQL/Aurora from 0.0.0.0/0.

SSH to Linux instance.

Check that httpd and php are installed on the machine.

rpm -qa | grep httpd
rpm -qa | grep php
sudo yum install php -y
sudo systemctl start httpd
sudo systemctl enable httpd
netstat -antp

Copy all files in the WordPress directory to /var/www/html.

cd wordpress
sudo cp -r * /var/www/html

Go to Amazon RDS and create subnet groups.

Create a new Database instance on AWS.

Choose the Free tier.

Enter wordpress for the “DB instance identifier” and the master username and password.

The DB instance class is db.t2.micro.

Public access is No.

Choose the Availability Zone as in the following screenshot.

Enter “wordpress” on the initial database name.

Backup retention period: 0. Then click “Create database”.

Wait a couple of minutes to completely create the database instance.

Access the WordPress site via the public IP address of the WP instance.

The database name, username, and password are wordpress.

The Database Host is the endpoint address of the RDS database on AWS, shown in the previous screenshot.

Click Submit.

Copy the entire content, open an SSH shell on the Linux instance, and create a new wp-config.php under /var/www/html.

sudo nano wp-config.php 

Back in the WP web setup, click “Run the installation”.

Click “Install WordPress”.

Log in to WP.

Now, create a new AMI image for this WP instance. Right-click the WP instance, then Actions – Image and templates – Create image.

Right-click AMI. Click Actions – Launch an instance from AMI.

Go to the load balancer, and create a new application load balancer.

Create a new WordPress ALB SG. Allow HTTP from 0.0.0.0/0 on this Security Group.

Create a target group.

Select “Instances”.

Enter “WP-ALB” for the target group name and checkhealth.html for the WP instance health check path.

Change the settings as the screenshot below. Click Next.

Select both instance IDs and click “Include as pending below”.

Create a target group.

Back in the Application Load Balancer setup, choose “WP-ALB” as the target group.

Create a load balancer.

Wait a few minutes to see “Health status” is Healthy.

SSH to the Linux instance of WordPress server 2.

Change checkhealth.html to differentiate WP1 from WP2.

On WP1.

<h1> This is health check from the WordPress Server 1 </h1>

On WP server 2.
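Presumably the same file with the server number changed, e.g.:

<h1> This is health check from the WordPress Server 2 </h1>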

Start the httpd daemon.

sudo systemctl start httpd 

Do the same on WP1 to make sure the httpd daemon is running after making the AMI template.

Access WP health check on WP server 1.

Make sure both WP servers have Healthy status on WP-ALB.

Copy the Amazon ALB link into your web browser.

Refresh; it can be seen that the web traffic is load balanced to WP server 2.

Check the connection from WP instances to the Amazon RDS database.

In the next labs, I will set up Amazon Route 53 and Amazon CloudFront with a real domain name such as awsbigfan.ca, and load balancing via HTTPS (the WordPress SSL certificate will be issued by Amazon), not HTTP. I will also configure a strict Security Group policy to strengthen security from WordPress to the Amazon RDS database.

Using Docker compose to deploy WordPress on AWS

This is a diagram that I have used for this lab.

I have explained how to use Docker compose to deploy WordPress on-prem (https://tungle.ca/?p=2381). In this article, I will install docker on the Debian Linux instance on AWS, then deploy WordPress via this docker.

SSH to the machine.

sudo apt-get update

+ Install Docker CE on Debian 10.

apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

+ Add Docker GPG key and Docker repository.

curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

+ Install Docker Engine.

apt-get update
apt-get install docker-ce docker-ce-cli

+ Enable and start Docker daemon.

systemctl start docker
systemctl enable docker
systemctl status docker

+ Next, use Docker Compose to deploy WordPress. Download the latest Docker Compose version.

sudo su
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

+ Change file permission & check the docker version.

chmod +x /usr/local/bin/docker-compose
docker-compose -v

+ Create a sample docker-compose.yml file.

mkdir mywordpress
cd mywordpress/
nano docker-compose.yml
version: "3.9"
    
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: tungwordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
  wordpress_data: {}

+ Deploy WordPress by using Docker compose.

docker-compose up -d

+ Check that TCP port 80 is running on the docker node.

netstat -antp | grep 80

+ Access WordPress.

+ Check that the Docker containers are running.

docker-compose ps
docker-compose images

Set up a Router CSR on AWS

Below is a diagram that I have used to deploy this lab.

Create a new VPC.

Create a new Public subnet and a Private subnet.

Create and attach a new Internet gateway to your VPC.

Create a new Public Route table.

Create a new route 0.0.0.0/0 pointing to your Internet gateway.

Launch a new CSR instance.

Enter 10.0.0.10 in the Primary IP setting.

Security Group.

Go to Network interfaces, and create a new network interface for Router CSR.

Then attach this network to Router CSR.

Disable “Change source/dest. check” for both Cisco CSR interfaces.

Back in the route tables, configure a new route pointing to the private Cisco CSR interface.

SSH to the Cisco router from PuTTY.

conf t
int g2
ip add 10.0.1.10 255.255.255.0
no shut
exit
ping 8.8.8.8

Launch a new Windows 2016 machine to test RDP traffic from the Internet.

Enable SNAT and DNAT on the Router.

conf t
access-list 1 permit any
# Allow inside to outside
ip nat inside source list 1 interface g1 overload
# Allow outside to Windows server via the RDP service
ip nat inside source static tcp 10.0.1.174 3389 10.0.0.10 3389
int g1
ip nat outside
int g2
ip nat inside
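To verify the NAT configuration from the CSR CLI (standard IOS show commands):

end
show ip nat translations
show ip nat statistics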

Edit Router CSR Security Group and add RDP service into this group to allow RDP traffic from the Internet.

RDP to Elastic IP address of CSR Router.

Sending FortiGate logs to Graylog open-source log management on AWS via IPSEC VPN site-to-site

This is a diagram that I have used to build this lab.

There are a couple of steps in this lab.

  • Configure IPSEC VPN site-to-site IKEv2 between FortiGate and AWS.
  • Implementing Graylog open-source log management on a Linux instance on AWS.
  • Download FortiGate Content Pack (.json file) for Graylog.
  • Upload the file into Graylog.
  • Configure FortiGate to send logs to Graylog via Graylog’s IP address and the destination UDP port 1500.

Use the link below to know how to deploy the VPN site-to-site between FortiGate on-prem and AWS.

https://tungle.ca/?p=2753

Create a new Linux instance (4GB RAM) to install Graylog.

On the Security Group, create the following rules to allow the FortiGate LAN subnets to communicate with Graylog on the AWS LAN subnets.

SSH to the Linux instance.

+ Update your system and install needed packages.

sudo hostnamectl set-hostname graylog
sudo yum update -y
sudo yum install epel-release
sudo wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/p/pwgen-2.08-1.el7.x86_64.rpm
sudo rpm -ivh pwgen-2.08-1.el7.x86_64.rpm

+ Install Java.

sudo yum install java-1.8.0-openjdk-headless.x86_64 -y
sudo java -version

+ Create a repository file. Then add the content below to this repository.

sudo nano /etc/yum.repos.d/mongodb-org.repo
[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc

+ Install MongoDB.

sudo yum install mongodb-org -y

+ Enable and start the mongoDB service on the system.

sudo systemctl daemon-reload
sudo systemctl enable mongod.service
sudo systemctl start mongod.service
sudo systemctl --type=service --state=active | grep mongod

+ Check MongoDB service port.

netstat -antp | grep 27017

+ Installing Elasticsearch.

Create a repository, then add the following contents to the file.

sudo nano /etc/yum.repos.d/elasticsearch.repo

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/oss-6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1

Install the open-source version of Elasticsearch.

sudo yum install elasticsearch-oss -y
#Edit the elasticsearch.yml file on /etc/elasticsearch/elasticsearch.yml
sudo nano /etc/elasticsearch/elasticsearch.yml

Modify the Elasticsearch configuration file. Set the cluster name to graylog and add “action.auto_create_index: false” to the file.
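The two changed lines in /etc/elasticsearch/elasticsearch.yml look like this:

cluster.name: graylog
action.auto_create_index: false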

Save and exit the file. Enable, start, and check the status of Elasticsearch on the system.

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl restart elasticsearch.service
sudo systemctl --type=service --state=active | grep elasticsearch

Check Elasticsearch health.

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

+ Installing Graylog.

Now install the Graylog repository configuration with the following command.

sudo rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-4.2-repository_latest.rpm

Install Graylog-server.

sudo yum install graylog-server -y

Configure Graylog.

Generate password_secret.

pwgen -N 1 -s 96

[ec2-user@ip-10-0-0-64 ~]$ pwgen -N 1 -s 96
Bv6a46BXTALlfI3VRZ3ACfzBoIZOo3evqd7v7FY0fsrSXNZDflPcWRtYoxRrm5BZfMvq2TKffWEobYL6iSwBW908gpSC9z79

Generate root_password_sha2.

echo -n graylog@123 | sha256sum | cut -d" " -f1

[ec2-user@ip-10-0-0-64 ~]$ echo -n graylog@123 | sha256sum | cut -d" " -f1
cc41de147e5c624c6a7c230648545f6d14f82fa0e591148dc96993b3e539abfc

Edit the /etc/graylog/server/server.conf file.

sudo nano /etc/graylog/server/server.conf
Comment out the following line.
#http_bind_address = 127.0.0.1:9000

Add the following line with the IP address of Graylog.
http_bind_address = 10.0.0.64:9000 
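The two values generated earlier also go into the same server.conf file, e.g. with the outputs above:

password_secret = Bv6a46BXTALlfI3VRZ3ACfzBoIZOo3evqd7v7FY0fsrSXNZDflPcWRtYoxRrm5BZfMvq2TKffWEobYL6iSwBW908gpSC9z79
root_password_sha2 = cc41de147e5c624c6a7c230648545f6d14f82fa0e591148dc96993b3e539abfc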

Enable and start Graylog service.

sudo systemctl enable graylog-server.service
sudo systemctl start graylog-server.service

Check Graylog Server listening port.

netstat -antp | grep 9000

Access the Graylog web interface from the Kali machine on the FortiGate LAN subnets.

http://10.0.0.64:9000
user:admin
password:graylog@123

Back on FortiGate, configure the syslog setting to send logs to the Graylog server at IP address 10.0.0.64 with destination port 1500.

config log syslogd setting
set status enable
set server 10.0.0.64
set port 1500
end 
show log syslogd setting

On Graylog.

Download the FortiGate Content Pack from GitHub.

https://marketplace.graylog.org/addons/f1b25e9c-c908-41e4-b5de-4549c500a9d0

https://github.com/teon85/fortigate6.4_graylog4

Download the JSON file (fortigate6.4_graylog4.json)

Go to System – Content Packs – Upload. Select the file (fortigate6.4_graylog4.json) and upload.

Click Install.

Change the Syslog port to 1500.

FortiGate dashboard.

Send Palo Alto logs on-prem to Splunk on AWS via VPN site-to-site

This is a diagram that I have used to deploy this lab.

We need to deploy a VPN site-to-site between Palo Alto on-prem and AWS.

On AWS.

On Palo Alto.

Ping the Splunk instance (10.0.0.110) via the ethernet1/2 interface.

The VPN site-to-site tunnel is up in Palo Alto.

Set up a new Windows 2016 instance with 4 GB memory to run Splunk Enterprise on AWS.

RDP to the instance and install Splunk Enterprise. Then, add Splunk for Palo Alto on this instance.

Configure Splunk to get Palo Alto logs via UDP port 514.
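This can be done via Settings – Data inputs – UDP; a sketch of the equivalent inputs.conf stanza (the pan:log sourcetype is assumed to come from the Palo Alto add-on):

[udp://514]
sourcetype = pan:log
connection_host = ip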

Check that UDP port 514 is listening on the Splunk instance.

Go to Palo Alto, and configure Syslog to send logs to Splunk.

By default, Palo Alto uses the management interface to send logs. We need to change the interface so that Palo Alto sends logs via ethernet1/2 (the LAN interface).

Log on to the PA console, type configure, and run the command below to change the interface used to send logs.

set deviceconfig system route service syslog source interface e1/2

Also, we can go to Device – Setup – Service Route Configuration – Syslog. Configure the source interface and source IP address like the following screenshot.

Configure Syslog on Palo Alto.

IP address: 10.0.0.110 (Splunk instance)

Port: 514 UDP
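A hedged CLI sketch of the same syslog server profile, assuming a hypothetical profile name Splunk (the GUI path is Device – Server Profiles – Syslog):

set shared log-settings syslog Splunk server Splunk-AWS server 10.0.0.110 transport UDP port 514 format BSD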

Log off and enter a wrong password on Palo Alto, then log back in, to generate logs to send to Splunk.

We can see that “failed authentication” log events have been generated in Splunk.