Set up VPN site-to-site between FortiGate on-prem and AWS. Send FortiGate logs to Splunk on AWS

This is a diagram that I have used for this demonstration.

Create your VPC.

Create a private subnet.

Create a new Internet Gateway and attach it to your VPC.

Create a new route for 0.0.0.0/0 pointing to your Internet Gateway.
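
If you prefer the AWS CLI to the console, the same VPC plumbing can be sketched roughly as below; the CIDRs and resource IDs are placeholders for your own values:

# create the VPC and a private subnet
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24
# create an Internet Gateway, attach it, and add the default route
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx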

Create a new Customer gateway with the public IP address of FortiGate.

Create a new Virtual Private Gateway and attach it to your VPC.

Create a new VPN site-to-site.

Click Download Configuration to configure on your FortiGate.

Log into FortiGate.

Interfaces.

Copy these commands and paste them into FortiGate. Note that “set mtu 1427” and “set mtu-override enable” are not available on FortiGate 6.2.

Back in AWS, launch a new Linux VM instance. This machine is used to test the site-to-site VPN.

Configure a new static route to allow LAN subnets on AWS to access LAN subnets on FortiGate.

On FortiGate, configure a new static route to AWS LAN subnets.

Configure access rules to allow FortiGate LAN subnets to communicate with AWS LAN subnets.

Ping from the Kali machine to the Linux VM instance on AWS.

The IPSEC tunnel in FortiGate is up.

Back to AWS, the VPN tunnel is up.

Launch a new Windows Server 2016 VM instance to install Splunk.

In the Security Group, add a couple of rules to allow ICMP and all traffic from the FortiGate LAN subnets to this instance.

RDP to the Windows instance and disable the firewall so the instance can receive logs from FortiGate.

Download Splunk Enterprise for Windows and install it into this instance.

Install the FortiGate App for Splunk and the Fortinet FortiGate Add-On for Splunk.

Click the Settings tab and configure Splunk to receive FortiGate logs. Add a new Local UDP input.

Enter 514 for the port setting. By default, FortiGate uses UDP port 514 to send logs to a syslog server.

Source type: fgt_log

App Context: Fortinet FortiGate App for Splunk

Method: IP

Index: Default

Check that UDP port 514 is listening on the instance.
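
One way to confirm this from a Command Prompt on the Windows instance (a quick check, assuming the built-in netstat tool):

netstat -an | findstr ":514"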

Back on FortiGate, configure it to send logs to Splunk on AWS. Enter the Splunk IP address in the IP Address setting, choose All for both “Event Logging” and “Local Logging”, and click Apply.

Log out of FortiGate and log back in to generate logs. If FortiGate logs do not show up in Splunk, use the commands below to change the syslog source IP address from the management interface to the LAN interface (172.16.1.254).

config log syslogd setting
    set status enable
    set mode udp
    set port 514
    set server "10.0.0.48"
    set source-ip "172.16.1.254"
end

Also, enable PING, HTTP, and HTTPS administrative access on the tunnel 1 interface of the FortiGate.

Splunk is able to ping the FortiGate LAN interface.

Back on the Splunk instance, we can now see logs from FortiGate.

Deploying WordPress and MySQL with Kubernetes

I have used this link below to deploy WordPress and MySQL with Kubernetes.

https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

+ mysql-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

+ wordpress-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim

+ Download the MySQL deployment configuration file.

curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml

+ Download the WordPress configuration file.

curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml

+ Create a kustomization.yaml.

# Add a Secret generator to kustomization.yaml, as shown in the file below.

tung@node1:~$ cat kustomization.yaml 
secretGenerator:
- name: mysql-pass
  literals:
  - password=YOUR_PASSWORD
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml

+ Apply and Verify

microk8s kubectl apply -k ./

+ Verify all objects exist.

microk8s kubectl get secrets

+ Verify the PersistentVolumeClaims (PVCs) were dynamically provisioned.

microk8s kubectl get pvc

+ Verify the Pod is running.

microk8s kubectl get pods

+ Verify the Service is running.

microk8s kubectl get service wordpress

+ Access the WordPress site.
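
On MicroK8s, the LoadBalancer Service may stay pending unless MetalLB is enabled; the site is still reachable on a node IP at the allocated NodePort. A quick way to print that port (a sketch, using the Service name above):

microk8s kubectl get service wordpress -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'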

+ Check Kubernetes is running.

microk8s kubectl get all --all-namespaces

Using Splunk to find information on Linux and Windows logs

Splunk – Settings – Data inputs – Files & directories – New Local File & Directory – browse to the Linux log file.

Save the log with the sourcetype linux.

+ Count the number of Failed password events for user root.

sourcetype="linux" Failed password root | stats count

+ Count the number of Failed password events excluding root.

sourcetype="linux" Failed password NOT root | stats count
+ Count the number of IP addresses and show the top 10.
sourcetype=linuxlogs NOT 'allurbase' | stats count by IP | head 10

+ Show the top 5 port numbers used for ssh2.

sourcetype="linux" | stats count by sshport | sort count by desc | head 5

+ Show the top 5 users with sessions opened.

sourcetype="linux" session opened for user | stats count by user | sort count desc | head 5 

Import the Windows log file.

+ Count events by collection where the count is greater than 100.

source="windows_perfmon_logs.txt" | stats count by collection | where count>100 | sort collection desc

+ Count events by ComputerName starting with acme, sorted in descending order.

source="windows_perfmon_logs.txt" ComputerName="acme*" | stats count by ComputerName | sort count desc

Implementing Elastic Network Load Balancing on both FortiGates in multiple AZs

This is a diagram that is used to deploy this lab.

In this lab, we will use an Elastic Load Balancer to distribute RDP traffic to Windows Server 2016 VM instances through FortiGates in different AZs on AWS.

Below are the steps used to deploy this lab.

  • Create your VPC, subnets, and route tables.
  • Launch FortiGate 1 in AZ 1 and FortiGate 2 in AZ 2.
  • Create Windows 2016 VMs in both AZ 1 and AZ 2.
  • Configure DNAT to allow RDP traffic from the Internet to the Windows Server 2016 instance in each AZ.
  • Configure Elastic Network Load Balancing on both FortiGates across multiple AZs.
  • Verify that RDP traffic is distributed to Windows 2016 VM1 and VM2 via Elastic Network Load Balancing.

Create a new VPC.

Create both Public subnet 1 and Private subnet 1 on the first Availability Zone.

Create both Public subnet 2 and Private subnet 2 in Availability Zone 2.

Create 4 route tables as in the diagram above.

Link the subnets to corresponding route tables.

Create a new FortiGate on AZ 1.

Security Group.

Create a new Elastic IP address and associate it with the first FortiGate.

Launch the new FortiGate instance on AZ 2.

Rename the network interfaces to Fortinet Zone 1 Public subnet and Fortinet Zone 2 Public subnet.

Create a new Fortinet Zone 1 Private subnet.

Attach it to the first FortiGate.

Create a new Fortinet Zone 2 Private subnet and attach it to FortiGate 2.

Uncheck “Change source/destination check” on all FortiGate interfaces.
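
The same change can be scripted with the AWS CLI; this is only a sketch, and the ENI ID is a placeholder for each FortiGate interface:

aws ec2 modify-network-interface-attribute --network-interface-id eni-xxxxxxxx --no-source-dest-check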

Back to Route tables.

Create a new route 0.0.0.0/0 on Public Route table 1 via Fortinet Zone 1 Public subnet interface.

Create a new route 0.0.0.0/0 on Public Route table 2 via Fortinet Zone 2 Public subnet interface.

Create a new route 0.0.0.0/0 on Private Route table 1 via Fortinet Zone 1 Private subnet interface.

Create a new route 0.0.0.0/0 on Private Route table 2 via the Fortinet Zone 2 Private subnet interface.

Access FortiGate management interface.

The FortiGate 1.

Change the LAN setting for port 2.

Do the same with FortiGate 2.

Create two new Windows Server 2016 instances on AZ1 and AZ2.

Windows Security Group.

Launch the new one.

Go to FortiGate 1 and configure DNAT for port 3389 to the Windows Server 2016 VM 1 instance.

Create a new inbound policy to allow traffic from the Internet to Windows 2016 instance.

On FortiGate 2.

Create a new Firewall Policy.

Edit the Security Group to allow RDP to Windows 2016 VM 2 instance.

Access Windows VM 1.

Create Network Load Balancer on AWS for RDP traffic to Windows Server 2016 instance.

Select “IP address”.

Add the IP addresses on the public subnet of both FortiGates under “Register targets”.

Click Register targets.

Wait until the health states on both IP addresses are healthy.

Right-click on FortiGate-NLB-RDP and enable “Cross zone load balancing” to allow load balancing across multiple AZs.
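
Cross-zone load balancing can also be enabled from the AWS CLI; a sketch, with the load balancer ARN as a placeholder:

aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/net/FortiGate-NLB-RDP/xxxx \
  --attributes Key=load_balancing.cross_zone.enabled,Value=true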

Set the same Windows password for both Windows 2016 instances.

RDP to the highlighted DNS name of the NLB.

An RDP session will access Windows Server VM 1 or VM 2 via Elastic Load Balancing.

We could similarly configure web servers on both Windows Server 2016 VMs and distribute web traffic through the FortiGates in different AZs on AWS.

Building a customized Docker image using Docker compose

In this lab, I will explain how to use Docker compose to build your customized Docker image.

+ Create your sample Dockerfile.

This tells Docker to:
Build an image starting with the Debian 10 image.
Label the image with your email address.
Install the Apache web server and PHP module.
Remove the default index.html from the Apache document root directory.
Copy a new index.php file and your customized image into the Apache document root on the Docker container.
Run the hostname command and then run apachectl -D FOREGROUND in the foreground.
Expose port 80 to document that the container listens on that port.

root@docker01:~# cat Dockerfile 
FROM debian:10
LABEL maintainer="xyz@my.bcit.ca"
#COPY index.php /usr/local/apache2/htdocs
#COPY index.php /var/www/html
#RUN apt-get update && apt-get -y install apache2
RUN apt update && apt -y install apt-utils systemd && apt-get -y install libapache2-mod-php
RUN rm /var/www/html/index.html
COPY index.php /var/www/html
COPY tung.jpg /var/www/html
#CMD apachectl -D FOREGROUND
CMD hostname TungA012345678 && apachectl -D FOREGROUND
EXPOSE 80

+ Create an index.php file with your customized information.

root@docker01:~# cat index.php 
<?php
$yourname = "Tung Blog!";
$yourstudentnumber = "A123456789";
$image="tung.jpg"; // this must be included and uploaded as yourpic.jpg in your docker image (Dockerfile)
$uname=php_uname();
$all_your_output = <<<HTML
<html>
<head>
<meta charset="utf-8"/>
<title>$yourname - $yourstudentnumber</title>
</head>
<body>
<h1>$yourname - $yourstudentnumber</h1>
<img src="/$image">
<div>$uname</div>
</body>
</html>
HTML;
echo $all_your_output;
?>

+ Build your app image with docker build.

docker build -t tung-a01234567 .

+ Run your app with docker run.

docker run -d -p 80:80 --cap-add sys_admin -dit tung-a01234567
---
-- -d runs the container detached (in the background).
-- -p 80:80 publishes port 80 on the host to port 80 in the container.
-- --cap-add sys_admin adds the CAP_SYS_ADMIN capability (essentially root-level access on the host).
-- -dit combines detached mode with an interactive TTY, so a terminal can later be attached to the container. In this example, the image is tung-a01234567.

Check that your application is running in a Docker container.

docker container ps -a

Connect to the Apache website with PHP module on the docker container.

http://192.168.5.46/index.php
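
A quick way to confirm the page is served from the command line (using the lab's IP address above; curl assumed to be installed):

curl -I http://192.168.5.46/index.php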

Deploy VPN IPSEC site-to-site IKEv2 tunnel between Cisco CSR Router and AWS

This is a diagram that is used to deploy this lab.

Create a new VPC with CIDR 10.0.0.0/16. Then, create a new private subnet on AWS with CIDR 10.0.0.0/24.

Next, create a Customer gateway on AWS.

Create a virtual private gateway and attach this to your VPC.

Create a site-to-site VPN between AWS and the CSR router.

Click Download Configuration to get the settings to apply on the Cisco CSR.

Add a route to the Cisco CSR LAN subnets in the AWS private route table.

Configure CoreSW.

conf t
hostname CoreSW
ip routing
ip dhcp excluded-address 172.16.10.1 172.16.10.10
!
ip dhcp pool VLAN10
 network 172.16.10.0 255.255.255.0
 default-router 172.16.10.1
 dns-server 172.16.20.12

interface GigabitEthernet0/0
 no switchport
 ip address 172.16.1.1 255.255.255.0
!
interface GigabitEthernet0/1
 switchport trunk allowed vlan 10,20,99
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 99
 switchport mode trunk
 negotiation auto
!
interface GigabitEthernet0/2
 switchport trunk allowed vlan 10,20,99
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 99
 switchport mode trunk

interface Vlan10
 ip address 172.16.10.1 255.255.255.0
!
interface Vlan20
 ip address 172.16.20.1 255.255.255.0
!
router ospf 1
 router-id 1.1.1.1
 network 172.16.0.0 0.0.255.255 area 0
!
ip route 0.0.0.0 0.0.0.0 172.16.1.254
--
Configure VLAN
CoreSW(config)#vlan 10
CoreSW(config-vlan)#name PCs
CoreSW(config-vlan)#vlan 20
CoreSW(config-vlan)#name Servers
CoreSW(config-vlan)#vlan 99
CoreSW(config-vlan)#name Native
CoreSW(config-vlan)#do sh vlan bri

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Gi0/3, Gi1/0, Gi1/1, Gi1/2
                                                Gi1/3, Gi2/0, Gi2/1, Gi2/2
                                                Gi2/3, Gi3/0, Gi3/1, Gi3/2
                                                Gi3/3
10   PCs                              active
20   Servers                          active
99   Native                           active

Configure Cisco CSR.

interface GigabitEthernet1
 ip address dhcp
 ip nat outside
 negotiation auto
 no mop enabled
 no mop sysid
!
interface GigabitEthernet2
 ip address 172.16.1.254 255.255.255.0
 ip nat inside
 negotiation auto
 no mop enabled
 no mop sysid
router ospf 1
 router-id 3.3.3.3
 network 172.16.0.0 0.0.255.255 area 0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip route 0.0.0.0 0.0.0.0 142.232.198.254

Next, open the file downloaded from AWS, then copy and paste its configuration into the Cisco CSR to create both IPsec site-to-site IKEv2 tunnels on the router.

Cisco CSR configuration
-------
crypto ikev2 proposal PROPOSAL1
 encryption aes-cbc-128
 integrity sha1
 group 2
!
crypto ikev2 policy POLICY1
 match address local 142.232.198.195
 proposal PROPOSAL1
!
crypto ikev2 keyring KEYRING1
 peer 3.209.99.165
  address 3.209.99.165
  pre-shared-key IuhDpOyPazd.NIHiEh.3Q_uY99mDw98X
 !
 peer 54.83.195.0
  address 54.83.195.0 255.255.255.0
  pre-shared-key tlDEo5uQkac9zzMt3s.kgU6ARGma5Cq8
 !

!
crypto ikev2 profile IKEV2-PROFILE
 match address local 142.232.198.195
 match identity remote address 3.209.99.165 255.255.255.255
 match identity remote address 54.83.195.0 255.255.255.0
 authentication remote pre-share
 authentication local pre-share
 keyring local KEYRING1
 lifetime 28800
 dpd 10 10 on-demand
crypto isakmp keepalive 10 10
!
crypto ipsec security-association replay window-size 128
!
crypto ipsec transform-set ipsec-prop-vpn-0857221ac6c8785fe-0 esp-aes esp-sha-hmac
 mode tunnel
crypto ipsec transform-set ipsec-prop-vpn-0857221ac6c8785fe-1 esp-aes esp-sha-hmac
 mode tunnel
crypto ipsec df-bit clear
!
crypto ipsec profile ipsec-vpn-0857221ac6c8785fe-0
 set transform-set ipsec-prop-vpn-0857221ac6c8785fe-0
 set pfs group2
 set ikev2-profile IKEV2-PROFILE
!
crypto ipsec profile ipsec-vpn-0857221ac6c8785fe-1
 set transform-set ipsec-prop-vpn-0857221ac6c8785fe-1
 set pfs group2
 set ikev2-profile IKEV2-PROFILE
interface Tunnel1
 ip address 169.254.143.114 255.255.255.252
 ip tcp adjust-mss 1379
 tunnel source 142.232.198.195
 tunnel mode ipsec ipv4
 tunnel destination 3.209.99.165
 tunnel protection ipsec profile ipsec-vpn-0857221ac6c8785fe-0
 ip virtual-reassembly
!
interface Tunnel2
 ip address 169.254.192.6 255.255.255.252
 ip tcp adjust-mss 1379
 tunnel source 142.232.198.195
 tunnel mode ipsec ipv4
 tunnel destination 54.83.195.0
 tunnel protection ipsec profile ipsec-vpn-0857221ac6c8785fe-1
 ip virtual-reassembly
!
interface GigabitEthernet1
 ip address dhcp
 ip nat outside
 negotiation auto
 no mop enabled
 no mop sysid
!
interface GigabitEthernet2
 ip address 172.16.1.254 255.255.255.0
 ip nat inside
 negotiation auto
 no mop enabled
 no mop sysid
router ospf 1
 router-id 3.3.3.3
 network 172.16.0.0 0.0.255.255 area 0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip route 0.0.0.0 0.0.0.0 142.232.198.254
ip route 10.0.0.0 255.255.255.0 Tunnel1
ip route 10.0.0.0 255.255.255.0 Tunnel2
!
ip access-list standard 1
 10 permit any

Show the CSR interfaces.

CSR# sh ip int brief
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       142.232.198.195 YES DHCP   up                    up
GigabitEthernet2       172.16.1.254    YES manual up                    up
GigabitEthernet3       unassigned      YES unset  administratively down down
GigabitEthernet4       unassigned      YES unset  administratively down down
Tunnel1                169.254.143.114 YES manual up                    up
Tunnel2                169.254.192.6   YES manual up                    up

show ip ospf neighbor

show ip route

show crypto ikev2 sa

show crypto ipsec sa

Ping the Linux instance on AWS from a machine on the CSR LAN subnet.

Ping a Windows machine on the CSR LAN subnet from the Linux instance on AWS.

Both tunnels are up on AWS and CSR Router.


Using Docker compose to deploy WordPress

I have previously demonstrated using Ansible to deploy WordPress (https://tungle.ca/?p=1252). However, it is more efficient to use Docker Compose to deploy WordPress.

+ Download the latest Docker Compose version.

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

+ Change file permission & check docker version.

chmod +x /usr/local/bin/docker-compose
docker-compose --version

+ Create a sample docker-compose.yml file.

Use the following link as a reference to create the docker-compose file.

https://docs.docker.com/samples/wordpress/

Forward destination port 5547 on the Docker node to port 80 on the Docker container.

Limit WordPress to 2 GB of memory.
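
The resulting docker-compose.yml might look roughly like the sketch below, based on the official WordPress sample linked above, with the 5547:80 port mapping and the 2 GB memory limit added. The service names, image tags, passwords, volume name, and the compose file version are assumptions to adjust for your environment:

cat > docker-compose.yml <<'EOF'
version: "2.4"
services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    volumes:
      - db_data:/var/lib/mysql
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    restart: always
    ports:
      - "5547:80"      # destination port 5547 on the node -> port 80 in the container
    mem_limit: 2g      # limit WordPress to 2 GB of memory (Compose file v2 syntax)
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
EOF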

+ Deploy WordPress by using Docker compose.

docker-compose up -d

Check that TCP port 5547 is listening on the Docker node.
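
For example (assuming netstat is installed on the node):

netstat -ant | grep 5547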

Access to WordPress.

Deploy VPN site-to-site between Palo Alto on-prem and AWS. Set up OpenVPN and an additional domain controller on AWS

This is the diagram used to deploy this lab.

In this lab:

  • Configure a site-to-site IKEv2 VPN between the Palo Alto and the Virtual Private Gateway on AWS.
  • Implement multi-master domain controllers on-prem and on AWS.
  • Authenticate the OpenVPN tunnel via LDAP so that people working from home can access corporate servers on AWS.
  • Disconnect the on-prem domain controller to simulate migrating corporate servers to AWS in the near future.

Core Switch configuration.

CoreSW
conf t
hostname CoreSW
ip routing
ip dhcp excluded-address 172.16.10.1 172.16.10.10
!
ip dhcp pool VLAN10
 network 172.16.10.0 255.255.255.0
 default-router 172.16.10.1
 dns-server 172.16.20.12

interface GigabitEthernet0/0
 no switchport
 ip address 172.16.1.1 255.255.255.0
!
interface GigabitEthernet0/1
 switchport trunk allowed vlan 10,20,99
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 99
 switchport mode trunk
 negotiation auto
!
interface GigabitEthernet0/2
 switchport trunk allowed vlan 10,20,99
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 99
 switchport mode trunk

interface Vlan10
 ip address 172.16.10.1 255.255.255.0
!
interface Vlan20
 ip address 172.16.20.1 255.255.255.0
!
router ospf 1
 router-id 1.1.1.1
 network 172.16.0.0 0.0.255.255 area 0
!
ip route 0.0.0.0 0.0.0.0 172.16.1.254

--- 
SWCore#sh vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Gi0/3, Gi1/0, Gi1/1, Gi1/2
                                                Gi1/3, Gi2/0, Gi2/1, Gi2/2
                                                Gi2/3, Gi3/0, Gi3/1, Gi3/2
                                                Gi3/3
10   End users                        active
20   Servers                          active
99   Native VLAN                      active

Check the Kali VM, and start the SSH and Apache services on this machine.

On Palo Alto.

LAN interface.

e1/1 belongs to the VPN zone, and e1/2 belongs to the LAN zone, respectively.

Create a new network object for the PA LAN subnet.

Configure SNAT to allow traffic from the PA LAN subnet to access the Internet.

Configure a default route.

Configure OSPF on PA.

Allow ICMP on the Mgmt interface to troubleshoot.

Ping from PA.

Ping from a VM on the PA LAN subnet.

+ Create a new VPC.

Create a private subnet.

Create and attach Internet gateway to your VPC.

Route table.

Add a new route to your Internet Gateway.

Go to VPN, create a customer gateway.

Create a new VPN gateway.

Attach it to your VPC.

Create a VPN site to site.

Go to the Route table and add a new route to PA LAN subnet.

Click Download Configuration and select information as the following screenshot.

Open the file to use for configuring PA.

Configure the IKE Crypto profile.

Configure the IPSec Crypto profile.

Configure IKE Gateway.

Create a new tunnel.1 interface for the IPsec site-to-site VPN between AWS and the PA.

Configure IPSEC Tunnel.

On Virtual Routers, add an interface tunnel 1 on the router settings.

Create a new static route to the AWS LAN subnet.

New address object.

Create both Security policies to allow traffic from LAN to VPN.

+ Back on AWS, create new Linux and Windows instances.

Create a new key pair on AWS.

Allow HTTP, SSH, and ICMP on Security Group.

Back to GNS3, configure a new Windows 2016 server VM.

Take note of the IP address of the Linux instance on AWS.

Ping the Linux instance on AWS LAN subnet from PA LAN subnet.

The tunnel is up on PA

On AWS, the tunnel is up as well.

Configure Windows 2016 on GNS3.

Install Windows 2016.

On Kali, SSH to the Linux VM instance on AWS.

Website on-prem.

Website on AWS.

Change computer name to DC1 and promote it to a domain controller.

Create a new Windows VM on AWS.

Create a new OpenVPN server instance on AWS.

Access the OpenVPN server via SSH. Log in as the openvpnas user.

Check that the private subnet configured on OpenVPN matches the private subnet on AWS.

Change the password of openvpn.

From the Windows 2016 VM on GNS3, RDP to the Windows instance on AWS. Change its DNS setting to point to the on-prem DC1.

Join the machine to the on-prem domain and promote it to an additional domain controller.

Create a couple of test users on the domain controllers: tung, kevin, and test.

On OpenVPN.

Change the setting to authenticate the OpenVPN tunnel via LDAP. We use both LDAP servers on AWS and on-prem.

Configure LDAP settings to query the corresponding information on domain controllers.

Access the OpenVPN mgmt interface.

Log in as the kevin user.

Access a web server on a private subnet on AWS.

RDP to a private IP address on Windows DC2 on AWS.

Monitor Security traffic on PA.

Join Windows 10 to the domain.

Disconnect the interface from DC1 to SW2 to simulate migrating servers to the AWS cloud.

Windows 10 can still access the domain via DC2 on AWS.

Access RDP to DC2 and a web server on AWS.

Domain users are still able to access the domain controller and a web server on AWS while the on-prem domain controller is down.

Install Docker Swarm Cluster on Debian 10

This is the diagram used to deploy the 3-node Docker Swarm.

On three nodes: docker01, docker02, and docker03.

apt-get update

Install Docker CE on Debian 10.

apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

Add Docker GPG key and Docker repository.

curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

Install Docker Engine

apt-get update
apt-get install docker-ce docker-ce-cli

Enable and start Docker daemon.

systemctl start docker
systemctl enable docker

+ Initialize Docker Swarm Cluster on node 1: docker01

docker swarm init --advertise-addr 192.168.5.46

Join docker02 and docker03 to the cluster.
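
The join command and worker token are printed by docker swarm init on docker01; it looks roughly like this (the token below is a placeholder):

docker swarm join --token SWMTKN-1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 192.168.5.46:2377

If the token was not saved, it can be shown again on docker01 with docker swarm join-token worker.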

Create a custom httpd image with listening port 8847.

We can check whether the worker nodes have joined the cluster successfully using the command below:

docker node ls

+ Deploy Applications on Docker Swarm

Below is an example of deploying an Apache web service with listening port 8847 and service name tunghttp.

docker service create --name tunghttp -p 8847:80 httpd

+ Scale applications on Docker Swarm for high availability and performance across the 3 nodes.

docker service scale tunghttp=3
docker service ps tunghttp

Create a new index.html file on the Docker host. Then, copy the file into the Apache web server directory as in the screenshot below; a command-line sketch follows the HTML.

<html>
<head>
<title>Tung A01</title> 
</head>
<body>
	<h1>Welcome to the web server that is deployed by Docker</h1>
	<img src="http://imagefromtheinternet.jpg">
</body>
</html>
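
One way to copy the file into the running Apache containers is docker cp. This is a sketch only, run on each node, assuming the service name tunghttp above and the httpd image's default document root /usr/local/apache2/htdocs:

# copy index.html into every tunghttp task container running on this node
for c in $(docker ps -q --filter name=tunghttp); do
  docker cp index.html "$c":/usr/local/apache2/htdocs/index.html
done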

Open the web browser and access the web server via port 8847 on three Docker IP addresses.

Check three nodes.

 docker node ls

Check the 8847 port is running on the Docker node.

netstat -ant | grep 8847

Deploying FortiGate HA by using CloudFormation on AWS

This is a diagram to deploy FortiGate HA by using CloudFormation on AWS.

Create a new VPC.

Create a public subnet.

Create a private subnet.

Create a subnet for Synchronization between both FGs.

Create a new subnet for FortiGate management.

Public subnet: 10.0.0.0/24

Private subnet: 10.0.1.0/24

FGSync subnet: 10.0.3.0/24

FGHA mgmt subnet: 10.0.4.0/24

Create a new Internet gateway, and attach it to your VPC.

Create a new public route.

Edit the public route, and add a new default route to your internet gateway.

Associate both the public and HAmgmt subnets with the public route table.

Create a new key pair.

Create a new S3 bucket and leave the default settings.

Go to Fortinet's GitHub and download the JSON template for an existing VPC, as in the screenshot below.

https://github.com/fortinet/aws-cloudformation-templates/tree/main/FGCP/7.0/SingleAZ

Go to CloudFormation on AWS, click to create a new stack to deploy FortiGate HA.

Upload the template into this stack.

Enter your stack name, VPCID, VPCCIDR, and link public, private, sync, HAmgmt to corresponding subnets.

Choose the smallest instance type for the lab, which is c5.xlarge.

Copy the public route table ID into the publicsubnetroutetableID field.

The license is PAYG.

Click Next and accept the settings by default.

Click create stack. It will take a couple of minutes to complete.
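
For reference, the same stack can be created with the AWS CLI. This is only a sketch: the template file name and stack name are placeholders, the parameter keys must match the downloaded template (VPCID and VPCCIDR are shown as examples), and CAPABILITY_IAM is acknowledged on the assumption that the template creates IAM resources:

aws cloudformation create-stack \
  --stack-name fortigate-ha \
  --template-body file://FGCP_ExistingVPC.template.json \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=VPCID,ParameterValue=vpc-xxxxxxxx ParameterKey=VPCCIDR,ParameterValue=10.0.0.0/16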

On the Outputs tab, copy all the information to Notepad to keep track of it.

There are three Elastic IP addresses that have been created on your VPC.

This is a master FG.

It is linked to a default Security Group that was created with the stack.

Wait until both FGs have passed their status checks.

Access the Primary HA FG via cluster IP address.

Both access rules were automatically created with the stack.

Access FG1 and FG2 via their mgmt IP addresses.

Check HA status.

FG1.

On FG2, there is only an elastic IP address.

Update the Elastic IP address.

Note the network interface ID of the FortiGate.

Edit the private subnet route table and add a new route that sends all traffic from the subnet to the network interface of the master FG.

On FG2, open the console and type the command below.

diagnose debug application awsd -1
diagnose debug enable

On FG1, click instance state and stop the instance.

The Cluster IP address has been successfully moved to FG2.

In the S3 bucket, we can see that two config files for FG1 and FG2 were created when the stack was deployed.

AWS only supports unicast for the HA heartbeat.

Refresh the cluster IP management access.

FG2 has become the Primary for HA.

The route has been updated to use the private network interface of FG2.

Also, we can see that “source/destination check” has been disabled on all interfaces.

To tear down the lab, go to CloudFormation and delete the stack that was created for this lab.