
Using AWS CloudFormation to deploy a VPC, subnet, Security Group, Linux instance, and static Apache website.

Create a new key pair.

Upload the template to a stack in AWS CloudFormation.

Upload a template file.

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "AWS CloudFormation Sample Template VPC_with_PublicIPs_And_DNS: Sample template that creates a VPC with DNS and public IPs enabled. Note that you are billed for the AWS resources that you use when you create a stack from this template.",
  "Parameters": {
    "KeyPair": {
      "Description": "Name of the keypair to use for SSH access",
      "Type": "String"
      }
  },

  "Resources" : {
    "VPC" : {
      "Type" : "AWS::EC2::VPC",
      "Properties" : {
        "EnableDnsSupport" : "true",
        "EnableDnsHostnames" : "true",
        "CidrBlock" : "10.0.0.0/16"
      }
    },
    "PublicSubnet" : {
      "Type" : "AWS::EC2::Subnet",
      "Properties" : {
        "VpcId" : { "Ref" : "VPC" },
        "CidrBlock" : "10.0.0.0/24"
      }
    },
    "InternetGateway" : {
      "Type" : "AWS::EC2::InternetGateway"
    },
    "VPCGatewayAttachment" : {
       "Type" : "AWS::EC2::VPCGatewayAttachment",
       "Properties" : {
         "VpcId" : { "Ref" : "VPC" },
         "InternetGatewayId" : { "Ref" : "InternetGateway" }
       }
    },
    "PublicRouteTable" : {
      "Type" : "AWS::EC2::RouteTable",
      "Properties" : {
        "VpcId" : { "Ref" : "VPC" }
      }
    },
    "PublicRoute" : {
      "Type" : "AWS::EC2::Route",
      "DependsOn" : "VPCGatewayAttachment",
      "Properties" : {
        "RouteTableId" : { "Ref" : "PublicRouteTable" },
        "DestinationCidrBlock" : "0.0.0.0/0",
        "GatewayId" : { "Ref" : "InternetGateway" }
      }
    },
    "PublicSubnetRouteTableAssociation" : {
      "Type" : "AWS::EC2::SubnetRouteTableAssociation",
      "Properties" : {
        "SubnetId" : { "Ref" : "PublicSubnet" },
        "RouteTableId" : { "Ref" : "PublicRouteTable" }
      }
    },
    "PublicSubnetNetworkAclAssociation" : {
      "Type" : "AWS::EC2::SubnetNetworkAclAssociation",
      "Properties" : {
        "SubnetId" : { "Ref" : "PublicSubnet" },
        "NetworkAclId" : { "Fn::GetAtt" : ["VPC", "DefaultNetworkAcl"] }
      }
    },
    "WebServerSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "GroupDescription" : "Enable HTTP ingress",
        "VpcId" : { "Ref" : "VPC" },
        "SecurityGroupIngress" : [
          {"IpProtocol" : "tcp","FromPort" : "80","ToPort" : "80","CidrIp" : "0.0.0.0/0"},
          {"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "0.0.0.0/0"}]
       }
    },
    "WebServerInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-8c1be5f6",
        "NetworkInterfaces" : [{
          "GroupSet"                 : [{"Ref": "WebServerSecurityGroup"}],
          "AssociatePublicIpAddress" : "true",
          "DeviceIndex"              : "0",
          "DeleteOnTermination"      : "true",
          "SubnetId"                 : {"Ref": "PublicSubnet"}
        }],
        "KeyName": {
          "Ref": "KeyPair"
        },
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [
              "\n",
              [
                "#!/bin/bash -xe",
                "sudo yum update -y",
                "sudo yum install httpd -y",
                "sudo /etc/init.d/httpd start",
                "echo \"<html><body><h1>Welcome to Tung - CISA Cloud Project !!!</h1>\" > /var/www/html/index.html",
                "echo \"</body></html>\" >> /var/www/html/index.html"
              ]
            ]
          }
        }
      }
    }

  },
  "Outputs" : {
    "URL" : {
      "Description" : "URL of the sample website",
      "Value" :  { "Fn::Join" : [ "", [ "http://", { "Fn::GetAtt" : [ "WebServerInstance", "PublicDnsName" ]}]]}
    }
  }
}

Specify stack details.

Review.

Click Create stack.

Wait a couple of minutes for the stack creation to complete.
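If you prefer the command line, the same stack can be created with the AWS CLI. This is a minimal sketch; the file name vpc-apache.json, the stack name vpc-apache-lab, and the key pair name tung-key are placeholders for this lab.

# Create the stack from the saved template and pass the key pair parameter
aws cloudformation create-stack \
  --stack-name vpc-apache-lab \
  --template-body file://vpc-apache.json \
  --parameters ParameterKey=KeyPair,ParameterValue=tung-key

# Wait for creation to finish, then print the website URL from the Outputs
aws cloudformation wait stack-create-complete --stack-name vpc-apache-lab
aws cloudformation describe-stacks --stack-name vpc-apache-lab --query "Stacks[0].Outputs"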

Output.

Templates.

Check your VPC.

Subnets.

Route tables.

Subnet association.

Internet Gateway.

Linux instance.

Access the website.

Click delete stack to terminate the stack when completing the lab.

I have attached a short video demonstration at the following link.

Deploy IPSEC VPN site-to-site between FortiGate on AWS and Palo Alto on premises

This diagram shows how to create a site-to-site VPN connection between the PA on-prem and the FG on AWS.

In this lab:

  • Create a VPC, subnets, an Internet gateway, and route tables.
  • Create a FortiGate VM and a Windows 2016 instance on AWS.
  • Configure Palo Alto.
  • Create a site-to-site VPN between the PA and FortiGate sites.
  • Allow the Windows 2016 instance to access the Internet via FortiGate. Also, allow RDP to this machine from the Internet through FortiGate.
  • Test ping traffic between both sites.
  • Allow a machine on the PA LAN subnet to access the Internet and the Windows 2016 instance on AWS.
  • Create a new SSLVPN portal on AWS and test access to the portal via SSLVPN.

+ Below are the steps to deploy FortiGate on AWS.

Create a new VPC.

Create a public subnet.

Create a private subnet.

Create an Internet gateway.

Attach the gateway to your VPC.

Edit the route tables and rename the default route table to Private Route.

Create a Public Route Table.

Link the Public Subnet to the Public Route.

Add a new route 0.0.0.0/0 to your Internet gateway.
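If you prefer the CLI, the route-table steps above can be sketched roughly as follows; the vpc-, igw-, rtb-, and subnet- IDs are placeholders for the resources created earlier.

# Create the public route table, add the default route to the Internet gateway,
# and associate the public subnet with it
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx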

Create a new key pair.

+ Go to EC2, and deploy Fortinet on AWS.

Select your VPC and the Lab Public Subnet for the subnet. Also, set Auto-assign Public IP to Enable.

On the Security Group tab, add two new rules at the end of the Security Group as in the screenshot below. This allows ping and RDP to the Windows 2016 machine on the private subnet later on.

Go to Network interfaces and rename the interface to FG Public Interface.

Create a new FG Private interface. Link it to the private subnet and the FortiGate Security Group.

Change to FG Private Interface.

Select the FG Private interface, choose Actions at the top right-hand side, and attach this network interface to the Fortinet EC2 instance.

Right-click both the FG Public and Private interfaces and disable “Change source/dest check” to allow NATed traffic on these interfaces.
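The same change can be made from the CLI for each interface. This is a sketch; eni-xxxxxxxx stands for the FG Public or Private interface ID.

# Disable the source/destination check so the interface can forward NATed traffic
aws ec2 modify-network-interface-attribute --network-interface-id eni-xxxxxxxx --source-dest-check '{"Value": false}'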

Create a new Elastic IP address.

Change to Fortinet EIP.

Associate this Elastic IP address to Fortinet EC2.

Go back to Route tables and add a new route 0.0.0.0/0 pointing to the FG Private interface.

Now, Fortinet has two interfaces. One is Private, and another one is Public.

Copy the Elastic IP address and paste it to your web browser to access the FortiGate management interface.

Access Fortinet via the Internet.

+ Launch a new Windows VM EC2 instance on your VPC.

Network: Your VPC

Subnet: Private subnet

Auto-assign Public IP: Disabled. We will RDP to the machine via DNAT on FortiGate.

On the Security Group setting, add two lines to allow RDP and ICMP traffic to the machine.

+ Login to Fortinet.

Copy the FG instance ID and paste it as the initial login password.

Change the password and log in to Fortinet.

Edit WAN and LAN interface setting.

Go back to Fortinet and configure a Firewall Policy to allow RDP traffic from the Internet to the Windows VM.

Configure port forwarding to allow traffic from the Internet to the Windows 2016 VM instance on AWS (a CLI sketch follows below).

External IP address: the IP address of the FG interface on the public subnet.

Mapped IPv4 address: the IP address of the Windows VM on the private subnet.

The external service port and the mapped IPv4 port are both 3389.

Allow inbound traffic from WAN to this machine.
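For reference, the port forwarding above can also be expressed as a virtual IP in the FortiGate CLI. This is only a sketch; the VIP name, the external address 10.0.0.10, and the mapped address 10.0.1.42 are placeholders that must match your lab.

config firewall vip
    edit "rdp-to-win2016"
        set extintf "port1"
        set extip 10.0.0.10
        set portforward enable
        set mappedip "10.0.1.42"
        set extport 3389
        set mappedport 3389
    next
end

The VIP is then used as the destination address in the WAN-to-LAN firewall policy above.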

Create the static routes to allow the private subnet to reach outside networks.

Create a new static route for 10.0.0.0/16 via 10.0.1.1, which is the default gateway of the private subnet.

Try to access the machine.

Load private key to decrypt Windows password.

Access RDP to Windows 2016 instance on AWS.

Now we can see the RDP traffic via Fortinet.

Disable Windows Firewall to allow ICMP traffic to that machine from the Palo Alto private subnet.

Configure the IPSEC site-to-site wizard. Choose Custom.

Enter the IP address of the PA public interface. Disable NAT traversal, enter the pre-shared key, and choose IKEv2.

The Phase 1 and Phase 2 settings need to match the Palo Alto settings.

Local Address: the private subnet of FG: 10.0.1.0/24

Remote Address: PA LAN subnets: 172.16.0.0/16

Click the Advanced tab. Change the settings to match the PA Phase 2 settings, as sketched below.
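For comparison, the result of the wizard can be reviewed in the FortiGate CLI. This is a rough sketch only; the tunnel name, proposals, remote gateway 203.0.113.10, and pre-shared key are placeholders and must mirror the Palo Alto configuration.

config vpn ipsec phase1-interface
    edit "to-PA"
        set interface "port1"
        set ike-version 2
        set peertype any
        set nattraversal disable
        set proposal aes256-sha256
        set remote-gw 203.0.113.10
        set psksecret MyPreSharedKey
    next
end
config vpn ipsec phase2-interface
    edit "to-PA"
        set phase1name "to-PA"
        set proposal aes256-sha256
        set src-subnet 10.0.1.0 255.255.255.0
        set dst-subnet 172.16.0.0 255.255.0.0
    next
end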

Create address objects for the Fortinet LAN and PA LAN subnets.

Create a static route on Fortinet to allow private subnet traffic to the Palo Alto LAN subnet.

Create a Security Policy to allow traffic from the Fortinet LAN subnet to the PA LAN subnet. Remember to uncheck the NAT setting on the access rules from AWS LAN to PA LAN and vice versa.

PA LAN subnet to AWS LAN subnet.

AWS LAN subnet to PA LAN subnet.

Create a new access rule to allow the FG LAN subnet to access the Internet.

Ping 8.8.8.8 from Windows 2016 VM instance.

+ Configure PA.

Set the IP address of e1/1 to DHCP, and assign 172.16.1.254/24 to e1/2.

Create a tunnel interface: tunnel 1.

Create network objects for FortiGate, PA LAN, and AWS LAN.

Create an IKE Crypto profile.

Create an IPSEC Crypto profile.

IKE Gateway.

IPSEC tunnel.

On Proxy ID tab.

Local: PA LAN subnets.

Remote: AWS LAN subnet.

Create a Static Route from PA LAN to Fortinet LAN on AWS.

Create both Security Policies to allow traffic between the PA LAN subnet and the AWS LAN subnet.

Remember to click the “Commit” button to apply the new settings on PA.

From the Windows 2016 VM instance, ping a machine on the PA LAN subnet.

+ Ping from the PA LAN subnet to the AWS LAN subnet.

On PA, a tunnel is up.

Monitoring to see the traffic on both sites.

On FortiGate.

An IPSEC VPN site-to-site tunnel is up.

diagnose vpn tunnel list

Click Log & Report to see events related to the VPN.

+ Go back to PA and create another static route to allow the PA LAN subnet to access the Internet.

The next hop is the default gateway of the PA public subnet.

Create an SNAT policy to allow traffic from the PA LAN subnet to the Internet.

For the Destination interface, choose e1/1: this way only Internet-bound traffic is translated, because site-to-site VPN traffic does not use NAT.

Ping 8.8.8.8 from PA LAN subnet.

+ Create an SSLVPN portal on FortiGate to allow access to the FG private subnet through the SSLVPN zone.

RDP to the Windows 2016 instance on the AWS private subnet at 10.0.1.42.

In the SSLVPN settings, enable SSLVPN on port 44333.

Create a new username and password to access SSLVPN.

Then assign this user to the portal that we created in the previous step.

Edit the Security Group to allow Internet traffic to the SSLVPN port 44333.

From a Windows machine, access the SSLVPN portal on the FG.

Also, we can use FortiClient to connect if we have a license.

Build a Kubernetes cluster with MicroK8s

Below is the process to install a Kubernetes cluster on three nodes with MicroK8s.

+ Deploying MicroK8s on node 1, node 2, and node 3

Install MicroK8s directly from the snap store.

sudo apt install snapd

To follow a specific upstream release series it’s possible to select a channel during installation, for example, v1.18 series.

sudo snap install microk8s --classic --channel=1.18/stable

Configure your firewall to allow pod-to-pod and pod-to-internet communication.

sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed

Dashboard, core-dns, or local storage can be enabled by running the microk8s enable command:

microk8s enable dns dashboard storage
sudo usermod -a -G microk8s tung
sudo chown -f -R tung ~/.kube
newgrp microk8s

Get the output of the first node.

Get token.

Copy and paste the token on the following screenshot.

Kubernetes dashboard.

Get the status of the microk8s service.

Repeat the previous steps on node 2 and node 3. We do not need to enable the dashboard service on node 2 and node 3.

+ Create a MicroK8s multi-node cluster.

Now let’s focus on creating the Kubernetes cluster. On the first node, run the command below.

microk8s add-node

This command will give you the following output:

On node 2, join the cluster.
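The join command printed by microk8s add-node looks roughly like the sketch below; the IP address and token are placeholders generated on the first node. Run it on node 2:

microk8s join 192.168.1.230:25000/92b2db235428470dc4fcfc4ebbd9dc81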

Repeat this process for the third node (generate a new token on the first node and run the join command from the joining node).

+ Deploy a sample containerized application

Let’s now create a microbot deployment with three pods via the kubectl cli. Run this on any of the control plane nodes:

microk8s kubectl create deployment microbot --image=dontrebootme/microbot:v1
microk8s kubectl scale deployment microbot --replicas=3

To expose our deployment we need to create a service:

microk8s kubectl expose deployment microbot --type=NodePort --port=80 --name=microbot-service

After a few minutes, check your cluster.

Check IP address of the K8S node.
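A quick way to confirm the node IP and the NodePort assigned to the service is shown below (a sketch; in this lab the assigned port was 30711):

microk8s kubectl get nodes -o wide
microk8s kubectl get service microbot-service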

Access the microbot service via port 30711.

Kubernetes dashboard.

Deploy VPN IPSEC site-to-site between FortiGate on-prem and AWS

This is a topology that is used to deploy this lab.

+ Configure FortiGate on AWS.

Create a new VPC with the CIDR network 10.0.0.0/16. Then, create both the Lab Public subnet and the Lab Private subnet on AWS.

Create a new Internet gateway and attach it to your VPC.

Create route tables.

Add a new route to the public Route table.

Associate the public subnet to the Public Route table.

Go to EC2 and create a new FortiGate instance.

Create a new private interface for FortiGate.

Attach the interface to FortiGate.

Disable “Source and destination check” on both Public and Private FortiGate interfaces.

Create a new Elastic IP address and assign it to your FortiGate instance.

Assign the Elastic IP address to public FortiGate interface.

Access FortiGate management interface.

Add a new route on a Private Route table to the Private FortiGate interface.

Create a new Windows instance on AWS.

Security Group.

Modify Windows Security Group to allow ICMP traffic.

Configure VPN site to site.

Two routes have been automatically created in the FortiGate static route settings.

+ Configure FortiGate on-prem.

Configure a default route on FortiGate.

Configure the site-to-site VPN between both FortiGates.

+ Ping the Windows instance on AWS from a machine behind the on-prem FortiGate. Remember to RDP to the machine and disable Windows Firewall to allow ICMP traffic from on-prem to that machine.

The IPSEC tunnel is up.

Ping from the Windows instance on AWS to a computer on the on-prem FortiGate LAN subnet.

The IPSEC tunnel on-prem is up.

+ Configure SSLVPN portal on FortiGate on AWS.

Deploying FortiGate on Amazon AWS

Diagram.

Below are the steps to deploy Fortinet on AWS.

Create a new VPC.

Create a public subnet.

Create a private subnet.

Create an Internet gateway.

Attach the gateway to your VPC.

Edit the route tables and rename the default route table to Private Route Table.

Create a Public Route Table.

Edit the routes and send all traffic (0.0.0.0/0) to the Internet Gateway.

Link Lab Public Subnet to Public Route Table.

Create a new key pair.

Go to EC2, and deploy Fortinet on AWS.

Select your VPC and the Lab Public Subnet for the subnet. Also, set Auto-assign Public IP to Enable.

Security Group.

Go to Network interfaces. Change the interface to Fortinet Public Subnet.

Create a new Fortinet Private subnet.

Attach this network interface to Fortinet EC2.

Create a new Elastic IP address.

Change to Fortinet EIP.

Associate this Elastic IP address to Fortinet EC2.

Now, Fortinet has two interfaces. One is Private, and another one is Public.

Access Fortinet via the Internet.

Login to Fortinet.

Change password to login to Fortinet.

Edit interfaces.

WAN interface.

LAN interface.

Edit the Security Group to allow pinging Fortinet.

Disable Source and Destination Check on “Fortinet Private subnet”.

Now, change the route so that private subnet traffic is routed via the Fortinet Private interface.

Create a new Windows 2016 EC2 VM. The machine belongs to the “Lab Private Subnet”.

Create a new Windows Security Group to allow HTTP and RDP traffic.

Go back to Fortinet to configure a Firewall Policy allowing traffic from the Fortinet Private subnet to access the Internet.

Configure port forwarding to allow traffic.

Allow inbound traffic from WAN to this machine.

Try to access the machine.

Sniffer traffic on Fortinet.

Modify the Security group to allow RDP.

Load private key to decrypt Windows password.

Access RDP to Windows 2016 instance on AWS.

Now we can see the RDP traffic via Fortinet.

diagnose sniffer packet port1 "port 3389"

The Windows machine is able to access the Internet.

Send Palo Alto, FortiGate, Cisco Router, and Linux Server logs to Splunk

This is a diagram that I have used to deploy this lab.

Log in to Splunk, and download the Cisco Suite for Splunk, FortiGate, and Palo Alto Networks apps for Splunk.

Click Install app from file.

On Splunk.

+ Palo Alto

Go to Settings – Data inputs – New Local UDP.

Enter port 5514 in the Port setting.

Source type: pan_log

App Context: Palo Alto Networks

Method: IP

Index: Default
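Behind the scenes this creates a UDP data input. The equivalent stanza in a local inputs.conf would look roughly like the sketch below (assuming the settings above; the index name main is the default and is an assumption here):

[udp://5514]
sourcetype = pan_log
connection_host = ip
index = main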

On Palo Alto, configure log forwarding to the Splunk server with destination port 5514.

Commit, log off and log on to generate logs.

Back to Splunk.

Click Palo Alto App – Operations – Real-time Event Feed.

+ Cisco Router R1.

conf t
logging trap informational
logging host 142.232.197.8 transport udp port 5515 

On Splunk.

Port 5515

Source type: cisco:asa

App Context: Cisco Suite for Splunk

Method: IP

Index: default.

Back to Router, send sample logs to Splunk.

end
send log "Tung Le"
send log "Tung Le"

+ On Kali Linux.

sudo su
nano /etc/rsyslog.conf
##Add the following line to the end of the file. The listening port is 5516.
*.*                @142.232.198.8:5516

Restart rsyslog service.

systemctl restart rsyslog
systemctl status rsyslog

Back on Splunk, configure the listening port for the Linux server as 5516.

source type: Syslog

app context: Apps Browser

Back to Kali, type the command below to generate logs to Splunk.

logger "Tung Le"

+ FortiGate:

Configure FortiGate to send logs to Splunk via the UDP port 5517.

config log syslogd setting
set status enable
set server 142.232.197.8
set port 5517
end 

Log into FortiGate, and enable the setting below to send logs to Splunk.

On Splunk, configure the port as 5517.

Source type: fgt_log

App Context: FortiGate

Method: IP

Index: Default

Log off FortiGate and type a wrong password to generate logs.

Install Kubernetes on LinuxMint

Firstly, install Kubernetes and snapd package.

sudo apt install kubernetes snapd

You get an error, as in the screenshot below.

Enter “sudo rm /etc/apt/preferences.d/nosnap.pref”

sudo apt-get update
sudo apt-get install

Run the command above again.

Next, install MicroK8s from the snap store.

sudo snap install microk8s --classic

+ Create our first app deployment using nginx.

microk8s.kubectl create deployment nginx --image nginx

+ Add your user to the microk8s admin group.
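As in the cluster build earlier, this is done with usermod; a sketch, assuming you are logged in as the account stored in $USER:

sudo usermod -a -G microk8s $USER
newgrp microk8s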

+ View all running services.

microk8s.kubectl get all

+ Enable Dashboard.

microk8s.enable dns dashboard

+ List all resources in all namespaces in MicroK8s. Take note of the IP address of the Kubernetes dashboard.

microk8s.kubectl get all --all-namespaces

Kubernetes has created a couple of new network interfaces.

+ View token.

token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s kubectl -n kube-system describe secret $token

+ Access the Kubernetes dashboard at https://10.152.183.198, and enter the token from the previous step.

Enable WinRM on a Workgroup machine or Windows server

Open PowerShell with administrator rights to enable WinRM. Then type the following command and press “Y” two times.

winrm quickconfig

If your network connection type is Public, you will get an error below.

Go to Windows Firewall, and select “Public” in the Profiles setting to allow connections via the Public network.

Test the WinRM configuration by running the following command.

winrm identify -r:http://localhost:5985 -auth:none

If the test is successful as in the screenshot below, your device can be managed via WinRM.

Check that the WinRM port is listening.

Remotely check that the WinRM port is listening on the Windows machine.

Test-NetConnection WIN102022-01 -port 5985
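Because the target is a workgroup machine (no domain trust), the managing computer may also need the target added to its WinRM TrustedHosts list before Invoke-Command can authenticate. A sketch, run in an elevated PowerShell on the managing machine (the host name WIN102022-01 is from this lab):

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "WIN102022-01" -Concatenate -Force
Get-Item WSMan:\localhost\Client\TrustedHosts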

Enter the WMI or WinRM credentials for the workgroup machine or Windows server.

PS C:\Windows\system32> $wmiuser, $wmisecret = (read-host 'Enter the WMIuser'), (Read-Host 'Enter the WMIsecret' -AsSecureString)

The newadmin user is a member of the local Administrators group on the Windows workgroup machine or Windows server.

Create wmi_cred_new variable for WMI or WinRM credential.

$wmi_cred_new = New-Object System.Management.Automation.PSCredential($wmiuser, $wmisecret) 

Check hostname and ipconfig on a remote host via WinRM.

Invoke-Command -ComputerName WIN102022-01 -Credential $wmi_cred_new -ScriptBlock {hostname; ipconfig}

Check current OS Windows Patch.

Invoke-Command -ComputerName WIN102022-01 -FilePath C:\Shared\Get-CurrentPatchInfo.ps1 -Credential $wmi_cred_new

Get Lastreboot and uptime on Windows 10 workgroup machine.

Invoke-Command -ComputerName WIN102022-01 -Credential $wmi_cred_new -ScriptBlock {$OS1 = Get-WmiObject Win32_operatingsystem -Computer WIN102022-01 ; $LastReboot =($OS1.ConvertToDateTime($OS1.LastBootUpTime)) ; $LastReboot ; $Uptime = (Get-Date) -$OS1.ConvertToDateTime($OS1.LastBootUpTime) ; $Uptime  = ([String]$Uptime.Days + " Days "  + $Uptime.Hours + " Hours " + $Uptime.Minutes + " Minutes") ; $uptime}

Implementing OpenVPN server on Debian 10

Below is the lab topology used to implement the OpenVPN solution on Debian 10.

In this lab, we need to make sure clients on the Internet are able to create secure OpenVPN connections to the OpenVPN server. The OpenVPN client must also be able to reach the internal network behind the VPN tunnel (the LAMP subnet, 192.168.131.0/24) while still accessing the Internet. Split tunneling is used so that only traffic destined for the LAMP subnet is routed through the OpenVPN tunnel; all other traffic uses the public network adapter (Internet).

IP addresses of Debian OpenVPN server.

SSH in from LinuxMint to make it easy to copy and paste commands.

Update and upgrade the Debian machine.

apt-get update -y
apt-get upgrade -y

+ Enable IP Forwarding

Edit the file /etc/sysctl.conf and add the line below at the end of the file.

net.ipv4.ip_forward = 1

+ Enable proxy_arp so that ARP entries for VPN clients appear on the OpenVPN server.

echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp

+ Add a line below into /etc/sysctl.conf to make it permanent.

net.ipv4.conf.all.proxy_arp=1

Run the following command to make the changes work.

sysctl -p

+ Install OpenVPN server.

apt-get install openvpn -y

Copy the easy-rsa directory from /usr/share to /etc/openvpn for managing SSL certificates.

cp -r /usr/share/easy-rsa /etc/openvpn/

+ Set up Certificate Authority (CA)

cd /etc/openvpn/easy-rsa
nano vars
#Add information below to the file.
set_var EASYRSA                 "$PWD"
set_var EASYRSA_PKI             "$EASYRSA/pki"
set_var EASYRSA_DN              "cn_only"
set_var EASYRSA_REQ_COUNTRY     "CANADA"
set_var EASYRSA_REQ_PROVINCE    "BC"
set_var EASYRSA_REQ_CITY        "Vancouver"
set_var EASYRSA_REQ_ORG         "BCIT Student"
set_var EASYRSA_REQ_EMAIL	"admin@newhorizon.ca"
set_var EASYRSA_REQ_OU          "BCIT Student"
set_var EASYRSA_KEY_SIZE        2048
set_var EASYRSA_ALGO            rsa
set_var EASYRSA_CA_EXPIRE	7500
set_var EASYRSA_CERT_EXPIRE     365
set_var EASYRSA_NS_SUPPORT	"no"
set_var EASYRSA_NS_COMMENT	"BCIT Student CERTIFICATE AUTHORITY"
set_var EASYRSA_EXT_DIR         "$EASYRSA/x509-types"
set_var EASYRSA_SSL_CONF        "$EASYRSA/openssl-easyrsa.cnf"
set_var EASYRSA_DIGEST          "sha256"

 Run the following command to initiate your own PKI.

./easyrsa init-pki

Build the CA certificates.

./easyrsa build-ca

+ Generate Server Certificate Files.

./easyrsa gen-req tung-server nopass

+ Sign the server certificate request using the root CA.

./easyrsa sign-req server tung-server

Verify cert.

openssl verify -CAfile pki/ca.crt pki/issued/tung-server.crt 

+ Create a strong Diffie-Hellman key to use for the key exchange

./easyrsa gen-dh

After creating all certificate files, copy them to the /etc/openvpn/server/ directory.

cp pki/ca.crt /etc/openvpn/server/
cp pki/dh.pem /etc/openvpn/server/
cp pki/private/tung-server.key /etc/openvpn/server/
cp pki/issued/tung-server.crt /etc/openvpn/server/

+ Generate Client Certificate and Key File

./easyrsa gen-req client nopass

Next, sign the client key using your CA certificate.

./easyrsa sign-req client client

Next, copy the client certificate and key files to the /etc/openvpn/client/ directory.

cp pki/ca.crt /etc/openvpn/client/
cp pki/issued/client.crt /etc/openvpn/client/
cp pki/private/client.key /etc/openvpn/client/

+ Configure OpenVPN Server

nano /etc/openvpn/server.conf
#---
root@debian10new:~# cat /etc/openvpn/server.conf 
port 1194
proto udp
# USE TCP
#port 4443
#proto tcp-server
dev tun
ca /etc/openvpn/server/ca.crt
cert /etc/openvpn/server/tung-server.crt
key /etc/openvpn/server/tung-server.key
dh /etc/openvpn/server/dh.pem
# OpenVPN tunnel IP address range
server 172.16.1.0 255.255.255.0
# server 192.168.131.0 255.255.255.0
# route all traffic via OpenVPN
push "redirect-gateway def1"
push "dhcp-option DNS 8.8.8.8"
duplicate-cn
cipher AES-256-CBC
tls-version-min 1.2
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256:TLS-DHE-RSA-WITH-AES-128-GCM-SHA256:TLS-DHE-RSA-WITH-AES-128-CBC-SHA256
auth SHA512
auth-nocache
keepalive 20 60
persist-key
persist-tun
#disable compress lz4 because of error on OpenVPN client
#compress lz4
daemon
user nobody
group nogroup
log-append /var/log/openvpn.log
verb 3
#---

+ Start OpenVPN service.

systemctl start openvpn@server
systemctl enable openvpn@server
systemctl status openvpn@server

Show OpenVPN tunnel.

ip a show tun0

+ Generate client configuration.

nano /etc/openvpn/client/client.ovpn
#---
client
dev tun
# USE UDP
proto udp
remote 10.0.0.52 1194

# USE TCP
#proto tcp-client
# Public IP address on OpenVPN is 10.0.0.52
#remote 10.0.0.52 4443
ca ca.crt
cert client.crt
key client.key
#remote-cert-tls server
cipher AES-256-CBC
# The lines below are important to keep the client's Internet access working while the OpenVPN session is up.
# Split tunneling on OpenVPN
# https://forums.openvpn.net/viewtopic.php?t=8229
route-nopull
# the LAN subnet that you need to access via VPN tunnel
route 192.168.131.0 255.255.255.0 vpn_gateway
auth SHA512
auth-nocache
tls-version-min 1.2
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256:TLS-DHE-RSA-WITH-AES-128-GCM-SHA256:TLS-DHE-RSA-WITH-AES-128-CBC-SHA256
resolv-retry infinite
#compress lz4
nobind
persist-key
persist-tun
mute-replay-warnings
verb 3
#---

+ Configure routing using UFW.

By default, the UFW firewall is not installed in Debian 10.

apt-get install ufw -y

Configure UFW to accept the forwarded packets.

nano /etc/default/ufw
# Change the following line:

DEFAULT_FORWARD_POLICY="ACCEPT"
nano /etc/ufw/before.rules

Note: Replace ens3 with the name of your public network interface; on this Debian OpenVPN server it is ens35.
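The NAT rules added to /etc/ufw/before.rules look roughly like the sketch below, assuming the tunnel network 172.16.1.0/24 from server.conf and the ens35 public interface; place this block above the existing *filter section.

# NAT table rules for OpenVPN clients
*nat
:POSTROUTING ACCEPT [0:0]
# Masquerade VPN client traffic leaving via the public interface
-A POSTROUTING -s 172.16.1.0/24 -o ens35 -j MASQUERADE
COMMIT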

Allow the default OpenVPN port 1194 and OpenSSH. Then, reload the UFW firewall.

ufw allow 1194/udp
ufw allow OpenSSH
ufw disable
ufw enable

+ Connect OpenVPN from a client.

Install OpenVPN from the Kali machine.

apt-get install openvpn -y

On the client machine, run the command below to download all the client configuration files.

# public-vpn-server-ip: is 10.0.0.52
scp -r root@public-vpn-server-ip:/etc/openvpn/client .
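Then start the tunnel from the downloaded profile; a sketch, run from the directory that now holds client.ovpn, ca.crt, client.crt, and client.key:

cd client
sudo openvpn --config client.ovpn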

Check OpenVPN tunnel.

On OpenVPN client.

ping 8.8.8.8 (Internet)

ping 192.168.131.134 (OpenVPN gw tunnel)

ping 192.168.131.128 (LAMP server behind OpenVPN server)

We can see split tunneling is working well on OpenVPN.

Access LAMP server.

On Debian OpenVPN server.

Check routing table on OpenVPN server.

Check OpenVPN logs on the OpenVPN server.

Monitor traffic on the OpenVPN server. OpenVPN traffic uses UDP port 1194 and is encrypted inside the tunnel.

Get LastReboot and Uptime value on Windows 10 and Windows Server via PS

In this lab, I have used PowerShell to get the LastReboot and Uptime values on Windows servers and clients. The script queries “Get-WmiObject” to get the last boot time of the computers/servers.

Firstly, I will test with Windows Servers.

$OS1 = Get-WmiObject Win32_OperatingSystem -ComputerName DC1 -ErrorAction Stop
$LastReboot =($OS1.ConvertToDateTime($OS1.LastBootUpTime))
# Print LastReboot
$LastReboot
$Uptime = (Get-Date) -$OS1.ConvertToDateTime($OS1.LastBootUpTime)
$Uptime  = ([String]$Uptime.Days + " Days " + $Uptime.Hours + " Hours " + $Uptime.Minutes + " Minutes")
# Print uptime
$Uptime

Write a full PS script.

# This script is written by Tung on 2022-02-18
# This is used to get lastreboot and uptime on Windows servers.
# Get time when running the script
$filename = (Get-Date).tostring("dd-MM-yyyy-hh-mm")
# Change PowerShell working directory to C:\Shared
Set-Location C:\Shared
#$TestComputerName = get-content C:\Shared\tungmachine.txt
$servers = (Get-ADComputer -properties OperatingSystem -filter{(operatingsystem -like "*Windows Server*")}).name
Foreach ($server in $servers) {
        # Only check the machine if it is online
        $ping_result = Test-Connection -ComputerName $server -Count 1 -Quiet
        # If the machine is online
        if($ping_result){
        # Using "Get-WmiObject Win32_OperatingSystem" to get $OS object
        # Using "Get-WmiObject Win32_OperatingSystem -ComputerName $Testcomputer -ErrorAction Stop" to get $OS object on remote machines.
        $OS = Get-WmiObject Win32_OperatingSystem -ComputerName $server -ErrorAction Stop
        # Get LastReboot via $OS.LastBootUpTime variable
        $LastReboot =($OS.ConvertToDateTime($OS.LastBootUpTime)).tostring("dd-MM-yyyy")    
        # Get Uptime via $OS.LastBootUpTime variable
        $Uptime = (Get-Date) -$OS.ConvertToDateTime($OS.LastBootUpTime)
        # Create a hash table (dictionary type) with 3 columns: PSComputerName, LastReboot and Uptime
        $lastlogonproperties = @{
        # Add PSComputerName into column #1
        PSComputerName  = $server
        # Convert and only keep the dd-MM-yyyy part of the last reboot value.
        # Add LastReboot into column #2
        LastReboot =($OS.ConvertToDateTime($OS.LastBootUpTime)).tostring("dd-MM-yyyy")
        # Add Uptime into column #3
        Uptime  = ([String]$Uptime.Days + " Days " + $Uptime.Hours + " Hours " + $Uptime.Minutes + " Minutes")
        }

    # Create a new table object on PS to append the hash table values above   
    $forcecsv = New-Object psobject -Property $lastlogonproperties
    # change order of columns before appending to csv file. 
    # Convert it to csv file and save under LastReboot directory.
    $forcecsv | select-object PSComputerName, LastReboot, Uptime | export-csv -NoTypeInformation -append "C:\Shared\LastReboot\Server\$filename.csv"
    }
}

Run the script.

Then, do the same PS script to query Windows 10 machines.

Change “(operatingsystem -like “*Windows Server*”)” to (operatingsystem -like “*Windows 10*”)

$computers = (Get-ADComputer -properties OperatingSystem -filter{(operatingsystem -like "*Windows 10*")}).name
# This script is written by Tung on 2022-02-18
# This is used to get lastreboot and uptime on Windows 10 machines.
# Get time when running the script
$filename = (Get-Date).tostring("dd-MM-yyyy-hh-mm")
# Change PowerShell working directory to C:\Shared
Set-Location C:\Shared
#$TestComputerName = get-content C:\Shared\tungmachine.txt
$computers = (Get-ADComputer -properties OperatingSystem -filter{(operatingsystem -like "*Windows 10*")}).name
Foreach ($Computer in $computers) {
        # Only check the machine if it is online
        $ping_result = Test-Connection -ComputerName $Computer -Count 1 -Quiet
        # If the machine is online
        if($ping_result){
        # Using "Get-WmiObject Win32_OperatingSystem" to get $OS object
        # Using "Get-WmiObject Win32_OperatingSystem -ComputerName $Testcomputer -ErrorAction Stop" to get $OS object on remote machines.
        $OS = Get-WmiObject Win32_OperatingSystem -ComputerName $Computer -ErrorAction Stop
        # Get LastReboot via $OS.LastBootUpTime variable
        $LastReboot =($OS.ConvertToDateTime($OS.LastBootUpTime)).tostring("dd-MM-yyyy")    
        # Get Uptime via $OS.LastBootUpTime variable
        $Uptime = (Get-Date) -$OS.ConvertToDateTime($OS.LastBootUpTime)
        # Create a hash table (dictionary type) with 3 columns: PSComputerName, LastReboot and Uptime
        $lastlogonproperties = @{
        # Add PSComputerName into column #1
        PSComputerName  = $Computer
        # Convert and only keep the dd-MM-yyyy part of the last reboot value.
        # Add LastReboot into column #2
        LastReboot =($OS.ConvertToDateTime($OS.LastBootUpTime)).tostring("dd-MM-yyyy")
        # Add Uptime into column #3
        Uptime  = ([String]$Uptime.Days + " Days " + $Uptime.Hours + " Hours " + $Uptime.Minutes + " Minutes")
        }

    # Create a new table object on PS to append the hash table values above   
    $forcecsv = New-Object psobject -Property $lastlogonproperties
    # change order of columns before appending to csv file. 
    # Convert it to csv file and save under LastReboot directory.
    $forcecsv | select-object PSComputerName, LastReboot, Uptime | export-csv -NoTypeInformation -append "C:\Shared\LastReboot\Client\$filename.csv"
    }
}

Run the script.