Start bash scripting by reviewing this guide:
https://mywiki.wooledge.org/BashGuide
Step-by-step Instructions:
1. Log in to Tenable Core + Nessus using the link and credentials provided in your email. Be sure to check
the field Reuse my password for privileged tasks.
2. Click Terminal.
3. Type sudo /opt/nessus/sbin/nessuscli managed unlink
a. Note: If prompted for a password, type <nessus admin password>
4. Type sudo /opt/nessus/sbin/nessuscli managed link --key={your linking key} --cloud --name={your-name}_Scanner_Added
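The unlink/relink steps above can be wrapped in a small script. This is a sketch only: the `nessuscli` path comes from the steps above, while the key and name values are placeholders you must substitute. The helper builds the link command as a string so it can be reviewed before being run with sudo.

```shell
#!/usr/bin/env bash
# Sketch of the unlink/relink steps. "EXAMPLE-KEY" and "alice" are
# placeholder values, not real credentials.
set -euo pipefail

NESSUSCLI=/opt/nessus/sbin/nessuscli

# Build the link command so it can be inspected before execution.
build_link_cmd() {
  local key="$1" name="$2"
  printf '%s managed link --key=%s --cloud --name=%s_Scanner_Added\n' \
    "$NESSUSCLI" "$key" "$name"
}

build_link_cmd "EXAMPLE-KEY" "alice"
# To execute for real:
#   sudo $NESSUSCLI managed unlink
#   sudo $(build_link_cmd "$YOUR_KEY" "$YOUR_NAME")
```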
pytenable: a Python library for working with the APIs of Tenable products
https://pytenable.readthedocs.io/en/stable/
Create Ubuntu VM
qemu-img create -f qcow2 disk.qcow2 10G
dd if=/dev/zero conv=sync bs=1m count=64 of=ovmf_vars.fd
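Note that `bs=1m` is BSD `dd` syntax (macOS, which matches the `-accel hvf` flag in the QEMU command); GNU `dd` on Linux spells it `bs=1M`. Either way, the UEFI vars file should come out at exactly 64 MiB, which is quick to verify:

```shell
# GNU dd spelling (Linux); on macOS use bs=1m as shown above
dd if=/dev/zero conv=sync bs=1M count=64 of=ovmf_vars.fd 2>/dev/null
# 64 * 1048576 = 67108864 bytes
wc -c < ovmf_vars.fd
```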
qemu-system-aarch64 \
-accel hvf \
-m 2048 \
-cpu cortex-a57 -M virt,highmem=off \
-drive file=/usr/local/share/qemu/edk2-aarch64-code.fd,if=pflash,format=raw,readonly=on \
-drive file=ovmf_vars.fd,if=pflash,format=raw \
-serial telnet::4444,server,nowait \
-drive if=none,file=disk.qcow2,format=qcow2,id=hd0 \
-device virtio-blk-device,drive=hd0,serial="dummyserial" \
-device virtio-net-device,netdev=net0 \
-netdev user,id=net0 \
-vga none -device ramfb \
-cdrom /path/to/ubuntu.iso \
-device usb-ehci -device usb-kbd -device usb-mouse -usb \
-monitor stdio
sudo apt-get install ubuntu-desktop
Troubleshooting
# ssh <Remote Host IP>
ERROR: No ECDSA host key is known for <Remote Host IP> and you have requested strict checking.
ERROR: Host key verification failed.
# ssh <Remote Host IP> -o StrictHostKeyChecking=no
Warning: Permanently added '<Remote Host IP>' (ECDSA) to the list of known hosts.
root@<Remote Host IP>'s password:
# ssh <Remote Host IP>
Reference:
https://www.ibm.com/support/pages/qradar-ssh-host-fails-error-no-ecdsa-host-key-known-and-you-have-requested-strict-checking
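An alternative to disabling strict checking is to remove the stale host key and reconnect normally. `ssh-keygen -R` edits the known_hosts file in place; the demo below runs against a scratch file with made-up hosts (drop `-f` to operate on `~/.ssh/known_hosts`):

```shell
# Throwaway host key to build a realistic scratch known_hosts file
rm -f /tmp/demo_key /tmp/demo_key.pub /tmp/known_hosts.demo*
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key
pub=$(cut -d' ' -f1-2 /tmp/demo_key.pub)
printf '192.0.2.10 %s\n192.0.2.11 %s\n' "$pub" "$pub" > /tmp/known_hosts.demo

# Remove only the stale entry for the problem host, then reconnect normally
ssh-keygen -R 192.0.2.10 -f /tmp/known_hosts.demo
grep -c '192\.0\.2\.' /tmp/known_hosts.demo   # one entry (192.0.2.11) remains
```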
Unable to connect to the server: x509: certificate has expired or is not yet valid
# kubectl get pods -A
Unable to connect to the server: x509: certificate has expired or is not yet valid
openssl s_client -connect localhost:6443 -showcerts < /dev/null 2>&1 | openssl x509 -noout -enddate
As a precautionary measure, back up the TLS directory.
sudo tar -czvf /var/lib/rancher/k3s/server/apphost-cert.tar.gz /var/lib/rancher/k3s/server/tls
Remove the following file.
sudo rm /var/lib/rancher/k3s/server/tls/dynamic-cert.json
Remove the cached certificate from the Kubernetes secret.
sudo kubectl --insecure-skip-tls-verify=true delete secret -n kube-system k3s-serving
Restart the K3s service to rotate the certificates.
sudo systemctl restart k3s
Verify that kubectl commands function.
sudo kubectl get pods -A
Additionally, you can verify that none of the K3s internal certificates are close to expiring.
sudo su
for i in /var/lib/rancher/k3s/server/tls/*.crt; do echo "$i"; openssl x509 -enddate -noout -in "$i"; done
Or run
curl -v -k https://localhost:6443 to confirm the new date of your app host cert
Reference:
https://supportcontent.ibm.com/support/pages/expired-k3s-certificates-are-not-automatically-rotated-causing-connection-issues
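The per-certificate loop above can be packaged as a reusable check. The sketch below demonstrates the same `openssl x509` calls against a freshly generated self-signed certificate rather than the real files under /var/lib/rancher/k3s/server/tls:

```shell
# Throwaway self-signed cert standing in for a K3s cert
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=k3s-demo" -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Same check as the loop above: print notAfter, then fail if the cert
# expires within 30 days (2592000 seconds)
openssl x509 -enddate -noout -in /tmp/demo.crt
openssl x509 -checkend 2592000 -noout -in /tmp/demo.crt \
  && echo "OK: not due to expire within 30 days"
```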
If you cannot access the Log Source Management App, you can try the legacy URL:
https://10.10.2.10/console/do/core/genericsearchlist?appName=eventviewer&pageId=SensorDeviceList
Step 1:
Get peak date time information from logs
cat qradar.log |grep SourceMonitor |grep ecs-ec-ingress |sed -r 's#^(.+?)::.+ Peak in the last 60s: (.+?)\. Max Seen.+#\1 \2#'
Step 2:
Go to QRadar UI and query that time period and group by EventID or SourceIP for detecting root cause of the problem.
Step 3:
If the traffic causing the peaks is abnormal, resolve the root cause of the problem and, where possible, drop the unnecessary traffic at the log source level.
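A worked example of the Step 1 extraction. The log line below is hypothetical (the exact QRadar SourceMonitor message format may differ), but it shows the intent: keep the timestamp prefix and pull out the peak EPS value.

```shell
# Hypothetical SourceMonitor line -- real qradar.log formatting may differ
line='Nov 03 14:22:10 host ecs-ec-ingress SourceMonitor: Peak in the last 60s: 41000. Max Seen 45000'

# Keep everything up to "SourceMonitor", plus the peak value
echo "$line" | sed -E 's#^(.*SourceMonitor):.* Peak in the last 60s: ([0-9]+)\. Max Seen.*#\1 \2#'
```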
/opt/qradar/bin/contentManagement.pl -a export -c referencedata -i all -e
tcpdump -nns0 -i any -c 100000 dst port 514 |awk '{print $3}' |cut -d. -f1-4 |sort -V |uniq -c |sort -n
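The tcpdump pipeline above counts syslog packets per source IP. Its parsing stage can be exercised in isolation by feeding it captured-style lines; the addresses below are made up:

```shell
# Simulated `tcpdump -nn` output; field 3 is the source ip.port
printf '%s\n' \
  '14:00:01.100000 IP 10.0.0.5.51000 > 10.0.0.9.514: UDP' \
  '14:00:01.200000 IP 10.0.0.7.51001 > 10.0.0.9.514: UDP' \
  '14:00:01.300000 IP 10.0.0.5.51002 > 10.0.0.9.514: UDP' \
  | awk '{print $3}' | cut -d. -f1-4 | sort -V | uniq -c | sort -n
```

The last line of the output is the busiest sender (here 10.0.0.5 with 2 packets).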
/opt/qradar/support/extractRules.py -o QRadarRules.tsv
# psql -t -A -U qradar -c "SELECT rule_data FROM custom_rule WHERE id=100311" | xmllint --xpath "//rule/testDefinitions/test/text" - | perl -MHTML::Entities -pe 'decode_entities($_);' |sed -e 's/<[^>]*>//g'
AQL query:
SELECT username FROM events WHERE INOFFENSE(ID) GROUP BY username
AQL query for getting the remote IP addresses related to a specific offense:
select distinct destinationip from events where INOFFENSE(633) TIMES OFFENSE_TIME(633) AND eventdirection IN ('L2R', 'R2R')
nmap -p 1433 10.0.30.0/24
nmap --script ms-sql-info -p 1433 10.0.30.33
nmap -p 1433 --script ms-sql-brute --script-args userdb=/root/Desktop/wordlist/common_users.txt,passdb=/root/Desktop/wordlist/100-common-passwords.txt 10.0.30.33
nmap -p 1433 --script ms-sql-empty-password 10.0.30.33
nmap -p 1433 --script ms-sql-query --script-args mssql.username=admin,mssql.password=anamaria,ms-sql-query.query="SELECT * FROM master..syslogins" 10.0.30.33 -oN output.txt
gvim output.txt
nmap -p 1433 --script ms-sql-xp-cmdshell --script-args mssql.username=admin,mssql.password=anamaria,ms-sql-xp-cmdshell.cmd="ipconfig" 10.0.30.33
nmap -p 1433 --script ms-sql-xp-cmdshell --script-args mssql.username=admin,mssql.password=anamaria,ms-sql-xp-cmdshell.cmd="type c:\flag.txt" 10.0.30.33
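To pull just the hosts that answered on 1433 out of a subnet sweep, save grepable output with `-oG` and filter it. The sample file below is a hand-written stand-in for real `nmap -oG` output:

```shell
# Stand-in for: nmap -p 1433 10.0.30.0/24 -oG /tmp/mssql.gnmap
cat > /tmp/mssql.gnmap <<'EOF'
Host: 10.0.30.33 ()  Ports: 1433/open/tcp//ms-sql-s///
Host: 10.0.30.40 ()  Ports: 1433/closed/tcp//ms-sql-s///
EOF

# Hosts that answered with 1433 open
awk '/1433\/open/{print $2}' /tmp/mssql.gnmap
```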
Steps:
1. In vSphere client, add a new hard disk at Virtual Device Node SCSI (0: 1).
2. SSH to the server.
3. Get the host bus number:
sudo grep mpt /sys/class/scsi_host/host?/proc_name
The response is similar to
/sys/class/scsi_host/host2/proc_name:mptspi
4. Rescan for new disks on host bus 2:
echo "- - -" | sudo tee /sys/class/scsi_host/host2/scan
5. Make sure new disk /dev/sdb is added to the system:
sudo fdisk -l
6. Create a new partition on /dev/sdb with file system type 8e (Linux LVM):
sudo fdisk /dev/sdb
=======
Command (m for help): n
Select (default p): p
Partition number (1-4, default 1): 1
First sector (a-b, default a): [Enter to use the default value]
Last sector, +sectors or +size{K,M,G} (a-b, default b): [Enter to use the default value]
Command (m for help): t
Hex code (type L to list all codes): 8e
Command (m for help): p
Command (m for help): w
=======
7. Create a physical volume for LVM:
sudo pvcreate /dev/sdb1
8. Get the volume group name (VG Name):
sudo vgdisplay
9. Extend the 'resilient' volume group by adding in the physical volume of /dev/sdb1:
sudo vgextend resilient /dev/sdb1
10. Scan all disks for physical volumes:
sudo pvscan
11. Check the volume group name (VG Name) again to make sure free space is added:
sudo vgdisplay
12. Display the path of the logical volume (LV Path):
sudo lvdisplay
The following assumes that you want to split the new disk across two logical volumes.
13. Extend the logical volumes:
sudo lvresize --resizefs --extents +80%FREE /dev/resilient/root
sudo lvresize --resizefs --extents +100%FREE /dev/resilient/co3
The previous commands allocate 80% of the new free space to the "/dev/resilient/root" logical volume and then allocate the remaining 20% to "/dev/resilient/co3".
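The split arithmetic is worth spelling out: `+80%FREE` is relative to the free space at the time the command runs, so the subsequent `+100%FREE` takes everything that remains. A quick sanity check with a hypothetical extent count:

```shell
# Hypothetical 1000 free extents in the volume group
FREE=1000
ROOT_ADD=$(( FREE * 80 / 100 ))  # first lvresize: 800 extents to root
REMAIN=$(( FREE - ROOT_ADD ))    # second lvresize takes the 200 that remain
echo "root +$ROOT_ADD, co3 +$REMAIN"
```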
14. Display the disk space usage to ensure new space is added:
sudo df -h
Send a test syslog message to a remote collector (-T selects TCP, -P the port):
logger -n <remoteip> -T -P 514 "aliokan Test message"
Problem:
..
DEBUG vsphere_tag_category.category: Creation complete after 0s [id=urn:vmomi:InventoryServiceCategory:f49160e4-a017-404f-9ecf-88b93e02f300:GLOBAL]
DEBUG vsphere_tag.tag: Creating...
DEBUG vsphere_tag.tag: Creation complete after 0s [id=urn:vmomi:InventoryServiceTag:75e6da39-1493-4760-a360-5708375c1e49:GLOBAL]
DEBUG vsphereprivate_import_ova.import: Creating...
ERROR
ERROR Error: failed to find provided vSphere objects: failed to find a host in the cluster that contains the provided datastore
ERROR
ERROR on ../../tmp/openshift-install--761813450/main.tf line 44, in resource "vsphereprivate_import_ova" "import":
ERROR 44: resource "vsphereprivate_import_ova" "import" {
ERROR
ERROR
FATAL failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply Terraform: failed to complete the change
..
Root cause of the problem:
This issue was caused by trying to use a datastore that was not shared storage on a cluster with multiple hypervisors.
Bulma is a free, open source framework that provides ready-to-use frontend components that you can easily combine to build responsive web interfaces.
Themes:
https://jenil.github.io/bulmaswatch/