How to install a wildcard SSL certificate on the Splunk Web
Hi Guys
If you come across a situation where you need to install or renew a wildcard certificate (*.local.test.net) on a Splunk Web instance, you can follow the guidelines below. The main purpose of this guide is to help fellow Splunk engineers in a similar situation, since I did not find a proper guide either in the Splunk portal or via Google searches.
- If it is a standard SSL certificate, please follow the detailed Splunk guide: https://docs.splunk.com/Documentation/Splunk/8.2.4/Security/Getthird-partycertificatesforSplunkWeb
- If you need to convert a certificate from .crt to .pem format, use the command "openssl x509 -in cert.crt -out cert.pem".
- My commands are based on the openssl utility on a Linux server.
- A CSR had already been created on another server (a Windows server) and the wildcard certificate was obtained from the SSL vendor. Thereafter, we installed the certificate on that server and exported it (in .pfx format, with the private key) to be imported on all other servers, including our Splunk server.
Now let's go into the steps.
- Do not create a separate private key as described in the guide above. There is no need to create a CSR on the Splunk server either.
- Copy the intermediate root certificate to the Splunk server and convert it to a .pem:
# openssl x509 -in MyRoot.crt -out MyRoot.pem
- Copy the .pfx file to the Splunk server and extract the private key. When it prompts for a password, enter the password you set when you created/exported the .pfx certificate:
# openssl pkcs12 -in certificate.pfx -out privatekey.key -nocerts -nodes
- Now extract the server certificate, entering the same password when prompted:
# openssl pkcs12 -in certificate.pfx -out certificate.pem -nokeys -clcerts
- Now verify the MD5 hashes of the certificate and the private key using the commands below. They must match:
# openssl x509 -noout -modulus -in certificate.pem | openssl md5
# openssl rsa -noout -modulus -in privatekey.key | openssl md5
The final step is to combine the server certificate and the root certificate into a single .pem file:
# cat certificate.pem MyRoot.pem >> MySplunkWebCert.pem
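As an optional sanity check (this step is not part of the original guide), you can list every certificate in the combined bundle to confirm that both the server certificate and the root certificate made it in, and in that order:
# openssl crl2pkcs7 -nocrl -certfile MySplunkWebCert.pem | openssl pkcs7 -print_certs -noout
This prints the subject and issuer of each certificate in MySplunkWebCert.pem; the first entry should be your wildcard certificate and the second the intermediate/root.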
That's it. Now you can point Splunk Web to privatekey.key and MySplunkWebCert.pem (the combined file, which includes the server certificate and the root certificate) as per the guidelines at https://docs.splunk.com/Documentation/Splunk/8.2.4/Security/SecureSplunkWebusingasignedcertificate
NOTE: I have not focused on the paths and the file names, so please adjust the file paths and names according to your environment.
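For reference, here is a minimal sketch of the web.conf settings described in the linked guide, placed in $SPLUNK_HOME/etc/system/local/web.conf. The /opt/splunk/etc/auth/mycerts/ path is an assumption for illustration; use wherever you copied the files in your environment:
[settings]
enableSplunkWebSSL = true
# The paths below are hypothetical examples; adjust to your environment
privKeyPath = /opt/splunk/etc/auth/mycerts/privatekey.key
serverCert = /opt/splunk/etc/auth/mycerts/MySplunkWebCert.pem
Then restart Splunk for the change to take effect:
# /opt/splunk/bin/splunk restart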
Source:
https://trustzone.com/knowledge-base/split-pfx-file-into-pem-key-files-openss-windows-linux/
ESXTOP Thresholds
Hi Guys
The table below provides the recommended thresholds for ESXi. These values can be monitored via the esxtop command (a batch-mode capture example follows the table).
Metrics and Thresholds
| Display | Metric | Threshold | Explanation |
|---|---|---|---|
| CPU | %RDY | 10 | Overprovisioning of vCPUs, excessive usage of vSMP, or a limit has been set (check %MLMTD). Note that you will need to expand the VM group to see how this is distributed across vCPUs. If you have many vCPUs, the per-vCPU value may be low and this may not be an issue. 10% is per world! |
| CPU | %CSTP | 3 | Excessive usage of vSMP. Decrease the number of vCPUs for this particular VM. This should lead to increased scheduling opportunities. |
| CPU | %MLMTD | 0 | The percentage of time the vCPU was ready to run but deliberately wasn't scheduled because that would violate the "CPU limit" settings. If larger than 0, the world is being throttled due to the limit on CPU. |
| CPU | %SWPWT | 5 | VM waiting on swapped pages to be read from disk. Possible cause: memory overcommitment. |
| MEM | MCTLSZ | 1 | If larger than 0, the host is forcing VMs to inflate the balloon driver to reclaim memory as the host is overcommitted. |
| MEM | SWCUR | 1 | If larger than 0, the host has swapped memory pages in the past. Possible cause: overcommitment. |
| MEM | SWR/s | 1 | If larger than 0, the host is actively reading from swap (vswp). Possible cause: excessive memory overcommitment. |
| MEM | SWW/s | 1 | If larger than 0, the host is actively writing to swap (vswp). Possible cause: excessive memory overcommitment. |
| MEM | CACHEUSD | 0 | If larger than 0, the host has compressed memory. Possible cause: memory overcommitment. |
| MEM | ZIP/s | 0 | If larger than 0, the host is actively compressing memory. Possible cause: memory overcommitment. |
| MEM | UNZIP/s | 0 | If larger than 0, the host is accessing compressed memory. Possible cause: the host was previously overcommitted on memory. |
| MEM | N%L | 80 | If less than 80, the VM experiences poor NUMA locality. If a VM has a memory size greater than the amount of memory local to each processor, the ESX scheduler does not attempt to use NUMA optimizations for that VM and uses memory "remotely" via the "interconnect". Check "GST_ND(X)" to find out which NUMA nodes are used. |
| NETWORK | %DRPTX | 1 | Dropped packets transmitted, hardware overworked. Possible cause: very high network utilization. |
| NETWORK | %DRPRX | 1 | Dropped packets received, hardware overworked. Possible cause: very high network utilization. |
| DISK | GAVG | 25 | Look at "DAVG" and "KAVG", as the sum of both is GAVG. |
| DISK | DAVG | 25 | Disk latency most likely caused by the array. |
| DISK | KAVG | 2 | Disk latency caused by the VMkernel; high KAVG usually means queuing. This is the ESXi storage stack, the vSCSI layer, and the VMM. Check "QUED". |
| DISK | QUED | 1 | Queue maxed out. Possibly the queue depth is set too low, or the controller is overloaded. Check with the array vendor for the optimal queue depth value. (Enable this via option "F", aka QSTATS.) |
| DISK | ABRTS/s | 1 | Aborts issued by the guest (VM) because storage is not responding. For Windows VMs this happens after 60 seconds by default. Can be caused, for instance, when paths fail or the array is not accepting any I/O for whatever reason. |
| DISK | RESETS/s | 1 | The number of command resets per second. |
| DISK | ATSF | 1 | The number of failed ATS commands; this value should be 0. |
| DISK | ATS | 1 | The number of successful ATS commands; this value should go up over time when the array supports ATS. |
| DISK | DELETE | 1 | The number of successful UNMAP commands; this value should go up over time when the array supports UNMAP! |
| DISK | DELETE_F | 1 | The number of failed UNMAP commands; this value should be 0. |
| DISK | CONS/s | 20 | SCSI reservation conflicts per second. If many SCSI reservation conflicts occur, performance could be degraded due to the lock on the VMFS. |
| VSAN | SDLAT | 5 | Standard deviation of latency; when above 10 ms, contact support to analyze vSAN Observer details to find out what is causing the delay. |
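To watch these counters interactively, run esxtop on the host and switch views with the single-key commands (c = CPU, m = memory, n = network, d = disk adapter, u = disk device, v = disk VM). For trending against the thresholds above, a batch-mode capture is often easier; the output file name below is just an example:
# esxtop -b -d 5 -n 120 > esxtop-capture.csv
This samples all counters every 5 seconds for 120 iterations (10 minutes) and writes them to a CSV you can open in perfmon or a spreadsheet.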
Source: http://www.yellow-bricks.com/esxtop/#esxtop-thresholds