nmap + nikto + nuclei: Better Together

nmap, nikto, and nuclei are the backbone of web application security scanning. If you run assessments professionally, you use all three. But odds are you run them in isolation — three separate terminals, three separate outputs, manually connecting the dots between them.

That workflow leaves gaps. Not because the tools are lacking, but because each one produces intelligence the others could use and never gets. This post covers what changes when you chain them into a single pipeline — and why the result is significantly greater than the sum of its parts.

What Each Tool Does Best

nmap: The Reconnaissance Layer

nmap maps the attack surface. Before you can test anything, you need to know what's listening, on which ports, running what software.

# Service detection across all ports
nmap -sV -p- --open target.com

# Script scan for HTTP-specific enumeration
nmap -sV --script=http-title,http-headers,http-methods,http-server-header -p 80,443,8080,8443 target.com

# Grab SSL cert details + enumerate HTTP info
nmap -sV --script=ssl-cert,ssl-enum-ciphers,http-enum -p 443 target.com

Key outputs for the chain: open ports, service names and versions, SSL/TLS presence, HTTP server headers, detected technologies via NSE scripts. Every downstream tool needs this information.
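Downstream stages consume exactly these fields. A minimal parsing sketch — the sample line is hand-written in the style of nmap's normal output; a real pipeline should use -oX and an XML parser instead of splitting text:

```shell
# Sample line in the style of nmap -sV normal output (hypothetical target)
line='8080/tcp open  http    Apache Tomcat 9.0.50'

# Split out the three fields downstream tools care about
port=${line%%/*}                                        # text before the first "/"
service=$(echo "$line" | awk '{print $3}')              # third whitespace field
product=$(echo "$line" | awk '{$1=$2=$3=""; sub(/^ +/,""); print}')  # the rest

echo "$port | $service | $product"
# → 8080 | http | Apache Tomcat 9.0.50
```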


nikto: The Server Misconfiguration Hunter

nikto doesn't care about your application logic. It cares about what your web server exposes that it shouldn't — default files, outdated components, dangerous HTTP methods, backup files, directory listings, information leaking through headers.

# Standard scan
nikto -h https://target.com

# Target a non-standard port found by nmap
nikto -h target.com -p 8443 -ssl

# Output XML for downstream parsing
nikto -h https://target.com -Format xml -output nikto-results.xml

# Scan specific categories: misconfigs + info disclosure + interesting files
nikto -h https://target.com -Tuning 123

Key outputs for the chain: server software and version strings, identified technologies (PHP version from X-Powered-By, framework signatures), exposed files and directories, potential injection points, security header analysis.


nuclei: The Vulnerability Verification Engine

nuclei is a precision instrument. Its YAML templates encode exact detection logic for thousands of known vulnerabilities — specific HTTP requests with specific response matching. No fuzzing, no guessing. A template match gives you a concrete, reproducible request-and-response indicator rather than a heuristic score.

# Broad scan, critical and high only
nuclei -u https://target.com -severity critical,high

# Technology-targeted after identifying Apache 2.4.49
nuclei -u https://target.com -tags apache -severity critical,high,medium

# CVE-focused scan
nuclei -u https://target.com -tags cve -severity critical,high

# Scan multiple targets from a file
cat targets.txt | nuclei -severity critical,high

# Run specific template when you know what you're looking for
nuclei -u https://target.com -t cves/2024/CVE-2024-23897.yaml

Key outputs for the chain: confirmed CVEs with severity ratings, exposed admin panels, default credentials, misconfigured services, technology-specific vulnerabilities with proof.
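These outputs are easiest to consume programmatically via nuclei's JSON export. A sketch with a hand-written sample line — field names vary slightly between nuclei versions, and jq is the robust choice over grep in production:

```shell
# Hand-written sample finding in nuclei's JSONL style (abbreviated)
finding='{"template-id":"CVE-2024-23897","info":{"severity":"critical"},"host":"http://target.com:8080"}'

# Pull out the fields a report or follow-up stage would key on
template=$(echo "$finding" | grep -oP '"template-id":"\K[^"]+')
severity=$(echo "$finding" | grep -oP '"severity":"\K[^"]+')

echo "[$severity] $template"
# → [critical] CVE-2024-23897
```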

The Problem with Running Them Separately

Here's what actually happens when you run these tools independently:

Duplicate work. nmap identifies Apache 2.4.51 on port 443. You note it mentally. Then nikto reports the same Apache version. Then nuclei's http-fingerprint templates detect it again. Three tools, three separate discovery cycles for the same information.

Missed context. nmap finds port 8080 running Jetty 9.4.43. You run nikto against port 443 because that's the obvious target. You forget 8080, or you run nikto against it 20 minutes later, after you've already moved on to nuclei. The Jetty instance never gets nuclei's CVE templates because you didn't think to add it.

No data flow. nikto discovers that the server returns X-Powered-By: PHP/7.4.3. That's a goldmine for nuclei — PHP 7.4 is EOL with known CVEs. But nuclei doesn't know nikto found this. You'd have to manually read nikto's output, identify the PHP version, then run nuclei -tags php. In practice, this step gets skipped under time pressure.
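Automating that handoff takes only a few lines. A sketch, assuming nikto's plain-text output contains the header finding — the exact wording of nikto's message varies by version, so the sample line is illustrative:

```shell
# Hypothetical nikto finding line reporting the X-Powered-By header
finding='+ Retrieved x-powered-by header: PHP/7.4.3'

# Extract "PHP/7.4.3", then split it into a nuclei tag and a version
match=$(echo "$finding" | grep -oiP '[a-z]+/[0-9][0-9.]*' | head -1)
tag=$(echo "${match%%/*}" | tr 'A-Z' 'a-z')
version=${match#*/}

echo "nuclei -u https://target.com -tags $tag"
# → nuclei -u https://target.com -tags php
```

With this wired in, the "step that gets skipped under time pressure" happens every time.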

Fragmented reporting. You end up with three output files in different formats. Correlating findings requires manual effort. Was the CVE nuclei found on the same service nikto flagged as misconfigured? You have to cross-reference by hand.

The tools are excellent individually. The workflow of running them as three independent processes is not.

How Chaining Them Transforms Results

When the output of each tool feeds directly into the configuration of the next, the pipeline becomes more than a sequence — it becomes a feedback loop.

Stage 1: nmap discovers the surface

# Scan and output structured results
nmap -sV -sC --open -oX nmap-scan.xml target.com

nmap finds: port 80 (nginx 1.18), port 443 (nginx 1.18 with SSL), port 8080 (Apache Tomcat 9.0.50), port 3000 (Node.js Express). Four services. Two different server technologies. One of them (Tomcat 9.0.50) has known CVEs.

Stage 2: nmap feeds nikto and nuclei

Instead of manually typing targets, the pipeline parses nmap's XML output and fans out:

# Parse nmap XML for HTTP services, run nikto against each
xmlstarlet sel -t -m "//port[state/@state='open'][service/@name='http' or service/@name='http-proxy' or service/@name='https']" \
  -v "concat(../../address/@addr,':',@portid)" -n nmap-scan.xml | \
  while read target; do
    nikto -h "$target" -Format xml -output "nikto-${target//:/-}.xml"
  done

nikto now scans all four HTTP services automatically — including port 8080 and 3000, which you might have skipped manually.

Stage 3: Combined intelligence drives targeted nuclei scans

Here's where chaining pays off. From nmap and nikto combined, we know: Tomcat 9.0.50 sits on port 8080 (a version with published CVEs), nginx 1.18 serves ports 80 and 443, an Express app listens on port 3000, and nikto's header analysis has pinned down the PHP version behind the main site.

Instead of running nuclei's full 8,000+ template library blindly, the chain targets:

# Targeted Tomcat CVE scan on the specific port
nuclei -u http://target.com:8080 -tags tomcat -severity critical,high,medium

# PHP-specific CVEs based on nikto's version discovery
nuclei -u https://target.com -tags php -severity critical,high

# Default credential checks on exposed Tomcat manager
nuclei -u http://target.com:8080/manager/html -tags default-login

# General scan on all discovered endpoints
echo -e "http://target.com\nhttps://target.com\nhttp://target.com:8080\nhttp://target.com:3000" | \
  nuclei -severity critical,high

The result: faster scans (fewer irrelevant templates), higher signal-to-noise ratio, and findings that would have been missed by a generic full-template run.
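That targeting logic can be encoded as a simple lookup. A sketch — the tag groupings below are illustrative, not nuclei's official taxonomy:

```shell
# Map a fingerprinted technology to a nuclei tag set (illustrative groupings)
tags_for() {
  case "$1" in
    tomcat)  echo "tomcat,cve" ;;
    nginx)   echo "nginx" ;;
    express) echo "nodejs" ;;
    php)     echo "php,cve" ;;
    *)       echo "" ;;
  esac
}

# Technologies the earlier stages identified drive the nuclei invocations
for tech in tomcat php; do
  echo "nuclei -u https://target.com -tags $(tags_for "$tech") -severity critical,high"
done
```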

Manual Chaining: The Bash Script Approach

Most practitioners who chain these tools do it with bash scripts. Here's a typical pipeline:

#!/bin/bash
TARGET=$1
OUTDIR="./scan-results/$(date +%Y%m%d)-${TARGET}"
mkdir -p "$OUTDIR"

echo "[*] Stage 1: nmap service discovery"
nmap -sV -sC --open -oX "$OUTDIR/nmap.xml" -oN "$OUTDIR/nmap.txt" "$TARGET"

# Extract HTTP services from nmap output
HTTP_TARGETS=$(grep -E '^[0-9]+/tcp\s+open\s+\S*http' "$OUTDIR/nmap.txt" | \
  cut -d/ -f1 | \
  while read port; do
    if grep -q "$port/tcp.*ssl\|$port/tcp.*https" "$OUTDIR/nmap.txt"; then
      echo "https://${TARGET}:${port}"
    else
      echo "http://${TARGET}:${port}"
    fi
  done)

echo "[*] Found HTTP targets:"
echo "$HTTP_TARGETS"

echo "[*] Stage 2: nikto scans"
echo "$HTTP_TARGETS" | while read url; do
    SAFE_NAME=$(echo "$url" | sed 's/[^a-zA-Z0-9]/-/g')
    nikto -h "$url" -Format xml -output "$OUTDIR/nikto-${SAFE_NAME}.xml" 2>&1 | \
      tee "$OUTDIR/nikto-${SAFE_NAME}.txt"
done

echo "[*] Stage 3: nuclei — broad scan"
echo "$HTTP_TARGETS" | nuclei -severity critical,high -o "$OUTDIR/nuclei-broad.txt"

# Extract technologies from nikto output for targeted nuclei runs
echo "[*] Stage 4: nuclei — targeted scans"
if grep -qi "tomcat" "$OUTDIR"/nikto-*.txt 2>/dev/null; then
    echo "[+] Tomcat detected, running Tomcat templates"
    echo "$HTTP_TARGETS" | nuclei -tags tomcat -o "$OUTDIR/nuclei-tomcat.txt"
fi

if grep -qi "php" "$OUTDIR"/nikto-*.txt 2>/dev/null; then
    echo "[+] PHP detected, running PHP templates"
    echo "$HTTP_TARGETS" | nuclei -tags php -o "$OUTDIR/nuclei-php.txt"
fi

if grep -qi "wordpress" "$OUTDIR"/nikto-*.txt 2>/dev/null; then
    echo "[+] WordPress detected, running WordPress templates"
    echo "$HTTP_TARGETS" | nuclei -tags wordpress -o "$OUTDIR/nuclei-wordpress.txt"
fi

echo "[*] Done. Results in $OUTDIR"

This works. It's better than running each tool by hand. But it's brittle in ways that matter:

Substring matching isn't detection. grep -qi "php" fires on the word "php" anywhere in nikto's output — a file name, an error message — and stays silent if the technology surfaces under a name you didn't hard-code.

No error handling. If nikto hangs on one target, the whole pipeline stalls behind it; a malformed nmap line silently empties HTTP_TARGETS.

Static decision logic. The script only reacts to the three technologies it was written for. Everything else gets the generic scan or nothing.

Fragmented output. The results still land in separate per-tool files, so cross-tool correlation remains manual.

You can build a more sophisticated pipeline. Some teams have hundreds of lines of bash and Python orchestrating these tools. But you're now maintaining a scanning platform instead of running scans.

How Ironimo Automates the Pipeline

Ironimo runs the same Kali Linux tools — actual nmap, nikto, and nuclei binaries, not reimplementations. The difference is what happens between tool executions.

An AI orchestration layer reads each tool's output, extracts structured intelligence, and makes real-time decisions about what to run next. Not a static decision tree. An adaptive one.

Concrete example of what this looks like in practice:

  1. nmap discovers port 8080 running Apache Tomcat/9.0.50
  2. The orchestrator knows Tomcat 9.0.50 is affected by CVE-2024-52316 (authentication bypass) and CVE-2024-50379 (RCE via partial PUT) — it queues nuclei with those specific templates
  3. nikto runs in parallel against all HTTP services, finds /manager/html returns 401 instead of 404 — Tomcat Manager is deployed
  4. The orchestrator sees nikto's manager finding + the Tomcat version, adds default credential nuclei templates targeting /manager/html specifically
  5. nuclei confirms default credentials work on Tomcat Manager — this gets flagged as critical with the full attack chain documented: open port + deployed manager + default creds
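The version-aware step (2) can be approximated in shell — a toy lookup, with the version range and template paths hard-coded for illustration; a real orchestrator consults a vulnerability database:

```shell
# Toy lookup: map a fingerprinted product/version to known CVE templates.
# The "any Tomcat 9.x" range is deliberately coarse and illustrative.
product="tomcat" version="9.0.50"

case "$product $version" in
  "tomcat 9."*)
    templates="cves/2024/CVE-2024-52316.yaml cves/2024/CVE-2024-50379.yaml" ;;
  *)
    templates="" ;;
esac

for t in $templates; do
  echo "nuclei -u http://target.com:8080 -t $t"
done
```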

No bash glue. No grepping for technology names. The orchestrator understands what each finding means and what to do about it.

The reporting layer correlates across tools automatically. That critical Tomcat Manager finding references the nmap port discovery, the nikto manager detection, and the nuclei credential verification — one coherent narrative instead of three separate line items.

Beyond the Trio: When to Extend the Chain

nmap + nikto + nuclei cover the core assessment. But specific findings should trigger specialized tools:

sqlmap: When injection is suspected

If nikto flags OSVDB-* entries related to SQL injection, or nuclei detects a potential injection point, the pipeline should escalate to sqlmap for confirmation and characterization:

# nikto found error-based SQL indicator on /search endpoint
sqlmap -u "https://target.com/search?q=test" --batch --level=3 --risk=2

# nuclei flagged a parameterized endpoint
sqlmap -u "https://target.com/api/v1/users?id=1" --batch --dbms=postgresql --technique=BEU

sqlmap is expensive (slow, noisy, generates many requests). You don't run it against every endpoint. You run it where earlier tools gave you reason to.
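The gating itself is cheap. A sketch, keyed off a hypothetical indicator string from an earlier stage:

```shell
# Hypothetical finding from an earlier stage suggesting injection
nikto_line='+ /search?q=test: SQL error message detected in response'

# Only queue the expensive sqlmap run when there is actual evidence
cmd=""
if echo "$nikto_line" | grep -qi 'sql'; then
  cmd="sqlmap -u 'https://target.com/search?q=test' --batch --level=3 --risk=2"
fi
echo "${cmd:-skipping sqlmap: no injection indicator}"
```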

testssl.sh: When TLS is present

Every HTTPS service nmap discovers should get a testssl.sh scan. This isn't conditional — it's automatic:

# Run against every HTTPS endpoint nmap found
testssl.sh --jsonfile tls-443.json https://target.com:443
testssl.sh --jsonfile tls-8443.json https://target.com:8443

Findings like TLS 1.0 support, weak ciphers, or certificate issues don't come from nikto or nuclei. testssl.sh is the only tool in the chain that covers this surface comprehensively.

whatweb: When technology identification needs depth

nmap's NSE scripts and nikto's headers give you partial technology fingerprints. whatweb goes deeper — CMS versions, JavaScript framework versions, CDN identification, analytics platforms:

# Aggressive fingerprinting
whatweb -a 3 -v https://target.com

# The output feeds nuclei's template selection
# whatweb identifies: WordPress 6.4.2, jQuery 3.6.0, PHP 8.1
# → nuclei gets: -tags wordpress, plus version-specific CVE templates

In a well-built pipeline, whatweb runs early (right after nmap) and its output enriches every subsequent tool's configuration.
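A sketch of that enrichment step, parsing whatweb's brief output format — the sample line is hand-written, and a real pipeline would prefer whatweb's --log-json output over text scraping:

```shell
# Hand-written line in whatweb's brief output style
line='https://target.com [200 OK] WordPress[6.4.2], jQuery[3.6.0], PHP[8.1]'

# Plugin names precede "[" -- lower-case them into a nuclei tag list
tags=$(echo "$line" | grep -oP '[A-Za-z]\w*(?=\[)' | tr 'A-Z' 'a-z' | paste -sd, -)
echo "nuclei -u https://target.com -tags $tags"
# → nuclei -u https://target.com -tags wordpress,jquery,php
```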


The Takeaway

nmap, nikto, and nuclei are individually excellent. Together, with data flowing between them, they're a fundamentally different capability. The total coverage exceeds what you get from running each tool independently — not by a small margin, but by finding entire classes of issues that only emerge when one tool's output informs another's input.

You can build this pipeline yourself in bash. Many teams do. But the maintenance cost of the orchestration layer — the decision logic, the error handling, the cross-tool correlation — grows faster than the scanning logic itself.

The tools are solved. The orchestration is the hard part.

Ironimo chains 19 Kali Linux tools — nmap, nikto, nuclei, sqlmap, hydra, xsstrike, wpscan, and more — with AI orchestration that decides what to run based on what each scan discovers. Real pentester tools. Zero pentester required.
