
Bytesrevealer - Online Reverse Engineering Viewer

By: Unknown


Bytes Revealer is a powerful reverse engineering and binary analysis tool designed for security researchers, forensic analysts, and developers. With features like hex view, visual representation, string extraction, entropy calculation, and file signature detection, it helps users uncover hidden data inside files. Whether you are analyzing malware, debugging binaries, or investigating unknown file formats, Bytes Revealer makes it easy to explore, search, and extract valuable information from any binary file.

Bytes Revealer does NOT store any files or data. All analysis is performed in your browser.

Current limitation: files smaller than 50 MB support all analysis types; larger files, up to 1.5 GB, are limited to Visual View and Hex View analysis.


Features

File Analysis

  • Chunked file processing for memory efficiency
  • Real-time progress tracking
  • File signature detection
  • Hash calculations (MD5, SHA-1, SHA-256)
  • Entropy and byte-frequency analysis (see the sketch below)
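
For reference, the entropy figure is the Shannon entropy of the byte-value histogram: 0 bits/byte for constant data, 8 bits/byte for uniformly random data. A minimal Python sketch of that computation (illustrative only; Bytes Revealer itself runs in the browser as JavaScript, and the file name below is hypothetical):

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)  # byte-value histogram, also usable for frequency analysis
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

with open("sample.bin", "rb") as f:  # hypothetical input file
    print(f"entropy: {shannon_entropy(f.read()):.3f} bits/byte")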

Multiple Views

File View

  • Basic file information and metadata
  • File signatures detection
  • Hash values
  • Entropy calculation
  • Statistical analysis

Visual View

  • Binary data visualization
  • ASCII or byte-sequence search
  • Data distribution view
  • Highlighted pattern matching

Hex View

  • Traditional hex editor interface
  • Byte-level inspection
  • Highlighted pattern matching
  • ASCII representation
  • ASCII or byte-sequence search

String Analysis

  • ASCII and UTF-8 string extraction (see the sketch below)
  • String length analysis
  • String type categorization
  • Advanced filtering and sorting
  • String pattern recognition
  • Export capabilities
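
For reference, printable-string extraction works like the classic Unix strings tool: scan for runs of printable characters above a minimum length. An illustrative Python sketch (not the tool's own code; the file name is hypothetical):

import re

def extract_ascii_strings(data: bytes, min_len: int = 4):
    """Yield (offset, string) for runs of printable ASCII, like `strings`."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.start(), match.group().decode("ascii")

with open("sample.bin", "rb") as f:  # hypothetical input file
    for offset, s in extract_ascii_strings(f.read()):
        print(f"{offset:#010x}  {s}")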

Search Capabilities

  • Hex pattern search (see the example below)
  • ASCII/UTF-8 string search
  • Regular expression support
  • Highlighted search results
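
Hex-pattern and regex searches both reduce to byte-level matching. An illustrative Python sketch of the two modes (again with a hypothetical input file):

import re

data = open("sample.bin", "rb").read()  # hypothetical input file

# Hex pattern search: every offset of the byte sequence 4d 5a ("MZ").
pattern = bytes.fromhex("4d5a")
offsets, i = [], data.find(pattern)
while i != -1:
    offsets.append(i)
    i = data.find(pattern, i + 1)

# Regex search over raw bytes, e.g. URL-like strings.
urls = [m.group().decode("ascii", "replace")
        for m in re.finditer(rb"https?://[\x21-\x7e]+", data)]

print(f"{len(offsets)} hits for {pattern!r}, {len(urls)} URL-like strings")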

Technical Details

Built With

  • Vue.js 3
  • Tailwind CSS
  • Web Workers for performance
  • Modern JavaScript APIs

Performance Features

  • Chunked file processing (sketched below)
  • Web Worker implementation
  • Memory optimization
  • Cancelable operations
  • Progress tracking
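
Chunked processing keeps memory usage flat regardless of file size: hashes and statistics are updated incrementally per chunk instead of after loading the whole file. An illustrative Python analogue of the pattern (Bytes Revealer does the equivalent in JavaScript Web Workers):

import hashlib

def hash_in_chunks(path: str, chunk_size: int = 1 << 20):
    """Stream a file through MD5/SHA-1/SHA-256, reporting progress per chunk."""
    hashes = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    done = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):   # read 1 MiB at a time
            for h in hashes.values():
                h.update(chunk)
            done += len(chunk)
            print(f"\rprocessed {done} bytes", end="")
    print()
    return {name: h.hexdigest() for name, h in hashes.items()}

print(hash_in_chunks("sample.bin"))  # hypothetical input file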

Getting Started

Prerequisites

# Node.js 14+ is required
node -v

Docker Usage

docker-compose build --no-cache

docker-compose up -d

Now open your browser: http://localhost:8080/

To stop the Docker container:

docker-compose down

Installation

# Clone the repository
git clone https://github.com/vulnex/bytesrevealer

# Navigate to project directory
cd bytesrevealer

# Install dependencies
npm install

# Start development server
npm run dev

Building for Production

# Build the application
npm run build

# Preview production build
npm run preview

Usage

  1. File Upload
     • Click "Choose File" or drag and drop a file
     • A progress bar shows upload and analysis status

  2. Analysis Views
     • Switch between views using the tab interface
     • Each view provides a different analysis perspective
     • Real-time updates as you navigate

  3. Search Functions
     • Use the search bar for pattern matching
     • Toggle between hex and string search modes
     • Results are highlighted in the current view

  4. String Analysis
     • View extracted strings with type and length
     • Filter strings by type or content
     • Sort by various criteria
     • Export results in multiple formats

Performance Considerations

  • Large files are processed in chunks
  • Web Workers handle intensive operations
  • Memory usage is optimized
  • Operations can be canceled if needed

Browser Compatibility

  • Chrome 80+
  • Firefox 75+
  • Safari 13.1+
  • Edge 80+

Contributing

  1. Fork the project
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Security Considerations

  • All strings are properly escaped
  • Input validation is implemented
  • Memory limits are enforced
  • File size restrictions are in place

Future Enhancements

  • Additional file format support
  • More visualization options
  • Pattern recognition improvements
  • Advanced string analysis features
  • Export/import capabilities
  • Collaboration features


PANO - Advanced OSINT Investigation Platform Combining Graph Visualization, Timeline Analysis, And AI Assistance To Uncover Hidden Connections In Data

By: Unknown


PANO is a powerful OSINT investigation platform that combines graph visualization, timeline analysis, and AI-powered tools to help you uncover hidden connections and patterns in your data.

Getting Started

  1. Clone the repository:

     git clone https://github.com/ALW1EZ/PANO.git
     cd PANO

  2. Run the application:
     • Linux: ./start_pano.sh
     • Windows: start_pano.bat

The startup script will automatically:
  • Check for updates
  • Set up the Python environment
  • Install dependencies
  • Launch PANO

To use the Email Lookup transform, you need to log in with GHunt first. After starting PANO via the starter scripts:

  1. Activate the venv manually
     • Linux: source venv/bin/activate
     • Windows: call venv\Scripts\activate
  2. See how to log in here

💡 Quick Start Guide

  1. Create Investigation: Start a new investigation or load an existing one
  2. Add Entities: Drag entities from the sidebar onto the graph
  3. Discover Connections: Use transforms to automatically find relationships
  4. Analyze: Use timeline and map views to understand patterns
  5. Save: Export your investigation for later use

πŸ” Features

πŸ•ΈοΈ Core Functionality

  • Interactive Graph Visualization
    • Drag-and-drop entity creation
    • Multiple layout algorithms (Circular, Hierarchical, Radial, Force-Directed)
    • Dynamic relationship mapping
    • Visual node and edge styling

  • Timeline Analysis
    • Chronological event visualization
    • Interactive timeline navigation
    • Event filtering and grouping
    • Temporal relationship analysis

  • Map Integration
    • Geographic data visualization
    • Location-based analysis
    • Interactive mapping features
    • Coordinate plotting and tracking

🎯 Entity Management

  • Supported Entity Types
    • 📧 Email addresses
    • 👤 Usernames
    • 🌐 Websites
    • 🖼️ Images
    • 📍 Locations
    • ⏰ Events
    • 📝 Text content
    • 🔧 Custom entity types

🔄 Transform System

  • Email Analysis
    • Google account investigation
    • Calendar event extraction
    • Location history analysis
    • Connected services discovery

  • Username Analysis
    • Cross-platform username search
    • Social media profile discovery
    • Platform correlation
    • Web presence analysis

  • Image Analysis
    • Reverse image search
    • Visual content analysis
    • Metadata extraction
    • Related image discovery

🤖 AI Integration

  • PANAI
    • Natural language investigation assistant
    • Automated entity extraction and relationship mapping
    • Pattern recognition and anomaly detection
    • Multi-language support
    • Context-aware suggestions
    • Timeline and graph analysis

🧩 Core Components

📦 Entities

Entities are the fundamental building blocks of PANO. They represent distinct pieces of information that can be connected and analyzed:

  • Built-in Types
    • 📧 Email: Email addresses with service detection
    • 👤 Username: Social media and platform usernames
    • 🌐 Website: Web pages with metadata
    • 🖼️ Image: Images with EXIF and analysis
    • 📍 Location: Geographic coordinates and addresses
    • ⏰ Event: Time-based occurrences
    • 📝 Text: Generic text content

  • Properties System
    • Type-safe property validation
    • Automatic property getters
    • Dynamic property updates
    • Custom property types
    • Metadata support

⚡ Transforms

Transforms are automated operations that process entities to discover new information and relationships:

  • Operation Types
    • 🔍 Discovery: Find new entities from existing ones
    • 🔗 Correlation: Connect related entities
    • 📊 Analysis: Extract insights from entity data
    • 🌐 OSINT: Gather open-source intelligence
    • 🔄 Enrichment: Add data to existing entities

  • Features
    • Async operation support
    • Progress tracking
    • Error handling
    • Rate limiting
    • Result validation

πŸ› οΈ Helpers

Helpers are specialized tools with dedicated UIs for specific investigation tasks:

  • Available Helpers
    • 🔍 Cross-Examination: Analyze statements and testimonies
    • 👤 Portrait Creator: Generate facial composites
    • 📸 Media Analyzer: Advanced image processing and analysis
    • 🔍 Base Searcher: Search near places of interest
    • 🔄 Translator: Translate text between languages

  • Helper Features
    • Custom Qt interfaces
    • Real-time updates
    • Graph integration
    • Data visualization
    • Export capabilities

👥 Contributing

We welcome contributions! To contribute to PANO:

  1. Fork the repository at https://github.com/ALW1EZ/PANO/
  2. Make your changes in your fork
  3. Test your changes thoroughly
  4. Create a Pull Request to our main branch
  5. In your PR description, include:
     • What the changes do
     • Why you made these changes
     • Any testing you've done
     • Screenshots if applicable

Note: We use a single main branch for development. All pull requests should be made directly to main.

📖 Development Guide

System Requirements

  • Operating System: Windows or Linux
  • Python 3.11+
  • PySide6 for the GUI
  • Internet connection for online features

Custom Entities

Entities are the core data structures in PANO. Each entity represents a piece of information with specific properties and behaviors. To create a custom entity:

  1. Create a new file in the `entities` folder (e.g., `entities/phone_number.py`)
  2. Implement your entity class:
from dataclasses import dataclass
from typing import ClassVar, Dict, Any
from .base import Entity

@dataclass
class PhoneNumber(Entity):
    name: ClassVar[str] = "Phone Number"
    description: ClassVar[str] = "A phone number entity with country code and validation"

    def init_properties(self):
        """Initialize phone number properties"""
        self.setup_properties({
            "number": str,
            "country_code": str,
            "carrier": str,
            "type": str,  # mobile, landline, etc.
            "verified": bool
        })

    def update_label(self):
        """Update the display label"""
        self.label = self.format_label(["country_code", "number"])
Custom Transforms

Transforms are operations that process entities and generate new insights or relationships. To create a custom transform:

  1. Create a new file in the `transforms` folder (e.g., `transforms/phone_lookup.py`)
  2. Implement your transform class:
from dataclasses import dataclass
from typing import ClassVar, List
from .base import Transform
from entities.base import Entity
from entities.phone_number import PhoneNumber
from entities.location import Location
from ui.managers.status_manager import StatusManager

@dataclass
class PhoneLookup(Transform):
    name: ClassVar[str] = "Phone Number Lookup"
    description: ClassVar[str] = "Lookup phone number details and location"
    input_types: ClassVar[List[str]] = ["PhoneNumber"]
    output_types: ClassVar[List[str]] = ["Location"]

    async def run(self, entity: PhoneNumber, graph) -> List[Entity]:
        if not isinstance(entity, PhoneNumber):
            return []

        status = StatusManager.get()
        operation_id = status.start_loading("Phone Lookup")

        try:
            # Your phone number lookup logic here
            # Example: query an API for phone number details
            location = Location(properties={
                "country": "Example Country",
                "region": "Example Region",
                "carrier": "Example Carrier",
                "source": "PhoneLookup transform"
            })

            return [location]

        except Exception as e:
            status.set_text(f"Error during phone lookup: {str(e)}")
            return []

        finally:
            status.stop_loading(operation_id)
Custom Helpers

Helpers are specialized tools that provide additional investigation capabilities through a dedicated UI interface. To create a custom helper:

  1. Create a new file in the `helpers` folder (e.g., `helpers/data_analyzer.py`)
  2. Implement your helper class:
from PySide6.QtWidgets import (
    QWidget, QVBoxLayout, QHBoxLayout, QPushButton,
    QTextEdit, QLabel, QComboBox
)
from .base import BaseHelper
from qasync import asyncSlot

class DummyHelper(BaseHelper):
    """A dummy helper for testing"""

    name = "Dummy Helper"
    description = "A dummy helper for testing"

    def setup_ui(self):
        """Initialize the helper's user interface"""
        # Create input text area
        self.input_label = QLabel("Input:")
        self.input_text = QTextEdit()
        self.input_text.setPlaceholderText("Enter text to process...")
        self.input_text.setMinimumHeight(100)

        # Create operation selector
        operation_layout = QHBoxLayout()
        self.operation_label = QLabel("Operation:")
        self.operation_combo = QComboBox()
        self.operation_combo.addItems(["Uppercase", "Lowercase", "Title Case"])
        operation_layout.addWidget(self.operation_label)
        operation_layout.addWidget(self.operation_combo)

        # Create process button
        self.process_btn = QPushButton("Process")
        self.process_btn.clicked.connect(self.process_text)

        # Create output text area
        self.output_label = QLabel("Output:")
        self.output_text = QTextEdit()
        self.output_text.setReadOnly(True)
        self.output_text.setMinimumHeight(100)

        # Add widgets to main layout
        self.main_layout.addWidget(self.input_label)
        self.main_layout.addWidget(self.input_text)
        self.main_layout.addLayout(operation_layout)
        self.main_layout.addWidget(self.process_btn)
        self.main_layout.addWidget(self.output_label)
        self.main_layout.addWidget(self.output_text)

        # Set dialog size
        self.resize(400, 500)

    @asyncSlot()
    async def process_text(self):
        """Process the input text based on selected operation"""
        text = self.input_text.toPlainText()
        operation = self.operation_combo.currentText()

        if operation == "Uppercase":
            result = text.upper()
        elif operation == "Lowercase":
            result = text.lower()
        else:  # Title Case
            result = text.title()

        self.output_text.setPlainText(result)

📄 License

This project is licensed under the Creative Commons Attribution-NonCommercial (CC BY-NC) License.

You are free to:
  • ✅ Share: Copy and redistribute the material
  • ✅ Adapt: Remix, transform, and build upon the material

Under these terms:
  • ℹ️ Attribution: You must give appropriate credit
  • 🚫 NonCommercial: No commercial use
  • 🔓 No additional restrictions

πŸ™ Acknowledgments

Special thanks to all library authors and contributors who made this project possible.

👨‍💻 Author

Created by ALW1EZ with AI ❤️



GDBFuzz - Fuzzing Embedded Systems Using Hardware Breakpoints

By: Zion3R


This is the companion code for the paper 'Fuzzing Embedded Systems using Debugger Interfaces'. A preprint of the paper can be found at https://publications.cispa.saarland/3950/. The code allows users to reproduce and extend the results reported in the paper. Please cite the above paper when reporting, reproducing, or extending the results.


Folder structure

.
├── benchmark          # Scripts to build Google's fuzzer test suite and run experiments
├── dependencies       # Contains a Makefile to install dependencies for GDBFuzz
├── evaluation         # Raw experiment data, presented in the paper
├── example_firmware   # Embedded example applications, used for the evaluation
├── example_programs   # Contains a compiled example program and configs to test GDBFuzz
├── src                # Contains the implementation of GDBFuzz
├── Dockerfile         # For creating a Docker image with all GDBFuzz dependencies installed
├── LICENSE            # License
├── Makefile           # Makefile for creating the Docker image or installing GDBFuzz locally
└── README.md          # This README file

Purpose of the project

The idea of GDBFuzz is to leverage the hardware breakpoints of microcontrollers as feedback for coverage-guided fuzzing. GDB is used as a generic interface to enable broad applicability, and Ghidra is used for binary analysis of the firmware. The code contains a benchmark setup for evaluating the method; example firmware files are also included.
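
Conceptually, the feedback loop places hardware breakpoints on a subset of unreached basic blocks, runs inputs, and treats a breakpoint hit as new coverage. The following is a simplified, hypothetical Python sketch of that loop, not the actual implementation in src/GDBFuzz/; the gdb object stands in for a GDB server connection, and the default values are made up:

import random

def mutate(data: bytes) -> bytes:
    """Placeholder mutator: flip one random byte."""
    if not data:
        return b"\x00"
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ random.randrange(1, 256)]) + data[i + 1:]

def fuzz_loop(gdb, cfg_nodes, corpus, max_breakpoints=6, until_rotate=1000):
    """Coverage-guided loop using hardware breakpoints as feedback (sketch only)."""
    unreached = set(cfg_nodes)  # basic-block addresses recovered by Ghidra
    misses = 0
    gdb.set_breakpoints(random.sample(sorted(unreached), min(max_breakpoints, len(unreached))))
    while unreached:
        input_data = mutate(random.choice(corpus))
        hit = gdb.run_input(input_data)   # address of a hit breakpoint, or None
        if hit is not None:
            unreached.discard(hit)        # new coverage: keep the input
            corpus.append(input_data)
            misses = 0
        else:
            misses += 1
        if misses >= until_rotate:        # rotate breakpoints to fresh blocks
            gdb.set_breakpoints(random.sample(sorted(unreached), min(max_breakpoints, len(unreached))))
            misses = 0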

Getting Started

GDBFuzz enables coverage-guided fuzzing for embedded systems but - for evaluation purposes - can also fuzz arbitrary user applications. For fuzzing on microcontrollers, we recommend a local installation of GDBFuzz so that fuzz data can be sent to the device under test reliably.

Install local

GDBFuzz has been tested on Ubuntu 20.04 LTS and Raspberry Pi OS (32-bit). Prerequisites are Java and Python 3. First, create a new virtual environment and install all dependencies.

virtualenv .venv
source .venv/bin/activate
make
chmod a+x ./src/GDBFuzz/main.py

Run locally on an example program

GDBFuzz reads settings from a config file with the following keys.

[SUT]
# Path to the binary file of the SUT.
# This can, for example, be an .elf file or a .bin file.
binary_file_path = <path>

# Address of the root node of the CFG.
# Breakpoints are placed at nodes of this CFG.
# e.g. 'LLVMFuzzerTestOneInput' or 'main'
entrypoint = <entrypoint>

# Number of inputs that must be executed without a breakpoint hit until
# breakpoints are rotated.
until_rotate_breakpoints = <number>


# Maximum number of breakpoints that can be placed at any given time.
max_breakpoints = <number>

# Blacklist functions that shall be ignored.
# ignore_functions is a space separated list of function names e.g. 'malloc free'.
ignore_functions = <space separated list>

# One of {Hardware, QEMU, SUTRunsOnHost}
# Hardware: An external component starts a gdb server and GDBFuzz can connect to this gdb server.
# QEMU: GDBFuzz starts QEMU. QEMU emulates binary_file_path and starts gdbserver.
# SUTRunsOnHost: GDBFuzz starts the target program within GDB.
target_mode = <mode>

# Set this to False if you want to start ghidra, analyze the SUT,
# and start the ghidra bridge server manually.
start_ghidra = True


# Space separated list of addresses where software breakpoints (for error
# handling code) are set. Execution of those is considered a crash.
# Example: software_breakpoint_addresses = 0x123 0x432
software_breakpoint_addresses =


# Whether all triggered software breakpoints are considered a crash
consider_sw_breakpoint_as_error = False

[SUTConnection]
# The class 'SUT_connection_class' in file 'SUT_connection_path' implements
# how inputs are sent to the SUT.
# Inputs can, for example, be sent over Wi-Fi, Serial, Bluetooth, ...
# This class must inherit from ./connections/SUTConnection.py.
# See ./connections/SUTConnection.py for more information.
SUT_connection_file = FIFOConnection.py

[GDB]
path_to_gdb = gdb-multiarch
# Format: address:port
gdb_server_address = localhost:4242

[Fuzzer]
# In Bytes
maximum_input_length = 100000
# In seconds
single_run_timeout = 20
# In seconds
total_runtime = 3600

# Optional
# Path to a directory where each file contains one seed. If you don't want to
# use seeds, leave the value empty.
seeds_directory =

[BreakpointStrategy]
# Strategies to choose basic blocks are located in
# 'src/GDBFuzz/breakpoint_strategies/'
# For the paper we use the following strategies
# 'RandomBasicBlockStrategy.py' - Randomly choosing unreached basic blocks
# 'RandomBasicBlockNoDomStrategy.py' - Like previous, but doesn't use dominance relations to derive transitively reached nodes.
# 'RandomBasicBlockNoCorpusStrategy.py' - Like first, but prevents growing the input corpus and therefore behaves like blackbox fuzzing with coverage measurement.
# 'BlackboxStrategy.py', - Doesn't set any breakpoints
breakpoint_strategy_file = RandomBasicBlockStrategy.py

[Dependencies]
path_to_qemu = dependencies/qemu/build/x86_64-linux-user/qemu-x86_64
path_to_ghidra = dependencies/ghidra


[LogsAndVisualizations]
# One of {DEBUG, INFO, WARNING, ERROR, CRITICAL}
loglevel = INFO

# Path to a directory where output files (e.g. graphs, logfiles) are stored.
output_directory = ./output

# If set to True, an MQTT client sends UI elements (e.g. graphs)
enable_UI = False
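
Since the file uses standard INI syntax, it can be read with Python's configparser. A minimal, illustrative sketch (the path is hypothetical, and GDBFuzz's own loader may differ):

import configparser

config = configparser.ConfigParser()
config.read("fuzz_json.cfg")  # hypothetical path to a config like the one above

binary = config["SUT"]["binary_file_path"]
timeout = config.getint("Fuzzer", "single_run_timeout")
seeds = config["Fuzzer"]["seeds_directory"] or None  # empty value means "no seeds"
print(binary, timeout, seeds)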

An example config file is located in ./example_programs/ together with an example program that was compiled using our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common/. Start fuzzing for one hour with the following command.

chmod a+x ./example_programs/json-2017-02-12
./src/GDBFuzz/main.py --config ./example_programs/fuzz_json.cfg

We first see output from Ghidra analyzing the binary executable, and subsequently messages when breakpoints are relocated or hit.

Fuzzing Output

Depending on the output_directory specified in the config file, there should now be a folder trial-0 with the following structure:

.
├── corpus        # A folder that contains the input corpus.
├── crashes       # A folder that contains crashing inputs - if any.
├── cfg           # The control flow graph as adjacency list.
├── fuzzer_stats  # Statistics of the fuzzing campaign.
├── plot_data     # Table showing at which relative time in the fuzzing campaign which basic block was reached.
└── reverse_cfg   # The reverse control flow graph.

Using Ghidra in GUI mode

By setting start_ghidra = False in the config file, GDBFuzz connects to a Ghidra instance running in GUI mode. To do so, the ghidra_bridge plugin needs to be started manually from the Script Manager. During fuzzing, reached program blocks are highlighted in green.

GDBFuzz on Linux user programs

For fuzzing Linux user applications, GDBFuzz leverages the standard LLVMFuzzerTestOneInput entrypoint that is used by almost all fuzzers (AFL, AFL++, libFuzzer, ...). In benchmark/benchSUTs/GDBFuzz_wrapper/common there is a wrapper that can be used to compile any compliant fuzz harness into a standalone program that fetches input via a named pipe at /tmp/fromGDBFuzz. This makes it possible to simulate an embedded device that consumes data via a well-defined input interface, and therefore to run GDBFuzz on any application. For convenience, we created a script in benchmark/benchSUTs that compiles all programs from our evaluation with our wrapper, as explained later.

NOTE: GDBFuzz is not intended to fuzz Linux user applications; use AFL++ or other fuzzers for that purpose. The wrapper exists only for evaluation purposes, to enable running benchmarks and comparisons at scale.
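
For illustration, feeding a single input to such a wrapper by hand amounts to writing to that named pipe. The pipe path comes from the text above; the exact framing the wrapper expects is not documented here, so treat this as a sketch:

# Write one test input to the wrapper's named pipe.
# Opening a FIFO for writing blocks until the reader has opened it.
payload = b'{"some": "json"}'
with open("/tmp/fromGDBFuzz", "wb") as fifo:
    fifo.write(payload)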

Install and run in a Docker container

The general effectiveness of our approach is shown in a large-scale benchmark deployed as Docker containers.

make dockerimage

To run the above experiment in the Docker container (for one hour, as specified in the config file), map the example_programs and output folders as volumes and start GDBFuzz as follows.

chmod a+x ./example_programs/json-2017-02-12
docker run -it --env CONFIG_FILE=/example_programs/fuzz_json_docker_qemu.cfg -v $(pwd)/example_programs:/example_programs -v $(pwd)/output:/output gdbfuzz:1.0

An output folder should appear in the current working directory with the structure explained above.

Detailed Instructions

Our evaluation is split into two parts: 1. GDBFuzz in its intended setup, directly on the hardware. 2. GDBFuzz in an emulated environment to allow independent analysis and comparison of the results.

GDBFuzz can work with any GDB server and therefore with most debug probes for microcontrollers.

GDBFuzz vs. Blackbox (RQ1)

Regarding RQ1 from the paper, we execute GDBFuzz on different microcontrollers with different firmware images located in example_firmware. For each experiment, we run GDBFuzz with the RandomBasicBlock and with the RandomBasicBlockNoCorpus strategy. The latter behaves like fuzzing without feedback, but we can still measure the achieved coverage. To answer RQ1, we compare the achieved coverage of the RandomBasicBlock and the RandomBasicBlockNoCorpus strategies. The respective config files are in the corresponding subfolders, and we now explain how to set up fuzzing on the four development boards.

GDBFuzz on STM32 B-L4S5I-IOT01A board

GDBFuzz requires access to a GDB server. In this case, the B-L4S5I-IOT01A and its on-board debugger are used. This on-board debugger sets up a GDB server via the 'st-util' program and makes it accessible at localhost:4242.

  • Install the STLINK driver link
  • Connect MCU board and PC via USB (on MCU board, connect to the USB connector that is labeled as 'USB STLINK')
sudo apt-get install stlink-tools gdb-multiarch

Build and flash a firmware for the STM32 B-L4S5I-IOT01A, for example the arduinojson project.

Prerequisite: Install platformio (pio)

cd ./example_firmware/stm32_disco_arduinojson/
pio run --target upload

For your information: platformio stores an .elf file of the SUT at ./example_firmware/stm32_disco_arduinojson/.pio/build/disco_l4s5i_iot01a/firmware.elf. This .elf file is later used in the user configuration for Ghidra.

Start a new terminal, and run the following to start a GDB server:

st-util

Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller, which forwards the data via serial to the SUT. In our case, /dev/ttyACM0 is the USB device for the microcontroller board. If your system assigned another device to the board, change /dev/ttyACM0 in the config file accordingly.

./src/GDBFuzz/main.py --config ./example_firmware/stm32_disco_arduinojson/fuzz_serial_json.cfg

Fuzzer statistics and logs are in the ./output/... directory.

GDBFuzz on the CY8CKIT-062-WiFi-BT board

Install pyocd:

pip install --upgrade pip 'mbed-ls>=1.7.1' 'pyocd>=0.16'

Make sure that 'KitProg v3' is on the device, and put the board into 'Arm DAPLink' mode by pressing the appropriate button. Start the GDB server:

pyocd gdbserver --persist

Flash a firmware and start fuzzing, e.g. with:

gdb-multiarch
target remote :3333
load ./example_firmware/CY8CKIT_json/mtb-example-psoc6-uart-transmit-receive.elf
monitor reset
./src/GDBFuzz/main.py --config ./example_firmware/CY8CKIT_json/fuzz_serial_json.cfg

GDBFuzz on ESP32 and Segger J-Link

Build and flash a firmware for the ESP32, for instance the arduinojson example with platformio.

cd ./example_firmware/esp32_arduinojson/
pio run --target upload

Add the following line to the openocd config file for the J-Link debugger (jlink.cfg):

adapter speed 10000

Start a new terminal, and run the following to start the GDB Server:

get_idf
openocd -f interface/jlink.cfg -f target/esp32.cfg -c "telnet_port 7777" -c "gdb_port 8888"

Run GDBFuzz with a user configuration for arduinojson. We can send data over the USB port to the microcontroller, which forwards the data via serial to the SUT. In our case, /dev/ttyUSB0 is the USB device for the microcontroller board. If your system assigned another device to the board, change /dev/ttyUSB0 in the config file accordingly.

./src/GDBFuzz/main.py --config ./example_firmware/esp32_arduinojson/fuzz_serial.cfg

Fuzzer statistics and logs are in the ./output/... directory.

GDBFuzz on MSP430F5529LP

Install TI MSP430 GCC from https://www.ti.com/tool/MSP430-GCC-OPENSOURCE

Start the GDB server:

./gdb_agent_console libmsp430.so

or (more stable): build mspdebug from https://github.com/dlbeer/mspdebug/ and use:

until mspdebug --fet-skip-close --force-reset tilib "opt gdb_loop True" gdb ; do sleep 1 ; done

Ghidra fails to analyze binaries for the TI MSP430 controller out of the box. To fix this, we import the file in the Ghidra GUI, choose MSP430X as the architecture, and skip the auto-analysis. Next, we open the 'Symbol Table', sort by name, and delete all symbols with names like $C$L*. Now the auto-analysis can be executed. After the analysis, start the ghidra_bridge manually from the Ghidra GUI and then start GDBFuzz.

./src/GDBFuzz/main.py --config ./example_firmware/msp430_arduinojson/fuzz_serial.cfg

USB Fuzzing

To access USB devices as a non-root user with pyusb, we add appropriate udev rules. Paste the following line into /etc/udev/rules.d/50-myusb.rules:

SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", GROUP="usbusers", MODE="666"

Reload udev:

sudo udevadm control --reload
sudo udevadm trigger

Compare against Fuzzware (RQ2)

In RQ2 from the paper, we compare GDBFuzz against the emulation-based approach Fuzzware. First, we execute GDBFuzz and Fuzzware as described previously on the shipped firmware files. For each GDBFuzz experiment, we create a file with the valid basic blocks from the control flow graph files as follows:

cut -d " " -f1 ./cfg > valid_bbs.txt

Now we can replay the coverage against the Fuzzware results:

fuzzware genstats --valid-bb-file valid_bbs.txt

Finding Bugs (RQ3)

When crashing or hanging inputs are found, they are stored in the crashes folder. During the evaluation, we found the following three bugs:

  1. An infinite loop in the STM32 USB device stack, caused by a for loop that counts a uint8_t index variable up to an attacker-controllable uint32_t bound.
  2. A buffer overflow in the Cypress JSON parser, caused by missing length checks on a fixed size internal buffer.
  3. A null pointer dereference in the Cypress JSON parser, caused by missing validation checks.

GDBFuzz on a Raspberry Pi 4a (8 GB)

GDBFuzz can also run on a Raspberry Pi host with slight modifications:

  1. Ghidra must be modified such that it runs on a 32-bit OS.

     In ./dependencies/ghidra/support/launch.sh:125, the JAVA_HOME variable must be hardcoded, e.g. to JAVA_HOME="/usr/lib/jvm/default-java"

  2. STLink must be at version >= 1.7 to work properly -> build it from source.

GDBFuzz on other boards

To fuzz software on other boards, GDBFuzz requires

  1. A microcontroller with hardware breakpoints and a GDB-compliant debug probe
  2. The firmware file
  3. A running GDB server and a suitable GDB application
  4. An entry point where fuzzing should start, e.g. a parser function or an address
  5. An input interface (see src/GDBFuzz/connections) that triggers execution of the code at the entry point, e.g. a serial connection

All these properties need to be specified in the config file.

Run the full Benchmark (RQ4 - 8)

For RQs 4 - 8 we run a large-scale benchmark. First, build the Docker image as described previously and compile the applications from Google's Fuzzer Test Suite with our fuzzing harness in benchmark/benchSUTs/GDBFuzz_wrapper/common.

cd ./benchmark/benchSUTs
chmod a+x setup_benchmark_SUTs.py
make dockerbenchmarkimage

Next, adapt the benchmark settings in benchmark/scripts/benchmark.py and benchmark/scripts/benchmark_aflpp.py to your demands (especially number_of_cores, trials, and seconds_per_trial) and start the benchmark with:

cd ./benchmark/scripts
./benchmark.py $(pwd)/../benchSUTs/SUTs/ SUTs.json
./benchmark_aflpp.py $(pwd)/../benchSUTs/SUTs/ SUTs.json

A folder appears in ./benchmark/scripts that contains plot files (coverage over time), fuzzer statistics files, and control flow graph files for each experiment, as in evaluation/fuzzer_test_suite_qemu_runs.

[Optional] Install Visualization and Visualization Example

GDBFuzz has an optional feature that plots the control flow graph of covered nodes. It is disabled by default. You can enable it by following the instructions in this section and setting 'enable_UI' to 'True' in the user configuration.

On the host:

Install

sudo apt-get install graphviz

Install a recent version of Node.js, for example via Option 2 from here (use Option 2, not Option 1). This should install both node and npm. For reference, our version numbers are as follows (newer versions should work too):

➜ node --version
v16.9.1
➜ npm --version
7.21.1

Install web UI dependencies:

cd ./src/webui
npm install

Install the mosquitto MQTT broker, e.g. see here.

Update the mosquitto broker config: Replace the file /etc/mosquitto/conf.d/mosquitto.conf with the following content:

listener 1883
allow_anonymous true

listener 9001
protocol websockets

Restart the mosquitto broker:

sudo service mosquitto restart

Check that the mosquitto broker is running:

sudo service mosquitto status

The output should include the text 'Active: active (running)'.
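
To check that UI messages actually arrive, you can subscribe to the broker with a small script. This sketch assumes the paho-mqtt 1.x client API and subscribes to a wildcard, because the topic names GDBFuzz publishes on are not documented here:

import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x is installed

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload[:80]!r}")  # first bytes of each UI message

client = mqtt.Client()             # paho-mqtt 2.x additionally needs a CallbackAPIVersion
client.on_message = on_message
client.connect("localhost", 1883)  # port from the mosquitto config above
client.subscribe("#")              # wildcard topic: an assumption
client.loop_forever()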

Start the web UI:

cd ./src/webui
npm start

Your web browser should open automatically at http://localhost:3000/.

Start GDBFuzz with a user config file where enable_UI is set to True. You can use the Docker container and the arduinojson SUT from above.

Nodes shown in blue are covered; white nodes are not. We only show uncovered nodes whose parent is covered (drawing the complete control flow graph takes too much time when the graph is large).


