Reproducing Ret2’s SkyLight fuzzer on macOS Mojave

In last year’s Pwn2Own, the team at Ret2 Systems developed an interesting exploit chain for macOS High Sierra through bugs they found in Safari and the macOS WindowServer. A few months later, they published an excellent walkthrough of their methodology and the bugs used to compromise the machine. Of particular interest to me was their description of how they used Frida and a relatively basic fuzzing strategy to find an exploitable bug in the SkyLight library. I decided to spend some time reimplementing their methodology on Mojave as a way to get some hands-on experience with Frida and mach message fuzzing.

Before we get into it, I’d like to thank the team at Ret2 Systems for their help in reproducing their research, and for the original write-up. We rarely get a chance to understand these chains in such depth, and the six-part series is a must-read for anyone interested in this kind of stuff. Follow along if you’re interested in the process, but for those who’d rather just get started fuzzing SkyLight, you’ll find everything you need here.

While I would recommend reading all six posts in the Ret2 Systems Pwn2Own walkthrough, the most important for our purposes is post four, Cracking the Walls of the Safari Sandbox. In this post Ret2 describes their methodology and why they chose WindowServer, so we’ll skip that here.

intercepting messages

The first step is to identify the points in the SkyLight library where we will be using Frida’s Interceptor to read and modify incoming mach messages. To do this, we first need to find the addresses of the MIG subsystems. You can get these addresses either with jtool or by searching for the string “subsystem” in the library itself using your favorite disassembler.

$ jtool -arch x86_64 -q -d __DATA.__const  /System/Library/PrivateFrameworks/SkyLight.framework/SkyLight | grep "MIG sub"
Dumping from address 0x2b3e10 (Segment: __DATA.__const) to end of section
Address : 0x2b3e10 = Offset 0x2b4e10
0x2b4658: 68 74 00 00 a9 74 00 00 MIG subsystem 29800 (65 messages)
0x2b5110: 48 71 00 00 57 71 00 00 MIG subsystem 29000 (15 messages)
0x2b53f8: 10 72 00 00 13 74 00 00 MIG subsystem 29200 (515 messages)

Visiting each of these offsets in a disassembler, we’ll find cross-referencing code that reveals the MIG dispatch handlers we want to hook. They may differ in newer versions of Mojave – the above are from a 10.14 VM.

The first subsystem on our list is __CGXCGXWindowServer_subsystem, beginning 8 bytes before the jtool-provided offset in the SkyLight library.

(image: the WindowServer MIG subsystem structure in the disassembler)

All we need to do is ask our disassembler to show us references to the subsystem, and it is simple enough to identify the associated dispatch routine. Note that I am doing this in Hopper, but free alternatives such as radare2 or Ghidra should work just fine.

(image: the dispatch routine containing our hook target)

After the pointer to the mach message we want to fuzz has been moved into rdi (at 0xcde63), the mach message handler pointed to by rax will be called. This call instruction is where we want to intercept. Note that this first case is special because of a limitation in Frida that prevents us from hooking directly at the call instruction: Frida needs 5 bytes of space after the intercept target to do its relocations, but since the basic block ends here there is no room. Fortunately for us we can just hook one instruction earlier, as at that point rdi already contains the pointer to the message we want to intercept. With the other two subsystems, be sure to hook at the call instruction itself.

To perform the intercept, we need to note the offset of the instruction and the target call register, which in this case would be 0xcde66 and rax. Repeat this process with the two other MIG subsystems identified earlier. Now we can start putting together the JavaScript that will be injected into the WindowServer process by Frida.

// code as provided by Ret2 Systems at https://blog.ret2.io/2018/07/25/pwn2own-2018-safari-sandbox/
// intercept target offsets modified for Mojave

function InstallProbe(probe_address, target_register) {
    var probe = Interceptor.attach(probe_address, function(args) {
        var input_msg  = args[0]; // rdi (the incoming mach_msg)
        var output_msg = args[1]; // rsi (the response mach_msg)
        // extract the call target & its symbol name (_X...)
        var call_target = this.context[target_register];
        var call_target_name = DebugSymbol.fromAddress(call_target);
        // ready to read / modify / replay 
        console.log('[+] Message received for ' + call_target_name);
    });
    return probe;
}

var targets = [
    ['0xcde66', 'rax'], // WindowServer_subsystem
    ['0x27d4a', 'rcx'], // Rendezvous_subsystem
    ['0xd0886', 'rax']  // Services_subsystem
 ];

// locate the runtime address of the SkyLight framework
var skylight = Module.findBaseAddress('SkyLight');
console.log('[*]  SkyLight @ ' + skylight);
// hook the target instructions
for (var i in targets) {
    var hook_address = ptr(skylight).add(targets[i][0]); // base + offset
    InstallProbe(hook_address, targets[i][1])
    console.log('[+] Hooked dispatch @ ' + hook_address);
}

Save the file and run it with Frida. Since WindowServer essentially runs as root, you’ll need to use sudo. Also note that by default, even as root, Frida will not be able to attach to the WindowServer process without first disabling System Integrity Protection.

$ sudo frida -l <fuzzer_file.js> WindowServer
...
Attaching...                                                            
[*]  SkyLight @ 0x7fff55353000
[+] Hooked dispatch @ 0x7fff55420e66
[+] Hooked dispatch @ 0x7fff5537ad4a
[+] Hooked dispatch @ 0x7fff55423886
[Local::WindowServer]-> [+] Message received for 0x7fff55391e61 SkyLight!_XRedrawLayerContext
[+] Message received for 0x7fff55391ed0 SkyLight!_XContextDidCommit
[+] Message received for 0x7fff55391e61 SkyLight!_XRedrawLayerContext
[+] Message received for 0x7fff55391ed0 SkyLight!_XContextDidCommit
[+] Message received for 0x7fff55391e61 SkyLight!_XRedrawLayerContext
[+] Message received for 0x7fff55391ed0 SkyLight!_XContextDidCommit
[+] Message received for 0x7fff55391e61 SkyLight!_XRedrawLayerContext
[+] Message received for 0x7fff55391ed0 SkyLight!_XContextDidCommit 
…

We are now able to intercept the mach messages as they are being passed to their respective dispatch routines.

reading the mach_msg_header_t struct

rdi (args[0] in our interceptor function) points to the message’s mach_msg_header_t struct, which has various members we need to read in order to fuzz the message’s inline data and save information about it for our replay log. Most of the members are simply types that boil down to 32-bit unsigned integers. This image from Chapter 9 of Amit Singh’s Mac OS X Internals gives an overview of the struct.

(image: the mach_msg_header_t struct layout, from Mac OS X Internals)

To read the message, we just use Frida’s readU32() method at the correct offsets within the struct, and readS32() for the msgh_id member.

// msgh_bits is unsigned int, offset: dec 0
var msgh_bits = args[0].readU32().toString(16);

// msgh_size is unsigned int, offset: dec 4
var msgh_size = args[0].add(4).readU32();

// msgh_remote_port is unsigned int, offset: dec 8
var msgh_remote_port = args[0].add(8).readU32();

// msgh_local_port is unsigned int, offset: dec 12
var msgh_local_port = args[0].add(12).readU32();

// msgh_voucher_port is unsigned int, offset: dec 16
var msgh_voucher_port = args[0].add(16).readU32();

// msgh_id is signed int, offset: dec 20
var msgh_id = args[0].add(20).readS32().toString(16);

// msgh_buffer is data of size msgh_size - 24, offset: dec 24
var buff_pos = 24; // offset where the inline message data begins
var msgh_buff = args[0].add(buff_pos).readByteArray(msgh_size - buff_pos);

fuzzing the inline message data

Now that we’ve read the message, we can fuzz the inline data contained in msgh_buff and write the fuzzed buffer back into memory. You could skip reading the header fields if you wish, but they are needed later on to build the replay log.

var flip_offset = Math.floor(Math.random() * msgh_buff.byteLength);
var flip_mask = Math.floor(Math.random() * 256); // random 8-bit mask
var v = new DataView(msgh_buff, flip_offset, 1);
v.setUint8(0, (v.getUint8(0) ^ flip_mask));

// write the fuzzed buff
args[0].add(buff_pos).writeByteArray(msgh_buff);

As you can see, we’re just XORing a single byte somewhere in the message with a random mask, as demonstrated by Ret2, but you can of course change this to fuzz however you’d like.
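
For example, a slightly more aggressive variant could XOR a handful of bytes per message instead of one. The snippet below is only a sketch of that idea and is not part of the original Ret2 script; the flip count of 4 is an arbitrary choice, and it reuses the msgh_buff and buff_pos variables from above.

// sketch of a multi-byte variant (not from the Ret2 script): XOR a few
// random bytes instead of one; the count of 4 is arbitrary
var flips = 4;
for (var n = 0; n < flips; n++) {
    var off  = Math.floor(Math.random() * msgh_buff.byteLength);
    var mask = Math.floor(Math.random() * 256);
    var view = new DataView(msgh_buff, off, 1);
    view.setUint8(0, view.getUint8(0) ^ mask);
}

// write the mutated buffer back into the message
args[0].add(buff_pos).writeByteArray(msgh_buff);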

At this point we can start a basic fuzzing session by running sudo frida -l <fuzzer_file.js> WindowServer as we did before. To keep this post brief I won’t describe how the replays work (big thanks to Markus at Ret2 for pointing me in the right direction on how to do it), but the full fuzz.js file with replay ability can be found in my GitHub repo for this project. It includes the replay mode as well as a separate JavaScript file called driver.js that manages the fuzzing process, including the ability to reattach to WindowServer after a crash and to log any exceptions found during fuzzing.
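
For a rough idea of the shape of the replay data, each time a message is fuzzed you can ship its header fields and mutated inline buffer back to the host side with Frida’s send(); the host script can then append the entry to an on-disk log for later replay. The snippet below is only an illustration of that idea – the actual fuzz.js may record things differently – and it assumes the msgh_* and call_target_name variables read earlier are in scope.

// sketch of a replay-log entry: record the header fields plus the fuzzed
// inline buffer so the message can be reconstructed and resent later
var entry = {
    msgh_id:   msgh_id,
    msgh_bits: msgh_bits,
    msgh_size: msgh_size,
    handler:   call_target_name.toString()
};

// send() delivers the JSON-serializable payload and the raw buffer to the
// host-side script, which writes them out to the replay log
send(entry, msgh_buff);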

resources

https://blog.ret2.io

http://hurdextras.nongnu.org/ipc_guide/mach_ipc_basic_concepts.html

Chapter 9 on OSX IPC From Amit Singh’s Mac OS X Internals

https://fergofrog.com/code/cbowser/xnu/osfmk/mach/message.h.html#mach_msg_ool_descriptor32_t

https://github.com/uroboro/mach_portal/blob/master/mach_portal/unsandboxer.c

http://blog.wuntee.sexy/reaching-the-mach-layer

http://web.mit.edu/darwin/src/modules/xnu/osfmk/man/mach_msg_header.html

newt 0.6.0 now available

I’m happy to announce that the latest version of newt, 0.6.0, is now available! This new version introduces many bug and stability fixes, as well as three new mutator modes to use when fuzzing files. Below I’ll list some of the changes:

  • Added the ability to select specific mutators when format fuzzing
  • Added the ability to fuzz input from stdin
  • Added the ability to generate fuzzy buffers to use with other fuzzing mechanisms
  • Added three new mutators, byte arithmetic, bit rotate and “ripple” mutation
  • Added logging for which mutators are being used
  • Fixed error where some mutators discarded user-specified fuzz factor
  • Fixed issue in “buffMangler” mutator that sometimes generated bad cases
  • Fixed issue where certain program exit codes confused newt process monitor
  • Fixed issue with procmon that caused gdb-monitored programs to not respawn
  • Improved help messages to be more useful
  • Updated readme with a few usage examples

“ripple” mutation

Something I am very excited about in this release is a new mutator I am calling “ripple” mutation. It is based on the byte arithmetic methodology, wherein a byte value is selected and changed by a random amount. The difference is that after selecting an “impact” byte, arithmetic is also performed on the surrounding bytes in both directions, with the change decreasing by squares the further you move from the impact byte. I like to think of it as what happens when you throw a stone into a pond, which is where the mutator’s name comes from.
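
To make the idea a little more concrete, here is a minimal sketch of what a ripple-style mutation could look like in Node. This is only an illustration based on the description above – newt’s actual implementation may differ – and the falloff formula and the rippleMutate name are my own.

// minimal sketch of a ripple-style mutation: pick an impact byte, change it
// by a random delta, then apply smaller changes to its neighbours, falling
// off by squares with distance from the impact byte
function rippleMutate(buf) {
    var impact = Math.floor(Math.random() * buf.length);
    var delta  = Math.floor(Math.random() * 256) - 128;
    for (var dist = 0; ; dist++) {
        var change = Math.trunc(delta / ((dist + 1) * (dist + 1)));
        if (change === 0) break; // the ripple has died out
        if (impact - dist >= 0)
            buf[impact - dist] = (buf[impact - dist] + change) & 0xff;
        if (dist > 0 && impact + dist < buf.length)
            buf[impact + dist] = (buf[impact + dist] + change) & 0xff;
    }
    return buf;
}

// e.g. rippleMutate(require('fs').readFileSync('seed.pdf'))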

In my initial testing, this new mutation mode has proven itself to be the most effective in newt’s arsenal on a variety of formats including fonts, images, videos and especially PDFs. I’m very pleased with the results so far, and I hope you will find it useful as well.

For a simple walkthrough on how to use newt, check out this earlier post.

Setting up a simple fuzz run with newt

Fuzzing is a great way to find security and stability issues in software. At its most basic, it’s extremely easy to do and generally requires much less work than auditing source code. Of course gathering a corpus and triaging bugs takes some time, but during the run itself you’re free to do other things, which for me is the biggest advantage to this approach.

This post will serve as a short tutorial for using my personal fuzzer, newt. It’s a simple, unattended file format fuzzer written in Node that features several mutators and automatic triaging, and can monitor processes using either gdb or Address Sanitizer. At the time of writing, newt is publicly available at version 0.5.4; however, for the last several months I have been working on 0.6.0, which introduces many new features, including new mutators, and piles of bug fixes. I will make this new version available soon, and usage will remain essentially the same, so this tutorial will apply to 0.6.0 as well. Newt is fully compatible with Linux and macOS, and with a little bit of work runs reasonably well on Windows.

Why use newt?

There are tons of great fuzzers out there, most with many more features than newt, so why use it at all? The best answer to this question is ease of use. Fuzzers like afl offer things that newt doesn’t, such as live run statistics, code coverage monitoring and crash minimizers (though this last one is under active development). However, many require a complex setup, such as recompiling with Address Sanitizer, and programs with GUIs can be especially challenging to set up, as you’ll typically need to edit the source code to make the fuzzer happy. With newt this is not necessary, though it also supports fuzzing like this if you wish.

In writing newt, I wanted a tool that I could use without having to do much in the way of preparation. The idea was that if a simple fuzz run with newt uncovered a few bugs, then I knew it was worth my time to instrument a program or begin auditing source code in order to investigate further. I also wanted to come up with novel mutators in the hopes that it might uncover issues that other fuzzers had missed.

Installing newt

The latest version of newt will always be available on my GitHub page. First, we will clone the repository and install the required npm modules. If you don’t already have Node on your machine, you’ll need to install it now. I recommend using nvm.

$ git clone https://github.com/wreet/newt
$ cd newt
$ npm install
$ ./newt.js
#=> [~] newt.js 0.5.4 - a simple node-powered fuzzer
#=> Usage: newt command [opts]
#=> Commands:
#=>   autofuzz    Automatically generate cases and fuzz the subject
#=>   |  -i       Required, directory where file format or ngen seeds can be found
#=>   |  -o       Required, output directory for crashes, logs, cases
#=>   |  -s       Required, the subject binary to fuzz
#=>   |  -k       Sometimes required, kill subject after -k seconds, useful for GUI bins
#=>   |  -f       Optional, int value that translates to fuzzing 1/-f byte in buffMangler mode  
#=>   |  -m       Optional, monitor mode. Default is gdb, asan instrumented bins also supported                                                                                     
#=>   procmon     Launch and monitor a process
#=>   |  -s       Required, the subject binary [with args]
#=>   |  -m       Required, monitor mode [asan|gdb]
#=>   |  -r       Optional, respawn process on exit
#=>   |  -o       Optional, output dir. Results printed to console if none specified
#=>   netfuzz     Fuzz a remote network service
#=>   |  -i       Required, directory where ngen seeds can be found
#=>   |  -o       Required, output directory for crashes, logs, cases
#=>   |  -h       Required, the host to send the fuzz case as host:port

If you see the newt help output as shown above without any errors, you should be good to go.

Collecting a corpus

Perhaps the most critical step in achieving a successful fuzz run is the collection of a corpus. These are the seed files that will be mutated by the fuzzer and fed into the target program. The more seed files, and the more these files differ, the better your run will be, as that is the key to attaining the highest amount of code coverage in the subject binary. There are plenty of great guides available for choosing an effective corpus, but I will briefly describe the process I typically follow. Depending on the format you’re working with, I find that your own machine is typically a good place to begin the search. In this guide we will be fuzzing PDFs, of which there are probably many on your drive. A quick look at my own Linux install reveals many documentation PDFs in a variety of languages utilizing a fairly wide array of features offered by the specification. Not a bad start for just a couple of commands.

mkdir seeds
cp `sudo find / -name "*.pdf" 2>/dev/null` seeds/

Next, I usually turn to Google, which offers us a handy operator to search by file type. A typical query might look like filetype:pdf site:*.ru. You’ll notice in this example I restricted the search to Russian domains. The reason for this is to collect PDFs written in the Cyrillic alphabet. You can (and should) of course do this with any language. I find this helps to collect a corpus with more interesting inputs. Remember, we’re trying to find PDFs that will trigger as many different functions in the target program as possible.

At this point you are probably wondering how many inputs you should collect. The truth is the more the better, but you’ll have to decide for yourself how much time you’re willing to spend on this step. Since the point of newt is to get fuzzing fast, I typically don’t collect more than a few hundred inputs, personally.

Starting an autofuzz run

If you’re happy with your corpus, then you should be all set for the fun part: your first fuzz run with newt. Let’s jump right in.

mkdir out
./newt.js autofuzz -f 32 -s okular -m gdb -i seeds -o out -k 2

You should be off to the races. I’ll briefly describe what’s going on here.

-f is the fuzz factor, which controls how heavily inputs are mutated. It essentially translates to fuzzing 1 in every -f bytes of the input file (a short sketch of what this means in practice follows these flag descriptions). In this example, we’ll fuzz around 1 in 32 bytes. The lower the value, the more mutated the generated case will be. I typically set this anywhere from 16 to 48. For programs particularly sensitive to file changes, you may need to increase this number quite a bit. On the other end of the spectrum, I find anything lower than about 8 tends to mangle files so badly the target rejects most cases without attempting to parse them.

-s is the subject binary, with any necessary arguments. Unfortunately, these can only be arguments that do not use hyphens, as newt’s argument parser uses hyphens to denote its next flag. One way I get around this is to make a new alias that includes any arguments the target needs, and then use that alias as the argument to -s. Not ideal, but it seems to work fine for most things. Newt expects the program to open the case when fed from the command line, so in this example the command newt will run is okular <case.pdf>.

-m is the monitor mode. This argument tells newt’s process manager how to monitor for crashes. Supported values are gdb and asan. Monitoring with gdb requires jfoote’s exploitable module, which is used to triage crashes caught by gdb. Asan mode requires that the target binary be instrumented with Address Sanitizer.

-o is the output folder for your fuzz run. In it you will find newt’s run log, a cases directory where any cases that caused crashes are saved, and a crash directory containing output from gdb or asan with more information about any observed crashes.

-k is the time in seconds to wait before closing the program and moving on to the next case. This is what makes newt work so well with GUI programs. In this case it is set to 2 seconds, which is plenty of time to open and parse most PDFs. That is to say, if the case is going to crash the reader, it will have done so within 2 seconds, at least on my machine. This argument is optional – if your target closes on its own after the case has been analyzed, you can omit it.
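
To make the -f fuzz factor a little more concrete, the snippet below shows roughly how it could translate into a per-case mutation count. This is just an illustration of the idea, not newt’s actual internals, and the mutationCount name is made up.

// rough illustration of the fuzz factor: mutate about 1 in every
// `fuzzFactor` bytes, with at least one mutation per case
function mutationCount(bufferLength, fuzzFactor) {
    return Math.max(1, Math.floor(bufferLength / fuzzFactor));
}

console.log(mutationCount(1024 * 1024, 32)); // ~32768 mutated bytes for a 1 MiB file at -f 32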

Let the fuzzer run for as long as you wish; I usually let it do its thing overnight and check the crashes directory in the morning. You can also run more than one instance of newt at once by creating multiple output directories and sharing the seeds directory. With a little luck, you’ll have a few interesting crash cases to examine further.

If you run into any issues or have questions about newt, feel free to contact me at chase [at] wreet [dot] xyz. Happy hunting!

Installing Arch on a Raspberry Pi

Note: This post from 2016 has been moved from a previous blog run with my frequent partner in crime, Jon Cornwell.

Who doesn’t love the Raspberry Pi? It’s affordable, well supported and widely available, and I’ve used it for everything from VPN servers to hardware projects. I’m also a big fan of Arch Linux, whose lightweight profile and deep community support make it a natural fit for the device.

The official Arch Linux wiki page for the Raspberry Pi has all the information you need to get started; however, the instructions can be a little dense. We’ll be setting up Arch on a Raspberry Pi 2, but the process is essentially the same for other models of the device; you’ll just need to be sure to use the Arch image that corresponds to your hardware.

Preparing the SD card

The first thing we need to do is format the SD card and create a couple new partitions for our boot sector and root mount point. We’ll need to know which block device has been assigned to the card, so be sure to watch the system log as you plug your SD card into the reader.

$ journalctl -f
#=> Dec 20 12:28:26 wreet kernel: usb 1-9: new high-speed USB device number 9 using xhci_hcd
#=> Dec 20 12:28:26 wreet kernel: usb-storage 1-9:1.0: USB Mass Storage device detected
#=> Dec 20 12:28:26 wreet kernel: scsi host7: usb-storage 1-9:1.0
#=> Dec 20 12:28:27 wreet kernel: scsi 7:0:0:0: Direct-Access     Generic- SD/MMC           1.00 PQ: 0 ANSI: 0 CCS
#=> Dec 20 12:28:27 wreet kernel: sd 7:0:0:0: [sdb] Sense not available.
#=> Dec 20 12:28:27 wreet kernel: sd 7:0:0:0: [sdb] Write Protect is off
#=> Dec 20 12:28:27 wreet kernel: sd 7:0:0:0: [sdb] Mode Sense: 00 00 00 00
#=> Dec 20 12:28:27 wreet kernel: sd 7:0:0:0: [sdb] Asking for cache data failed
#=> Dec 20 12:28:27 wreet kernel: sd 7:0:0:0: [sdb] Assuming drive cache: write through
#=> Dec 20 12:28:27 wreet kernel: sd 7:0:0:0: [sdb] Attached SCSI removable disk

In this case, “sdb” is the relevant device. Once you know the correct block device, it’s time to format the card in preparation for the Arch image. We’ll use the fdisk tool to clear the existing partition table and write a new one that the Raspberry Pi can boot from. Please note this operation will permanently erase all information on the disk.

$ sudo fdisk /dev/sdb
#=> Welcome to fdisk (util-linux 2.27.1).
#=> Changes will remain in memory only, until you decide to write them.
#=> Be careful before using the write command.
#=> Command (m for help):

In fdisk, type o and then enter to clear the existing partition table.

#=> Command (m for help): o
#=> Created a new DOS disklabel with disk identifier [id]

Hit n for a new partition, then p for primary. Next, press enter to accept the default partition number, which should be 1. Hit enter to accept the default start sector, then +100M as the last sector. This will create a 100MB partition that will serve as the boot disk.

#=> Command (m for help): n
#=> Partition type
#=>   p   primary (0 primary, 0 extended, 4 free)
#=>   e   extended (container for logical partitions)
#=> Select (default p): p
#=> Partition number (1-4, default 1):  
#=> First sector (2048-62521343, default 2048): 
#=> Last sector, +sectors or +size{K,M,G,T,P} (2048-62521343, default 62521343): +100M
#=> Created a new partition 1 of type 'Linux' and of size 100 MiB.

In order to boot, the Raspberry Pi expects this sector to contain a FAT filesystem. Type t and then c to change the partition type.

#=> Command (m for help): t
#=> Selected partition 1
#=> Partition type (type L to list all types): c
#=> Changed type of partition 'Linux' to 'W95 FAT32 (LBA)'.

For the root fs mount point, we’ll create another partition as before with fdisk. Type n and then p, and hit enter to accept the default partition number of 2. Press the enter key twice more to accept the default start and end sectors, which will include every free sector that comes after our new boot partition – all the remaining space on the drive. If you would like to further partition the disk, for example to add a swap partition, in the “last sector” prompt simply specify how much space the root fs should take. For example, if I wanted to leave 2GB for a swap partition, I’d put “+27.7G” when prompted.

#=> Command (m for help): n
#=> Partition type
#=>   p   primary (1 primary, 0 extended, 3 free)
#=>   e   extended (container for logical partitions)
#=> Select (default p): p
#=> Partition number (2-4, default 2): 
#=> First sector (206848-62521343, default 206848): 
#=> Last sector, +sectors or +size{K,M,G,T,P} (206848-62521343, default 62521343): 
#=> Created a new partition 2 of type 'Linux' and of size 29.7 GiB.

Write the new table by hitting w. After writing the changes, fdisk will exit automatically.

Next, we will write the actual filesystems to our new partitions, starting with the boot partition. For convenience, I typically just create a new folder in my current working directory for each partition to serve as its mount point.

Note: mkfs.vfat may require an additional package, such as ‘dosfstools’ on Arch

$ mkdir boot root
$ sudo mkfs.vfat /dev/sdb1
#=> mke2fs 1.42.13 (17-May-2015)
$ sudo mkfs.ext4 /dev/sdb2
#=> mke2fs 1.42.13 (17-May-2015)
#=> Creating filesystem with 7789312 4k blocks and 1949696 inodes
#=> Filesystem UUID: eeb394b6-0570-4dbe-8798-436276711342
#=> Superblock backups stored on blocks: 
#=>    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
#=>    4096000
#=> Allocating group tables: done                            
#=> Writing inode tables: done                            
#=> Creating journal (32768 blocks): done
#=> Writing superblocks and filesystem accounting information: done
$ sudo mount /dev/sdb1 boot
$ sudo mount /dev/sdb2 root

Installing the Arch image

Finally, we are ready to download our image and extract it onto the SD card. Download the appropriate image with wget, and extract it into the root directory.

Note: be sure to wget the correct image for your hardware.

$ wget http://archlinuxarm.org/os/ArchLinuxARM-rpi-2-latest.tar.gz
$ tar zxvfp ArchLinuxARM-rpi-2-latest.tar.gz -C root/
$ sudo sync

One of the newly extracted folders in the root directory will be labeled ‘boot’ and contains all the files that will be required in the boot partition. Simply move them to their new location.

$ sudo mv root/boot/* boot/

Now unmount the disks.

$ sudo umount boot root

Congratulations, your SD card should now be loaded with Arch and ready to use with your Raspberry Pi! By default, Arch will run a DHCP client for the ethernet port and accept SSH connections. Login with the username alarm and password alarm. The default root password is root.

Bonus: Black Arch

For the majority of my Arch setups, I like to add the Black Arch repository to my pacman config. That way I can easily install any security or pentesting tools I’d like to use without any hassle. The instructions can be found here on the Black Arch website, but it is essentially as simple as running a shell script.

$ curl -O https://blackarch.org/strap.sh

Verify the sha1 sum of the script; at the time of writing it is 9f770789df3b7803105e5fbc19212889674cd503, but you should always check the Black Arch website for the latest sum. Since running scripts from the internet is dangerous, especially as root, I would recommend reading the content of the script before running it. It’s not too long, and the actions being taken are pretty simple.

Make the script executable, and run it as root.

$ chmod +x strap.sh
$ sudo ./strap.sh

You will now be able to pull any tool provided by the Black Arch repo using pacman. A full list of provided packages can be found here.

Welcome to wreet.xyz!

Hello all, it’s been a while. I’ve decided to finally get around to sharing some of the things I have been working on lately, and that can only mean one thing: it’s time for a new personal blog! Here I intend to share some of the security-related tools I have been working on, recently uncovered bugs, tutorials and the occasional comment on news from the infosec world.

Here you will find the links to the latest versions of my fuzzers, along with information on how to use them and changes I intend to make in the future. I will also be sharing with you bugs uncovered while using them and other tools.

In addition, you’ll find some of the tools I use every day in my work as a webapp pentester, including XSS helpers, a clickjack builder, a webhook to dump client connection information, my attack payloads database and many more to come.

I hope you’ll find this content helpful, and be sure to stay tuned for more!