Some notes and tools for reverse engineering / deobfuscating / unminifying obfuscated web app code

Deobfuscating / Unminifying Obfuscated Web App Code

Table of Contents

Other files in this gist:

PoC

Tools

Unsorted

wakaru

webcrack

ast-grep

Restringer

debundle + related

joern

  • https://joern.io/
    • The Bug Hunter's Workbench

    • Query: Uncover attack surface, sloppy coding practices, and variants of known vulnerabilities using an interactive code analysis shell. Joern supports C, C++, LLVM bitcode, x86 binaries (via Ghidra), JVM bytecode (via Soot), and Javascript. Python, Java source code, Kotlin, and PHP support coming soon.

    • Automate: Wrap your queries into custom code scanners and share them with the community or run existing Joern-based scanners in your CI.

    • Integrate: Use Joern as a library to power your own code analysis tools or as a component via the REST API.

    • https://github.com/joernio/joern
      • Open-source code analysis platform for C/C++/Java/Binary/Javascript/Python/Kotlin based on code property graphs.

      • Joern is a platform for analyzing source code, bytecode, and binary executables. It generates code property graphs (CPGs), a graph representation of code for cross-language code analysis. Code property graphs are stored in a custom graph database. This allows code to be mined using search queries formulated in a Scala-based domain-specific query language. Joern is developed with the goal of providing a useful tool for vulnerability discovery and research in static program analysis.

    • https://docs.joern.io/
      • Joern is a platform for robust analysis of source code, bytecode, and binary code. It generates code property graphs, a graph representation of code for cross-language code analysis. Code property graphs are stored in a custom graph database. This allows code to be mined using search queries formulated in a Scala-based domain-specific query language. Joern is developed with the goal of providing a useful tool for vulnerability discovery and research in static program analysis.

      • The core features of Joern are:

        • Robust parsing. Joern allows importing code even if a working build environment cannot be supplied or parts of the code are missing.
        • Code Property Graphs. Joern creates semantic code property graphs from the fuzzy parser output and stores them in an in-memory graph database. SCPGs are a language-agnostic intermediate representation of code designed for query-based code analysis.
        • Taint Analysis. Joern provides a taint-analysis engine that allows the propagation of attacker-controlled data in the code to be analyzed statically.
        • Search Queries. Joern offers a strongly-typed Scala-based extensible query language for code analysis based on Gremlin-Scala. This language can be used to manually formulate search queries for vulnerabilities as well as automatically infer them using machine learning techniques.
        • Extendable via CPG passes. Code property graphs are multi-layered, offering information about code on different levels of abstraction. Joern comes with many default passes, but also allows users to add passes to include additional information in the graph, and extend the query language accordingly.
      • https://docs.joern.io/code-property-graph/
      • https://docs.joern.io/cpgql/data-flow-steps/
      • https://docs.joern.io/export/
        • Joern can create the following graph representations for C/C++ code:

          • Abstract Syntax Trees (AST)
          • Control Flow Graphs (CFG)
          • Control Dependence Graphs (CDG)
          • Data Dependence Graphs (DDG)
          • Program Dependence graphs (PDG)
          • Code Property Graphs (CPG14)
          • Entire graph, i.e. convert to a different graph format (ALL)

Blogs / Articles / etc

Libraries / Helpers

Unsorted

Recast + related

  • https://github.com/benjamn/recast
  • https://github.com/facebook/jscodeshift
    • A JavaScript codemod toolkit

    • jscodeshift is a toolkit for running codemods over multiple JavaScript or TypeScript files. It provides:

      • A runner, which executes the provided transform for each file passed to it. It also outputs a summary of how many files have (not) been transformed.
      • A wrapper around recast, providing a different API. Recast is an AST-to-AST transform tool and also tries to preserve the style of original code as much as possible.
    • facebook/jscodeshift#500
      • Bringing jscodeshift up to date

      • The biggest issue is with recast. This library hasn't really had a lot of maintenance for the last couple of years, and there's something like 150+ issues and 40+ pull requests waiting to be merged. It seems like 80% of the issues that are logged against jscodeshift are actually recast issues. In order to fix the jscodeshift's outstanding issues, either recast itself needs to fix them or jscodeshift will need to adopt/create its own fork of recast to solve them. For the past year and a half or so putout's main developer has been maintaining a fork of recast and adding a lot of fixes to it. It might be worthwhile to look at switching to @putout/recast as opposed to the recast upstream. I've also been working on a fork of @putout/recast for evcodeshift that adds a few other things to make evcodeshift transforms more debuggable in vscode.

      • https://github.com/putoutjs/recast
        • https://github.com/putoutjs/printer
        • Prints Babel AST to readable JavaScript. For ESTree use estree-to-babel.

          • Similar to Recast, but twice as fast; also simpler and easier to maintain, since it supports only Babel.
          • As opinionated as Prettier, but has more user-friendly output and works directly with AST.
          • Like ESLint but works directly with Babel AST.
          • Easily extendable with help of Overrides.
    • What can be said about recast can probably also be said, to a lesser degree, about ast-types

  • https://github.com/codemod-js/codemod
    • codemod rewrites JavaScript and TypeScript using babel plugins

  • https://github.com/unjs/magicast
    • Programmatically modify JavaScript and TypeScript source codes with a simplified, elegant and familiar syntax powered by recast and babel.

estools + related

Babel

semantic / tree-sitter + related

Shift AST

swc

esbuild

  • https://github.com/evanw/esbuild
    • An extremely fast bundler for the web

    • Written in Golang
    • https://esbuild.github.io/
      • https://esbuild.github.io/faq/#upcoming-roadmap
        • I am not planning to include these features in esbuild's core itself:

          • ..snip..
          • An API for custom AST manipulation
          • ..snip..

          I hope that the extensibility points I'm adding to esbuild (plugins and the API) will make esbuild useful to include as part of more customized build workflows, but I'm not intending or expecting these extensibility points to cover all use cases.

          • https://esbuild.github.io/plugins/
          • https://esbuild.github.io/api/
          • https://news.ycombinator.com/item?id=29004200
            • ESBuild does not support any AST transforms directly

              You can add it via plugins, but it's a serious limitation for a project like Next.js, which requires these types of transforms

              You also end up with diminishing returns the more plugins you add to esbuild, and I imagine it's worse with JS plugins than with Go-based ones; nonetheless, you have zero access to it directly

            • It is trivial to write extensions for esbuild. We've written extensive plugins to perform ast transformations that all run, collectively, in under 0.5 seconds. Make a plugin, add acorn and escodegen.

              • This implies that the plugins are doing the AST transformation outside of esbuild itself (likely still running in JS), so wouldn't really benefit from the fact that esbuild is written in golang like I was hoping.
          • evanw/esbuild#2172
            • Forking esbuild to build an AST plugin tool

            • The internal AST is not designed for this use case at all, and it’s not a use case that I’m going to spend time supporting (so I’m not going to spend time documenting exactly how to do it). I recommend using some other tool if you want to do AST-level stuff, especially because keeping a hack like this working over time as esbuild changes might be a big pain for you.

            • If you really want to do this with esbuild, know that the AST is not cleanly abstracted and is only intended for use with esbuild (e.g. uses a lot of internal data structures, has implicit invariants regarding symbols and tree shaking, does some weird things for performance reasons).

Source Maps

Visualisation/etc

Browser Based Code Editors / IDEs

In addition to the links directly below, also make sure to check out the various online REPL/playground tools linked under various other parts of this page too (eg. babel, swc, etc):

  • https://github.com/microsoft/TypeScript-Website/tree/v2/packages/playground
    • This is the JS tooling which powers the https://www.typescriptlang.org/play/

    • It is more or less vanilla DOM-oriented JavaScript with as few dependencies as possible. Originally based on the work by Artem Tyurin but now it's diverged far from that fork.

    • https://github.com/microsoft/TypeScript-Website/tree/v2/packages/sandbox
      • The TypeScript Sandbox is the editor part of the TypeScript Playground. It's effectively an opinionated fork of monaco-typescript with extra extension points so that projects like the TypeScript Playground can exist.

    • https://github.com/microsoft/TypeScript-Playground-Samples
      • Examples of TypeScript Playground Plugins for you to work from

      • This is a series of example plugins, which are extremely well documented and aim to give you samples to build from depending on what you want to build.

        • TS Compiler API: Uses @typescript/vfs to set up a TypeScript project in the browser, and then displays all of the top-level functions as AST nodes in the sidebar.
        • TS Transformers Demo: Uses a custom TypeScript transformer when emitting JavaScript from the current file in the Playground.
        • Using a Web-ish npm Dependency: Uses a dependency which isn't entirely optimised for running in a web page, but whose dependency tree isn't so big that this becomes an issue either
        • Presenting Information Inline: Using a fraction of the extensive Monaco API (monaco is the text editor at the core of the Playground) to showcase what parts of a TypeScript file would be removed by a transpiler to make it a JS file.

CodeMirror

  • https://codemirror.net/
    • CodeMirror is a code editor component for the web. It can be used in websites to implement a text input field with support for many editing features, and has a rich programming interface to allow further extension.

    • CodeMirror is open source under a permissive license (MIT).

    • A full parser package, often with language-specific integration and extension code, exists for the following languages

    • There is also a collection of CodeMirror 5 modes that can be used, and a list of community-maintained language packages. If your language is not listed above, you may still find a solution there.

    • https://codemirror.net/docs/community/
      • Community Packages

      • This page lists CodeMirror-related packages maintained by the wider community.

  • https://github.com/codemirror/dev
    • Development repository for the CodeMirror editor project

    • This is the central repository for CodeMirror. It holds the bug tracker and development scripts.

      If you want to use CodeMirror, install the separate packages from npm, and ignore the contents of this repository. If you want to develop on CodeMirror, this repository provides scripts to install and work with the various packages.

  • https://github.com/uiwjs/react-codemirror

monaco-editor

Obfuscation / Deobfuscation

Variable Name Mangling

Symbolic / Concolic Execution

  • https://en.wikipedia.org/wiki/Symbolic_execution
    • In computer science, symbolic execution (also symbolic evaluation or symbex) is a means of analyzing a program to determine what inputs cause each part of a program to execute. An interpreter follows the program, assuming symbolic values for inputs rather than obtaining actual inputs as normal execution of the program would. It thus arrives at expressions in terms of those symbols for expressions and variables in the program, and constraints in terms of those symbols for the possible outcomes of each conditional branch. Finally, the possible inputs that trigger a branch can be determined by solving the constraints.

    • https://en.wikipedia.org/wiki/Symbolic_execution#Tools
    • https://en.wikipedia.org/wiki/Symbolic_execution#See_also
      • Abstract interpretation

      • Symbolic simulation

      • Symbolic computation

      • Concolic testing

      • Control-flow graph

      • Dynamic recompilation

  • https://en.wikipedia.org/wiki/Concolic_testing
    • Concolic testing (a portmanteau of concrete and symbolic, also known as dynamic symbolic execution) is a hybrid software verification technique that performs symbolic execution, a classical technique that treats program variables as symbolic variables, along a concrete execution (testing on particular inputs) path. Symbolic execution is used in conjunction with an automated theorem prover or constraint solver based on constraint logic programming to generate new concrete inputs (test cases) with the aim of maximizing code coverage. Its main focus is finding bugs in real-world software, rather than demonstrating program correctness.

    • Implementation of traditional symbolic execution based testing requires the implementation of a full-fledged symbolic interpreter for a programming language. Concolic testing implementors noticed that implementation of full-fledged symbolic execution can be avoided if symbolic execution can be piggy-backed with the normal execution of a program through instrumentation. This idea of simplifying implementation of symbolic execution gave birth to concolic testing.

    • An important reason for the rise of concolic testing (and more generally, symbolic-execution based analysis of programs) in the decade since it was introduced in 2005 is the dramatic improvement in the efficiency and expressive power of SMT Solvers. The key technical developments that led to the rapid development of SMT solvers include combination of theories, lazy solving, DPLL(T) and the huge improvements in the speed of SAT solvers. SMT solvers that are particularly tuned for concolic testing include Z3, STP, Z3str2, and Boolector.

      • https://en.wikipedia.org/wiki/Satisfiability_modulo_theories
        • In computer science and mathematical logic, satisfiability modulo theories (SMT) is the problem of determining whether a mathematical formula is satisfiable. It generalizes the Boolean satisfiability problem (SAT) to more complex formulas involving real numbers, integers, and/or various data structures such as lists, arrays, bit vectors, and strings. The name is derived from the fact that these expressions are interpreted within ("modulo") a certain formal theory in first-order logic with equality (often disallowing quantifiers). SMT solvers are tools that aim to solve the SMT problem for a practical subset of inputs. SMT solvers such as Z3 and cvc5 have been used as a building block for a wide range of applications across computer science, including in automated theorem proving, program analysis, program verification, and software testing.

      • https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#Algorithms_for_solving_SAT
    • https://en.wikipedia.org/wiki/Concolic_testing#Algorithm
      • Essentially, a concolic testing algorithm operates as follows:

        • Classify a particular set of variables as input variables. These variables will be treated as symbolic variables during symbolic execution. All other variables will be treated as concrete values.
        • Instrument the program so that each operation which may affect a symbolic variable value or a path condition is logged to a trace file, as well as any error that occurs.
        • Choose an arbitrary input to begin with.
        • Execute the program.
        • Symbolically re-execute the program on the trace, generating a set of symbolic constraints (including path conditions).
        • Negate the last path condition not already negated in order to visit a new execution path. If there is no such path condition, the algorithm terminates.
        • Invoke an automated satisfiability solver on the new set of path conditions to generate a new input. If there is no input satisfying the constraints, return to step 6 to try the next execution path.
        • Return to step 4.

        There are a few complications to the above procedure:

        • The algorithm performs a depth-first search over an implicit tree of possible execution paths. In practice programs may have very large or infinite path trees – a common example is testing data structures that have an unbounded size or length. To prevent spending too much time on one small area of the program, the search may be depth-limited (bounded).
        • Symbolic execution and automated theorem provers have limitations on the classes of constraints they can represent and solve. For example, a theorem prover based on linear arithmetic will be unable to cope with the nonlinear path condition xy = 6. Any time that such constraints arise, the symbolic execution may substitute the current concrete value of one of the variables to simplify the problem. An important part of the design of a concolic testing system is selecting a symbolic representation precise enough to represent the constraints of interest.
    • https://en.wikipedia.org/wiki/Concolic_testing#Tools
      • Jalangi is an open-source concolic testing and symbolic execution tool for JavaScript. Jalangi supports integers and strings.
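The algorithm above can be sketched in plain JavaScript. This is a toy, not how Jalangi or any real tool works: the function under test is hand-instrumented via closures, and a brute-force integer search stands in for the SMT solver; real systems record symbolic path conditions through automatic instrumentation and hand them to a solver like Z3.

```javascript
// Toy concolic loop over a hand-instrumented function. Each branch logs its
// predicate and outcome (the "path condition") into a trace.
function instrumented(x, trace) {
  const branch = (pred, tag) => {
    const taken = pred(x);
    trace.push({ pred, taken, tag });
    return taken;
  };
  if (branch(v => v > 100, 'x>100')) {
    if (branch(v => v % 7 === 0, 'x%7==0')) return 'bug';
    return 'a';
  }
  return 'b';
}

// Stand-in "solver": brute-force search for any input satisfying all constraints.
function solve(constraints) {
  for (let x = -1000; x <= 1000; x++) {
    if (constraints.every(c => c(x))) return x;
  }
  return null; // unsatisfiable within the search bounds
}

function concolic(seed) {
  const seen = new Set();   // negated path prefixes already attempted
  const worklist = [seed];
  const outputs = new Set();
  while (worklist.length > 0) {
    const input = worklist.pop();
    const trace = [];
    outputs.add(instrumented(input, trace)); // concrete execution (step 4)
    for (let i = trace.length - 1; i >= 0; i--) {
      // Keep the path prefix, negate the i-th condition, solve (steps 5-7).
      const key = trace.slice(0, i + 1).map(t => `${t.tag}:${t.taken}`).join(',');
      if (seen.has(key)) continue;
      seen.add(key);
      const prefix = trace.slice(0, i).map(t => (x) => t.pred(x) === t.taken);
      const flipped = (x) => trace[i].pred(x) !== trace[i].taken;
      const next = solve([...prefix, flipped]);
      if (next !== null) worklist.push(next); // new concrete input (step 8)
    }
  }
  return outputs;
}

console.log([...concolic(0)].sort()); // [ 'a', 'b', 'bug' ]
```

Starting from the arbitrary seed `0`, the loop discovers inputs reaching all three paths, including the `x > 100 && x % 7 === 0` path that random testing would rarely hit.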
  • https://github.com/Z3Prover/z3
    • The Z3 Theorem Prover

    • https://github.com/Z3Prover/z3/wiki
      • Z3 is an SMT solver and supports the SMTLIB format.

        • https://smtlib.cs.uiowa.edu/
          • SMT-LIB is an international initiative aimed at facilitating research and development in Satisfiability Modulo Theories (SMT).

          • Documents describing the SMT-LIB input/output language for SMT solvers and its semantics;

          • etc
    • https://microsoft.github.io/z3guide/
      • Online Z3 Guide

      • https://github.com/microsoft/z3guide
        • Tutorials and courses for Z3

        • https://microsoft.github.io/z3guide/docs/logic/intro/
          • Introduction Z3 is a state-of-the art theorem prover from Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories. Z3 offers a compelling match for software analysis and verification tools, since several common software constructs map directly into supported theories.

            The main objective of the tutorial is to introduce the reader on how to use Z3 effectively for logical modeling and solving. The tutorial provides some general background on logical modeling, but we have to defer a full introduction to first-order logic and decision procedures to text-books in order to develop an in depth understanding of the underlying concepts. To clarify: a deep understanding of logical modeling is not necessarily required to understand this tutorial and modeling with Z3, but it is necessary to understand for writing complex models.

        • https://microsoft.github.io/z3guide/programming/Z3%20JavaScript%20Examples/
          • Z3 JavaScript The Z3 distribution comes with TypeScript (and therefore JavaScript) bindings for Z3. In the following we give a few examples of using Z3 through these bindings. You can run and modify the examples locally in your browser.
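As a concrete taste of the SMT-LIB format mentioned above, here is a minimal script (runnable with `z3 file.smt2`) asking whether an integer x exists with 2x = 10 and x > 3:

```
; Is there an integer x with 2*x = 10 and x > 3?
(declare-const x Int)
(assert (= (* 2 x) 10))
(assert (> x 3))
(check-sat)  ; prints: sat
(get-model)  ; prints a model with x = 5
```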

  • https://github.com/Samsung/jalangi2
    • Dynamic analysis framework for JavaScript

    • Jalangi2 is a framework for writing dynamic analyses for JavaScript. Jalangi1 is still available at https://github.com/SRA-SiliconValley/jalangi, but we no longer plan to develop it. Jalangi2 does not support the record/replay feature of Jalangi1. In the Jalangi2 distribution you will find several analyses:

      • an analysis to track NaNs.
      • an analysis to check if an undefined is concatenated to a string.
      • Memory analysis: a memory-profiler for JavaScript and HTML5.
      • DLint: a dynamic checker for JavaScript bad coding practices.
      • JITProf: a dynamic JIT-unfriendly code snippet detection tool.
      • analysisCallbackTemplate.js: a template for writing a dynamic analysis.
      • and more ...

      See our tutorial slides for a detailed overview of Jalangi and some client analyses.

    • https://github.com/Samsung/jalangi2#usage
      • Usage

      • Analysis in node.js with on-the-fly instrumentation

      • Analysis in node.js with explicit one-file-at-a-time offline instrumentation

      • Analysis in a browser using a proxy and on-the-fly instrumentation

  • https://github.com/SRA-SiliconValley/jalangi
    • This repository has been archived by the owner on Dec 9, 2017. It is now read-only.

    • We encourage you to switch to Jalangi2 available at https://github.com/Samsung/jalangi2. Jalangi2 is a framework for writing dynamic analyses for JavaScript. Jalangi2 does not support the record/replay feature of Jalangi1. Jalangi1 is still available from this website, but we no longer plan to develop it.

    • Jalangi is a framework for writing heavy-weight dynamic analyses for JavaScript. Jalangi provides two modes for dynamic program analysis: an online mode (a.k.a. direct or in-browser analysis mode) and an offline mode (a.k.a. record-replay analysis mode). In both modes, Jalangi instruments the program-under-analysis to insert callbacks to methods defined in Jalangi. An analysis writer implements these methods to perform custom dynamic program analysis. In the online mode, Jalangi performs analysis during the execution of the program. An analysis in online mode can use shadow memory to attach meta information to every memory location. The offline mode of Jalangi incorporates two key techniques: 1) selective record-replay, a technique which enables recording and faithfully replaying a user-selected part of the program, and 2) shadow values and shadow execution, which enable easy implementation of heavy-weight dynamic analyses. Shadow values allow an analysis to attach meta information to every value. In the distribution you will find several analyses:

      • concolic testing,
      • an analysis to track origins of nulls and undefined,
      • an analysis to infer likely types of objects fields and functions,
      • an analysis to profile object allocation and usage,
      • a simple form of taint analysis,
      • an experimental pure symbolic execution engine (currently undocumented)

Profiling

Unsorted

  • https://github.com/bytecodealliance/ComponentizeJS
    • ESM -> WebAssembly Component creator, via a SpiderMonkey JS engine embedding

    • Provides a Mozilla SpiderMonkey embedding that takes as input a JavaScript source file and a WebAssembly Component WIT World, and outputs a WebAssembly Component binary with the same interface.

    • https://bytecodealliance.org/articles/making-javascript-run-fast-on-webassembly
      • Making JavaScript run fast on WebAssembly

      • We should be clear here—if you’re running JavaScript in the browser, it still makes the most sense to simply deploy JS. The JS engines within the browsers are highly tuned to run the JS that gets shipped to them.

    • https://github.com/bytecodealliance/wizer
      • The WebAssembly Pre-Initializer Don't wait for your Wasm module to initialize itself, pre-initialize it! Wizer instantiates your WebAssembly module, executes its initialization function, and then snapshots the initialized state out into a new WebAssembly module. Now you can use this new, pre-initialized WebAssembly module to hit the ground running, without making your users wait for that first-time set up code to complete.

        The improvements to start-up latency you can expect will depend on how much initialization work your WebAssembly module needs to do before it's ready. Some initial benchmarking shows between 1.35 and 6.00 times faster instantiation and initialization with Wizer, depending on the workload.

  • https://wingolog.org/archives/2022/08/18/just-in-time-code-generation-within-webassembly
    • just-in-time code generation within webassembly

  • https://github.com/WebAssembly/wabt
    • The WebAssembly Binary Toolkit

    • WABT (we pronounce it "wabbit") is a suite of tools for WebAssembly, including:

      • wat2wasm: translate from WebAssembly text format to the WebAssembly binary format
      • wasm2wat: the inverse of wat2wasm, translate from the binary format back to the text format (also known as a .wat)
      • wasm-objdump: print information about a wasm binary. Similar to objdump.
      • wasm-interp: decode and run a WebAssembly binary file using a stack-based interpreter
      • wasm-decompile: decompile a wasm binary into readable C-like syntax.
      • wat-desugar: parse .wat text form as supported by the spec interpreter (s-expressions, flat syntax, or mixed) and print "canonical" flat format
      • wasm2c: convert a WebAssembly binary file to a C source and header
      • wasm-strip: remove sections of a WebAssembly binary file
      • wasm-validate: validate a file in the WebAssembly binary format
      • wast2json: convert a file in the wasm spec test format to a JSON file and associated wasm binary files
      • wasm-stats: output stats for a module
      • spectest-interp: read a Spectest JSON file, and run its tests in the interpreter

      These tools are intended for use in (or for development of) toolchains or other systems that want to manipulate WebAssembly files. Unlike the WebAssembly spec interpreter (which is written to be as simple, declarative and "speccy" as possible), they are written in C/C++ and designed for easier integration into other systems. Unlike Binaryen these tools do not aim to provide an optimization platform or a higher-level compiler target; instead they aim for full fidelity and compliance with the spec (e.g. 1:1 round-trips with no changes to instructions).
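As a small illustration of what these tools operate on, the WebAssembly text format that `wat2wasm` consumes looks like this (a minimal module exporting a single add function):

```
;; add.wat — compile with: wat2wasm add.wat -o add.wasm
(module
  (func $add (param $a i32) (param $b i32) (result i32)
    local.get $a
    local.get $b
    i32.add)
  (export "add" (func $add)))
```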

My ChatGPT Research / Conversations

These are private chat links, so won't work for others, and are included here only for my reference:

See Also

My Other Related Deepdive Gists and Projects

Chrome DevTools 'Sources' Extension

Originally articulated here:

  • j4k0xb/webcrack#29
    • add Chrome DevTools extension that allows the web IDE to be used within DevTools

  • pionxzh/wakaru#76
    • add Chrome DevTools extension that allows the web IDE to be used within DevTools

Overview

The following is as I originally wrote it on the above issues, copied here for posterity:

First off, I LOVE the new v2.11.0 update, and the changes to the web IDE to use monaco to support 'references', 'go to declaration', etc.

This issue is about an idea I've had and wanted to work on myself for a while, but haven't got around to yet. Often when exploring web apps, I make pretty heavy use of Chrome DevTools' 'Search' / 'Sources' / 'Network' / etc tabs, the debugger, and the console / utilities APIs. While there have been some nice improvements to the 'Sources' tab over the years (pretty printing, syntax highlighting, code folding, etc), one area I have long wished it supported is 'references' / 'go to definition' / similar.

A thought I had in this space is that, while I obviously can't outright replace the 'Sources' tab (which I believe is based on CodeMirror), it should be possible to create a Chrome DevTools Extension that adds a new tab/panel doing something similar to the current 'Sources' tab, but using monaco as its base, enabling features like 'references' / 'go to definition' / etc within that.

Useful Chrome Extension APIs

Overview of extending DevTools:

Some of the Chrome Extension API's that would be useful/enable this:

Then there are also all of the 'standard' Chrome Extension APIs as well, which can do a lot of cool stuff:

A few of which could be useful for this feature:

And some that are a little more esoteric, but might still be interesting/useful:

Fingerprinting Minified JavaScript Libraries

Table of Contents

See Also

Initial ChatGPT Conversation / Notes

Source: https://chat.openai.com/c/d9b7b64f-aa93-474e-939f-79e376e6d375

Me:

Are there any tools that allow me to identify potentially minified javascript/etc libraries within a webpack or similar bundle?

ChatGPT:

Identifying and fingerprinting minified JavaScript modules within a webpack bundle, regardless of their minification method, can be a challenging task due to the nature of minification and bundling. However, there are approaches and tools you can use to achieve this:

  1. Custom Fingerprinting Script: Given your skills in IT and software engineering, you might consider writing a custom script to analyze the bundles. This script can use heuristics like function and variable name patterns, known code snippets, or specific algorithm implementations to identify libraries. The script can parse the bundle, apply these heuristics, and attempt to match segments of the code with known library patterns.

  2. Entropy Analysis: Minified code typically has higher entropy than non-minified code. By analyzing the entropy of different sections of your bundle, you might be able to identify boundaries between different libraries or modules.

  3. AST Analysis: Abstract Syntax Tree (AST) analysis involves parsing the JavaScript code into a tree structure, which can then be analyzed. You could use AST to look for specific patterns that match known library code structures.

  4. Binary Analysis Tools: Tools like Binwalk, which are traditionally used for analyzing binary files, can sometimes be repurposed for script analysis. They work by scanning files for embedded files and executable code.

  5. Regular Expression Matching: This involves creating regular expressions that match known patterns from popular libraries. While this approach can be time-consuming and may require regular updates, it can be effective for well-known and widely used libraries.

  6. Machine Learning Approaches: With advancements in machine learning, you can train a model to recognize patterns in minified code. This requires a substantial dataset of minified and unminified code to train the model.

  7. Community Databases: Sometimes, the community might maintain databases of fingerprints of popular libraries. These can be used to match against your bundle.

  8. Source Map Reversal: If source maps are available, you can reverse-engineer them to get the original source code, making it easier to identify the libraries.

Remember, the effectiveness of these methods depends on various factors, including the degree of minification, the presence of source maps, and the specific bundling process used. Additionally, given the potential complexity, this might be a time-consuming task, but your background in software engineering and ethical hacking should provide a solid foundation for tackling this challenge.
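Idea 2 (entropy analysis) is the easiest of these to prototype. A minimal sketch, assuming only that per-character Shannon entropy is a useful proxy (window size and thresholds here are arbitrary choices, not established values):

```javascript
// Shannon entropy in bits per character. Minified code tends to score higher
// than hand-written code because it has fewer repeated whitespace/keyword runs.
function entropy(text) {
  const counts = new Map();
  for (const ch of text) counts.set(ch, (counts.get(ch) || 0) + 1);
  let bits = 0;
  for (const n of counts.values()) {
    const p = n / text.length;
    bits -= p * Math.log2(p);
  }
  return bits;
}

console.log(entropy('aaaa')); // 0 — a single repeated symbol carries no information
console.log(entropy('abcd')); // 2 — four equiprobable symbols need 2 bits each

// Sliding-window scan over a bundle; spikes or drops in the profile can hint
// at boundaries between differently-processed modules.
function entropyProfile(bundle, windowSize = 512) {
  const profile = [];
  for (let i = 0; i + windowSize <= bundle.length; i += windowSize) {
    profile.push(entropy(bundle.slice(i, i + windowSize)));
  }
  return profile;
}
```

In practice you would plot `entropyProfile(bundleSource)` and inspect it rather than rely on a fixed cutoff.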

Me:

Let's work with the following ideas:

Custom Fingerprinting Script: Given your skills in IT and software engineering, you might consider writing a custom script to analyze the bundles. This script can use heuristics like function and variable name patterns, known code snippets, or specific algorithm implementations to identify libraries. The script can parse the bundle, apply these heuristics, and attempt to match segments of the code with known library patterns.

AST Analysis: Abstract Syntax Tree (AST) analysis involves parsing the JavaScript code into a tree structure, which can then be analyzed. You could use AST to look for specific patterns that match known library code structures.

Community Databases: Sometimes, the community might maintain databases of fingerprints of popular libraries. These can be used to match against your bundle.

--

Let's assume that there isn't currently a community database, but that once we figure out an effective way of creating and matching these fingerprints, that we could open source one.

Since we want this to work regardless of whether the library code has been minified or not, we need to take that into account.

How should we approach this?

ChatGPT:

Creating a system to fingerprint JavaScript libraries within a webpack bundle, especially considering the challenges posed by minification, is a multi-step process. The goal is to create a system that can analyze JavaScript code, extract identifiable patterns, and match these against a database of known library fingerprints. Here's a proposed approach:

Establishing a Fingerprinting Methodology

  1. Identify Unique Patterns: Begin by identifying patterns that remain consistent in a library, even after minification. These could include:

    • Unique function signatures.
    • Specific algorithm implementations.
    • Characteristic variable naming schemes that survive minification.
    • Constant values or strings used by the library.
  2. Heuristics for Detection: Develop heuristics to detect these patterns, such as specific function name and parameter combinations, unique method chaining styles, or specific API calls.
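A sketch of what such detection heuristics could look like as code, keyed on constant strings that tend to survive minification. The patterns and library names below are illustrative assumptions, not vetted fingerprints:

```javascript
// Sketch: match a bundle against a small table of regex "fingerprints".
// A real table would target strings known to survive minification
// (error-decoder URLs, version banners, characteristic internal constants).
const FINGERPRINTS = [
  // Illustrative guesses only:
  { library: 'react', pattern: /reactjs\.org\/docs\/error-decoder\.html/ },
  { library: 'lodash', pattern: /__lodash_hash_undefined__/ },
  { library: 'jquery', pattern: /jQuery v?\d+\.\d+\.\d+/ },
];

function detectLibraries(bundleSource) {
  return FINGERPRINTS.filter(({ pattern }) => pattern.test(bundleSource))
    .map(({ library }) => library);
}
```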

Abstract Syntax Tree (AST) Analysis

  1. Parse JavaScript Bundle: Utilize tools like Esprima, Acorn, or Babel to convert the JavaScript code into an AST, enabling effective analysis of code structure and content.

  2. Pattern Recognition in AST: Implement algorithms for matching parsed code against defined fingerprint criteria, focusing on structural and syntactical patterns and heuristics established earlier. This involves looking for specific node types (like function declarations, variable declarations, etc.) that match the identified patterns.

  3. Handling Minification: Design heuristics and pattern recognition to be resilient to common minification techniques like variable renaming and whitespace removal.

Building and Maintaining the Fingerprint Database

  1. Collecting Fingerprints: Start with popular libraries, analyzing unminified sources to establish baseline fingerprints.

  2. Database Structure and Contributions: Organize the database with library entries, fingerprints, and metadata, and set up a system for validating and integrating community contributions.

Matching Mechanism and Script Integration

  1. Matching Algorithm: Develop an algorithm to compare AST nodes against database fingerprints.

  2. Handling Ambiguities: Design the system to handle cases where a definitive match isn't possible. This could involve ranking potential matches or providing a probability score.

  3. Script Functionality: Ensure the script can process a JavaScript bundle, parse it, and match it against fingerprints. Test the matching algorithm with both minified and unminified versions of known libraries to ensure accuracy.
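For the ranking/probability-score idea, set similarity over extracted fingerprints is a simple baseline. A sketch using Jaccard similarity; the 0.3 threshold is an arbitrary assumption:

```javascript
// Sketch: rank candidate libraries by Jaccard similarity between the
// fingerprints extracted from a bundle segment and each library's known
// fingerprint set, instead of demanding an exact match.
function jaccard(a, b) {
  const setA = new Set(a);
  const setB = new Set(b);
  let intersection = 0;
  for (const item of setA) if (setB.has(item)) intersection++;
  const union = setA.size + setB.size - intersection;
  return union === 0 ? 0 : intersection / union;
}

function rankMatches(segmentFingerprints, database, threshold = 0.3) {
  return Object.entries(database)
    .map(([library, fps]) => ({ library, score: jaccard(segmentFingerprints, fps) }))
    .filter((m) => m.score >= threshold)
    .sort((x, y) => y.score - x.score);
}
```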

Handling Minification Variations

  1. Minification Resilience: Ensure your fingerprints are resilient to common minification techniques, like variable renaming and whitespace removal.

  2. Testing Across Minifiers: Test your system with code minified by different tools (e.g., UglifyJS, Terser) to ensure reliability.

User Interface and Accessibility

  1. Ease of Use: Develop a user-friendly interface for both contributing to the database and using the script to identify libraries.

  2. Integration with Development Tools: Consider how your tool could integrate with existing development workflows, such as bundlers or IDEs.

Open Sourcing and Community Engagement

  1. Open Source the Tool: Release the tool and database as open source, encouraging community contributions of new fingerprints and algorithm improvements.

  2. Documentation and Support Channels: Provide thorough documentation and establish channels for community support, feedback, and contributions.

Continuous Improvement and Adaptation

  1. Regular Updates and Algorithm Refinement: Keep the database up-to-date with new library versions and continuously refine matching algorithms based on user feedback and new JavaScript development trends.

  2. Performance Optimization: Optimize for performance, especially when analyzing large bundles, using efficient AST traversal and optimized pattern matching algorithms.

Legal and Ethical Considerations

  1. Respecting Copyright: Ensure that the use of library fingerprints complies with legal and ethical standards, respecting the intellectual property rights of library authors.

This project aligns well with your background in IT and software engineering. The main challenge lies in the nuances of JavaScript minification and the diverse coding patterns across different libraries, but a focused and iterative approach can lead to a robust and useful tool for the community.

Thoughts / comments as I've articulated them elsewhere

On j4k0xb/webcrack

Issue 21: rename short identifiers

When I was exploring this concept in my own deobfuscation PoC project, I was exploring how to make the variable names unique + have them add a sort of semantic information about their source/scope.

Eg. if it was an arg to a function, it might be arg_1. Or potentially if the function is foo, it might end up as foo_arg_1
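A minimal sketch of that naming scheme; the `makeRenamer` helper and its prefix forms are hypothetical, matching the `arg_1` / `foo_arg_1` examples above:

```javascript
// Sketch: generate unique names that encode a variable's source/scope.
// `kind` would come from AST analysis (arg, func, const, ...); the optional
// `enclosingName` prepends the containing function's name when known.
function makeRenamer() {
  const counters = new Map();
  return function rename(kind, enclosingName) {
    const prefix = enclosingName ? `${enclosingName}_${kind}` : kind;
    const n = (counters.get(prefix) || 0) + 1;
    counters.set(prefix, n);
    return `${prefix}_${n}`;
  };
}
```

So `rename('arg')` yields `arg_1`, a second call yields `arg_2`, and `rename('arg', 'foo')` yields `foo_arg_1`.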

It looks like most of the PoC code I was playing with was local/in a pretty messy/hacky state, but I did find a link in it to an online REPL I was playing around with some of it in. Not sure how outdated that code is, but it might be useful:

There were a number of different AST parsers I was playing around with, but I think that this babel code may have been the latest (not sure which one):

Within those files, I believe the functions getNameFromPath, getPrefix (and the older commented-out functions getTypePrefix, getPrefix) are probably the most relevant.


Edit: Came across this in another issue here:

I published my decompiler that I used in the above example. I think it might be a good reference for adding this feature. https://github.com/e9x/krunker-decompiler

Originally posted by @e9x in j4k0xb/webcrack#10 (comment)

And looking at it, its libRenameVars code seems to be taking a vaguely similar approach to how I was looking at doing things in my original PoC that I described above:

  • https://github.com/e9x/krunker-decompiler/blob/master/src/libRenameVars.ts
    • getVarPrefix will set a prefix based on the type (eg. func, arg, Class, imported, var)
    • getName generates a new variable name that does not conflict with existing names or reserved keywords
    • generateName generates a new name for a variable considering its scope, type, and the context in which it is used (e.g., whether it's a class, a function variable, etc.). It employs various AST manipulations to ensure the generated name is appropriate and does not conflict with existing names.

A more generalised summary/overview (via ChatGPT):

Certainly, the code implements a sophisticated algorithm for renaming variables in a JavaScript program, adhering to several high-level rules and strategies:

  1. Type-Specific Prefixing:

    • The getVarPrefix function assigns specific prefixes to variable names based on their type (e.g., "func" for function names, "arg" for parameters). This approach helps in identifying the role of a variable just by its name.
  2. Avoiding Reserved Keywords:

    • The script includes a comprehensive list of reserved JavaScript keywords. If a variable's name matches a reserved keyword, it is prefixed with an underscore to prevent syntax errors.
  3. Unique Naming with Context Consideration:

    • The generateName function ensures that each variable gets a unique name that doesn't conflict with other variables in its scope. It also considers the context in which a variable is used. For example, if a variable is part of a class, it may receive a name that reflects this context, using pascalCase or camelCase as appropriate.
  4. Handling Special Cases:

    • The script contains logic to handle special cases, such as variables that are function expressions (isFuncVar) or class instances (isClass). This affects the naming convention applied to these variables.
  5. Randomness with Mersenne Twister:

    • A Mersenne Twister is used to generate random elements for variable names, ensuring that the names are not only unique within the scope of the program but also less predictable.
  6. AST-Based Renaming:

    • The script analyzes the Abstract Syntax Tree (AST) of the program to understand the structure and scope of variables. This analysis guides the renaming process, ensuring that the new names are consistent with the variable's usage and position in the code.
  7. Scope Analysis with ESLint Scope:

    • By leveraging eslint-scope, the script can accurately determine the scope of each variable. This is crucial in avoiding name collisions and ensuring that the renaming respects lexical scoping rules in JavaScript.
  8. Consideration for Exported and Assigned Variables:

    • The script pays special attention to variables that are exported or assigned in specific ways (e.g., through Object.defineProperty). It ensures that these variables receive names that are appropriate for their roles.

In summary, the script uses a combination of type-based naming conventions, context consideration, randomness, AST analysis, and scope analysis to systematically rename variables in a JavaScript program. This approach aims to enhance readability, avoid conflicts, and maintain the logical structure of the program.

Originally posted by @0xdevalias in j4k0xb/webcrack#21 (comment)


And for an even cooler/more extreme version of improving variable naming; I just came across this blog post / project from @jehna that makes use of webcrack + ChatGPT for variable renaming:

  • https://thejunkland.com/blog/using-llms-to-reverse-javascript-minification.html
    • Using LLMs to reverse JavaScript variable name minification: This blog introduces a novel way to reverse minified Javascript using large language models (LLMs) like ChatGPT and llama2 while keeping the code semantically intact. The code is open source and available at the Github project Humanify.

  • https://github.com/jehna/humanify
    • Un-minify Javascript code using ChatGPT

    • This tool uses large language models (like ChatGPT & llama2) and other tools to un-minify Javascript code. Note that LLMs don't perform any structural changes – they only provide hints to rename variables and functions. The heavy lifting is done by Babel at the AST level to ensure code stays 1-1 equivalent.

Originally posted by @0xdevalias in j4k0xb/webcrack#21 (comment)


I came across another tool today that seemed to have a start on implementing some 'smart rename' features:

Digging through the code led me to this:

There's also an issue there that seems to be exploring how to improve 'unmangling variable names' as well:

Which I wrote the following extra thoughts on:

I just finished up writing some thoughts/references for variable renaming on the webcrack repo, that could also be a useful idea for here. (see quotes below)

When I was exploring PoC ideas for my own project previously, I was looking to generate a file similar to the 'module map' that this project is using; but instead of just for the names of modules, I wanted to be able to use it to provide a 'variable name map'. Though because the specific variables used in webpack/etc can change between builds, my thought was that 'normalising' them to a 'known format' based on their context would make sense to do first.

That could then be later enhanced/expanded by being able to pre-process these 'variable name mappings' for various open source projects in a way that could then be applied 'automagically' without the end user needing to first create them.

It could also be enhanced by similar techniques such as what the humanify project does, by using LLMs/similar to generate suggested variable name mappings based on the code.

My personal ideal end goal for a feature like that would then allow me to use it within an IDE-like environment, where I can rename variables 'as I explore', knowing that the mappings/etc will be kept up to date.

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)

Originally posted by @0xdevalias in j4k0xb/webcrack#21 (comment)


Another link from my reference notes that I forgot to include earlier; my thoughts on how to rename otherwise unknown variables are based on similar concepts that are used in reverse engineering tools such as IDA:

  • https://hex-rays.com/blog/igors-tip-of-the-week-34-dummy-names/
    • In IDA’s disassembly, you may have often observed names that may look strange and cryptic on first sight: sub_73906D75, loc_40721B, off_40A27C and more. In IDA’s terminology, they’re called dummy names. They are used when a name is required by the assembly syntax but there is nothing suitable available

    • https://www.hex-rays.com/products/ida/support/idadoc/609.shtml
      • IDA Help: Names Representation

      • Dummy names are automatically generated by IDA. They are used to denote subroutines, program locations and data. Dummy names have various prefixes depending on the item type and value


And a few more I was looking at recently as well (that are sort of basically smart-rename):

  • https://binary.ninja/2023/09/15/3.5-expanded-universe.html#automatic-variable-naming
    • Automatic Variable Naming One easy way to improve decompilation output is to come up with better default names for variables. There’s a lot of possible defaults you could choose and a number of different strategies are seen throughout different reverse engineering tools. Prior to 3.5, Binary Ninja left variables named based on their origin. Stack variables were var_OFFSET, register-based variables were reg_COUNTER, and global data variables were (data_). While this scheme isn’t changing, we’re being much more intelligent about situations where additional information is available.

      For example, if a variable is passed to a function and a variable name is available, we can now make a much better guess for the variable name. This is most obvious in binaries with type libraries.

    • This isn’t the only style of default names. Binary Ninja also will name loop counters with simpler names like i, or j, k, etc (in the case of nested loops)

  • Vector35/binaryninja-api#2558

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)

Originally posted by @0xdevalias in j4k0xb/webcrack#21 (comment)

On pionxzh/wakaru

Issue 34: support un-mangle identifiers

I just finished up writing some thoughts/references for variable renaming on the webcrack repo, that could also be a useful idea for here. (see quotes below)

When I was exploring PoC ideas for my own project previously, I was looking to generate a file similar to the 'module map' that this project is using; but instead of just for the names of modules, I wanted to be able to use it to provide a 'variable name map'. Though because the specific variables used in webpack/etc can change between builds, my thought was that 'normalising' them to a 'known format' based on their context would make sense to do first.

That could then be later enhanced/expanded by being able to pre-process these 'variable name mappings' for various open source projects in a way that could then be applied 'automagically' without the end user needing to first create them.

It could also be enhanced by similar techniques such as what the humanify project does, by using LLMs/similar to generate suggested variable name mappings based on the code.

My personal ideal end goal for a feature like that would then allow me to use it within an IDE-like environment, where I can rename variables 'as I explore', knowing that the mappings/etc will be kept up to date.


When I was exploring this concept in my own deobfuscation PoC project, I was exploring how to make the variable names unique + have them add a sort of semantic information about their source/scope.

Eg. if it was an arg to a function, it might be arg_1. Or potentially if the function is foo, it might end up as foo_arg_1

It looks like most of the PoC code I was playing with was local/in a pretty messy/hacky state, but I did find a link in it to an online REPL I was playing around with some of it in. Not sure how outdated that code is, but it might be useful:

There were a number of different AST parsers I was playing around with, but I think that this babel code may have been the latest (not sure which one):

Within those files, I believe the functions getNameFromPath, getPrefix (and the older commented-out functions getTypePrefix, getPrefix) are probably the most relevant.


Edit: Came across this in another issue here:

I published my decompiler that I used in the above example. I think it might be a good reference for adding this feature. https://github.com/e9x/krunker-decompiler

Originally posted by @e9x in j4k0xb/webcrack#10 (comment)

And looking at it, its libRenameVars code seems to be taking a vaguely similar approach to how I was looking at doing things in my original PoC that I described above:

  • https://github.com/e9x/krunker-decompiler/blob/master/src/libRenameVars.ts
    • getVarPrefix will set a prefix based on the type (eg. func, arg, Class, imported, var)
    • getName generates a new variable name that does not conflict with existing names or reserved keywords
    • generateName generates a new name for a variable considering its scope, type, and the context in which it is used (e.g., whether it's a class, a function variable, etc.). It employs various AST manipulations to ensure the generated name is appropriate and does not conflict with existing names.

A more generalised summary/overview (via ChatGPT):

Certainly, the code implements a sophisticated algorithm for renaming variables in a JavaScript program, adhering to several high-level rules and strategies:

  1. Type-Specific Prefixing:

    • The getVarPrefix function assigns specific prefixes to variable names based on their type (e.g., "func" for function names, "arg" for parameters). This approach helps in identifying the role of a variable just by its name.
  2. Avoiding Reserved Keywords:

    • The script includes a comprehensive list of reserved JavaScript keywords. If a variable's name matches a reserved keyword, it is prefixed with an underscore to prevent syntax errors.
  3. Unique Naming with Context Consideration:

    • The generateName function ensures that each variable gets a unique name that doesn't conflict with other variables in its scope. It also considers the context in which a variable is used. For example, if a variable is part of a class, it may receive a name that reflects this context, using pascalCase or camelCase as appropriate.
  4. Handling Special Cases:

    • The script contains logic to handle special cases, such as variables that are function expressions (isFuncVar) or class instances (isClass). This affects the naming convention applied to these variables.
  5. Randomness with Mersenne Twister:

    • A Mersenne Twister is used to generate random elements for variable names, ensuring that the names are not only unique within the scope of the program but also less predictable.
  6. AST-Based Renaming:

    • The script analyzes the Abstract Syntax Tree (AST) of the program to understand the structure and scope of variables. This analysis guides the renaming process, ensuring that the new names are consistent with the variable's usage and position in the code.
  7. Scope Analysis with ESLint Scope:

    • By leveraging eslint-scope, the script can accurately determine the scope of each variable. This is crucial in avoiding name collisions and ensuring that the renaming respects lexical scoping rules in JavaScript.
  8. Consideration for Exported and Assigned Variables:

    • The script pays special attention to variables that are exported or assigned in specific ways (e.g., through Object.defineProperty). It ensures that these variables receive names that are appropriate for their roles.

In summary, the script uses a combination of type-based naming conventions, context consideration, randomness, AST analysis, and scope analysis to systematically rename variables in a JavaScript program. This approach aims to enhance readability, avoid conflicts, and maintain the logical structure of the program.

Originally posted by @0xdevalias in j4k0xb/webcrack#21 (comment)


And for an even cooler/more extreme version of improving variable naming; I just came across this blog post / project from @jehna that makes use of webcrack + ChatGPT for variable renaming:

  • https://thejunkland.com/blog/using-llms-to-reverse-javascript-minification.html
    • Using LLMs to reverse JavaScript variable name minification: This blog introduces a novel way to reverse minified Javascript using large language models (LLMs) like ChatGPT and llama2 while keeping the code semantically intact. The code is open source and available at the Github project Humanify.

  • https://github.com/jehna/humanify
    • Un-minify Javascript code using ChatGPT

    • This tool uses large language models (like ChatGPT & llama2) and other tools to un-minify Javascript code. Note that LLMs don't perform any structural changes – they only provide hints to rename variables and functions. The heavy lifting is done by Babel at the AST level to ensure code stays 1-1 equivalent.

Originally posted by @0xdevalias in j4k0xb/webcrack#21 (comment)

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)

For now, we have smart-rename that can guess the variable name based on the context. I would like to expand it to cover some other generic cases.

Linking to my smart-rename related issues to keep the contextual link here:

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)


Another link from my reference notes that I forgot to include earlier; my thoughts on how to rename otherwise unknown variables are based on similar concepts that are used in reverse engineering tools such as IDA:

  • https://hex-rays.com/blog/igors-tip-of-the-week-34-dummy-names/
    • In IDA’s disassembly, you may have often observed names that may look strange and cryptic on first sight: sub_73906D75, loc_40721B, off_40A27C and more. In IDA’s terminology, they’re called dummy names. They are used when a name is required by the assembly syntax but there is nothing suitable available

    • https://www.hex-rays.com/products/ida/support/idadoc/609.shtml
      • IDA Help: Names Representation

      • Dummy names are automatically generated by IDA. They are used to denote subroutines, program locations and data. Dummy names have various prefixes depending on the item type and value

Originally posted by @0xdevalias in j4k0xb/webcrack#21 (comment)


And a few more I was looking at recently as well (that are sort of basically smart-rename):

  • https://binary.ninja/2023/09/15/3.5-expanded-universe.html#automatic-variable-naming
    • Automatic Variable Naming One easy way to improve decompilation output is to come up with better default names for variables. There’s a lot of possible defaults you could choose and a number of different strategies are seen throughout different reverse engineering tools. Prior to 3.5, Binary Ninja left variables named based on their origin. Stack variables were var_OFFSET, register-based variables were reg_COUNTER, and global data variables were (data_). While this scheme isn’t changing, we’re being much more intelligent about situations where additional information is available.

      For example, if a variable is passed to a function and a variable name is available, we can now make a much better guess for the variable name. This is most obvious in binaries with type libraries.

    • This isn’t the only style of default names. Binary Ninja also will name loop counters with simpler names like i, or j, k, etc (in the case of nested loops)

  • Vector35/binaryninja-api#2558

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)


Was looking closer at the sourcemap spec today, and the names field jumped out at me as potentially useful:

  • https://tc39.es/source-map-spec/#names
    • names: a list of symbol names used by the mappings entry

  • https://tc39.es/source-map-spec/#mappings
    • mappings: a string with the encoded mapping data (see 4.1 Mappings Structure)

  • https://tc39.es/source-map-spec/#mappings-structure
    • The mappings data is broken down as follows:

      • each group representing a line in the generated file is separated by a semicolon (;)
      • each segment is separated by a comma (,)
      • each segment is made up of 1, 4, or 5 variable length fields.
    • It then goes on to describe the segments in greater detail, but the specific part I was thinking could be relevant here would be this:
      • If present, the zero-based index into the names list associated with this segment. This field is a base 64 VLQ relative to the previous occurrence of this field unless this is the first occurrence of this field, in which case the whole value is represented.
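The segment decoding described above can be sketched in a few lines. This is a simplified decoder for a single segment only; real mappings also need the delta decoding carried across segments and lines, where the 5th value (when present) is the relative index into `names`:

```javascript
// Sketch: decode one base64-VLQ `mappings` segment into its 1/4/5 fields.
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function decodeVlqSegment(segment) {
  const values = [];
  let value = 0;
  let shift = 0;
  for (const ch of segment) {
    const digit = B64.indexOf(ch);
    value += (digit & 31) << shift; // low 5 bits carry data
    if (digit & 32) {
      shift += 5; // continuation bit set: more digits follow
    } else {
      const negative = value & 1; // low bit of the final value is the sign
      value >>>= 1;
      values.push(negative ? -value : value);
      value = 0;
      shift = 0;
    }
  }
  return values;
}
```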

Obviously if there is a full sourcemap for the webapp, then wakaru isn't really needed anyway.. but what I was thinking of here is that in combination with module detection (see pionxzh/wakaru#41), if there are sourcemaps available for that original module, then we could potentially extract the original function/variable/etc names from the names field of the sourcemap, and use them in a sort of 'smart-rename with sourcemap' type way.


Another sourcemap related idea I had (which probably deserves its own issue) is that it would be cool to be able to 'retroactively generate a sourcemap' for a webapp, based on the unminified output from wakaru; such that we could then take that sourcemap, and apply it to the original minified web app source for debugging the live app.

Edit: Created a new issue to track this:

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)


It isn't very meaningful to support such a feature when you can access all the source code.

@pionxzh I was specifically talking about it in terms of bundled modules (eg. React, etc), and not the unique web app code of the app itself.

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)


You mean like, for popular open-source projects, we can put some sourcemap in our project / read from the chunk, and then reverse map the minified variable and function name back to normal?

@pionxzh Similar to that, but probably not "put the sourcemap in our project" directly; but more process the sourcemaps from popular open-source projects and extract those details to an 'intermediary form'. That 'intermediary form' would be similar to the 'module map' file, as I described earlier in this thread:

When I was exploring PoC ideas for my own project previously, I was looking to generate a file similar to the 'module map' that this project is using; but instead of just for the names of modules, I wanted to be able to use it to provide a 'variable name map'. Though because the specific variables used in webpack/etc can change between builds, my thought was that 'normalising' them to a 'known format' based on their context would make sense to do first.

That could then be later enhanced/expanded by being able to pre-process these 'variable name mappings' for various open source projects in a way that could then be applied 'automagically' without the end user needing to first create them.

It could also be enhanced by similar techniques such as what the humanify project does, by using LLMs/similar to generate suggested variable name mappings based on the code.

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)


A configuration table/profile can be provided to allow users to manually write correspondences. wakaru can simply include the rules of the better known packages.

@StringKe nods, sounds like we are thinking about similar things here :)


Can you specify the content that you would expect to have? and the corresponding behavior

@pionxzh For me personally, I haven't deeply thought through all the use cases in depth, but at a high level I basically want to be able to take a web app that is going to be re-built multiple times, and be able to have a 'config file' similar to the 'module mapping' that wakaru has/had; but that also allows me to specify the variable/function names ('symbols') that are used within it.

The slightly more challenging part is that because the app will be re-built multiple times, the minified variables will change (sometimes every build), so we can't easily use those as the 'key' of the mapping. One idea I had for solving that is potentially by first renaming all of the variables based on a 'stable naming pattern' (eg. func_*, arg_*, const_*, etc; and then could just use a counter/similar based on the 'scope' it's being defined in) that would be generated based on the scope/type of the 'symbol', and would therefore be resilient to the minified variable names changing each build. Those 'stable intermediary names' could then potentially be used for the keys in the variable mapping.

Though then we also need to figure out what level of 'granularity' makes sense to generate those 'stable intermediary names' at; as having a 1:1 mapping of those 'stable name scopes' to JS scopes could potentially end up being really noisy in the mapping file. So maybe using a 'higher abstracted scope' would make more sense (eg. at the module level or similar)

My original hacky implementation of this in my own PoC code was using JS objects/JSON to map an explicit minified variable name to its 'proper' name; but that broke because the minified names changed between builds. Even by implementing the 'stable naming pattern', if those 'stable names' included a 'counter' in them (eg. func_1, const_8, etc) we still probably wouldn't want to use those stable names directly as the key of an object, as if a new variable was added 'in between' in a later build, that would flow on to 'shifting' the 'counter' for every variable of a matching type afterwards, which would be a lot of effort to manually update in a mapping file. While I haven't thought too deeply about it, I think that by using an array in the mapping file, it should simplify things so that we only need to make a small change to 'fix the mappings' when a new variable is added that 'shifts' everything.
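A sketch of that array-based mapping idea; the file shape and the `const_N` stable-name convention are assumptions for illustration:

```javascript
// Sketch: a mapping file where position in the array implies the stable
// intermediary name, so an inserted variable shifts entries rather than
// invalidating every explicit key.
const exampleMapping = {
  module: 'foo',
  // index i corresponds to stable name `const_${i + 1}` within the module
  consts: ['apiBaseUrl', 'retryLimit', 'defaultHeaders'],
};

function lookupStableName(mapping, stableName) {
  const match = /^const_(\d+)$/.exec(stableName);
  if (!match) return null; // only the const_* kind is handled in this sketch
  return mapping.consts[Number(match[1]) - 1] ?? null;
}
```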

Even by using the array concept in the mappings file, there is still some manual pain/effort involved in trying to keep the mapping 'up to date' in newer builds. That's what led me into some of the deeper/more esoteric ideas/thinking around 'fingerprinting' that I expand on below.

--

Another area I started looking into (but haven't deeply explored yet), for both figuring out how to map variable names to sections of code in a 'smart' way, and potentially also for module identification (see #41), is the space of 'structural AST fingerprinting' or 'code similarity' algorithms and similar. (I realise that this is a rather deep/esoteric angle to be looking at this from, and that there are likely going to be far simpler/easier ways to implement the variable mapping/module identification in a 'good enough' way without going to this level of depth; but I'm curious to explore it regardless, to see if any good ideas come out of it)

I haven't gotten too far in my reading yet (got distracted by other things), but the high-level idea was that maybe we could generate an 'AST fingerprint' that isn't impacted by the variable/function/etc names ('symbols') changing during minification, and then use that as the basis for the 'key' in the 'mappings file'; since that fingerprint could theoretically still identify a 'scope' (which might be a literal JS scope, or might be a higher-level abstraction that we decide makes sense; the most abstract probably being at the bundled module level) even if the bundler decides to move some functions around to a different module/etc. And obviously, if we were able to generate those 'resilient fingerprints' to identify code even when it's been minified, it would make perfect sense to apply them to module detection/etc (see #41) as well.

Some of the high-level ideas / search terms that I used to start my research in that area were things like:

  • AST fingerprinting
  • Source code similarity fingerprinting
  • Control flow graphs
  • Call flow graphs
  • Program dependence graph
  • etc

Here is a link dump of a bunch of the tabs I have open but haven't got around to reviewing in depth yet, RE: 'AST fingerprinting' / Code Similarity / etc:

Unsorted/Unreviewed Initial Link Dump RE: 'AST fingerprinting' / Code Similarity

--

Another idea I've had, but only lightly explored so far, is looking into how various projects like Terser, Webpack, etc choose their minified variable names in general; but also how they handle 'stable minified variables' between builds (which is something that I know at least Webpack has some concept of). My thought is that by understanding how they implement 'stable minified variables between builds', we might be able to leverage that to either a) do something similar ourselves, or b) reverse engineer it in a way that could be 'retroactively applied' on top of an existing minified project that didn't use 'stable minified variables', to 'stabilise' them.

Originally posted by @0xdevalias in pionxzh/wakaru#34 (comment)
