Universal Asset Protocol: Adding Token Minting to Whippet

When I first created Whippet, my goal was simple: test what a blockchain with 6-second blocks would look like. After getting that running, I realized something else interesting: an experimental chain with no users and no utility is also the perfect place to test new script capabilities without affecting any real network.

I did what a goofy guy like me always does: I couldn’t stop thinking about it, so I implemented three new script opcodes to enable token minting and introspection. This taught me a lot about consensus changes, script design, and just how much work it is to add features at the protocol level.

Why Build Token Minting?

Three reasons, really:

  1. Scriptability: Bitcoin’s script language is incredibly constrained. Most tokens on Bitcoin, Dogecoin, et cetera live in layers on top of the base protocol. I wanted to see what token support would look like if built into the consensus layer itself.

  2. Covenants: With token minting, you can write scripts that enforce even more interesting rules on how funds can be spent (called covenants). You could require that tokens transfer only to specific outputs, or that transaction fees come from a designated pool. This is powerful stuff.

  3. Research: Whippet is a sandbox. I could try an idea, see what works, what breaks, and what the tradeoffs are. No production risk, just learning.

The Three Opcodes

I brainstormed for a while, and decided the best way to implement this was to take a minimal approach: just three new opcodes. Each has a specific purpose:

OP_MINT (0xb5) - Create Tokens

This opcode creates new tokens on the blockchain. The pattern is simple:

[multiplier] [salt] OP_MINT

The multiplier defines how many tokens you get per unit of Whippet’s native currency. A multiplier of 1000 means 1 WHIP = 1,000 tokens. The salt is a unique identifier that makes each token distinct.

Why a salt? Because you might mint multiple tokens on the same chain, and each needs a unique identity. The salt ensures that [1000, "salt1", OP_MINT] produces different tokens than [1000, "salt2", OP_MINT] even with identical multipliers.

The entry fee: To prevent spam, OP_MINT requires the transaction input to carry at least 1,000 satoshis of the native coin. You can’t mint tokens for free; there’s an actual cost, and the amount you spend becomes the transaction fee. It’s a simple anti-spam mechanism we’re already used to.
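To make the layout concrete, here’s a minimal sketch (in Python) of assembling that [multiplier] [salt] OP_MINT pattern as raw script bytes. The opcode value (0xb5) comes from above; the push and script_num helpers are my own illustration, not anything from Whippet’s codebase.

OP_MINT = 0xb5   # opcode value from the heading above

def push(data: bytes) -> bytes:
    # Direct pushes of up to 75 bytes use a single length-prefix byte.
    assert 0 < len(data) <= 75
    return bytes([len(data)]) + data

def script_num(n: int) -> bytes:
    # Minimal little-endian script-number encoding (positive values only).
    assert n > 0
    out = bytearray()
    while n:
        out.append(n & 0xFF)
        n >>= 8
    if out[-1] & 0x80:        # keep the sign bit clear for positive numbers
        out.append(0x00)
    return bytes(out)

multiplier = 1000   # 1 WHIP = 1,000 tokens
salt = b"salt1"     # unique identifier for this token

mint_script = push(script_num(multiplier)) + push(salt) + bytes([OP_MINT])
print(mint_script.hex())   # 02e8030573616c7431b5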

OP_INSPECT (0xb7) - Read Transaction Properties

OP_INSPECT lets scripts peek at transaction properties during execution. The pattern is:

[field_selector] OP_INSPECT

Want to verify that an output has exactly 100 satoshis? 12 OP_INSPECT 100 EQUAL does it. Want to check the protocol version? 0 OP_INSPECT 1 EQUAL. This enables covenant scripts, which can verify that outputs follow specific rules before allowing the transaction.
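Here’s a sketch of what those two examples look like as raw bytes. The field-selector meanings (0 for protocol version, 12 for an output’s value) come straight from the examples above; the only other assumption is Bitcoin’s standard OP_EQUAL value, and I’m using plain one-byte pushes rather than the OP_1..OP_16 shortcuts for readability.

OP_INSPECT = 0xb7   # opcode value from the heading above
OP_EQUAL   = 0x87   # standard Bitcoin opcode

# "12 OP_INSPECT 100 EQUAL": does the selected output carry exactly 100 satoshis?
value_check = bytes([
    0x01, 12,        # push the field selector 12
    OP_INSPECT,
    0x01, 100,       # push the expected value, 100
    OP_EQUAL,
])

# "0 OP_INSPECT 1 EQUAL": is the protocol version 1?
version_check = bytes([
    0x00,            # OP_0 pushes the selector 0
    OP_INSPECT,
    0x01, 0x01,      # push the expected version, 1
    OP_EQUAL,
])

print(value_check.hex())    # 010cb7016487
print(version_check.hex())  # 00b7010187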

OP_INSPECT_SELF (0xb6) - Examine Your Own Script

This one’s silly: OP_INSPECT_SELF pushes the current script onto the stack. Why is this useful? You can verify that multiple outputs use the same script:

OP_INSPECT_SELF OP_INSPECT_SELF EQUAL

This enables covenant chains. You can create a script that verifies it appears in multiple outputs, enforcing that tokens follow specific transfer rules. It’s a quine!
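In raw bytes, the quine is tiny. The only assumption below is Bitcoin’s standard OP_EQUAL value; the opcode value for OP_INSPECT_SELF comes from the heading above.

OP_INSPECT_SELF = 0xb6
OP_EQUAL = 0x87

# OP_INSPECT_SELF OP_INSPECT_SELF EQUAL: push the current script twice and compare.
quine = bytes([OP_INSPECT_SELF, OP_INSPECT_SELF, OP_EQUAL])
print(quine.hex())   # b6b687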

The Hard Parts

Implementing these opcodes sounds straightforward until you realize every change touches consensus rules, test data, script validation, and policy layers. Also, you’re seeing the output I ended up with, not my initial idea or any of the missteps I took to get here!

Consensus vs. Policy

Consensus rules are immutable agreements between all nodes. If you break consensus, you fork the network. Policy rules are local decisions about what to relay, so they can be softer. Getting this distinction right is critical. This was the core of my research and the source (no pun intended) of some frustration.

For OP_MINT, I had to:

  • Add opcode recognition in script.h and script.cpp
  • Implement execution logic in interpreter.cpp with full validation
  • Add policy constraints in policy.cpp (entry fee, salt size)
  • Add new transaction script types in standard.cpp to classify OP_MINT outputs
  • Update test data in script_tests.json with hundreds of test cases

Each layer had its own assumptions about how scripts work. Violating those assumptions meant rewriting tests, fixing endianness issues, and debugging subtle consensus bugs. Thank you, Copilot, for letting me yell at you.

Similarly, the Whippet core (inherited almost unmodified from Dogecoin) allows either lax validation of transaction scripts or strict validation (the default). I had to come up with a way to add a template for OP_MINT scripts that the solver would recognize, and then ensure that the policy layer only accepted transactions matching this template. This was a non-trivial amount of work, especially as I had to maintain backward compatibility with existing script types.

Script Introspection Challenges

OP_INSPECT is particularly tricky because it accesses transaction context during script execution. The script engine normally doesn’t have access to transaction data, so I had to change a fair amount of code just to pass the transaction context down into the interpreter.

One subtle bug: if a script accesses an output that doesn’t exist, what happens? Fail the script? Return zero? Return an error? Each choice has implications for how covenants can be written. I chose strict validation, so that accessing a nonexistent output fails the script.

Test Data Hell

Dogecoin’s test suite includes thousands of pre-computed script test cases in script_tests.json. When I added the new opcodes, I had to do all of this while keeping the existing tests passing:

  • Add test cases for valid execution
  • Add test cases for invalid inputs (missing salt, short salt, wrong format)
  • Update existing test cases that were affected by the new opcodes

This consumed way more time than the actual opcode implementation, but it was worth it to catch edge cases and prevent regressions.
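For a sense of what those entries look like: script_tests.json stores each case as a JSON array of scriptSig, scriptPubKey, verification flags, expected result, and a comment. The two OP_MINT entries below are hypothetical (the flags, error names, and exact placement of the mint script are my guesses), but they show the shape of the work.

["", "1000 'salt1' MINT", "P2SH,STRICTENC", "OK", "UAP: basic mint, 1000x multiplier, 5-byte salt"],
["", "1000 MINT", "P2SH,STRICTENC", "INVALID_STACK_OPERATION", "UAP: mint with missing salt must fail"]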

Building the Test Suite

After implementing the opcodes, I needed to verify they actually work. Python integration tests seemed like the right approach, and they gave me three test files. This is where you come in:

uap_transactions.py - Tests token conservation. Can you create tokens? Do they obey basic conservation laws? Does the mempool accept them?

uap_mint_transfer_spend.py - Tests individual features. OP_INSPECT validation? OP_INSPECT_SELF correctness? Multi-level transaction chains where tokens transfer through covenants?

uap_integration.py - Complex real-world scenarios. Large token operations, nested covenants, covenant preservation across multiple transaction levels.

Each test verifies:

  • Tokens are created with correct supply
  • Fees are calculated properly
  • Salt uniqueness is enforced
  • Output introspection works correctly
  • Script introspection enables covenant chains
  • Tokens can be transferred through covenant scripts

Even more than that, these tests demonstrate how you can use these new opcodes.
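To give a feel for the shape of these tests, here’s a rough sketch in the style of the qa/rpc-tests framework the suite inherits from Dogecoin. The class and the build_and_send_mint helper are hypothetical stand-ins for illustration; the real tests are the three files above.

from test_framework.test_framework import BitcoinTestFramework

# A rough sketch only: build_and_send_mint is a hypothetical helper, not part
# of the framework; see the uap_*.py files for the real thing.
class MintConservationSketch(BitcoinTestFramework):
    def run_test(self):
        node = self.nodes[0]

        # Build a transaction whose output script is [1000] ['salt1'] OP_MINT
        # (assembled as in the earlier sketch), fund it with at least 1,000
        # satoshis, sign it, and broadcast it.
        txid = self.build_and_send_mint(node, multiplier=1000, salt=b"salt1")

        node.generate(1)     # confirm the mint in a regtest block
        assert txid in node.getblock(node.getbestblockhash())["tx"]

if __name__ == '__main__':
    MintConservationSketch().main()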

The Silly Mistakes

Of course I made mistakes. Everyone does.

Protocol Version Management: I updated the protocol version but didn’t realize this would break existing node compatibility in subtle ways. Nodes running protocol version 70015 wouldn’t understand OP_MINT transactions. This is actually correct behavior because it enforces upgrades, but it meant my test nodes couldn’t communicate across versions.

Script Type Recognition: Early on, I incorrectly classified OP_MINT scripts. The Solver function needed updates to properly identify TX_OP_MINT patterns. Without this, the script type classifier would return TX_NONSTANDARD, and the policy layer would reject the transactions. Fixed, but it took a while to debug.

Important Lessons to Learn

I knew some of these already, but experiencing them all together reinforced them.

  1. Consensus changes are hard for good reasons. Every change ripples through validation, policy, test data, and documentation. This is why blockchains are conservative about protocol changes.

  2. Script introspection is powerful but dangerous. Being able to examine transaction properties during script execution enables sophisticated covenants. It also enables sophisticated bugs. Thorough testing is non-negotiable.

  3. Test data is part of consensus. The test suite also documents what the consensus rules actually are. Investing time in comprehensive test coverage pays off immediately when you discover edge cases.

  4. Protocol versioning matters. Nodes need to understand which version of the protocol they’re running. This determines which features are available and which peers they can communicate with.

  5. Token protocols can live on-chain. I was skeptical at first, but with OP_MINT and OP_INSPECT, you actually can build token systems that enforce conservation and covenants at the consensus level. The tradeoffs are different from off-chain protocols, but it works.

Try It Yourself

Whippet 1.1.0 includes all three opcodes and a comprehensive test suite. To experiment, either download binaries from the Whippet Do Not Use 1.1.0 Release, or build it yourself:

# Build from source
./autogen.sh && ./configure && make -j4

# Run the UAP tests
python3 qa/rpc-tests/uap_mint_transfer_spend.py

# Start a node and mint tokens
./whippetd -regtest
./whippet-cli -regtest getnewaddress  # Get an address

The documentation in doc/uap-minting-guide.md includes step-by-step examples and complete code walkthroughs.

What’s Next?

Someone could take this in a lot of directions:

  • Smart contracts: Could we enable more sophisticated script logic? Loops? Conditional covenant chains?
  • Rollups: Would fast blocks + token support enable efficient rollup constructions?
  • Cross-chain tokens: Could tokens minted on Whippet be bridged to other chains?
  • Off-chain scaling: Could UAP work as a base layer for Plasma or Rollup constructions?

I might shut it all down in a few weeks. Either way, I’ve learned an enormous amount about building consensus-level features, script design, and the tradeoffs between different approaches to tokens and smart contracts. This may be the basis of bringing native assets to coins like Dogecoin or Pepecoin in the future.

Alternately, if you want to build something like pump.fun on scrypt chains, you can totally use this chain as a testbed. I intended these opcodes to work for that purpose, and the 6-second block times make experimenting with token economics and transfer patterns much faster than on a real chain.

If you’re curious about blockchain implementation details, or if you want to experiment with your own protocol changes, I’d encourage you to download/build/run Whippet and try something. The barrier to entry is lower than you might think. It’s mainly time, patience, and willingness to debug test failures.


Disclaimer: Whippet is an experimental sandbox. The tokens have no value. The chain may reset. The implementation is for research only. Do not use this for anything you care about.

To get started: https://github.com/chromatic/whippet-experimental-blockchain

Latest Release: Whippet 1.1.0

Documentation: UAP Minting Guide