readmes: simplified

This commit is contained in:
nym21
2025-12-18 17:10:23 +01:00
parent 549e2da05b
commit c5657b9c31
23 changed files with 812 additions and 2968 deletions


@@ -1,273 +1,56 @@
# brk
Unified Bitcoin Research Kit crate providing optional feature-gated access to all BRK components.
Umbrella crate for the Bitcoin Research Kit.
[![Crates.io](https://img.shields.io/crates/v/brk.svg)](https://crates.io/crates/brk)
[![Documentation](https://docs.rs/brk/badge.svg)](https://docs.rs/brk)
## What It Enables
## Overview
Single dependency to access any BRK component. Enable only what you need via feature flags.
This crate serves as a unified entry point to the Bitcoin Research Kit ecosystem, providing feature-gated re-exports of all BRK components. It allows users to selectively include only the functionality they need while maintaining a single dependency declaration, with the `brk_cli` component always available for command-line interface access.
**Key Features:**
- Feature-gated modular access to 12 specialized BRK components
- Single dependency entry point with selective compilation
- Always-available CLI component for command-line operations
- Comprehensive documentation aggregation with inline re-exports
- `full` feature for complete BRK functionality inclusion
- Optimized build configuration with docs.rs integration
**Target Use Cases:**
- Applications requiring selective BRK functionality to minimize dependencies
- Library development where only specific Bitcoin analysis components are needed
- Prototyping and experimentation with different BRK component combinations
- Educational use cases demonstrating modular blockchain analytics architecture
## Installation
## Usage
```toml
[dependencies]
# Minimal installation with CLI only
brk = "0.0.107"
# Full functionality
brk = { version = "0.0.107", features = ["full"] }
# Selective features
brk = { version = "0.0.107", features = ["indexer", "computer", "server"] }
brk = { version = "0.x", features = ["query", "types"] }
```
## Quick Start
```rust
// CLI is always available
use brk::cli;
use brk::query::Query;
use brk::types::Height;

// Feature-gated components
#[cfg(feature = "indexer")]
use brk::indexer::Indexer;
#[cfg(feature = "computer")]
use brk::computer::Computer;
#[cfg(feature = "server")]
use brk::server::Server;

// Build complete pipeline with selected features
#[cfg(all(feature = "indexer", feature = "computer", feature = "server"))]
fn build_pipeline() -> Result<(), Box<dyn std::error::Error>> {
    let indexer = Indexer::build("./data")?;
    let computer = Computer::build("./analytics", &indexer)?;
    let interface = brk::interface::Interface::build(&indexer, &computer);
    let server = Server::new(interface, None);
    Ok(())
}
```
## API Overview
## Feature Flags
### Feature Organization
| Feature | Crate | Description |
|---------|-------|-------------|
| `bencher` | `brk_bencher` | Resource monitoring |
| `binder` | `brk_binder` | Client code generation |
| `bundler` | `brk_bundler` | JS bundling |
| `computer` | `brk_computer` | Metric computation |
| `error` | `brk_error` | Error types |
| `fetcher` | `brk_fetcher` | Price data fetching |
| `grouper` | `brk_grouper` | Cohort filtering |
| `indexer` | `brk_indexer` | Blockchain indexing |
| `iterator` | `brk_iterator` | Block iteration |
| `logger` | `brk_logger` | Logging setup |
| `mcp` | `brk_mcp` | MCP server |
| `mempool` | `brk_mempool` | Mempool monitoring |
| `query` | `brk_query` | Query interface |
| `reader` | `brk_reader` | Raw block reading |
| `rpc` | `brk_rpc` | Bitcoin RPC client |
| `server` | `brk_server` | HTTP API server |
| `store` | `brk_store` | Key-value storage |
| `traversable` | `brk_traversable` | Data traversal |
| `types` | `brk_types` | Domain types |
The crate provides feature-gated access to BRK components organized by functionality:
**Core Data Processing:**
- `structs` - Bitcoin-aware data structures and type system
- `error` - Centralized error handling across components
- `store` - Transactional key-value storage wrapper
**Blockchain Processing:**
- `parser` - Multi-threaded Bitcoin block parsing
- `indexer` - Blockchain data indexing with columnar storage
- `computer` - Analytics computation engine
**Data Access:**
- `interface` - Unified query interface with fuzzy search
- `fetcher` - Multi-source price data aggregation
**Service Layer:**
- `server` - HTTP API server with caching and compression
- `mcp` - Model Context Protocol bridge for LLM integration
- `logger` - Enhanced logging with colored output
**Web Infrastructure:**
- `bundler` - Web asset bundling using Rolldown
### Always Available
**`cli`**: Command-line interface module (no feature gate required)
Provides access to the complete BRK command-line interface for running full instances.
## Examples
### Minimal Bitcoin Parser
```rust
fn parse_blocks() -> Result<(), Box<dyn std::error::Error>> {
    #[cfg(feature = "parser")]
    {
        use brk::parser::Parser;
        // `rpc_client` is assumed to be constructed elsewhere
        let parser = Parser::new("/path/to/blocks", None, rpc_client);
        // Parse blockchain data
        Ok(())
    }
    #[cfg(not(feature = "parser"))]
    {
        Err("Parser feature not enabled".into())
    }
}
```
### Analytics Pipeline
```rust
use brk::{indexer, computer, interface};

#[cfg(all(feature = "indexer", feature = "computer", feature = "interface"))]
fn analytics_pipeline() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize indexer
    let indexer = indexer::Indexer::build("./blockchain_data")?;
    // Compute analytics
    let computer = computer::Computer::build("./analytics", &indexer)?;
    // Create query interface
    let interface = interface::Interface::build(&indexer, &computer);

    // Query latest price
    let params = interface::Params {
        index: "date".to_string(),
        ids: vec!["price-close".to_string()].into(),
        from: Some(-1),
        ..Default::default()
    };
    let result = interface.search_and_format(params)?;
    println!("Latest Bitcoin price: {:?}", result);
    Ok(())
}
```
### Web Server Setup
```rust
use brk::{server, interface, indexer, computer};

#[cfg(all(feature = "server", feature = "interface", feature = "indexer", feature = "computer"))]
async fn web_server() -> Result<(), Box<dyn std::error::Error>> {
    let indexer = indexer::Indexer::build("./data")?;
    let computer = computer::Computer::build("./analytics", &indexer)?;
    let interface = interface::Interface::build(&indexer, &computer);
    let server = server::Server::new(interface, None);
    server.serve(true).await?;
    Ok(())
}
```
### MCP Integration
```rust
use brk::{mcp, interface, indexer, computer};

#[cfg(all(feature = "mcp", feature = "interface"))]
fn mcp_server(interface: &'static interface::Interface) -> mcp::MCP {
    mcp::MCP::new(interface)
}
```
## Feature Combinations
### Common Combinations
**Data Processing**: `["structs", "parser", "indexer"]`
Basic blockchain data processing and indexing.
**Analytics**: `["indexer", "computer", "fetcher", "interface"]`
Complete analytics pipeline with price data integration.
**API Server**: `["interface", "server", "logger"]`
HTTP API server with logging capabilities.
**Full Stack**: `["full"]`
All components for complete BRK functionality.
### Dependency Optimization
Feature selection allows for significant dependency reduction:
## Common Combinations
```toml
# Minimal parser-only dependency
brk = { version = "0.0.107", features = ["parser"] }
# Query-only (read computed data)
brk = { version = "0.x", features = ["query", "types"] }
# Analytics without web server
brk = { version = "0.0.107", features = ["indexer", "computer", "interface"] }
# Full indexing pipeline
brk = { version = "0.x", features = ["indexer", "computer", "query"] }
# Web server without parsing
brk = { version = "0.0.107", features = ["interface", "server"] }
# API server
brk = { version = "0.x", features = ["server", "query", "mempool"] }
```
## Architecture
### Re-export Pattern
The crate uses `#[doc(inline)]` re-exports to provide seamless access to component APIs:
```rust
#[cfg(feature = "component")]
#[doc(inline)]
pub use brk_component as component;
```
This pattern ensures:
- Feature-gated compilation for dependency optimization
- Inline documentation for unified API reference
- Namespace preservation for component-specific functionality
### Build Configuration
- **Documentation**: `all-features = true` for complete docs.rs documentation
- **CLI Integration**: `brk_cli` always available without feature gates
- **Optional Dependencies**: All components except CLI are optional
## Configuration
### Feature Flags
| Feature | Component | Description |
|---------|-----------|-------------|
| `bundler` | `brk_bundler` | Web asset bundling |
| `computer` | `brk_computer` | Analytics computation |
| `error` | `brk_error` | Error handling |
| `fetcher` | `brk_fetcher` | Price data fetching |
| `indexer` | `brk_indexer` | Blockchain indexing |
| `interface` | `brk_query` | Data query interface |
| `logger` | `brk_logger` | Enhanced logging |
| `mcp` | `brk_mcp` | Model Context Protocol |
| `parser` | `brk_reader` | Block parsing |
| `server` | `brk_server` | HTTP server |
| `store` | `brk_store` | Key-value storage |
| `structs` | `brk_types` | Data structures |
| `full` | All components | Complete functionality |
### Documentation
Documentation is aggregated from all components with `#![doc = include_str!("../README.md")]` ensuring comprehensive API reference across all features.
## Code Analysis Summary
**Main Structure**: Feature-gated re-export crate providing unified access to 12 BRK components \
**Feature System**: Cargo features enabling selective compilation and dependency optimization \
**CLI Integration**: Always-available `brk_cli` access without feature requirements \
**Documentation**: Inline re-exports with comprehensive docs.rs integration \
**Dependency Management**: Optional dependencies for all components except CLI \
**Build Configuration**: Optimized compilation with all-features documentation \
**Architecture**: Modular aggregation crate enabling flexible BRK ecosystem usage
---
_This README was generated by Claude Code_


@@ -1 +1,43 @@
# brk_bencher
Resource monitoring for long-running Bitcoin indexing operations.
## What It Enables
Track disk usage, memory consumption (current + peak), and I/O throughput during indexing runs. Progress tracking hooks into brk_logger to record processing milestones automatically.
## Key Features
- **Multi-metric monitoring**: Disk, memory (RSS + peak), I/O read/write
- **Progress tracking**: Integrates with logging to capture block heights as they're processed
- **Run comparison**: Outputs timestamped CSVs for comparing multiple runs
- **macOS optimized**: Uses libproc for accurate process metrics on macOS
- **Non-blocking**: Monitors in background thread with 5-second sample interval
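The non-blocking background monitor can be sketched with std threading. This is an illustrative shape only — `spawn_sampler` and its signature are assumptions, not brk_bencher's API; the real crate samples every 5 seconds:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

/// Spawn a background sampler in the spirit of the monitor loop described
/// above. `sample` stands in for metric collection (disk, memory, I/O).
/// Returns a stop flag and the join handle for shutdown.
fn spawn_sampler(
    interval: Duration,
    sample: impl Fn() + Send + 'static,
) -> (Arc<AtomicBool>, thread::JoinHandle<()>) {
    let stop = Arc::new(AtomicBool::new(false));
    let stop2 = Arc::clone(&stop);
    let handle = thread::spawn(move || loop {
        sample(); // take one sample per iteration
        if stop2.load(Ordering::Relaxed) {
            break;
        }
        thread::sleep(interval);
    });
    (stop, handle)
}
```

In this sketch, `bencher.start()` would map onto spawning the thread, and `bencher.stop()` onto flipping the stop flag and joining.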
## Core API
```rust
let mut bencher = Bencher::from_cargo_env("brk_indexer", &data_path)?;
bencher.start()?;
// ... run indexing ...
bencher.stop()?;
```
## Output Structure
```
benches/
└── brk_indexer/
└── 1703001234/
├── disk.csv # timestamp_ms, bytes
├── memory.csv # timestamp_ms, current, peak
├── io.csv # timestamp_ms, read, written
└── progress.csv # timestamp_ms, height
```
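Given the column layout above, each CSV row is trivially parseable; a minimal sketch (helper name assumed, not part of brk_bencher's API):

```rust
/// Parse one `memory.csv` row: `timestamp_ms, current, peak` (all integers).
/// Illustrative sketch based on the layout above.
fn parse_memory_row(line: &str) -> Option<(u64, u64, u64)> {
    let mut cols = line.split(',').map(str::trim);
    let ts = cols.next()?.parse().ok()?;
    let current = cols.next()?.parse().ok()?;
    let peak = cols.next()?.parse().ok()?;
    Some((ts, current, peak))
}
```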
## Built On
- `brk_error` for error handling
- `brk_logger` for progress hook integration


@@ -0,0 +1,34 @@
# brk_bencher_visualizer
SVG chart generation for benchmark visualization.
## What It Enables
Turn benchmark CSV data into publication-ready SVG charts showing disk usage, memory (current/peak), progress, and I/O over time. Compare multiple runs side-by-side with automatic color coding.
## Key Features
- **Multi-run comparison**: Overlay multiple benchmark runs with distinct colors
- **Dual-axis charts**: Memory charts show both current and peak usage (solid vs dashed lines)
- **Smart scaling**: Automatic unit conversion for bytes (KB/MB/GB) and time (seconds/minutes/hours)
- **Per-run trimming**: Aligns data by progress cutoffs for fair comparison
- **Dark theme**: Clean, readable charts with monospace fonts
## Core API
```rust
let viz = Visualizer::from_cargo_env()?;
viz.generate_all_charts()?; // Process all crates in benches/
```
## Chart Types
- `disk.svg` - Storage consumption over time
- `memory.svg` - Current + peak memory usage
- `progress.svg` - Processing progress (e.g., blocks indexed)
- `io_read.svg` / `io_write.svg` - I/O throughput
## Input Format
Reads CSV files from `benches/<crate>/<run_id>/`:
- `disk.csv`, `memory.csv`, `progress.csv`, `io.csv`
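The "smart scaling" of byte values for axis labels can be sketched as follows (illustrative helper, not the visualizer's actual API):

```rust
/// Pick a readable unit for a byte count, as the chart axes above do.
/// Illustrative sketch; the visualizer's exact formatting may differ.
fn format_bytes(bytes: f64) -> String {
    const UNITS: [&str; 4] = ["B", "KB", "MB", "GB"];
    let mut value = bytes;
    let mut unit = 0;
    while value >= 1024.0 && unit < UNITS.len() - 1 {
        value /= 1024.0;
        unit += 1;
    }
    format!("{:.1} {}", value, UNITS[unit])
}
```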


@@ -1 +1,42 @@
# brk_binder
Code generation for BRK client libraries.
## What It Enables
Generate typed metric catalogs and constants for JavaScript/TypeScript clients. Keeps frontend code in sync with available metrics without manual maintenance.
## Key Features
- **Metric catalog**: Generates `metrics.js` with all metric IDs and their supported indexes
- **Compression**: Metric names compressed via word-to-base52 mapping for smaller bundles
- **Mining pools**: Generates `pools.js` with pool ID to name mapping
- **Version sync**: Generates `version.js` matching server version
## Core API
```rust
generate_js_files(&query, &modules_path)?;
```
## Generated Files
```
modules/brk-client/generated/
├── version.js # export const VERSION = "vX.Y.Z"
├── metrics.js # INDEXES, COMPRESSED_METRIC_TO_INDEXES
└── pools.js # POOL_ID_TO_POOL_NAME
```
## Metric Compression
To minimize bundle size, metric names are compressed:
1. Extract all words from metric names
2. Sort by frequency
3. Map to base52 codes (A-Z, a-z)
4. Store compressed metric → index group mapping
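The compression steps above can be sketched as follows. This is illustrative only — the real word-splitting and code format live in brk_binder; the bijective base-52 scheme and tie-breaking are assumptions:

```rust
use std::collections::HashMap;

/// Bijective base-52 codes over A-Z then a-z: 0 -> "A", 51 -> "z", 52 -> "AA".
fn base52(mut n: usize) -> String {
    const ALPHABET: &[u8; 52] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    let mut out = Vec::new();
    loop {
        out.push(ALPHABET[n % 52]);
        if n < 52 {
            break;
        }
        n = n / 52 - 1;
    }
    out.reverse();
    String::from_utf8(out).unwrap()
}

/// Assign shorter codes to more frequent words (steps 2-3 above).
/// Ties are broken alphabetically for determinism.
fn assign_codes(words: &[&str]) -> HashMap<String, String> {
    let mut freq: HashMap<&str, usize> = HashMap::new();
    for &w in words {
        *freq.entry(w).or_insert(0) += 1;
    }
    let mut by_freq: Vec<&str> = freq.keys().copied().collect();
    by_freq.sort_by_key(|w| (std::cmp::Reverse(freq[w]), *w));
    by_freq
        .into_iter()
        .enumerate()
        .map(|(i, w)| (w.to_string(), base52(i)))
        .collect()
}
```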
## Built On
- `brk_query` for metric enumeration
- `brk_types` for mining pool data


@@ -1,278 +1,32 @@
# brk_bundler
Asset bundling and development server for BRK web interfaces with hot reloading and file watching.
JavaScript bundling with watch mode for BRK web interfaces.
[![Crates.io](https://img.shields.io/crates/v/brk_bundler.svg)](https://crates.io/crates/brk_bundler)
[![Documentation](https://docs.rs/brk_bundler/badge.svg)](https://docs.rs/brk_bundler)
## What It Enables
## Overview
Bundle and minify JavaScript modules using Rolldown, with file watching for development. Handles module copying, source map generation, and cache-busting via hashed filenames.
This crate provides a thin wrapper around the Rolldown JavaScript bundler specifically designed for BRK web interface development. It handles asset bundling, file copying, template processing, and development-mode file watching with automatic rebuilds and hot reloading for efficient web development workflows.
## Key Features
**Key Features:**
- **Rolldown integration**: Fast Rust-based bundler with tree-shaking and minification
- **Watch mode**: Rebuilds on file changes with live module syncing
- **Source maps**: Full debugging support in production builds
- **Cache busting**: Hashes main bundle filename, updates HTML references automatically
- **Service worker versioning**: Injects package version into service worker files
- JavaScript bundling with Rolldown (Rust-based bundler)
- Automatic file watching and hot reloading in development mode
- Template processing with version injection and asset hash replacement
- Service worker generation with version management
- Source map generation for debugging
- Minification for production builds
- Async/await support with Tokio integration
**Target Use Cases:**
- BRK blockchain explorer web interfaces
- Development of Bitcoin analytics dashboards
- Building responsive web applications for blockchain data visualization
- Hot reloading development environment for rapid iteration
## Installation
```bash
cargo add brk_bundler
```
## Quick Start
```rust
use brk_bundler::bundle;
use std::path::Path;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let websites_path = Path::new("./web");
    let source_folder = "src";
    let watch = true; // Enable hot reloading

    // Bundle assets and start development server
    let dist_path = bundle(websites_path, source_folder, watch).await?;
    println!("Assets bundled to: {}", dist_path.display());

    // Keep running for file watching (in watch mode)
    if watch {
        tokio::signal::ctrl_c().await?;
    }
    Ok(())
}
```
## Core API
```rust
use brk_bundler::bundle;

// One-shot build
let dist = bundle(modules_path, websites_path, "src", false).await?;

// Watch mode for development
bundle(modules_path, websites_path, "src", true).await?;
```
## API Overview
## Build Pipeline
### Core Functions
**`bundle(websites_path: &Path, source_folder: &str, watch: bool) -> io::Result<PathBuf>`**
Main bundling function that processes web assets and optionally starts file watching.
### Bundling Process
1. **Directory Setup**: Creates `dist/` directory and copies source files
2. **JavaScript Bundling**: Processes `scripts/entry.js` with Rolldown bundler
3. **Template Processing**: Updates `index.html` with hashed asset references
4. **Service Worker**: Generates service worker with version injection
5. **File Watching**: Optionally monitors source files for changes
### Configuration
**Rolldown Bundler Options:**
- **Input**: `./src/scripts/entry.js` (main JavaScript entry point)
- **Output**: `./dist/scripts/` directory
- **Minification**: Enabled for production builds
- **Source Maps**: File-based source maps for debugging
- **Asset Hashing**: Automatic hash generation for cache busting
## Examples
### Development Mode with Hot Reloading
```rust
use brk_bundler::bundle;
use std::path::Path;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let web_root = Path::new("./websites");

    // Start development server with file watching
    let _dist_path = bundle(web_root, "explorer", true).await?;
    println!("Development server started!");
    println!("Hot reloading enabled - edit files to see changes");

    // Keep server running
    loop {
        tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
    }
}
```
### Production Build
```rust
use brk_bundler::bundle;
use std::path::Path;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let web_root = Path::new("./websites");

    // Build for production (no watching)
    let dist_path = bundle(web_root, "dashboard", false).await?;
    println!("Production build completed: {}", dist_path.display());

    // Assets are minified and ready for deployment
    Ok(())
}
```
### Custom Web Application Structure
```rust
use brk_bundler::bundle;
use std::path::Path;

// Expected directory structure:
// websites/
// ├── my_app/
// │   ├── index.html          // Main HTML template
// │   ├── service-worker.js   // Service worker template
// │   ├── scripts/
// │   │   └── entry.js        // JavaScript entry point
// │   ├── styles/
// │   │   └── main.css        // CSS files
// │   └── assets/
// │       └── images/         // Static assets
// └── dist/                   // Generated output

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let websites_path = Path::new("./websites");
    let source_folder = "my_app";

    let dist_path = bundle(websites_path, source_folder, false).await?;

    // Result: dist/ contains bundled and processed files
    // - dist/index.html (with updated script references)
    // - dist/service-worker.js (with version injection)
    // - dist/scripts/main.[hash].js (minified and hashed)
    // - dist/styles/ (copied CSS files)
    // - dist/assets/ (copied static assets)
    Ok(())
}
```
## Architecture
### File Processing Pipeline
1. **Source Copying**: Recursively copies all source files to dist directory
2. **JavaScript Bundling**: Rolldown processes entry.js with dependencies
3. **Asset Hashing**: Generates content-based hashes for cache busting
4. **Template Updates**: Replaces placeholders in HTML templates
5. **Version Injection**: Updates service worker with current package version
### File Watching System
**Development Mode Watchers:**
- **Source File Watcher**: Monitors non-script files for changes
- **Bundle Watcher**: Watches JavaScript files and triggers rebuilds
- **Template Watcher**: Updates HTML when bundled assets change
**Event Handling:**
- **File Creation/Modification**: Automatic copying to dist directory
- **Script Changes**: Triggers Rolldown rebuild and template update
- **Template Changes**: Processes HTML and updates asset references
### Template Processing
**index.html Processing:**
- Scans bundled JavaScript for asset hash
- Replaces `/scripts/main.js` with `/scripts/main.[hash].js`
- Maintains cache busting while preserving template structure
**service-worker.js Processing:**
- Replaces `__VERSION__` placeholder with current crate version
- Enables version-based cache invalidation
- Maintains service worker functionality
### Async Architecture
Built on Tokio async runtime:
- **Non-blocking I/O**: Efficient file operations and watching
- **Concurrent Tasks**: Parallel file watching and bundle processing
- **Background Processing**: Development server runs in background task
## Configuration Options
### Rolldown Configuration
The bundler uses optimized Rolldown settings:
```rust
BundlerOptions {
    input: Some(vec!["./src/scripts/entry.js".into()]),
    dir: Some("./dist/scripts".to_string()),
    minify: Some(RawMinifyOptions::Bool(true)),
    sourcemap: Some(SourceMapType::File),
    // ... other default options
}
```
### File Structure Requirements
**Required Files:**
- `src/scripts/entry.js` - JavaScript entry point
- `src/index.html` - HTML template
- `src/service-worker.js` - Service worker template
**Optional Directories:**
- `src/styles/` - CSS stylesheets
- `src/assets/` - Static assets (images, fonts, etc.)
- `src/components/` - Additional JavaScript modules
## Development Workflow
### Setup
1. Create web application in `websites/app_name/`
2. Add required files (index.html, entry.js, service-worker.js)
3. Run bundler in watch mode for development
### Hot Reloading
- **Script Changes**: Automatic bundle rebuild and browser refresh
- **Template Changes**: Immediate HTML update with asset hash replacement
- **Asset Changes**: Instant copy to dist directory
- **Style Changes**: Direct copy without bundling
### Production Deployment
1. Run bundler without watch mode
2. Deploy `dist/` directory contents
3. Assets include content hashes for cache busting
4. Service worker includes version for cache management
## Code Analysis Summary
**Main Function**: `bundle()` async function coordinating Rolldown bundler with file processing and watching \
**File Operations**: Recursive directory copying with `copy_dir_all()` and selective file processing \
**Templating**: String replacement for asset hash injection and version management \
**File Watching**: Multi-watcher system using `notify` crate for real-time development feedback \
**Async Integration**: Tokio-based async architecture with background task spawning \
**Bundler Integration**: Rolldown wrapper with optimized configuration for web development \
**Architecture**: Development-focused asset pipeline with hot reloading and production optimization
---
_This README was generated by Claude Code_
1. Copy shared modules to source scripts directory
2. Bundle with Rolldown (minified, with source maps)
3. Update `index.html` with hashed script references
4. Inject version into service worker
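Steps 3 and 4 above amount to plain string substitution; a minimal sketch (function name and shape assumed for illustration, not brk_bundler's API):

```rust
/// Rewrite the unhashed script reference in `index.html` (step 3) and the
/// `__VERSION__` placeholder in the service worker (step 4).
/// Illustrative sketch; brk_bundler's real implementation may differ.
fn rewrite_templates(html: &str, sw: &str, hashed_name: &str, version: &str) -> (String, String) {
    (
        html.replace("/scripts/main.js", &format!("/scripts/{hashed_name}")),
        sw.replace("__VERSION__", version),
    )
}
```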


@@ -1,232 +1,46 @@
# brk_cli
Command-line interface orchestrating complete Bitcoin Research Kit instances with automatic configuration and continuous blockchain processing.
Command-line interface for running the Bitcoin Research Kit.
[![Crates.io](https://img.shields.io/crates/v/brk_cli.svg)](https://crates.io/crates/brk_cli)
[![Documentation](https://docs.rs/brk_cli/badge.svg)](https://docs.rs/brk_cli)
## What It Enables
## Overview
Run a full BRK instance: index the blockchain, compute metrics, serve the API, and optionally host a web interface. Continuously syncs with new blocks.
This crate provides the primary command-line interface for running Bitcoin Research Kit instances. It orchestrates the entire data processing pipeline from Bitcoin Core block parsing through analytics computation to HTTP API serving, with persistent configuration management, automatic error recovery, and continuous blockchain synchronization.
## Key Features
**Key Features:**
- **All-in-one**: Single binary runs indexer, computer, mempool monitor, and server
- **Auto-sync**: Waits for new blocks and processes them automatically
- **Web interface**: Downloads and bundles frontend from GitHub releases
- **Configurable**: TOML config for RPC, paths, and features
- **Collision checking**: Optional TXID collision validation mode
- **Memory optimized**: Uses mimalloc allocator, 512MB stack for deep recursion
- Complete BRK pipeline orchestration with parser, indexer, computer, and server coordination
- Persistent configuration system with TOML-based auto-save functionality
- Continuous blockchain processing with new block detection and incremental updates
- Flexible Bitcoin Core RPC authentication with cookie file and user/password support
- Configurable web interface options including auto-downloading from GitHub releases
- Large stack allocation (512MB) for handling complex blockchain processing workloads
- Graceful shutdown handling with proper cleanup and state preservation
**Target Use Cases:**
- Production Bitcoin analytics deployments requiring full pipeline operation
- Development environments for Bitcoin research and analysis
- Continuous blockchain monitoring with real-time data updates
- Academic research requiring comprehensive historical blockchain datasets
## Installation
## Usage
```bash
cargo install brk # or cargo install brk_cli
# See all options
brk --help
# The CLI will:
# 1. Index new blocks
# 2. Compute derived metrics
# 3. Start mempool monitor
# 4. Launch API server (port 3110)
# 5. Wait for new blocks and repeat
```
## Quick Start
```bash
# First run - configure and start processing
brk --brkdir ./data --bitcoindir ~/.bitcoin --fetch true

# Subsequent runs use saved configuration
brk

# Override specific options
brk --website none --fetch false
```
## Components
1. **Indexer**: Processes blocks into queryable indexes
2. **Computer**: Derives 1000+ on-chain metrics
3. **Mempool**: Real-time fee estimation
4. **Server**: REST API + MCP endpoint
5. **Bundler**: JS bundling for web interface (if enabled)
## API Overview
### Core Structure
- **`Config`**: Persistent configuration with clap-based CLI parsing and TOML serialization
- **`Bridge`**: Interface trait for generating JavaScript bridge files for web interfaces
- **`Website`**: Enum for web interface options (None, Bitview, Custom)
- **Path Functions**: Cross-platform default path resolution for Bitcoin and BRK directories
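The `Website` variants map directly onto the `--website` flag values (`none`, `bitview`, `custom`). A sketch of the parsing — the variant set comes from the description above, while the `FromStr` implementation is illustrative:

```rust
/// Web interface options as described above.
#[derive(Debug, PartialEq)]
enum Website {
    None,
    Bitview,
    Custom,
}

impl std::str::FromStr for Website {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "none" => Ok(Website::None),
            "bitview" => Ok(Website::Bitview),
            "custom" => Ok(Website::Custom),
            other => Err(format!("unknown website option: {other}")),
        }
    }
}
```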
### Main Operations
**`main() -> color_eyre::Result<()>`**
Entry point with error handling setup, directory creation, logging initialization, and high-stack thread spawning.
**`run() -> color_eyre::Result<()>`**
Core processing loop handling configuration, RPC connection, component initialization, and continuous blockchain monitoring.
### Configuration Management
**Persistent Settings:**
- All CLI arguments automatically saved to `~/.brk/config.toml`
- Argument overrides update saved configuration on each run
- Cross-platform path resolution with tilde and $HOME expansion
- Validation of Bitcoin directory, blocks directory, and RPC authentication
**CLI Parameters:**
- `--bitcoindir`, `--blocksdir`, `--brkdir`: Directory configuration
- `--fetch`, `--exchanges`: Data source configuration
- `--website`: Web interface selection
- `--rpcconnect`, `--rpcport`, `--rpccookiefile`, `--rpcuser`, `--rpcpassword`: RPC settings
## Examples
### Basic Usage
```bash
# Initialize with custom directories
brk --bitcoindir /data/bitcoin --brkdir /data/brk
# Enable all features with custom RPC
brk --fetch true --exchanges true --website bitview \
--rpcuser myuser --rpcpassword mypass
# Minimal setup with API only
brk --website none --fetch false
```
### Configuration File Example
After first run, settings are saved to `~/.brk/config.toml`:
```toml
bitcoindir = "/home/user/.bitcoin"
blocksdir = "/home/user/.bitcoin/blocks"
brkdir = "/home/user/brk_data"
fetch = true
exchanges = true
website = "bitview"
rpcconnect = "localhost"
rpcport = 8332
rpccookiefile = "/home/user/.bitcoin/.cookie"
```
### Web Interface Configuration
```bash
# Use built-in Bitview interface
brk --website bitview
# Use custom web interface
brk --website custom
# API only, no web interface
brk --website none
```
### Development Mode
```bash
# Development with local website directory
# Looks for ../../websites directory first
brk --website bitview
# Production with auto-download from GitHub
# Downloads websites from release artifacts
brk --website bitview
```
## Architecture
### Startup Sequence
1. **Environment Setup**: Color eyre error handling, directory creation, logging initialization
2. **High-Stack Thread**: 512MB stack for complex blockchain processing operations
3. **Configuration Loading**: CLI parsing, TOML reading, argument merging, validation
4. **Component Initialization**: Parser, indexer, computer, interface creation with proper dependencies
### Processing Pipeline
**Continuous Operation Loop:**
1. **Bitcoin Core Sync Wait**: Monitors `headers == blocks` for full node synchronization
2. **Block Count Detection**: Compares current and previous block counts for new block detection
3. **Indexing Phase**: Processes new blocks through parser with collision detection option
4. **Computing Phase**: Runs analytics computations on newly indexed data
5. **Server Operation**: Serves HTTP API with optional web interface throughout processing
### Web Interface Integration
**Website Handling:**
- **Development Mode**: Uses local `../../websites` directory if available
- **Production Mode**: Downloads release artifacts from GitHub using semantic versioning
- **Bundle Generation**: Creates optimized JavaScript bundles using `brk_bundler`
- **Bridge Files**: Generates JavaScript bridge files for vector IDs and pool data
**Download and Bundle Process:**
```rust
// Automatic website download and bundling
let url = format!("https://github.com/bitcoinresearchkit/brk/archive/refs/tags/v{VERSION}.zip");
let response = minreq::get(url).send()?;
zip::ZipArchive::new(cursor)?.extract(downloads_path)?;
bundle(&websites_path, website.to_folder_name(), true).await?;
```
### RPC Authentication
**Flexible Authentication Methods:**
- **Cookie File**: Automatic detection at `--bitcoindir/.cookie`
- **User/Password**: Manual configuration with `--rpcuser` and `--rpcpassword`
- **Connection Validation**: Startup checks ensure proper Bitcoin Core connectivity
### Configuration System
**TOML Persistence:**
- Automatic serialization/deserialization with `serde` and `toml`
- Error-tolerant parsing with `default_on_error` deserializer
- Argument consumption validation ensuring all CLI options are processed
- Path expansion supporting `~` and `$HOME` environment variables
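The `~` and `$HOME` expansion can be sketched as below. The signature is assumed for illustration (the CLI resolves against the user's actual home directory):

```rust
/// Expand a leading `~` or a `$HOME` reference against a known home directory.
/// Illustrative sketch; the CLI's real resolution logic may differ.
fn expand_path(path: &str, home: &str) -> String {
    if let Some(rest) = path.strip_prefix("~/") {
        format!("{home}/{rest}")
    } else if path == "~" {
        home.to_string()
    } else {
        path.replace("$HOME", home)
    }
}
```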
## Configuration
### Default Paths
**Cross-Platform Path Resolution:**
- **Linux**: `~/.bitcoin` for Bitcoin Core, `~/.brk` for BRK data
- **macOS**: `~/Library/Application Support/Bitcoin` for Bitcoin Core
- **Logs**: `~/.brk/log` for application logging
- **Downloads**: `~/.brk/downloads` for temporary website artifacts
### Performance Settings
**Memory Management:**
- 512MB stack size for main processing thread
- Multi-threaded tokio runtime with all features enabled
- Persistent configuration caching to minimize I/O operations
### Error Handling
**Comprehensive Validation:**
- Directory existence checks with user-friendly error messages
- RPC authentication verification before processing begins
- Graceful exit with help suggestions for configuration issues
## Code Analysis Summary
**Main Structure**: `Config` struct with clap-derived CLI parsing and persistent TOML configuration management \
**Processing Loop**: Continuous Bitcoin Core monitoring with sync detection and incremental block processing \
**Web Integration**: Automatic website download from GitHub releases with JavaScript bundle generation \
**Component Orchestration**: Coordination of parser, indexer, computer, and server with proper dependency management \
**Error Handling**: `color_eyre` integration with comprehensive validation and user-friendly error messages \
**Threading**: High-stack thread allocation (512MB) with tokio multi-threaded runtime for complex operations \
**Architecture**: Complete BRK pipeline orchestration with persistent configuration and continuous blockchain synchronization
---
_This README was generated by Claude Code_
## Built On
- `brk_indexer` for blockchain indexing
- `brk_computer` for metric computation
- `brk_mempool` for mempool monitoring
- `brk_server` for HTTP API
- `brk_bundler` for web interface bundling


@@ -1,303 +1,57 @@
# brk_computer
Advanced Bitcoin analytics engine that transforms indexed blockchain data into comprehensive metrics and financial analytics.
Derived metrics computation engine for Bitcoin on-chain analytics.
[![Crates.io](https://img.shields.io/crates/v/brk_computer.svg)](https://crates.io/crates/brk_computer)
[![Documentation](https://docs.rs/brk_computer/badge.svg)](https://docs.rs/brk_computer)
## What It Enables
## Overview
Compute 1000+ on-chain metrics from indexed blockchain data: supply breakdowns, realized/unrealized P&L, SOPR, MVRV, cohort analysis (by age, amount, address type), cointime economics, mining pool attribution, and price-weighted valuations.
This crate provides a sophisticated analytics engine that processes indexed Bitcoin blockchain data to compute comprehensive metrics, financial analytics, and statistical aggregations. Built on top of `brk_indexer`, it transforms raw blockchain data into actionable insights through state tracking, cohort analysis, market metrics, and advanced Bitcoin-specific calculations.
## Key Features
**Key Features:**
- **Cohort metrics**: Filter by UTXO age (STH/LTH, age bands), amount ranges, address types
- **Stateful computation**: Track per-UTXO cost basis, realized/unrealized states
- **Multi-index support**: Metrics available by height, date, week, month, year, decade
- **Price integration**: USD-denominated metrics when price data available
- **Mining pool attribution**: Tag blocks/rewards to known pools
- **Cointime economics**: Liveliness, vaultedness, activity-weighted metrics
- **Incremental updates**: Resume from checkpoints, compute only new blocks
- Comprehensive Bitcoin analytics pipeline with 7 major computation modules
- UTXO and address cohort analysis with lifecycle tracking
- Market metrics integration with price data and financial calculations
- Cointime economics and realized/unrealized profit/loss analysis
- Supply dynamics and monetary policy metrics
- Pool analysis for centralization and mining statistics
- Memory allocation tracking and performance optimization
- Parallel computation with multi-threaded processing
**Target Use Cases:**
- Bitcoin market analysis and research platforms
- On-chain analytics for investment and trading decisions
- Academic research requiring comprehensive blockchain metrics
- Financial applications needing Bitcoin exposure and risk metrics
## Installation
```bash
cargo add brk_computer
```
## Quick Start
```rust
use brk_computer::Computer;
use brk_indexer::Indexer;
use brk_fetcher::Fetcher;
use vecdb::Exit;
use std::path::Path;

// Initialize dependencies
let outputs_path = Path::new("./analytics_data");
let indexer = Indexer::forced_import(outputs_path)?;
let fetcher = Some(Fetcher::import(true, None)?);

// Create computer with price data support
let mut computer = Computer::forced_import(outputs_path, &indexer, fetcher)?;

// Compute analytics from indexer state
let exit = Exit::default();
let starting_indexes = brk_indexer::Indexes::default();
computer.compute(&indexer, starting_indexes, &exit)?;

println!("Analytics computation completed!");
```
## Core API
```rust
let mut computer = Computer::forced_import(&outputs_path, &indexer, fetcher)?;

// Compute all metrics for new blocks
computer.compute(&indexer, starting_indexes, &reader, &exit)?;

// Access computed data
let supply = computer.chain.height_to_supply.get(height)?;
let realized_cap = computer.stateful.utxo.all.height_to_realized_cap.get(height)?;
```
## API Overview
### Core Structure
The Computer is organized into 7 specialized computation modules:
- **`indexes`**: Fundamental blockchain index computations
- **`constants`**: Network constants and protocol parameters
- **`market`**: Price-based financial metrics and market analysis
- **`pools`**: Mining pool analysis and centralization metrics
- **`chain`**: Core blockchain metrics (difficulty, hashrate, fees)
- **`stateful`**: Advanced state tracking (UTXO lifecycles, address behaviors)
- **`cointime`**: Cointime economics and value-time calculations
### Key Methods
**`Computer::forced_import(outputs_path, indexer, fetcher) -> Result<Self>`**
Creates computer instance with optional price data integration.
**`compute(&mut self, indexer: &Indexer, starting_indexes: Indexes, exit: &Exit) -> Result<()>`**
Main computation pipeline processing all analytics modules.
### Analytics Categories
**Market Analytics:**
- Price-based metrics (market cap, realized cap, MVRV)
- Trading volume analysis and liquidity metrics
- Return calculations and volatility measurements
- Dollar-cost averaging and investment strategy metrics
**On-Chain Analytics:**
- Transaction count and size statistics
- Fee analysis and block space utilization
- Address activity and entity clustering
- UTXO age distributions and spending patterns
**Monetary Analytics:**
- Circulating supply and issuance tracking
- Realized vs. unrealized gains/losses
- Cointime destruction and accumulation
- Velocity and economic activity indicators
## Examples
### Basic Analytics Computation
```rust
use brk_computer::Computer;
// Initialize with indexer and optional price data
let computer = Computer::forced_import(
"./analytics_output",
&indexer,
Some(price_fetcher)
)?;
// Compute all analytics modules
let exit = vecdb::Exit::default();
computer.compute(&indexer, starting_indexes, &exit)?;
// Access computed metrics
println!("Market cap vectors computed: {}", computer.market.len());
println!("Chain metrics computed: {}", computer.chain.len());
println!("Stateful analysis completed: {}", computer.stateful.len());
```
### Market Analysis
```rust
use brk_computer::Computer;
use brk_types::{DateIndex, Height};
let computer = Computer::forced_import(/* ... */)?;
// Access market metrics after computation
if let Some(market) = &computer.market {
// Daily market cap analysis
let date_index = DateIndex::from_days_since_genesis(5000);
if let Some(market_cap) = market.dateindex_to_market_cap.get(date_index)? {
println!("Market cap on day {}: ${}", date_index, market_cap.to_dollars());
}
// MVRV (Market Value to Realized Value) ratio
if let Some(mvrv) = market.dateindex_to_mvrv.get(date_index)? {
println!("MVRV ratio: {:.2}", mvrv);
}
}
// Chain-level metrics
let height = Height::new(800000);
if let Some(difficulty) = computer.chain.height_to_difficulty.get(height)? {
println!("Network difficulty at height {}: {}", height, difficulty);
}
```
### Cohort Analysis
```rust
use brk_computer::Computer;
use brk_types::{DateIndex, CohortId};
let computer = Computer::forced_import(/* ... */)?;
// Address cohort analysis
let cohort_date = DateIndex::from_days_since_genesis(4000);
// Analyze address behavior patterns
if let Some(address_cohorts) = &computer.stateful.address_cohorts {
for cohort_id in address_cohorts.get_cohort_ids_for_date(cohort_date)? {
let cohort_data = address_cohorts.get_cohort(cohort_id)?;
println!("Cohort {}: {} addresses created",
cohort_id, cohort_data.addresses.len());
println!("Average holding period: {} days",
cohort_data.avg_holding_period.as_days());
}
}
// UTXO cohort lifecycle analysis
if let Some(utxo_cohorts) = &computer.stateful.utxo_cohorts {
let active_utxos = utxo_cohorts.get_active_utxos_for_date(cohort_date)?;
println!("Active UTXOs from cohort: {}", active_utxos.len());
}
```
### Supply and Monetary Analysis
```rust
use brk_computer::Computer;
use brk_types::{Height, DateIndex};
let computer = Computer::forced_import(/* ... */)?;
// Supply dynamics
let height = Height::new(750000);
if let Some(supply) = computer.chain.height_to_circulating_supply.get(height)? {
println!("Circulating supply: {} BTC", supply.to_btc());
}
// Realized vs unrealized analysis
let date = DateIndex::from_days_since_genesis(5000);
if let Some(realized_cap) = computer.market.dateindex_to_realized_cap.get(date)? {
if let Some(market_cap) = computer.market.dateindex_to_market_cap.get(date)? {
let unrealized_pnl = market_cap - realized_cap;
println!("Unrealized P&L: ${:.2}B", unrealized_pnl.to_dollars() / 1e9);
}
}
```
## Architecture
### Computation Pipeline
The computer implements a sophisticated multi-stage pipeline:
1. **Index Computation**: Fundamental blockchain metrics and time-based indexes
2. **Constants Computation**: Network parameters and protocol constants
3. **Price Integration**: Optional price data fetching and processing
4. **Parallel Computation**: Chain, market, pools, stateful, and cointime analytics
5. **Cross-Dependencies**: Advanced metrics requiring multiple data sources
### Memory Management
**Allocation Tracking:**
- `allocative` integration for memory usage analysis
- Efficient vector storage with compression options
- Strategic lazy vs. eager evaluation for memory optimization
**Performance Optimization:**
- `rayon` parallel processing for CPU-intensive calculations
- Vectorized operations for time-series computations
- Memory-mapped storage for large datasets
### State Management
**Stateful Analytics:**
- UTXO lifecycle tracking with creation/destruction events
- Address cohort analysis with behavioral clustering
- Transaction pattern recognition and anomaly detection
- Economic cycle analysis with market phase detection
**Cointime Economics:**
- Bitcoin days destroyed and accumulated calculations
- Velocity measurements and economic activity indicators
- Age-weighted value transfer analysis
- Long-term holder vs. active trader segmentation
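The "Bitcoin days destroyed" figure referenced above has a simple definition: each spent output contributes its value multiplied by the days it sat unspent. A self-contained sketch with types simplified from the crate's own:

```rust
// Coin days destroyed: sum over spent outputs of (BTC value) * (age in days).
// Simplified stand-in types; the crate tracks this per-UTXO in its stateful module.
struct SpentOutput {
    value_btc: f64,
    created_day: u32,
    spent_day: u32,
}

fn coin_days_destroyed(spends: &[SpentOutput]) -> f64 {
    spends
        .iter()
        .map(|s| s.value_btc * (s.spent_day - s.created_day) as f64)
        .sum()
}

fn main() {
    let spends = vec![
        SpentOutput { value_btc: 2.0, created_day: 100, spent_day: 200 }, // 200 coin days
        SpentOutput { value_btc: 0.5, created_day: 150, spent_day: 160 }, // 5 coin days
    ];
    assert_eq!(coin_days_destroyed(&spends), 205.0);
}
```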
### Modular Design
Each computation module operates independently:
- **Chain Module**: Basic blockchain metrics (fees, difficulty, hashrate)
- **Market Module**: Price-dependent financial calculations
- **Pools Module**: Mining centralization and pool analysis
- **Stateful Module**: Advanced lifecycle and behavior tracking
- **Cointime Module**: Economic time-value calculations
### Data Dependencies
**Required Dependencies:**
- `brk_indexer`: Raw blockchain data access
- `brk_types`: Type definitions and conversions
**Optional Dependencies:**
- `brk_fetcher`: Price data for financial metrics
- Market analysis requires price integration
### Computation Orchestration
**Sequential Stages:**
1. Indexes → Constants (foundational metrics)
2. Fetched → Price (price data processing)
3. Parallel: Chain, Market, Pools, Stateful, Cointime
**Exit Handling:**
- Graceful shutdown with consistent state preservation
- Checkpoint-based recovery for long-running computations
- Multi-threaded coordination with exit signaling
## Code Analysis Summary
**Main Structure**: `Computer` struct coordinating 7 specialized analytics modules (indexes, constants, market, pools, chain, stateful, cointime) \
**Computation Pipeline**: Multi-stage analytics processing with parallel execution and dependency management \
**State Tracking**: Advanced UTXO and address lifecycle analysis with cohort-based behavioral clustering \
**Financial Analytics**: Comprehensive market metrics including realized/unrealized analysis and cointime economics \
**Memory Optimization**: `allocative` tracking with lazy/eager evaluation strategies and compressed vector storage \
**Parallel Processing**: `rayon` integration for CPU-intensive calculations with coordinated exit handling \
**Architecture**: Modular analytics engine transforming indexed blockchain data into actionable financial and economic insights
## Metric Categories
| Module | Examples |
|--------|----------|
| `chain` | Supply, subsidy, fees, transaction counts |
| `stateful` | Realized cap, MVRV, SOPR, unrealized P&L |
| `cointime` | Liveliness, vaultedness, true market mean |
| `pools` | Per-pool block counts, rewards, fees |
| `market` | Market cap, NVT, Puell multiple |
| `price` | Height-to-price mapping from fetched data |
## Cohort System
UTXO and address cohorts support filtering by:
- **Age**: STH (<155d), LTH (≥155d), age bands (1d, 1w, 1m, 3m, 6m, 1y, 2y, ...)
- **Amount**: 0-0.001 BTC, 0.001-0.01, ..., 10k+ BTC
- **Type**: P2PKH, P2SH, P2WPKH, P2WSH, P2TR
- **Epoch**: By halving epoch
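The age-based split above can be sketched as a plain function from UTXO age to cohort label. Thresholds mirror the list (155 days for STH/LTH); the band names are illustrative, not the crate's metric identifiers:

```rust
// Map a UTXO's age in days to a holder class and an age band.
// Thresholds follow the cohort list above; labels are illustrative.
fn holder_class(age_days: u32) -> &'static str {
    if age_days < 155 { "sth" } else { "lth" }
}

fn age_band(age_days: u32) -> &'static str {
    match age_days {
        0 => "under_1d",
        1..=6 => "1d_to_1w",
        7..=29 => "1w_to_1m",
        30..=89 => "1m_to_3m",
        90..=179 => "3m_to_6m",
        180..=364 => "6m_to_1y",
        365..=729 => "1y_to_2y",
        _ => "over_2y",
    }
}

fn main() {
    assert_eq!(holder_class(100), "sth");
    assert_eq!(holder_class(155), "lth");
    assert_eq!(age_band(45), "1m_to_3m");
    assert_eq!(age_band(800), "over_2y");
}
```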
## Built On
- `brk_indexer` for indexed blockchain data
- `brk_fetcher` for price data
- `brk_reader` for raw block access
- `brk_grouper` for cohort filtering
- `brk_traversable` for data export


@@ -1,143 +1,16 @@
# brk_error
Centralized error handling for Bitcoin-related operations and database interactions.
Unified error types for the Bitcoin Research Kit.
[![Crates.io](https://img.shields.io/crates/v/brk_error.svg)](https://crates.io/crates/brk_error)
[![Documentation](https://docs.rs/brk_error/badge.svg)](https://docs.rs/brk_error)
## Core API
- `Error` - Comprehensive enum covering all error cases across the stack
- `Result<T>` - Convenience alias for `Result<T, Error>`
## Error Categories
**External integrations**: Bitcoin RPC, consensus encoding, address parsing, JSON serialization, database (fjall, vecdb), HTTP requests (minreq), async runtime (tokio)
**Domain-specific**: Invalid addresses, unknown TXIDs, unsupported types, metric lookup failures with fuzzy suggestions, request weight limits
## Overview
This crate provides a unified error type that consolidates error handling across Bitcoin blockchain analysis tools. It wraps errors from multiple external libraries including Bitcoin Core RPC, database operations, HTTP requests, and serialization operations into a single `Error` enum.
**Key Features:**
- Unified error type covering 11+ different error sources
- Automatic conversions from external library errors
- Bitcoin-specific error variants for blockchain operations
- Database error handling for both Fjall and VecDB storage backends
- Custom error types for domain-specific validation failures
**Target Use Cases:**
- Applications processing Bitcoin blockchain data
- Systems requiring unified error handling across multiple storage backends
- Tools integrating Bitcoin Core RPC with local databases
## Installation
```bash
cargo add brk_error
```
## Quick Start
```rust
use brk_error::{Error, Result};
fn process_transaction() -> Result<()> {
// Function automatically converts various error types
let data = std::fs::read("transaction.json")?; // IO error auto-converted
let parsed: serde_json::Value = serde_json::from_slice(&data)?; // JSON error auto-converted
// Custom domain errors
if data.len() < 32 {
return Err(Error::WrongLength);
}
Ok(())
}
```
## API Overview
### Core Types
- **`Error`**: Main error enum consolidating all error types
- **`Result<T, E = Error>`**: Type alias for `std::result::Result` with default `Error` type
### Error Categories
**External Library Errors:**
- `BitcoinRPC`: Bitcoin Core RPC client errors
- `BitcoinConsensusEncode`: Bitcoin consensus encoding failures
- `Fjall`/`VecDB`/`SeqDB`: Database operation errors
- `Minreq`: HTTP request errors
- `SerdeJson`: JSON serialization errors
- `Jiff`: Date/time handling errors
- `ZeroCopyError`: Zero-copy conversion failures
**Domain-Specific Errors:**
- `WrongLength`: Invalid data length for Bitcoin operations
- `WrongAddressType`: Unsupported Bitcoin address format
- `UnindexableDate`: Date outside valid blockchain range (before 2009-01-03)
- `QuickCacheError`: Cache operation failures
**Generic Errors:**
- `Str(&'static str)`: Static string errors
- `String(String)`: Dynamic string errors
### Key Methods
All external error types automatically convert to `Error` via `From` trait implementations. The error type implements `std::error::Error`, `Debug`, and `Display` traits for comprehensive error reporting.
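The automatic conversions described above follow the standard `From`-impl pattern; here is a minimal self-contained sketch with two variants (the real enum has many more, and the variant names here are illustrative):

```rust
use std::fmt;

// Minimal version of the centralized-error pattern described above.
#[derive(Debug)]
enum Error {
    Io(std::io::Error),
    Str(&'static str),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::Io(e) => write!(f, "io: {e}"),
            Error::Str(s) => write!(f, "{s}"),
        }
    }
}

impl std::error::Error for Error {}

// A `From` impl per wrapped source enables `?`-based auto-conversion.
impl From<std::io::Error> for Error {
    fn from(e: std::io::Error) -> Self {
        Error::Io(e)
    }
}

type Result<T, E = Error> = std::result::Result<T, E>;

fn read_config(path: &str) -> Result<Vec<u8>> {
    let data = std::fs::read(path)?; // io::Error auto-converts via `?`
    if data.is_empty() {
        return Err(Error::Str("empty config"));
    }
    Ok(data)
}

fn main() {
    // A missing file surfaces as the wrapped Io variant.
    assert!(matches!(read_config("/definitely/not/here"), Err(Error::Io(_))));
}
```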
## Examples
### Bitcoin RPC Integration
```rust
use brk_error::Result;
use bitcoincore_rpc::{Client, Auth};
fn get_block_count(client: &Client) -> Result<u64> {
let count = client.get_block_count()?; // Auto-converts bitcoincore_rpc::Error
Ok(count)
}
```
### Database Operations
```rust
use brk_error::Result;
fn store_transaction_data(db: &fjall::Keyspace, data: &[u8]) -> Result<()> {
if data.len() != 32 {
return Err(brk_error::Error::WrongLength);
}
db.insert(b"tx_hash", data)?; // Auto-converts fjall::Error
Ok(())
}
```
### Date Validation
```rust
use brk_error::{Error, Result};
use jiff::civil::Date;
fn validate_blockchain_date(date: Date) -> Result<()> {
let genesis_date = Date::constant(2009, 1, 3);
let earliest_valid = Date::constant(2009, 1, 9);
if date < genesis_date || (date > genesis_date && date < earliest_valid) {
return Err(Error::UnindexableDate);
}
Ok(())
}
```
## Code Analysis Summary
**Main Type**: `Error` enum with 24 variants covering external libraries and domain-specific cases \
**Conversion Traits**: Implements `From` for 10+ external error types enabling automatic error propagation \
**Error Handling**: Standard Rust error handling with `std::error::Error` trait implementation \
**Dependencies**: Integrates errors from `bitcoin`, `bitcoincore-rpc`, `fjall`, `vecdb`, `jiff`, `minreq`, `serde_json`, and `zerocopy` crates \
**Architecture**: Centralized error aggregation pattern with automatic conversions and custom domain errors
**Network intelligence**: `is_network_permanently_blocked()` distinguishes transient failures (timeouts, rate limits) from permanent blocks (DNS failure, connection refused, TLS errors) to enable smart retry logic


@@ -1,170 +1,45 @@
# brk_fetcher
Multi-source Bitcoin price data aggregator with automatic fallback between exchanges.
Bitcoin price data fetcher with multi-source fallback.
[![Crates.io](https://img.shields.io/crates/v/brk_fetcher.svg)](https://crates.io/crates/brk_fetcher)
[![Documentation](https://docs.rs/brk_fetcher/badge.svg)](https://docs.rs/brk_fetcher)
## What It Enables
## Overview
Fetch OHLC (Open/High/Low/Close) price data from Binance, Kraken, or BRK's own API. Automatically falls back between sources on failure, with 12-hour retry persistence for transient network issues.
This crate provides a unified interface for fetching Bitcoin price data from multiple sources including Binance, Kraken, and a custom BRK API. It implements automatic failover between data sources, retry mechanisms, and supports both real-time and historical price queries using blockchain height or date-based lookups.
## Key Features
**Key Features:**
- **Multi-source fallback**: Binance → Kraken → BRK API
- **Health tracking**: Temporarily disables failing sources
- **Two resolution modes**: Per-date (daily) or per-block (1-minute interpolated)
- **HAR file support**: Import Binance 1mn data from browser network captures for historical fills
- **Permanent block detection**: Stops retrying on DNS/TLS failures
- Multi-source price aggregation (Binance, Kraken, BRK API)
- Automatic fallback hierarchy with intelligent retry logic
- Historical price queries by blockchain height or date
- Support for both 1-minute and daily OHLC data
- HAR file import for extended historical data coverage
- Built-in caching with BTreeMap storage for performance
**Target Use Cases:**
- Bitcoin blockchain analyzers requiring accurate historical pricing
- Applications needing resilient price data with multiple fallbacks
- Tools processing large datasets requiring efficient price lookups
## Installation
```bash
cargo add brk_fetcher
```
## Quick Start
```rust
use brk_fetcher::Fetcher;
use brk_types::{Date, Height, Timestamp};

// Initialize fetcher with exchange APIs enabled
let mut fetcher = Fetcher::import(true, None)?;

// Fetch price by date
let date = Date::from_ymd(2023, 6, 15)?;
let daily_price = fetcher.get_date(date)?;

// Fetch price by blockchain height
let height = Height::new(800000);
let timestamp = Timestamp::from(1684771200u32);
let block_price = fetcher.get_height(height, timestamp, None)?;

println!("Daily OHLC: {:?}", daily_price);
println!("Block OHLC: {:?}", block_price);
```
## Core API
```rust
use brk_fetcher::Fetcher;

let mut fetcher = Fetcher::import(true, Some(&hars_path))?;

// Daily price
let ohlc = fetcher.get_date(Date::new(2024, 4, 20))?;

// Block-level price (uses 1mn data when available)
let ohlc = fetcher.get_height(height, block_timestamp, prev_timestamp)?;
```
## API Overview
## Sources
| Source | Resolution | Lookback | Notes |
|--------|------------|----------|-------|
| Binance | 1mn | ~16 hours | Best for recent blocks |
| Kraken | 1mn | ~10 hours | Fallback for recent |
| BRK API | Daily | Full history | Fallback for older data |
### Core Types
- **`Fetcher`**: Main aggregator managing multiple price data sources
- **`Binance`**: Binance exchange API client with HAR file support
- **`Kraken`**: Kraken exchange API client for OHLC data
- **`BRK`**: Custom API client for blockchain-indexed price data
## HAR Import
For historical 1-minute data beyond API limits, export network requests from Binance's web interface and place the HAR file in the imports directory.
### Key Methods
**`Fetcher::import(exchanges: bool, hars_path: Option<&Path>) -> Result<Self>`**
Creates a new fetcher instance with configurable data sources.
**`get_date(&mut self, date: Date) -> Result<OHLCCents>`**
Retrieves daily OHLC data for the specified date with automatic source fallback.
**`get_height(&mut self, height: Height, timestamp: Timestamp, previous_timestamp: Option<Timestamp>) -> Result<OHLCCents>`**
Fetches price data for a specific blockchain height using minute-level precision.
### Data Source Hierarchy
1. **Kraken API** - Primary source for both 1-minute and daily data
2. **Binance API** - Secondary source with extended HAR file support
3. **BRK API** - Fallback source using blockchain-indexed pricing data
### Error Handling
The fetcher implements aggressive retry logic, attempting each source for up to 12 hours (720 retries at 60-second intervals) before failing. Failed requests trigger cache clearing and source rotation.
## Examples
### Basic Price Fetching
```rust
use brk_fetcher::Fetcher;
use brk_types::Date;
let mut fetcher = Fetcher::import(true, None)?;
// Fetch Bitcoin price for a specific date
let date = Date::from_ymd(2021, 1, 1)?;
match fetcher.get_date(date) {
Ok(ohlc) => println!("BTC price on {}: ${}", date, ohlc.close.to_dollars()),
Err(e) => eprintln!("Failed to fetch price: {}", e),
}
```
### Historical Data with HAR Files
```rust
use brk_fetcher::Fetcher;
use std::path::Path;
// Initialize with HAR file path for extended historical coverage
let har_path = Path::new("./import_data");
let mut fetcher = Fetcher::import(true, Some(har_path))?;
// Fetch minute-level data using HAR file fallback
let height = Height::new(650000);
let timestamp = Timestamp::from(1598918400u32); // August 2020
let price_data = fetcher.get_height(height, timestamp, None)?;
```
### Batch Processing with Caching
```rust
use brk_fetcher::Fetcher;
let mut fetcher = Fetcher::import(true, None)?;
// Process multiple heights - caching improves performance
for height in 800000..800100 {
let timestamp = Timestamp::from(1684771200u32 + (height - 800000) * 600);
match fetcher.get_height(Height::new(height), timestamp, None) {
Ok(ohlc) => println!("Height {}: ${:.2}", height, ohlc.close.to_dollars()),
Err(e) => eprintln!("Error at height {}: {}", height, e),
}
}
// Clear caches when done
fetcher.clear();
```
## Architecture
### Retry Mechanism
The crate implements a sophisticated retry system with:
- **Default retry count**: 6 attempts with 5-second delays
- **Extended retry**: Up to 720 attempts (12 hours) for critical operations
- **Cache invalidation**: Automatic cache clearing between retry attempts
- **Extended delays**: fixed 60-second intervals for long-running retries
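The source-rotation logic above can be sketched as a loop over sources with a bounded retry count. This is a simplified sketch: the real fetcher also clears caches, logs, and sleeps between attempts, and its sources are the Binance/Kraken/BRK clients rather than closures:

```rust
// Try each price source in order, retrying the whole rotation up to `max_rounds`.
// Sources are closures here for the sketch; errors are plain strings.
fn fetch_with_fallback<T>(
    sources: &[&dyn Fn() -> Result<T, String>],
    max_rounds: usize,
) -> Result<T, String> {
    let mut last_err = String::from("no sources");
    for _round in 0..max_rounds {
        for source in sources {
            match source() {
                Ok(v) => return Ok(v),
                Err(e) => last_err = e, // real code logs and sleeps here
            }
        }
    }
    Err(last_err)
}

fn main() {
    let failing = || -> Result<u64, String> { Err("binance down".to_string()) };
    let working = || -> Result<u64, String> { Ok(42_000) };
    let price = fetch_with_fallback(&[&failing, &working], 3).unwrap();
    assert_eq!(price, 42_000);
}
```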
### Data Aggregation
Price data is aggregated using OHLC (Open, High, Low, Close) calculations spanning timestamp ranges. The `find_height_ohlc` function computes accurate OHLC values by scanning time series data between block timestamps.
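Aggregating a timestamp range into one candle, as described for `find_height_ohlc`, reduces to first/max/min/last over the in-range points. A sketch with plain tuples standing in for the crate's timestamp and price types:

```rust
// Build one OHLC candle from (timestamp, price) points within [start, end).
// Simplified from the per-block aggregation described above.
fn ohlc_in_range(points: &[(u32, f64)], start: u32, end: u32) -> Option<(f64, f64, f64, f64)> {
    let mut in_range = points.iter().filter(|(t, _)| *t >= start && *t < end);
    let &(_, open) = in_range.next()?; // first in-range price is the open
    let (mut high, mut low, mut close) = (open, open, open);
    for &(_, p) in in_range {
        if p > high { high = p; }
        if p < low { low = p; }
        close = p; // last in-range price is the close
    }
    Some((open, high, low, close))
}

fn main() {
    let series = [(10, 100.0), (20, 110.0), (30, 90.0), (40, 105.0), (50, 200.0)];
    // Candle covering timestamps 10..50 (excludes the 200.0 print at t=50).
    assert_eq!(ohlc_in_range(&series, 10, 50), Some((100.0, 110.0, 90.0, 105.0)));
}
```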
### HAR File Processing
Binance integration supports HTTP Archive (HAR) files for extended historical data coverage, parsing browser network captures to extract additional pricing data beyond API limitations.
## Code Analysis Summary
**Main Types**: `Fetcher` aggregator with `Binance`, `Kraken`, and `BRK` source implementations \
**Caching**: BTreeMap-based caching for both timestamp and date-indexed price data \
**Network Layer**: Built on `minreq` HTTP client with automatic JSON parsing \
**Error Handling**: Comprehensive retry logic with source rotation and cache management \
**Dependencies**: Integrates `brk_types` for type definitions and `brk_error` for unified error handling \
**Architecture**: Multi-source aggregation pattern with hierarchical fallback and intelligent caching
## Built On
- `brk_error` for error handling
- `brk_logger` for retry logging
- `brk_types` for `Date`, `Height`, `Timestamp`, `OHLCCents`


@@ -1 +1,49 @@
# brk_grouper
UTXO and address cohort filtering for on-chain analytics.
## What It Enables
Slice the UTXO set and address population by age, amount, output type, halving epoch, or holder classification (STH/LTH). Build complex cohorts by combining filters for metrics like "realized cap of 1+ BTC UTXOs older than 155 days."
## Key Features
- **Age-based**: `TimeFilter::GreaterOrEqual(155)`, `TimeFilter::Range(30..90)`, `TimeFilter::LowerThan(7)`
- **Amount-based**: `AmountFilter::GreaterOrEqual(Sats::_1BTC)`, `AmountFilter::Range(Sats::_100K..Sats::_1M)`
- **Term classification**: `Term::Sth` (short-term holders, <155 days), `Term::Lth` (long-term holders)
- **Epoch filters**: Group by halving epoch
- **Type filters**: Segment by output type (P2PKH, P2TR, etc.)
- **Context-aware naming**: Automatic prefix generation (`utxos_`, `addrs_`) based on cohort context
- **Inclusion logic**: Filter hierarchy for aggregation (`Filter::includes`)
## Filter Types
```rust
pub enum Filter {
All,
Term(Term), // STH/LTH
Time(TimeFilter), // Age-based
Amount(AmountFilter), // Value-based
Epoch(HalvingEpoch), // Halving epoch
Type(OutputType), // P2PKH, P2TR, etc.
}
```
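A minimal working version of the membership checks behind the enum above, covering only the `Time` and `Amount` arms, with plain integers standing in for the crate's `Sats` type:

```rust
use std::ops::Range;

// Simplified versions of the age and amount filters shown above.
enum TimeFilter {
    GreaterOrEqual(u32),
    LowerThan(u32),
    Range(Range<u32>),
}

enum AmountFilter {
    GreaterOrEqual(u64),
    Range(Range<u64>),
}

impl TimeFilter {
    fn contains(&self, age_days: u32) -> bool {
        match self {
            TimeFilter::GreaterOrEqual(min) => age_days >= *min,
            TimeFilter::LowerThan(max) => age_days < *max,
            TimeFilter::Range(r) => r.contains(&age_days),
        }
    }
}

impl AmountFilter {
    fn contains(&self, sats: u64) -> bool {
        match self {
            AmountFilter::GreaterOrEqual(min) => sats >= *min,
            AmountFilter::Range(r) => r.contains(&sats),
        }
    }
}

fn main() {
    assert!(TimeFilter::GreaterOrEqual(155).contains(200));
    assert!(!TimeFilter::GreaterOrEqual(155).contains(100));
    assert!(TimeFilter::Range(30..90).contains(45));
    assert!(AmountFilter::GreaterOrEqual(100_000_000).contains(250_000_000)); // 1+ BTC
}
```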
## Core API
```rust
let filter = Filter::Time(TimeFilter::GreaterOrEqual(155));
// Check membership
filter.contains_time(200); // true
filter.contains_amount(sats);
// Generate metric names
filter.to_full_name(CohortContext::Utxo); // "utxos_min_age_155d"
```
## Built On
- `brk_error` for error handling
- `brk_types` for `Sats`, `HalvingEpoch`, `OutputType`
- `brk_traversable` for data structure traversal


@@ -1,277 +1,59 @@
# brk_indexer
High-performance Bitcoin blockchain indexer with parallel processing and dual storage architecture.
Full Bitcoin blockchain indexer for fast analytics queries.
[![Crates.io](https://img.shields.io/crates/v/brk_indexer.svg)](https://crates.io/crates/brk_indexer)
[![Documentation](https://docs.rs/brk_indexer/badge.svg)](https://docs.rs/brk_indexer)
## What It Enables
## Overview
Transform raw Bitcoin blockchain data into indexed vectors and key-value stores optimized for analytics. Query any block, transaction, address, or UTXO without scanning the chain.
This crate provides a comprehensive Bitcoin blockchain indexer built on top of `brk_reader`. It processes raw Bitcoin blocks in parallel, extracting and indexing transactions, addresses, inputs, outputs, and metadata into optimized storage structures. The indexer maintains two complementary storage systems: columnar vectors for analytics and key-value stores for fast lookups.
## Key Features
**Key Features:**
- **Multi-phase block processing**: Parallel TXID computation, input/output processing, sequential finalization
- **Address indexing**: Maps addresses to their transaction history and UTXOs per address type
- **UTXO tracking**: Live outpoint→value lookups, address→unspent outputs
- **Reorg handling**: Automatic rollback to valid chain state on reorganization
- **Collision detection**: Validates rapidhash-based prefix lookups against known duplicate TXIDs
- **Incremental snapshots**: Periodic checkpoints for crash recovery
- Parallel block processing with multi-threaded transaction analysis
- Dual storage architecture: columnar vectors + key-value stores
- Address type classification and indexing for all Bitcoin script types
- Collision detection and validation for address hashes and transaction IDs
- Incremental processing with automatic rollback and recovery
- Height-based synchronization with Bitcoin Core RPC validation
- Optimized batch operations with configurable snapshot intervals
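The prefix-based TXID lookup with collision detection mentioned in the features can be sketched as a map keyed on the first 8 bytes of the txid, rejecting inserts whose prefix is already taken by a different txid. This is a sketch only: the real indexer derives keys via rapidhash and whitelists the chain's known duplicate TXIDs:

```rust
use std::collections::HashMap;

// Map an 8-byte txid prefix to a transaction index, flagging prefix collisions.
struct TxidPrefixIndex {
    map: HashMap<[u8; 8], ([u8; 32], u32)>,
}

impl TxidPrefixIndex {
    fn new() -> Self {
        Self { map: HashMap::new() }
    }

    /// Returns Err on a prefix collision between two distinct txids.
    fn insert(&mut self, txid: [u8; 32], txindex: u32) -> Result<(), String> {
        let mut prefix = [0u8; 8];
        prefix.copy_from_slice(&txid[..8]);
        match self.map.get(&prefix) {
            Some((existing, _)) if *existing != txid => {
                Err(format!("prefix collision at txindex {txindex}"))
            }
            _ => {
                self.map.insert(prefix, (txid, txindex));
                Ok(())
            }
        }
    }

    fn get(&self, txid: &[u8; 32]) -> Option<u32> {
        let mut prefix = [0u8; 8];
        prefix.copy_from_slice(&txid[..8]);
        self.map.get(&prefix).map(|(_, idx)| *idx)
    }
}

fn main() {
    let mut index = TxidPrefixIndex::new();
    let a = [1u8; 32];
    let mut b = [1u8; 32];
    b[31] = 2; // same 8-byte prefix, different txid
    index.insert(a, 10).unwrap();
    assert_eq!(index.get(&a), Some(10));
    assert!(index.insert(b, 11).is_err()); // collision detected
}
```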
## Core API
```rust
let mut indexer = Indexer::forced_import(&outputs_dir)?;

// Index new blocks
let starting_indexes = indexer.index(&blocks, &client, &exit)?;

// Access indexed data
let txindex = indexer.stores.txidprefix_to_txindex.get(&txid_prefix)?;
let blockhash = indexer.vecs.block.height_to_blockhash.get(height)?;
```
**Target Use Cases:**
- Bitcoin blockchain analysis requiring full transaction history
- Address clustering and UTXO set analysis
- Blockchain explorers needing fast address/transaction lookups
- Research applications requiring structured access to blockchain data
## Installation
```bash
cargo add brk_indexer
```
## Quick Start
```rust,ignore
use brk_indexer::Indexer;
use brk_reader::Parser;
use bitcoincore_rpc::{Client, Auth};
use vecdb::Exit;
use std::path::Path;
// Initialize Bitcoin Core RPC client
let rpc = Client::new("http://localhost:8332", Auth::None)?;
let rpc = Box::leak(Box::new(rpc));
// Create parser for raw block data
let blocks_dir = Path::new("/path/to/bitcoin/blocks");
let parser = Parser::new(blocks_dir, None, rpc);
// Initialize indexer with output directory
let outputs_dir = Path::new("./indexed_data");
let mut indexer = Indexer::forced_import(outputs_dir)?;
// Index blockchain data
let exit = Exit::default();
let starting_indexes = indexer.index(&parser, rpc, &exit, true)?;
println!("Indexed up to height: {}", starting_indexes.height);
```
## API Overview
### Core Types
- **`Indexer`**: Main coordinator managing vectors and stores
- **`Vecs`**: Columnar storage for blockchain data analytics
- **`Stores`**: Key-value storage for fast hash-based lookups
- **`Indexes`**: Current indexing state tracking progress across data types
### Key Methods
**`Indexer::forced_import(outputs_dir: &Path) -> Result<Self>`**
Creates or opens indexer instance with automatic version management.
**`index(&mut self, parser: &Parser, rpc: &'static Client, exit: &Exit, check_collisions: bool) -> Result<Indexes>`**
Main indexing function processing blocks from parser with collision detection.
### Storage Architecture
**Columnar Vectors (Vecs):**
- `height_to_*`: Block-level data (hash, timestamp, difficulty, size, weight)
- `txindex_to_*`: Transaction data (ID, version, locktime, size, RBF flag)
- `txoutindex_to_*`: Output data (value, type, address mapping)
- `txinindex_to_txoutindex`: Input-to-output relationship mapping
**Key-Value Stores:**
- `addresshash_to_typeindex`: Address hash to internal index mapping
- `blockhashprefix_to_height`: Block hash prefix to height lookup
- `txidprefix_to_txindex`: Transaction ID prefix to internal index
- `addresstype_to_typeindex_with_txoutindex`: Address type to output mappings
### Address Type Support
Complete coverage of Bitcoin script types:
- **P2PK**: Pay-to-Public-Key (33-byte and 65-byte variants)
- **P2PKH**: Pay-to-Public-Key-Hash
- **P2SH**: Pay-to-Script-Hash
- **P2WPKH**: Pay-to-Witness-Public-Key-Hash
- **P2WSH**: Pay-to-Witness-Script-Hash
- **P2TR**: Pay-to-Taproot
- **P2MS**: Pay-to-Multisig
- **P2A**: Pay-to-Address (custom type)
- **OpReturn**: OP_RETURN data outputs
- **Empty/Unknown**: Non-standard script types
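The standard types above are recognizable from the raw scriptPubKey byte pattern. A hedged sketch covering a few common templates; the crate's real classifier also handles P2PK, P2MS, P2A, and nonstandard scripts, and checks full script structure rather than just prefixes:

```rust
// Classify a scriptPubKey by its leading/trailing byte pattern.
// Covers a few common templates only.
fn classify_script(script: &[u8]) -> &'static str {
    match script {
        // OP_DUP OP_HASH160 <20 bytes> OP_EQUALVERIFY OP_CHECKSIG
        [0x76, 0xa9, 0x14, .., 0x88, 0xac] if script.len() == 25 => "p2pkh",
        // OP_HASH160 <20 bytes> OP_EQUAL
        [0xa9, 0x14, .., 0x87] if script.len() == 23 => "p2sh",
        // OP_0 <20-byte witness program>
        [0x00, 0x14, ..] if script.len() == 22 => "p2wpkh",
        // OP_0 <32-byte witness program>
        [0x00, 0x20, ..] if script.len() == 34 => "p2wsh",
        // OP_1 <32-byte x-only pubkey>
        [0x51, 0x20, ..] if script.len() == 34 => "p2tr",
        [0x6a, ..] => "op_return",
        [] => "empty",
        _ => "unknown",
    }
}

fn main() {
    let mut p2wpkh = vec![0x00, 0x14];
    p2wpkh.extend([0u8; 20]);
    assert_eq!(classify_script(&p2wpkh), "p2wpkh");

    let mut p2tr = vec![0x51, 0x20];
    p2tr.extend([0u8; 32]);
    assert_eq!(classify_script(&p2tr), "p2tr");

    assert_eq!(classify_script(&[0x6a, 0x01, 0xff]), "op_return");
    assert_eq!(classify_script(&[]), "empty");
}
```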
## Examples
### Basic Indexing Operation
```rust,ignore
use brk_indexer::Indexer;
use brk_reader::Parser;
use std::path::Path;
// Initialize components
let outputs_dir = Path::new("./blockchain_index");
let mut indexer = Indexer::forced_import(outputs_dir)?;
let blocks_dir = Path::new("/Users/satoshi/.bitcoin/blocks");
let parser = Parser::new(blocks_dir, None, rpc);
// Index with collision checking enabled
let exit = vecdb::Exit::default();
let final_indexes = indexer.index(&parser, rpc, &exit, true)?;
println!("Final height: {}", final_indexes.height);
println!("Total transactions: {}", final_indexes.txindex);
println!("Total addresses: {}", final_indexes.total_address_count());
```
### Querying Indexed Data
```rust,ignore
use brk_indexer::Indexer;
use brk_types::{Height, TxidPrefix, AddressHash};
use std::path::Path;
use std::str::FromStr;

let indexer = Indexer::forced_import(Path::new("./blockchain_index"))?;
// Look up block hash by height
let height = Height::new(750000);
if let Some(block_hash) = indexer.vecs.block.height_to_blockhash.get(height)? {
println!("Block 750000 hash: {}", block_hash);
}
// Look up transaction by ID prefix
let txid_prefix = TxidPrefix::from_str("abcdef123456")?;
if let Some(tx_index) = indexer.stores.txidprefix_to_txindex.get(&txid_prefix)? {
println!("Transaction index: {}", tx_index);
}
// Query address information
let address_hash = AddressHash::from(/* address bytes */);
if let Some(type_index) = indexer.stores.addresshash_to_typeindex.get(&address_hash)? {
println!("Address type index: {}", type_index);
}
```
### Incremental Processing
```rust,ignore
use brk_indexer::Indexer;
use std::path::Path;

// Indexer automatically resumes from last processed height
let mut indexer = Indexer::forced_import(Path::new("./blockchain_index"))?;
let current_indexes = indexer.vecs.current_indexes(&indexer.stores, rpc)?;
println!("Resuming from height: {}", current_indexes.height);
// Process new blocks incrementally
let exit = vecdb::Exit::default();
let updated_indexes = indexer.index(&parser, rpc, &exit, true)?;
println!("Processed {} new blocks",
updated_indexes.height.as_u32() - current_indexes.height.as_u32());
```
### Address Type Analysis
```rust,ignore
use brk_indexer::Indexer;
use brk_types::OutputType;
use std::path::Path;

let indexer = Indexer::forced_import(Path::new("./blockchain_index"))?;
// Analyze address distribution by type
for output_type in OutputType::as_vec() {
let count = indexer.vecs.txout.txoutindex_to_outputtype
.iter()
.filter(|&ot| ot == output_type)
.count();
println!("{:?}: {} outputs", output_type, count);
}
// Query specific address type data
let p2pkh_store = &indexer.stores.addresstype_to_typeindex_with_txoutindex
.p2pkh;
println!("P2PKH addresses: {}", p2pkh_store.len());
```
## Architecture
### Parallel Processing
The indexer uses sophisticated parallel processing:
- **Block-Level Parallelism**: Concurrent processing of transactions within blocks
- **Transaction Analysis**: Parallel input/output processing with `rayon`
- **Address Resolution**: Multi-threaded address type classification and indexing
- **Collision Detection**: Parallel validation of hash collisions across address types
### Storage Optimization
**Columnar Storage (vecdb):**
- Compressed vectors for space-efficient analytics queries
- Raw vectors for frequently accessed data (heights, hashes)
- Page-aligned storage for memory mapping efficiency
**Key-Value Storage (Fjall):**
- LSM-tree architecture for write-heavy indexing workloads
- Bloom filters for fast negative lookups
- Transactional consistency with rollback support
### Memory Management
- **Batch Processing**: 1000-block snapshots to balance memory and I/O
- **Reader Management**: Static readers for consistent data access during processing
- **Collision Tracking**: BTreeMap-based collision detection with memory cleanup
- **Exit Handling**: Graceful shutdown with consistent state preservation
### Version Management
- **Schema Versioning**: Automatic migration on version changes (currently v21)
- **Rollback Support**: Automatic recovery from incomplete processing
- **State Tracking**: Height-based synchronization across all storage components
## Performance Characteristics
### Processing Speed
- **Parallel Transaction Processing**: Multi-core utilization for CPU-intensive operations
- **Optimized I/O**: Batch operations reduce disk overhead
- **Memory Efficiency**: Streaming processing without loading entire blockchain
### Storage Requirements
- **Columnar Compression**: Significant space savings for repetitive blockchain data
- **Index Optimization**: Bloom filters reduce lookup overhead
- **Incremental Growth**: Storage scales linearly with blockchain size
### Scalability
- **Height-Based Partitioning**: Enables distributed processing strategies
- **Modular Architecture**: Separate vector and store systems for flexible deployment
- **Resource Configuration**: Configurable batch sizes and memory limits
## Code Analysis Summary
**Main Structure**: `Indexer` coordinating `Vecs` (columnar analytics) and `Stores` (key-value lookups) \
**Processing Pipeline**: Multi-threaded block analysis with parallel transaction/address processing \
**Storage Architecture**: Dual system using vecdb for analytics and Fjall for lookups \
**Address Indexing**: Complete Bitcoin script type coverage with collision detection \
**Synchronization**: Height-based coordination with Bitcoin Core RPC validation \
**Parallel Processing**: rayon-based parallelism for transaction analysis and address resolution \
**Architecture**: High-performance blockchain indexer with ACID guarantees and incremental processing
---
_This README was generated by Claude Code_
## Data Structures
**Vecs** (append-only vectors):
- Block: `height_to_blockhash`, `height_to_timestamp`, `height_to_difficulty`
- Transaction: `txindex_to_txid`, `txindex_to_height`, `txindex_to_base_size`
- Input/Output: `txinindex_to_outpoint`, `txoutindex_to_value`, `txoutindex_to_outputtype`
- Address: Per-type `typeindex_to_addressbytes`
**Stores** (key-value lookups):
- `txidprefix_to_txindex` - TXID lookup via 10-byte prefix
- `blockhashprefix_to_height` - Block lookup via 4-byte prefix
- `addresshash_to_addressindex` - Address lookup per type
- `addressindex_to_unspent_outpoints` - Live UTXO set per address
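The prefix-keyed stores above trade a small collision risk for compact keys. A minimal std-only sketch of the idea (the real stores are Fjall-backed; `txid_prefix` and `lookup_txindex` are hypothetical helpers, not the crate's API):

```rust
use std::collections::HashMap;

// Keying by a fixed-size 10-byte txid prefix keeps the store small, at the
// cost of rare collisions that the indexer must detect and resolve.
fn txid_prefix(txid: &[u8; 32]) -> [u8; 10] {
    let mut prefix = [0u8; 10];
    prefix.copy_from_slice(&txid[..10]);
    prefix
}

fn lookup_txindex(store: &HashMap<[u8; 10], u32>, txid: &[u8; 32]) -> Option<u32> {
    store.get(&txid_prefix(txid)).copied()
}
```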
## Processing Pipeline
1. **Block metadata**: Store blockhash, difficulty, timestamp
2. **Compute TXIDs**: Parallel SHA256d across transactions
3. **Process inputs**: Lookup spent outpoints, resolve address info
4. **Process outputs**: Extract addresses, assign type indexes
5. **Finalize**: Sequential store updates, UTXO set mutations
6. **Commit**: Periodic flush to disk
## Built On
- `brk_iterator` for block iteration
- `brk_store` for key-value storage
- `brk_grouper` for address type handling
- `brk_types` for domain types

# brk_iterator
Unified block iteration with automatic source selection.
## What It Enables
Iterate over Bitcoin blocks with a simple API that automatically chooses between RPC (for small ranges) and direct blk file reading (for large scans). Handles reorgs gracefully.
## Key Features
- **Smart source selection**: RPC for ≤10 blocks, Reader for larger ranges
- **Flexible ranges**: By height span, from start, to end, last N blocks, or after hash
- **Reorg-safe**: Iteration may end early if chain reorganizes
- **Thread-safe**: Clone and share freely
## Core API
```rust
let blocks = Blocks::new(&rpc_client, &reader);
// Various range specifications
for block in blocks.range(Height::new(800_000), Height::new(800_100))? { ... }
for block in blocks.start(Height::new(840_000))? { ... }
for block in blocks.last(10)? { ... }
for block in blocks.after(Some(last_known_hash))? { ... }
```
## Source Modes
```rust
// Auto-select (default)
let blocks = Blocks::new(&client, &reader);
// Force RPC only
let blocks = Blocks::new_rpc(&client);
// Force Reader only
let blocks = Blocks::new_reader(&reader);
```
## Range Types
| Method | Description |
|--------|-------------|
| `range(start, end)` | Inclusive height range |
| `start(height)` | From height to chain tip |
| `end(height)` | From genesis to height |
| `last(n)` | Last n blocks from tip |
| `after(hash)` | All blocks after given hash |
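The auto-selection heuristic can be sketched as follows. This is assumed logic based on the ≤10-block threshold described above, not the exact brk_iterator implementation:

```rust
// Hypothetical sketch: small inclusive ranges favor RPC round-trips,
// large scans read blk files directly via the Reader.
#[derive(Debug, PartialEq)]
enum Source {
    Rpc,
    Reader,
}

fn pick_source(start: u32, end: u32) -> Source {
    // `start..=end` is an inclusive height range.
    if end.saturating_sub(start) + 1 <= 10 {
        Source::Rpc
    } else {
        Source::Reader
    }
}
```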
## Built On
- `brk_error` for error handling
- `brk_reader` for direct blk file access
- `brk_rpc` for RPC queries
- `brk_types` for `Height`, `BlockHash`

# brk_logger
Colorized, timestamped console logging with optional file output and log hooks for Bitcoin Research Kit applications.
[![Crates.io](https://img.shields.io/crates/v/brk_logger.svg)](https://crates.io/crates/brk_logger)
[![Documentation](https://docs.rs/brk_logger/badge.svg)](https://docs.rs/brk_logger)
## What It Enables
Drop-in logging initialization that silences noisy dependencies (bitcoin, fjall, rolldown, rmcp) while keeping your logs readable with color-coded levels and local timestamps. Under the hood it is a thin wrapper around `env_logger` with enhanced formatting and optional file logging.
## Key Features
- **Dual output**: colorized console logging plus optional plain-text file logging
- **Log hooks**: register callbacks to intercept log messages programmatically
- **Sensible defaults**: pre-configured filters silence verbose dependencies (bitcoin, fjall, tracing, etc.)
- **Level-based colors**: error=red, warn=yellow, info=green, and so on
- **Timestamps**: system timezone-aware formatting via `jiff`
- **Environment configuration**: respects `RUST_LOG`
**Target Use Cases:**
- Bitcoin blockchain analysis applications requiring structured logging
- Development and debugging of BRK components
- Production deployments with file-based log archival
- Applications needing both human-readable and machine-parseable logs
## Installation
```bash
cargo add brk_logger
```
## Quick Start
```rust
use brk_logger;
use std::path::Path;

// Initialize with console output only
brk_logger::init(None)?;

// Or with dual output: console (colored) + file (plain text)
brk_logger::init(Some(Path::new("app.log")))?;

// Use standard log macros
log::info!("Application started");
log::warn!("Configuration issue detected");
log::error!("Failed to process block");

// React to log messages programmatically
brk_logger::register_hook(|msg| {
    // inspect or forward `msg`
})?;
```
## API Overview
### Core Functions
**`init(path: Option<&Path>) -> io::Result<()>`**
Initializes the logging system with optional file output. Console logging is always enabled with color formatting.
**`register_hook(...)`**
Registers a callback invoked for each log message, enabling programmatic handling.
**Re-exported Types:**
- **`OwoColorize`**: Color formatting trait for custom colored output
### Log Formatting
**Console Output Format:**
```
2024-09-16 14:23:45 - INFO Application started successfully
2024-09-16 14:23:46 - WARN Bitcoin RPC connection slow
2024-09-16 14:23:47 - ERROR Failed to parse block data
```
**File Output Format (Plain Text):**
```
2024-09-16 14:23:45 - info Application started successfully
2024-09-16 14:23:46 - warn Bitcoin RPC connection slow
2024-09-16 14:23:47 - error Failed to parse block data
```
### Default Filtering
The logger automatically filters out verbose logs from common dependencies:
- `bitcoin=off` - Bitcoin library internals
- `bitcoincore-rpc=off` - RPC client details
- `fjall=off` - Database engine logs
- `lsm_tree=off` - LSM tree operations
- `tracing=off` - Tracing framework internals
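A default filter string like the one implied above can be assembled mechanically. A hedged sketch (`default_filter` is illustrative; the actual brk_logger default string may differ):

```rust
// Build an env_logger-style filter: base level first, then per-target
// overrides that silence noisy dependencies.
fn default_filter(noisy: &[&str]) -> String {
    let mut filter = String::from("info");
    for target in noisy {
        filter.push(',');
        filter.push_str(target);
        filter.push_str("=off");
    }
    filter
}
```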
## Examples
### Basic Logging Setup
```rust
use brk_logger;
use log::{info, warn, error};

fn main() -> std::io::Result<()> {
    // Initialize logging to console only
    brk_logger::init(None)?;

    info!("Starting Bitcoin analysis");
    warn!("Price feed temporarily unavailable");
    error!("Block parsing failed at height 750000");

    Ok(())
}
```
### File Logging with Custom Path
```rust
use brk_logger;
use std::path::Path;
fn main() -> std::io::Result<()> {
let log_file = Path::new("/var/log/brk/application.log");
// Initialize with dual output: console + file
brk_logger::init(Some(log_file))?;
log::info!("Application configured with file logging");
// Both console (colored) and file (plain) will receive this log
log::error!("Critical system error occurred");
Ok(())
}
```
### Environment Variable Configuration
```bash
# Set custom log level and targets
export RUST_LOG="debug,brk_indexer=trace,bitcoin=warn"
# Run application - logger respects RUST_LOG
./my_brk_app
```
```rust
use brk_logger;
fn main() -> std::io::Result<()> {
// Respects RUST_LOG environment variable
brk_logger::init(None)?;
log::debug!("Debug information (only shown if RUST_LOG=debug)");
log::trace!("Trace information (only for specific modules)");
Ok(())
}
```
### Colored Output Usage
```rust
use brk_logger::OwoColorize;
fn print_status() {
println!("Status: {}", "Connected".green());
println!("Height: {}", "800000".bright_blue());
println!("Error: {}", "Connection failed".red());
}
```
## Architecture
### Logging Pipeline
1. **Initialization**: `init()` configures `env_logger` with custom formatter
2. **Environment**: Reads `RUST_LOG` or uses default filter configuration
3. **Formatting**: Applies timestamp, level formatting, and color coding
4. **Dual Output**: Writes colored logs to console, plain logs to file (if configured)
### Color Scheme
- **ERROR**: Red text for critical failures
- **WARN**: Yellow text for warnings and issues
- **INFO**: Green text for normal operations
- **DEBUG**: Blue text for debugging information
- **TRACE**: Cyan text for detailed tracing
- **Timestamps**: Bright black (gray) for visual hierarchy
### File Handling
- **File Creation**: Creates log file if it doesn't exist
- **Append Mode**: Appends to existing files without truncation
- **Error Resilience**: Console logging continues even if file write fails
- **File Rotation**: Manual - application responsible for log rotation
### Filtering Strategy
Default filter prioritizes BRK application logs while suppressing:
- Verbose dependency logging that clutters output
- Internal library debug information not relevant to users
- Framework-level tracing that doesn't aid in debugging BRK logic
## Code Analysis Summary
**Main Function**: `init()` function that configures `env_logger` with custom formatting and dual output \
**Dependencies**: Built on `env_logger` for filtering, `jiff` for timestamps, `owo-colors` for terminal colors \
**Output Streams**: Simultaneous console (colored) and file (plain text) logging with different formatting \
**Timestamp Format**: System timezone-aware formatting using `%Y-%m-%d %H:%M:%S` pattern \
**Color Implementation**: Level-based color mapping with `owo-colors` for terminal output \
**Filter Configuration**: Predefined filter string that suppresses verbose dependencies while enabling BRK logs \
**Architecture**: Thin wrapper pattern that enhances `env_logger` with BRK-specific formatting and behavior

# brk_mcp
Model Context Protocol (MCP) server exposing Bitcoin Research Kit on-chain data to LLMs.
[![Crates.io](https://img.shields.io/crates/v/brk_mcp.svg)](https://crates.io/crates/brk_mcp)
[![Documentation](https://docs.rs/brk_mcp/badge.svg)](https://docs.rs/brk_mcp)
## What It Enables
Expose BRK's query capabilities to AI assistants via the MCP standard. LLMs can browse metrics, fetch datasets, and analyze on-chain data through structured tool calls with type-safe parameters and JSON responses. Typical uses include conversational blockchain exploration, AI-driven financial analysis, and automated research tools.
## Key Features
- **Tool-based API**: 8 tools for metric discovery and data retrieval
- **Pagination support**: browse large metric catalogs in chunks (up to 1,000 entries per page)
- **Self-documenting**: built-in instructions explain available capabilities
- **Async**: full tokio integration via `AsyncQuery`
- **Stateless operation**: scalable and load-balancer friendly
- **Type-safe parameters**: structured validation with consistent error handling
## Available Tools
| Tool | Description |
|------|-------------|
| `get_metric_count` | Count of unique metrics |
| `get_vec_count` | Total metric × index combinations |
| `get_indexes` | List all index types and variants |
| `get_vecids` | Paginated list of metric IDs |
| `get_index_to_vecids` | Metrics supporting a given index |
| `get_vecid_to_indexes` | Indexes supported by a metric |
| `get_vecs` | Fetch metric data with range selection |
| `get_version` | BRK version string |
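Listing tools such as `get_vecids` page their results, with up to 1,000 entries per page as noted above. A sketch of the assumed slicing behavior (`page_slice` and `PAGE_SIZE` are illustrative, not the crate's API):

```rust
// Pages are zero-indexed; out-of-range pages yield an empty slice.
const PAGE_SIZE: usize = 1_000;

fn page_slice<T>(items: &[T], page: usize) -> &[T] {
    let start = page.saturating_mul(PAGE_SIZE).min(items.len());
    let end = (start + PAGE_SIZE).min(items.len());
    &items[start..end]
}
```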
## Claude
### Quick Start
Add a `bitview` custom connector with the following URL:
`https://bitview.space/mcp`
## Rust
### Installation
```bash
cargo add brk_mcp
```
### Quick Start
```rust
use brk_mcp::MCP;

let mcp = MCP::new(&async_query);

// The MCP server implements ServerHandler for use with rmcp;
// tools are auto-registered via the #[tool_router] macro.
let server_info = mcp.get_info();
```
## Integration
The MCP server is integrated into `brk_server` and exposed at the `/mcp/sse` endpoint for SSE-based MCP transport.
## API Overview
### Core Structure
- **`MCP`**: Main server struct implementing ServerHandler trait
- **Tool Router**: Automatic tool discovery and routing with `#[tool_router]` macro
- **Parameters**: Type-safe tool parameters with validation
- **CallToolResult**: Structured tool responses with success/error handling
### Available Tools
**Discovery Tools:**
- `get_metric_count()` - Count of unique metrics
- `get_vec_count()` - Total metric × index combinations
- `get_indexes()` - List of all index types and variants
**Data Access Tools:**
- `get_vecids(pagination)` - Paginated list of metric IDs
- `get_index_to_vecids(index, pagination)` - Metric IDs supporting a specific index
- `get_vecid_to_indexes(id)` - Indexes supported by a metric ID
- `get_vecs(params)` - Main data query tool with flexible parameters
**System Tools:**
- `get_version()` - Bitcoin Research Kit version string
### Tool Parameters
**Core Query Parameters:**
- `index`: Time dimension (height, date, week, month, etc.)
- `ids`: Dataset identifiers (comma or space separated)
- `from`/`to`: Range filtering (supports negative indexing from end)
- `format`: Output format (json, csv)
**Pagination Parameters:**
- `page`: Page number for paginated results (up to 1,000 entries)
## Examples
### Basic Tool Usage
```rust
use brk_mcp::MCP;
use brk_query::{Params, PaginationParam};
let mcp = MCP::new(interface);
// Get system information
let version = mcp.get_version().await?;
println!("BRK version: {:?}", version);
// Discover available data
let indexes = mcp.get_indexes().await?;
let vec_count = mcp.get_vec_count().await?;
// Get paginated vector IDs
let pagination = PaginationParam { page: Some(0) };
let vecids = mcp.get_vecids(pagination).await?;
```
### Data Querying
```rust
use brk_mcp::MCP;
use brk_query::Params;
// Query latest Bitcoin price
let params = Params {
index: "date".to_string(),
ids: vec!["dateindex-to-price-close".to_string()].into(),
from: Some(-1),
..Default::default()
};
let result = mcp.get_vecs(params).await?;
match result {
CallToolResult::Success { content, .. } => {
println!("Latest price: {:?}", content);
},
_ => println!("Query failed"),
}
```
### Multi-Vector Analysis
```rust
use brk_query::Params;
// Get OHLC data for last 30 days
let params = Params {
index: "date".to_string(),
ids: vec![
"dateindex-to-price-open".to_string(),
"dateindex-to-price-high".to_string(),
"dateindex-to-price-low".to_string(),
"dateindex-to-price-close".to_string(),
].into(),
from: Some(-30),
format: Some("csv".to_string()),
..Default::default()
};
let ohlc_data = mcp.get_vecs(params).await?;
```
### Vector Discovery Workflow
```rust
use brk_query::{PaginatedIndexParam, IdParam};
// 1. Get available indexes
let indexes = mcp.get_indexes().await?;
// 2. Find vectors for specific index
let paginated_index = PaginatedIndexParam {
index: "height".to_string(),
pagination: PaginationParam { page: Some(0) },
};
let height_vectors = mcp.get_index_to_vecids(paginated_index).await?;
// 3. Check what indexes a vector supports
let id_param = IdParam { id: "height-to-blockhash".to_string() };
let supported_indexes = mcp.get_vecid_to_indexes(id_param).await?;
```
## Architecture
### Server Implementation
**MCP Protocol Compliance:**
- Implements `ServerHandler` trait for MCP compatibility
- Provides `ServerInfo` with tools capability enabled
- Uses latest protocol version with proper initialization
- Includes instructional context for LLM understanding
**Tool Registration:**
- `#[tool_router]` macro for automatic tool discovery
- `#[tool]` attribute for individual tool definitions
- Structured parameter types with automatic validation
- Consistent error handling with `McpError` responses
### HTTP Integration
**Axum Router Extension:**
- `MCPRoutes` trait for router integration
- Conditional MCP endpoint mounting based on configuration
- Stateless HTTP service with `LocalSessionManager`
- Nested service mounting at `/mcp` path
**Transport Layer:**
- `StreamableHttpService` for MCP over HTTP
- Configurable server options with stateless mode
- Session management for concurrent connections
- Request context handling for proper MCP operation
### Data Integration
**Interface Layer Access:**
- Direct access to `brk_query::Interface` for data queries
- Static lifetime requirements for server operation
- Unified access to indexer and computer data sources
- Consistent parameter types across tool boundaries
**Response Formatting:**
- JSON content serialization for all tool responses
- Success/error response wrapping with proper MCP structure
- Logging integration for request tracking and debugging
- Content type handling for different response formats
## Configuration
### Server Information
The MCP server provides comprehensive information to LLMs:
```rust
ServerInfo {
protocol_version: ProtocolVersion::LATEST,
capabilities: ServerCapabilities::builder().enable_tools().build(),
server_info: Implementation::from_build_env(),
instructions: "Context about Bitcoin data access and terminology",
}
```
### LLM Instructions
Built-in instructions explain Bitcoin data concepts:
- Vectors/vecs/arrays/datasets are interchangeable terms
- Indexes represent timeframes for data organization
- VecIds are dataset names describing their content
- Tool-based exploration before web browsing is encouraged
## Code Analysis Summary
**Main Structure**: `MCP` struct implementing `ServerHandler` with embedded `ToolRouter` for automatic tool discovery \
**Tool Implementation**: 10 specialized tools using `#[tool]` attribute with structured parameters and JSON responses \
**HTTP Integration**: `MCPRoutes` trait extending Axum routers with conditional MCP endpoint mounting \
**Parameter Types**: Type-safe parameter structs from `brk_query` with automatic validation \
**Error Handling**: Consistent `McpError` responses with proper MCP protocol compliance \
**Transport Layer**: `StreamableHttpService` with stateless configuration for scalable deployment \
**Architecture**: Standards-compliant MCP bridge providing LLM access to comprehensive Bitcoin analytics
## Built On
- `brk_query` for data access
- `brk_rmcp` for MCP protocol implementation

# brk_mempool
Real-time Bitcoin mempool monitoring with fee estimation: a lightweight, thread-safe, in-memory snapshot of the Bitcoin mempool.
## What It Enables
Track mempool state, estimate transaction fees via projected block building, and query address mempool activity. Updates automatically with 1-second sync cycles.
## Key Features
- **Real-time synchronization**: Polls Bitcoin Core RPC every second to track mempool state
- **Thread-safe access**: Uses `RwLock` for concurrent reads with minimal contention
- **Efficient updates**: Only fetches new transactions, with configurable rate limiting (10,000 tx/cycle)
- **Zero-copy reads**: Exposes mempool via read guards for lock-free iteration
- **Optimized data structures**: Uses `FxHashMap` for fast lookups and minimal hashing overhead
- **Automatic cleanup**: Removes confirmed/dropped transactions on each update
- **Projected blocks**: Simulates Bitcoin Core's block template algorithm with CPFP awareness
- **Fee estimation**: Multi-tier fee recommendations (fastest, half-hour, hour, economy, minimum)
- **Address tracking**: Maps addresses to their pending transactions
- **Dependency handling**: Respects transaction ancestry for accurate fee calculations
- **Rate-limited rebuilds**: Throttles expensive projections to 1/second
## Design Principles
- **Minimal lock duration**: Lock held only during HashSet operations, never during I/O
- **Memory efficient**: Stores only missing txids during the fetch phase
- **Simple API**: Just `new()`, `start()`, and `get_txs()`
- **Production-ready**: Error handling with logging, graceful degradation
## Use Cases
- Fee estimation and mempool analysis
- Transaction monitoring and alerts
- Block template prediction
- Network research and statistics
## Core API
```rust
let mempool = Mempool::new(&rpc_client);

// Start background sync loop
std::thread::spawn(move || mempool.start());

// Query current state
let fees = mempool.get_fees();
let info = mempool.get_info();
let blocks = mempool.get_block_stats();
let snapshot = mempool.get_snapshot();

// Address lookups
let tracker = mempool.get_addresses();
```
A clean, performant way to keep Bitcoin's mempool state available in your Rust application without repeatedly querying RPC, ideal for applications that need frequent mempool access with low latency.
## Fee Estimation
Returns `RecommendedFees` with sat/vB rates for different confirmation targets:
- `fastest_fee` - Next block
- `half_hour_fee` - ~3 blocks
- `hour_fee` - ~6 blocks
- `economy_fee` - ~144 blocks
- `minimum_fee` - Relay minimum
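The tiers can be modeled as a simple struct. The field names follow the list above; the `for_target` mapping from confirmation depth to tier is an illustrative assumption, not the crate's API:

```rust
// All rates are sat/vB; values here are purely illustrative.
struct RecommendedFees {
    fastest_fee: u64,
    half_hour_fee: u64,
    hour_fee: u64,
    economy_fee: u64,
    minimum_fee: u64,
}

impl RecommendedFees {
    /// Pick a rate for a target confirmation depth in blocks.
    fn for_target(&self, blocks: u32) -> u64 {
        match blocks {
            0..=1 => self.fastest_fee,   // next block
            2..=3 => self.half_hour_fee, // ~30 minutes
            4..=6 => self.hour_fee,      // ~1 hour
            _ => self.economy_fee,       // anything slower
        }
    }
}
```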
## Block Projection
Builds projected blocks by:
1. Constructing transaction dependency graph
2. Calculating effective fee rates (including ancestors)
3. Selecting transactions greedily by ancestor-aware fee rate
4. Partitioning into ~4MB virtual blocks
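Steps 2-3 above can be sketched with precomputed ancestor totals. This is a simplification: the real projection updates packages dynamically (CPFP) and enforces the ~4MB partitioning, and the types here are hypothetical:

```rust
// Each entry carries its own fee/vsize plus precomputed ancestor totals.
#[derive(Clone)]
struct TxEntry {
    fee: u64,            // sats, this tx only
    vsize: u64,          // vbytes, this tx only
    ancestor_fee: u64,   // sats, including unconfirmed ancestors
    ancestor_vsize: u64, // vbytes, including unconfirmed ancestors
}

impl TxEntry {
    /// Effective fee rate in sat/vB, counting unconfirmed ancestors,
    /// so a high-fee child lifts its low-fee parent's priority.
    fn effective_rate(&self) -> f64 {
        self.ancestor_fee as f64 / self.ancestor_vsize as f64
    }
}

// Greedy ordering by ancestor-aware fee rate, highest first.
fn order_for_projection(mut txs: Vec<TxEntry>) -> Vec<TxEntry> {
    txs.sort_by(|a, b| {
        b.effective_rate()
            .partial_cmp(&a.effective_rate())
            .unwrap()
    });
    txs
}
```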
## Built On
- `brk_error` for error handling
- `brk_rpc` for mempool RPC calls
- `brk_types` for `MempoolInfo`, `MempoolEntryInfo`, `RecommendedFees`

# brk_query
Unified query interface for Bitcoin indexed and computed data, with intelligent search and multi-format output.
[![Crates.io](https://img.shields.io/crates/v/brk_query.svg)](https://crates.io/crates/brk_query)
[![Documentation](https://docs.rs/brk_query/badge.svg)](https://docs.rs/brk_query)
## What It Enables
Query blocks, transactions, addresses, and 1000+ on-chain metrics through a unified API over BRK's indexer and computer components. Supports fuzzy search with helpful error messages, pagination, range queries, and multi-format output (JSON, CSV).
## Key Features
- **Unified access**: Single entry point to indexer, computer, and mempool data
- **Metric discovery**: List metrics, filter by index type, fuzzy search with helpful error messages
- **Range queries**: By height, date, or relative offsets (`from=-100`)
- **Multi-metric bulk queries**: Fetch multiple metrics in one call
- **Async support**: Tokio-compatible with `AsyncQuery` wrapper
- **Format flexibility**: JSON, CSV, or raw values
- **Pagination**: Comprehensive support for large datasets
- **Schema validation**: JSON Schema generation for API documentation
**Target Use Cases:**
- REST API backends requiring flexible data queries
- Data export tools supporting multiple output formats
- Interactive applications with user-friendly error messaging
- Research platforms requiring structured data access
## Installation
```bash
cargo add brk_query
```
## Core API
```rust
let query = Query::build(&reader, &indexer, &computer, Some(mempool));

// Current height
let height = query.height();

// Metric queries
let data = query.search_and_format(MetricSelection {
    vecids: vec!["supply".into()],
    index: Index::Height,
    range: DataRange::Last(100),
    ..Default::default()
})?;

// Block queries
let info = query.block_info(Height::new(840_000))?;

// Transaction queries
let tx = query.transaction(&txid)?;

// Address queries
let stats = query.address_stats(&address)?;
```
## Query Types
| Domain | Methods |
|--------|---------|
| Metrics | `metrics`, `search_and_format`, `metric_to_indexes` |
| Blocks | `block_info`, `block_txs`, `block_status`, `block_timestamp` |
| Transactions | `transaction`, `tx_status`, `tx_merkle_proof` |
| Addresses | `address_stats`, `address_txs`, `address_utxos` |
| Mining | `difficulty_adjustments`, `hashrate`, `pools`, `epochs` |
| Mempool | `mempool_info`, `mempool_fees`, `mempool_projected_blocks` |
## API Overview
### Core Types
- **`Interface<'a>`**: Main query interface coordinating indexer and computer access
- **`Params`**: Query parameters including index, IDs, range, and formatting options
- **`Index`**: Enumeration of available data indexes (Height, Date, Address, etc.)
- **`Format`**: Output format specification (JSON, CSV)
- **`Output`**: Formatted query results with multiple value types
### Key Methods
**`Interface::build(indexer: &Indexer, computer: &Computer) -> Self`**
Creates interface instance with references to data sources.
**`search(&self, params: &Params) -> Result<Vec<(String, &&dyn AnyCollectableVec)>>`**
Searches for vectors matching the query parameters with intelligent error handling.
**`format(&self, vecs: Vec<...>, params: &ParamsOpt) -> Result<Output>`**
Formats search results according to specified output format and range parameters.
**`search_and_format(&self, params: Params) -> Result<Output>`**
Combined search and formatting operation for single-call data retrieval.
### Query Parameters
**Core Parameters:**
- `index`: Data index to query (height, date, address, etc.)
- `ids`: Vector IDs to retrieve from the specified index
- `from`/`to`: Optional range filtering (inclusive start, exclusive end)
- `format`: Output format (defaults to JSON)
**Pagination Parameters:**
- `offset`: Number of entries to skip
- `limit`: Maximum entries to return
## Examples
### Basic Data Query
## Async Usage
```rust
use brk_query::AsyncQuery;

let async_query = AsyncQuery::build(&reader, &indexer, &computer, mempool);

// Run blocking queries in a thread pool
let result = async_query.run(|q| q.block_info(height)).await;

// Access the inner Query
let height = async_query.inner().height();
```
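`AsyncQuery::run` exists because the underlying reads are blocking; the pattern is to hand the closure to a worker thread and collect the result without stalling the caller. A std-only sketch of that offloading pattern (illustrative names, not the crate's implementation):

```rust
use std::sync::mpsc;
use std::thread;

// Run a blocking closure on a worker thread and wait for its result,
// keeping the calling thread free of the blocking work itself.
fn run_blocking<T, F>(job: F) -> T
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Send the result back; ignore the error if the receiver is gone.
        let _ = tx.send(job());
    });
    rx.recv().expect("worker thread panicked")
}

// e.g. run_blocking(|| expensive_query()) instead of blocking the executor
```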
## Built On
- `brk_indexer` for raw indexed data
- `brk_computer` for derived metrics
- `brk_mempool` for mempool queries
- `brk_reader` for raw block access

# brk_reader
High-performance Bitcoin block reader from raw blk files.
[![Crates.io](https://img.shields.io/crates/v/brk_reader.svg)](https://crates.io/crates/brk_reader)
[![Documentation](https://docs.rs/brk_reader/badge.svg)](https://docs.rs/brk_reader)
## What It Enables
Stream blocks directly from Bitcoin Core's `blk*.dat` files with parallel parsing, automatic XOR decoding, and chain-order delivery. Much faster than RPC for full-chain scans.
## Key Features
- **Direct blk file access**: Bypasses RPC overhead entirely
- **XOR decoding**: Handles Bitcoin Core's obfuscated block storage
- **Parallel parsing**: Multi-threaded block deserialization
- **Chain ordering**: Reorders out-of-sequence blocks before delivery
- **Smart start finding**: Binary search to locate starting height across blk files
- **Reorg detection**: Stops iteration on chain discontinuity
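The XOR decoding bullet refers to Bitcoin Core's optional obfuscation of blk files with an 8-byte key stored in `xor.dat`; since XOR is its own inverse, applying the cycling key again recovers the plaintext. A minimal sketch of that scheme (not the crate's actual code):

```rust
// Apply an 8-byte XOR key cyclically, starting at `offset` within the key
// (a block's bytes may begin mid-cycle inside a blk file).
fn xor_cycle(data: &mut [u8], key: &[u8; 8], offset: usize) {
    for (i, byte) in data.iter_mut().enumerate() {
        *byte ^= key[(offset + i) % key.len()];
    }
}

// Encoding and decoding are the same operation: xor_cycle twice is a no-op.
```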
## Core API
```rust
use brk_reader::Reader;
use brk_types::Height;

let reader = Reader::new(blocks_dir, &rpc_client);

// Stream blocks from height 800,000 to 850,000
let receiver = reader.read(Some(Height::new(800_000)), Some(Height::new(850_000)));

// Process blocks as they arrive, in chain order
for (height, block, block_hash) in receiver.iter() {
    println!("Block {} ({}): {} transactions", height, block_hash, block.txdata.len());
}
```
## Architecture
1. **File scanner**: Maps `blk*.dat` files to indices
2. **Byte reader**: Streams raw bytes, finds magic bytes, segments blocks
3. **Parser pool**: Parallel deserialization with rayon
4. **Orderer**: Buffers and emits blocks in height order
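The orderer stage can be pictured as a buffer keyed by height: out-of-order arrivals wait until the next expected block shows up, then the whole contiguous run is released. An illustrative sketch (the names are not the crate's):

```rust
use std::collections::BTreeMap;

// Emit items in height order even when they arrive out of sequence:
// buffer future heights until the next expected one is available.
struct Orderer<T> {
    next: u32,
    buffer: BTreeMap<u32, T>,
}

impl<T> Orderer<T> {
    fn new(start: u32) -> Self {
        Self { next: start, buffer: BTreeMap::new() }
    }

    // Push one (height, block) pair; return every block that is now ready.
    fn push(&mut self, height: u32, block: T) -> Vec<(u32, T)> {
        self.buffer.insert(height, block);
        let mut ready = Vec::new();
        while let Some(block) = self.buffer.remove(&self.next) {
            ready.push((self.next, block));
            self.next += 1;
        }
        ready
    }
}

// Pushing heights 102, 101, 100 yields nothing until 100 arrives,
// at which point the full contiguous run is released in order.
```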
## Performance
The parallel pipeline can saturate disk I/O while parsing on multiple cores. For recent blocks, it falls back to RPC for lower latency.
## Built On
- `brk_error` for error handling
- `brk_rpc` for RPC client (height lookups, recent blocks)
- `brk_types` for `Height`, `BlockHash`, `BlkPosition`, `BlkMetadata`

# brk_rpc
Thread-safe Bitcoin Core RPC client with automatic retries.
## What It Enables
Query a Bitcoin Core node for blocks, transactions, mempool data, and chain state. Handles connection failures gracefully with configurable retry logic.
## Key Features
- **Auto-retry**: Up to 1M retries with configurable delay on transient failures
- **Thread-safe**: Clone freely, share across threads
- **Full RPC coverage**: Blocks, headers, transactions, mempool, UTXO queries
- **Mempool transactions**: Resolves prevouts for mempool tx fee calculation
- **Reorg detection**: `get_closest_valid_height` finds main chain after reorg
- **Sync waiting**: `wait_for_synced_node` blocks until node catches up
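The auto-retry behavior amounts to a loop around each RPC call with a bounded attempt count and a fixed delay between attempts; a std-only sketch of that shape (parameter names are illustrative, not the crate's API):

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry a fallible call up to `max_retries` times with a fixed delay,
// returning the last error once the budget is exhausted.
fn with_retry<T, E>(
    max_retries: u32,
    delay: Duration,
    mut call: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match call() {
            Ok(v) => return Ok(v),
            Err(e) if attempt >= max_retries => return Err(e),
            Err(_) => {
                attempt += 1;
                sleep(delay);
            }
        }
    }
}

// e.g. with_retry(5, Duration::from_millis(100), || client_call())
```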
## Core API
```rust
let client = Client::new("http://localhost:8332", Auth::CookieFile(cookie_path))?;
let height = client.get_last_height()?;
let hash = client.get_block_hash(height)?;
let block = client.get_block(&hash)?;
// Mempool
let txids = client.get_raw_mempool()?;
let entries = client.get_raw_mempool_verbose()?;
```
## Key Methods
- `get_block`, `get_block_hash`, `get_block_header_info`
- `get_transaction`, `get_mempool_transaction`, `get_tx_out`
- `get_raw_mempool`, `get_raw_mempool_verbose`
- `get_blockchain_info`, `get_last_height`
- `is_in_main_chain`, `get_closest_valid_height`
## Built On
- `brk_error` for error handling
- `brk_logger` for debug logging
- `brk_types` for `Height`, `BlockHash`, `Txid`, `MempoolEntryInfo`

# brk_server
HTTP API server for Bitcoin on-chain analytics.
[![Crates.io](https://img.shields.io/crates/v/brk_server.svg)](https://crates.io/crates/brk_server)
[![Documentation](https://docs.rs/brk_server/badge.svg)](https://docs.rs/brk_server)
## What It Enables
Serve BRK data via REST API with OpenAPI documentation, response caching, MCP endpoint, and optional static file hosting for web interfaces.
## Key Features
Built on `axum` with automatic OpenAPI documentation via Scalar.
- **OpenAPI spec**: Auto-generated documentation at `/api/openapi.json`
- **Response caching**: LRU cache with 5000 entries for repeated queries
- **Compression**: Brotli, gzip, deflate, zstd support
- **MCP endpoint**: Server-Sent Events transport at `/mcp/sse`
- **Static files**: Optional web interface hosting
- **Request logging**: Colorized status/latency logging
## Core API
```rust
use brk_query::AsyncQuery;
use brk_server::Server;

let async_query = AsyncQuery::build(&reader, &indexer, &computer, Some(mempool));

let server = Server::new(&async_query, Some(files_path));
// Starts on port 3110 (or the next available port)
server.serve(true).await?; // true enables the MCP endpoint
```
## API Endpoints
| Path | Description |
|------|-------------|
| `/api/blocks/{height}` | Block info, transactions, status |
| `/api/txs/{txid}` | Transaction details, status, merkle proof |
| `/api/addresses/{addr}` | Address stats, transactions, UTXOs |
| `/api/metrics` | Metric catalog and data queries |
| `/api/mining/*` | Hashrate, difficulty, pools, epochs |
| `/api/mempool/*` | Fee estimates, projected blocks |
| `/mcp/sse` | MCP Server-Sent Events endpoint |
## Caching
Uses ETag-based caching with stale-while-revalidate semantics:
- Height-indexed data: Cache until height changes
- Date-indexed data: Cache with longer TTL
- Mempool data: Short TTL, frequent updates
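The conditional-request flow these bullets rely on reduces to one comparison: if the client's `If-None-Match` value equals the current ETag, the server answers `304 Not Modified` with no body. A minimal sketch of that decision (hypothetical helper, not the server's code):

```rust
// Return (status, body): 304 with no body when the client's cached copy
// is still fresh, otherwise 200 with a freshly rendered body.
fn respond(current_etag: &str, if_none_match: Option<&str>) -> (u16, Option<String>) {
    match if_none_match {
        Some(tag) if tag == current_etag => (304, None), // client copy still fresh
        _ => (200, Some(format!("body@{current_etag}"))),
    }
}

// An ETag derived from chain height changes whenever a new block arrives,
// which is why height-indexed responses can be cached until then.
```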
## Configuration
Server binds to port 3110 by default, auto-incrementing if busy (up to 3210).
## Built On
- `brk_query` for data access
- `brk_mcp` for MCP protocol
- `aide` + `axum` for HTTP routing and OpenAPI
- `tower-http` for compression and tracing

# brk_store
Key-value storage layer built on fjall for Bitcoin indexing.
[![Crates.io](https://img.shields.io/crates/v/brk_store.svg)](https://crates.io/crates/brk_store)
[![Documentation](https://docs.rs/brk_store/badge.svg)](https://docs.rs/brk_store)
## What It Enables
Persist and query Bitcoin index data (address→outputs, txid→height, etc.) with access patterns optimized for different workloads: random lookups, sequential scans, and recent-data queries.
## Key Features
- **Workload-optimized configs**: `Kind::Random` (bloom filters, pinned blocks), `Kind::Recent` (point-read optimized), `Kind::Sequential` (scan-friendly), `Kind::Vec` (append-heavy)
- **Write batching**: Accumulate puts/deletes in memory, commit atomically
- **Tiered caching**: In-memory LRU cache layers before hitting disk
- **Version management**: Automatic schema versioning with `StoreMeta`
- **Height-aware operations**: `insert_if_needed` / `remove_if_needed` skip work at heights already processed
## Core API
```rust
let mut store: Store<Txid, Height> = Store::import(
    &db, &path, "txid_to_height",
    Version::new(1), Mode::default(), Kind::Random
)?;

// Insert data (batched in memory), then commit atomically
store.insert(txid, height);
store.commit(height)?;

// Query data
let height = store.get(&txid)?;
```
## Access Patterns
| Kind | Use Case | Optimization |
|------|----------|--------------|
| `Random` | UTXO lookups, txid queries | Aggressive bloom filters |
| `Recent` | Mempool, recent blocks | Point-read hints |
| `Sequential` | Full chain scans | Minimal indexing |
| `Vec` | Append-only series | Large memtables, no filters |
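The height-aware `insert_if_needed` / `commit` cycle described in the features can be sketched with plain maps: writes for an already-committed height are skipped, and `commit` applies the pending batch and advances the stored height (an illustrative model, not the fjall-backed implementation):

```rust
use std::collections::BTreeMap;

// Toy height-aware store: remembers the last committed height and
// skips work for heights it has already processed.
struct HeightStore {
    committed: Option<u32>,
    pending: BTreeMap<String, u32>,
    data: BTreeMap<String, u32>,
}

impl HeightStore {
    fn new() -> Self {
        Self { committed: None, pending: BTreeMap::new(), data: BTreeMap::new() }
    }

    // A height needs processing only if it is beyond the committed one.
    fn needs(&self, height: u32) -> bool {
        self.committed.map_or(true, |h| height > h)
    }

    fn insert_if_needed(&mut self, key: &str, value: u32, height: u32) {
        if self.needs(height) {
            self.pending.insert(key.to_string(), value);
        }
    }

    // Apply the whole pending batch at once and record the new height.
    fn commit(&mut self, height: u32) {
        self.data.append(&mut self.pending);
        self.committed = Some(height);
    }
}
```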
## Built On
- `brk_error` for error handling
- `brk_types` for `Height`, `Version`

# brk_traversable
Trait for navigating and exporting hierarchical data structures.
## What It Enables
Traverse nested data collections (datasets, grouped metrics) as trees for inspection, and iterate all exportable vectors for bulk data export.
## Key Features
- **Tree navigation**: Convert nested structs into `TreeNode` hierarchies for exploration
- **Export iteration**: Walk all `AnyExportableVec` instances in a data structure
- **Derive macro**: `#[derive(Traversable)]` with `derive` feature
- **Compression backends**: Support for PCO, LZ4, ZeroCopy, Zstd via feature flags
- **Blanket implementations**: Works with `Box<T>`, `Option<T>`, `BTreeMap<K, V>`
## Core API
```rust
pub trait Traversable {
fn to_tree_node(&self) -> TreeNode;
fn iter_any_exportable(&self) -> impl Iterator<Item = &dyn AnyExportableVec>;
}
```
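To make the two methods concrete, here is a toy branch/leaf tree and the recursive walk that export iteration performs (the real `TreeNode` lives in `brk_types`; this shape is purely illustrative):

```rust
// Illustrative branch/leaf tree, standing in for brk_types::TreeNode.
enum TreeNode {
    Branch(Vec<(String, TreeNode)>),
    Leaf(String),
}

// Depth-first walk collecting every leaf name — the same traversal
// that iterating all exportable vectors amounts to.
fn leaves(node: &TreeNode, out: &mut Vec<String>) {
    match node {
        TreeNode::Leaf(name) => out.push(name.clone()),
        TreeNode::Branch(children) => {
            for (_, child) in children {
                leaves(child, out);
            }
        }
    }
}
```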
## Supported Vec Types
All vecdb vector types implement `Traversable`:
- `BytesVec`, `EagerVec`, `PcoVec` (with `pco` feature)
- `ZeroCopyVec` (with `zerocopy` feature)
- `LZ4Vec`, `ZstdVec` (with respective features)
- `LazyVecFrom1/2/3` for derived vectors
## Feature Flags
- `derive` - Enable `#[derive(Traversable)]` macro
- `pco` - PCO compression support
- `zerocopy` - Zero-copy vector support
- `lz4` - LZ4 compression support
- `zstd` - Zstd compression support
## Built On
- `brk_types` for `TreeNode` type
- `brk_traversable_derive` for the derive macro (optional)

# brk_traversable_derive
Proc-macro for deriving the `Traversable` trait on data structures.
## What It Enables
Automatically generate tree traversal and export iteration for structs, eliminating boilerplate when working with hierarchical data that needs serialization or inspection.
## Key Features
- **Automatic tree building**: Converts struct fields into `TreeNode::Branch` hierarchies
- **Export iteration**: Generates `iter_any_exportable()` to walk all exportable vectors
- **Field attributes**: `#[traversable(skip)]` to exclude fields, `#[traversable(flatten)]` to merge nested structures
- **Option support**: Gracefully handles `Option<T>` fields
- **Generic-aware**: Properly bounds generic parameters with `Traversable + Send + Sync`
## Core API
```rust
#[derive(Traversable)]
struct MyData {
pub metrics: MetricsCollection,
#[traversable(flatten)]
pub nested: NestedData,
#[traversable(skip)]
internal: Cache,
}
```
## Generated Methods
- `to_tree_node(&self) -> TreeNode` - Build navigable tree structure
- `iter_any_exportable(&self) -> impl Iterator<Item = &dyn AnyExportableVec>` - Iterate all exportable vectors

# brk_types
Domain types for Bitcoin data analysis with serialization and indexing support.
[![Crates.io](https://img.shields.io/crates/v/brk_types.svg)](https://crates.io/crates/brk_types)
[![Documentation](https://docs.rs/brk_types/badge.svg)](https://docs.rs/brk_types)
## What It Enables
Work with Bitcoin primitives (heights, satoshis, addresses, transactions) through purpose-built types that handle encoding, arithmetic, time conversions, and database storage automatically.
## Key Features
- **Bitcoin primitives**: `Height`, `Sats`, `Txid`, `BlockHash`, `Outpoint` with full arithmetic and conversion support
- **Address types**: All output types (P2PKH, P2SH, P2WPKH, P2WSH, P2TR, P2PK, P2A, OP_RETURN) with address index variants
- **Time indexes**: `DateIndex`, `WeekIndex`, `MonthIndex`, `QuarterIndex`, `YearIndex`, `DecadeIndex` with cross-index conversion
- **Protocol types**: `DifficultyEpoch`, `HalvingEpoch`, `TxVersion`, `RawLocktime`
- **Financial types**: `Dollars`, `Cents`, `OHLC` (Open/High/Low/Close)
- **Serialization**: Serde + JSON Schema generation via schemars
- **Compression**: PCO (Pco) derive for columnar compression in vecdb
## Type Categories

| Category | Examples |
|----------|----------|
| Block metadata | `Height`, `BlockHash`, `BlockTimestamp`, `BlkPosition` |
| Transaction | `Txid`, `TxIndex`, `TxIn`, `TxOut`, `Vsize`, `Weight` |
| Address | `P2PKHAddressIndex`, `P2TRBytes`, `AnyAddressIndex`, `AddressStats` |
| Value | `Sats`, `Dollars`, `Cents`, `Bitcoin` |
| Time | `Date`, `DateIndex`, `WeekIndex`, `MonthIndex`, ... |
| Metric | `Metric`, `MetricData`, `MetricSelection` |
| API | `Pagination`, `Health`, `RecommendedFees`, `MempoolInfo` |

These types target Bitcoin blockchain analysis and research tools, financial data processing and OHLC calculations, database-backed blockchain indexing systems, and memory-efficient UTXO set management.
## Installation
```bash
cargo add brk_types
```
## Quick Start
All types implement standard traits: `Debug`, `Clone`, `Serialize`, `Deserialize`, plus domain-specific operations like `CheckedSub`, `Formattable`, and `PrintableIndex`.
```rust
use brk_types::*;
// Bitcoin-specific types
let height = Height::new(800000);
let timestamp = Timestamp::from(1684771200u32);
let date = Date::from(timestamp);
let sats = Sats::new(100_000_000); // 1 BTC in satoshis
// Price data structures
let price_cents = Cents::from(Dollars::from(50000.0));
let ohlc = OHLCCents::from((
Open::new(price_cents),
High::new(price_cents * 1.02),
Low::new(price_cents * 0.98),
Close::new(price_cents * 1.01),
));
// Address classification
let output_type = OutputType::P2PKH;
println!("Is spendable: {}", output_type.is_spendable());
println!("Has address: {}", output_type.is_address());
// Time indexing
let date_index = DateIndex::try_from(date)?;
let week_index = WeekIndex::from(date);
```
## API Overview
### Core Bitcoin Types
- **`Height`**: Block height with overflow-safe arithmetic operations
- **`Timestamp`**: Unix timestamp with Bitcoin genesis epoch support
- **`Date`**: Calendar date with blockchain-specific formatting (YYYYMMDD)
- **`Sats`**: Satoshi amounts with comprehensive arithmetic operations
- **`Bitcoin`**: Floating-point BTC amounts with satoshi conversion
- **`BlockHash`** / **`Txid`**: Cryptographic hash identifiers
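The `Sats`/`Bitcoin` relationship and the halving schedule reduce to simple integer arithmetic. A std-only sketch of what these types encapsulate (the constant and helper names here are illustrative, not the crate's API):

```rust
/// 1 BTC expressed in satoshis.
const SATS_PER_BTC: u64 = 100_000_000;

/// Convert a satoshi amount to a floating-point BTC amount.
fn sats_to_btc(sats: u64) -> f64 {
    sats as f64 / SATS_PER_BTC as f64
}

/// Block subsidy in sats at a given height: halves every 210,000 blocks.
fn subsidy_at(height: u64) -> u64 {
    let halvings = height / 210_000;
    if halvings >= 64 { 0 } else { (50 * SATS_PER_BTC) >> halvings }
}

fn main() {
    assert_eq!(sats_to_btc(150_000_000), 1.5);
    // Height 840,000 is the fourth halving: 50 / 2^4 = 3.125 BTC.
    assert_eq!(subsidy_at(840_000), 312_500_000);
    println!("ok");
}
```

The crate's `Sats` type wraps this arithmetic with overflow-safe operators instead of raw `u64` math.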
### Address and Output Types
- **`OutputType`**: Comprehensive Bitcoin script type classification
- Standard types: P2PK (33/65-byte), P2PKH, P2SH, P2WPKH, P2WSH, P2TR
  - Special types: P2MS (multisig), OpReturn, P2A (pay-to-anchor), Empty, Unknown
- Type-checking methods: `is_spendable()`, `is_address()`, `is_unspendable()`
- **Address Index Types**: Specialized indexes for each address type
- `P2PKHAddressIndex`, `P2SHAddressIndex`, `P2WPKHAddressIndex`
- `P2WSHAddressIndex`, `P2TRAddressIndex`, etc.
### Financial Data Types
**Price Representations:**
- **`Cents`**: Integer cent representation for precise financial calculations
- **`Dollars`**: Floating-point dollar amounts with cent conversion
- **`OHLCCents`** / **`OHLCDollars`** / **`OHLCSats`**: OHLC data in different denominations
**OHLC Components:**
- **`Open<T>`**, **`High<T>`**, **`Low<T>`**, **`Close<T>`**: Generic price point wrappers
- Support for arithmetic operations and automatic conversions
### Time Indexing System
**Calendar Types:**
- **`DateIndex`**: Days since Bitcoin genesis (2009-01-03)
- **`WeekIndex`**: Week-based indexing for aggregation
- **`MonthIndex`**, **`QuarterIndex`**, **`SemesterIndex`**: Hierarchical time periods
- **`YearIndex`**, **`DecadeIndex`**: Long-term time categorization
**Epoch Types:**
- **`HalvingEpoch`**: Bitcoin halving period classification
- **`DifficultyEpoch`**: Difficulty adjustment epoch tracking
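The `DateIndex` idea — days elapsed since the genesis date 2009-01-03 — can be sketched without the crate using a standard civil-date day count (Howard Hinnant's algorithm). Function names here are illustrative, not the crate's API:

```rust
/// Days since the Unix epoch for a civil (proleptic Gregorian) date.
fn days_from_civil(y: i64, m: i64, d: i64) -> i64 {
    let y = if m <= 2 { y - 1 } else { y };
    let era = if y >= 0 { y } else { y - 399 } / 400;
    let yoe = y - era * 400;                                              // [0, 399]
    let doy = (153 * (if m > 2 { m - 3 } else { m + 9 }) + 2) / 5 + d - 1; // [0, 365]
    let doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;                      // [0, 146096]
    era * 146_097 + doe - 719_468
}

/// A DateIndex-style value: days since the Bitcoin genesis block date.
fn date_index(y: i64, m: i64, d: i64) -> i64 {
    days_from_civil(y, m, d) - days_from_civil(2009, 1, 3)
}

fn main() {
    assert_eq!(date_index(2009, 1, 3), 0);
    assert_eq!(date_index(2009, 1, 4), 1);
    assert_eq!(date_index(2010, 1, 3), 365); // 2009 is not a leap year
    println!("ok");
}
```

A small integer index like this is what makes range queries and cross-granularity conversion (week, month, quarter) cheap.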
### Storage Types
**Stored Primitives:** Memory-efficient wrappers for database storage
- **`StoredU8`**, **`StoredU16`**, **`StoredU32`**, **`StoredU64`**: Unsigned integers
- **`StoredI16`**: Signed integers
- **`StoredF32`**, **`StoredF64`**: Floating-point numbers
- **`StoredBool`**: Boolean values
- **`StoredString`**: String storage optimization
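The stored-primitive pattern is a newtype with a fixed, explicit byte layout, so values round-trip through storage without headers or padding. A minimal std-only sketch (the real crate derives this machinery via `zerocopy`; this type and its methods are illustrative):

```rust
/// Illustrative stored primitive: a u32 with an explicit little-endian layout.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct StoredU32(u32);

impl StoredU32 {
    /// Exactly 4 bytes on disk: no header, no padding.
    fn to_bytes(self) -> [u8; 4] { self.0.to_le_bytes() }
    fn from_bytes(b: [u8; 4]) -> Self { StoredU32(u32::from_le_bytes(b)) }
}

fn main() {
    let v = StoredU32(840_000);
    let bytes = v.to_bytes();
    assert_eq!(StoredU32::from_bytes(bytes), v); // lossless round-trip
    println!("ok");
}
```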
## Examples
### Block Height Operations
```rust
use brk_types::Height;
let current_height = Height::new(800000);
let next_height = current_height.incremented();
// Check halving schedule
let blocks_until_halving = current_height.left_before_next_halving();
println!("Blocks until next halving: {}", blocks_until_halving);
// Difficulty adjustment tracking
let blocks_until_adjustment = current_height.left_before_next_diff_adj();
```
### Price Data Processing
```rust
use brk_types::*;
// Create OHLC data from individual price points
let daily_ohlc = OHLCDollars::from((
Open::from(45000.0),
High::from(47500.0),
Low::from(44000.0),
Close::from(46800.0),
));
// Convert between price denominations
let ohlc_cents: OHLCCents = daily_ohlc.into();
let sats_per_dollar = Sats::_1BTC / daily_ohlc.close;
// Aggregate multiple OHLC periods
let weekly_close = vec![
    Close::from(46800.0),
    Close::from(48200.0),
    Close::from(47100.0),
].into_iter().sum::<Close<Dollars>>();
```
### Address Type Classification
```rust
use std::str::FromStr;

use brk_types::OutputType;
use bitcoin::Address;
let address_str = "1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2";
let address = Address::from_str(address_str)?.assume_checked();
let output_type = OutputType::from(&address);
match output_type {
OutputType::P2PKH => println!("Legacy address"),
OutputType::P2WPKH => println!("Native SegWit address"),
OutputType::P2SH => println!("Script hash address"),
OutputType::P2TR => println!("Taproot address"),
_ => println!("Other address type: {:?}", output_type),
}
// Check spend conditions
if output_type.is_spendable() && output_type.is_address() {
println!("Spendable address-based output");
}
```
### Time-Based Indexing
```rust
use brk_types::*;
let date = Date::new(2023, 6, 15);
let date_index = DateIndex::try_from(date)?;
// Convert to different time granularities
let week_index = WeekIndex::from(date);
let month_index = MonthIndex::from(date);
let quarter_index = QuarterIndex::from(date);
// Calculate completion percentage for current day
let completion_pct = date.completion();
println!("Day is {:.1}% complete", completion_pct * 100.0);
```
## Architecture
### Zero-Copy Design
All major types implement `zerocopy` traits (`FromBytes`, `IntoBytes`, `KnownLayout`) enabling:
- Direct memory mapping from serialized data
- Efficient network protocol handling
- High-performance database operations
- Memory layout guarantees for cross-platform compatibility
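What a guaranteed fixed layout buys can be shown with a std-only sketch: records in a packed buffer are just offsets, with no per-record framing or variable-length parsing. (`zerocopy` goes further and reinterprets the buffer in place; this version copies each word but demonstrates the layout guarantee.)

```rust
/// Decode a packed buffer of little-endian u32 block heights.
/// Each record is a fixed 4-byte slot at a computable offset.
fn decode_heights(buf: &[u8]) -> Vec<u32> {
    buf.chunks_exact(4)
        .map(|c| u32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    // Encode three consecutive heights back-to-back.
    let mut buf = Vec::new();
    for h in [800_000u32, 800_001, 800_002] {
        buf.extend_from_slice(&h.to_le_bytes());
    }
    assert_eq!(decode_heights(&buf), vec![800_000, 800_001, 800_002]);
    println!("ok");
}
```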
### Type Safety
The type system enforces Bitcoin domain constraints:
- `Height` prevents integer overflow in block calculations
- `Timestamp` handles Unix epoch edge cases
- `OutputType` enum covers all Bitcoin script patterns
- Address types ensure correct hash length validation
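The overflow-protection idea (the doc's `CheckedSub` trait) can be sketched with a newtype over `u32`; this type and method are illustrative, not the crate's API:

```rust
/// Illustrative overflow-safe height arithmetic.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Height(u32);

impl Height {
    /// Returns None instead of wrapping when the result would be negative.
    fn checked_sub(self, rhs: u32) -> Option<Height> {
        self.0.checked_sub(rhs).map(Height)
    }
}

fn main() {
    let h = Height(100);
    assert_eq!(h.checked_sub(40), Some(Height(60)));
    // Subtracting below genesis yields None rather than a huge wrapped value.
    assert_eq!(h.checked_sub(101), None);
    println!("ok");
}
```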
### Memory Efficiency
Storage types provide space optimization:
- Stored primitives reduce allocation overhead
- OHLC structures support both heap and stack allocation
- Index types enable efficient range queries
- Hash types use fixed-size arrays for predictable memory usage
## Code Analysis Summary
**Main Categories**: 70+ struct types across Bitcoin primitives, financial data, time indexing, and storage optimization \
**Zero-Copy Support**: Comprehensive `zerocopy` implementation for all major types \
**Type Safety**: Bitcoin domain-specific constraints with overflow protection and validation \
**Financial Types**: Multi-denomination OHLC support with automatic conversions \
**Address System**: Complete Bitcoin script type classification \
**Time Indexing**: Hierarchical calendar system from daily to decade-level granularity \
**Storage Integration**: `vecdb::Pco` traits for efficient database operations \
**Architecture**: Type-driven design prioritizing memory efficiency and domain correctness
## Built On

- `brk_error` for error handling

---

_This README was generated by Claude Code_