
🧭 Market Study Hierarchy

Understanding, Prediction, Action.

This document formalizes the three primary layers of empirical market study within QLIR-style research pipelines.
Each layer has distinct epistemic goals, data requirements, and output forms.
Together, they define a reproducible progression from exploration → inference → simulation.


Level 1 — Non-Path-Dependent Characterization

“What does the landscape look like?”

Purpose
Quantify unconditional or quasi-conditional statistical features of the data — no sequencing, no temporal dependency, just structural description.

Examples

  • Candle-size distributions by day-of-week or session.
  • Bollinger-band width percentiles.
  • Average wick-to-body ratio for volume-filtered bars.
  • Volatility vs. time-of-day curves anchored on session open.

Properties

  • Bar order is irrelevant (no temporal dependency).
  • Output is distributional: histograms, percentiles, heatmaps.
  • Used for calibration, feature scaling, or visualization (e.g., Tableau dashboards).

Typical Outputs

summary_table
distribution_plot
p(X | filter)
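
A minimal sketch of a Level 1 study, in the spirit of the first example above: it computes the candle-range distribution by day of week. It assumes a pandas DataFrame of OHLC bars with a DatetimeIndex; the name `bars` and its column names are illustrative, not a required schema.

```python
import pandas as pd

def candle_range_by_weekday(bars: pd.DataFrame) -> pd.DataFrame:
    """Level 1: unconditional distribution of candle size, grouped by day of week."""
    candle_range = bars["high"] - bars["low"]      # per-bar size; order-independent
    weekday = bars.index.dayofweek                 # 0 = Monday ... 6 = Sunday
    # Percentile summary per weekday: a distributional output, not a sequence.
    return candle_range.groupby(weekday).describe(percentiles=[0.25, 0.5, 0.75, 0.95])
```

Because nothing here depends on bar order, the result is unchanged on shuffled data, which is a quick sanity check that a study really belongs at Level 1.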

Level 2 — Path-Dependent Characterization

“Given that this sequence happened, what tends to happen next?”

Purpose
Describe conditional or sequential behavior — transition probabilities without trade logic.

Examples

  • “70% of the time when Bollinger width < threshold for ≥ 5 bars, expansion follows within 15 bars.”
  • “After three consecutive up-closes with declining volume, median next-bar return = -0.12%.”
  • “Volatility spikes decay to baseline within N bars 80% of the time.”

Properties

  • Preserves chronological order.
  • No exposure or capital logic; introduces temporal conditioning.
  • Produces persistence curves, hit-rate tables, or event-anchored averages.

Typical Outputs


event_summary
transition_matrix
p(next_state | pattern)
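
A sketch of one possible Level 2 study, echoing the second example above: anchor on an event (three consecutive up-closes) and summarize the next-bar return conditional on it. The event definition and the `bars` DataFrame with its column names are assumptions for illustration; note there is still no trade logic.

```python
import pandas as pd

def next_bar_return_after_up_streak(bars: pd.DataFrame, streak: int = 3) -> pd.Series:
    """Level 2: p(next-bar return | pattern), reported as descriptive statistics."""
    close = bars["close"]
    up = (close.diff() > 0).astype(int)
    # True where the last `streak` bars all closed up (chronological order now matters).
    event = up.rolling(streak).sum().eq(streak)
    next_ret = close.pct_change().shift(-1)    # return of the bar *after* the event bar
    return next_ret[event].describe(percentiles=[0.25, 0.5, 0.75])
```

Comparing this conditional summary with the unconditional Level 1 distribution of next-bar returns is what turns the pattern into a hypothesis worth simulating at Level 3.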

Role in Pipeline
This is the bridge layer that converts descriptive structure (Level 1) into parameterized hypotheses for backtesting.


Level 3 — Backtests (Decision Simulations)

“If I acted on those patterns with real rules and capital, what happens?”

Purpose
Evaluate full trading logic under execution and capital constraints.

Adds New Dimensions

  • Entry / exit definitions
  • Position sizing and leverage
  • Fees, slippage, funding
  • Portfolio aggregation and risk metrics

Typical Outputs

equity_curve
PnL_distribution
drawdown_stats
Sharpe/Sortino
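
A deliberately simplified Level 3 sketch: it goes long the next bar whenever a boolean signal is true, charges a flat fee per position change, and produces the outputs listed above. Single instrument, all-in/flat sizing, no slippage or funding model; the fee, the annualization factor, and the `bars`/`signal` inputs are assumptions, so treat it as a toy rather than a production backtester.

```python
import numpy as np
import pandas as pd

def toy_backtest(bars: pd.DataFrame, signal: pd.Series,
                 fee: float = 0.0005, bars_per_year: int = 252) -> dict:
    """Level 3: simulate acting on `signal` under cost and capital constraints.

    `bars_per_year` assumes daily bars; adjust it for intraday data.
    """
    ret = bars["close"].pct_change().fillna(0.0)
    position = signal.astype(float).shift(1).fillna(0.0)   # act on the bar after the signal
    trades = position.diff().abs().fillna(0.0)             # each position change pays a fee
    strat_ret = position * ret - trades * fee
    equity = (1.0 + strat_ret).cumprod()                   # equity_curve
    drawdown = equity / equity.cummax() - 1.0
    sharpe = np.sqrt(bars_per_year) * strat_ret.mean() / strat_ret.std(ddof=0)
    return {"equity_curve": equity, "max_drawdown": drawdown.min(), "sharpe": sharpe}
```

The `event` mask from the Level 2 sketch can be passed as `signal` here, which is exactly the L2-to-L3 hand-off described below.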

Conceptual Flow

raw_market_data
↓
Level 1: Descriptive statistics → what the terrain looks like
↓
Level 2: Conditional characterization → what tends to happen next
↓
Level 3: Simulation (backtest) → what happens if I act on it

Each layer feeds the next:

  • L1 → L2: supply priors and filters
  • L2 → L3: supply candidate conditions and parameters
  • L3 → feedback: validate or falsify intuition, feeding back into new L1/L2 refinements
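
Read as code, these hand-offs are just the earlier sketches chained over the same data. The snippet below is hypothetical glue: it assumes those functions and an OHLC `bars` DataFrame are already in scope, and every threshold is illustrative.

```python
# Hypothetical end-to-end pass; reuses the illustrative sketches from the sections above.
terrain = candle_range_by_weekday(bars)                           # L1: priors and filters
conditional = next_bar_return_after_up_streak(bars, streak=3)     # L2: candidate condition
signal = (bars["close"].diff() > 0).astype(int).rolling(3).sum().eq(3)  # L2 event as a rule
results = toy_backtest(bars, signal)                              # L3: economic utility
# If the edge does not survive fees in `results`, the feedback loop points back to L1/L2.
```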

Why Maintain This Separation

Dimension        Level 1                    Level 2                    Level 3
Goal             Understanding              Prediction                 Action
Data handling    Shufflable                 Ordered                    Causal simulation
Validation       Statistical significance   Conditional probability    Economic utility
Time budget      Interactive                Exploratory                Batch/offline
Output           Distributions              Conditional metrics        Performance curves

Key Takeaway

A characterization study (Levels 1 & 2) transforms intuition into reproducible statistics. A backtest (Level 3) transforms those statistics into actionable performance under constraints.

Maintaining these layers separately keeps your research stack modular, falsifiable, and conceptually clean.