RiskModels API Documentation

This repository is the authoritative reference for the ERM3 hierarchical equity risk model API—built for quant developers and AI agents who need residual risk, hedge ratios, and explained risk with production-grade fidelity.

Live API snapshots

MAG7 macro correlations (L3 residual) and a cross-sectional rank snapshot, produced by scripts/generate_readme_assets.py with your API key. These are the same assets shown in the GitHub README.

Macro sensitivity (VIX, Gold, BTC): MAG7 macro correlation matrix from the RiskModels API
Rank percentile (get_rankings): cross-sectional rank percentile chart from the RiskModels API
Combined ranks and macro: combined RiskModels macro and ranking visualization (README inspiration)

ERM3 factor logic: the strategic value

Actionable quant framing: what each construct buys you in research and execution, and which semantic field to read in the SDK or API.

| Concept | The quant edge | Key field (SDK) | SDK example |
|---|---|---|---|
| Residual Risk (RR) | Uncover alpha. RR is idiosyncratic variance after L3 hedges. High RR (e.g. above 0.50) flags names where SPY and sector ETFs explain less, leaving room for stock-specific edge. | `l3_residual_er` | `client.get_metrics("NVDA", as_dataframe=True)["l3_residual_er"].iloc[0]` |
| Hedge Ratio (HR) | Precision hedging. Not a single beta: ERM3 gives dollars of ETF per $1 of stock at each L3 layer (market, sector, subsector). | `l3_market_hr`, `l3_sector_hr`, `l3_subsector_hr` | `client.get_metrics("NVDA", as_dataframe=True)[["l3_market_hr","l3_sector_hr","l3_subsector_hr"]]` |
| Explained Risk (ER) | Risk attribution. Variance share from market, sector, and subsector factors (orthogonalized). At L3, `l3_market_er + l3_sector_er + l3_subsector_er + l3_residual_er ≈ 1.0` within tolerance. | `l3_market_er`, `l3_sector_er`, `l3_subsector_er`, `l3_residual_er` | `client.get_metrics("NVDA", as_dataframe=True)[["l3_market_er","l3_sector_er","l3_subsector_er","l3_residual_er"]]` |
| Macro factor correlation | Cross-asset context. Rolling correlation vs macro factors (VIX, Bitcoin, Gold, Oil, DXY, UST 10y–2y) helps identify hedging regimes and factor crowding, and quantifies macro exposure not captured by L3 equity factors. | `correlations` (via factor_correlation) | `client.get_factor_correlation_single("NVDA", factors=["vix","bitcoin"])` |
| Sign convention | Model safety. `l3_market_hr` is typically ≥ 0; `l3_sector_hr` and `l3_subsector_hr` may be negative from orthogonalization or a long subsector ETF hedge (see Methodology). `validate="warn"` may flag edge cases. | `l3_market_hr`, `l3_sector_hr`, `l3_subsector_hr` | `float(client.get_metrics("NVDA", as_dataframe=True)["l3_subsector_hr"].iloc[0])  # may be < 0` |
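The explained-risk identity in the table above can be checked mechanically. The sketch below mirrors the ER-sum check that `validate="warn"` performs, using hypothetical stand-in values rather than a live `get_metrics()` row; the tolerance here is illustrative, not the SDK's exact threshold.

```python
# Sketch: verify the L3 explained-risk identity on a metrics snapshot.
# Field names match the SDK's semantic columns; the numbers are hypothetical.
snapshot = {
    "l3_market_er": 0.42,
    "l3_sector_er": 0.18,
    "l3_subsector_er": 0.07,
    "l3_residual_er": 0.33,
}

er_sum = sum(snapshot.values())
TOLERANCE = 0.02  # illustrative tolerance, not the SDK's exact threshold

assert abs(er_sum - 1.0) <= TOLERANCE, (
    f"Error: ER fractions sum to {er_sum:.3f}. "
    "Fix: re-fetch the snapshot or rerun with validate='warn'."
)
print(f"ER sum = {er_sum:.3f} (within tolerance)")
```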

Overview

The RiskModels API provides institutional-grade equity risk analysis for AI agents and quantitative applications:

  • Daily factor decompositions — market, sector, and subsector explained-risk fractions for ~3,000 US equities
  • Hedge ratios — dollar-denominated ETF hedge amounts at three precision levels (L1 market-only, L2 market+sector, L3 full three-ETF)
  • Historical time series — split- and dividend-adjusted daily returns plus rolling hedge ratios going back to 2006
  • AI-agent ready — machine-readable manifest at /.well-known/agent-manifest, per-request billing via prepaid balance

Data coverage: Universe uni_mc_3000 (~3,000 top US stocks), date range 2006-01-04 to present, updated daily.

Why The Engine Matters

  • Built to be time-safe — the engine is designed to avoid common sources of forward contamination such as recycled tickers, snapshot shares, and retroactive universe contraction
  • Backed by a real Security Master — point-in-time identity, classification, and shares logic support more stable ticker-level outputs
  • Hierarchical by design — market, sector, and subsector structure are modeled explicitly rather than compressed into a flat beta view
  • Tradeable in practice — hedge ratios are designed to remain executable with liquid raw ETFs, not just synthetic factors
  • Built on adjusted returns — split- and dividend-adjusted return series improve consistency through corporate actions and long horizons

For the mathematical detail behind L1/L2/L3 decomposition and hedge construction, see Methodology. For the design choices behind time safety, identity continuity, and tradeable hedge outputs, see ERM3 Engine.

Choose Your Workflow

📈 Quant Research and Hedging

Use the core RiskModels endpoints when you want factor decomposition, hedge ratios, residual risk, and historical return analytics on securities or portfolios.

🏦 Brokerage-Linked Portfolio Holdings

Use Plaid when you want a user to connect a real brokerage account in the web app once, then pull holdings through the API with the same account identity.

Risk Metrics

Beyond static HR/ER snapshots, RiskModels exposes on-demand correlation of equity returns against daily macro factor returns (cross-asset context for regimes and crowding).

📊 Macro factor correlation

Correlate a stock's daily returns (gross or ERM3 residuals: L1 market-only, L2 market+sector, or L3 full hedge) against Bitcoin, Gold, Oil, DXY, VIX, and UST 10y–2y. Use POST /api/correlation for single-ticker or batch; GET /api/metrics//correlation for query-string convenience.
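Conceptually, the endpoint computes the correlation between two daily return series. The sketch below shows a plain Pearson correlation on tiny hypothetical series (stand-ins for L3 residual returns and VIX changes); the API's actual windowing and data handling are defined server-side.

```python
# Sketch of what the correlation endpoint computes conceptually: Pearson
# correlation between a stock's daily (residual) returns and a macro factor's
# daily returns. Both series below are hypothetical examples.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

residual_returns = [0.012, -0.004, 0.007, -0.010, 0.003]   # hypothetical L3 residuals
vix_returns      = [-0.030, 0.015, -0.010, 0.040, -0.005]  # hypothetical VIX changes

rho = pearson(residual_returns, vix_returns)
print(f"corr(residual, VIX) = {rho:+.2f}")  # negative here: residual moves against VIX
```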

Core Endpoints

| Endpoint | Method | Description | Cost | SDK example (Python) |
|---|---|---|---|---|
| `/api/ticker-returns` | GET | Daily returns + rolling L3 hedge ratios and explained-risk fractions, up to 15y | $0.005/call | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env().get_ticker_returns("NVDA", years=5)` |
| `/api/metrics/` | GET | Latest snapshot: HR/ER fields (semantic columns in SDK), vol, Sharpe, sector, market cap | $0.005/call | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env().get_metrics("NVDA", as_dataframe=True)` |
| `/api/l3-decomposition` | GET | Monthly historical HR/ER time series | $0.005/call | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env().get_l3_decomposition("NVDA")` |
| `/api/correlation` / `/api/metrics//correlation` | POST / GET | Macro factor correlation (VIX, Bitcoin, Gold, Oil, DXY, UST 10y–2y). Use POST for batch, GET for single-ticker. | $0.002–0.005/call | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env().get_factor_correlation_single("NVDA", factors=["vix","bitcoin"])` |
| `/api/batch/analyze` | POST | Multi-ticker batch up to 100 tickers, 25% cheaper per position | $0.002/position | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env().batch_analyze(["NVDA","AAPL"], ["returns","full_metrics"], years=2)` |
| `/api/tickers` | GET | Ticker universe search, MAG7 shortcut | Free | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env().search_tickers(search="NVDA")` |
| `/api/balance` | GET | Account balance and rate limits | Free | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env()._transport.request("GET", "/balance")[0]` |
| `/api/invoices` | GET | Invoice history and spend summary | Free | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env()._transport.request("GET", "/invoices")[0]` |
| `/api/health` | GET | Service health | Free | `from riskmodels import RiskModelsClient; RiskModelsClient.from_env()._transport.request("GET", "/health")[0]` |
| `/.well-known/agent-manifest` | GET | AI agent discovery manifest | Free | `import httpx; httpx.get("https://riskmodels.app/.well-known/agent-manifest", timeout=30).json()` |

MCP coverage note: riskmodels_list_endpoints returns capability bundles, not a 1:1 mirror of every route above. Account routes (/balance, /invoices) and the agent manifest (/.well-known/agent-manifest) are documented here but do not appear as separate MCP tool IDs.

Account routes (/balance, /invoices, /health) use the SDK transport with the same OAuth or API key as data methods. The well-known manifest is fetched from the site root (no auth).

Pricing model: prepaid balance (Stripe). Cached responses are free. Minimum top-up: $10.
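For budgeting, the listed prices make spend estimation a one-liner. A minimal sketch comparing one `/api/metrics` call per name against a single `/api/batch/analyze` request for the same universe, using the per-call and per-position prices from the table above (cache hits, which are free, are ignored):

```python
# Sketch: estimated spend for a 100-name universe at the listed prices.
n_names = 100

single_cost = n_names * 0.005  # $0.005/call, one /api/metrics call per name
batch_cost = n_names * 0.002   # $0.002/position, one /api/batch/analyze request

print(f"single calls: ${single_cost:.2f}")
print(f"batch:        ${batch_cost:.2f}")
```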

🤖 Agent-Native Helpers (Python SDK)

The riskmodels-py package includes methods and metadata designed for agent workflows and LLM reasoning:

| Tool | Purpose |
|---|---|
| `client.discover(format="json")` | Returns a JSON digest with method names, parameters (name/type/required/defaults/enums), return types, and `tool_definition_hints` for Claude Desktop or MCP-style tool synthesis. Acts as a map for agents. |
| `to_llm_context(obj)` | One call produces Markdown tables + lineage + semantic field cheatsheet + ERM3 legend. Works on DataFrame, PortfolioAnalysis, xarray.Dataset, or dict. Includes all metadata agents need to interpret results without guessing. |
| `client.from_env()` | Auto-discovers API key or OAuth credentials from environment variables (`RISKMODELS_API_KEY` or `RISKMODELS_CLIENT_ID` + `RISKMODELS_CLIENT_SECRET`). |
| `client.get_dataset()` (aliases `get_cube`, `get_panel`) | After `pip install riskmodels-py[xarray]`, turns batch long-table returns + rolling HRs into an `xarray.Dataset` for vectorized portfolio math and multi-ticker panels (`format="parquet"` default). |
| Ticker alias warnings | The SDK logs and emits `ValidationWarning` when it detects ticker aliases (e.g. GOOGL→GOOG). Format: `Warning: ... Fix: Use the canonical symbol in all future calls.` Agents can self-correct. |
| `validate="warn"` / `"error"` | ER-sum and HR-sign checks. Errors are formatted as `Error: {issue} Fix: {action}` so agents and humans know how to proceed. |
| `df.attrs["legend"]` | Every tabular result includes a short ERM3 legend (same as `SHORT_ERM3_LEGEND`). Read this instead of guessing column semantics. |
| `df.attrs["riskmodels_semantic_cheatsheet"]` | JSON + bullet list mapping wire keys to semantic names + units. Ground truth for field interpretation. |
| `df.attrs["riskmodels_lineage"]` | JSON string with model version, as-of date, factor set, and universe size. Provenance for all API responses. |

Grounding wire keys in docs: to_llm_context() pulls legend, lineage, and riskmodels_semantic_cheatsheet from object attrs, aligned with the same wire↔semantic definitions as SEMANTIC_ALIASES.md. Treat that file as narrative ground truth when explaining API JSON: e.g. wire l3_res_er maps to the idiosyncratic variance share (l3_residual_er in SDK output), so agents self-correct instead of guessing meanings.
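The wire-to-semantic translation an agent performs with that cheatsheet can be sketched as a simple key rename. Only the `l3_res_er` → `l3_residual_er` pair is documented above; any other entries an agent adds must come from SEMANTIC_ALIASES.md, which remains the ground truth:

```python
# Sketch: normalize wire keys from raw API JSON to the SDK's semantic names.
# Only the l3_res_er pair is documented here; extend from SEMANTIC_ALIASES.md.
WIRE_TO_SEMANTIC = {
    "l3_res_er": "l3_residual_er",
}

def to_semantic(payload: dict) -> dict:
    """Rename known wire keys, passing unknown keys through unchanged."""
    return {WIRE_TO_SEMANTIC.get(k, k): v for k, v in payload.items()}

raw = {"ticker": "NVDA", "l3_res_er": 0.33}  # hypothetical wire payload
print(to_semantic(raw))  # {'ticker': 'NVDA', 'l3_residual_er': 0.33}
```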

Install from PyPI (riskmodels-py). With xarray support for multi-dimensional portfolio math:

```shell
pip install riskmodels-py[xarray]
```

Minimal patterns

```python
from riskmodels import RiskModelsClient

# Credentials from env (see Quickstart for variable names)
client = RiskModelsClient.from_env()

df = client.get_metrics("NVDA", as_dataframe=True)  # attrs: legend, cheatsheet, lineage
pa = client.analyze({"NVDA": 0.5, "AAPL": 0.5})     # portfolio hedge ratios + per-name tables
# Requires the [xarray] extra — batch long table → labeled dimensions
ds = client.get_dataset(["NVDA", "AAPL", "MSFT"], years=2)
# ds is suitable for to_llm_context(ds) and broadcasted portfolio math (see Methodology)
```

See the package README for complete method signatures and examples.
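For intuition about the portfolio hedge ratios that `analyze()` returns, the sketch below shows one plausible aggregation: a weight-averaged per-name hedge ratio. The weights match the `analyze()` call above, the per-name HR values are hypothetical, and the actual aggregation is defined in Methodology:

```python
# Sketch: portfolio-level hedge ratio as a weight-averaged per-name HR.
# Weights mirror the analyze() example; HR values are hypothetical.
weights = {"NVDA": 0.5, "AAPL": 0.5}
l3_market_hr = {"NVDA": 1.10, "AAPL": 0.95}  # stand-in per-name values

portfolio_hr = sum(w * l3_market_hr[t] for t, w in weights.items())
print(f"portfolio l3_market_hr ≈ {portfolio_hr:.3f}")  # 1.025
```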

Support


Related