Learn Pyth
Interactive guides explaining how Pyth's oracle network works — with live data examples pulled directly from the feeds.
How Pyth's Pull Oracle Works
Why Pyth is different from push-based oracles and how the pull model dramatically reduces on-chain costs.
What Confidence Intervals Really Mean
The ±CI on every Pyth price update — what it measures, how it's computed, and how calibrated it is.
Verifiable Randomness with Pyth Entropy
How Entropy generates on-chain randomness that nobody — including Pyth — can predict or manipulate.
Pyth Pro & Benchmarks — Historical Oracle Data
How Pyth Benchmarks powers the volatility dashboard and calibration analysis — and how developers can query the full historical price archive.
The $388K Oracle Mismatch: @ploutos_money Exploit
A DeFi lending protocol was drained in a single transaction because it used the wrong Chainlink price feed. This is a textbook oracle misconfiguration — and exactly the class of failure Pyth's design prevents.
Every Pyth price feed carries a symbol identifier and a confidence interval (CI). A correctly integrated Pyth feed for USDC should query Crypto.USDC/USD — which publishes at ≈ $1.0000 with a CI of ±$0.0003 (0.03% uncertainty).
If you accidentally query Crypto.BTC/USD for USDC pricing, Pyth returns ≈ $80,000 with a CI of ±$400 (0.5% uncertainty). A stablecoin with a ±$400 confidence band is an immediate on-chain red flag — protocols can enforce CI width limits as a circuit breaker.
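The red flag described above is easy to compute: divide the confidence interval by the price. A minimal sketch, assuming plain floating-point numbers rather than Pyth's actual fixed-point price/exponent representation, and using the illustrative figures quoted above:

```typescript
// Simplified snapshot of a price update (hypothetical shape, not the real SDK type).
interface PriceUpdate {
  symbol: string;
  price: number; // USD
  conf: number;  // confidence interval half-width, in USD
}

// Relative CI width: the confidence interval as a fraction of the price.
function confPct(u: PriceUpdate): number {
  return u.conf / u.price;
}

// Figures from the text: the correct USDC feed vs. the mismatched BTC feed.
const usdc: PriceUpdate = { symbol: "Crypto.USDC/USD", price: 1.0, conf: 0.0003 };
const btc: PriceUpdate = { symbol: "Crypto.BTC/USD", price: 80000, conf: 400 };

console.log(confPct(usdc)); // 0.0003, i.e. 0.03% — normal for a stablecoin
console.log(confPct(btc));  // 0.005, i.e. 0.5% — far too wide for a $1 asset
```

A ±0.5% band is unremarkable for BTC itself, which is exactly why the check must be expressed relative to what the protocol *expects* to be pricing.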
The CI is a free, built-in sanity check. A protocol that enforces require(confPct < 0.5%, "Oracle too uncertain") would have rejected the mismatched BTC/USD feed for a stablecoin, and the exploit would have failed automatically at the smart contract level, at the cost of a single require statement.
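The require check sketched above can be mirrored off-chain as a circuit breaker. The sketch below is hypothetical (the function name and threshold are illustrative, not from any Pyth SDK) and uses a strict rejection at 0.5%, so the mismatched BTC feed from the example fails the check:

```typescript
// Maximum tolerable CI width for a stablecoin feed, as a fraction of price.
// 0.5% matches the require() threshold discussed above (illustrative value).
const MAX_CONF_PCT = 0.005;

// Circuit breaker: refuse to use a price whose confidence band is too wide.
function assertOracleSane(price: number, conf: number): void {
  if (conf / price >= MAX_CONF_PCT) {
    throw new Error("Oracle too uncertain");
  }
}

// The correct USDC feed passes: 0.0003 / 1.0 = 0.03%.
assertOracleSane(1.0, 0.0003);

// The mismatched BTC feed is rejected: 400 / 80000 = 0.5%.
try {
  assertOracleSane(80000, 400);
} catch (e) {
  console.log((e as Error).message); // "Oracle too uncertain"
}
```

On-chain, the same guard is one require statement evaluated before any price is consumed, so the transaction reverts before funds can move.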