How it works
- A trigger (signal or schedule) fires
- The system loads the prompt template and replaces variables with live data
- The enriched prompt is sent to the trader’s configured LLM
- The LLM response is parsed into a structured trading decision
- The decision is validated, risk-checked, and executed
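The steps above can be sketched end to end. This is an illustrative sketch only; the function names and JSON fields below are hypothetical, not the system's actual API:

```python
import json

def execute_prompt(template: str, live_data: dict, call_llm) -> dict:
    """Illustrative trigger-to-execution flow (names are hypothetical)."""
    # Step 2-3: substitute template variables with live data
    prompt = template.format(**live_data)
    # Step 4-5: send the enriched prompt to the LLM and parse the reply
    decision = json.loads(call_llm(prompt))
    # Step 6: minimal validation before any risk checks or execution
    if decision.get("action") not in {"buy", "sell", "hold"}:
        raise ValueError(f"unexpected action: {decision.get('action')}")
    return decision

# Stubbed LLM for demonstration
stub_llm = lambda prompt: '{"action": "hold", "size": 0}'
decision = execute_prompt(
    "Prices: {market_prices}. Reply with JSON fields: action, size.",
    {"market_prices": "BTC 97000/97010"},
    stub_llm,
)
```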
Creating a prompt
Template variables
Use these placeholders in your prompt content. They are replaced with live data at execution time.

| Variable | Content |
|---|---|
| {market_prices} | Current bid/ask prices for all watchlist symbols |
| {sampling_data} | Rolling window of market snapshots from the sampling pool |
| {trigger_context} | Metadata about the signal or schedule that triggered execution |
| {open_positions} | Active positions with entry price, size, and unrealized PnL |
| {recent_trades} | Recent trade history for context on past decisions |
| {market_regime} | Current regime classification (trending, ranging, volatile, calm) |
| {news_context} | Latest AI-classified news summaries |
| {kline_data} | Recent candlestick data at the configured timeframe |
| {technical_indicators} | RSI, MACD, Bollinger Bands, and other computed indicators |
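For example, a template might combine several of these placeholders; the wording here is illustrative only (`str.format` mimics the substitution locally for a quick preview):

```python
TEMPLATE = """You are a cautious trading assistant.

Current prices: {market_prices}
Open positions: {open_positions}
Market regime: {market_regime}

Respond with JSON only: {{"action": "buy|sell|hold", "symbol": "...", "size": 0.0}}"""

# Rough local preview of what the system substitutes at execution time
preview = TEMPLATE.format(
    market_prices="BTC 97000/97010",
    open_positions="none",
    market_regime="ranging",
)
```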
Binding prompts to traders
Connect a prompt to a trader via a binding. The binding specifies the trigger type (signal or schedule).

AI-assisted prompt creation
Use the HyperAI agent to help write prompts by describing your strategy in plain language.

Prompt design tips
Be specific about output format
Always specify the exact JSON fields you expect in the response. LLMs produce more reliable structured output when the format is explicit.
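One way to make the format explicit is to embed a literal JSON shape in the prompt and reject responses that do not match it. The field names below are examples, not a required schema:

```python
import json

OUTPUT_SPEC = """Respond ONLY with JSON of exactly this shape:
{"action": "buy" | "sell" | "hold",
 "symbol": "<ticker>",
 "size_pct": <number between 0 and 100>,
 "reasoning": "<one sentence>"}"""

REQUIRED = {"action", "symbol", "size_pct", "reasoning"}

def parse_decision(raw: str) -> dict:
    """Parse the LLM reply and fail loudly on missing fields."""
    decision = json.loads(raw)
    missing = REQUIRED - decision.keys()
    if missing:
        raise ValueError(f"response missing fields: {sorted(missing)}")
    return decision

decision = parse_decision(
    '{"action": "hold", "symbol": "BTC", "size_pct": 0,'
    ' "reasoning": "Ranging market, no edge."}'
)
```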
Include reasoning
Ask the LLM to explain its reasoning. This creates an audit trail and helps you identify when the model misinterprets data.
Set clear boundaries
Define what the model should NOT do — e.g., “Never use leverage above 5x” or “Only trade BTC and ETH.”
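Prompt-level boundaries are advisory, so it is worth mirroring them as hard checks in the validation step. This sketch assumes a decision dict with hypothetical `symbol` and `leverage` fields:

```python
ALLOWED_SYMBOLS = {"BTC", "ETH"}   # "Only trade BTC and ETH"
MAX_LEVERAGE = 5                   # "Never use leverage above 5x"

def enforce_boundaries(decision: dict) -> dict:
    """Reject decisions that violate the prompt's stated boundaries."""
    if decision.get("symbol") not in ALLOWED_SYMBOLS:
        raise ValueError(f"symbol {decision.get('symbol')!r} not allowed")
    if decision.get("leverage", 1) > MAX_LEVERAGE:
        raise ValueError("leverage above 5x")
    return decision

ok = enforce_boundaries({"symbol": "ETH", "leverage": 3})
```

Even if the model ignores the instruction, the code-level check stops the trade before execution.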
Test with backtesting
Use prompt backtesting to evaluate how your prompt would have performed on historical data.
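Conceptually, a prompt backtest replays the template over historical snapshots instead of live data. This loop is a sketch of the idea, not the platform's backtesting API:

```python
import json

def backtest_prompt(template: str, snapshots: list, call_llm) -> list:
    """Replay a prompt over historical snapshots and collect decisions."""
    decisions = []
    for snap in snapshots:
        prompt = template.format(**snap)  # same substitution as live runs
        decisions.append(json.loads(call_llm(prompt)))
    return decisions

# Stubbed LLM and two historical snapshots for demonstration
stub_llm = lambda prompt: '{"action": "hold"}'
history = [
    {"market_prices": "BTC 96500/96510"},
    {"market_prices": "BTC 97000/97010"},
]
results = backtest_prompt(
    "Prices: {market_prices}. Reply with JSON field: action.",
    history,
    stub_llm,
)
```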