Code on Volatility Estimators, and Volatility Trading with Constraints
First things first, the xmas giveaway:
is complete - so do check your spam folders if you participated, and confirm any further steps with us if necessary.
In the last post on options testing:
we tested for the variance premium by selling SPX ATM straddles. There, our risk management was a naive but simple rule: notional exposure capped at 3x account capital. Suppose the SPX sits around 4500 (which it does); since the option multiplier is 100, the notional exposure is 450,000 USD, so we should expect to have 450,000/3 = 150,000 USD in our account for the strategy to be viable selling ONE SPX options contract (let's ignore the number of legs for a second).
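The capital arithmetic above can be sketched in a few lines (using the SPX level and multiplier from the text; the variable names are just for illustration):

```python
# Back-of-envelope capital check for the 3x-notional cap described above.
# Assumes an SPX level of 4500 and the standard 100x index-option multiplier.
spx_level = 4500
multiplier = 100
leverage_cap = 3  # notional may be at most 3x account capital

notional = spx_level * multiplier           # 450,000 USD per contract
required_capital = notional / leverage_cap  # 150,000 USD

print(notional, required_capital)  # 450000 150000.0
```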
And it further assumed continuous rebalancing (of the strikes, not the delta hedges)…so as the underlying shifts, we are actually selling out of the straddles and buying into new ‘ATM’ straddles. That is expensive.
And so a reader asked…well asking for 150k in a retail trading account may be a big ask, especially for just a single volatility position, so is the index variance premium harvesting…out of reach?
Well, that is a fairly important but rather common problem in trading: granularity. We often need to ask how to implement a correlated strategy, or harvest the same effect, while adjusting for constraints particular to one's situation.
First…the notional exposure. The S&P 500 index has multiple option products traded on it: the standard SPX, the smaller XSP (mini), and the smallest, NANOS. But just looking up the option chains on the minis and nanos, we can see that the bid-ask spread is quite problematic, often costing 5 to 10 times as much as SPX per dollar of exposure.
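One way to make the "per dollar of exposure" comparison concrete is to normalize the half-spread by the notional controlled. The quotes below are made-up placeholders for illustration, not live market data:

```python
# Hypothetical comparison of spread cost per dollar of notional exposure.
# The quotes below are illustrative placeholders, not real market data.

def spread_cost_per_dollar(bid: float, ask: float,
                           underlying: float, multiplier: float) -> float:
    """Half-spread paid per dollar of notional, assuming we cross to mid."""
    half_spread = (ask - bid) / 2
    notional = underlying * multiplier
    return half_spread * multiplier / notional

# Made-up quotes: an SPX straddle 0.50 wide at index 4500,
# and a 1/10th-size mini quoted 0.30 wide at 450.
spx = spread_cost_per_dollar(bid=100.0, ask=100.5, underlying=4500, multiplier=100)
mini = spread_cost_per_dollar(bid=10.0, ask=10.3, underlying=450, multiplier=100)
print(mini / spx)  # the mini is several times more expensive per dollar here
```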
And then of course you can do the math, maybe run some simulations, and see whether the variance premium can overcome the cost hurdle. But then there is that iffy margin of error, and your pnl is hurt. Another alternative is SPY options, which are options on the ETF rather than the index, and which trade at roughly the same notional exposure as the XSP.
Okay, but now we have a slightly different risk profile: since we are selling straddles and the options are American, at any time one of the legs may be exercised against us, leaving us with unwanted exposure. Also, physical settlement rather than cash settlement isn't what we want.
The good part is that the options are much more liquid. So we have smaller notional size, more liquid contracts, but exercise risk. Okay, we can mitigate this by selling a strangle instead of a straddle. As long as the underlying stays between the two OTM strikes, we have (almost as good as) zero exercise risk. I mean…things could jump, and we may gain back the risk, but we have largely mitigated it. The delta is still flat at the outset, and we pay for it with the lower gamma exposure.
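The straddle-versus-strangle tradeoff above (roughly flat delta either way, but lower gamma for the strangle) can be sketched under Black-Scholes assumptions. The parameter values and strikes below are illustrative choices, not from the post:

```python
# Sketch (under Black-Scholes assumptions) of the initial delta and gamma of
# a short ATM straddle versus a short OTM strangle. All parameter values
# here are illustrative, not taken from the strategy in the post.
from math import log, sqrt, exp, pi, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1 + erf(x / sqrt(2)))

def norm_pdf(x: float) -> float:
    return exp(-x * x / 2) / sqrt(2 * pi)

def d1(S, K, r, sigma, T):
    return (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))

def call_delta(S, K, r, sigma, T):
    return norm_cdf(d1(S, K, r, sigma, T))

def put_delta(S, K, r, sigma, T):
    return call_delta(S, K, r, sigma, T) - 1.0

def gamma(S, K, r, sigma, T):
    return norm_pdf(d1(S, K, r, sigma, T)) / (S * sigma * sqrt(T))

S, r, sigma, T = 4500, 0.05, 0.15, 30 / 365  # illustrative parameters

straddle_delta = call_delta(S, S, r, sigma, T) + put_delta(S, S, r, sigma, T)
straddle_gamma = 2 * gamma(S, S, r, sigma, T)

K_put, K_call = 4400, 4600  # symmetric OTM strikes (illustrative)
strangle_delta = call_delta(S, K_call, r, sigma, T) + put_delta(S, K_put, r, sigma, T)
strangle_gamma = gamma(S, K_call, r, sigma, T) + gamma(S, K_put, r, sigma, T)

# Both positions start roughly delta-flat, but the strangle carries less gamma.
print(straddle_delta, strangle_delta)
print(straddle_gamma > strangle_gamma)  # True
```

With these made-up inputs, both deltas come out near zero at the outset, while the strangle's gamma sits below the straddle's, which is exactly the exposure we gave up in exchange for mitigating exercise risk.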
So no, it is not out of reach. When you face constraints, there are often workarounds, and many times you will have to trade off some risk exposure you don't want for another you are willing to accept. It is about what exposure you are trying to gain versus what risk you are trying to mitigate, and finding an acceptable balance.
Anyway, enough story. We will release the code on the straddle selling backtest for equities, and then use these two different code bases as a starting point to engineer our options backtesting engine, which we will make available to our paid readers.
A while ago, we had our discussion of volatility estimators:
See our thread on it:
https://twitter.com/HangukQuant/status/1736055064069021798
and although there is no clear best estimator, some studies show that, on average, in the more well-behaved universe of indices and ETFs, the more 'complex' estimators give better results:
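For a rough flavor of what such range-based estimators look like (this is only a minimal illustration, not the script released to paid readers, and the OHLC column names are assumptions):

```python
# Minimal sketch of two classic range-based volatility estimators.
# Assumes a DataFrame with "open", "high", "low", "close" columns of prices;
# both return annualized volatility (252 trading days assumed).
import numpy as np
import pandas as pd

def parkinson_vol(df: pd.DataFrame, window: int = 30) -> pd.Series:
    """Parkinson (1980) estimator, using only the high-low range."""
    hl = np.log(df["high"] / df["low"]) ** 2
    return np.sqrt(hl.rolling(window).mean() / (4 * np.log(2)) * 252)

def garman_klass_vol(df: pd.DataFrame, window: int = 30) -> pd.Series:
    """Garman-Klass (1980) estimator, using the full OHLC bar."""
    hl = 0.5 * np.log(df["high"] / df["low"]) ** 2
    co = (2 * np.log(2) - 1) * np.log(df["close"] / df["open"]) ** 2
    return np.sqrt((hl - co).rolling(window).mean() * 252)
```

Both use intraday range information rather than just close-to-close returns, which is why they can be more efficient on well-behaved series.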
On that note…here is the short script for computing the various estimators…(paid):