This is the third part of our interview with a senior quantitative portfolio manager at a large hedge fund. In the first part, she discussed the theoretical phase of creating a quantitative trading strategy. How do you monitor and manage your model once it is live? What additional checks and procedures do you use? I like to know, every single day, exactly where my PL is coming from. What richened, what cheapened, by how much, and why.
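The daily "where is my PL coming from" discipline can be sketched as a per-instrument attribution. This is a hypothetical illustration, not the interviewee's actual system; the instrument names and numbers below are made up.

```python
# Hypothetical sketch of daily P&L attribution: decompose the day's P&L
# into per-instrument contributions so that "what richened, what
# cheapened, by how much" is visible at a glance. All figures invented.

positions = {"10y_note": 50_000, "5y_note": -80_000, "swap_spread": 30_000}
price_moves_bp = {"10y_note": 1.8, "5y_note": -0.4, "swap_spread": -2.1}

attribution = {
    name: positions[name] * price_moves_bp[name] * 0.0001  # bp -> fraction
    for name in positions
}
total_pnl = sum(attribution.values())

# Print worst contributors first, then the total, for a quick daily scan.
for name, pnl in sorted(attribution.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} {pnl:+10.2f}")
print(f"{'total':12s} {total_pnl:+10.2f}")
```

The value of the exercise is the "why" column that no script can fill in: each line item should be explainable before moving on.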
This gives me confidence that the model is working as designed, and it serves as an early warning system for bad news. I try not to recalibrate my model too often. But I do try to second-guess myself all the time: The combination of watching my own trades like a hawk, and conversing with intelligent, skeptical colleagues and counterparts seems to work pretty well for me.
None of the above, by the way, should be construed as a replacement for an excellent and independent risk management team, or for desk-level monitoring.
Do you set up predefined monitoring rules or circuit breakers that take the model out of action automatically? If so, how do you construct these, what kinds of measures do you use in them?
Generally, no. Or to be more precise, portfolios with programmatic circuit breakers underperform portfolios without them over the long term. The reasoning is that circuit breakers stop you out of good trades at a loss far too often, such that those losses outweigh the rare occasions when they keep you out of big trouble.
For starters, they rarely blow up instantly. Instead, either the opportunity just gradually disappears (arbitraged away by copycats), or the spread slowly and imperceptibly drifts further and further away from fair value and never comes back (regime change). Conversely, if a trade diverges and then the divergence accelerates, that smells to me much more of a capitulation.
In those cases, I want to hold on to my position, and indeed add to it if I can. So the paradoxical conclusion is that the faster a model loses money, the more likely it is to still be valid.
In the slow-bleed case, by contrast, you do want to stop out. This is actually a microcosm of the larger problem: a situation where a circuit breaker would help will almost certainly be one perverse enough to evade most a priori attempts at definition. How do you determine whether the model is dead or just having a bad time? Do you know of any useful predictive regime-change filters? This was the single most commonly asked question. I wish I did! I use a variety of rules of thumb: statistical tests to make sure the meta-characteristics of the model remain intact.
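As one illustration of a "meta-characteristics" test — a hypothetical example, not the interviewee's actual procedure — one could check whether the model's recent daily hit rate is still consistent with its historical hit rate, using an exact binomial test. The numbers and threshold are invented.

```python
# Hedged sketch: has the model's win rate drifted from its historical
# norm? Exact two-sided binomial test; data and cutoff are illustrative.
from math import comb

def binom_two_sided_p(k, n, p):
    """Two-sided binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed one."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

historical_hit_rate = 0.55       # winning-day fraction over the model's life
recent_wins, recent_days = 18, 60  # hypothetical recent record

p_value = binom_two_sided_p(recent_wins, recent_days, historical_hit_rate)
if p_value < 0.01:
    print("hit rate has drifted: investigate possible regime change")
```

Hit rate is only one meta-characteristic; the same idea applies to autocorrelation of returns, average holding period, or trade-size distribution.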
Anecdotal evidence of capital entering or leaving the market. Model deaths seem to last for a period of years, then sometimes the models come back better than ever. Absolutely, and this is a great point. Models do come back from the dead. US T-note futures versus cash is a classic example. So I never say goodbye to a model forever; I have a huge back catalogue of ideas whose time may come again.
Yes, this is an interesting idea. To an extent, every good PM does this, but some are more rigorous than others. And at least one big shop that I know of is completely and unequivocally run this way. But I use them as a sanity check, not as a primary determinant of positions.
Do you rely on one system or do you keep changing systems arbitrarily? I typically find that the most tedious part is making sure the data flows consistently and smoothly between different apps or languages. Syntax translation is easy; data translation, not so much. And indeed I find myself using Python more and more. But that was not always the case; the plethora of open-source financial libraries in Python is a relatively recent phenomenon.
Excel is fragile in many ways. So you have to be very careful in how and where you use Excel. That said, I do find the benefits outweigh the many costs. What kind of turnaround time do you expect from engineering colleagues coding up your strategy in C or Python?
Both for the first-cut implementation, and then for fixes and enhancements? Depends on the strategy. Some strategies are simpler and can be brought live in a matter of days. On the other hand, I remember one particular strategy that took several months to instantiate. I found this comment interesting: "I play around with monthly data until I get something I think works." But the model should still behave in the same way. These events cause something to happen (never mind what) at those frequencies.
Take two futures strips in the same space — maybe winter and spring wheat. Look for cases when one is backwardated and the other is in contango. Buy front low, sell back high; sell front high, buy back low. This is a great case for changing time scales.
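The screen described above can be sketched as a check for divergent curve shapes across two related strips. The prices below are invented, and `winter`/`spring` are placeholder names for front-to-back contract prices.

```python
# Illustrative sketch: flag when one strip is backwardated while the
# related strip is in contango. Prices are hypothetical.

def curve_shape(strip):
    """'backwardation' if front > back, 'contango' otherwise.
    `strip` lists contract prices front-first."""
    return "backwardation" if strip[0] > strip[-1] else "contango"

winter = [620, 615, 611, 606]   # front contract first: backwardated
spring = [598, 603, 607, 612]   # upward-sloping: contango

if curve_shape(winter) != curve_shape(spring):
    # Per the text: buy the cheap front, sell the rich back on one
    # strip, and put on the mirror-image legs on the other.
    print("divergent curve shapes: candidate for the strip trade")
```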
So, given that the strategy is really clean, we can get away with this kind of robustness test. Bid-ask is the bane of quants everywhere. But I would never apply this same test to, say, a trend-following strategy. That would raise all sorts of philosophical questions. By optimizing for that sweet spot, are you curve-fitting? Or does the fact that almost everyone uses 9d and d create a self-fulfilling prophecy, and so those numbers represent something structural about the market?
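The crossover rule under discussion can be sketched as follows. Window lengths and prices are placeholders; the exact day counts the interview alludes to are not reproduced here.

```python
# Minimal moving-average crossover sketch: the signal is long (+1) when
# the fast average sits above the slow one, short (-1) otherwise.

def moving_average(xs, n):
    return [sum(xs[i - n + 1:i + 1]) / n for i in range(n - 1, len(xs))]

def crossover_signal(prices, fast, slow):
    """+1 / -1 for each day on which both averages exist."""
    f = moving_average(prices, fast)[slow - fast:]  # align to slow window
    s = moving_average(prices, slow)
    return [1 if fi > si else -1 for fi, si in zip(f, s)]

prices = [100, 101, 103, 102, 104, 106, 105, 107, 109, 108]
print(crossover_signal(prices, fast=2, slow=4))  # → [1, 1, 1, 1, 1, 1, 1]
```

The robustness question then becomes: does the signal's behaviour survive when `fast` and `slow` are perturbed, or only at the canonical values everyone uses?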
What if you sampled your data at interval X, and then used 9X and X moving averages — would that work? Could you give more details on the use of Monte Carlo in parameter initialization? I use Monte Carlo sampling to generate these starting points. How do you scale your trading strategy? How much gain per transaction would be considered a good model? And on what time scale is it trading? What range of time scales are used in your industry?
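One common form of the Monte Carlo initialization asked about — offered here as a generic sketch, not the interviewee's method — is to draw many random starting points in the parameter box, evaluate (or fit) from each, and keep the best, so a single unlucky start does not strand the calibration in a local optimum. The objective and bounds below are toys.

```python
# Monte Carlo parameter initialization sketch: sample starting points
# uniformly within bounds, keep the best-scoring one. Toy objective.
import random

random.seed(7)

def objective(a, b):
    # Invented in-sample loss with a rough, multi-modal surface.
    return (a - 1.5) ** 2 + (b + 0.5) ** 2 + 0.3 * ((a * b) % 1)

bounds = {"a": (-5, 5), "b": (-5, 5)}

starts = [
    {k: random.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
    for _ in range(200)
]
best = min(starts, key=lambda p: objective(p["a"], p["b"]))
print(best, objective(best["a"], best["b"]))
```

In practice each sampled point would seed a local optimizer rather than being used directly; the sampling only chooses where the fits begin.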
How much money can be poured into a successful scheme? Is this limited by how much money your fund has available, or are there typically limits on the trading scheme itself? I have a few rules that I try to follow. If bid-ask is 1bp, I want to make 10bps with a high probability of success after financing costs. The binding constraint on these trades is usually balance sheet: I need to make sure that the trade pays a decent return on the capital it locks up.
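The two rules of thumb just stated can be combined into a simple screen. The 10x-spread ratio comes from the text; the minimum return-on-capital figure and all the numbers in the example are assumptions for illustration.

```python
# Sketch of the stated rules of thumb: require expected profit of ~10x
# the bid-ask spread after financing, and a decent return on the
# balance sheet the trade locks up. min_roc is an invented threshold.

def passes_screen(expected_edge_bp, bid_ask_bp, financing_bp,
                  capital_locked, annual_pnl, min_roc=0.10):
    edge_after_costs = expected_edge_bp - financing_bp
    wide_enough = edge_after_costs >= 10 * bid_ask_bp
    decent_roc = annual_pnl / capital_locked >= min_roc
    return wide_enough and decent_roc

# A 1bp bid-ask market, 12bp expected edge before 1bp financing,
# $10m of balance sheet tied up for $1.5m of annual P&L:
print(passes_screen(12, 1, 1, 10_000_000, 1_500_000))  # → True
```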
Obviously I use very fat tails in my prognosis. Incidentally, optimal scale changes over time. I know some of the LTCM folks, and they used to make full points of arbitrage profit on Treasuries over a span of weeks. A decade later, that same trade would make mere ticks. You have to be aware of and adapt to structural changes in the market as knowledge diffuses. I personally am comfortable on time scales from a few weeks to a few months.
The two best trades of my career were held for two years each. They blew up, I scaled in aggressively, then rode convergence all the way back to fair value. My partner on the trading desk trades the same instruments and strategies as I do, but holds them for a few hours to a few days at most.
I work for a large-ish fund, and the constraint has almost always been the market itself, even when the market is as large and liquid as, say, US Treasuries. Or do you mean that he is calibrating his models such that they take trades in tighter neighborhoods around an equilibrium value, but also have tighter stop-outs? A bit of both. His execution is more mechanistic: he does smaller trades for quicker opportunities with tighter stops.
Do you have advice for someone who has just started as a quant at a systematic hedge fund? How do I become really good at this? What differentiates the ones who succeed from those who do not? A combination of procedural rigor, lack of self-deception, and humility in the face of data. Quants tend to get enamoured of their models and stick with them at all costs. The intellectual satisfaction of a beautiful model or technology is seductive.