[Blotter-commits] r1653 - pkg/quantstrat/sandbox/backtest_musings

noreply at r-forge.r-project.org
Wed Nov 19 21:09:39 CET 2014


Author: braverock
Date: 2014-11-19 21:09:38 +0100 (Wed, 19 Nov 2014)
New Revision: 1653

Removed:
   pkg/quantstrat/sandbox/backtest_musings/Notes_S_Rush_2014-10-10.txt
   pkg/quantstrat/sandbox/backtest_musings/strat_dev_proc_Kent_notes_2014-09-21.txt
Log:
- remove notes


Deleted: pkg/quantstrat/sandbox/backtest_musings/Notes_S_Rush_2014-10-10.txt
===================================================================
--- pkg/quantstrat/sandbox/backtest_musings/Notes_S_Rush_2014-10-10.txt	2014-11-19 16:51:17 UTC (rev 1652)
+++ pkg/quantstrat/sandbox/backtest_musings/Notes_S_Rush_2014-10-10.txt	2014-11-19 20:09:38 UTC (rev 1653)
@@ -1,50 +0,0 @@
-Main Idea: Setting out the correct process for a systematic trading system
-
-- Be careful with the "archetypal strategy" benchmark. If the strategy is loosely
-implemented, it starts to look a lot like the straw man arguments often made
-in tests against buy and hold. This is also an area where I very quickly lose
-interest in the rest of a paper/presentation, because the author doesn't
-understand the alternative strategy that he/she is arguing against.
-
-- I like the advice to measure yourself against multiple benchmarks, even if
-you think a benchmark is inappropriate, because someone else will probably
-do it anyway.
-
-- Page 4, bottom, "tradeStats()": are you assuming the readers will be familiar
-with quantstrat?
-
-- Page 5 I agree with what you are saying on creating a testable hypothesis,
-but I would be careful with the word "prediction". There are
-many rigorous tests that work well in the cross section and provide some
-meaningful insight into how markets work, but are not useful in predicting what
-will happen in the next period. I might argue that in the context of systematic
-trading strategy development, it is the prediction that matters. A good
-hypothesis does not need to be predictive in the time dimension, but it does need
-to have a testable conclusion. This might seem like splitting hairs, but if you
-are targeting anyone with a scientific background, there is a difference
-between an experiment that has a predictable outcome and an experiment
-that verifies a natural process. In the physical sciences, they look the same,
-but in the social sciences, they are different. An example is the correlation vs.
-causation problem of people carrying umbrellas before it rains. Clearly
-umbrellas don't cause rain, but if you knew nothing about the world, observing
-people carrying umbrellas would be an excellent predictor of rain. In strategy
-development, it would most likely be better to assume that umbrellas cause
-rain than to produce a global weather model. This is bad science, but
-probably more profitable.
-
-- Page 9 In look-ahead bias, I would add that even assuming a rule has
-access to and can act on contemporaneous information may be too optimistic.
-No matter how fast you are, there is a time cost to observing and acting.
-Marine Corps officers used to be taught the "OODA Loop": Observe, Orient,
-Decide, Act. You cannot execute the loop instantaneously, so your actions will
-always lag observable information.
-
-- Page 11 The figures on this page are not large enough to add value to the
-paper.
-
-- Page 11 I like the description of evaluating signals with distributions.
-
-- Page 16 The bottom figure may look good on a good printer, but is unreadable
-on a 30" monitor.
-
-

Deleted: pkg/quantstrat/sandbox/backtest_musings/strat_dev_proc_Kent_notes_2014-09-21.txt
===================================================================
--- pkg/quantstrat/sandbox/backtest_musings/strat_dev_proc_Kent_notes_2014-09-21.txt	2014-11-19 16:51:17 UTC (rev 1652)
+++ pkg/quantstrat/sandbox/backtest_musings/strat_dev_proc_Kent_notes_2014-09-21.txt	2014-11-19 20:09:38 UTC (rev 1653)
@@ -1,24 +0,0 @@
-
-Subject:
-Re: Strategy development process draft
-From:
-Kent Hoxsey <khoxsey at gmail.com>
-Date:
-09/21/2014 08:10 PM
-To:
-"Brian G. Peterson" <brian at braverock.com>
-
-Interesting stuff. One initial scoping question comes to mind: what kind of experience do you expect your audience to bring to this? I can picture quite a range (leaving out experienced professionals for now):
-- raw newbies trying to figure out the business (Ilya and passionate learners everywhere)
-- screen traders attempting to systematize (me, maybe Paul T)
-- experienced finance people figuring out prop (Simon, perhaps Josh)
-- academics (Marc, Matthew)
-Plenty of overlap between these groups, particularly given a large slice of "raw noob" in all of us who have never known a system with edge and thus never traded actual size. And it gets even more interesting if you allow for experienced professionals who *do* know systems with edge and *have* traded real size.
-
-I think the emphasis shifts depending on whether you are aiming to include the experienced professionals or not. Budding/aspirational strategists such as me/Josh/Simon and anyone working within the context of a prop or management firm will likely have constraints and objectives defined in advance. In contrast, an academic like Marc will benefit from the example trading system objectives (in your FIXME) as he attempts to define indicators/signals for consulting clients.
-
-Thinking back on my experience with both MDFA and CRSI, I have to say I never formulated a testable hypothesis. My only tests were whether the strategy was positive or negative. Based on my own experience as well as what I have seen of other people attempting to design strategies (or even Simon, attempting to integrate and validate a "working" strategy he purchased), this is a common approach or perhaps the only approach. So when you state "most strategy ideas will be rejected during hypothesis creation and testing", you are saying something both obvious to all and deeply counter-intuitive, if not outright subversive. So I am looking forward to further exposition later in the document. Because "creating and confirming a hypothesis" sounds a lot like "identifying an edge".
-
-As we get into the outline, the one obvious feature I would request is some explicit discussion of the bar-vs-tick data issue. With both MDFA and CRSI, I had promising bar-data indicators fall apart when faced with actual order books. The stats didn't give a good indication this would happen; the indicators were just too close to the edge of profitability to survive any adverse data. (Identifying edge is my current hurdle, so more is better IMHO.)
-
-Looking good so far, excited to see what comes next.


