Hello,
I am a fairly new member. I have algorithms that are basically ready to test, but I am still in the process of learning the Quantopian IDE environment.
I am interested in the competition, but I'm a little confused about the judging process. From reading various posts on past competitions, it seems like the goalposts are constantly moving, so as a newcomer I'm not really sure what the contest is looking for. I understand that updating the rules to accommodate changing conditions makes sense, and I'm not questioning that. I'm just interested in learning what the goals currently are, so I can tailor my algorithms appropriately.
I'm especially interested in any qualitative rules that are enforced and known by the community but not necessarily written down formally. For instance, I see that in the recent competition the "winning" algorithm was disqualified because of a historical out-of-sample test, yet I don't see such a condition in any of the rules I've checked.
I ask about this in particular because tailoring an algorithm to perform well over a 10+ year out-of-sample backtest, while also performing well under out-of-sample "live" conditions, is to some extent serving two masters at once. Certainly some approaches can do well under both scenarios. Suppose, though, that I have a particular algorithm I believe will work well now, given something I've observed, but that I know would not have performed well in the past for some other reason. Is such an algorithm simply not relevant to the competition?
Another topic is the judging criteria. Are the quantified performance measurements equally weighted? Does anyone go through the entries to look for algorithms that perform well overall but don't "excel" on the risk measurements? In other words, are contest entrants slaves to the metrics, or is there some procedure for catching good return profiles that the risk metrics in the judging criteria don't capture very well?
I know I'm getting into some fine print here, but I don't want to waste anybody's time going forward (mine in learning the IDE, the judges' in reviewing performance).
Thanks,
Alexander
Edited for clarity