[GSoC-PortA] Incorporating New Portfolio Object
Brian G. Peterson
brian at braverock.com
Thu Jul 11 22:28:17 CEST 2013
On 07/11/2013 02:27 PM, Ross Bennett wrote:
> Thanks for the insight and thoughts into making fn_map work with
> constrained_objective.
>
> Just to clarify a few points and make sure my understanding is
> correct. I am mainly thinking in terms of how this will apply to
> DEoptim and random for optimize methods.
For random portfolios and DEoptim, if you call fn_map inside
constrained_objective, you'll either do nothing, or you'll try to
re-adjust for any constraints that you had to relax the first time
through.
That bears expanding on...
In the random portfolios code, you're using fn_map when *generating* the
random portfolios. So the weights vector is already 'as good as we
could manage'. In this case, I'd assume that we'd then force
normalize=FALSE when calling constrained_objective().
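To make that concrete, here is a minimal sketch of the random portfolios
path (the argument names are assumptions about the current dev
interface, not a fixed signature):

  # the weight vectors in rp have already been through fn_map during
  # generation, so we skip re-mapping and evaluate directly
  rp <- random_portfolios(portfolio=pspec, permutations=2000)
  obj_vals <- apply(rp, 1, function(w) {
    constrained_objective(w=w, R=R, portfolio=pspec, normalize=FALSE)
  })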
> constrained_objective is the "fn" argument to DEoptim. The function
> "fn" needs to return a single argument, which in our case is the
> return value from constrained_objective, out.
For the DEoptim solver, we should be passing fnMap=fn_map in
optimize.portfolio. This will let DEoptim apply fn_map to the
population of weight vectors in each generation. (we may need a wrapper
around fn_map that takes a multi-row matrix) DEoptim will then use
these pre-transformed vectors when calling constrained_objective(), so
again, normalize=FALSE should be what we use from optimize.portfolio.
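Something like this is what I have in mind for the wrapper; it's a
sketch, and fn_map returning a list with a $weights element is an
assumption about the current dev code:

  # DEoptim's fnMap argument expects a function that takes the whole
  # population matrix (one candidate per row) and returns a matrix of
  # the same dimensions
  fnMap_wrapper <- function(weight_matrix) {
    t(apply(weight_matrix, 1,
            function(w) fn_map(weights=w, portfolio=pspec)$weights))
  }

  result <- DEoptim(fn=constrained_objective,
                    lower=lower, upper=upper,
                    control=DEoptim.control(NP=200, itermax=100),
                    R=R, portfolio=pspec, normalize=FALSE,
                    fnMap=fnMap_wrapper)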
> The store_output object is useful for us to track the weights vector
> before and after it is transformed at each iteration, but does not
> affect the optimization.
Correct.
If we use normalize=TRUE, the *solver* will not know that we've
internally transformed the weights vector to map to our allowed
constraints. In some cases, penalizing the weights vector for how much
it fails to meet the constraints when normalize=FALSE will help guide
the solver to make better decisions, without having to transform/map the
weights vector (especially when the feasible space is large). In other
cases, penalizing the original weights vector won't help at all,
typically when the feasible space is very irregular, there are lots of
assets, or the optimizer has limited knowledge of what's going on
inside the objective function.
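As a rough illustration of penalizing by degree of violation, for box
constraints say (the helper and the multiplier value are hypothetical,
not existing code):

  # hypothetical helper: penalize in proportion to how far the weights
  # fall outside their box constraints
  box_violation_penalty <- function(w, min_w, max_w, multiplier=1e4) {
    viol <- sum(pmax(w - max_w, 0)) + sum(pmax(min_w - w, 0))
    multiplier * viol
  }
  out <- out + box_violation_penalty(w, constraints$min, constraints$max)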
So, for all the other solvers, I think we need to preserve the option
for the user either to use the mapping function fn_map with
normalize=TRUE, or not to use it and instead penalize the weights vector.
I guess this suggests that the normalize=FALSE section of
constrained_objective() needs to change as well to penalize based on the
degree that the weights vector violates the constraints. It already
does this for leverage constraints, but not for the other constraint
types. I think that there may be an argument to be made to *always*
penalize, because the constraints may have been relaxed somewhat. In
that case, the 'else' clause of the normalize=TRUE/FALSE if/else block
may come out of the else and just always execute the penalty for
weights that violate each constraint.
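In rough pseudocode, that restructuring would look something like this
(penalize_constraint_violations is a hypothetical stand-in for the
per-constraint-type penalty code):

  if (isTRUE(normalize)) {
    # transform/map first, so the solver evaluates feasible weights
    w <- fn_map(weights=w, portfolio=portfolio)$weights
  }
  # no longer in the else clause: always penalize remaining violations,
  # since constraints may have been relaxed during mapping
  out <- out + penalize_constraint_violations(w, portfolio)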
> The transformed weights from the output of fn_map will be used to
> calculate the objective measures and the return value, out. Does the
> optimizer, e.g. DEoptim, keep track of the adjusted weights or the
> original weights?
The optimizer will only keep track of the weights it sends in, and the
single value of 'out'.
Regards,
Brian
--
Brian G. Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock