[GSoC-PortA] mapping function
Brian G. Peterson
brian at braverock.com
Sat Jun 29 23:13:36 CEST 2013
You're absolutely on the right track, except for this one part:
* if a constraint is violated, omit that set of weights
Depending on your constraints, your chance of constructing a valid
portfolio may be very very slim.
In random portfolios we could try this approach, just keep drawing until
we get weights vectors that satisfy all the constraints, but that could
take effectively forever.
Stochastic solvers typically only support box constraints, so again the
chance of getting truly feasible vectors is pretty small in many cases.
This leads to the need for a transformation function to bring a
random/stochastic weights vector into compliance, or close to it, with
all, or at least most, of your constraints.
In DEoptim, we added the fnMap argument, which is called directly on
each generation's population, so a mapping function can 'fix' the random
candidate vectors and hand the solver back a fully transformed population.
In PortfolioAnalytics, we can support an 'fnMap'-like transform directly
in DEoptim and in random portfolios.
We can support it *indirectly* for any arbitrary solver inside
constrained_objective (as long as we can store the transformed weight
vector to know what the objective was actually calculated on, which we
can get to later).
So I think we still need to write an fnMap transform function.
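As a concrete starting point, something like this is roughly what I have
in mind (a rough, untested sketch; the function name and arguments are
just placeholders, and it ignores group and position constraints for now):

  # scale the whole vector back inside the leverage bounds,
  # then clip each weight to its box constraints
  txfrm_weights <- function(w, min_sum, max_sum, min, max) {
    if (sum(w) > max_sum) w <- (max_sum / sum(w)) * w
    if (sum(w) < min_sum) w <- (min_sum / sum(w)) * w
    pmin(pmax(w, min), max)
  }

Note that the clipping step can push the sum back outside
[min_sum, max_sum], so the real function will need to iterate, or adjust
one weight at a time the way the random portfolios code does. Wrapped
over the population matrix with something like
function(pop) t(apply(pop, 1, txfrm_weights, min_sum, max_sum, min, max)),
it could then be passed to DEoptim as fnMap.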
some other notes:
You're correct that volatility is usually dealt with as an objective.
In fact, since it requires returns, I only think of this as an objective.
For diversification, I need to look at what you're calling
'diversification' to see whether it requires the returns or is just a
measure of *weights* concentration; the word can mean either.
Turnover can be handled either as an objective or as a constraint.
Penalization is likely necessary in constrained_objective when we have
to relax constraints somewhat to get to anything feasible. Hopefully
Doug can provide more guidance here, as the literature on relaxing
constraints is pretty thin, to the best of my knowledge.
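For the penalty piece, I'm picturing something along these lines inside
constrained_objective (hypothetical, and the multiplier would obviously
need tuning):

  # charge for the distance from the feasible interval rather than
  # rejecting the weights vector outright
  penalty <- 1e4 * (max(sum(w) - max_sum, 0) + max(min_sum - sum(w), 0))
  out <- out + penalty  # 'out' standing in for the objective value being minimized

The same pattern extends to any constraint we decide to relax: measure
how far outside the constraint the vector is and add a scaled penalty.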
Regards,
Brian
On 06/29/2013 03:32 PM, Ross Bennett wrote:
> Brian,
>
> Thanks for sharing the paper and your ideas for the mapping function.
>
> One of the main points I got from the methodology described in the paper
> is that the sets of weights are omitted instead of transformed if they
> do not meet the constraints.
>
> * generate random portfolios
> * test each set of portfolio weights
> * if a constraint is violated, omit that set of weights
> * compute the objective function on each remaining set of weights
> * select the set of optimal weights
>
> Is this what you were getting at here?
> "A slightly more rigorous treatment of the problem is given here:
> http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1680224
> It is possible that we can use this method directly for random portfolios."
>
> I could add a block of code at the end of random_portfolios that tests
> each set of weights against the constraints and only keeps the weights
> that satisfy the constraints. Thoughts?
>
> If we have to transform the weights, here are my thoughts on the
> specific cases for each constraint type.
>
> * Leverage (min_sum and max_sum)
> o This is done in randomize_portfolio by randomly permuting and
> increasing or decreasing an individual element (asset weight)
> until min_sum and max_sum constraints are satisfied while taking
> into account box constraints. These constraints are satisfied
> based on the way random portfolios are constructed.
> o This is done in constrained_objective by transforming the entire
> vector:
> + if(sum(w) > max_sum) { w <- (max_sum/sum(w)) * w } # normalize to max_sum
> + if(sum(w) < min_sum) { w <- (min_sum/sum(w)) * w } # normalize to min_sum
> o Implement by moving this into the mapping function... correct?
> * Box (min and max)
> o This is done in randomize_portfolio by construction
> o This is done in constrained_objective by penalizing weights
> outside box constraints.
> o Implement by using the logic from randomize_portfolio to transform
> the weights vector instead of penalizing it... correct?
> * Group (cLO and cUP)
> o One approach is to normalize the weights in each given group
> that violate cLO or cUP so that the group weights sum to cLO or
> cUP. This changes the sum of weights, so when the weights vector
> is normalized the group constraints will likely be violated, but
> it gets us close. See sandbox/testing_constrained_group.R
> o Another approach is to add this to randomize_portfolio so the
> group constraints as well as box and leverage are satisfied by
> construction. Need to spend more time understanding code in
> randomize_portfolio to see how feasible this is.
> * turnover
> o Could we include this in constrained_objective as a penalty?
> * diversification
> o Could we include this in constrained_objective as a penalty?
> * volatility
> o Could we include this in constrained_objective as a penalty?
> * position_limit
> o This could be implemented in randomize_portfolio by generating
> portfolios with the number of non-zero weights equal to max.pos,
> filling in zeros so the length of the weights vector equals the
> number of assets, and then scrambling the weights vector. The
> number of non-zero weights could also be drawn at random so that
> it is not always equal to max.pos. This could be implemented in
> the DEoptim solver with the mapping function. It might also be
> do-able in Rglpk for max return and min ETL; Rglpk supports mixed
> integer types, but solve.QP does not. A branch-and-bound technique
> on top of solve.QP may work, but needs more research.
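>
> A rough, untested sketch of the position_limit construction described
> just above (the helper name is made up, and it assumes long-only box
> constraints given as per-asset min and max vectors):
>
> gen_limited <- function(nassets, max.pos, min, max) {
>   k <- sample(1:max.pos, 1)              # random number of non-zero positions
>   idx <- sample(1:nassets, k)            # which assets get non-zero weights
>   w <- rep(0, nassets)
>   w[idx] <- runif(k, min[idx], max[idx]) # draw within each asset's box
>   w
> }
>
> The leverage (min_sum/max_sum) adjustment would still need to be
> applied afterwards, since nothing here controls the sum of the weights.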
>
>
> Regards,
> Ross
>
>
> On Sat, Jun 29, 2013 at 8:45 AM, Brian G. Peterson <brian at braverock.com> wrote:
>
> Based on side conversations with Ross and Peter, I thought I should
> talk a little bit about next steps related to the mapping function.
>
> Apologies for the long email, I want to be complete, and I hope that
> some of this can make its way to the documentation.
>
> The purpose of the mapping function is to transform a weights vector
> that does not meet all the constraints into a weights vector that
> does meet the constraints, if one exists, hopefully with a minimum
> of transformation.
>
> In the random portfolios code, we've used a couple of techniques
> pioneered by Pat Burns. The philosophical idea is that your optimum
> portfolio is most likely to exist at the edges of the feasible space.
>
> At the first R/Finance conference, Pat used the analogy of a
> mountain lake, where the lake represents the feasible space. With a
> combination of lots of different constraints, the shore of the lake
> will not be smooth or regular. The lake (the feasible space) may
> not take up a large percentage of the terrain.
>
> If we randomly place rocks anywhere in the terrain, some of them
> will land in the lake, inside the feasible space, but most will land
> outside, on the slopes of the mountains that surround the lake. The
> goal should be to nudge these towards the shores of the lake (our
> feasible space).
>
> Having exhausted the analogy, let's talk details.
>
> A slightly more rigorous treatment of the problem is given here:
> http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1680224
> It is possible that we can use this method directly for random
> portfolios (and that we could add the extra constraint types to
> DEoptim). If so, much of the rest of what I'll write here is
> irrelevant. I strongly suspect that there will be some constraint
> types that will still need to be 'adjusted' via a mapping method
> like the one laid out below, since a stochastic solver will hand us
> a vector that needs to be transformed at least in part to move into
> the feasible space. It's also not entirely clear to me that the
> methods presented in the paper can satisfy all our constraint types.
>
>
> I think our first step should be to test each constraint type, in
> some sort of hierarchy, starting with box constraints (almost all
> solvers support box constraints, of course), since some of the other
> transformations will violate the box constraints, and we'll need to
> transform back again.
>
> Each constraint can be evaluated as a logical expression against the
> weights vector. You can see code for doing something similar with
> time series data in the sigFormula function in quantstrat. It takes
> advantage of some base R functionality that can treat an R object
> (in this case the weights vector) as an environment or 'frame'. This
> allows the columns of the data to be addressed without any major
> manipulation, simply by column name (asset name in the weights
> vector, possibly after adding names back in).
>
> The code looks something like this:
> eval(parse(text=formula), data)
>
> So, 'data' is our weights vector, and 'formula' is an expression
> that can be evaluated as a formula by R. Evaluating this formula
> will give us TRUE or FALSE to denote whether the weights vector is
> in compliance or in violation of that constraint. Then, we'll need
> to transform the weight vector, if possible, to comply with that
> constraint.
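> As a toy illustration (the constraint string here is made up, not a
> final syntax, and the weights vector probably needs to be coerced to a
> list or data.frame to serve as the 'data' argument):
>
> w <- c(AAPL = 0.45, MSFT = 0.35, IBM = 0.20)
> constraint <- "AAPL + MSFT <= 0.70"   # a hypothetical group-style upper bound
> eval(parse(text = constraint), as.list(w))
> # [1] FALSE  -> this weights vector violates the constraint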
>
> Specific Cases:
> I've implemented this transformation for box constraints in the
> random portfolios code. We don't need the formula evaluation
> described above for box constraints, because each single weight is
> handled separately.
>
> min_sum and max_sum leverage constraints can be evaluated without the
> formula machinery, since the check is simple enough to express
> directly in R code. The transformation can be accomplished by scaling
> the entire vector; there's code to do this in both the random
> portfolios code and in constrained_objective. It is probably
> preferable to do the transformation one weight at a time, as I do in
> the random portfolios code, to end up closer to the edges of the
> feasible space while continuing to take the box constraints into
> account.
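> Roughly, the one-weight-at-a-time version looks like this (simplified
> from the random portfolios code, not the actual implementation; real
> code also needs to guard sample() against empty or length-one input):
>
> # nudge randomly chosen weights, within their boxes, until the sum
> # falls inside [min_sum, max_sum]
> while (sum(w) < min_sum) {
>   i <- sample(which(w < max), 1)             # a weight with room to grow
>   w[i] <- w[i] + runif(1, 0, max[i] - w[i])
> }
> while (sum(w) > max_sum) {
>   i <- sample(which(w > min), 1)             # a weight with room to shrink
>   w[i] <- w[i] - runif(1, 0, w[i] - min[i])
> }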
>
> linear (in)equality constraints and group constraints can be
> evaluated generically via the formula method I've described above.
> Then individual weights can be transformed taking the value of the
> constraint (<,>,=) into account (along with the box constraints and
> leverage constraints).
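> For example, for a group whose total weight exceeds its upper bound,
> the adjustment might look something like this (illustrative only):
>
> grp <- c(1, 2, 5)                        # indices of the assets in one group
> if (sum(w[grp]) > cUP) {
>   w[grp] <- w[grp] * (cUP / sum(w[grp])) # scale the group down to cUP
> }
> # then re-check box and leverage constraints, which the scaling may
> # have disturbed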
>
> and so on...
>
> Challenges:
> - recovering the transformed vector from an optimization solver that
> doesn't directly support a mapping function. I've got some tricks
> for this using environments that we can revisit after we get the
> basic methodology working.
>
> - allowing for progressively relaxing constraints when the
> constraints are simply too restrictive. Perhaps Doug has some
> documentation on this, as he's done it in the past, or perhaps we
> can simply deal with it in the penalty part of constrained_objective().
>
> Hopefully this was helpful.
>
> Regards,
>
> Brian
>
> --
> Brian G. Peterson
> http://braverock.com/brian/
> Ph: 773-459-4973
> IM: bgpbraverock
>
> _______________________________________________
> GSoC-PortA mailing list
> GSoC-PortA at lists.r-forge.r-project.org
> http://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/gsoc-porta
>
--
Brian G. Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock