Four Essays on Robustification of Portfolio Models

##### Abstract

The financial crisis has shown that quantitative asset allocation and risk management models are not yet sufficiently understood. Since the seminal paper of Markowitz (1952), the academic portfolio management literature has relied on sample estimates of return moments, which lead to extreme and unstable portfolio weights. Britten-Jones (1999) and others demonstrated the poor empirical performance of the standard Markowitz model. As the mean in particular is difficult to estimate, the literature often relies on the minimum variance portfolio (for a sensitivity analysis with respect to changes in the mean, see Best and Grauer, 1991a). Still, the estimation error in the covariance matrix is large (Chan et al., 1999). To reduce it, Ledoit and Wolf (2004) suggest a shrinkage approach.
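
A minimal sketch of the two ingredients above: shrinking the sample covariance toward a scaled identity target in the spirit of Ledoit and Wolf (2004), and the closed-form minimum variance weights. The fixed shrinkage intensity `delta` and the simulated returns are illustrative assumptions; Ledoit and Wolf derive a data-driven optimal intensity instead.

```python
import numpy as np

def shrink_covariance(returns, delta=0.3):
    """Shrink the sample covariance toward a scaled identity target.

    delta is a fixed illustrative shrinkage intensity; Ledoit and Wolf
    (2004) derive a data-driven optimal intensity instead.
    """
    S = np.cov(returns, rowvar=False)      # sample covariance
    mu = np.trace(S) / S.shape[0]          # average sample variance
    target = mu * np.eye(S.shape[0])       # shrinkage target
    return delta * target + (1 - delta) * S

def min_variance_weights(cov):
    """Closed-form minimum variance portfolio: w = S^-1 1 / (1' S^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(250, 5))  # simulated daily returns
cov = shrink_covariance(returns)
w = min_variance_weights(cov)
print(w.sum())  # weights sum to one
```

The shrinkage trades a little bias for a large reduction in estimation variance, which keeps the inverted covariance matrix (and hence the weights) stable.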

Several regularization procedures have been proposed. Jagannathan and Ma (2003) advocate short-selling restrictions. Brodie et al. (2009) generalize this approach by penalizing short positions: they introduce a Lasso penalty on the norm of the portfolio weights. Frahm and Memmel (2010) study stabilizing portfolio weights by shrinking them directly towards a predefined target. Despite numerous efforts, DeMiguel et al. (2009b) show that naive strategies such as the equally weighted portfolio are difficult to outperform. Clearly, it is not yet fully understood under which circumstances portfolio optimization fails. This thesis advocates and contributes to the development of alternatives and extensions to current portfolio optimization procedures. One way to reduce estimation risk is to combine different asset allocation models. Model combination in a forecasting context is analyzed in Chapter 1; its transfer to asset allocation is presented in Chapter 2. The combined model turns out to perform stably and does not suffer (strongly) from misspecification of the individual models. Other possibilities are alternative asset allocation strategies or improvements upon current optimization procedures. Chapter 3 advocates a portfolio strategy, called Minimax, which does not suffer from estimation risk in the returns’ moments. We show that Minimax can be considered an alternative to Markowitz portfolio optimization. An extension to standard portfolio optimization is given in Chapter 4: we introduce a penalty term on the norm of the portfolio weights to prevent extreme asset allocations. The penalty is shown to improve the performance of standard asset allocation strategies.

Chapter 1, a joint paper with Fabian Krüger, analyzes the performance of model combination. The performance of a probabilistic forecast model is commonly measured by strictly proper scoring rules (Gneiting and Raftery, 2007). It has been shown that some popular scoring rules are concave functions of the forecast. By Jensen’s inequality, the average score is then necessarily smaller (i.e. worse) than the score of the average forecast. This feature is often related to the good empirical performance of forecast combination compared to the individual forecasts: the success of forecast combination is partly a consequence of the forecast evaluation methodology. We generalize the literature by showing that (smooth and strictly proper) scoring rules cannot be entirely convex, may be entirely concave, and are at least locally concave around the true probability. The finding implies that if the forecasts are sufficiently close to the true probability, the performance of the average forecast is at least as good as the average performance of the individual models. Concavity depends on the true probability, which is not known in practice. For a given set of forecasts, we therefore suggest deriving the range of true probabilities for which concavity holds. As an example, we analyze the prediction of US recessions based on a Probit model and a survey of experts. We find that the range of probabilities under which the (spherical) score is concave is typically much larger than the interval defined by the two forecasts. Further, we find that the ex-post better model and the combined model significantly outperform the ex-post worse model, while the ex-post better model and the combined model are statistically indistinguishable. We conclude that model combination is rewarding for most scenarios and scoring rules.
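
The Jensen's inequality argument can be checked numerically. The sketch below uses the logarithmic score (concave in the forecast probability) for a binary event; the two forecast probabilities are hypothetical numbers, not values from the chapter's recession application.

```python
import math

def log_score(p, outcome):
    """Logarithmic score for a binary forecast (higher is better)."""
    return math.log(p) if outcome == 1 else math.log(1.0 - p)

# Two hypothetical forecasts of the same event, which occurs (outcome = 1).
p1, p2 = 0.6, 0.8
combined = 0.5 * (p1 + p2)          # equally weighted forecast combination

avg_of_scores = 0.5 * (log_score(p1, 1) + log_score(p2, 1))
score_of_avg = log_score(combined, 1)

# Jensen's inequality for a concave score: the combined forecast scores
# at least as well as the average score of the individual forecasts.
print(score_of_avg >= avg_of_scores)  # True
```

Because log is concave everywhere, the inequality here holds for any pair of forecasts; for rules that are only locally concave, as the chapter shows, it is guaranteed only near the true probability.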

Chapter 2 transfers the insights gained in Chapter 1 to portfolio optimization. Under a (strictly) concave performance function, the average model performs better than the average of the individual models’ performances. We find that many performance measures used in asset allocation are concave, while others are concave under certain conditions. So far, the literature has been hunting for one single “true” asset allocation model. We run a large empirical study of the average model of several well-known asset allocation models: five models from three model classes, applied to six different data sets and evaluated by five different performance measures. We find that no single model consistently outperforms the others. The ranking of the models depends on the performance measure as well as on the chosen data set, and even for a given performance measure and data set, the ranking changes strongly over time. In calm periods, sophisticated models outperform naive models; in rough periods, data-ignorant models outperform sophisticated models. With changing model rankings and concave performance measures, the average model has to perform well by construction. Our empirical study confirms this theoretical conclusion: the average model performs almost as well as the ex-post best model.
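
In portfolio terms, the "average model" simply averages the weight vectors of the candidate allocation models. The sketch below combines two illustrative stand-ins (the 1/N portfolio and the minimum variance portfolio) on simulated data; the thesis's actual study uses five models, six data sets and five performance measures.

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0004, 0.01, size=(250, 4))  # simulated daily returns
n = returns.shape[1]

# Candidate model 1: equally weighted (1/N) portfolio.
w_naive = np.full(n, 1.0 / n)

# Candidate model 2: minimum variance portfolio from the sample covariance.
cov = np.cov(returns, rowvar=False)
w_mv = np.linalg.solve(cov, np.ones(n))
w_mv /= w_mv.sum()

# The "average model": equally weighted combination of the models' weights.
w_avg = 0.5 * (w_naive + w_mv)
print(np.isclose(w_avg.sum(), 1.0))  # combined weights stay fully invested
```

Since each candidate's weights sum to one, any convex combination of them does as well, so the average model is itself a valid fully invested portfolio.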

Chapter 3, a joint paper with Steffen Schaarschmidt, proposes a portfolio strategy called Minimax. The strategy deviates from classical risk measures and defines risk in terms of the worst-case scenario. Common symmetric risk measures have undesirable properties: first, large positive returns should not be considered risk; second, rare extreme losses may receive too little attention. In view of the recent economic turmoil, investors may prefer our conservative risk measure. Additionally, our approach circumvents the estimation of unstable means as well as of a large covariance matrix. A typical investor might be a corporation, a pension fund, or a bank that has to implement daily risk management and aims to minimize daily investment losses. A second type is an investor facing mark-to-market accounting, who aims to minimize margin calls when the portfolio falls short of a certain level. The Minimax strategy is a “pessimistic” trading strategy: it chooses the portfolio weights such that the portfolio payoff is maximized for the worst outcome. Our contribution is to show that the Minimax portfolio is implementable for a multi-asset investor. In our empirical study, we use US stock, bond, real estate and commodity indexes to construct portfolios with yearly holding periods. We compare the performance of the Minimax portfolio to the asset allocation of a typical pension fund, as well as to alternative strategies such as the equally weighted portfolio, the Minimum Variance portfolio and the (short-selling restricted) Mean Variance portfolio. We find that the Minimax portfolio performs well compared to the benchmarks considered.
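
The max–min problem behind such a strategy can be written as a linear program: maximize a scalar t subject to every scenario payoff being at least t. The sketch below, on simulated return scenarios, is a generic minimax formulation under a long-only, fully invested constraint, not the chapter's exact specification.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_weights(R):
    """Minimax portfolio: maximize the worst-case scenario payoff.

    R is a (scenarios x assets) matrix, each row one return scenario.
    Solves max_w min_s R[s] @ w  s.t. sum(w) = 1, w >= 0, via the LP:
    max t  s.t.  R @ w >= t.
    """
    n_scen, n_assets = R.shape
    # Decision variables: [w_1, ..., w_n, t]; linprog minimizes, so use -t.
    c = np.zeros(n_assets + 1)
    c[-1] = -1.0
    # t - R[s] @ w <= 0 for every scenario s.
    A_ub = np.hstack([-R, np.ones((n_scen, 1))])
    b_ub = np.zeros(n_scen)
    # Full investment: sum of weights equals one (t has coefficient 0).
    A_eq = np.append(np.ones(n_assets), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * n_assets + [(None, None)]  # t is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:n_assets]

rng = np.random.default_rng(2)
R = rng.normal(0.0003, 0.01, size=(100, 4))  # simulated return scenarios
w = minimax_weights(R)
print(w.sum())  # fully invested, no short positions
```

Note that the optimization needs only the scenario matrix itself, which is why the strategy sidesteps estimating means and a covariance matrix.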

Chapter 4, a joint paper with Prof. Dr. Pohlmeier, considers a recent regularization approach for portfolio weights. To stabilize portfolio weights, Jagannathan and Ma (2003) find that the no-short-sale constraint works well in portfolio optimization. Still, the approach is too restrictive, as moderate short positions can enhance portfolio performance (Fan, 2010). Brodie et al. (2009) introduce an L1-norm restriction on the portfolio weights. In the statistics literature, this type of restriction was introduced as the “Lasso” by Tibshirani (1996). The restriction can be interpreted as a penalization of the portfolio’s short positions. Naturally, the question arises how to determine the optimal penalty level for short-selling. Our contribution is a rule-of-thumb for the penalty level. The resulting “Lasso portfolio” is easy to implement and asymptotically optimal. It performs well in a simulation study. In an empirical study, we find that the Lasso portfolio outperforms the no-short-sale Mean Variance portfolio, the unrestricted Mean Variance portfolio as well as various alternative strategies proposed in the literature.
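
A minimal sketch of an L1-penalized minimum variance problem in the spirit of Brodie et al. (2009): splitting the weights into positive and negative parts makes the objective smooth, so a general-purpose constrained solver applies. The penalty level `lam` is a hypothetical illustration, not the rule-of-thumb derived in the chapter.

```python
import numpy as np
from scipy.optimize import minimize

def lasso_portfolio(returns, lam=0.01):
    """Minimum variance portfolio with an L1 penalty on the weights.

    Solves min_w  w' S w + lam * ||w||_1  s.t. 1'w = 1, by splitting
    w = u - v with u, v >= 0 so the objective is smooth. lam is an
    illustrative penalty level, not the thesis's rule-of-thumb.
    """
    S = np.cov(returns, rowvar=False)
    n = S.shape[0]

    def objective(x):
        u, v = x[:n], x[n:]
        w = u - v
        return w @ S @ w + lam * (u.sum() + v.sum())

    # Full investment: sum(u) - sum(v) = 1.
    cons = {"type": "eq", "fun": lambda x: x[:n].sum() - x[n:].sum() - 1.0}
    bounds = [(0.0, None)] * (2 * n)
    x0 = np.concatenate([np.full(n, 1.0 / n), np.zeros(n)])
    res = minimize(objective, x0, bounds=bounds, constraints=cons,
                   method="SLSQP")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(3)
returns = rng.normal(0.0005, 0.01, size=(250, 5))
w = lasso_portfolio(returns)
print(round(w.sum(), 6))  # fully invested
```

With the budget constraint 1'w = 1, the L1 norm equals 1 plus twice the total short position, so raising `lam` penalizes shorts: `lam = 0` recovers the unrestricted minimum variance portfolio, while a very large `lam` drives the solution toward the no-short-sale case.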

##### Cite

## ISO 690

SCHANBACHER, Peter, 2013. *Four Essays on Robustification of Portfolio Models* [Dissertation]. Konstanz: University of Konstanz

## BibTex

@phdthesis{Schanbacher2013Essay-25947,
  year    = {2013},
  title   = {Four Essays on Robustification of Portfolio Models},
  author  = {Schanbacher, Peter},
  address = {Konstanz},
  school  = {Universität Konstanz}
}
