2009
I review 150 textbooks on corporate finance and valuation published between 1979 and 2009 by authors such as Brealey, Myers, Copeland, Damodaran, Merton, Ross, Bruner, Bodie, Penman, Arzac… and find that their recommendations regarding the equity premium range from 3% to 10%, and that 51 of the books use different equity premia on different pages. The 5-year moving average of the recommended premium has declined from 8.4% in 1990 to 5.7% in 2008 and 2009.
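The 5-year moving average the abstract cites is a simple trailing mean over the yearly averages of recommended premia. A minimal sketch, using made-up yearly figures rather than the paper's actual series:

```python
# Hypothetical yearly averages of textbook-recommended equity premia (%);
# these numbers are illustrative only, not the paper's data.
premia = {1990: 8.4, 1991: 8.2, 1992: 8.0, 1993: 7.9, 1994: 7.7}

def moving_average(series, window=5):
    """Trailing moving average over a dict of year -> value."""
    years = sorted(series)
    out = {}
    for i in range(window - 1, len(years)):
        win = years[i - window + 1 : i + 1]          # the last `window` years
        out[years[i]] = sum(series[y] for y in win) / window
    return out

print({y: round(v, 2) for y, v in moving_average(premia).items()})  # {1994: 8.04}
```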
2009
We report 1,466 answers from European managers and professors. 1,143 respondents use a required return, and 824 use betas to calculate it. 44.4% of the managers and 8% of the professors do not use beta to justify the required return. Only 2% of the professors and 15% of the managers justify the beta using exclusively personal judgement (named qualitative, common-sense, intuitive, and logical-magnitude betas by different respondents).
2009
Value-at-Risk (VaR) is a powerful tool for assessing market risk in real time, a critical insight when making trading and hedging decisions. The VaR Modeling Handbook is the most complete, up-to-date reference on the subject for today's savvy investors, traders, portfolio managers, and other asset and risk managers.
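As a rough illustration of the measure the handbook covers, here is a minimal historical-simulation VaR estimate; the return sample and confidence level are hypothetical, not from the book:

```python
import random

# Minimal historical-simulation VaR sketch (illustrative only).
def historical_var(returns, confidence=0.99):
    """Positive loss threshold exceeded with probability 1 - confidence."""
    losses = sorted(-r for r in returns)           # convert returns to losses
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]                             # simple empirical quantile

random.seed(0)
sample = [random.gauss(0.0, 0.02) for _ in range(10_000)]  # simulated daily returns
print(f"99% one-day VaR: {historical_var(sample):.2%}")
```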
2009
Assessing the systemic risk of a heterogeneous portfolio of banks during the recent financial crisis
This paper extends the approach of measuring and stress-testing the systemic risk of a banking sector in Huang, Zhou, and Zhu (2009) to identifying various sources of financial instability and to allocating systemic risk to individual financial institutions.
2008
Extreme climatic events present society with significant challenges in a rapidly warming world. Ordinary citizens, the insurance industry and governments are concerned about the apparent increase in the frequency of weather and climate events causing extreme and, in some instances, catastrophic impacts.
2008
Starting from the reward-risk model for portfolio selection introduced in De Giorgi (2005), we derive the reward-risk Capital Asset Pricing Model (CAPM) analogously to the classical mean-variance CAPM. In contrast to the mean-variance model, reward-risk portfolio selection arises from an axiomatic definition of reward and risk measures based on a few basic principles, including consistency with second-order stochastic dominance.
2008
Over the last few years, the valuation of life insurance contracts using concepts from financial mathematics has become a popular research area for actuaries as well as financial economists. In particular, several methods have been proposed for modelling and pricing participating policies, which are characterized by an annual interest-rate guarantee and some bonus distribution rules.
2008
Claims reserving is central to the insurance industry. Insurance liabilities depend on a number of different risk factors which need to be predicted accurately. This prediction of risk factors and outstanding loss liabilities is the core for pricing insurance products, determining the profitability of an insurance company and for considering the financial strength (solvency) of the company.
2008
When analyzing catastrophic risk, traditional measures for evaluating risk, such as the probable maximum loss (PML), value at risk (VaR), tail VaR (TVaR), and others, can become practically impossible to obtain analytically in certain types of insurance, such as earthquake insurance. Given the available information, it can be very difficult for an insurer to measure this risk.
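When no analytic form exists, VaR and TVaR are typically estimated from simulated annual losses. A minimal sketch with an illustrative compound frequency/severity model (all parameters made up):

```python
import random

# Empirical VaR and TVaR from simulated losses (illustrative sketch).
def var_tvar(losses, level=0.99):
    """VaR is the empirical quantile; TVaR averages losses at or above it."""
    xs = sorted(losses)
    idx = int(level * len(xs))
    var = xs[min(idx, len(xs) - 1)]
    tail = xs[idx:] or [var]
    return var, sum(tail) / len(tail)

random.seed(1)
# Crude compound model: small random event count, heavy-tailed severities.
annual = []
for _ in range(20_000):
    n = sum(random.random() < 0.1 for _ in range(3))            # event count
    annual.append(sum(random.paretovariate(2.5) for _ in range(n)))

var, tvar = var_tvar(annual)
print(f"VaR(99%) = {var:.2f}, TVaR(99%) = {tvar:.2f}")
```

Because TVaR averages the whole tail beyond the quantile, it always sits at or above VaR, which is one reason it is preferred for heavy-tailed perils.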
2008
For a life reinsurer, understanding mortality risk is mission-critical. Superior models of mortality risk lead to more competitive pricing in profitable market segments and more efficient use of capital. Surprisingly, modern statistical tools such as predictive modeling have not previously been used to model mortality data to gain this edge.
2008
Personal financial decision making plays an important role in modern finance. In this article, decision problems about consumption and insurance are modelled in a continuous-time multi-state Markovian framework. The optimal solution is derived and studied.
2008
This paper presents a Bayesian stochastic loss reserve model with the following features.
1. The model for expected loss payments depends upon unknown parameters that determine the expected loss ratio for the given accident years and the expected payment for each settlement lag.
2. The distribution of outcomes is given by the collective risk model in which the expected claim severity increases with the settlement lag.
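Feature 2 can be illustrated with a small simulation in which expected claim severity grows with settlement lag; every parameter below is hypothetical, not the paper's:

```python
import random

# Illustrative collective-risk simulation: expected claim severity
# increases with settlement lag.  All parameters are made up.
def mean_payment(expected_count, base_severity, lag, trend=0.05, sims=10_000):
    """Average simulated aggregate payment for one settlement lag."""
    random.seed(42)
    mean_sev = base_severity * (1 + trend) ** lag        # severity rises with lag
    total = 0.0
    for _ in range(sims):
        n = sum(random.random() < expected_count / 100 for _ in range(100))
        total += sum(random.expovariate(1 / mean_sev) for _ in range(n))
    return total / sims

early = mean_payment(expected_count=5, base_severity=1_000, lag=1)
late = mean_payment(expected_count=5, base_severity=1_000, lag=8)
print(f"lag 1: {early:,.0f}  lag 8: {late:,.0f}")
```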
2008
Motivation: Chain ladder forecasts are notoriously volatile for immature exposure periods. The Bornhuetter-Ferguson method is one commonly used alternative but needs a priori estimates of ultimate losses.
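The Bornhuetter-Ferguson blend itself is a one-line calculation once the a priori ultimate and a development factor are in hand; a sketch with illustrative figures:

```python
# Bornhuetter-Ferguson: blend reported losses with an a priori ultimate.
# The figures below are illustrative, not from the paper.
def bornhuetter_ferguson(reported, apriori_ultimate, cdf):
    """BF ultimate = reported + a priori ultimate * expected unreported fraction.

    cdf is the cumulative development factor to ultimate; 1/cdf is the
    expected fraction of ultimate losses reported to date.
    """
    pct_unreported = 1 - 1 / cdf
    return reported + apriori_ultimate * pct_unreported

# Immature year: only 1/4 of losses expected reported so far (cdf = 4.0).
print(bornhuetter_ferguson(reported=250_000, apriori_ultimate=1_000_000, cdf=4.0))
# 250_000 + 1_000_000 * 0.75 = 1_000_000.0
```

Because the unreported piece comes from the a priori estimate rather than from projecting the thin reported amount, the result is far more stable than a pure chain-ladder projection for immature periods.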
2008
Clustering methods are briefly reviewed in this paper and their applications in insurance ratemaking are discussed. First, the reasons for clustering and the considerations in choosing a clustering method for ratemaking are examined. Then the methods themselves are reviewed, with particular attention to the problems that arise when they are applied directly to insurance ratemaking.
2008
Motivation. Territory as it is currently implemented is not a causal rating variable. The actual causal forces that drive the geographical loss generating process (LGP) do so in a complicated manner. Both the loss cost gradient (LCG) and information density (largely driven by the geographical density of exposures and by loss frequency) can change rapidly, and at different rates and in different directions.
2008
Geographic risk is a primary rating variable for personal lines insurance in the United States. Creating homogeneous groupings of geographic areas is the goal in defining rating territories. One methodology that can be used for creating these groupings with similar exposure to the risk of insurance losses is cluster analysis. This paper gives a description of an application to define rating territories using a k-means partition cluster analysis.
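A toy version of the k-means idea, clustering hypothetical per-area loss costs into three territories; this is a pure-Python Lloyd iteration with a deterministic, quantile-style initialization, not the paper's implementation:

```python
# Toy 1-D k-means (Lloyd's algorithm) over per-area loss costs.
# Data and initialization scheme are illustrative only.
def kmeans_1d(values, k, iters=20):
    xs = sorted(values)
    # Deterministic init: spread the k starting centers across the sorted data.
    centers = [xs[i * (len(xs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in xs:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[nearest].append(v)               # assign to closest center
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]   # recompute centers
    return sorted(centers)

# Hypothetical per-area pure premiums grouped into 3 rating territories.
loss_costs = [100, 105, 110, 240, 250, 255, 260, 480, 500, 520]
print(kmeans_1d(loss_costs, k=3))  # [105.0, 251.25, 500.0]
```

Each returned center is the mean loss cost of one territory; areas are then rated by the territory whose center they fall closest to.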
2008
In this paper, we describe a general process on how to integrate different types of predictive models within an organization to fully leverage the benefits of predictive modeling. The three major predictive modeling applications discussed in this paper are marketing, pricing, and underwriting models.
2008
The current financial crisis is the result of the interplay of changing circumstances, since the 1970s, on Main Street, Wall Street and internationally.
2008
Crises are devastating. They leave behind a high amount of entropy (unemployment, poverty, lack of safety). But this is only half the truth.
2008
The present crisis in global financial markets has created an impression that enterprise risk management (ERM) has failed broadly to protect the safety and soundness of the financial system as well as that of many institutions, including insurance companies.
2008
Risk management can be a thankless profession. In bull markets, risk managers are often viewed as wet blankets who, as some might say, “take away the punch bowl just when the party starts getting interesting.” Upon a turn for the worse, people wonder why they weren’t warned earlier. Even when successful risk managers limit losses, their recognition is somehow lacking.
2008
There are certain aspects of the world of finance, and more particularly of the world of loans, that seem obvious, if only to the non-professional. For example, any time you lend money, to anyone, there is some risk that you might not get it back.
2008
Over the past two decades, capital regimes for financial intermediaries have evolved from simple, static estimates toward dynamic, risk-based methodologies that reflect the real nature, extent and mix of risks to which an organization is exposed. This evolution is undoubtedly a good thing, helping companies maintain capital in proper proportion to risk and, hopefully, stay solvent through extreme conditions.
2008
Given the low incidence rate, the use of decision-tree techniques like Classification and Regression Trees (CART) in understanding credit or operational risk becomes quite challenging. A commonly adopted solution is a biased sampling approach, where more weight is attached to bad customers to artificially raise the incidence or bad rate. When adopting this type of biased sampling approach, the question of the best weight arises.
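The weight needed to lift a raw bad rate to a chosen target rate follows from simple odds arithmetic; a sketch with hypothetical rates (the paper's question of the *best* target is left open):

```python
# Biased-sampling weight arithmetic (illustrative rates, not from the paper).
# To raise a raw bad rate p to a target rate t, each bad case gets weight w with
#   w*p / (w*p + (1 - p)) = t   =>   w = t*(1 - p) / (p*(1 - t))
def bad_case_weight(raw_bad_rate, target_bad_rate):
    p, t = raw_bad_rate, target_bad_rate
    return t * (1 - p) / (p * (1 - t))

# Hypothetical portfolio: 2% observed bad rate, re-weighted to a 20% bad rate.
w = bad_case_weight(0.02, 0.20)
print(round(w, 2))  # 12.25
```

Plugging the weight back in confirms the re-weighted bad rate hits the target, which is the sanity check to run before growing the tree on the re-weighted sample.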