The canonical approach to forecasting a credit migration matrix is an econometric model: the one-factor approach described in Belkin et al. (1998). It conditions migration (transition) matrices on a systematic component representing the “credit cycle,” which relates economic conditions to the credit quality of a loan portfolio. The credit cycle can be thought of as the historical pattern of credit ratings shared by all borrowers in a sector or economy.
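To make the idea concrete, here is a minimal sketch of one-factor conditioning in Python. This is not the post’s implementation: the transition matrix, the credit-cycle score `z`, and the systematic weight `rho` are all invented for illustration. Each row of the unconditional matrix is converted to standard-normal thresholds; shifting those thresholds by `rho * z` produces a conditional matrix for a good (`z > 0`) or bad (`z < 0`) point in the cycle.

```python
import numpy as np
from statistics import NormalDist

_ppf = np.vectorize(NormalDist().inv_cdf)  # inverse normal CDF, elementwise
_cdf = np.vectorize(NormalDist().cdf)

def condition_matrix(P, z, rho=0.24):
    """Shift an unconditional migration matrix P (rows: from-rating, columns
    ordered best-to-worst) by a credit-cycle score z. rho is the systematic
    weight; both values here are purely illustrative."""
    P = np.asarray(P, dtype=float)
    # Cumulative probabilities from the worst rating leftward define the
    # latent asset-return thresholds of each rating bin under a standard normal.
    cum = np.clip(np.cumsum(P[:, ::-1], axis=1), 1e-12, 1 - 1e-12)
    thresholds = _ppf(cum)
    # With X = rho*z + sqrt(1 - rho^2)*eps, conditional bin probabilities come
    # from the shifted and rescaled thresholds.
    shifted = _cdf((thresholds - rho * z) / np.sqrt(1 - rho ** 2))
    zeros = np.zeros((P.shape[0], 1))
    cond = np.diff(np.concatenate([zeros, shifted], axis=1), axis=1)
    return cond[:, ::-1]  # restore best-to-worst column order

P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
downturn = condition_matrix(P, z=-1.0)  # z < 0: downgrades become more likely
```

In a downturn scenario (`z < 0`), probability mass shifts toward the downgrade columns while each row still sums to one.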
Artificial Intelligence (AI) and Data Science continue their progression toward becoming mainstream and ubiquitous. This is a very exciting time for scientists, model developers, programmers, and a lot of other technically inclined professionals. But to be honest, it can be confusing and overwhelming at times. We all hear terms like “AI”, “Data Science”, “Big Data”, “Machine Learning”, “Statistical Learning”, “Data Mining”, “Deep Learning”, etc., and it’s often hard to make sense of it all, even for those of us who have been writing code to implement statistical models for decades. Yet these terms are being used by people in every field and every industry. How do remote sensing professionals use data from a satellite to create land cover maps? How do certain streaming services determine what shows or movies to recommend based on your watching habits? How did Cambridge Analytica determine which poor schmucks Donald Trump should focus on? The answers to all these questions lie in machine learning algorithms. (If interested, you can find more information on the differences between and definitions of all the terms mentioned above in various discussion threads on sites like Quora, StackExchange, LinkedIn, and KDnuggets, among others.)
This article will be a little more focused on the question: how can we use machine learning in areas of credit risk where statistical methods have traditionally been employed?
A nifty .NET library, ‘R.NET’, allows you to leverage previously written R scripts within your C# and other .NET applications. Experienced model developers often end up accumulating extremely useful R scripts that have been optimized to perform specific tasks. When you have a library of code you trust to perform operations as you expect, knowing you can reuse that code regardless of the development platform is invaluable. This is one of the benefits R.NET provides.
R.NET is just one of several methods you can use to establish an interface between C# (.NET) and R. The advantage of R.NET is that it enables the .NET Framework to ‘interoperate’ with the R statistical language in the same process (this is important because it avoids the overhead of passing data between separate processes). Also, the syntax is simple enough that anyone with a little experience in both R and .NET products can pretty easily use it.
In banking, Economic Capital (EC) is an internal measure of the capital required to absorb unexpected losses while remaining solvent at a targeted solvency level. It provides a common basis for comparing the risk-adjusted profitability and relative economic value of lines of business and asset classes with varying degrees and sources of risk. EC has various applications, including performance measurement, risk-adjusted pricing, capital allocation, capital adequacy, and risk concentration management. EC can be allocated at the loan, facility, or line-of-business level.
Economic Capital is statistically/quantitatively determined and designed to be sensitive to changes in loan characteristics (risk factors) arising from both systematic and idiosyncratic factors. Very often EC is calculated through Monte Carlo simulation, an analytical technique that performs a large number of random iterations, called simulations, to generate a statistical distribution of possible outcomes. In finance, Monte Carlo simulations are used to value and analyze complex instruments, portfolios, and investments by simulating the various sources of uncertainty affecting their value.
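As a rough sketch of how a Monte Carlo EC calculation works (not any bank’s actual model; the portfolio, correlation, and solvency target below are all invented for illustration): simulate correlated defaults many times, take the loss at the target solvency quantile, and subtract the expected loss to get the unexpected-loss cushion.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)

# Hypothetical three-loan portfolio: exposure at default, probability of
# default, and loss given default (all numbers invented for illustration).
ead = np.array([1_000_000.0, 500_000.0, 2_000_000.0])
pd_ = np.array([0.02, 0.05, 0.01])
lgd = np.array([0.45, 0.40, 0.50])

n_sims = 100_000
rho = 0.15  # asset correlation: systematic share of asset-value variance
thresholds = np.array([NormalDist().inv_cdf(p) for p in pd_])

# One-factor model: a loan defaults when its latent asset return, driven by a
# shared systematic factor z plus idiosyncratic noise, falls below its threshold.
z = rng.standard_normal((n_sims, 1))
eps = rng.standard_normal((n_sims, len(ead)))
assets = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
losses = ((assets < thresholds) * ead * lgd).sum(axis=1)

expected_loss = losses.mean()
loss_999 = np.quantile(losses, 0.999)        # loss at a 99.9% solvency target
economic_capital = loss_999 - expected_loss  # cushion for unexpected losses
```

The shared factor `z` is what makes losses “lumpy”: in bad draws several loans default together, which fattens the tail of the loss distribution and drives the capital number.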
This blog introduces my R package, RTransprob. The RTransprob package contains a set of functions that automate commonly used methods for estimating the migration matrices used in credit risk analysis. These include estimating migration and default rates via the duration and cohort methods, bootstrapping default rates, and forecasting/stress testing credit migrations via econometric and machine learning approaches.
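For readers unfamiliar with the cohort method mentioned above, here is a toy sketch of the idea (in Python, not RTransprob’s R API): count how many obligors moved from rating i to rating j between two snapshot dates, then divide each count by the number of obligors that started in rating i.

```python
import numpy as np

def cohort_migration_matrix(start_ratings, end_ratings, n_states):
    """Cohort estimator: P[i, j] = (transitions i -> j) / (obligors starting in i)."""
    counts = np.zeros((n_states, n_states))
    for s, e in zip(start_ratings, end_ratings):
        counts[s, e] += 1
    row_totals = counts.sum(axis=1, keepdims=True)
    # Guard against division by zero for ratings with no obligors at the start.
    return np.divide(counts, row_totals,
                     out=np.zeros_like(counts), where=row_totals > 0)

# Six hypothetical obligors rated at the start and end of one period
# (states 0 = best, 2 = default).
start = [0, 0, 0, 1, 1, 2]
end   = [0, 0, 1, 1, 2, 2]
P = cohort_migration_matrix(start, end, n_states=3)
# Row 0: two of three obligors stayed in state 0, one migrated to state 1.
```

The duration method, by contrast, uses the exact timing of each rating change to estimate a generator matrix, which captures within-period movements that the snapshot-based cohort method misses.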
So, you’ve written code in R that contains somewhat complicated loops, and the execution time is not as fast as you hoped. You turn to the profvis package in RStudio (or Rprof) to profile the R program, in the hope of finding the places in your code causing the bottleneck. The profiler flags a few areas for you to make more efficient, but unfortunately, no matter how many ‘loops’ you jump through, you can’t seem to reduce the execution time.
Next, you spend at least a couple of frustrating hours trying to figure out how to vectorize (think: higher-level programming to improve efficiency) the loops creating the bottleneck, to no avail. And it’s okay to admit it, we’ve all been there.
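For readers unsure what “vectorizing” a loop means, here is a minimal illustration. It is shown in Python/NumPy purely as an example; the same idea applies to R’s built-in vectorized operations: replace an interpreted element-by-element loop with a single array operation that runs in optimized compiled code.

```python
import numpy as np

x = np.arange(10_000, dtype=float)

# Loop version: one interpreter-level operation per element (slow).
def loop_sum_of_squares(arr):
    total = 0.0
    for v in arr:
        total += v * v
    return total

# Vectorized version: the loop happens inside the library's compiled code.
def vectorized_sum_of_squares(arr):
    return float(np.dot(arr, arr))

# Both compute the same quantity; the vectorized form is typically far faster.
assert abs(loop_sum_of_squares(x) - vectorized_sum_of_squares(x)) < 1.0
```

When an algorithm is inherently sequential (each iteration depends on the last), no amount of this rewriting helps, which is exactly the situation the next paragraph addresses.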
STOP! The solution may be to rewrite some of your key functions in C++.
For a quantitative analyst whose models are frequently scrutinized by Federal Reserve Bank examiners, the ability to quantify model risk is an important part of the model documentation process. Model risk is typically described as “. . . the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.”
Model risk quantification can be a tricky concept to grasp. But when we consider that models are nothing more than abstractions of real-life situations, it’s easier to see how there are risks associated with models, even when they perform exceptionally well in recreating those real-life scenarios.