The use of open source data modeling tools has become a topic of debate as large organizations, including government agencies and financial institutions, face increasing pressure to keep pace with technological innovation to maintain competitiveness. Organizations must be flexible in development and identify cost-efficient gains to reach their organizational goals, and choosing the right tools is crucial. They must often choose between open source software, i.e., software whose source code anyone can inspect and modify, and closed software, i.e., proprietary software with no permission to alter or distribute the underlying code.
VIENNA, Va., April 12, 2017 – RiskSpan, the data management, data applications, and predictive analytics firm that specializes in risk solutions for the mortgage, capital markets, and banking industries, today announced it is relocating its headquarters to the Rosslyn business district in Arlington, Virginia.
The financial industry has traditionally been slow to adopt the latest data and technology trends, and the case of open source software is no exception. While open source has been around for decades, we are only now seeing it take hold in the finance and mortgage industries. Many institutions are exploring the viability of open source but hesitate to act because of the risks it may expose them to.
VIENNA, Va., March 22, 2017 – RiskSpan, the data management, data applications, and predictive analytics firm that specializes in risk solutions for the mortgage, capital markets, and banking industries, today announced the launch of a new data and analytics API on the IBM Bluemix platform at IBM's InterConnect conference in Las Vegas.
Last month I highlighted the role of the front-end insurance risk share process in Credit Risk Transfer (CRT). I reviewed the current state of the front-end risk share model and noted the expanded efforts underway to broaden the pool of MIs and reinsurers serving as counterparties, which would expand the front-end offerings. This complements the back-end CRT program, which has found great success thus far. So, the key question for 2017 is: what does CRT look like in a post-housing-reform environment where much of the capital at risk is not a government credit guarantee but private capital?
The question of “build versus buy” is every bit as applicable and challenging to model validation departments as it is to other areas of a financial institution. With no “one-size-fits-all” solution, banks are frequently faced with a balancing act between the use of internal and external model validation resources. This article is a guide for deciding between staffing a fully independent internal model validation department, outsourcing the entire operation, or a combination of the two.
VIENNA, Va., March 9, 2017 – RiskSpan, the data management, data applications, and predictive analytics firm that specializes in risk solutions for the mortgage, capital markets, and banking industries, announced that it has been selected for HousingWire’s 2017 HW TECH100™ award.
This year saw the highest number of nominees in the history of HW TECH100™, which recognizes leading companies that bring tech innovation to the U.S. housing economy. Among this year’s winners are other industry-leading firms such as Accenture, CoreLogic, and Freddie Mac.
Models based on Machine Learning are being increasingly adopted by the finance community in general and the mortgage market in particular. The use of modeling and data analytics has been key in the turnaround of this market; however, anyone who has worked with mortgage loan data knows it is notorious for errors and data gaps. Despite industry-wide efforts to incorporate robust quality control programs, challenges with mortgage data persist. Fortunately, combining machine learning in finance with cloud computing shows promise in addressing mortgage data gaps and producing more accurate results than traditional approaches.
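As a concrete (and deliberately simplified) illustration of how machine learning can address mortgage data gaps, the sketch below imputes a missing debt-to-income (DTI) value from the most similar complete loans using a nearest-neighbor approach. The field names, scaling factors, and data are illustrative assumptions, not any firm's actual model.

```python
# Hypothetical sketch: impute a missing loan field (DTI) from similar loans
# using a simple k-nearest-neighbor approach on FICO and LTV.
# All field names and figures are illustrative.

def knn_impute_dti(loans, target, k=2):
    """Estimate a missing DTI as the average DTI of the k complete loans
    closest to `target` on (roughly normalized) FICO and LTV distance."""
    complete = [l for l in loans if l["dti"] is not None]

    def dist(a, b):
        # Scale FICO by 100 and LTV by 10 so neither feature dominates.
        return (((a["fico"] - b["fico"]) / 100) ** 2 +
                ((a["ltv"] - b["ltv"]) / 10) ** 2) ** 0.5

    neighbors = sorted(complete, key=lambda l: dist(l, target))[:k]
    return sum(l["dti"] for l in neighbors) / len(neighbors)

loans = [
    {"fico": 760, "ltv": 75, "dti": 30.0},
    {"fico": 755, "ltv": 80, "dti": 32.0},
    {"fico": 640, "ltv": 95, "dti": 45.0},
]
gap = {"fico": 750, "ltv": 78, "dti": None}
print(knn_impute_dti(loans, gap))  # average DTI of the two closest loans
```

In practice a production approach would use a richer feature set and a trained model rather than raw distance, but the principle is the same: infer a missing field from loans that look alike.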
VIENNA, Va., March 2, 2017 – RiskSpan, the data management, data applications, and predictive analytics firm that specializes in risk solutions for the mortgage, capital markets, and banking industries, today announced the addition of Faith Schwartz, Senior Advisor of Accenture Credit Services (ACS), to RiskSpan’s Board of Directors.
The FHFA issued an RFI to solicit feedback from stakeholders on proposals from the GSEs to adopt additional front-end credit risk transfer structures and to consider additional credit risk transfer policy issues. There is firm interest from investors in this new and growing risk transfer execution, reflecting their confidence in the underwriting and servicing of mortgage loans under new and improved GSE standards.
As we look forward to 2017 and the critical issues facing the nation’s housing finance system, one of the paramount matters will be the ongoing development of the Credit Risk Transfer (CRT) initiative.
In some respects, the OCC 2011-12/SR 11-7 mandate to verify model inputs could not be any more straightforward: “Process verification … includes verifying that internal and external data inputs continue to be accurate, complete, consistent with model purpose and design, and of the highest quality available.” From a logical perspective, this requirement is unambiguous and non-controversial. After all, the reliability of a model’s outputs cannot be any better than the quality of its inputs.
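The verification requirement quoted above lends itself naturally to automation. The sketch below shows one hypothetical way to encode completeness, plausibility, and consistency checks on a model input file; the field names and tolerance ranges are illustrative assumptions, not part of the OCC/SR guidance.

```python
# Illustrative input-verification sketch: flag incomplete, implausible, or
# inconsistent records before they reach the model. Field names and ranges
# are assumptions for illustration only.

def verify_inputs(records, required=("loan_id", "fico", "ltv"),
                  ranges={"fico": (300, 850), "ltv": (0, 200)}):
    """Return a list of human-readable findings; an empty list means clean."""
    findings = []
    for i, rec in enumerate(records):
        for field in required:                      # completeness
            if rec.get(field) is None:
                findings.append(f"row {i}: missing {field}")
        for field, (lo, hi) in ranges.items():      # accuracy / plausibility
            value = rec.get(field)
            if value is not None and not lo <= value <= hi:
                findings.append(f"row {i}: {field}={value} outside [{lo}, {hi}]")
    ids = [rec.get("loan_id") for rec in records]
    if len(ids) != len(set(ids)):                   # consistency: unique keys
        findings.append("duplicate loan_id values")
    return findings

rows = [{"loan_id": 1, "fico": 720, "ltv": 80},
        {"loan_id": 1, "fico": 900, "ltv": None}]
print(verify_inputs(rows))
```

Checks like these do not prove inputs are "of the highest quality available," but they make the routine portion of process verification repeatable and auditable.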
Even with CECL compelling banks to collect more internal loan data, we continue to emphasize profitability as the primary benefit of robust, proprietary, loan-level data. Make no mistake, the data template we outline below is for CECL modeling. CECL compliance, however, is a prerequisite to profitability. Also, while third-party data may suffice for some components of the CECL estimate, especially in the early years of implementation, reliance on third-party data can drag down profitability. Third-party data is often expensive to buy, may be unsatisfactory to an auditor, and can yield less accurate forecasts. Inaccurate forecasts mean volatile loss reserves and excessive capital buffers that dilute shareholder profitability. An accurate forecast built on internal data not only solves these problems but can also be leveraged to optimize loan screening and loan pricing decisions.
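To make "loan-level data" concrete, the sketch below shows a hypothetical month-by-month loan record of the general kind a CECL modeling dataset captures. The field list here is an illustrative assumption, not the template the article refers to.

```python
# Hypothetical loan-level, monthly snapshot record for CECL-style modeling.
# The fields shown are illustrative assumptions, not an actual template.

from dataclasses import dataclass

@dataclass
class LoanMonth:
    loan_id: str
    as_of_month: str        # e.g. "2017-03"
    orig_balance: float
    current_balance: float
    orig_fico: int
    current_ltv: float
    months_past_due: int    # 0 = current
    charged_off: bool       # loan charged off this month
    net_loss: float         # realized loss if charged off, else 0.0

rec = LoanMonth("A-1001", "2017-03", 250_000.0, 242_310.55,
                742, 78.4, 0, False, 0.0)
print(rec.loan_id, rec.current_balance)
```

Capturing each loan at monthly intervals like this supports lifetime loss estimation for CECL while also feeding the loan screening and pricing analytics the article describes.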
When someone asks you what a model validation is, what is the first thing you think of? If you are like most, you immediately think of performance metrics—those quantitative indicators that tell you not only whether the model is working as intended, but also how its performance and accuracy hold up over time and compare to other models. Performance testing is the core of any model validation and generally consists of the following components:
- Benchmarking
- Back-Testing
- Sensitivity Analysis
- Stress Testing
Sensitivity analysis and stress testing, while critical to any model validation’s performance testing, will be covered by a future article. This post will focus on the relative virtues of benchmarking versus back-testing—seeking to define what each is, when and how each should be used, and how to make best use of the results of each.
On this Veterans Day, I was reminded of the Urban Institute’s 2014 article on VA loan performance and its explanation of why VA loans outperform FHA loans. The article illustrated how VA loans outperformed comparable FHA loans even after controlling for key variables such as FICO, income, and DTI. The article further explained the structural differences and similarities between the veterans program and FHA loans—similarities that include owner occupancy, loan size, and low down payments.
As the amount of student loan debt continues to balloon, the subject is drawing growing national attention, along with attempts to better educate potential borrowers. However, there is clearly a need to further assist prospective college students and their families in making informed financial decisions about the true cost of higher education. With student loan repayment terms extending up to 25 years (and 30 years for consolidated loans), the long-term effect of this debt on students and the future of our nation is concerning.
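The effect of those long repayment terms is easy to quantify with the standard fixed-payment amortization formula. The balance and rate below are illustrative assumptions, not figures from the article.

```python
# Illustrative arithmetic: how stretching a student loan's term lowers the
# monthly payment but raises the total paid. Balance and rate are assumptions.

def monthly_payment(principal, annual_rate, years):
    """Standard fixed-payment amortization formula."""
    r = annual_rate / 12          # monthly rate
    n = years * 12                # number of payments
    return principal * r / (1 - (1 + r) ** -n)

principal, rate = 35_000, 0.06
for years in (10, 25):
    pmt = monthly_payment(principal, rate, years)
    print(f"{years}-yr term: ${pmt:,.0f}/mo, total paid ${pmt * years * 12:,.0f}")
```

Under these assumptions, extending the term from 10 to 25 years cuts the monthly payment by roughly 40% but adds more than $20,000 to the total cost of the loan, which is exactly the trade-off borrowers need to see before signing.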
Model risk management is a necessary undertaking for which model owners must prepare on a regular basis. Model risk managers frequently struggle to strike an appropriate cost-benefit balance in determining whether a model requires validation, how frequently a model needs to be validated, and how detailed subsequent and interim model validations need to be. The extent to which a model must be validated is a decision that affects many stakeholders in terms of both time and dollars. Everyone has an interest in knowing that models are reliable, but bringing the time and expense of a full model validation to bear on every model, every year is seldom warranted. Under what circumstances will a limited-scope validation do, and what should that validation look like?
We have identified four considerations that can inform your decision on whether a full-scope model validation is necessary...
The single family rental market has existed for decades as a thriving part of the U.S. housing market. Investment in single family homes for rental purposes has provided many opportunities for American “mom and pop” investors to build and maintain wealth, prepare for retirement, and hold residual cash-flow-producing assets. According to the National Rental Home Council (NRHC), citing Green Street Advisors’ “Single-Family Rental Primer” (June 6, 2016), as of year-end 2015 the single family rental market comprised approximately 13% (16 million detached single-family rentals) of all occupied housing and roughly 37% of the entire United States rental market.
Better Modeling Lowers Sample Requirements and Reliance on Proxy Data
Part one of a two-part series on CECL Data Requirements
With CECL implementation looming, many bankers are questioning whether they have enough internal loan data for CECL modeling. Ensuring your data is sufficient is a critical first step in meeting the CECL requirements; if it isn’t, you will need to find and obtain relevant third-party data. This article explains in plain English how to calculate statistically sufficient sample sizes to determine whether third-party data is required. More importantly, it shows modeling techniques that reduce the required sample size. Investing in the right modeling approach could ultimately save you the time and expense of obtaining third-party data.
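For a flavor of the kind of calculation involved, the sketch below applies the standard sample-size formula for estimating a proportion, such as a default rate, to within a chosen margin of error. The 4% rate and ±1% margin are illustrative assumptions, not figures from the article.

```python
# Sketch of the textbook sample-size formula for estimating a proportion
# (e.g., a default rate) within +/- `margin` at a given confidence level.
# z = 1.96 corresponds to 95% confidence. Inputs are illustrative.

import math

def required_sample_size(p, margin, z=1.96):
    """Loans needed so the estimated rate p lands within +/- margin."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Estimating an assumed 4% default rate to within +/- 1%:
print(required_sample_size(0.04, 0.01))  # 1476 loans
```

Note how the requirement scales with the square of the margin: halving the margin of error quadruples the loans needed, which is why modeling techniques that pool information across segments can dramatically reduce the internal data a bank must assemble.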
This blog post is the first in a two-part series about Mortgage Insurance and Loss Severity. During the implementation of RiskSpan’s Credit Model, which enables users to estimate loan-level default, prepayment, and loss severity based on loan-level credit characteristics and macroeconomic forecasts, our team explored the many variables that affect loss severity. This series will highlight what our team discovered about Mortgage Insurance and loss severity, enabling banks to use this GSE data to benchmark their own MI recovery rates and help estimate their credit risk from MI shortfalls.
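To preview where mortgage insurance enters a loss severity calculation, here is a deliberately simplified sketch. The component structure and MI claim logic are hedged assumptions for illustration, not the actual Credit Model's methodology; in practice MI coverage terms and claim calculations are more involved.

```python
# Hedged sketch: loss severity and the effect of an MI recovery shortfall.
# Components and MI logic are simplified illustrations, not an actual model.

def loss_severity(upb, liquidation_proceeds, expenses,
                  mi_coverage_pct, mi_shortfall_pct=0.0):
    """Severity = net loss as a fraction of unpaid principal balance (UPB).
    MI is assumed to reimburse up to mi_coverage_pct of UPB, haircut by an
    assumed shortfall rate for claim denials or curtailments."""
    gross_loss = upb + expenses - liquidation_proceeds
    mi_claim = min(gross_loss, mi_coverage_pct * upb)
    mi_recovery = mi_claim * (1 - mi_shortfall_pct)
    return max(gross_loss - mi_recovery, 0.0) / upb

# An illustrative high-LTV loan with 30% MI coverage:
print(loss_severity(200_000, 150_000, 20_000, 0.30))        # full MI payout
print(loss_severity(200_000, 150_000, 20_000, 0.30, 0.25))  # 25% MI shortfall
```

Even in this toy version, a 25% shortfall on the MI claim more than doubles the realized severity, which is why benchmarking MI recovery rates against GSE data matters for credit risk estimation.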