Part 2: This series is based on a speech given by Sandra Braunstein, formerly of the Federal Reserve Board staff, at the Wolters Kluwer CRA and Fair Lending Colloquium in November 2015. It looks at the past, present, and potential future of the Community Reinvestment Act. For more about Braunstein, see the extended author's note at the end of this article.
Thus far in this three-part series I’ve discussed the performance of CRA to date, and the continuing need for the act. Where do we go from here?
What are the major issues with CRA, as it is presently constructed, and how can they be addressed?
Certainly, it became clear from the CRA public hearings held in 2010 that redefining assessment areas, especially for large banks, was the biggest issue for both the industry and consumer and community advocates. The issue is a longstanding one, dating back to the major rewrite of the CRA regulations in 1995.
At that time—and ever since—the agencies kicked the can down the road.
The stated reason in 1995 was that technology in banking was evolving and the agencies needed to see how things developed before changing the regulations.
Two decades later, I think that ship has sailed.
Outdated regulatory approach
Drawing circles around buildings is not very effective for larger institutions in this era of internet banking, mobile banking, and other technology. However, tackling assessment areas will require regulatory changes, not additions to the interagency Q&As.
The agencies have not ventured into regulatory reform because of concerns about possible political and congressional interference. This concern may have merit, given the current makeup and dysfunction on Capitol Hill. However, the result has been that the agencies missed an opportunity to issue regulations before control of the entire Congress changed hands.
If the agencies had been prepared to move on regulatory changes immediately after the 2010 hearings, they might have completed the changes in a safer political environment. This is somewhat conjectural, because the regulatory process is intentionally slow and deliberate, with many steps (proposal, public comment, and final rules) that can drag on for years. It is very difficult to make quick regulatory changes, especially when the changes are to a rule as controversial as CRA. However, they certainly missed the window to try.
But, that’s water under the bridge. And, with a major election taking place this year, suffice it to say that regulatory reform is unlikely to happen anytime soon.
So, what else can be done?
The agencies have issued some revised and new Q&As over the past few years. I question the necessity and impact of these. I see the effort merely as an attempt to “just do something” when doing nothing might have been better.
I wonder whether these marginal changes are just further confusing things, and whether the agencies, absent any initiative to tackle regulatory reform, should simply cease and desist until they are ready to dig deeper. As a former senior federal regulator, I am pretty sure that agency staff have plenty of other work to do.
Or, if the agencies want to continue to address CRA, can those staff resources provide greater impact by looking at other issues?
Rethinking CRA and how we structure it
The question is whether there are some things that can be addressed outside the regulatory realm that would make a difference. I am going to discuss three possible areas of consideration moving forward, two in this installment and the last in the conclusion to this series.
1. Must CRA be so complex?
First, let’s address complexity. Is there a way to make CRA less complex?
As mentioned in part 1 of this blog series, we have a short statute and hundreds of pages of interpretations. Regardless of your affiliation, a key fact is that banks have a hard time deciding whether community development investments and loans are CRA-qualified.
Should determination be that hard?
Banks often hire consultants for the purpose of helping them determine eligibility of their community development activities. Should the decision be so complicated that a bank can’t decide on its own if its activity is qualified?
As I have waded through some bank files, I have come to wonder whether everyone—banks, regulators, and communities—is missing the real point. In other words, are regulators and examiners so wedded to data and checking boxes that they are missing the big picture?
Let me review three examples:
A. I saw a bank put forward a loan for community development consideration. It was a loan to a private entity that planned to build a prison in one of the bank’s assessment areas.
The location was an LMI community—of course, the firm wouldn’t build a prison in a high-income area. And, the argument for community development qualification was that the prison would provide jobs for LMI residents of the community, and could even be seen as revitalizing and stabilizing a distressed area.
A prison? There may be a point about needed jobs, but “stabilizing”?
I would think the presence of a prison could, instead, drive people and businesses away. The discussion about this project focused on data (numbers of jobs, wage levels, incomes of the employees). But, in the bigger picture, was this project good for that community?
B. On the flip side, another discussion took place about a building that was developed in a low-income community and leased to several government-run human and social service agencies. In the discussion of whether this project was CRA-qualified, there was concern about the income levels of the agencies’ clients—and a suggestion that the bank ask the social service agencies for income data on their clients. These were social service agencies, and each of their stated missions was clearly to provide services for people who were not wealthy.
C. More recently, there was a discussion on a project that served public housing residents. An issue was raised that there was no stated correlation between public housing residents and CRA regulatory income limits. A suggestion, by a consultant who was formerly an examiner, was made to the bankers that they should seek data on the clients’ incomes.
Come on—the agency was serving public housing residents!
These three projects, for me, are examples of an overreliance on data when what should prevail is common sense.
These discussions reminded me of conversations a few years ago on Unfair or Deceptive Acts or Practices (UDAP, with one “A”). In that case, there were specific stated criteria for characterizing something as UDAP, and there were compliance requirements for products. Over time, we realized that checking compliance boxes was not enough.
Meeting the compliance criteria and being a perfectly legal product did not always answer the question as to whether the product was good for the consumer. Or, as we spoke of it, did the product pass the “grandmother” test?
We must look beyond CRA's complex criteria and mandatory data in the same way. Regardless of the checked boxes, would you want your community to have this project?
2. Lack of inter-agency consistency
The second issue that bears some attention going forward is the lack of agency consistency in the execution of exams, in the use of data elements, and in examiner training.
There have been some attempts to address inconsistencies on specific examination issues, sometimes through the issuance of Q&As.
That’s a logical approach. However, speaking from my experience, sometimes working on interagency projects and issues can be akin to working in Congress and trying to find consensus.
I know that the Federal Reserve has always felt strongly about legislative intent, expressed at the time of passage of CRA, that CRA not become credit allocation. So, while using data for some purposes, Fed officials have been loath to set actual numerical targets and standards. A lot of the exam is subjective in nature as a result, and some of the uncertainty discussed in this series results from that factor.
On the other hand, the other federal regulators involved with CRA—which explicitly does not include the Consumer Financial Protection Bureau—have not always been as concerned about numerical standards. However, whatever yardstick those agencies are using is not publicly known or available, and may differ in terms of comparators.
So, part of the inconsistency may be due to individual agency culture and philosophy. But, there are also differences in structure and training.
Since the 1970s, the Fed has recognized consumer compliance supervision as an important specialty, requiring a different set of skills and knowledge than prudential supervision does. The Fed established a separate examination force of consumer compliance and CRA examiners who are trained and commissioned as such.
These examiners do not conduct prudential exams. While the prudential and consumer examiners coordinate on many issues, at the Federal Reserve there is a separate training track, a separate commissioning process, and a career path for consumer compliance examiners that is equal to that of the prudential examiners. I also know that, due to the subjectivity and judgment required in a CRA exam, the Fed tries, wherever possible, to use more senior and experienced examiners for CRA exams.
The FDIC, several years back, also established a separate cadre of compliance examiners with their own commission and career path.
It would be very helpful if all the agencies were on the same page with regard to CRA examination, as agency infrastructure does affect the training level and experience of the examiners on-site. Examiners who are generalists, conducting both prudential and compliance/CRA exams, may not have the depth of experience, knowledge, and training for the complexity and subjective judgments intrinsic to a CRA exam.
Additionally, there should be thought given to conducting some interagency joint CRA exams. Even with interagency examination procedures, there can be differences in how the exams are conducted and how conclusions are reached. Rather than the policy people just talking through these issues in Washington, it might be instructive for the agencies to conduct some number of joint examinations to observe first-hand the differences in procedures, data analysis, and subjective judgments, with the goal of working on increasing the consistency.
In the past, there were attempts at doing just this, with little to show for it. One reason may be the methodology that was used to conduct the interagency exams. My understanding is that the exam work was often divided amongst the agencies so that each agency took a section, and then they came together for the final result. While this may have been an expedient way to conduct the experiment, it was not structured to produce the desired result: comparing the agencies' approaches to the same facets of the exam.
If the agencies venture down the path of interagency exams again, there should be a strict methodology that requires a mix of staff from the three agencies on each individual piece of the exam. That way, agency staff can truly gauge the differences in their views and calculations of loans, investments, and services.
Return to Part 1: "Looking at CRA afresh"