January 22, 2008
Achieving the Dream (AtD) released the November/December 2007 Data Notes recently and gave a thoughtful analysis of Enrollment Status and Student Outcomes for full- and part-time students. This is of particular importance since AtD reports that 46 percent of students enroll part-time. The authors report striking differences between full-time and part-time students when it comes to persistence.
The largest persistence gap between full- and part-time students was seen at the second term (21 percentage points) rather than in the second or third academic years (16 and 12 percentage points, respectively).
Part of this discrepancy may be explained by the attainment of educational goals by part-time students. Not all part-time students are seeking credentials, and they therefore may not “have the desire to” persist in order to achieve their educational goals.
Unfortunately for institutions that are part of the AtD initiative, educational goals are not part of the reporting requirement. It might be useful to explore how educational goals vary over time, and if these changes relate to persistence rates. Another limiting factor in the analysis of educational goals is that most ERP systems (from which the AtD data are derived) do not capture educational goals in a meaningful way – we discussed this in a recent post. Institutions may be capturing educational goals at the time of application, but rarely do they have these data follow students through the education life cycle.
While the AtD data are important pieces of the persistence puzzle, they often won’t provide a complete picture. Other data that are captured in the Student Information System (SIS) are critical for individual institutions. In order to make the most of both AtD and SIS data, institutions should identify automated ways to blend the two. One way to accomplish this is to create dimensional data models…but that’s a topic for another post.
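To make the idea concrete, here is a minimal sketch of what a dimensional (star-schema) blend of cohort and SIS data might look like, using the Python standard library’s sqlite3 module. All table names, column names, and records are hypothetical illustrations, not the actual AtD or SIS layouts.

```python
# A hypothetical star schema: a student dimension carrying both cohort
# attributes and the SIS-reported educational goal, plus an enrollment fact
# table. Built with the standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per student, blending cohort and SIS attributes.
cur.execute("""CREATE TABLE dim_student (
    student_key INTEGER PRIMARY KEY,
    atd_cohort  TEXT,
    edu_goal    TEXT)""")

# Fact table: one row per enrollment term, keyed to the student dimension.
cur.execute("""CREATE TABLE fact_enrollment (
    student_key INTEGER REFERENCES dim_student(student_key),
    term        TEXT,
    credits     REAL)""")

cur.executemany("INSERT INTO dim_student VALUES (?, ?, ?)",
                [(1, "Fall 2006", "transfer"), (2, "Fall 2006", "enrichment")])
cur.executemany("INSERT INTO fact_enrollment VALUES (?, ?, ?)",
                [(1, "2006FA", 12.0), (1, "2007SP", 9.0), (2, "2006FA", 3.0)])

# Terms enrolled, broken out by the SIS-reported educational goal --
# the kind of question the AtD data alone cannot answer.
cur.execute("""SELECT d.edu_goal, COUNT(DISTINCT f.term)
               FROM dim_student d JOIN fact_enrollment f USING (student_key)
               GROUP BY d.edu_goal ORDER BY d.edu_goal""")
persistence = dict(cur.fetchall())
print(persistence)  # {'enrichment': 1, 'transfer': 2}
```

The point of the dimensional shape is that a single join answers the question; once goals live on the student dimension, any fact can be sliced by them.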
January 17, 2008
You’d have to try hard to avoid discussion of the subprime mess and the effects it is alleged to be having on the housing and financial markets (although we think there are more reasons for optimism than many observers do, but more on that in another article). One area where we do have concerns is the effect of the subprime cleanup on fair lending.
Fair lending compliance requires that regulated entities (which include essentially all retail mortgage lending institutions) provide government regulators specific facts on the mortgage applications they did and did not approve. These data are then examined statistically to determine whether there is an observable pattern of discrimination in underwriting practices based on the treatment of members of protected groups.
In a recent article, the Washington Post describes the actions that lenders are taking to correct the underwriting excesses of the subprime boom. While the industry experts offer somewhat differing accounts of actions that are expected to be taken, all agreed that credit scores are going to lose some of their weight in the underwriting process, at least in the near term. To quote from the article,
But income matters now, and so does cash, said Sean O’Boyle, a vice president at SunTrust Mortgage in Chevy Chase. Lenders expect borrowers to have several months’ worth of mortgage payments in reserve and a steady job. ‘Job stability. Credit. Cash,’ O’Boyle said. ‘They’re all equally important. Not one of them overshadows the other.’
Unfortunately, moving to a broader set of measures of creditworthiness, while intended as a means of tightening underwriting, could achieve just the opposite, and cause fair lending compliance issues to boot. Remember that the pressure to maintain and increase loan volumes fueled the use of exotic mortgages to increase the number of eligible borrowers. In the same way, there will be pressure to use additional information to cherry-pick borrowers with borderline credit scores.
Here’s where fair lending compliance comes in – cherry-picking borrowers based on any information not included in the compliance data provided to regulators introduces the risk that correlations will be found between protected group status and the probability of receiving a loan, even if no mortgage discrimination was ever intended. Is the value of cherry-picking worth the potential compliance issues it might cause? We don’t think so.
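For illustration only, here is a rough sketch of the simplest kind of statistical check that can surface such a pattern: a two-proportion z-test comparing approval rates between a protected group and a reference group. The counts are invented, and real fair lending analysis is considerably more involved (it controls for legitimate underwriting factors before drawing any conclusion).

```python
# A toy two-proportion z-test on invented approval counts. A large |z|
# flags a difference in approval rates worth investigating further.
from math import sqrt

def two_proportion_z(approved_a, total_a, approved_b, total_b):
    """z-statistic for the difference between two approval rates."""
    p_a, p_b = approved_a / total_a, approved_b / total_b
    pooled = (approved_a + approved_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: 720 of 1,000 reference-group applications approved
# versus 640 of 1,000 protected-group applications.
z = two_proportion_z(720, 1000, 640, 1000)
print(round(z, 2))  # 3.83 -- well above ~1.96, i.e., unlikely to be chance
```

A disparity that survives controls for the reported underwriting factors is exactly the correlation the post warns about, which is why decisions based on unreported information are so risky.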
Here’s a much less problematic solution to tightening underwriting requirements – demand higher credit scores from all borrowers and stick close to the information reported for fair lending compliance when making the underwriting decision. This would have none of the potential downside of including non-reported information and would require few changes to underwriting processes. It will be interesting to see if the pressure to maintain loan volumes or the need to assure regulatory compliance wins out.
January 11, 2008
One of the biggest challenges in any business intelligence project is getting everyone involved to fully understand the data model and its relationship to the business of the organization. Without an excellent business/data analyst, much will be lost in the translation. An article in DM Review addresses this very issue, though largely from the technical person’s point of view. Since this blog is aimed primarily at business leaders and decision makers (information consumers) who require business intelligence to make informed decisions, I won’t go into gory detail summarizing the article.
The most useful tool mentioned in the article was this spreadsheet (pictured) to help put business language around a logical data model.
What’s a logical data model?
[Silence fills the air.]
As a business decision maker you don’t really need to worry about it so much; however, your analysts and technical staff developing the business intelligence architecture sure do.
Essentially, it describes the relationships between and among the tables that represent the business data.
Without a rock solid understanding of the ‘business rules’ that describe the data, the business intelligence architecture will fail.
This tool is as good as any I’ve seen to help business decision makers use their own language to precisely describe the relationships in the data model.
Think of this tool as the Rosetta Stone of BI. Okay. Never mind. I’m sure I’m overstating it. But it is useful nonetheless.
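For readers who want a flavor of it anyway, here is a tiny, entirely hypothetical illustration in Python of what a logical data model captures: entities, their attributes, and the business rules that relate them. The entity and attribute names are invented for the example.

```python
# A toy logical data model: entities with attributes, plus business rules
# expressed in plain language as relationships between entities.
logical_model = {
    "Student":    {"attributes": ["student_id", "name", "edu_goal"]},
    "Course":     {"attributes": ["course_id", "title", "credits"]},
    "Enrollment": {"attributes": ["student_id", "course_id", "term", "grade"]},
}

# Each relationship pairs the technical link with the business rule
# behind it -- the "translation" a good business/data analyst provides.
relationships = [
    ("Student", "enrolls in", "Course",
     "many-to-many, resolved through Enrollment"),
    ("Enrollment", "belongs to", "Student",
     "every enrollment references exactly one student"),
]

for left, verb, right, rule in relationships:
    print(f"{left} {verb} {right}: {rule}")
```

The spreadsheet in the article plays the same role as the `relationships` list here: it forces someone to write the business rule next to the technical link, in words a decision maker can verify.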
January 7, 2008
In a recent article published on the Achieving the Dream website by Vanessa Smith Morest and Davis Jenkins titled Institutional Research and the Culture of Evidence at Community Colleges, the authors point out several important barriers to high quality institutional effectiveness research in community colleges. These barriers include:
1. Lack of dedicated IR personnel, and IR personnel who may not have the training for quantitative analysis.
2. “Data analysis” that is typically “data collection” for compliance and accreditation; these data are typically not very useful in managing the college.
3. Data contained in the student information system may be “dirty” and are designed to support administrative functions, not research.
4. Skepticism among college presidents about using quantitative methods to manage the institution.
I’ll add that responsibility for data entry, warehousing, and reporting is often scattered among several offices, making coordination of a research agenda difficult at best. In addition, faculty are often not engaged in the management aspects of the institution. This may, in part, be due to a lack of timely and accurate information. In recent years, however, the proliferation of business intelligence (BI) and analysis tools (e.g., Business Objects, SAS, SPSS, Tableau) has allowed information consumers to quickly access analysis in a “self-service” environment.
Most BI software products have matured to the point where creating reports and even running sophisticated statistical analyses is point-and-click. This ease of use helps to make the most of the limited staff dedicated to IR. In addition, the self-service nature of BI software extends human resources by making data and analysis more broadly available to administrators, faculty, and staff. There is also the potential to increase the number of research topics, since faculty may use these tools to investigate success rates or retention among their own students. Instead of having one or two IR staff stretched to analyze a single topic over several months, there may be dozens of concurrent analyses by curious faculty.
This helps to address items #1 and #2 on the barriers list.
A complication for most institutions right now is the lack of standard sources for data to answer questions. The lack of data access usually encourages the proliferation of “shadow systems”. These shadow systems in turn lead to multiple and inconsistent reports for topics like headcounts, retention, or success rates. It’s not always the case that the student information system (SIS) is at fault: most of the data needed to answer questions or to conduct analysis is contained in the SIS, but information consumers usually don’t know how to get to the data they need. This is where self-service plays an important role.
By implementing BI tools, information consumers can typically make sense of what should be the “authoritative data source”, the SIS. Surfacing data contained in the SIS will often illustrate where data quality problems are, and point to deficiencies in business processes. Most data quality issues in higher education are the result of breakdowns in processes, in other words: it’s usually not a technology deficiency. This helps to address item #3 on the barriers list.
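As a simple illustration (all records invented), putting SIS data side by side with a shadow-system figure can expose both the inconsistency and the business-process gap behind it:

```python
# Invented SIS records. One enrolled student has no educational goal on
# file -- a data-entry process gap, not a technology failure.
sis_records = [
    {"student_id": 1, "enrolled": True,  "edu_goal": "transfer"},
    {"student_id": 2, "enrolled": True,  "edu_goal": None},
    {"student_id": 3, "enrolled": False, "edu_goal": "certificate"},
]

# A department spreadsheet that counted every applicant, enrolled or not.
shadow_headcount = 3

# The authoritative count comes straight from the SIS enrollment flag.
authoritative_headcount = sum(1 for r in sis_records if r["enrolled"])

# Surfacing the data also surfaces its gaps.
missing_goals = [r["student_id"]
                 for r in sis_records
                 if r["enrolled"] and r["edu_goal"] is None]

print(authoritative_headcount)  # 2 -- disagrees with the shadow count of 3
print(missing_goals)            # [2] -- points to a process breakdown
```

Reconciling the two numbers forces the conversation about which definition of “headcount” the institution actually means, which is most of the battle.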
As for item #4, and the coordination across multiple offices, governance structure and commitment to a “culture of evidence” are usually necessary at the highest levels. Providing self-service to analytics across an institution using BI tools is one way to foster this culture of evidence.
January 1, 2008
Former president of the League for Innovation in the Community College, Mark Milliron, wrote a provocative article challenging community college leaders to rethink the term ‘2-year college.’ He makes a compelling argument that the term hurts students, the institution itself, and the communities supporting the institution, since it sets an expectation among these constituents that cannot be met. In fact, fewer than 5% of community college students complete an associate’s degree in 2 years. Milliron summarizes the reason in the following statement:
Students are not coming to us in a neat, linear pipeline progression. They are swirling into and out of our educational programs and services at all different ages and stages…Our students by and large follow diverse pathways through our programs.
Most everyone I know in community college administration agrees that few students ever intend to earn an associate’s degree, let alone achieve it in 2 years. Yet we know that even when students do not receive a degree from the community college, they are often successful in meeting their own ‘unreported’ goals.
It is not enough, however, to expect change through conversation alone. It is time to demonstrate this success through new measurement vehicles. Community colleges must begin to report success through the eyes of their students.
Most community colleges ask students for their educational goal on the admission application. Do your codes reflect the goals of your students, or do they match the reporting outcomes required by your state? Students attend your institution with a variety of goals: intent to transfer to a four-year institution, earning a certificate in a technical program, personal enrichment, career advancement, and so on. You should survey your students to determine the list of educational goal codes that is right for your institution.
Do not simply collect these goals upon the student’s entry to your institution. You must put business processes in place that make it easy for students to keep you updated as their goals change, and you know they will.
Next you will need to measure the outcomes of these goals. Some goals will be easy to measure through the data captured in your student information system. Others, such as enrichment or ‘take a course to further my career,’ will be harder to measure. You will need to enhance existing assessment vehicles to allow students to self-report that they have met their goals.
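A minimal sketch, with entirely hypothetical student records, of how a goal attainment rate and a traditional retention rate can tell different stories about the same cohort:

```python
# Four invented students. "attained" means the student met his or her own
# stated goal; "retained" is the traditional enrolled-next-term measure.
students = [
    {"goal": "transfer",   "attained": True,  "retained": False},  # left for a 4-year school: a success
    {"goal": "enrichment", "attained": True,  "retained": False},  # took the course wanted, then left
    {"goal": "degree",     "attained": False, "retained": True},
    {"goal": "degree",     "attained": True,  "retained": True},
]

retention_rate  = sum(s["retained"] for s in students) / len(students)
attainment_rate = sum(s["attained"] for s in students) / len(students)

print(f"retention {retention_rate:.0%}, goal attainment {attainment_rate:.0%}")
# retention 50%, goal attainment 75%
```

The two transfer/enrichment students drag retention down while counting as successes on their own terms, which is precisely the distortion the post describes.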
Instead of traditional retention rates, you can now measure educational goal attainment rates. While accrediting agencies and state boards do not yet measure your institution’s effectiveness on these outcomes, they are far better data points on which to base decisions. Then, as institutions begin to demonstrate their success through a documented process based upon the student’s own intended outcome, the conversation about higher education success and accountability may truly change.