TechnoScore – the Digital Engineering Services Division of SunTec India
  • 30% Reduced IT Costs
  • 40% Higher Underwriter Throughput
  • 50% Reduced Manual Effort
Services
  • Business Process Automation
  • Computer Vision
  • Cloud Deployment
Technology
  • AWS
  • OCR
  • TensorFlow Models
The Client

A Mid-Sized Commercial NBFC

Our client is a North America-based commercial finance firm specializing in business credit and asset-backed loans. Serving the middle market, they manage hundreds of loan applications each month and provide financing solutions across a range of industries.

Their Challenge

Manual Underwriting Undermining Speed and Accuracy

Despite being a prominent player, the client struggled with legacy systems and manual processes for underwriting, which resulted in:

  • High Labor Costs: Their team of financial analysts spent 20–30 hours per loan file manually aggregating and entering data from unstructured documents, such as financial statements and tax returns.
  • Slow Processing Times: Manual processes resulted in a 7–10 day Time-to-Yes (pre-approval decision), well behind modern competitors.
  • Compliance Risk: With data entry errors averaging 7% in key financial information, the risk of inaccurate credit scoring and regulatory non-compliance was high.
The Requirement

Digital Transformation to Streamline Underwriting and Document Processing

The client needed a workflow automation and intelligent document processing solution that could:

  • Integrate seamlessly with existing legacy systems.
  • Provide secure, centralized access to digitized loan documents.
  • Automate document processing and underwriting to reduce labor costs and improve speed.
  • Enable resource reallocation to higher-value tasks, such as client relationship management and exception handling.
Our Solution

Cloud-Native Intelligent Document Processing (IDP) and Automation Platform

We developed a robust, cloud-native solution hosted on AWS, designed to automate the client’s document processing and underwriting workflows. The platform was built using a microservices architecture to isolate tasks, ensuring flexibility and scalability. It incorporated Amazon RDS as the primary database for structured data and a custom-trained IDP module powered by TensorFlow for data extraction and classification.
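
For illustration only, the sketch below shows how these stages might hand off to one another; every function is a hypothetical stand-in for one of the platform's microservices, not code from the engagement.

```python
# Illustrative only: each function stands in for one of the platform's
# microservices; names, fields, and values are hypothetical.

def ocr_digitize(doc_bytes: bytes) -> str:
    """Step 02: turn a scanned document into raw text (stubbed here)."""
    return "GROSS REVENUE $1,200,000\nEXISTING DEBT $150,000"

def classify_and_extract(text: str) -> dict:
    """Step 03: the IDP module classifies the document and pulls key fields."""
    return {"gross_revenue": 1_200_000.0, "existing_debt": 150_000.0}

def score_and_route(fields: dict) -> str:
    """Step 04: the workflow engine scores the file and routes it."""
    debt_ratio = fields["existing_debt"] / fields["gross_revenue"]
    return "fast_track" if debt_ratio < 0.2 else "analyst_review"

def process_loan_file(doc_bytes: bytes) -> str:
    return score_and_route(classify_and_extract(ocr_digitize(doc_bytes)))

print(process_loan_file(b"..."))  # -> "fast_track"
```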

Solution Workflow

01

Data and Infrastructure Assessment

We conducted a comprehensive audit of the client's existing document archives, data infrastructure, and legacy systems. Our goal was to understand their data readiness, integration points, and governance needs. We also evaluated security requirements to ensure compliance with privacy regulations such as the CCPA and security frameworks such as SOC 2.

02

Data Processing and Cloud Setup

To digitize the existing unstructured documents, we used OpenCV-based image preprocessing with OCR tools to scan and extract text from a variety of formats (PDFs, scanned images); a simplified sketch follows the list below. We then applied standardization routines to normalize formats (dates, currencies, names) across all archived documents.

  • Set up an AWS environment with VPC security, establishing a secure, isolated network for document processing.
  • Configured Amazon RDS (PostgreSQL) as the central data repository, ensuring a single source of truth.
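
As referenced above, here is a simplified sketch of the digitization step. OpenCV appears in the actual stack; the pytesseract OCR engine and the normalization helpers are assumptions made purely for illustration.

```python
# Sketch of OCR digitization and standardization; OpenCV is from the
# case study, but pytesseract and the helper functions are assumptions.
import re
from datetime import datetime

import cv2
import pytesseract

def digitize(image_path: str) -> str:
    """Clean up a scanned page with OpenCV, then OCR it."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu binarization plus a median blur removes scan noise before OCR.
    _, binarized = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(cv2.medianBlur(binarized, 3))

def normalize_currency(raw: str) -> float:
    """Standardize strings like '$1,250,000.00' to a plain float."""
    return float(re.sub(r"[^\d.\-]", "", raw))

def normalize_date(raw: str) -> str:
    """Standardize common US date formats to ISO 8601."""
    for fmt in ("%m/%d/%Y", "%B %d, %Y", "%m-%d-%Y"):
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return raw  # leave unrecognized formats for manual review
```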
03

IDP Module Development

We built a proprietary IDP module to automate data ingestion, classification, and processing. Deployed on AWS Lambda, the module was connected to the client’s systems via secure APIs managed through Amazon API Gateway for seamless integration.

The module used custom-trained TensorFlow models to classify documents and extract critical data points such as the following (a hedged sketch of the classification entry point appears after this list):

  • Gross Revenue/Sales
  • Net Income/Loss
  • EBITDA (Earnings Before Interest, Taxes, Depreciation, and Amortization)
  • Operating Expenses
  • Existing Debt Obligations
  • Working Capital
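
The production IDP module is proprietary; the handler below is only a hedged sketch of what a Lambda-hosted TensorFlow classifier behind API Gateway could look like, with the model path, label set, and request schema all assumed.

```python
# Hypothetical Lambda entry point for the IDP module; the model artifact,
# labels, and request schema are illustrative assumptions.
import json

import numpy as np
import tensorflow as tf

LABELS = ["financial_statement", "tax_return", "bank_statement", "other"]

# Loaded once per Lambda container, then reused across invocations.
model = tf.keras.models.load_model("/opt/ml/document_classifier")

def handler(event, context):
    # API Gateway delivers the document's feature vector as a JSON body.
    features = np.array([json.loads(event["body"])["features"]], dtype="float32")
    probs = model.predict(features)[0]
    top = int(np.argmax(probs))
    return {
        "statusCode": 200,
        "body": json.dumps({
            "document_type": LABELS[top],
            "confidence": float(probs[top]),
        }),
    }
```

Keeping the model load outside the handler means each warm container reuses it across invocations, which keeps per-request latency low.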
04

Process Automation

We developed a dynamic workflow automation engine that computed a preliminary financial risk score from the extracted data. Files meeting the pre-set low-risk thresholds were automatically fast-tracked, and automated notifications were sent to analysts for further action.
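
The weights and cutoff in the following sketch are invented for illustration; the engagement's actual scoring rules are more involved and not public.

```python
# Illustrative rule-based scoring; weights and cutoffs are hypothetical.
FAST_TRACK_THRESHOLD = 0.25

def preliminary_risk_score(fields: dict) -> float:
    """Blend a few extracted metrics into a 0-1 score (higher = riskier)."""
    revenue = max(fields["gross_revenue"], 1.0)
    debt_ratio = fields["existing_debt"] / revenue
    margin = fields["net_income"] / revenue
    score = 0.6 * debt_ratio + 0.4 * max(0.0, 0.2 - margin)
    return min(1.0, max(0.0, score))

def route(fields: dict) -> str:
    """Fast-track low-risk files; everything else goes to an analyst."""
    if preliminary_risk_score(fields) <= FAST_TRACK_THRESHOLD:
        return "fast_track"     # triggers automated approval notification
    return "analyst_review"     # flagged for manual underwriting
```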

05

Testing, Optimization, and Ongoing Support

We implemented an MLOps pipeline on Amazon SageMaker to enable continuous training and model optimization; a hedged sketch of such a retraining job appears after the list below. The solution maintained a 99.5% accuracy rate in data extraction.

  • Ran A/B tests of model updates against new document formats and regulatory changes to ensure compliance and accuracy.
  • Continuously monitored and optimized model performance to handle evolving document types and changes in regulatory standards.
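
As referenced above, launching such a quarterly retraining job with the SageMaker Python SDK might look like this; the training script, IAM role, instance type, and S3 paths are placeholders rather than the client's actual configuration.

```python
# Hedged sketch of a SageMaker retraining job; every identifier below
# (script, role ARN, bucket, versions) is a placeholder.
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train_classifier.py",  # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    framework_version="2.13",
    py_version="py310",
)

# Each quarterly run points at the latest labeled documents in S3.
estimator.fit({"training": "s3://example-bucket/labeled-documents/2024-q2/"})
```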
Workflow at a Glance

Data and Infrastructure Assessment
  • Data and infrastructure audit
  • Legacy system evaluation
  • Integration point identification

Data Processing and Cloud Setup
  • OCR-based data digitization
  • Data cleansing and standardization
  • AWS cloud setup and RDS configuration

IDP Module Development
  • Custom ML model training
  • Key financial data extraction
  • API-driven deployment on AWS Lambda

Process Automation
  • Automated risk scoring
  • Rule-based approval of files meeting low-risk thresholds
  • Automated notifications and alerts

Testing, Optimization, and Ongoing Support
  • MLOps pipeline implementation
  • A/B testing for improvement
  • Ongoing compliance updates

Technology Stack

Cloud Platform

  • Amazon Web Services (AWS)

API/Compute

  • AWS Lambda
  • API Gateway

Primary Databases

  • Amazon RDS (PostgreSQL)
  • AWS DynamoDB

Intelligent Document Processing (IDP)

  • OpenCV
  • Custom TensorFlow Models

Project Outcomes

  • 30% reduction in operational IT costs
  • 40% higher underwriter throughput
  • 50% reduction in manual effort needed for audit trail generation
  • 95% reduction in manual data entry through automation

Based on the outcomes achieved, the client entered into a 24/7 managed services agreement with us. This agreement covers:

  • Continuous monitoring of the automation platform to ensure smooth operations.
  • Infrastructure patching to maintain system security and performance.
  • Quarterly retraining of the TensorFlow models to keep them accurate, adaptable, and compliant as document types and data evolve.