About the Client
The client is a leading housing finance company and a subsidiary of one of India’s most diversified non-banking financial institutions. Serving over 90 million customers across the country, the organization offers a comprehensive range of property-related financial products, including home loans, commercial property financing, renovation loans, and loans against property. It also extends support to real estate developers through construction finance, lease rental discounting, and working capital solutions. The client holds top-tier credit ratings, reflecting its strong financial health and market reputation.
Client Challenge
The client was using 27TB of provisioned storage for their Oracle database on Amazon RDS, but only 11TB of it was actively in use. After the application team performed a cleanup to remove outdated and redundant records, approximately 12TB of space was freed inside the database, yet the provisioned storage remained unchanged.
This happened because Amazon RDS does not allow allocated storage to be reduced; storage can only grow, and it continued to grow with daily activity. The client kept incurring costs for unused capacity, and traditional methods for resizing the database (recreating it on a smaller volume) would have caused significant downtime, which was unacceptable for a business-critical database.
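Since allocated storage can only be observed, not shrunk in place, the first step is quantifying how much capacity a right-sized target instance would need. A minimal sketch of that estimate, using the figures from this engagement; the 20% safety headroom is an illustrative assumption, not a number from the project:

```python
# Sketch: estimating reclaimable RDS storage. In practice, AllocatedStorage
# comes from rds.describe_db_instances() and free space from the CloudWatch
# FreeStorageSpace metric; here we use the known figures directly.

def reclaimable_gb(allocated_gb: float, used_gb: float, headroom_pct: float = 20.0) -> float:
    """Storage that a right-sized target could release, keeping a
    safety headroom above current usage."""
    target = used_gb * (1 + headroom_pct / 100.0)
    return max(allocated_gb - target, 0.0)

# Engagement figures: 27 TB allocated, ~11 TB actively used.
print(round(reclaimable_gb(27 * 1024, 11 * 1024) / 1024, 1))  # → 13.8 (TB, at 20% headroom)
```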
Additional challenges included:
- Data structure issues, such as missing primary key indexes, large object (LOB) data, and integrity constraints.
- DMS task-related difficulties, including long durations for full data loads, data lag, and configuration complexities.
- The need to ensure minimal disruption while performing the storage reduction.
Solution
Blazeclan designed a step-by-step strategy using AWS Database Migration Service (DMS) along with native export methods, enabling the client to reduce storage without extended downtime.
Implementation Approach:
- Information Collection: Gathered all required data for creating DMS tasks.
- Template for Data Sorting: Created a template to organize and classify data effectively.
- Collaboration with Application Team: Worked closely with the client’s application team to sort data into three categories (pre-cutover, actual cutover, and post-cutover), defining task sequences and downtime windows.
- Data Segmentation: Sorted data by table size, importance, and presence or absence of primary keys.
- Task Planning: Created tasks for each cutover phase based on the sorted data.
- Activity Division: Separated activities according to data criticality to manage timing and effort.
- Export Strategy: Migrated specific tables and schemas using native exports (Oracle Data Pump), with time estimates calculated in advance.
- Migration Setup: Provisioned 4 replication instances, 28 DMS replication tasks, and 5 export-based migration tasks.
This approach helped structure the activity efficiently and reduce disruption to the client’s operations.
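The segmentation and task-planning steps above can be sketched as code that turns each cutover phase’s table list into a DMS table-mapping document. The schema and table names below are hypothetical placeholders; the actual classification came from the sorting template built with the application team:

```python
# Sketch: generating DMS table mappings per cutover phase
# (schema/table names are illustrative placeholders).

def table_mapping(schema: str, tables: list) -> dict:
    """Build a DMS table-mapping document that selects the given tables."""
    return {
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": str(i + 1),
                "rule-name": f"include-{t}",
                "object-locator": {"schema-name": schema, "table-name": t},
                "rule-action": "include",
            }
            for i, t in enumerate(tables)
        ]
    }

phases = {
    "pre-cutover": ["AUDIT_LOG", "ARCHIVE_TXN"],   # large, low-churn tables first
    "cutover": ["LOAN_ACCOUNT", "PAYMENT"],        # business-critical, moved in the window
    "post-cutover": ["REPORT_CACHE"],              # rebuildable data last
}

mappings = {phase: table_mapping("FINAPP", tables) for phase, tables in phases.items()}
# Each mapping would be serialized to JSON and passed as TableMappings to
# dms.create_replication_task(...) on one of the replication instances.
```

Generating the mappings from the classification, rather than writing 28 task definitions by hand, keeps the phase boundaries consistent across all tasks.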
Testing & Validation
To ensure smooth execution, Blazeclan conducted three practice runs that informed planning of the business downtime window:
- Measured duration of each task to identify focus areas and required changes.
- Evaluated timing and dependencies of all activities post-migration.
- Validated data accuracy with the client’s application team.
- Incorporated required changes based on test results and feedback to align with objectives.
Outcome
Blazeclan successfully reduced the client’s RDS storage from 27TB to 11TB, using a mix of DMS and export-based methods, with minimal downtime.
Key Outcomes:
- Significant Cost Savings: Reduced storage size helped the client save $30,000 annually, along with additional savings in backup storage.
- Minimized Downtime: Cutover activities were planned precisely, ensuring minimal business disruption.
- Improved Data Structure: Addressed issues related to primary keys, LOB data, and integrity constraints as part of the migration.
- Organized Execution: Activity planning and task division enabled smooth execution and effective control over each migration stage.
- Reduced Unused Storage Costs: Storage no longer in use was effectively eliminated, bringing storage in line with actual usage.
- Improved Performance: Removing fragmented and unnecessary storage improved overall data access and operational efficiency.
- Better Resource Utilization: Optimized use of allocated storage across workloads.
- Improved Data Reliability: Cleaner structure helped enhance data integrity.
- Scalability: A more efficient storage setup supports future scaling.
- Faster Backups and Recovery: Reduced volume made backup and restore processes quicker.
Impact Highlights
- 27TB → 11TB: Storage Reduced
- ~$30,000 Annual Cost Savings plus backup cost savings
Tech Stack
- AWS Database Migration Service (DMS): Used to migrate data with the parallel-load option.
- Oracle Data Pump: Used for faster export-based migration of selected schemas and tables.
- Amazon CloudWatch: Used to monitor replication lag metrics.
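For the CloudWatch point above, DMS publishes per-task lag metrics (CDCLatencySource and CDCLatencyTarget, in seconds) under the AWS/DMS namespace. A minimal sketch that builds the metric query; the task and instance identifiers are placeholders, not the client’s actual names:

```python
# Sketch: building a CloudWatch query for DMS replication lag
# (task/instance identifiers are hypothetical placeholders).
from datetime import datetime, timedelta, timezone

def lag_query(task_id: str, instance_id: str, metric: str = "CDCLatencyTarget") -> dict:
    """Parameters for cloudwatch.get_metric_statistics over the last hour."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/DMS",
        "MetricName": metric,
        "Dimensions": [
            {"Name": "ReplicationTaskIdentifier", "Value": task_id},
            {"Name": "ReplicationInstanceIdentifier", "Value": instance_id},
        ],
        "StartTime": now - timedelta(hours=1),
        "EndTime": now,
        "Period": 300,          # 5-minute datapoints
        "Statistics": ["Maximum"],
    }

params = lag_query("cutover-task-01", "dms-repl-1")
# cloudwatch = boto3.client("cloudwatch")
# datapoints = cloudwatch.get_metric_statistics(**params)["Datapoints"]
```

Watching the maximum latency per task is what makes it possible to decide when a cutover window can safely begin.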