SQL Data Engineer (Mid & Senior Levels) (Contract – Hybrid – Charlotte, NC or Dallas, TX)
Location: Hybrid – Charlotte, NC or Dallas, TX (3–4 days per week onsite)
Contract Duration: 12–24 Months
Employment Type: Contract (W2 Only)
Pay Rate Range: Mid-Level: $53–57/hr | Senior: $67–74/hr
Industry: Computer and Mathematical / Banking/Financial Services
Start Date: ASAP
Interview Format: Microsoft Teams – One & Done (interview slots available daily, 1–5 PM ET – quick turnaround guaranteed)
Position Overview: Powering Mission-Critical Finance Applications
We are hiring Mid-Level and Senior SQL Data Engineers to support the expansion of a leading Fortune 100 financial institution’s technology team. This division sits at the heart of the organization, building, optimizing, and maintaining the scalable data pipelines that serve as the backbone for mission-critical finance applications. The role offers the chance to leave a lasting mark on core systems that directly empower enterprise-scale decision-making, on a team that values individual ownership, rigorous technical excellence, and the ability to communicate complex data concepts clearly to both technical and business stakeholders.
What’s the Job? Core Responsibilities Driving Data Pipeline Excellence
As a SQL Data Engineer, you will be instrumental in the full lifecycle of data pipeline management, from design and development to optimization and troubleshooting, ensuring high availability and accuracy of data for critical financial applications.
- Design, Build, and Manage Robust SQL-Based ETL Pipelines: You will design, build, and manage SQL-based ETL (Extract, Transform, Load) pipelines: architecting efficient extraction from varied source systems, developing SQL transformations that enforce quality and consistency, and optimizing loads into target data warehouses and data marts. Your work is foundational to delivering clean, reliable data (a minimal transform-and-load sketch follows this list).
- Leverage AutoSys for Job Scheduling and Orchestration: You will use AutoSys (or a comparable enterprise job control system such as Control-M or IBM Tivoli Workload Scheduler) to schedule and orchestrate data jobs: defining dependencies, building complex schedules, monitoring execution, and troubleshooting failures so that pipelines reliably support critical financial reporting cycles (an illustrative JIL definition appears after this list).
- Collaborate with Upstream Data Providers and Troubleshoot Issues: You will build strong relationships with upstream data providers to understand their sources, schemas, and delivery mechanisms, and you will troubleshoot data-delivery and schema issues, working with source-system owners to resolve inconsistencies or delays before they disrupt downstream pipelines (a schema-drift check is sketched after this list).
- Translate Complex Data Requirements into Scalable Engineering Solutions: You will turn business stakeholders’ data requirements into robust, scalable engineering solutions: understanding the nuances of financial data, designing data models that support analytical needs, and implementing processing that handles large volumes without performance degradation.
- Proactively Identify and Resolve Pipeline Performance Bottlenecks: You will monitor data flow, analyze query execution plans, pinpoint inefficient processes, and apply optimizations (e.g., indexing, query refactoring, resource-allocation adjustments) so that pipelines run at peak efficiency and meet strict delivery SLAs (see the tuning sketch after this list).
- Ensure High Availability and Accuracy of Data Across Systems: You will implement data validation checks, establish data quality rules, monitor data integrity, and design resilient architectures that minimize downtime, ensuring financial applications always have correct and complete information (a sample quality gate closes the sketches below).
- Maintain Clear Documentation of Data Flows and Operational Practices: You will maintain clear, comprehensive documentation covering data lineage, transformation rules, job schedules, error-handling procedures, and troubleshooting guides; strong documentation underpins knowledge transfer, auditing, and consistent operational support for critical financial data.
- Participate in Agile Ceremonies and Report on Progress: You will take part in daily stand-ups, sprint planning, sprint reviews, and retrospectives, and you will report clearly on dependencies and progress, keeping both the Agile team and stakeholders informed on data pipeline development.
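To make these responsibilities concrete, the sketches below illustrate the kinds of tasks involved. They are illustrative only: every table, column, job, and server name is a hypothetical assumption, not the client’s actual environment. First, a minimal SQL transform-and-load step of the sort the ETL responsibility describes:

```sql
-- Minimal transform-and-load sketch; stg_transactions, dim_account, and
-- fact_transactions are hypothetical names.
INSERT INTO fact_transactions (account_key, txn_date, amount_usd, load_ts)
SELECT
    d.account_key,
    s.txn_date,
    s.amount_usd,
    CURRENT_TIMESTAMP
FROM stg_transactions AS s
JOIN dim_account AS d
    ON d.account_number = s.account_number  -- conform source IDs to warehouse keys
WHERE s.amount_usd IS NOT NULL              -- basic quality gate before loading
  AND s.txn_date >= :load_date;             -- incremental window, passed as a parameter
```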
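Next, orchestration. AutoSys jobs are defined in JIL; this hypothetical definition shows a nightly load that waits on an upstream extract:

```
/* Hypothetical AutoSys JIL definition; job names, machine, owner, and
   schedule are illustrative assumptions. */
insert_job: fin_txn_daily_load
job_type: CMD
command: /opt/etl/bin/run_txn_load.sh
machine: etl-prod-01
owner: etluser
condition: s(fin_txn_extract)     /* run only after the upstream extract succeeds */
date_conditions: 1
days_of_week: mo,tu,we,th,fr
start_times: "02:00"
alarm_if_fail: 1
description: "Daily transform/load of transaction data into the warehouse"
```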
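For upstream troubleshooting, one common technique is comparing the columns a feed actually delivered against an expected-schema reference; expected_schema here is a hypothetical control table:

```sql
-- Schema-drift check: flag missing columns or type changes in the feed.
SELECT e.column_name,
       e.data_type AS expected_type,
       c.data_type AS actual_type
FROM expected_schema AS e
LEFT JOIN information_schema.columns AS c
    ON  c.table_name  = e.table_name
    AND c.column_name = e.column_name
WHERE e.table_name = 'stg_transactions'
  AND (c.column_name IS NULL               -- column missing from the delivered feed
       OR c.data_type <> e.data_type);     -- column present but type has drifted
```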
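For bottleneck hunting, a typical loop is plan inspection followed by a targeted index (PostgreSQL-flavored syntax here; adapt EXPLAIN and INCLUDE to your platform):

```sql
-- Step 1: inspect the execution plan of a slow aggregation.
EXPLAIN ANALYZE
SELECT account_key, SUM(amount_usd)
FROM fact_transactions
WHERE txn_date >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY account_key;

-- Step 2: if the plan shows a sequential scan on txn_date, a covering
-- index on the filter column can remove the bottleneck.
CREATE INDEX ix_fact_txn_date
    ON fact_transactions (txn_date)
    INCLUDE (account_key, amount_usd);
```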
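Finally, a simple data-quality gate of the kind the availability-and-accuracy responsibility implies: reconcile staged versus loaded row counts, then flag rule violations before signing off a batch:

```sql
-- Reconciliation: staged rows should match rows loaded today.
SELECT
    (SELECT COUNT(*) FROM stg_transactions)                  AS staged_rows,
    (SELECT COUNT(*) FROM fact_transactions
      WHERE CAST(load_ts AS DATE) = CURRENT_DATE)            AS loaded_rows;

-- Validity rules: any hit here blocks promotion of the batch.
SELECT COUNT(*) AS bad_rows
FROM stg_transactions
WHERE amount_usd IS NULL
   OR txn_date > CURRENT_DATE;   -- future-dated transactions indicate a feed problem
```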
What’s Needed? Your Core Qualifications for Data Engineering Excellence
To excel as a SQL Data Engineer, you’ll need significant experience in data engineering, strong SQL and job scheduling expertise, and a background in financial services or enterprise technology.
- Extensive Data Engineering Experience: 5+ years for the Mid-Level role, or 7+ years for the Senior role, specifically in data engineering, designing, building, and managing complex data solutions in an enterprise environment.
- Strong Command of SQL and ETL Development: Hands-on experience writing complex SQL queries, stored procedures, and functions, along with a solid grasp of ETL methodologies and data integration tools.
- AutoSys or Comparable Job Control Systems Experience: Proven experience with AutoSys or a comparable enterprise scheduler (e.g., Control-M, IBM Tivoli Workload Scheduler), demonstrating your ability to orchestrate large-scale automated data jobs and workflows.
- Background in Finance, Banking, or Enterprise Tech: A proven record supporting finance, banking, or enterprise technology environments, giving you context for the data requirements, regulatory compliance, and performance demands of mission-critical financial applications.
- Clear, Concise Communication for Stakeholder Engagement: Strong communication skills, crucial for client-facing or lead-level engagements, letting you articulate complex technical concepts to technical peers and non-technical business stakeholders alike.
Preferred Qualifications: Adding Further Value to Your Profile
While the above are essential, the following qualifications would further strengthen your application:
- Python or Apache Spark Experience: Experience with Python (for scripting, data processing, or automation) or Apache Spark (for large-scale distributed data processing) would be a significant plus, indicating versatility in modern data engineering tools.
- Exposure to CI/CD and Version Control: Familiarity with CI/CD (Continuous Integration/Continuous Delivery) pipelines and version control systems (e.g., Git) for managing code and automating deployments would be beneficial.
- Performance Tuning of Data Workflows: Experience with performance tuning of complex data workflows and pipelines.
- Data Quality, Governance, and Compliance Familiarity: Familiarity with concepts of data quality, data governance, and compliance expectations (e.g., regulatory reporting requirements in finance) would be highly advantageous.
Not a Fit If You Are: (Clarifying Role Focus)
To ensure alignment with the specific needs of this role, this position is not a fit if you are:
- Primarily a Big Data Specialist: Your primary expertise is solely in Big Data technologies (e.g., extensive Hadoop, Hive, Kafka-heavy roles) without recent hands-on SQL/ETL focus.
- Lacking Recent Hands-on SQL and Traditional ETL: You are lacking recent hands-on experience with SQL and traditional ETL systems, as this role requires deep practical expertise in these areas.
- Unable to Confidently Communicate: You are unable to confidently communicate with both technical and business stakeholders, as clear communication is paramount for success in this collaborative environment.
What’s in it for me? Impact, Growth, and Collaboration in Finance
This 12-24 month contract Data Engineer role offers a compelling environment for professional growth and significant impact within a leading Fortune 100 Financial Institution.
- Lead in Financial Institution Expansion: You’ll directly support the expansion of a leading financial institution’s technology team, playing a pivotal role in strengthening the data backbone of a Fortune 100 company.
- Power Mission-Critical Finance Applications: Your work will directly power mission-critical finance applications, enabling decision-making at an enterprise scale and contributing to the core operations of the bank.
- Team Thrives on Ownership and Technical Rigor: You’ll join a team that genuinely thrives on ownership, technical rigor, and collaborative problem-solving, fostering an environment where your contributions are valued and impactful.
- Opportunity to Leave Your Mark on Systems: This is a unique opportunity to leave your mark on systems that directly impact decision-making at the highest levels within a global financial institution.
- Competitive Pay Range: The role offers a competitive pay rate range ($53–57/hr for Mid-Level, $67–74/hr for Senior), recognizing the specialized skills and critical nature of this position.
- Hybrid Work Model: Benefit from a hybrid onsite schedule, allowing for a balance between in-office collaboration in Charlotte or Dallas and remote flexibility.
- Streamlined Interview Process: A Microsoft Teams – One & Done interview format is offered, guaranteeing a quick turnaround for qualified candidates.
If this SQL Data Engineer role, at either the Mid or Senior level, aligns with your expertise in SQL, AutoSys, and data pipeline development within financial services, we encourage you to learn more about this hybrid contract opportunity. It’s a chance to make a significant impact on mission-critical finance applications.
Ready to build robust data pipelines for a Fortune 100 financial institution?
Job Category: Data, Engineering