Title: Big Data Engineer
Duration: Long term
Location: Atlanta, GA (Remote)
(W2 only)
Job Description:
We are seeking a highly skilled Big Data Engineer at the senior or staff level to design, develop, and optimize large-scale data platforms and processing systems. The ideal candidate will have a strong background in Apache Spark, Scala, Google BigQuery, PostgreSQL, and Apache Parquet, with hands-on experience building performant, scalable, and reliable data pipelines. This role requires an individual who can write efficient SQL queries, optimize performance across distributed architectures, and collaborate with cross-functional teams to deliver end-to-end data solutions. Experience with CPU locality or hardware-level performance tuning is a plus.
Key Responsibilities
Design, implement, and maintain highly scalable data processing pipelines using Apache Spark and Scala.
Develop optimized data warehouse and analytics solutions using Google BigQuery.
Architect and manage PostgreSQL databases, ensuring efficient schema design and data partitioning.
Work extensively with Apache Parquet for data serialization and efficient big data storage solutions.
Write high-performance SQL queries and perform tuning on complex datasets.
Collaborate with data scientists, application developers, and product teams to deliver data-driven capabilities.
Optimize data processing and CPU utilization through strong understanding of distributed systems.
Implement best practices for ETL/ELT processes, data quality, monitoring, and security.
Benchmark system performance and recommend improvements for scalability and efficiency.
Required Skills and Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
6–10 years of experience in Data Engineering or Big Data roles.
Expert-level proficiency in Apache Spark and Scala.
Strong working experience with Google BigQuery and PostgreSQL.
Hands-on expertise with Apache Parquet, ETL frameworks, and distributed computing.
Strong knowledge of SQL optimization and database performance tuning.
Familiarity with CPU locality, cluster performance tuning, and hardware-level optimization techniques.
Experience working in cloud-based environments (preferably Google Cloud Platform).
Excellent communication skills and ability to work in distributed teams.
Nice to Have
Experience with Airflow, Kafka, or other orchestration and streaming tools.
Knowledge of data modeling, data governance, and DevOps CI/CD principles for data systems.
Strong analytical, problem-solving, and debugging abilities.