Job Type : W2

Experience : 1-2 YRs

Location : PA

Posted Date : 15-Aug-2018

 Position: Software Engineer (Hadoop)

Location: Philadelphia 

 

Description :

Looking for a Software Engineer with knowledge of Hadoop and Spark, and experience with data mining and stream-processing technologies (Kafka, Spark Streaming, Akka Streams).

Responsibilities: 

  • Translate functional and technical requirements into detailed specifications for applications running on AWS, using services such as EC2, ECS, RDS Aurora MySQL, SQS, SNS, KMS, and Athena.

  • Migrate the current Prometheus and DARQ applications to a managed multi-account AWS cloud environment using containers and scalable compute platforms such as Docker (ECS) and Kubernetes.

  • Develop Spark code using Scala and Spark SQL/Streaming for faster testing and processing of data. Create, optimize, and troubleshoot complex SQL queries to retrieve and analyze data from databases such as Redshift, Oracle, MS SQL Server, MySQL, and PostgreSQL.

  • Design ETL transformations and jobs using the Pentaho Kettle Spoon designer 5.7.12 and the Pentaho Data Integration Designer, and schedule them on the ETL WFE application Carte Server.

  • Design, code, test, and customize RHQ reports for market systems data, and provide data-quality solutions to meet client requirements.

  • Develop complex queries for data sources such as the Nasdaq Data Warehouse and the Revenue Management System, and tune query performance. Create scripts to automate the data-ingestion process.

  • Build and deploy artifacts (RPMs) and services to Dev, QC, and Prod AWS accounts using GitLab pipelines.

  • Validate DARQ data and reports, and manage document libraries on the collaboration site using Confluence.

 

Requirements :

  • The minimum education requirement to perform the above job duties is a Bachelor's degree in Computer Science, Information Technology, or a related technical field.

  • Knowledge of Hadoop, Spark, and Kafka.

  • Strong communication skills.