Job Type: W2

Experience: 0-2 yrs

Location: Phoenix, AZ

Posted Date: 05-May-2018

Description:

Position: Software Engineer

Location: Phoenix, AZ

Duration: 21+ Months

Responsibilities:

  • Develop microservices-based RESTful APIs using the Spring MVC framework, writing controller, service, and DAO classes and implementing business logic with core Java APIs and data structures (see the Spring MVC sketch after this list).

  • Design MapReduce programs for data extraction, transformation, and aggregation across XML, JSON, CSV, and compressed file formats (see the MapReduce sketch below).

  • Develop Spark code in Scala and Spark SQL for data cleaning, pre-processing, and aggregation, applying Spark RDD transformations on top of Hive external tables. Develop complex Hive queries for different file sources, write Hive UDFs, and performance-tune Hive queries (see the Spark sketch below).

  • Coordinate the cluster and schedule workflows using ZooKeeper and Oozie (see the Oozie sketch below). Create HBase-Hive integration tables and load large sets of semi-structured data from various sources. Create shell scripts to simplify execution of HBase, Hive, Oozie, Kafka, and MapReduce jobs and to move data into and out of HDFS.

  • Configure Sqoop and develop scripts to import data from Teradata into HDFS (see the Sqoop sketch below).

  • Develop Kafka producers and consumers, implementing the data ingestion process and handling clusters in real time (see the Kafka sketch below).

  • Test, build, design, deploy, and maintain the CI/CD process using tools such as Jenkins, Maven, Git, Docker, and the Enterprise Cloud Platform (eCP).

  • Report daily progress in stand-ups, participate in code reviews, work effectively with QA to create test cases, and deploy new releases to production.
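
The sketches below illustrate several of the duties above; each is a minimal, hypothetical example built on assumed names and infrastructure, not code supplied by the employer. First, a Spring MVC REST controller of the kind described in the first bullet; the /api/jobs resource and the in-memory store are placeholders standing in for a real service/DAO layer.

```java
// Minimal Spring MVC REST controller sketch; all names are placeholders.
// Assumes Spring Web MVC (e.g. via Spring Boot) on the classpath.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/jobs")
public class JobController {

    // Hypothetical DTO; a real controller would delegate to a service/DAO.
    public record Job(long id, String title) {}

    private final List<Job> store = new CopyOnWriteArrayList<>();

    @GetMapping                 // GET /api/jobs -> list all jobs
    public List<Job> findAll() {
        return store;
    }

    @PostMapping                // POST /api/jobs -> create a job
    public Job create(@RequestBody Job job) {
        store.add(job);
        return job;
    }
}
```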
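
Next, a skeleton of the kind of MapReduce aggregation job the second bullet mentions. The input layout is assumed (CSV keyed on the first column); the job driver and input/output configuration are omitted for brevity.

```java
// Illustrative MapReduce aggregation skeleton; field layout is hypothetical.
// Assumes the Hadoop MapReduce client libraries on the classpath.
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: parse one CSV line and emit a (key, 1) pair per record.
public class RecordCountMapper
        extends Mapper<LongWritable, Text, Text, LongWritable> {
    private static final LongWritable ONE = new LongWritable(1);

    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split(",");
        if (fields.length > 0) {
            ctx.write(new Text(fields[0]), ONE); // group by the first column
        }
    }
}

// Reducer: sum the counts per key.
class RecordCountReducer
        extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> counts, Context ctx)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable c : counts) sum += c.get();
        ctx.write(key, new LongWritable(sum));
    }
}
```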
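
A sketch of Spark reading a Hive external table, cleaning, and aggregating. The posting names Scala; Spark's Java API is used here only to keep every sketch in one language, and the Scala equivalent is nearly identical. Table and column names are invented.

```java
// Illustrative Spark job over a Hive table; names are placeholders.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class CleanAndAggregate {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("clean-and-aggregate")
                .enableHiveSupport()            // read Hive external tables
                .getOrCreate();

        Dataset<Row> raw = spark.table("staging.events"); // Hive external table

        Dataset<Row> cleaned = raw
                .filter(col("event_id").isNotNull())        // drop bad rows
                .withColumn("event_date", to_date(col("event_ts")));

        // Aggregate per day and write the result back as a Hive table.
        cleaned.groupBy("event_date")
               .count()                         // adds a "count" column
               .withColumnRenamed("count", "events")
               .write().mode("overwrite")
               .saveAsTable("analytics.daily_event_counts");

        spark.stop();
    }
}
```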
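
For the scheduling half of the workflow bullet, a sketch that submits a workflow through the Oozie Java client; the server URL, HDFS application path, and job properties are placeholders.

```java
// Illustrative Oozie workflow submission via org.apache.oozie.client;
// the URL, HDFS app path, and property values are hypothetical.
import java.util.Properties;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.OozieClientException;

public class SubmitWorkflow {
    public static void main(String[] args) throws OozieClientException {
        OozieClient oozie = new OozieClient("http://oozie-host:11000/oozie");

        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH,
                "hdfs://namenode/user/etl/workflows/daily-load");
        conf.setProperty("nameNode", "hdfs://namenode");
        conf.setProperty("jobTracker", "yarn-rm:8032");

        String jobId = oozie.run(conf);   // submit and start the workflow
        System.out.println("Started workflow " + jobId);
    }
}
```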
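
A sketch of the Sqoop import from Teradata. Sqoop is normally invoked from a shell script or an Oozie action; it is wrapped in Java here only so every sketch stays in one language. The connection string, credentials path, and table are placeholders, and the Teradata JDBC driver is assumed to be installed.

```java
// Illustrative launch of a Sqoop import from Teradata into HDFS;
// all connection details are placeholders.
import java.io.IOException;

public class TeradataImport {
    public static void main(String[] args)
            throws IOException, InterruptedException {
        Process sqoop = new ProcessBuilder(
                "sqoop", "import",
                "--connect", "jdbc:teradata://td-host/DATABASE=sales",
                "--username", "etl_user",
                "--password-file", "/user/etl/.td_password",
                "--table", "ORDERS",
                "--target-dir", "/data/raw/orders",
                "--num-mappers", "4")
                .inheritIO()              // stream sqoop output to the console
                .start();
        System.exit(sqoop.waitFor());     // propagate the sqoop exit code
    }
}
```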
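
Finally, a minimal Kafka producer of the sort the ingestion bullet describes; the broker address, topic, and payload are placeholders. A matching consumer would subscribe to the same topic with KafkaConsumer and poll in a loop.

```java
// Illustrative Kafka producer for an ingestion pipeline; broker and topic
// names are placeholders. Assumes the kafka-clients library.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous and returns a Future<RecordMetadata>.
            producer.send(new ProducerRecord<>("events", "key-1",
                    "{\"event_id\": 1, \"type\": \"click\"}"));
            producer.flush();   // block until buffered records are delivered
        }
    }
}
```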

Requirements:

  • The minimum education requirement for the above job duties is a Bachelor's degree in Computer Science, Applications, Engineering, or a related technical field.

  • Knowledge of Hadoop, Hive, data structures, core Java, scripting, Spark, Python, and Scala

  • Strong verbal and written communication skills