Careers

Get to know Spry Info Solutions and you'll find a truly different technology company. We are a team of individuals with unique personalities and lifestyles. We believe in bringing together the best and brightest talent we can find in the industry - people who are passionate about their skills and who bring diverse backgrounds to a dynamic team. Our long history and our relationships with our clients reflect that commitment.

If you're looking for a company that values individual talent within team collaboration to deliver real business and technology solutions, then we're looking for you.

We offer consulting and permanent opportunities, a challenging -- yet friendly -- work environment, excellent compensation, and a comprehensive benefits package. You are encouraged to manage and develop your career according to your aspirations.

Enquiry & Sales: info@spryinfosol.com

Careers: hr@spryinfosol.com

Job Posts




Job Description:

 

Looking for a Software Engineer with knowledge of Hadoop and Spark, and experience with data mining, data governance frameworks, and stream processing technologies (Kafka, Spark Streaming).

 

Responsibilities:

  • Develop Spark applications using Scala and deploy Spark Streaming applications with an optimized number of executors, write-ahead logs, and checkpoint configurations (see the sketch after this list).
  • Develop Spark code using Scala and Spark SQL for faster testing and processing of data, improving performance and optimizing existing algorithms using SparkContext, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Design multiple POCs/prototypes using Spark, deploy them on the YARN cluster, and compare the performance of Spark with SQL. Also, create data pipelines for different events of ingestion and aggregation, and load the corresponding data into Glue catalog tables at HDFS locations to serve as a feed for the abstraction layer and downstream applications.
  • Coordinate with the production warranty support group to develop new releases, check for failed jobs and resolve issues, work with QA on creating test cases, and assist in creating implementation plans.
  • Create Elastic MapReduce (EMR) clusters, set up environments on AWS EC2 instances, import data from AWS S3 into Spark RDDs, and perform transformations and actions on RDDs.
  • Collect data from AWS S3 buckets in near real time using Spark Streaming, perform the necessary transformations and aggregations to build the data model, and persist the data in HDFS.
  • Work with the Spark ecosystem using Spark SQL and Scala queries on different formats such as text and CSV files; work extensively with the Parquet file format.
  • Implement a mechanism for triggering Spark applications on EMR upon file arrival from the client.
  • Work on continuous integration of applications using Jenkins, Rundeck, and CI/CD pipelines. Coordinate with the team on design decisions and translate functional and technical requirements into detailed programs running on Spark.
  • Create mappings and workflows to extract and load data from relational databases, flat-file sources, and legacy systems using Azure. Implement an application to perform address normalization for all client datasets, administer the cluster, and tune memory based on RDD usage.
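
The sketch below is a minimal, illustrative Spark Streaming job in Scala of the kind described above, with a write-ahead log and checkpointing enabled. The executor count, batch interval, input path, and checkpoint directory are placeholder assumptions, not project specifics.

    // Minimal sketch only: Spark Streaming with checkpointing and a write-ahead log enabled.
    // The paths, executor count, and batch interval below are hypothetical placeholders.
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object EventStreamJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("EventStreamJob")
          .set("spark.executor.instances", "4")                          // tuned per cluster in practice
          .set("spark.streaming.receiver.writeAheadLog.enable", "true")  // persist received data for recovery

        val ssc = new StreamingContext(conf, Seconds(30))                // 30-second micro-batches
        ssc.checkpoint("hdfs:///checkpoints/event-stream-job")           // required for recovery and stateful ops

        // Watch an input location (e.g. an S3 landing prefix) for newly arrived files.
        val lines = ssc.textFileStream("s3a://example-bucket/incoming/")
        val counts = lines.flatMap(_.split("\\s+")).map((_, 1L)).reduceByKey(_ + _)
        counts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }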

 

 

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science, Applications, Engineering or related technical field.
  • Should have good knowledge of Hadoop ecosystems, Spark, Scala, Python, and Java
  • Should have NoSQL, Spark SQL, and ANSI SQL query language skills
  • Strong verbal and written communication and English language skills





Job Description:

Position: Dot Net Full Stack Developer

Location: South Carolina

Duration: 12 Months + Extensions

Full Stack C# / ASP.NET Developer

Responsibilities:

 

  • Participate in business and system requirements sessions, requirements elicitation, and translation into technical specifications; develop a solution road map for future growth, process architecture, and mapping of disaster recovery functions.

  • Design applications based on identified architecture and support implementation of design by resolving complex technical issues faced by the IT project team during development, deployment and support.

  • Develop and maintain web-based applications using the .NET Framework (C#, VB), WCF, and Entity Framework, with knowledge of SQL Server databases.

  • Design front-end applications using the AngularJS framework, JavaScript, HTML, Bootstrap, and CSS. Develop web services such as REST and SOAP, and consume third-party APIs.

  • Work across work streams to determine solution designs that impact the core frameworks and components.

  • Perform performance optimizations on .NET frameworks and shell scripts. Prepare estimations, release plans, and a road map for future releases.

  • Define RDBMS models and schemas. Work on performance tuning of database schemas, migrations, and slowly changing dimensional databases.

  • Prepare and maintain technical design documents for code development and subsequent revisions. Prepare workflow diagrams to describe the process flow for deployment by system administrators.

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science, Information Technology or related technical field.

  • Should have good knowledge of C#.NET applications and ASP.NET

  • Experience with web technologies like AngularJS, Web API, MVC, HTML, JavaScript, jQuery.





Job Description:

Position: Software Engineer - Hadoop

Location: Bentonville, AR

Duration: 1 year + Extensions

 

 

Responsibilities:

  • Install and configure Hadoop platform distributions (Cloudera, Hortonworks, and MapR) and Hadoop component services; add edge nodes and gateway nodes and assign the master and slave nodes in the cluster.

  • Add and delete nodes, connect to servers through secure shell (SSH), and set up rack awareness.

  • Set up the HDFS replication factor for data replicas, configure log4j properties, and integrate AWS CloudWatch. Upgrade and patch the cluster from one version to another (CDH, HDP, MapR) and patch the Linux servers.

  • Optimize and tune the Hadoop environments to meet performance requirements. Install and configure monitoring tools for all critical Hadoop systems and services.

  • Configure and maintain high availability of HDFS, the YARN (Yet Another Resource Negotiator) ResourceManager, MapReduce, Hive, HBase, Kafka, and Spark.

  • Manage scalable Hadoop virtual and physical cluster environments. Manage backup and disaster recovery for Hadoop data. Work in tandem with big data developers and designers on use-case-specific, scalable, supportable infrastructure.

  • Provide very responsive support for day-to-day requests from development, support and business analyst teams.

  • Perform performance analysis and debugging of slow-running development and production processes. Perform product/tool upgrades and apply patches for identified defects with root cause analysis (RCA). Perform ongoing capacity management forecasts, including timing and budget considerations.

  • Design scripts to automate jobs that run in Hadoop environments and perform validation checks to monitor cluster health.

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science, Information Technology or related technical field.

  • Should have good knowledge of Cloudera, Hadoop, HDFS, Hive, Oozie, Spark, Python, Scala, and Splunk.

  • Performance tuning of Hadoop clusters.





 

Job Description:

Looking for a Hadoop Engineer for an 18+ month project in Mason, OH.

Responsibilities:

  • Build database prototypes to validate system requirements in discussion with project managers, business owners, and analyst teams. Document code and perform code reviews.

  • Design, develop, validate, and deploy Talend ETL processes for the DWH team using Hive on Hadoop.

  • Build data pipelines for different events of ingestion and aggregation, and load consumer response data into Hive external tables at HDFS locations to serve as a feed for several dashboards and web APIs. Develop Sqoop scripts to migrate data from Oracle to the big data environment.

  • Use the Spark API for better optimization of existing algorithms, working with SparkContext, Spark SQL, Spark UDFs, and Spark DataFrames. Work with different file formats such as CSV, JSON, Avro, text, and Parquet, and compression techniques such as Snappy, according to client requests.

  • Integrate Spark with MongoDB and create Mongo collections to be consumed by API teams. Convert Hive/SQL queries into Spark transformations using Spark RDDs and Scala (see the sketch after this list).

  • Work on a Kafka POC to publish messages into Kafka topics and test message frequency. Work on cluster tuning and the in-memory computing capabilities of Spark using Scala, based on the resources available in the cluster.

  • Develop shell scripts to automate jobs in a configurable, parameterized way before moving to production. Schedule automated jobs on a daily or weekly basis, as required, using Control-M as the scheduler.

  • Work on operational controls such as job failure notifications and email notifications for failure logs and exceptions.

  • Support the project team for successful delivery of the client's business requirements through all the phases of the implementation.
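
As a rough illustration of the Hive-to-Spark conversion work described above, the sketch below rewrites a simple Hive aggregation as Spark DataFrame transformations in Scala. The table name, column, and output path are hypothetical placeholders, not project specifics.

    // Minimal sketch only: converting a simple Hive/SQL aggregation into Spark transformations.
    // The table name, column, and output path are hypothetical placeholders.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object HiveToSparkExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("HiveToSparkExample")
          .enableHiveSupport()                 // read Hive external tables directly
          .getOrCreate()

        // Equivalent Hive query:
        //   SELECT event_type, COUNT(*) AS cnt FROM responses GROUP BY event_type
        val counts = spark.table("responses")
          .groupBy("event_type")
          .agg(count(lit(1)).as("cnt"))

        // Persist the result as Parquet in HDFS as a feed for downstream dashboards.
        counts.write.mode("overwrite").parquet("hdfs:///warehouse/response_counts")

        spark.stop()
      }
    }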

 

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science or related technical field.

  • Should have good knowledge of Hadoop ecosystems, HDFS, Hive, Oozie, Sqoop, Kafka, Storm, Spark, and Scala

  • Should be well versed in SDLC phases and in release and change management processes

  • Should have good analytical and problem-solving skills.





 

Job Description:

Looking for a Qlik Software Engineer for QlikView/Qlik Sense development and administration of advanced applications on the Qlik platform, for a long-term project in Houston, Texas.

 

Responsibilities:

  • Analyze, design, develop, test, implement, and maintain detailed business intelligence solutions using QlikView and Qlik Sense under Windows operating systems.

  • Develop technical and functional requirements and design specifications for building custom QlikView and Qlik Sense applications.

  • Create QlikView and Qlik Sense extract layers, logical layers, and star-schema/snowflake-schema data models, and develop functional reports in the QlikView and Qlik Sense application user interface.

  • Provide QlikView and Qlik Sense configurations, and maintain QlikView and Qlik Sense instances and cluster monitoring.

  • Deploy the designed reports into the integration environment and implement performance tests, response index tests, governance tests, and user validation of data accuracy.

  • Develop and implement automation and efficiencies with QlikView and Qlik Sense APIs (application programming interfaces).

  • Design and develop dashboards; monitor and track QlikView and Qlik Sense performance issues; and provide strategic support of QlikView and Qlik Sense integrations, deployments, and configurations.

  • Prepare workflow charts and diagrams to specify detailed operations to be performed by the Qlik applications; plan and prepare technical reports and instructional manuals as documentation of program development.

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science or related technical field.

  • Working experience in QlikView/Qlik Sense development and administration.

  • Innovative thinking and problem solving capabilities in a fast-paced environment

  • Data-driven and customer-focused.




Office Address

Corporate Headquarters

9330 LBJ Freeway, Suite 900, Dallas, TX 75243, USA

Georgia Office:

1081 Cambridge SQ, Suite B, Alpharetta, GA 30009, USA

Phone

+1(214)561-6706

Fax

+1(214)561-6795

Email

info@spryinfosol.com

Business Hours

Monday - Friday

9:00AM - 5:00PM