Careers

Get to know Spry Info Solutions and you'll find a truly different technology company. We are a team of individuals with unique personalities and lifestyles. We believe in bringing together the best and brightest talent we can find in the industry - people who are passionate about their skills and bring diverse backgrounds to a dynamic team, as reflected in our long history and our relationships with our clients.

If you're looking for a company that aligns itself with the value of individual talent within team collaboration in delivering real business and technology solutions, then we're looking for you.

We offer consulting and permanent opportunities, a challenging -- yet friendly -- work environment, excellent compensation, and a comprehensive benefits package. You are encouraged to manage and develop your career according to your aspirations.

Enquiry & Sales: info@spryinfosol.com

Career: hr@spryinfosol.com

Job Posts




Job Description:

 

Position: Software Engineer

Location: Madison, WI

Duration: Long Term

 

Responsibilities:

  • Develop, maintain, and enhance critical business applications and websites that provide high-level customer service to the internal client base, using Microsoft TFS/VSTS/Azure DevOps.
  • Analyze, create, test, and modify stored procedures that maintain reporting tables and interfaces to and from external systems.
  • Develop testing scripts, TFS (Azure DevOps) test scripts, and .NET testing code.
  • Create a client framework by consuming APIs, and write automated test cases for the application using C# and the MVC and MVVM design patterns.
  • Implement stored procedures in SQL, perform NUnit testing, and deploy the application. Automate unit and regression testing.
  • Work with Project Managers in technical design sessions to understand and influence design decisions based on automation needs.
  • Develop websites and Internet applications using the Microsoft .NET Framework and related development technologies such as ASP.NET with the C# programming language.
  • Perform various coding and programming tasks, such as design and development of web front ends, back-end web-based applications, and web/Internet services. Participate in coding and integration of text, graphics, sound, and/or video.

 

Requirements:

  • The minimum education requirement to perform the above job duties is a Bachelor’s degree in Computer Science, Applications, Engineering, or a related technical field.
  • Should have good knowledge of C#.NET applications, ASP.NET, Jenkins, and Azure.
  • Good communication skills, and the ability to work independently.





Job Description:

Looking for a ServiceNow Consultant for a multi-year project in San Antonio, TX.

 

Responsibilities:

  • Analyze, design, develop, and implement modules, features/functionality, and applications on the ServiceNow platform using business rules, client scripts, UI actions, and UI policies.

  • Deliver internal or external ServiceNow applications using JavaScript, HTML, CSS, Jelly Script, Angular JS and other relevant languages.

  • Coordinate with customer stakeholders to implement enhancements and customizations in ServiceNow based on business and IT requirements.

  • Implement ITSM and ITOM including Orchestration, Service Mapping, Event Management, Discovery, Reports, Dashboards and Performance Analytics.

  • Participate and collaborate in the SDLC using Agile/Scrum methodology.

  • Customize modules including, but not limited to Service Catalog, Service Portal, Incident, Change, Problem, Knowledge, Release, Asset Management and Configuration Management.

  • Integrate ServiceNow with other enterprise applications via APIs, using REST/SOAP web services, scripted workflows, and ServiceNow Discovery.

  • Provide operational readiness through the engineering, planning, coordination, and execution of performance and tuning analysis, systems support, and incident and problem resolution.

  • Review test scenarios and refine test cases to ensure quality and work with users to meet expected results.

  • Ongoing management of the ServiceNow environment, including platform version upgrades and application releases without business impact.

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science or related technical field.

  • Should have good knowledge of the ServiceNow platform, including JavaScript, HTML, CSS, Jelly script, and AngularJS.

  • Strong verbal and written communication skills.





Job Description:

 

Looking for a Software Engineer with knowledge of Hadoop and Spark, and experience with data mining, data governance frameworks, and stream-processing technologies (Kafka, Spark Streaming).

 

Responsibilities:

  • Develop Spark applications using Scala, and deploy Spark Streaming applications with an optimized number of executors, write-ahead logs, and checkpoint configurations.
  • Develop Spark code using Scala and Spark SQL for faster testing and processing of data, improving the performance and optimization of existing algorithms using SparkContext, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Design multiple POCs/prototypes using Spark, deploy them on the YARN cluster, and compare the performance of Spark with SQL. Also, create data pipelines for different events of ingestion and aggregation, and load the corresponding data into Glue catalog tables in an HDFS location to serve as a feed for the abstraction layer and downstream applications.
  • Coordinate with the production warranty support group to develop new releases, check failed jobs to resolve issues, work with QA in creating test cases, and assist in creating implementation plans.
  • Create Elastic MapReduce (EMR) clusters, set up environments on AWS EC2 instances, import data from AWS S3 into Spark RDDs, and perform transformations and actions on the RDDs.
  • Collect data using Spark Streaming from an AWS S3 bucket in near real time, perform the necessary transformations and aggregations to build the data model, and persist the data in HDFS.
  • Work with the Spark ecosystem, using Spark SQL and Scala queries on different formats such as text and CSV files; work extensively with Parquet file formats.
  • Implement a mechanism for triggering the Spark applications on EMR on file arrival from the client.
  • Work on continuous integration of the application using Jenkins, Rundeck, and CI/CD pipelines. Coordinate with the team on design decisions and translate functional and technical requirements into detailed programs running on Spark.
  • Create mappings and workflows to extract and load data from relational databases, flat-file sources, and legacy systems using Azure. Implement an application to perform address normalization for all the clients' datasets, administer the cluster, and tune memory based on RDD usage.

 

 

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science, Applications, Engineering or related technical field.
  • Should have good knowledge of the Hadoop ecosystem, Spark, Scala, Python, and Java.
  • Should have NoSQL, Spark SQL, and ANSI SQL query language skills.
  • Strong verbal and written communication and English language skills.





Job Description:

Position: Dot Net Full Stack Developer

Location: South Carolina

Duration: 12 Months + Extensions

Full Stack C# ASP.NET Developer

Responsibilities:

 

  • Participate in business and system requirements sessions, eliciting requirements and translating them into technical specifications; develop a solution road map for future growth, process architecture, and mapping of disaster recovery functions.

  • Design applications based on identified architecture and support implementation of design by resolving complex technical issues faced by the IT project team during development, deployment and support.

  • Develop and maintain web-based applications using the .NET Framework (C#, VB), WCF, and Entity Framework, with knowledge of the SQL Server database.

  • Design front-end applications using the AngularJS framework, JavaScript, HTML, Bootstrap, and CSS. Develop REST and SOAP web services and consume third-party APIs.

  • Work across work streams to determine solution designs impacting the core frameworks and components.

  • Optimize performance of .NET frameworks and shell scripts. Prepare estimates, release plans, and a road map for future releases.

  • Define RDBMS models and schemas. Perform performance tuning of database schemas, migrations, and slowly changing dimensional databases.

  • Prepare and maintain technical design documents for the code development and subsequent revisions. Prepare workflow diagrams describing the process flow for deployment by system administrators.

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science, Information Technology or related technical field.

  • Should have good knowledge of C#.NET applications and ASP.NET.

  • Experience with web technologies like AngularJS, Web API, MVC, HTML, JavaScript, jQuery.





Job Description:

Position: Software Engineer - Hadoop

Location: Bentonville, AR

Duration: 1 year + Extensions

 

 

Responsibilities:

  • Install and configure Hadoop platform distributions (Cloudera, Hortonworks, and MapR) and Hadoop component services; add edge nodes and gateway nodes and assign the master and slave nodes in the cluster.

  • Add and delete nodes, connect to the servers through remote secure shell (SSH), and set up rack awareness.

  • Set up the HDFS replication factor for data replicas, set up log4j properties, and integrate AWS CloudWatch. Upgrade and patch the cluster from one version to another (CDH, HDP, MapR) and patch the Linux servers.

  • Optimize and tune the Hadoop environments to meet performance requirements. Install and configure monitoring tools for all the critical Hadoop systems and services.

  • Configure and maintain HA of HDFS, the YARN (Yet Another Resource Negotiator) ResourceManager, MapReduce, Hive, HBase, Kafka, and Spark.

  • Manage scalable Hadoop virtual and physical cluster environments. Manage the backup and disaster recovery for Hadoop data. Work in tandem with big data developers and designers to provide use-case-specific, scalable, supportable infrastructure.

  • Provide very responsive support for day-to-day requests from development, support and business analyst teams.

  • Perform performance analysis and debugging of slow-running development and production processes. Perform product/tool upgrades and apply patches for identified defects with root cause analysis (RCA). Perform ongoing capacity management forecasts, including timing and budget considerations.

  • Design scripts to automate jobs in Hadoop environments and run validation checks to monitor cluster health.

Requirements:

  • The minimum education requirements to perform the above job duties are a Bachelor’s degree in Computer Science, Information Technology or related technical field.

  • Should have good knowledge of Cloudera, Hadoop, HDFS, Hive, Oozie, Spark, Python, Scala, and Splunk.

  • Performance tuning of Hadoop clusters.




Office Address

Corporate Headquarters

9330 LBJ Freeway, Suite 900, Dallas, TX 75243, USA
Get Directions

Georgia Office:

1081 Cambridge SQ, Suite B, Alpharetta, GA 30009, USA
Get Directions

Phone

+1(214)561-6706

Fax

+1(214)561-6795

Email

info@spryinfosol.com

Business Hours

Monday - Friday

9:00AM - 5:00PM