
Software Developer

Data Intelligence Technologies - Chantilly, VA

Summary

The successful candidate will leverage their development skills and experience, as part of our Sponsor's Data Layer Engineering Team, to support the ingestion, cleansing, transformation, loading, and display of large volumes of data, with a particular focus on Cloud data.

Duties, Tasks & Responsibilities

  • Designing, implementing, and optimizing large-scale ingest systems in a Big Data Cloud environment
  • Optimizing all stages of the data lifecycle, from initial planning to ingest through final display and beyond
  • Designing and implementing data extraction, cleansing, transformation, loading, and replication/distribution
  • Developing custom solutions/code to ingest and exploit new and existing data sources
  • Developing data profiling, deduplication, and matching logic for analysis
  • Organizing and maintaining Data Layer documentation so that others can understand and use it
  • Collaborating with teammates, other service providers, vendors, and users to develop new and more efficient methods
  • Effectively articulating the risks and constraints associated with software solutions, based on the environment

Required Experience, Skills, & Technologies

  • TS/SCI clearance with appropriate poly
  • Bachelor’s Degree in Computer Science, Information Systems, Engineering, or another related discipline; 10 years of related technical development experience may be substituted for the degree
  • Demonstrated experience delivering solutions using Amazon Web Services (e.g., EC2, RDS, S3) and/or other cloud technologies
  • Demonstrated data analysis and parsing experience
  • Demonstrated Java development experience coupled with significant SQL/database experience
  • Experience with the full data lifecycle, from ingest through display, in a Big Data environment
  • Hands-on experience with Java-related technologies, such as JDK, J2EE, EJB, JDBC, and/or Spring, and experience with RESTful APIs
  • Experience with Hadoop, HBase, and MapReduce
  • Experience with Kafka and ZooKeeper
  • Experience developing and performing ETL tasks in a Linux environment

Desired Experience, Skills & Technologies

  • Experience with Elasticsearch
  • Experience with Gradle

(Req - 160)



Posted On: Monday, August 2, 2021


