Big Data Hadoop Engineer (CR180)

DrangKro Aerospace - Pleasanton, CA

Must Have Skills:
• 4+ years of hands-on development, deployment, and production support
experience in a Big Data environment.
• 4-5 years of programming experience in Java, Scala, Python, Solr,
HBase.
• Proficient in SQL, relational database design, and methods for data
retrieval.
• Hands-on experience with Cloudera Distribution 6.x.
• Must have experience with the Spring framework, Web Services, and REST
APIs.
• Project experience with Query Processing Language (QPL), a
search-engine-independent technology for Advanced Query Processing, is
highly desirable.

Principal duties/Roles and responsibilities:
The tasks for the Hadoop Engineer include, but are not limited to, the
following:
1. Provide vision, gather client user requirements, and translate them
into a technical architecture.
2. Design and implement an integrated Big Data platform and analytics
solution.
3. Design and implement data collectors to collect and transport data to
the Big Data Platform.
4. Implement monitoring solution(s) for the Big Data platform to monitor
the health of the infrastructure.

MENTORING & SKILL ENHANCEMENT:
• Supplier Personnel will make every effort to provide skills
enhancement at a satisfactory rate and report any issues that may impede
the progress of training and mentoring.
• Supplier Personnel shall provide input to the Contract Executive to
develop a training and mentoring plan that includes specific skill sets,
tasks, and training methodologies.
• Supplier Personnel will be responsible for executing the training and
mentoring plan(s) with designated Client employees and shall provide
input to refine and further develop training and mentoring plans as
training progresses.
• Supplier Personnel shall meet with Client monthly to discuss training
progress.

Required Skills/ Technical Skills:
• Project experience with Query Processing Language (QPL), a
search-engine-independent technology for Advanced Query Processing, is
highly desirable.
• 4+ years of hands-on development, deployment, and production support
experience in a Big Data environment.
• 4-5 years of programming experience in Java, Scala, Python.
• Proficient in SQL, relational database design, and methods for data
retrieval.
• Knowledge of NoSQL systems such as HBase or Cassandra.
• Hands-on experience with Cloudera Distribution 6.x.
• Hands-on experience creating and indexing Solr collections in a
SolrCloud environment.
• Hands-on experience building data pipelines using Hadoop ecosystem
components such as Sqoop, Hive, Solr, MapReduce, Impala, Spark, and
Spark SQL.
• Must have experience developing HiveQL and UDFs for analyzing
semi-structured/structured datasets.
• Must have experience with the Spring framework, Web Services, and REST
APIs.
• Hands-on experience ingesting and processing various file formats
such as Avro, Parquet, Sequence Files, and text files.
• Must have working experience with data warehousing and Business
Intelligence systems.
• Expertise in Unix/Linux environments, including writing scripts and
scheduling/executing jobs.
• Successful track record of building automation scripts/code using
Java, Bash, Python, etc., and experience with the production support
issue-resolution process.
• Experience building ML models using MLlib or other ML tools.
• Hands-on experience with real-time analytics frameworks such as
Spark, Kafka, or Storm.
• Experience with graph databases such as Neo4j, TigerGraph, or OrientDB.
• Experience with Agile development methodologies.

Posted On: Monday, November 2, 2020
Compensation: $74/hr (W2)


