Reference # : 18-01656
Title : Hadoop Engineer
Location : Tampa, FL
Position Type : Contract
Start Date / End Date : 11/15/2018 / 11/14/2019
Our client, a leading global financial services company, has approximately 200 million customer accounts and does business in more than 140 countries. They provide consumers, corporations, governments and institutions with financial products and services, including consumer banking and credit, corporate and investment banking, securities brokerage, transaction services, and wealth management.
The Big Data & Analytics Engineering team is actively recruiting for an Engineer Specialist with a solid understanding of, and extensive hands-on experience in, engineering Hadoop products for large-scale deployments. The candidate must have experience with all components of the Hadoop ecosystem and will contribute to the architecture and engineering responsibilities of the Hadoop offering within the Major Global Bank's portfolio.
- Design Hadoop deployment architectures (with features such as high availability, scalability, process isolation, load-balancing, workload scheduling, etc.).
- Install, validate, test, and package Hadoop products on Red Hat Linux platforms.
- Publish and enforce Hadoop best practices, configuration recommendations, usage designs/patterns, and cookbooks for the developer community.
- Engineer process automation integrations.
- Perform security and compliance assessment for all Hadoop products.
- Contribute to Application Deployment Framework (requirements gathering, project planning, etc.).
- Evaluate capacity for onboarding new applications into a large-scale Hadoop cluster.
- Provide Hadoop subject-matter expertise (SME) and Level-3 technical support for troubleshooting.
- 10+ years of overall IT experience.
- 2+ years of experience with Big Data solutions and techniques.
- 2+ years of Hadoop application infrastructure engineering and development methodology background.
- Experience with Cloudera distribution (CDH) and Cloudera Manager is preferred.
- Advanced experience with HDFS, Spark, MapReduce, Hive, HBase, ZooKeeper, Impala, Solr, Kafka, and Flume.
- Experience installing, troubleshooting, and tuning the Hadoop ecosystem.
- Experience with multi-tenant platforms, taking into account data segregation, resource management, access controls, etc.
- Experience with Red Hat Linux, UNIX Shell Scripting, Java, RDBMS, NoSQL, and ETL solutions.
- Experience with Kerberos, TLS encryption, SAML, and LDAP.
- Experience with full Hadoop SDLC deployments with associated administration and maintenance functions.
- Experience developing Hadoop integrations for data ingestion, data mapping, and data processing capabilities.
- Experience with designing application solutions that make use of enterprise infrastructure components such as storage, load-balancers, 3-DNS, LAN/WAN, and DNS.
- Experience with concepts such as high availability, redundant system design, disaster recovery, and seamless failover.
- Overall knowledge of Big Data technology trends, Big Data vendors and products.
- Good interpersonal skills and excellent communication skills, including written and spoken English.
- Able to work on client projects in cross-functional teams.
- Good team player who shares knowledge, cross-trains other team members, and shows interest in learning new technologies and products.
- Ability to create high-quality documents and to work in a structured environment, following procedures, processes, and policies.
- Self-starter able to work with minimal supervision and in teams spanning diverse skill sets and geographies.
- Exposure to the Major Global Bank internal standards, policies and procedures is a plus (does not apply to external candidates).