Senior Big Data Engineer

Weehawken, NJ, USA

Ranked #12 on Forbes’ List of 25 Fastest Growing Public Tech Companies for 2017, EPAM is committed to providing our global team of more than 25,900 EPAMers with inspiring careers from day one. EPAMers lead with passion and honesty and think creatively. Our people are the source of our success; we value collaboration, always try to understand our customers’ business, and strive for the highest standards of excellence. No matter where you are located, you’ll join a dedicated, diverse community that will help you discover your fullest potential.

DESCRIPTION
You are curious, persistent, logical and clever – a true techie at heart. You enjoy living by the code of your craft and developing elegant solutions for complex problems. If this sounds like you, this could be the perfect opportunity to join EPAM as a Senior Big Data Engineer. Scroll down to learn more about the position’s responsibilities and requirements.

We are planning to form a data research team that can make sense of financial banking data, work with our client's stakeholders to discuss findings and firm up business rules, and turn those findings into production code.

Responsibilities

  • Develop proposals for the design and implementation of scalable big data architectures;
  • Develop scalable, production-ready data integration and processing solutions;
  • Convert large volumes of structured and unstructured customer data;
  • Design, implement, and deploy high-performance, custom applications at scale on Hadoop;
  • Work closely with data analysts and development stakeholders to transform data operations;
  • Design, document, and implement data lake and data stream processing;
  • Assist with the testing, deployment, and ongoing support of data processes;
  • Understand when to use data streams vs. data lakes;
  • Design and implement support tools for data processes;
  • Benchmark systems, analyze bottlenecks, and propose solutions to eliminate them;
  • Articulate data process designs and align fellow team members around them.

Qualifications

  • HDFS;
  • Spark;
  • Cassandra (optional);
  • Java services;
  • Solid understanding of distributed computing principles;
  • Experience managing a Hadoop cluster (Cloudera preferred), with all included services;
  • Ability to resolve ongoing issues with operating the cluster;
  • Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop;
  • Good knowledge of Big Data querying tools such as Pig, Hive, and Impala.