1 Click Easy Apply to Big Data Engineer Job Opening in ATLANTA, Georgia

Big Data Engineer


ATLANTA, Georgia


Job Type: CTH


Rate: 80.00


Our client is looking for a savvy Data Engineer to join the Consumer and Brand Protection Team within our Corporate Security Organization. The Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection from cross-functional sources. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.



The Data Engineer will support our security analysts and data scientists on various data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data pipelines and flows to support our next generation of products and data initiatives.



Duties & Responsibilities:

Create and maintain optimal data pipeline architecture, including implementing ETL processes to import data from various existing data sources and enrich it with data from external sources

Assemble large, complex data sets that meet functional / non-functional business requirements

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies

Assist in the construction of analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics

Work with key stakeholders, from Executives to Data Scientists, to assist with data-related technical issues and support their data infrastructure needs

Keep our data separated and secure across national boundaries through multiple data centers and AWS regions

Create data tools for analytics and data science team members that help them build and optimize our products into an innovative industry leader

Work with data and analytics experts to strive for greater functionality in our data systems

Select and integrate any Big Data tools and frameworks required to provide requested capabilities

Monitor performance and advise on any necessary infrastructure changes

Required Experience:

Advanced SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases (e.g., PostgreSQL, MySQL, Microsoft SQL Server, IBM DB2, Oracle)

Experience with data extraction from non-structured data sources, e.g., Splunk, CRM notes

Experience managing big data clusters (Hadoop, Accumulo), including all associated services

Experience with one or more of Cloudera/MapR/Hortonworks

Ability to resolve ongoing issues with operating the cluster

Experience with building stream-processing systems, using solutions such as Storm or Spark

Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala

Experience with NoSQL databases, such as HBase, Cassandra, MongoDB

Knowledge of various ETL techniques and frameworks, such as Flume

Experience with various messaging systems, such as Kafka or RabbitMQ

Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O

Experience with object-oriented/functional languages: Python, Java, C++, Scala, C#, etc.

Operating system experience including Linux, MS Windows, Docker



Core Qualifications:

We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Bachelor's degree or higher in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.



Experience building and optimizing "big data" data pipelines, architectures, and data sets

Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement

Strong analytic skills related to working with unstructured datasets

Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management

A successful history of manipulating, processing, and extracting value from large, disconnected datasets

Experience supporting and working with cross-functional teams in a dynamic environment.

Proficient understanding of distributed computing principles

TalentEinstein.com - Superhuman AI Recruiting Assistant