Data Engineer (Java, Python, C++)

Tower 2, Times City, 458 Minh Khai Street, Vinh Tuy Ward, Hai Ba Trung District, Ha Noi, Vietnam

Top 3 Reasons To Join Us

  • 13th-Month Salary
  • Development Opportunities
  • Attractive Salary

The Job

Responsibilities for Data Engineer

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS/GCP/Azure ‘big data’ technologies (a minimal ETL sketch follows this list).
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools that help the analytics and data science teams build and optimize our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
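
As a rough illustration of the extract-transform-load work described above, here is a minimal pipeline sketch in Python. It is a sketch only: the file name, table, and column names are hypothetical, and sqlite3 stands in for a production warehouse.

    import csv
    import sqlite3

    def extract(csv_path):
        # Read raw rows from a source CSV export (file name is hypothetical).
        with open(csv_path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Normalize types and drop rows that fail a basic sanity check.
        cleaned = []
        for row in rows:
            try:
                cleaned.append((row["customer_id"], float(row["amount"])))
            except (KeyError, ValueError):
                continue  # a real pipeline would log and quarantine bad rows
        return cleaned

    def load(rows, db_path="warehouse.db"):
        # Write into a local SQLite table standing in for the warehouse.
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS sales (customer_id TEXT, amount REAL)")
        con.executemany("INSERT INTO sales VALUES (?, ?)", rows)
        con.commit()
        con.close()

    if __name__ == "__main__":
        load(transform(extract("daily_sales.csv")))

In production, the same three stages would typically run on a scheduler and write to a managed warehouse rather than a local file.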

Your Skills and Experience

Qualifications for Data Engineer

  • We are looking for a candidate with 3+ years of experience in a Data Engineer role. A degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field is a plus. Candidates should also have experience with most of the following software, tools, and platforms:
  • Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (see the scheduling sketch at the end of this list).
  • Experience with AWS/GCP/Azure cloud services.
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Advanced working knowledge of SQL, including query authoring, and experience working with a variety of relational databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Ability to work with various data structures and common data transformation methods.
  • Strong analytical skills for working with structured and unstructured datasets.
  • Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
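
For the workflow-management bullet above, here is a minimal sketch of how such a pipeline might be scheduled, assuming Apache Airflow 2.x; the DAG id, schedule, and task bodies are all hypothetical placeholders, not a prescribed setup.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder task bodies; real tasks would call the pipeline code.
    def extract():
        pass

    def transform():
        pass

    def load():
        pass

    with DAG(
        dag_id="daily_sales_etl",          # hypothetical name
        start_date=datetime(2024, 1, 1),   # hypothetical start date
        schedule="@daily",                 # Airflow 2.4+; older 2.x uses schedule_interval
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        # Task ordering: extract runs before transform, which runs before load.
        t_extract >> t_transform >> t_load

Airflow then handles retries, scheduling, and dependency tracking, which is what distinguishes tools like these from plain cron jobs.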