Data Engineer

Full Time
Not Specified
Posted 2 months ago

Job Duties:

  • Build distributed and highly parallelized Big Data ingestion and processing pipelines that are able to process massive amounts of data (both structured and unstructured data) in near real-time
  • Collaborate with engineering and analytics teams to shape and drive tactical and strategic development of data infrastructure, reporting and analytics applications
  • Design data flows in the data lake platform
  • Analyze, design and develop the data pipelines, frameworks and infrastructure for data generation and transformation
  • Design and develop a data storage strategy and architecture
  • Evaluate and deploy a data quality framework to measure and improve data quality in the organization over time
  • Automate processes to improve repeatability and reliability
  • Maintain the cloud and Enterprise Data Warehouse
  • Work with business analysts to translate business needs into technical data requirements
  • Investigate the introduction of NoSQL and non-relational technologies that may benefit the corporate data strategy

Requirements:

  • Degree holder in Computer Science or Engineering with a minimum of 5 years of experience in high data volume environments, preferably in the software or internet industry.
  • Hands-on experience in Data Warehouse development including ETL and SQL development
  • Experience with Python and SQL environments preferred.
  • Experience with Big Data technologies such as:
    – pandas, Spark, NiFi, Flink, Kafka, Hadoop, Google Pub/Sub, TensorFlow
    – Hadoop administration: setting up Hadoop clusters (infrastructure setup)
    – Hadoop development: debugging experience on Hadoop clusters
    – HBase
    – Zookeeper
    – Ambari
    – OpenTSDB
    – Storm
    – Kafka
    – Azure Data Factory
    – Azure Databricks
    – Azure Synapse
  • Optional Technical Skills
    – Neo4j
    – Grafana
  • Experience with message queuing and data streaming is preferred
  • Flexible, adaptive quick learner who works well in a collaborative, communicative environment
  • Data-driven thinking
  • Experience with container technologies (Docker/Kubernetes)
  • Knowledge in design and implementation of data infrastructure on Azure, AWS or GCP
  • Experience in implementing data security capabilities such as encryption, anonymization, and pseudonymization
  • Understanding of statistical models, machine learning, and graph analysis.
  • Experience with deploying and running ML algorithms and experiments is a plus.
  • Demonstrated experience with web architectures, scaling, debugging code, and performance analysis is a plus
  • Good spoken and written English; Cantonese and Mandarin are a plus

Salary Range: HK$40,000 – 50,000

*** Permanent Hong Kong residency is preferred. Expected salary must be stated in the CV for consideration ***

Interested parties, please send a detailed resume with current and expected salary to our HR Department by email to gadmin@askit.com.hk, or apply through our Online System below.

All information provided will be treated in strict confidence and used solely for recruitment purposes. Resumes will be retained for a period of two years for future recruitment within our group and clients.

Job Features

Job Category: IT

Apply Online
