Data Engineering

Data Engineer - Singapore

Preferred Location: Singapore   |   Full Time

Tiger Analytics is a global analytics consulting firm that is today pioneering what AI can do to solve some of the toughest problems faced by organizations globally. With data and technology at the core of our solutions, we’re solving problems that eventually impact the lives of millions globally.

Our culture is modeled around expertise, humility, and mutual respect. In the world of AI & Analytics, we have arguably one of the best teams in the world. No one is bigger than the team, and we swim or sink together. Whether you are a novice just entering this industry or a veteran, you’ll have the opportunity to explore different roles over time and will be pushed to get out of your comfort zone. You’ll work alongside the best in the industry, learn a structured approach to innovation, and have fun along the way.

About the role

Working at Tiger Analytics, you’ll be at the heart of this AI revolution. You’ll work with teams that push the boundaries of what is possible and build solutions that energize and inspire.

We are looking for a Data Engineer to be based out of our Singapore office. As a Big Data Engineer, you will:

  • Develop Big Data solutions for near real-time stream processing, as well as batch processing, on the Big Data platform.
  • Analyse problems and engineer highly flexible solutions.
  • Set up and run Big Data frameworks such as Hive, Sqoop, Pig, Mahout, Spark (with Scala), streaming mechanisms and others.
  • Work with Big Data services on the cloud, preferably Azure (ADF, ADLS, Blob Storage, Azure SQL DW, etc.).
  • Work with business domain experts, data scientists and application developers to identify data relevant for analysis and develop the Big Data solution.
  • Coordinate effectively with team members on the project, customers and business partners.
  • Adapt to and learn new technologies in the Big Data ecosystem.
  • Take the initiative to run the project and gel with the start-up environment.

Required Experience, Skills & Competencies:

  • Minimum 5 years of professional experience, including 2 years of Hadoop project experience.
  • Experience in Big Data technologies such as HDFS, Hadoop, Hive, Pig, Sqoop, Flume, Spark, etc.
  • Experience working in a cloud environment, preferably Azure.
  • Must have core Java or advanced Java experience.
  • Experience in developing and managing scalable Hadoop cluster environments and other scalable, supportable infrastructure.
  • Familiarity with data warehousing concepts, distributed systems, data pipelines and ETL.
  • Good communication (written and oral) and interpersonal skills.
  • Extremely analytical, with strong business sense.
  • Experience in NoSQL technologies such as HBase, Cassandra and MongoDB (good to have).

Submit Your Application
