1. Develop Big Data solutions for near real-time stream processing, as well as batch processing, on the Big Data platform.
2. Analyse problems and engineer highly flexible solutions.
3. Set up and run Big Data development frameworks such as Hive, Sqoop, Pig, Mahout, Spark (Scala), streaming mechanisms, and others.
4. Work with business domain experts, data scientists, and application developers to identify data relevant for analysis and develop the Big Data solution.
5. Coordinate effectively with project team members, customers, and business partners.
6. Adapt to and learn new technologies in the Big Data ecosystem.
7. Take initiative in running the project and fit well into the start-up environment.
Must have skills:
1. Minimum 5 years of professional experience, including 2 years of Hadoop project experience.
2. Experience in Big Data technologies like HDFS, Hadoop, Hive, Pig, Sqoop, Flume, Spark, etc.
3. Must have core Java or advanced Java experience.
4. Experience in developing and managing scalable Hadoop cluster environments and other scalable, supportable infrastructure.
5. Working knowledge of setting up and running Hadoop clusters.
6. Familiarity with data warehousing concepts, distributed systems, data pipelines and ETL.
7. Good communication (written and oral) and interpersonal skills.
8. Strong analytical skills combined with sound business sense.
Good to have skills:
1. Experience with the Cloudera or Hortonworks Hadoop distribution.
2. Experience in NoSQL technologies like HBase, Cassandra, and MongoDB.