Big Data Developer

Roles & Responsibilities:
  • Expertise in modern technologies such as Apache Spark and Scala programming.
  • Managing transactions with linked data groups, bulk processing, and error handling.
  • Strong hands-on SQL skills.
  • Working experience with fact and dimension tables.
  • Design, develop, document and architect Hadoop applications using AWS.
  • Work with Hadoop log files to manage and monitor the cluster.
  • Develop MapReduce code that runs seamlessly on Hadoop clusters.
  • Working knowledge of SQL, NoSQL, data warehousing, and Hive queries.
  • Test software prototypes, propose standards, and smoothly transfer them to operations.
  • Excellent code debugging skills.
  • Build scalable, highly functional web services to track data.
  • Write JSON to get and put data, configured per system requirements.
  • Ability to write MapReduce jobs and to produce maintainable, reliable, high-performance code.
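As a minimal illustration of the Spark and Scala skills listed above (a hypothetical sketch, not part of the posting; the app name and HDFS paths are placeholders), a simple word-count job might look like:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example: a minimal Spark word-count job in Scala.
// The input and output paths below are placeholders.
object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WordCount")
      .getOrCreate()

    val counts = spark.sparkContext
      .textFile("hdfs:///input/logs")   // read lines from HDFS
      .flatMap(_.split("\\s+"))         // split each line into words
      .map(word => (word, 1))           // pair each word with a count of 1
      .reduceByKey(_ + _)               // sum the counts per word

    counts.saveAsTextFile("hdfs:///output/wordcounts")
    spark.stop()
  }
}
```

A job like this would typically be packaged as a JAR and submitted to a cluster with `spark-submit`.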
Preferred Education:
BE/BTech/MCA
Area of Expertise:
Spark, Scala, SQL and JSON
Experience:
4.8 Years
Job Code: