Any Graduation Degree
01 Sep 2020
- Data Engineering
- Big Data
We are a full-stack conversational AI company, building products and delivering bots for our clients. We are looking for a Data Engineer who believes in this mission and will work with us to scale the data pipelines at Haptik.
With a huge amount of data being generated across the board at Haptik, you will work on expanding and optimizing our data and data-pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
From building new data pipelines to processing data and enabling business metrics across the board, you will bring more visibility into the Haptik IVA platform.
- Experience designing scalable and automated data pipelines/data lakes
- Sound knowledge of cloud technologies
- 2+ years of experience working with big data technologies such as Hadoop, Spark, Kafka, Redshift, etc.
- Sound knowledge of databases like MySQL and Elasticsearch, and NoSQL databases like MongoDB
- Inquisitive nature & Self-starter who can implement with minimal guidance
- Expert with scripting languages (high five for Python, R, and/or shell)
- Requirements is such a strong word. We don't necessarily expect a candidate who has done everything listed, but you should be able to make a credible case that you have done most of it and are ready for the challenge of adding some new things to your resume.
- Improve existing Data Pipelines (AWS, Azure, GCP)
- Improve performance of existing datastores/databases
- Help drive a data-driven culture across the organization
- Build and improve security-first data systems
- Ensure that all data systems meet business requirements as well as industry best practices.
- Integrate emerging data-management and software-engineering technologies into existing data systems.
- Develop set processes for data modeling and data production.
- Research new uses for existing data.
- Collaborate with members of your team (e.g., data architects, the IT team, data scientists) on the project's goals.
- Recommend ways to continually improve data reliability and quality.