Data Engineer II
Karachi, Pakistan

About Careem

Careem is the leading technology platform of the Greater Middle East. A pioneer of the region’s ride-hailing economy, Careem is expanding its services across its network to include payments, delivery and mass transportation.

Established in July 2012, Careem operates in more than 130 cities across 15 countries, has created more than one million job opportunities in the region and hosts over 30 million users.

Careem’s engineering team is growing rapidly, and we are looking for talented engineers to help us in our mission to simplify and improve the lives of people and build a lasting institution that inspires.

About the Role

You will be involved in the full life cycle of an application as a member of an agile development team, responsible for designing and implementing build, release, deployment, and configuration activities for the data platform.

Other responsibilities include working with internal business partners to gather requirements, prototyping, architecting, implementing and updating CI/CD solutions, exploring tools, automating the release management process, automating deployments, setting up environments, managing operations, and triaging and fixing operational issues.

What You’ll Need

  • BS degree in Computer Science, a similar technical field of study, or equivalent practical experience
  • 3+ years of programming experience (preferably in Python, Java, or C#)
  • Expertise in scripting and problem solving
  • Experience with Computer Science fundamentals including data structures, algorithms, and complexity analysis
  • Infrastructure automation with Boto, CloudFormation, Terraform, Troposphere, or similar
  • Experience with Jenkins or similar CI/CD tools
  • Hands-on experience with cloud services (certifications preferred)
  • Able to resolve known issues using runbooks
  • Strong Linux background
Desirable:

  • Demonstrated proficiency in data management and automation in Spark, Hadoop, and HDFS environments
  • Experience managing data in relational databases and developing ETL pipelines
  • Experience using Spark SQL and Hive to write queries and scripts
  • Experience maintaining data visualization tools such as Redash and Tableau
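The ETL experience asked for above can be illustrated with a minimal sketch in plain Python — extract, transform, load as three small functions. The row data, field names, and the rides-threshold filter are all illustrative assumptions, not anything from the role itself:

```python
# Minimal ETL sketch: extract rows, transform them, load into a sink.
# In a real pipeline, extract() would query a relational source and
# load() would write to a data lake or warehouse table.

def extract():
    # Hypothetical source rows; numeric fields arrive as strings, as is
    # common when reading from CSVs or loosely typed sources.
    return [
        {"city": "Karachi", "rides": "120"},
        {"city": "Dubai", "rides": "75"},
    ]

def transform(rows):
    # Cast numeric fields and keep only cities above an assumed threshold.
    return [
        {**row, "rides": int(row["rides"])}
        for row in rows
        if int(row["rides"]) >= 100
    ]

def load(rows, sink):
    # Append transformed rows to the sink and report how many were loaded.
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 1
```

Real pipelines add incremental extraction, schema validation, and idempotent loads on top of this shape, but the three-stage structure stays the same.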
What Will Keep You Busy?

  • Assist in resolving production issues: conduct diagnostics, perform root cause analysis, and pull/push code to resolve conflicts
  • Build pipelines from scratch integrating data sources with the data lake.
  • Automate infrastructure for big data services running on AWS using Python and Bash
  • Build and maintain CI/CD pipelines
  • Design and implement high-availability features such as auto-scaling, load balancing, and health checks
  • Perform capacity planning and propose the right size and capacity for running the service (number of instances, disk size, and instance type, e.g. compute-optimized vs. memory-optimized)
  • Track and support version rollouts and deployments
  • Understand and use appropriate tools and principles to deploy, run, release, maintain, monitor, and troubleshoot code
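The capacity-planning task above boils down to simple arithmetic: divide peak load by usable per-instance throughput, leaving headroom for spikes and failover. A minimal sketch, where all the numbers and the headroom/minimum defaults are illustrative assumptions:

```python
import math

def plan_capacity(peak_rps: float, rps_per_instance: float,
                  headroom: float = 0.3, min_instances: int = 2) -> int:
    """Return the instance count needed to serve peak_rps.

    headroom reserves a fraction of each instance's throughput for
    traffic spikes and failover; min_instances enforces a floor so the
    service stays available if one instance (or zone) is lost.
    """
    usable_rps = rps_per_instance * (1 - headroom)
    needed = math.ceil(peak_rps / usable_rps)
    return max(needed, min_instances)

# 5000 peak RPS at 400 RPS/instance with 30% headroom -> 18 instances
print(plan_capacity(peak_rps=5000, rps_per_instance=400))  # 18
```

The same calculation applies to disk and memory sizing; the instance-type choice (compute- vs. memory-optimized) then follows from which resource the workload saturates first.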
What Do We Offer You?

    Salary Package:

  • Competitive salary
  • Unlimited paid annual leave
  • Entrepreneurial working environment
  • Flexible working arrangements
  • Mentorship and career growth