This vacancy has already been closed
A fast-growing IT company is looking for a Big Data Engineer.
Requirements:
- Software engineering mindset and ability to write elegant, maintainable code.
- Analytical mindset to understand business needs and devise engineering solutions.
- Experience balancing complexity and simplicity in pipeline design.
- Expertise in building data pipelines over large, complex data sets using Spark, Hadoop, or other open-source frameworks.
- Expertise in one or more programming languages: Python (an advantage), Ruby, Java, Go.
- Strong SQL skills and experience working with various databases and query engines (Spark SQL / DataFrames, AWS Athena, Google BigQuery).
- Experience with data workflow management tools: Airflow, Luigi, etc.
- Experience with AWS cloud services: EC2, ECS, EMR, Athena, S3 (or their Google Cloud equivalents).
- Excellent communication skills to collaborate with cross-functional partners and independently drive projects and decisions.
- An advantage: experience with Docker, Kubernetes / AWS Fargate, Git, web crawling.
Responsibilities:
- Build highly scalable data pipelines and data sets.
- Enhance our data architecture to balance scale and performance.
- Collect and generate the required information from data across the web.
- Build, optimize and automate data processes to support business requirements.
- Work closely with the CTO to make sure our infrastructure supports the ongoing growth of data.
- Design and implement the infrastructure required to run data pipelines at scale on AWS / Google Cloud.
We offer:
- Long-term project with attractive payment.
- Brand-new office in the city center of Kiev (Pecherska metro station) with a young team of talented people.
- PE (private entrepreneur) registration handled by the company's accountant, with monthly payments to the PE bank account.
- 20 working days of annual paid vacation.
- English classes inside the office.
- Opportunity to grow as a professional.
- Interesting corporate events and gifts.
Yulia