We are looking for a DevOps/Infrastructure Engineer to join our team!
This is what we do:
End-to-end data solutions; in other words: data pipelines, efficient data ingestion, transformation/processing, and storage; integration of multiple data sources; data access, presentation, and reporting; and preparation for Data Science/BI/ML applications. In cloud or on-prem environments.
Working with complex end-to-end data solutions places great importance on automation, security, and infrastructure monitoring, and requires data flow orchestration as well as ML model versioning, deployment, and serving. That is why we need skillful infrastructure engineers on our team.
Some of the technologies we like to use:
- Cloud infrastructure management (mostly AWS, but also Azure and GCP)
- On-prem infrastructure provisioning & operations
- Terraform, Ansible, and cloud-provided automation tools (e.g. CloudFormation) to automate infrastructure provisioning and configuration
- Monitoring tools, so we stay in control (Grafana, Prometheus, ELK, cloud-provided monitoring)
- Orchestration (multi-step ETLs, automated data pipelines with NiFi, Airflow, MLflow)
- Automation (deployment, operations) of data storage and processing technologies such as Cassandra, Druid, Kafka, Spark, Flink, and Presto
- Resource management (Docker, K8S, EKS, Rancher)
You:
- You are an infrastructure engineer (DevOps, if you like). To us that means you understand why it is important to test and monitor software, you automate deployment, you understand the underlying hardware requirements, and you have a solid grasp of software engineering fundamentals such as networking, operating systems, and cloud vs. on-premise environments
- You have 3+ years of experience in DevOps
- You have experience with a scripting language (Bash, Python)
- You have decent experience with one or more monitoring stacks
- You have deep experience with one or more automation tools
- You know what MLOps is (or you want to find out), and you are secretly craving to apply your DevOps skills to Machine Learning projects
- You already know, or would like to learn, how to operate and manage distributed technologies such as Kafka, Cassandra, Spark, Flink, the-latest-and-greatest-you-name-it tech, etc.
- You have worked on a Scrum project, so you know your way around dailies, demos, retrospectives, and refinements, and you know how to manage your work transparently through an issue tracking tool
- You like learning new things and are open to exploring different approaches to solutions
- You like a healthy, balanced relationship between work and free time – we don’t like overtime
- You know when and how to ask for help – we’re here to work together
We offer:
- Clearly defined pay grades: from L1 (talented junior) to L5 (a senior who is an expert in at least one technology we use)
- Career path that connects these grades – you know where your life is going (at least here with us)
- Loyalty coefficient: 10% of net compensation after 3 years at SmartCat, 20% of net compensation after 5 years
- Knowledge budget: extra money for conferences, books, and training of your choice
- Flexible working hours and work from home
- End-of-year bonus program
- Full transparency – information about levels and salaries, company strategy, financial reports, and beyond
- Full support towards gaining knowledge and expertise
- An excellent team of senior engineers and data scientists