Data Engineer - Hybrid in Paris, France, or Full Remote (+/- 2h)

Kaiko is a rapidly growing fintech startup in the digital assets industry with an international presence. Our mission is to be the foundation of the new digital finance economy, which promises to expand financial opportunity and inclusion globally. We do this by empowering market participants with accurate, transparent, and actionable crypto data to be leveraged for a range of market activities including strategy backtesting, in-depth research, valuation, analytics, and integrations. 

About the position

You will be joining a fast-paced engineering team made up of people with significant experience working with terabytes of data. We believe that everybody has something to bring to the table, and therefore put collaborative effort and teamwork above all else (and not just on the engineering side). You will be able to work autonomously as an equally trusted member of the team, and participate in efforts such as:

● Addressing high-availability problems: load balancing, disaster recovery, replication, sharding, etc.
● Addressing "big data" problems: 300+ million messages/day, 220B data points since 2010 (currently growing at a rate of 15B per month)
● Improving our development workflow, continuous integration, continuous delivery, and, in a broader sense, our team practices
● Expanding our platform’s observability through monitoring, logging, alerting and tracing
● Designing, developing and deploying scalable and observable backend microservices
● Reflecting on our storage, querying, and aggregation capabilities, as well as the technologies required to meet our objectives
● Working hand in hand with all other departments on developing new features, addressing issues, and extending the platform

Who We Are Looking For

We value soft skills as much as hard skills, and we do our best to recruit team players rather than rockstars. In no particular order, here are the top qualities we look for in each other:

● Honest, getting and giving feedback is very important to you
● Humble, making new errors is an essential part of your journey
● Empathetic, you feel a shared sense of responsibility for all the team's endeavors, regardless of each individual's level of involvement
● Open, you want to make yourself heard while respecting everybody’s point of view
● Trustworthy, you are reliable and do not micromanage others

As for the more technical skills required for this position:

● You have experience working as a Software/Data/DevOps Developer/Engineer
● You are knowledgeable about data ingestion pipelines and massive data querying
● You are not just a cloud user, but actually understand how it works under the hood: if you've worked with a bare-metal environment before, even better
● You have worked with, in no particular order: microservices architecture, infrastructure as code, self-managed services (e.g. deploying and maintaining your own databases), distributed services, server-side development, etc.
● You have the utmost respect for legacy code and infrastructure, with the occasional, perfectly understandable, respectful complaint
● You are fluent in written and spoken English

Please note that we don't have any "strict" requirements in terms of development platforms or technologies: we are primarily interested in people capable of adapting to an ever-changing landscape of technical requirements, who learn fast and are not afraid to constantly push our technical boundaries. It is not uncommon for us to benchmark new technologies for a specific feature, or to change our infrastructure in a big way to better suit our needs.

It's a nice-to-have if you have experience with:

● Data scraping over HTTP, WebSocket, and/or FIX Protocol
● Developing financial product methodologies for indices, reference rates, and exchange rates
● The technicalities of financial market data, such as the differences between calls, puts, straddles, different types of bonds, swaps, CFDs, CDSs, options, futures, etc.

Our Stack

➔ Monitoring: VictoriaMetrics, Grafana
➔ Alerting: AlertManager, Karma, PagerDuty
➔ Logging: Vector, Loki
➔ Caching: FoundationDB
➔ Secrets management and PKI: Vault
➔ Configuration management and provisioning: Terraform, Ansible
➔ Service discovery: Consul
➔ Messaging: Kafka
➔ Proxying: HAProxy, Traefik
➔ Service orchestration: Nomad (plugged into Consul and Vault)
➔ Database systems: ClickHouse (main datastore), PostgreSQL (ACID workloads)
➔ Protocols: gRPC, HTTP (phasing out in favor of gRPC)
➔ Platforms (packaged in containers): Golang, NodeJS (phasing out in favor of Golang), Ruby (phasing out in favor of Golang)
➔ Hosting: OVH (90% bare-metal)

Compensation & Benefits

● Hardware of your choice
● Meal vouchers (Swile, 50% subsidized by Kaiko)
● Health insurance (Alan, 75% subsidized by Kaiko)
● Flexible hours, 35 days of paid vacation per year
● Salary with equity, based on experience (for 10 years of experience, the range is 80-90K€ depending on the equity option chosen)

Recruitment Process

➔ Introduction call (30mins)
➔ Meeting with 2 engineers for a technical/product RPG: you read that right, no written test, no whiteboard quicksort implementation (1h30)
➔ Informal discussions with other members of the company from the business, sales, research, or product departments (2 people, 45m each)
➔ Closing meeting with VP of Engineering (15m)
➔ Offer

Each step is generally held on a different day, we do our best to follow up within 24 hours, and we always provide candidates with a thorough explanation of our decision.

Interested? Reach out to us at engineering@kaiko.com.