Senior Data-Backend Engineer (UK)

Causaly

Software Engineering
London, UK
Posted on Friday, October 21, 2022

About us

Causaly accelerates how humans acquire knowledge and develop insights in Biomedicine. We enable researchers and decision-makers to discover evidence from millions of academic publications, clinical trials, regulatory documents, patents and other data sources... in minutes. Using our AI technology, we are developing the world’s biggest knowledge platform in Biomedicine powered by a high-precision Knowledge Graph.

We work with some of the world's largest biopharma companies and institutions on use cases spanning Drug Discovery, Safety and Competitive Intelligence. For example, read how Causaly is used in Target Identification here: AI-supported-target-identification-for-systemic-lupus-erythematosus.

We are backed by top VCs including Index Ventures, Pentech and Marathon.

What we are looking for

We are looking for a Senior Data Engineer with a strong software-engineering profile and experience in data pipelines, backend architectures, ETL, cloud and related fields. You will join and help grow our newly established Data & Semantic Technologies team, which is responsible for designing and building the highly scalable, flexible data backend Causaly needs to realize its vision. You will work on incremental data pipelines supporting both batch and targeted updates, grow and maintain massive knowledge graphs and ontologies, and feed our constantly growing data warehouse. You will enable and empower the Applied AI and Application teams, and be responsible for linking their outcomes to create true business value.

We are looking for innovative engineers who are capable, talented, engaged and passionate about creating industry-strength architectures and solutions that unleash the value of data. We are a multi-disciplinary team working in a fast-paced and collaborative environment, and we value honest opinion and open debate. Do you have a strong problem-solving mindset and a hands-on attitude? Are you keen to design and build innovative solutions that leverage the value of data, passionate and creative in your work, happy to share ideas with your team, and able to pick the right tool for the job? Then you should become part of our journey!

Sneak preview on what you can expect to work on

  • Gather and understand data based on business requirements.
  • Import big data (millions of records) from various formats (e.g. CSV, XML, SQL, JSON) into BigQuery, process it further there, and combine it with external data sources.
  • Implement and maintain highly performant data pipelines using the industry’s best practices and technologies for scalability, fault tolerance and reliability.
  • Build the tools needed for monitoring, auditing, exporting and gleaning insights from our data pipelines.
  • Work directly with a multitude of technical, product and business stakeholders.
  • Manage and maintain backend data processes related to data delivery, curation and machine-learning operations.
  • Help build a strong data-engineering function, mentor and guide other engineers, shape our technology strategy and innovate on our data backbone.
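As a flavour of the ingestion work above, one step of such a pipeline might normalise records arriving in different formats before they are loaded into the warehouse. The sketch below uses pandas (part of the stack named in the requirements); the column names and deduplication rule are purely illustrative assumptions, not Causaly's actual schema.

```python
# Hypothetical sketch of one pipeline step: normalise records from two
# source formats (CSV and JSON) into a single frame, deduplicating on a
# document identifier. All column names here are made up for illustration.
import io
import json

import pandas as pd


def merge_sources(csv_text: str, json_text: str) -> pd.DataFrame:
    """Combine CSV and JSON records, keeping the newest row per doc_id."""
    csv_df = pd.read_csv(io.StringIO(csv_text))
    json_df = pd.DataFrame(json.loads(json_text))
    combined = pd.concat([csv_df, json_df], ignore_index=True)
    # Later sources win: keep the last occurrence of each document id.
    combined = combined.drop_duplicates(subset="doc_id", keep="last")
    return combined.sort_values("doc_id").reset_index(drop=True)


csv_text = "doc_id,title\n1,Alpha\n2,Beta\n"
json_text = '[{"doc_id": 2, "title": "Beta v2"}, {"doc_id": 3, "title": "Gamma"}]'
df = merge_sources(csv_text, json_text)
print(df["title"].tolist())  # ['Alpha', 'Beta v2', 'Gamma']
```

In a real incremental pipeline the resulting frame would then be loaded into BigQuery (e.g. as an append or merge into a warehouse table) rather than printed.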
Minimum Requirements

Successful candidates will have:

  • Master’s degree in Computer Science, Mathematics or a related technical field
  • 5+ years of experience in backend data processing and data pipelines
  • Excellent knowledge of Python and its data ecosystem (e.g. pandas, Airflow)
  • Excellent SQL and database skills
  • Solid understanding of modern software development practices (testing, version control, documentation, etc.)
  • A product- and user-centric mindset
  • Excellent problem-solving, ownership and organizational skills, with high attention to detail and quality

Preferred Qualifications

Do you have something additional to bring to the table, such as experience with NoSQL and big-data technologies (e.g. Spark, Hadoop), full-text search databases (e.g. Elasticsearch), knowledge graphs and graph databases (e.g. Neo4j), MLOps, DataOps, or machine learning in production? Knowledge of Terraform, Kubernetes and/or Docker containers, cloud providers (especially GCP or AWS), or UNIX scripting? Any of these will be considered a plus.

Benefits

  • Competitive salary (see below)
  • Hybrid working (home + office)
  • Apple or Dell equipment (based on your OS preference)
  • Annual training budget for professional development (e.g. books, video tutorials)
  • Extensive sick-leave package
  • Plenty of opportunity to take on more responsibility as we grow
  • Be part of a multinational, diverse and exceptional team building a transformative knowledge product with real impact
  • Annual team retreat to a secret destination

The salary we offer is based on skills, professional experience and team fit. The applicant will be required to prove their qualifications via an interview and a written assignment.

Causaly welcomes applications from all backgrounds. We are committed to diversity regardless of gender, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation or gender identity. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.