
Data Engineer

Octus

Software Engineering, Data Science
Bogotá, Colombia
Posted on Nov 14, 2025

Octus

Octus is a leading global provider of credit intelligence, data, and analytics. Since 2013, tens of thousands of professionals across hedge fund, investment banking, management consulting, and law firm verticals have come to rely on Octus to make better, faster, and more confident decisions while keeping pace with fast-moving credit markets.
For more information, visit: https://octus.com/

Working at Octus

Octus hires growth-minded innovators and trailblazers across the globe to drive our business and culture. Our core values – Action Oriented, Customer First Mindset, Effective Team Players, and Driven to Excel – define an organizational ethos that’s as high-performing as it is human. Among other perks, Octus employees enjoy competitive health benefits, matched 401(k) and pension plans, PTO, generous parental leave, gym subsidies, educational reimbursements for career development, recognition programs, pet-friendly offices (US only), and much more.

Role

Octus is seeking a Data Engineer to lead the design and development of scalable data ingestion and transformation pipelines. You’ll play a key role in building and maintaining the robust data infrastructure that powers data platforms, products, and automation initiatives across the firm. The ideal candidate is an expert Python and SQL developer with deep experience building modern data workflows using AWS services and infrastructure as code.

Responsibilities

  • Lead the design and development of data ingestion and transformation pipelines, ensuring scalability, efficiency, and reliability across diverse data sources (APIs, web data, internal feeds, etc.).

  • Serve as a key contributor and mentor within the data engineering team, guiding architecture, design, and implementation decisions.

  • Architect and manage data pipelines and orchestration workflows using AWS services such as MWAA (Airflow), Lambda, ECS, and SQS.

  • Implement and maintain infrastructure as code (IaC) using Terraform, ensuring reproducibility and compliance with cloud standards.

  • Partner with data analysts, scientists, and backend engineers to ensure data consistency, discoverability, and reliability.

  • Apply best practices in data modeling, schema design, and ETL/ELT processes for high-volume structured and semi-structured data.

  • Ensure data quality and lineage through automated testing, monitoring, and alerting.

  • Promote continuous improvement through code reviews, observability practices, and team-wide knowledge sharing.

  • Collaborate closely with technology leadership to align data platform development with business strategy and product goals.

  • Stay up to date with industry trends in data engineering, cloud architecture, AI/ML integration, and automation.
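To make the pipeline responsibilities above concrete, here is a minimal, hypothetical sketch in plain Python (no Airflow or AWS dependency) of a single ingest step with the kind of automated data-quality gate the role describes. All field names and records are illustrative, not part of any Octus system.

```python
from datetime import datetime, timezone

# Hypothetical schema for a record arriving from an internal feed.
REQUIRED_FIELDS = {"id": int, "issuer": str, "amount": float}


def validate(record: dict) -> list[str]:
    """Return a list of data-quality violations for one record."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    return errors


def transform(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into clean rows (stamped with an audit timestamp) and rejects."""
    clean, rejects = [], []
    for rec in records:
        errors = validate(rec)
        if errors:
            # Quarantined rows would feed monitoring and alerting downstream.
            rejects.append({"record": rec, "errors": errors})
        else:
            clean.append({**rec, "ingested_at": datetime.now(timezone.utc).isoformat()})
    return clean, rejects


raw = [
    {"id": 1, "issuer": "ACME Corp", "amount": 250.0},
    {"id": "2", "issuer": "Foo Ltd", "amount": 100.0},  # wrong type for id
]
clean, rejects = transform(raw)
```

In a production setting the same validate/transform split would typically run as an Airflow task, with rejects routed to a quarantine table that drives alerting.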

Requirements

  • Strong foundation in software engineering principles, including SOLID design, modularity, and scalability.

  • Expert proficiency in Python for data pipeline and automation development.

  • Advanced SQL skills and experience optimizing complex queries and data models.

  • Proven experience designing and maintaining cloud-native data pipelines on AWS (e.g., MWAA/Airflow, Lambda, ECS, SQS, Glue, S3, Redshift).

  • Experience implementing and managing Terraform or similar IaC frameworks.

  • Strong understanding of data ingestion, transformation, and orchestration tools and patterns, including those used in AI/ML pipelines.

  • Familiarity with CI/CD pipelines, automated testing, and modern DevOps practices.

  • 6+ years of experience in data engineering or backend development, with a focus on scalable data solutions.

  • Experience mentoring teams and leading data infrastructure projects end-to-end.

  • Familiarity with containerization (Docker) and workflow orchestration best practices.

  • Excellent communication, collaboration, and problem-solving skills.
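As an illustration of the "advanced SQL" bar above, here is a hedged sketch of a window-function query — a running total per issuer — run against an in-memory SQLite table. The table and data are invented for the example; in practice the same SQL pattern would run on a warehouse such as Redshift.

```python
import sqlite3

# In-memory SQLite table of hypothetical trades (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (issuer TEXT, trade_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [
        ("ACME", "2025-01-01", 100.0),
        ("ACME", "2025-01-02", 150.0),
        ("BETA", "2025-01-01", 200.0),
    ],
)

# Window function: running total of amount per issuer, ordered by trade date.
rows = conn.execute(
    """
    SELECT issuer, trade_date, amount,
           SUM(amount) OVER (
               PARTITION BY issuer ORDER BY trade_date
           ) AS running_total
    FROM trades
    ORDER BY issuer, trade_date
    """
).fetchall()
```

The `PARTITION BY ... ORDER BY` clause restarts the cumulative sum for each issuer, which is the kind of query-optimization and data-modeling fluency the requirement points at.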

Nice to Have

  • Experience with data warehousing or lakehouse technologies (Redshift, Snowflake, Databricks, etc.).

  • Exposure to streaming data technologies (Kafka, Kinesis, Flink).

  • Experience integrating data quality and observability tools (Great Expectations, Monte Carlo, etc.).

  • Familiarity with Scrapy, BeautifulSoup, or other data extraction frameworks for ingestion pipelines.

Equal Employment Opportunity

Octus is committed to providing equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, genetic information, marital status, pregnancy, veteran status, or any other legally protected status. We strive to create an inclusive and diverse work environment where all individuals are valued, respected, and treated fairly. We believe that diversity enriches our workplace and enhances our ability to innovate and succeed.