Senior Data Engineer

Herzliya, IL

About the role:

Finonex is a well-established and stable trading-platform company, powering leading financial institutions and brokerages worldwide.

Founded in 2010, with offices in Herzliya and Sofia, we deliver secure, high-performance trading solutions in regulated markets.

As we build a brand-new Data Department, we’re looking for skilled professionals to help shape our data platform and drive the next generation of Finonex’s trading technology.

In this role you will:

·      Design, implement, and maintain scalable data pipelines using Python and orchestration tools like Apache Airflow.

·      Take a leading role in evaluating and selecting foundational components of our data stack (e.g., Databricks vs. Snowflake).

·      Architect and optimize data workflows across AWS and on-premise environments.

·      Work with modern file formats (Iceberg, Parquet, ORC) and understand their trade-offs for performance, cost, and scalability.

·      Collaborate with analysts, engineers, and stakeholders to provide accessible and reliable datasets.

·      Help assess and implement frameworks such as DBT for data transformation and modeling.

·      Develop best practices and internal standards for data engineering, monitoring, and testing.

·      Contribute to the long-term vision and roadmap of Finonex’s data infrastructure.

Why Join Finonex?

·      Be a founding member of a high-impact Data team.

·      Help define the architecture and strategy of a modern data platform.

·      Work in an autonomous, fast-moving environment where your opinion matters.

·      Enjoy a flexible hybrid work model, strong support for personal growth, and a culture of innovation.

You have:

·      5+ years of hands-on experience as a Data Engineer or in a similar role.

·      Strong proficiency in Python for data processing and workflow automation.

·      Experience with Apache Airflow (managed or self-hosted).

·      Experience with data platforms (e.g., Snowflake, Databricks).

·      Solid understanding of data lake and table formats like Iceberg, Parquet, Delta Lake, and their use cases.

·      Familiarity with big data processing (e.g., Apache Spark) and query engines (e.g., Trino).

·      Experience working in AWS environments, and comfort with hybrid cloud/on-premise infrastructure.

·      Proven ability to work independently, drive decisions, and evaluate technologies with minimal oversight.

Nice to have:

·      Experience with DBT or other data modeling and transformation tools.

·      Prior involvement in building or migrating data platforms from scratch.

·      Exposure to Superset or other modern BI tools.

·      Knowledge of data governance, cataloging, or lineage tracking solutions.

·      Interest in shaping team culture, mentoring, and knowledge sharing.
