About Dentsu
Led by Dentsu Group Inc. (Tokyo: 4324; ISIN: JP3551520004), a pure holding company established on January 1, 2020, the Dentsu Group encompasses two operational networks: dentsu japan network, which oversees Dentsu’s agency operations in Japan, and dentsu international, its international business headquarters in London, which oversees Dentsu’s agency operations outside of Japan.
With a strong presence in approximately 145 countries and regions across five continents and 65,000 dedicated professionals, the Dentsu Group provides a comprehensive range of client-centric integrated communications, media, and digital services through its five leadership brands (Carat, dentsu X, iProspect, Dentsu Creative, and Merkle), as well as through dentsu japan network companies, including Dentsu Inc., the world’s largest single-brand agency with a history of innovation.
About the Role
We are looking for a Data Engineer to design, build, and optimize scalable data pipelines and infrastructure that power media and marketing analytics. This role is primarily engineering-focused but requires collaboration with analytics teams to enable insights through well-structured data models and visualization-ready datasets. Experience with Python, SQL, and cloud platforms is essential, while familiarity with Power BI/Tableau and basic data analysis is a plus.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using orchestration tools (Apache Airflow, Cloud Composer, or similar); a minimal DAG sketch follows this list.
- Build scalable, cloud-native data architectures on GCP and/or Azure, including compute, storage, networking, and IAM.
- Develop and manage API connectors for ingesting data from marketing platforms (Google Ads, Meta Ads, DV360, CM360, LinkedIn Ads) and internal systems; see the connector sketch after this list.
- Implement CI/CD pipelines for automated deployment, testing, and monitoring of data workflows.
- Build and optimize data models for analytics and reporting, ensuring performance and reliability.
- Collaborate with data analysts and business teams to translate requirements into engineering solutions and prepare datasets for BI tools (Power BI, Tableau).
- Perform basic data validation and analysis to ensure data accuracy and usability for dashboards; a validation sketch follows this list.
- Implement data quality checks, monitoring, and alerting for pipeline health and integrity.
- Ensure data governance, security, and compliance across all engineering outputs.
- Maintain code quality through GitHub workflows, version control, and documentation.
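To make the orchestration work concrete, here is a minimal sketch of a daily two-task pipeline, assuming Airflow 2.4+. The DAG id, callables, and task bodies (extract_ads_data, load_to_warehouse) are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal daily ETL DAG sketch, assuming Airflow 2.4+ (for the `schedule` arg).
# All names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_ads_data(**context):
    # Placeholder: pull yesterday's spend from an ads API and stage it.
    ...


def load_to_warehouse(**context):
    # Placeholder: load the staged files into the analytics warehouse.
    ...


with DAG(
    dag_id="daily_ads_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_ads_data)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load  # load runs only after extract succeeds
```

In practice the extract step would stage raw API responses to cloud storage and the load step would push them into the warehouse, with retries and alerting configured on the DAG.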
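Connector development usually comes down to handling authentication and pagination. The sketch below shows the general shape using the requests library, assuming a bearer-token API that returns a next_page_token cursor; real platforms each have their own SDKs and pagination schemes, so the endpoint and field names here are hypothetical.

```python
# Generic paginated REST connector sketch; the JSON field names
# (rows, next_page_token, page_token) are hypothetical assumptions.
import requests


def fetch_all_pages(base_url: str, token: str, page_size: int = 500):
    """Yield rows from a paginated JSON API using a bearer token."""
    session = requests.Session()
    session.headers.update({"Authorization": f"Bearer {token}"})
    params = {"page_size": page_size}
    while True:
        resp = session.get(base_url, params=params, timeout=30)
        resp.raise_for_status()  # surface 4xx/5xx so the orchestrator can retry
        payload = resp.json()
        yield from payload.get("rows", [])
        next_token = payload.get("next_page_token")
        if not next_token:
            break
        params["page_token"] = next_token
```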
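Basic validation can be a handful of assertions on a staged DataFrame before load; richer setups use a dedicated framework such as Great Expectations (listed under skills below). A minimal pandas sketch with hypothetical column names:

```python
# Pre-load validation sketch; column names are hypothetical.
import pandas as pd


def validate_spend(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if df["campaign_id"].isna().any():
        failures.append("campaign_id contains nulls")
    if (df["spend"] < 0).any():
        failures.append("spend contains negative values")
    if df.duplicated(subset=["campaign_id", "date"]).any():
        failures.append("duplicate campaign_id/date rows")
    return failures


df = pd.DataFrame({"campaign_id": [1, 2], "date": ["2024-01-01", "2024-01-01"], "spend": [10.0, 5.5]})
failures = validate_spend(df)
if failures:
    raise ValueError("; ".join(failures))  # fail the task so alerting fires
```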
Required Skills & Experience
- 3–6 years of experience as a Data Engineer or in a similar role.
- Strong expertise in cloud platforms (GCP and/or Azure) including compute, storage, networking, IAM, and serverless services.
- Proficiency in Python for ETL, API integrations, and automation.
- Strong SQL skills and experience with analytical warehouses (BigQuery, Snowflake, Redshift, Synapse); a warehouse-load sketch follows this section.
- Familiarity with data visualization tools (Power BI, Tableau) and ability to prepare datasets for reporting.
- Hands-on experience with orchestration tools (Airflow, Cloud Composer, Prefect, ADF).
- Knowledge of ETL/ELT best practices, data modelling, and pipeline optimization.
- Experience with GitHub, version control, and CI/CD pipelines.
- Good understanding of networking fundamentals (VPCs, subnets, firewalls, private endpoints).
- Basic to intermediate cloud security knowledge (IAM policies, encryption, secrets management).
- Experience building and deploying API connectors (REST, GraphQL, OAuth).
Nice to Have
- Familiarity with dbt for transformation and modelling.
- Exposure to real-time streaming (Kafka, Pub/Sub, Event Hub).
- Familiarity with containerization (Docker, Kubernetes) and IaC (Terraform).
- Experience with data quality frameworks (Great Expectations, Monte Carlo, Soda).
- Additional cloud security exposure (network security groups, logging/monitoring, vulnerability scanning).
- Exposure to ML engineering workflows (feature engineering, model deployment) and tools like Vertex AI, Azure ML, or SageMaker.
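As a taste of the warehouse work referenced above, loading a staged CSV into BigQuery with the official google-cloud-bigquery client looks roughly like this; the project, dataset, table, and file names are placeholders.

```python
# BigQuery load sketch using the google-cloud-bigquery client library;
# project/dataset/table and file names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # uses ambient GCP credentials
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row
    autodetect=True,      # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
with open("ad_spend.csv", "rb") as f:
    job = client.load_table_from_file(
        f, "my-project.marketing.ad_spend", job_config=job_config
    )
job.result()  # block until the load job finishes; raises on failure
print(f"Loaded {job.output_rows} rows")
```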
#LI-DNI
…