
Location
Dehradun / Remote (India)
Experience
5–8 Years
About Rialtes
At Rialtes, we are driven by a mission to help Fortune 500 companies simplify, enable, and empower their digital journeys across the globe. Our focus is on building lasting partnerships that spark growth, foster innovation, and ensure long-term success. As trusted SAP and Salesforce consulting experts, we offer bespoke, forward-thinking solutions leveraging cutting-edge platforms like Agentforce, Data Cloud, S/4HANA, Joule, and SuccessFactors. By partnering with technology leaders such as YARDI, Dell, Workato, and more, we ensure our solutions are integrated and scalable to meet the ever-evolving needs of our clients. At Rialtes, we don't just provide services: we ignite digital transformation that drives meaningful success.
Role Overview
We are seeking a Databricks Analyst with strong experience in SAP data integration, Databricks Lakehouse architecture, and Unity Catalog governance. The ideal candidate will be responsible for ingesting, transforming, and analyzing SAP data within the Databricks platform, enabling enterprise analytics, AI/ML workloads, and modern data engineering pipelines. This role requires expertise in SAP data structures, Delta Lake architecture, data modeling, and secure data governance using Unity Catalog.
Key Responsibilities
Extract and ingest data from SAP systems (ECC, S/4HANA, BW, or SAP Datasphere) into the Databricks Lakehouse.
Build scalable ETL/ELT pipelines using Databricks, Spark, and Delta Lake.
Integrate SAP data via ODP, SLT, SAP Extractors, APIs, or CDC pipelines.
Design and optimize bronze, silver, and gold data layers within the Lakehouse architecture.
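As a minimal sketch, the bronze/silver/gold layering above might look like the following in Databricks SQL; the table names and SAP fields (drawn from the VBAK sales-order header table) are illustrative assumptions, not a prescribed design:

```sql
-- Silver: cleanse and conform a raw SAP sales-order extract (bronze layer)
CREATE OR REPLACE TABLE silver.sales_orders AS
SELECT
  VBELN                        AS order_id,      -- SAP sales document number
  ERDAT                        AS created_date,  -- creation date
  CAST(NETWR AS DECIMAL(15,2)) AS net_value,     -- net order value
  WAERK                        AS currency       -- document currency
FROM bronze.sap_vbak_raw
WHERE VBELN IS NOT NULL;

-- Gold: aggregate into an analytics-ready, business-facing dataset
CREATE OR REPLACE TABLE gold.daily_order_value AS
SELECT created_date, currency, SUM(net_value) AS total_net_value
FROM silver.sales_orders
GROUP BY created_date, currency;
```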
Work with SAP functional teams to understand data models across modules such as:
Finance (FI/CO)
Supply Chain (MM, SD, PP)
Manufacturing and Logistics
Translate SAP business data into analytics-ready datasets for reporting and ML use cases.
Implement and manage Unity Catalog for centralized data governance.
Define data access policies, RBAC permissions, lineage tracking, and data security controls.
Ensure compliance with enterprise data governance and security frameworks.
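Access policies of this kind are typically expressed in Unity Catalog as SQL grants; a minimal sketch, where the catalog, schema, and group names are hypothetical:

```sql
-- Allow an analyst group to read only the curated (gold) schema
GRANT USE CATALOG ON CATALOG sap_lakehouse TO `data_analysts`;
GRANT USE SCHEMA ON SCHEMA sap_lakehouse.gold TO `data_analysts`;
GRANT SELECT ON SCHEMA sap_lakehouse.gold TO `data_analysts`;
```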
Design star schemas, dimensional models, and curated datasets for BI and analytics.
Optimize Spark jobs, partitioning strategies, and Delta Lake performance.
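Delta Lake ships built-in commands for this kind of tuning; a brief sketch, with the table and column names assumed rather than prescribed:

```sql
-- Compact small files and co-locate rows frequently filtered by order_id
OPTIMIZE silver.sales_orders ZORDER BY (order_id);

-- Remove stale data files no longer referenced by the table
VACUUM silver.sales_orders;
```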
Manage data quality, data validation, and reconciliation with SAP source systems.
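A simple reconciliation check might compare row counts and key measures between the raw SAP extract and the cleansed layer; the table names below are illustrative:

```sql
-- Mismatches between source and target totals flag ingestion issues
SELECT
  (SELECT COUNT(*)       FROM bronze.sap_vbak_raw) AS source_rows,
  (SELECT COUNT(*)       FROM silver.sales_orders) AS target_rows,
  (SELECT SUM(NETWR)     FROM bronze.sap_vbak_raw) AS source_net_value,
  (SELECT SUM(net_value) FROM silver.sales_orders) AS target_net_value;
```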
Enable data consumption via tools such as:
Power BI
Tableau
Databricks SQL
AI/ML pipelines
Support business users in building enterprise dashboards and analytics use cases.
Work closely with SAP architects, data engineers, and analytics teams.
Participate in data platform architecture discussions and roadmap planning.
Support enterprise initiatives related to AI, advanced analytics, and data monetization.
Required Skills
Strong experience with the Databricks Lakehouse Platform
Expertise in Apache Spark (PySpark / Scala / SQL)
Hands-on experience with Delta Lake
Experience implementing Medallion Architecture (Bronze/Silver/Gold)
Experience extracting data from SAP ECC / S/4HANA / SAP BW
Familiarity with:
SAP tables and structures
SAP Extractors
SAP SLT replication
ODP / OData APIs
Understanding of SAP business processes and data models
Hands-on experience with Databricks Unity Catalog
Knowledge of:
Data lineage
Role-based access control
Data governance frameworks
Experience working with cloud data platforms such as AWS, Azure, or GCP
Knowledge of data pipelines, orchestration, and workflow automation
Preferred Skills
Experience with Databricks ML / AI capabilities
Knowledge of Databricks Delta Live Tables
Familiarity with SAP Datasphere or SAP BTP data services
Experience with streaming data pipelines
Understanding of data security and compliance standards
Tech Stack
Databricks Lakehouse
Apache Spark / PySpark
Delta Lake
Unity Catalog
SAP ECC / S/4HANA / BW
SQL / Python
Power BI / Tableau
AWS / Azure / GCP
Education
Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.