At Accellor, we are a trusted digital transformation partner that uses best-of-breed Cloud technology to deliver superior customer engagement and business effectiveness for clients. We’ve created an atmosphere that encourages curiosity, constant learning, and persistence. We encourage our employees to grow and explore their interests. We cultivate an environment of collaboration, autonomy, and delegation – we know our people have a strong work ethic and a sense of pride and ownership over their work. They are passionate, eager, and motivated – focused on building the perfect solution but never losing sight of the bigger picture.

Role: Senior Data Engineer - Microsoft Fabric

Position Overview

We are seeking an experienced Senior Data Engineer for a contract position to design, develop, and optimize data solutions using Microsoft Fabric. This role requires deep technical expertise in modern data engineering practices and Microsoft's unified analytics platform. Comparable experience with Databricks will also be considered.
Key Responsibilities

- Design and implement scalable data pipelines and ETL/ELT processes within Microsoft Fabric using a code-first approach (a brief sketch follows this list)
- Develop and maintain notebooks, data pipelines, workspaces, and other Fabric item configurations
- Build and optimize data architectures using delta tables, lakehouse, and data warehouse patterns
- Implement data modeling solutions including star schema, snowflake schema, and slowly changing dimensions (SCDs)
- Performance-tune Delta, Spark, and SQL workloads through partitioning, optimization, liquid clustering, and other advanced techniques
- Develop and deploy Fabric solutions using CI/CD practices via Azure DevOps
- Integrate and orchestrate data workflows using Fabric Data Agents and REST APIs
- Collaborate with the development team and stakeholders to translate business requirements into technical solutions
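As a rough illustration of the code-first pipeline work described above, here is a minimal sketch of a Fabric notebook step that lands raw files into a partitioned Delta table. The path, table name, and columns are hypothetical; Fabric notebooks predefine the `spark` session, which is recreated here so the snippet is self-contained.

```python
# Minimal sketch of a code-first ingestion step, as it might appear in a
# Fabric notebook. The path, table name, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

# Fabric notebooks predefine `spark`; getOrCreate() keeps this self-contained
spark = SparkSession.builder.getOrCreate()

# Read raw files from a hypothetical lakehouse landing zone
raw = spark.read.parquet("Files/landing/orders")

# Type the event timestamp and derive a date column to partition on
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write a managed Delta table, partitioned by date so downstream reads can prune
(orders.write
       .format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .saveAsTable("bronze_orders"))
```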
Required Skills & Experience

Microsoft Fabric:
- Hands-on experience with Fabric notebooks, pipelines, and workspace configuration
- Fabric Data Agent implementation and orchestration
- Fabric CLI and CI/CD deployment practices
Programming:
- Python (advanced proficiency)
- PySpark for distributed data processing
- Pandas and Polars for data manipulation (a brief comparison sketch follows this list)
- Experience with Python libraries for data engineering workflows
- REST API development and integration
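To illustrate the Pandas/Polars bullet above, here is a small, hypothetical example of the same aggregation in both libraries; the data and column names are invented for the example.

```python
# Hypothetical example: the same group-by aggregation in Pandas and Polars.
import pandas as pd
import polars as pl

data = {"region": ["east", "west", "east"], "sales": [100.0, 80.0, 120.0]}

# Pandas: eager, index-oriented DataFrame API
pandas_totals = (
    pd.DataFrame(data)
      .groupby("region", as_index=False)["sales"]
      .sum()
)

# Polars: expression-based API; .lazy() would defer and optimize the whole plan
polars_totals = (
    pl.DataFrame(data)
      .group_by("region")
      .agg(pl.col("sales").sum())
)

print(pandas_totals)
print(polars_totals)
```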
Table Formats & Performance:
- Delta Lake and Iceberg table formats
- Delta table optimization techniques (partitioning, Z-ordering, liquid clustering); see the maintenance sketch after this list
- Spark performance tuning and optimization
- SQL query optimization and performance tuning
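For the Delta optimization bullet, here is a sketch of routine table maintenance. The table and column names are hypothetical, and the liquid-clustering note applies only on Delta runtimes that support it.

```python
# Sketch of routine Delta table maintenance (table/column names hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a common filter column,
# so file skipping prunes more data at query time
spark.sql("OPTIMIZE bronze_orders ZORDER BY (customer_id)")

# On runtimes with liquid clustering, clustering is instead declared at
# table creation and maintained by OPTIMIZE, e.g.:
#   CREATE TABLE bronze_orders (...) USING delta CLUSTER BY (customer_id)

# Remove files no longer referenced by the table (default 7-day retention)
spark.sql("VACUUM bronze_orders")
```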
Development Tools:
- Visual Studio Code
- Azure DevOps for CI/CD and deployment pipelines
- Experience with both code-first and low-code development approaches
Data Modeling:
- Data warehouse dimensional modeling (star schema, snowflake schema)
- Slowly Changing Dimensions (SCD Types 1, 2, and 3); a merge-based Type 2 sketch follows this list
- Modern lakehouse architecture patterns
- Metadata-driven approaches
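As a concrete reference for the SCD bullet, here is a hedged sketch of an SCD Type 2 upsert using a Delta MERGE. The dimension table, staging table, and columns (customer_id, address, is_current, start_date, end_date) are all hypothetical, and the staging table is assumed to carry the dimension's business columns.

```python
# Hedged sketch of an SCD Type 2 upsert with Delta Lake. All table and
# column names are hypothetical; assumes dim_customer carries is_current,
# start_date, and end_date tracking columns.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
dim = DeltaTable.forName(spark, "dim_customer")
updates = spark.table("stg_customer_changes")

# Step 1: close out current rows whose tracked attribute changed
(dim.alias("d")
    .merge(updates.alias("u"),
           "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.address <> u.address",
        set={"is_current": "false", "end_date": "current_date()"})
    .execute())

# Step 2: after the close-out, changed keys no longer have a current row,
# so new keys and changed keys both fail the left join and get inserted
# as the new current version
current = spark.table("dim_customer").where("is_current = true")
changed_or_new = (
    updates.alias("u")
           .join(current.alias("d"),
                 F.expr("u.customer_id = d.customer_id"), "left")
           .where("d.customer_id IS NULL")
           .select("u.*")
)

(changed_or_new
    .withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
    .write.format("delta").mode("append").saveAsTable("dim_customer"))
```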
General:
- 5+ years of data engineering experience
- Previous experience with large-scale data platforms and enterprise analytics solutions
- Strong understanding of data governance and security best practices
- Experience with Agile/Scrum methodologies
- Excellent problem-solving and communication skills