
iLink Digital
About
The Company:
iLink is a global software solutions provider and systems integrator that delivers next-generation technology solutions to help clients solve complex business challenges, improve organizational effectiveness, increase business productivity, realize sustainable enterprise value, and transform their business inside-out. iLink integrates software systems and develops custom applications, components, and frameworks on the latest platforms for IT departments, commercial accounts, application service providers (ASPs), and independent software vendors (ISVs). iLink solutions are used in a broad range of industries and functions, including healthcare, telecom, government, oil and gas, education, and life sciences. iLink's expertise includes Cloud Computing & Application Modernization, Data Management & Analytics, Enterprise Mobility, Portal, Collaboration & Social Employee Engagement, Embedded Systems, and User Experience Design.
What makes iLink Systems' offerings unique is our use of pre-built frameworks designed to accelerate software development and the implementation of business processes for our clients. iLink has over 60 frameworks (solution accelerators), both industry-specific and horizontal, that can be easily customized and enhanced to meet your current business challenges.
Requirements
We are looking for a skilled and passionate Data Engineer to join our team, with strong expertise in Azure Databricks, AI/ML pipelines, and modern data engineering practices. The ideal candidate will be responsible for designing and building scalable data solutions that power analytics, machine learning, and AI initiatives across the organization.
Key Responsibilities:
- Design, develop, and maintain data pipelines using Azure Databricks, PySpark, and Python.
- Build and optimize data workflows that support AI/ML model training and inference.
- Integrate structured and unstructured data from various sources into a centralized data platform.
- Work closely with Data Scientists and ML Engineers to productionize ML models.
- Ensure data quality, governance, and security across all stages of data processing.
- Participate in the development and implementation of MLOps practices.
- Collaborate with cross-functional teams using Agile methodologies.
- Monitor, debug, and optimize performance of data pipelines in cloud environments.
Required Skills:
- 5+ years of experience in data engineering or a similar role.
- Strong hands-on experience with Azure Databricks and PySpark.
- Proficiency in SQL and Python for data processing.
- Good understanding of the AI/ML model lifecycle and its integration into pipelines.
- Experience working with Delta Lake, Lakehouse architecture, or equivalent.
- Familiarity with MLOps tools such as MLflow, DVC, or similar.
- Knowledge of data orchestration tools like Airflow, Data Factory, or similar.
- Hands-on experience with cloud platforms (preferably Azure).
- Exposure to CI/CD for data pipelines and ML models.
Preferred Qualifications:
- Experience in real-time/streaming data processing (e.g., using Kafka, Spark Streaming).
- Exposure to DevOps tools and automation for data and ML.
- Knowledge of data security, compliance, and governance best practices.
- Azure certifications (e.g., DP-203, AI-102) are a plus.
Benefits
- Competitive salaries
- Medical, Dental, Vision Insurance
- Disability, Life & AD&D Insurance
- 401K with Generous Company Match
- Paid Vacation and Personal Leave
- Pre-Paid Commute Options
- Employee Referral Bonuses
- Performance-Based Bonuses
- Flexible Work Options & Fun Culture
- Continuing Education Reimbursements
- In-House Technology Training
Apply now
To help us track our recruitment effort, please indicate in your cover/motivation letter where (jobsintech.ca) you saw this job posting.