At EzData, we hire engineers with experience in Data Engineering and AI/ML for the job roles listed below. Ideal candidates will have at least five years of industry experience in the relevant areas and at least a bachelor's degree in Computer Science, Engineering, or a related field.
Skills
Description
- 5+ years of industry experience in ML/deep learning research preferred
- Develop ML systems that help train and/or deploy models
- Prototype new ideas and technologies to create proofs of concept and demos
- Experience with ML frameworks: TensorFlow, Keras, PyTorch, Caffe
- Experience in Python and/or C++
- Expertise in ML algorithms for regression, classification, clustering, dimensionality reduction, structured output, anomaly detection, etc.
- Familiarity with contextual bandit algorithms and/or reinforcement learning
- Understand recent ML research papers and adapt those models to solve real-world problems
Skills
Description
- Design and deploy AI agents using LangGraph, CrewAI, etc.
- Experience in multi-step reasoning, AI workflows, and deploying AI models in production environments, with proficiency in Python
- Expertise in FastAPI, LangChain, and LangGraph; experience with Retrieval-Augmented Generation (RAG), scikit-learn, and TensorFlow
- Strong in vector databases (Milvus, FAISS, or similar) and in optimizing LLM interactions
- Expertise in AWS, Azure, or HANA Cloud for scalable generative AI applications
- Database integration (MongoDB/PostgreSQL, graph databases) and integration of AWS Bedrock models
Skills
Description
- Experience in ML: decision trees, random forests, rule mining, clustering, PCA, support vector machines, ensemble techniques
- Expertise in DL: neural networks, word embeddings, categorical embeddings, RNNs and LSTMs, word2vec, encoder/decoder models, attention and transformer models, transfer learning (ULMFiT), foundation models from Azure OpenAI
- Knowledge of RAN/O-RAN and research in generative AI
- Experience in Snowflake, Oracle, and graph databases
- Programming & scripting: Python, R, Unix shell scripting, PySpark
- MS or PhD in EE or CS preferred
Skills
Description
- Define architecture for hybrid-cloud applications: design, integration design, and architecture reviews
- 5+ years of experience architecting large-scale enterprise systems and hybrid-cloud architectures
- 5+ years of experience in Microsoft Azure (AKS, Azure Service Bus, App Services, Azure Functions, Azure AD)
- Design services and solutions for distributed systems, virtualization, and/or cloud
- Experience in CI/CD, DevSecOps practices, and OWASP
- Design solutions based on microservices and event-driven architecture
- Tune and manage Azure services and configurations
- Experience with application frameworks such as MVC, WCF, and pub/sub
- Experience with web services (SOAP/REST) architecture, including API development and deployment
Skills
Description
- 5+ years of experience in Azure development: App Services, Azure Storage, Azure SQL Database, virtual machines, Azure AD, and Notification Hubs
- Develop applications using Azure Analysis Services (AAS), Azure IaaS, and PaaS; deploy web apps for a multitude of applications using the Azure stack (including Compute, Web & Mobile, Blobs, Resource Groups, Azure SQL, Cloud Services, and ARM)
- Develop Azure PaaS services including WebJobs, Azure SQL, Azure Blob Storage, Azure Application Insights, Azure Data Factory, Azure Functions, Azure Bot Service, Azure Active Directory, Azure Stream Analytics, and Azure CDN
- Develop data migration from RDBMS to HDFS, HBase, and Hive using Apache NiFi and Sqoop
- Automate HQL generation, Hive table creation, and loading of data into Hive tables using Apache NiFi
Skills
Description
- Design and develop cognitive/predictive analytics and UIs using MicroStrategy, Tableau, Power BI, Azure Cognitive Services, and Azure Databricks
- Design and develop actionable analytics using PySpark, Python, and AI/ML; use ML to build and validate predictive models and deploy completed models
- Work on AWS data pipelines, Informatica ETL, HDFS, MapReduce, HBase, Scala, and Apache Spark
- Implement Azure data solution services using Azure Cosmos DB, Azure Synapse Analytics, Azure Data Factory, and Azure Databricks
Skills
Description
- Design, develop, and implement data integration solutions to move data between disparate systems (the database is Snowflake)
- Analyze data sources and target systems to understand data structures and formats; build and maintain data pipelines for seamless data flow
- Cleanse, transform, and map data to ensure accuracy and consistency
- Monitor data pipelines and identify performance bottlenecks for optimization
- Automate data integration processes to improve efficiency and reduce errors
- Implement data security measures to safeguard sensitive information during transfers
Skills
Description
- Develop end-to-end data solutions (storage, integration, processing, visualization) in Azure, building Azure/AWS data pipelines
- Build data pipelines using Databricks and Snowflake; schedule Databricks jobs; stream data in real time using Kafka; integrate APIs with external sources
- Develop Spark applications using Spark SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns
- Estimate cluster size, and monitor and troubleshoot the Spark Databricks cluster
- Set up Azure Data Lake Storage (ADLS Gen2) using a role-based access mechanism
- Develop solutions on Azure using Azure Data Platform services
Skills
Description
- Develop and maintain data models (conceptual, logical, and physical) to support business intelligence and analytical needs
- Conduct data profiling, data quality analysis, and anomaly detection to ensure clean, reliable, and well-structured data
- Design, implement, and optimize ETL/ELT pipelines within Databricks, using Apache Spark, Delta Lake, and cloud storage
- Experience in data governance and data quality practices
- Proficiency in Python, Scala, or other languages commonly used in Databricks
- Interact with data architects (Azure Databricks, AWS Databricks, Google Cloud) and integrate with data lakes and data warehouses
Skills
Description
- Configure and optimize the Databricks platform to enable data analytics, ML, and data engineering activities within the organization
- Create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions
- Install, configure, and maintain Databricks clusters and workspaces
- Monitor and manage cluster performance, resource utilization, and platform costs, and troubleshoot issues to ensure optimal performance
- Manage schema data with Unity Catalog: create and configure catalogs, external storage, and access permissions
- Administer interfaces with Azure AD and Amazon AWS
Skills
Description
- 5+ years of experience setting up enterprise infrastructure on Amazon Web Services (AWS): EC2 instances, ELB, EBS, S3 buckets, security groups, Auto Scaling, AMIs, RDS, IAM, CloudFormation, AD Connector, WorkSpaces, CloudFront, and VPC services, using Terraform, PowerShell scripts, and JSON scripts
- Expertise in DevOps tools such as Git, SVN, Ant, Maven, Chef, Puppet, Ansible, Vagrant, VirtualBox, Jenkins, and Docker
- Configure and maintain all networks, firewalls, storage, load balancers, operating systems, and software in AWS EC2
- Manage custom AMIs, create AMI snapshots, and modify AMI permissions
- Configure AWS Elastic Load Balancing with backend applications
- Configure ELB or ALB with EC2 Auto Scaling groups; install EC2 instances for production, testing, and development environments
- Configure AWS Identity and Access Management (IAM) groups and users for improved login authentication
- Build S3 buckets and manage S3 bucket policies; use S3 and Glacier for storage and backup on AWS
- Create an Amazon Virtual Private Cloud with a public-facing subnet for web servers (with an internet gateway) and with backend databases and application servers in a private subnet