Our partner is a world leader in transportation equipment leasing and asset management. They are in need of a team-oriented, technical Data Engineer experienced in Python, PySpark, and Databricks. To get on board with this great company and work for a truly wonderful human, please apply today. We value diversity in the workplace and encourage women, minorities, and veterans to apply. Thank you!
Location: US Remote (work in Pacific or Mountain time zone)
Type: Permanent, Full-Time
The Data Engineer is responsible for expanding and optimizing data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer develops and supports a broad range of software capabilities, including building data pipelines, managing ETL/ELT processes, receiving and delivering data through various interfaces, and processing significant amounts of data related to transportation, liability, and financial calculations.
Duties And Responsibilities
To perform this job successfully, an individual must be able to perform the following essential duties satisfactorily. Other duties may be assigned to address business needs and changing business practices.
- Participates as a member of an Agile team developing Data Engineering solutions.
- Engages in requirements gathering and technical design discussions to meet business needs.
- Designs and develops generic, scalable data pipelines in Azure Data Factory and Databricks with Python for on-prem and cloud data sources.
- Assembles large, complex sets of data that meet functional and non-functional business requirements.
- Solves unstructured data problems and manipulates and optimizes large data sets to advance business problem-solving.
- Contributes to documentation, testing, and cross-training of other team members.
- Works closely with others to assist and resolve production issues.
The following generally describes the requirements needed to successfully perform the assigned duties.
- 4+ years of data engineering or equivalent experience in a related computer science field.
- 4+ years of hands-on experience in developing and deploying data architecture strategies or engineering practices.
- 4+ years of experience with complex SQL queries and knowledge of database technologies.
- Expert-level coding experience with PySpark and Python.
- Expert-level technical experience with Apache Spark / Azure Databricks.
- Proficient in using and designing solutions on Azure Cloud infrastructure (particularly Azure Data Factory) and Azure DevOps.
- Proficient with core business intelligence and data warehousing technology.
- Proficient in designing and developing data integration solutions using ETL tools such as Azure Data Factory and/or SSIS.
- Proficient with software development practices such as Agile, TDD, and CI/CD.
- Ability to collaborate and communicate professionally, both verbally and in writing, at all levels of the organization, particularly bridging conversations between data and business stakeholders.
- Experience with Snowflake.
- Experience with graph databases or graph libraries.
- Experience with Kafka or other streaming technologies.
- Experience with Elasticsearch.
- Experience in the rail or other commodities-driven industries.
Work Environment And Physical Requirements
The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.