Remote Data Science and Data Analyst Jobs

Data Engineer

Dealer Inspire - US
Posted: 6 months ago

Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends are spread across the United States and Canada, pushing the boundaries and getting it done every day, together.

DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today!

Dealer Inspire is a CARS brand. CARS includes the following brands: Dealer Inspire, DealerRater, FUEL, CreditIQ, and Accu-Trade. CARS is one of Chicago's original tech companies. Our online platform makes it easier for consumers to shop for, sell, and service their cars. With expert content, mobile app features, millions of new and used vehicle listings, a comprehensive set of research tools, and the largest database of consumer reviews in the industry, CARS offers innovative products to connect consumers with dealers across the country.

Data is the driver for our future at CARS. We're searching for a collaborative, analytical, and innovative engineer to build scalable, highly performant platforms, systems, and tools that enable innovation with data. If you are passionate about building large-scale systems and data-driven products, we want to hear from you.

Responsibilities Include:

  • Build data pipelines and derive insights from the data using advanced analytic techniques, streaming, and machine learning at scale
  • Work within a dynamic, forward thinking team environment where you will design, develop, and maintain mission-critical, highly visible Big Data and Machine Learning applications
  • Build, deploy, and support data pipelines and ML models in production.
  • Work in close partnership with other Engineering teams, including Data Science, and cross-functional teams such as Product Management and Product Design
  • Mentor others on the team and share your knowledge across the organization

Required Skills

  • Ability to develop Spark jobs to cleanse/enrich/process large amounts of data.
  • Experience tuning Spark jobs for efficient performance, including job execution time, execution memory, etc.
  • Experience with dimensional data modeling concepts.
  • Sound understanding of various file formats and compression techniques.
  • Experience with source code management systems such as GitHub, and with developing CI/CD pipelines for data using tools such as Jenkins.
  • Deep understanding of the entire architecture for a major part of the business, including the ability to articulate the scaling and reliability limits of that area; design, develop, and debug at an enterprise level, and design and estimate at a cross-project level.
  • Ability to mentor developers and lead projects of medium to high complexity.
  • Excellent communication and collaboration skills.
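The first skills above center on cleansing and enriching large datasets with Spark. As a hypothetical miniature of that kind of work, the sketch below uses plain Python (standing in for logic that would normally run inside a Spark DataFrame transformation); the record fields `vin` and `price` are illustrative only, not taken from any actual CARS schema.

```python
# Hypothetical cleanse/enrich step for vehicle listing records.
# In a real Spark job this logic would live in DataFrame operations
# (e.g. filter / withColumn); plain Python is used here only to
# illustrate the shape of the transformation.

def cleanse_and_enrich(records):
    """Drop records missing a VIN, normalize prices, and tag a price band."""
    cleaned = []
    for rec in records:
        if not rec.get("vin"):                 # cleanse: require a VIN
            continue
        price = float(rec.get("price") or 0)   # cleanse: normalize price to float
        enriched = {**rec, "price": price}
        # enrich: derive a coarse price band for downstream analytics
        enriched["price_band"] = "under_20k" if price < 20_000 else "20k_plus"
        cleaned.append(enriched)
    return cleaned

listings = [
    {"vin": "1HGCM82633A004352", "price": "18500"},
    {"vin": None, "price": "9999"},            # dropped: no VIN
    {"vin": "2FMDK3GC4ABA12345", "price": "27250"},
]
print(cleanse_and_enrich(listings))
```

The same cleanse-then-enrich pattern scales up directly: each step maps to a narrow, independently testable transformation, which is what makes Spark jobs like these tunable and maintainable.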

Required Experience

  • Software Engineering: 3 - 5 years of designing and developing complex batch processes at enterprise scale, specifically using Python and/or Scala.
  • Big Data Ecosystem: 2+ years of hands-on, professional experience with tools and platforms like PySpark, Airflow, and Redshift.
  • AWS Cloud: 2+ years of professional experience in developing Big Data applications in the cloud, specifically AWS.

Preferred Experience

  • Experience working with Clickstream Data
  • Experience working with digital marketing data
  • Experience with developing REST APIs.
  • Experience in deploying ML models into production and integrating them into production applications for use.
  • Experience with machine learning / deep learning using R, Python, Jupyter, Zeppelin, TensorFlow, etc.