Senior Data Engineer Lead (180k-200k + Bonus)

Data
Minneapolis
Posted 3 months ago

Minnesota or NYC based.

We are a global alternative investment firm that employs a credit-oriented, value-based approach to investing across a broad array of geographies, segments, and asset types, including corporate credit, residential mortgages, real estate, specialty finance, transportation, and infrastructure. Offices are located in Minneapolis, London, and Singapore.

Our firm’s core values of Integrity, Excellence, Collegiality, Innovation, and Humility are apparent in our workplace every day. Employees are rewarded for their contributions, especially their passion, expertise, and ideas.

OPPORTUNITY

The work environment at our firm is fast-paced and exciting. We believe in our employees and what they can do. Our employees have the opportunity to work as part of a global platform that is complex, diverse, and ever-evolving. We reward hard work and intellectual capability. Our strong sense of team is anchored by mutual respect and support, both given and received. We work hard and take our work seriously, but we don’t take ourselves too seriously.

The Senior Data Engineer will play a critical role on the Data & Analytics team within the Finance, Technology and Operations group, which is managed by our Chief Operating Officer. The Data & Analytics team is responsible for designing and implementing a new enterprise reporting architecture and for building business intelligence solutions for global front-, middle-, and back-office teams.

The role will be located in the United States and report to the Head of Data & Analytics based in Minneapolis.

SUMMARY OF RESPONSIBILITIES

  • Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation data applications
  • Building efficient storage for structured and unstructured data
  • Developing and deploying distributed-computing Big Data applications using open-source frameworks such as Apache Spark, Apex, Flink, NiFi, Storm, and Kafka
  • Utilizing programming languages such as Java, Scala, and Python; open-source RDBMS and NoSQL databases; and cloud-based data warehousing services such as Redshift
  • Utilizing Hadoop modules such as YARN and MapReduce, and related Apache projects such as Hive, HBase, Pig, and Cassandra
  • Leveraging DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools such as Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git, and Docker
  • Writing unit tests and conducting reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance

QUALIFICATIONS

  • Bachelor’s degree in Computer Science or another technical field, or equivalent work experience
  • Master’s Degree is preferred
  • 3+ years of professional work experience in data warehousing / analytics
  • At least 3 years of ETL design, development and implementation experience
  • 2+ years of Python development experience
  • 2+ years of Agile engineering experience
  • 2+ years of experience with the Hadoop Stack
  • 2+ years of experience with Cloud computing
  • 2+ years of Java development experience
  • 4+ years of scripting experience
  • 4+ years of experience with Relational Database Systems and SQL (PostgreSQL or Redshift)
  • 4+ years of UNIX/Linux experience
  • Able to work in the United States without sponsorship

Job Features

Job Category: Full Time
