Hedge Fund is a $13 billion global alternative investment firm that employs a credit-oriented, value-based approach to investing across a broad array of geographies, segments and asset types, including corporate credit, residential mortgages, real estate, specialty finance, transportation and infrastructure. Hedge Fund has three key offices, located in New York, London, and Singapore.
Hedge Fund’s investing activities are guided by the following investment philosophy: (i) search opportunistically for undervalued or inefficient markets; (ii) invest in financial assets at a discount to their intrinsic value; (iii) identify catalysts for value recognition; (iv) pursue medium-term investment horizons; and (v) manage risk through diversified investment programs and trading strategies.
The work environment at Hedge Fund is fast-paced and exciting. We believe in our employees and what they can do. Our employees have the opportunity to work as part of a global platform that is complex, diverse and ever-evolving. We reward hard work and intellectual capability. Our strong sense of team is anchored by mutual respect and support, both given and received. We work hard and take our work seriously, but we don’t take ourselves too seriously.
The position of Senior Data Engineer will play a critical role on the Data & Analytics team within the Finance, Technology and Operations group managed by Hedge Fund’s Chief Operating Officer. The Data & Analytics team is responsible for designing and implementing a new enterprise reporting architecture as well as building business intelligence solutions for global front, middle and back office teams.
The role will be located in New York City and will report to the Head of Data & Analytics, who is based in Minneapolis, Hedge Fund’s global headquarters.
SUMMARY OF RESPONSIBILITIES
- Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation data applications
- Building efficient storage for structured and unstructured data
- Developing and deploying distributed computing Big Data applications using open-source frameworks such as Apache Spark, Apex, Flink, NiFi, Storm and Kafka
- Utilizing programming languages such as Java, Scala and Python, open-source RDBMS and NoSQL databases, and cloud-based data warehousing services such as Amazon Redshift
- Utilizing Hadoop modules such as YARN and MapReduce, and related Apache projects such as Hive, HBase, Pig and Cassandra
- Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test-Driven Development to enable the rapid delivery of working code utilizing tools like Jenkins, Maven, Nexus, Chef, Terraform, Ruby, Git and Docker
- Performing unit tests and conducting reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance
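As a minimal illustration of the MapReduce model named in the responsibilities above (a pure-Python sketch of the map and reduce phases, not an actual Hadoop or Spark job; the sample input lines are hypothetical):

```python
from collections import Counter
from functools import reduce

def map_phase(line: str) -> Counter:
    # Map: emit a (word -> count) mapping for one input line.
    return Counter(line.lower().split())

def reduce_phase(acc: Counter, partial: Counter) -> Counter:
    # Reduce: merge partial counts into the running total.
    acc.update(partial)
    return acc

# Hypothetical input split into lines, as a distributed job would shard it.
lines = ["spark streams events", "kafka streams events"]
counts = reduce(reduce_phase, map(map_phase, lines), Counter())
# counts["streams"] == 2, counts["events"] == 2
```

In a real Hadoop or Spark deployment the map and reduce phases run in parallel across a cluster; the sketch only shows the programming model.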
SUMMARY OF QUALIFICATIONS
- Bachelor’s degree in Computer Science or other technical field, or equivalent work experience
- Master’s Degree is preferred
- At least 3 years of professional work experience in data warehousing / analytics
- At least 3 years of ETL design, development and implementation experience
- At least 2 years of Python development experience
- At least 2 years of Agile engineering experience
- At least 2 years of experience with the Hadoop stack
- At least 2 years of experience with cloud computing
- At least 2 years of Java development experience
- At least 4 years of scripting experience
- At least 4 years of experience with relational database systems and SQL (PostgreSQL or Redshift)
- At least 4 years of UNIX/Linux experience
- Able to work in the United States without sponsorship
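To illustrate the ETL (extract, transform, load) experience called for above, here is a minimal Python sketch; the trade records are hypothetical, and sqlite3 stands in for a production relational store such as PostgreSQL or Redshift:

```python
import sqlite3

# Extract: hypothetical raw records, as strings from an upstream source.
raw = [
    {"ticker": "AAPL", "qty": "100", "price": "190.5"},
    {"ticker": "MSFT", "qty": "50", "price": "410.0"},
]

# Transform: cast types and derive the notional value per trade.
rows = [
    (r["ticker"], int(r["qty"]), float(r["price"]),
     int(r["qty"]) * float(r["price"]))
    for r in raw
]

# Load: write into a relational table and verify with SQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (ticker TEXT, qty INTEGER, price REAL, notional REAL)"
)
conn.executemany("INSERT INTO trades VALUES (?, ?, ?, ?)", rows)
total = conn.execute("SELECT SUM(notional) FROM trades").fetchone()[0]
```

A production pipeline would add scheduling, incremental loads, and data-quality checks; the sketch only shows the three-stage shape of the work.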
Job Category: Full Time