 
    Hedge Fund #015
Location: New York City or Seattle, WA (Onsite | Hybrid flexibility)
Compensation: $200K–$300K Base + Bonus + Elite Benefits
We’re building the future of HPC and AI infrastructure at global scale, and we need battle-tested engineers who thrive where cutting-edge research meets raw compute power.
This isn’t just another systems role. You’ll be at the heart of one of the world’s most advanced trading research platforms, operating petabyte-scale storage, GPU superclusters, and distributed systems spanning data centers worldwide.
Think thousands of nodes, triple-digit petabytes, and real-time workloads, with a mandate to make it all faster, leaner, and more reliable every single day.
What You’ll Do
- Engineer at scale: Architect, build, and fine-tune massive GPU compute clusters powering AI and HPC research.
- Squeeze every cycle: Profile, benchmark, and optimize GPU workloads across compute, storage, and network layers.
- Automate everything: Deploy, monitor, and troubleshoot thousands of nodes with smart automation and Python tooling.
- Collaborate with brilliance: Partner with researchers, quants, and engineers to unlock performance breakthroughs.
- Push boundaries: Test new hardware and software, drive vendor partnerships, and deploy bleeding-edge NVIDIA tech (NCCL, NVLink, GPUDirect RDMA, etc.).
What We’re Looking For
- 5+ years in HPC, AI, or distributed Linux systems engineering; large scale is your comfort zone.
- Deep expertise in GPU performance optimization, distributed workloads, and Linux tuning.
- Hands-on with Python automation (bonus if you code in C/C++ or CUDA).
- Familiar with configuration management (Ansible, Puppet, Salt, Chef), but not afraid to build tools when off-the-shelf won’t cut it.
- Comfortable diagnosing across hardware, OS, and networking layers when things break (and they will).
- A strong communicator and high-impact operator who thrives under pressure.
