The Network Architect at Together AI will define and implement network strategies for AI compute platforms, ensuring high performance, resilience, and scalability across multiple environments. This deeply technical role involves collaboration with various teams to optimize network infrastructure for cutting-edge AI research and training.
This role involves managing debt capital markets, strategic partnerships, and financing processes, working closely with executive leadership. The ideal candidate brings deep expertise in structured finance, legal negotiations, and relationship management to drive the company's infrastructure scaling strategy.
This internship involves managing product launch activities, creating marketing assets, and analyzing developer adoption for AI inference and fine-tuning platforms. It offers practical experience in a fast-paced AI company with competitive compensation and benefits.
Join Together AI as a research intern to explore efficient, scalable RL and post-training techniques for large language models, working at the intersection of algorithms and inference systems. The role involves designing experiments, analyzing system constraints, and contributing to foundational AI research.
GPU Cluster Resource Scheduling and Optimization Engineer
This role involves designing advanced scheduling and resource management strategies to improve performance and reduce costs in AI systems. The engineer will collaborate with research and engineering teams to build scalable, efficient solutions.
This role involves co-designing GPU kernels and model architectures to enhance AI system performance, collaborating across teams, and staying current with advances in GPU programming. The company builds next-generation AI infrastructure with a mission to lower the cost of AI systems.
Together AI is hiring a Rust Systems Engineer to enhance AI inference performance and work closely with AI researchers. The role involves developing high-performance, distributed systems and optimizing ML inference frameworks.
This role involves architecting multi-petabyte storage solutions, optimizing networks, building Kubernetes storage operators, and ensuring high availability for AI training and inference workloads. The candidate will also lead cost optimization, performance tuning, and contribute to open-source projects.