Company & Role
This business is a research-driven AI company. They believe open and transparent AI systems will drive innovation and create the best outcomes for society, and they are on a mission to significantly lower the cost of modern AI systems by designing software, hardware, algorithms, and models. They have contributed leading open-source research, models, and datasets to advance the frontier of AI, and their team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. They are now looking for an AI Engineer to develop the systems and APIs that enable customers to perform inference and fine-tune LLMs. Relevant experience includes implementing runtime systems that perform inference at scale using AI/ML models, from simple models up to the largest LLMs.
Responsibilities
- Design and build the production systems that power the Cloud inference and fine-tuning APIs, ensuring reliability and performance at scale
- Partner with researchers, engineers, product managers, and designers to bring new features and research capabilities to the world
- Perform architecture and research work for AI workloads
- Analyze and improve efficiency, scalability, and stability of various system resources
- Conduct design and code reviews
- Create services, tools & developer documentation
- Create testing frameworks for robustness and fault-tolerance
- Participate in an on-call rotation to respond to critical incidents as needed
Requirements
- 5+ years of experience writing high-performance, well-tested, production-quality code
- Bachelor’s degree in computer science or equivalent industry experience
- Demonstrated experience building large-scale, fault-tolerant distributed systems such as storage, search, or computation platforms
- Expert-level programmer in one or more of Python, Go, Rust, or C/C++
- Experience implementing runtime inference services at scale, or similar
- Excellent understanding of low-level operating-systems concepts, including multi-threading, memory management, networking and storage, and performance at scale
- Knowledge of GPU programming, NCCL, and CUDA a plus
- Experience with PyTorch or TensorFlow a plus
How do you Apply?
If you are interested in the AI Engineer role, please apply via the link on this page or contact Digital Republic by phone or email.
Contact us to find out more:
Get in contact with Digital Republic Talent by emailing [email protected] or calling +44 204 518 5729. Check our Job Vacancies page for more job opportunities. You can also find us on LinkedIn, Instagram or Facebook.