Job Description
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch, and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Join Tenstorrent’s AI Models team and work at the layer most ML engineers never see: bringing advanced models to life on custom AI hardware. You’ll own real workloads end-to-end, including porting, tuning, and validating LLMs and vision models on our accelerator, chasing down every last millisecond and percentage point of accuracy. This role is for people who love the craft of ML engineering and want their work to matter at silicon scale, not just behind another API.
This role is hybrid, based in Cyprus.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
What You Will Do
Bring up, run, and debug modern ML models (e.g., transformers) using PyTorch or TensorFlow.
Analyze model behavior and performance, and identify bottlenecks across the stack.
Improve efficiency, correctness, and scalability of model execution in real systems.
Work closely with compiler, kernel, and hardware teams to drive performance and system-level improvements.
Help translate state-of-the-art model architectures into production-grade, high-performance deployments.
What We Need
Strong experience building and working with ML models in PyTorch or TensorFlow.
Strong understanding of modern ML model architectures (e.g., transformers).
Solid software engineering fundamentals with strong debugging and problem-solving skills.
Comfort working in a fast-moving, research-meets-engineering environment.
Bonus, not required: experience with profiling or performance tuning, or familiarity with quantization, FlashAttention, kernel fusion, memory hierarchies, C++, CUDA, or systems programming.
What You Will Learn
How to bring state‑of‑the‑art LLMs and vision models to high performance on a custom AI accelerator.
How to trace and fix performance bottlenecks from PyTorch code down to kernels and memory systems.
How to turn research‑grade models into reliable, production deployments on new hardware.
The practical trade‑offs between techniques like quantization, FlashAttention, and kernel fusion when you’re optimizing real throughput, latency, and memory.
How your findings can drive changes across compiler, kernel, and hardware teams in a full-stack co-design loop.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E:1, and E:2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.