Your LLM-powered product is gaining traction — maybe it's a smart assistant, a co-pilot, or even a fully autonomous agent. Early API tests look promising, but how do you transition from experimentation to robust, scalable deployment? This session guides you through the evolving discipline of GenAIOps/LLMOps: ensuring quality, safety, observability, and governance while continuously improving your model stack, prompt strategies, retrieval pipelines, and agent orchestration. Whether you're fine-tuning models, optimizing for latency and token costs, or introducing human feedback and multi-agent collaboration, you'll learn how to close the loop between innovation and reliability — step by step.
Maxim Salnikov is a tech and cloud community enthusiast based in Oslo. With over two decades of experience as a web developer, he shares his extensive knowledge of the web platform, cloud computing, and AI by speaking at and providing training for developer events worldwide. By day, Maxim supports the development of cloud and AI solutions in European companies, leading the Developer Productivity business at Microsoft. In the evenings, he runs events for Norway's largest web and cloud development communities. Maxim is passionate about exploring and experimenting with the possibilities of Generative AI, including AI-assisted development. To share his insights and connect with like-minded professionals globally, he founded and organized the Prompt Engineering Conference, the first global conference of its kind.