In case you hadn't noticed, Large Language Models tend to start making up stories when they run out of facts. There are a few reasons for this, but a lot of it is because they are just generating 'completions'. They don't stop just because they've run out of ideas (and yes, we all know actual people who do that as well). They've also been trained to be 'helpful', so you might have detected a certain tone in generated text. I'm pretty sure we can recognise GPT-4 output in the same way all those DALL-E images have a certain look. In this talk we'll examine the reasons for this behaviour. More importantly, we'll see how to use techniques like RAG and prompt engineering to mitigate these issues, and how a more complex 'multi-agent' architecture can help build reliable AI solutions. We'll illustrate these approaches by looking at some real-life applications that use them.
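To give a flavour of the RAG idea before the talk: below is a minimal sketch, not drawn from the talk itself. It stands in a toy keyword-overlap retriever for a real embedding model and vector store, and the documents, prompt wording, and `retrieve`/`build_prompt` helpers are all hypothetical illustrations.

```python
# Minimal RAG sketch: retrieve relevant snippets, then ground the prompt in them.
# The documents, the keyword-overlap scorer, and the prompt template are all
# illustrative stand-ins; a real system would use embeddings, a vector store,
# and an LLM API call at the end.

DOCUMENTS = [
    "Our returns policy allows refunds within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm UK time.",
    "Premium accounts include priority support and a 99.9% uptime SLA.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Constrain the model to retrieved facts to reduce made-up answers."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer ONLY using the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"
    )

query = "When can I get a refund?"
prompt = build_prompt(query, retrieve(query, DOCUMENTS))
print(prompt)  # send this to your LLM of choice instead of the bare question
```

The two halves map onto the talk's themes: retrieval supplies facts so the model has less reason to invent them, and the prompt engineering explicitly gives it permission to stop and say "I don't know" rather than generating a plausible-sounding completion.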
Dr Bill Ayers is a consultant developer and solution architect who has been working with computers for over 30 years. He earned his PhD in applications of computers in engineering before specialising in collaboration, first with SharePoint and more recently with Microsoft 365 and Azure. He also specialises in mobile development and agile software development practices, and has been working with AI since the 1990s. He is a Microsoft Certified Master and Charter MCSM for SharePoint, a Microsoft Certified Trainer, and a Microsoft MVP for M365 and AI Platform. He has also earned over forty Microsoft certifications and is a CompTIA CTT+ certified classroom trainer. He speaks regularly at international conferences and user groups and is based in Sheffield, UK.