AI, LLMs and Neural Networks from first principles - Pt 1 - Introduction

Since the launch of ChatGPT (initially powered by GPT-3.5), Artificial Intelligence (AI) and Large Language Models (LLMs) have irrevocably entered the developer's toolkit and everyday workflow. It's now standard to encounter or integrate AI logic into applications, making a fundamental understanding of AI an essential skill for developers.
However, for many, the focus has been on the application layer (prompt engineering, API integration, and model configuration) rather than on the core logic: how these models actually work. (Note: this is different from understanding why they work, which remains a scientific mystery!) To help provide a solid technical foundation, we're launching a new series that explains AI and LLMs from first principles. Our goal is to equip you with the deep understanding required to move beyond mere integration and truly engineer with these technologies.
To do this properly, we'll start at the very beginning: the dawn of neural networks. We'll explore how they work and how they provide the computational foundation for machine learning, before building up to Recurrent Neural Networks (RNNs) and, finally, the transformer architecture powering LLMs.
If you're ready to move past the API calls and truly master the mechanics of AI, LLMs, and neural networks, we hope this series provides the depth you need.
Next entries in the series:
- AI, LLMs and Neural Networks from first principles - Pt 2 - And There Was the Derivative
- AI, LLMs and Neural Networks from first principles - Pt 3 - Main Concepts
- AI, LLMs and Neural Networks from first principles - Pt 4 - Forward Propagation
- AI, LLMs and Neural Networks from first principles - Pt 5 - Back Propagation
About Us
At Centrid Software Solutions, we want to make a difference in the world around us, and we believe software has a major role to play in doing so.