Sunday, January 25, 2026

From Software Engineer to Agentic AI Specialist: A Realistic 18–24 Month Part-Time Roadmap

Many talented engineers I know want to pivot into Agentic AI — designing, building, and deploying autonomous AI systems that go beyond chatbots. These include multi-agent workflows for automation, research assistants, customer support agents, enterprise orchestration tools, and more.

They often ask for a clear roadmap. Here's one tailored for working engineers pursuing this part-time (10–15 hours/week). Timelines are approximate, and life gets busy, so consistency beats intensity. Full-time learners can compress the phases significantly.

By month 12, you can aim for employable junior/mid-level agentic AI skills. By month 18–24, you can reach specialist/architect level with a strong portfolio.

Phase 1: Core Foundations in AI and Software Engineering (Months 1–3) Build the engineering muscle needed for rapid AI prototyping. Skip if you're already strong in Python/backend.

Key Topics:

  • AI basics: Terminology (ML/DL/GenAI/agents), history, real-world applications, ethics, bias, hallucinations, limitations.
  • Advanced Python: Data structures, OOP, async/await, APIs (requests, httpx), typing, error handling.
  • Backend essentials: REST/GraphQL APIs, auth (JWT/OAuth), databases (PostgreSQL + SQLAlchemy, Redis for caching/state).
  • Dev tools: Git (branching, PRs), Docker (multi-container setups), basic CI/CD concepts.

Milestones:

  • Deploy a simple async backend service (e.g., API that queries external services + DB).
  • Containerize and version-control a project.

Recommended Resources (2026-updated):

  • AI intro: Andrew Ng's "AI For Everyone" (Coursera, free) + "Generative AI For Everyone" (newer course).
  • Python: "Automate the Boring Stuff with Python" (free online) or freeCodeCamp Python (YouTube).
  • Backend: FastAPI official docs + tutorial; PostgreSQL + Redis via freeCodeCamp or "Full Stack FastAPI" repo on GitHub.
  • Git/Docker: Official docs + "Docker for Developers" (free YouTube series).

Practical Projects:

  • Build → Dockerize → GitHub-deploy a weather API service that caches results in Redis (see the sketch after this list).
  • Summarize 5 production AI use cases (e.g., GitHub Copilot impact, agentic support tools).
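
For the weather-service project above, here is a minimal sketch, assuming a local Redis instance and a placeholder upstream weather API (the URL, route, and cache TTL are illustrative, not prescriptive):

```python
# app.py -- minimal async FastAPI service that caches upstream weather responses in Redis.
# Assumes `pip install fastapi uvicorn redis httpx` and a Redis server on localhost:6379.
# The upstream URL is a placeholder; swap in a real weather API and its auth scheme.
import json

import httpx
import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

WEATHER_URL = "https://api.example-weather.com/v1/current"  # placeholder endpoint
CACHE_TTL_SECONDS = 600  # serve cached results for 10 minutes


@app.get("/weather/{city}")
async def get_weather(city: str) -> dict:
    cache_key = f"weather:{city.lower()}"

    # 1. Return the cached payload if this city was fetched recently.
    if (cached := await cache.get(cache_key)) is not None:
        return {"source": "cache", **json.loads(cached)}

    # 2. Otherwise call the upstream API asynchronously.
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(WEATHER_URL, params={"city": city})
        resp.raise_for_status()
        payload = resp.json()

    # 3. Cache the fresh payload with a TTL, then return it.
    await cache.set(cache_key, json.dumps(payload), ex=CACHE_TTL_SECONDS)
    return {"source": "upstream", **payload}
```

Run it with `uvicorn app:app --reload`, then containerize it with a Dockerfile plus a docker-compose.yml that adds a Redis service, and both Phase 1 milestones are covered.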

Phase 2: LLM Fundamentals & Prompt Engineering (Months 4–6) Master how LLMs really work and start building with accessible tools.

Key Topics:

  • LLM internals: Tokens, context windows, embeddings, attention, fine-tuning basics.
  • Prompt engineering: Chain-of-thought, few-shot, role-playing, structured output (JSON mode; see the sketch after this list).
  • RAG fundamentals: Vector search, chunking, retrieval strategies.
  • Tools: OpenAI API, Azure OpenAI, Grok API, Anthropic Claude, Gemini.
  • Microsoft stack: GitHub Copilot, Copilot Studio, Azure AI Studio.
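
The structured-output bullet above is worth making concrete early, because agents live and die by parseable model output. Here is a minimal JSON-mode sketch with the OpenAI Python SDK (v1.x); the model name is a placeholder, and the same pattern carries over to Azure OpenAI, Claude, and Gemini with their own structured-output options:

```python
# Structured output via JSON mode. Assumes OPENAI_API_KEY is set and the chosen model
# supports response_format={"type": "json_object"}; the model name is a placeholder.
import json

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a ticket triage assistant. Reply ONLY with JSON of the form "
    '{"category": string, "priority": "low"|"medium"|"high", "summary": string}.'
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",                      # placeholder model name
    response_format={"type": "json_object"},  # ask the API to enforce valid JSON
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "The checkout page 500s whenever I apply a discount code."},
    ],
)

ticket = json.loads(resp.choices[0].message.content)  # JSON mode guarantees parseable output
print(ticket["category"], ticket["priority"])
```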

Milestones:

  • Build a production-grade RAG app.
  • Use AI coding assistants daily to accelerate learning.

Resources:

  • Prompting: DAIR.AI Prompt Engineering Guide (GitHub) + Anthropic's prompt library.
  • LLMs/RAG: Hugging Face NLP course (free); LangChain quickstart + "LangChain for LLM Application Development" (DeepLearning.AI short course).
  • Microsoft: Azure AI Fundamentals (AI-900) path on Microsoft Learn; Copilot Studio labs.

Practical Projects:

  • Experiment with GitHub Copilot/ChatGPT for code → Build a prompt-chaining document Q&A bot.
  • Create a RAG system over your own docs using Azure AI Search or Pinecone (free tier); a framework-free version is sketched below.
  • Document and mitigate 5 common LLM failure modes (hallucination, bias, jailbreak).
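
Before reaching for Azure AI Search or Pinecone, it helps to see the whole RAG loop in one file. Here is a minimal, framework-free sketch using OpenAI embeddings and cosine similarity in NumPy; the model names, documents, and chunk granularity are illustrative:

```python
# Tiny RAG sketch: embed chunks -> retrieve top-k by cosine similarity -> answer with context.
# Assumes `pip install openai numpy` and OPENAI_API_KEY; model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available 24/7 via chat; phone support runs 9am-5pm.",
    "Enterprise plans include SSO, audit logs, and a dedicated manager.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vecs = embed(docs)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

question = "Can I get my money back after three weeks?"
context = "\n".join(retrieve(question))

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer using ONLY the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

Swapping the in-memory array for Azure AI Search or Pinecone changes only the embed/retrieve plumbing; the retrieve-then-generate shape stays the same.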

Phase 3: Machine Learning Basics + Intro to Agentic AI (Months 7–9) Bridge traditional ML with agent reasoning patterns.

Key Topics:

  • ML essentials: Supervised/unsupervised, scikit-learn, evaluation (precision/recall/F1, ROC), embeddings for search.
  • Agentic core: Autonomy, reasoning loops (ReAct, Plan-and-Execute, Reflection), tool use/calling.
  • First frameworks: LangChain basics → LangGraph for stateful graphs.

Milestones:

  • Train a small ML model and integrate it into an agent.
  • Build your first ReAct-style agent.

Resources:

  • ML: Andrew Ng "Machine Learning" (Coursera, updated) or fast.ai Practical Deep Learning (free).
  • Agents: LangChain/LangGraph docs; "ReAct" paper (arXiv); LangChain Academy (free courses).
  • Microsoft: Azure ML basics + Semantic Kernel intro.

Practical Projects:

  • Sentiment classifier → agent that uses it for decision-making.
  • ReAct agent that chains tools (e.g., search → calculate → respond); see the bare-loop sketch after this list.
  • Microsoft 365-integrated agent (e.g., email drafter/summarizer).
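
For the ReAct-style project above, it's worth hand-rolling the loop once before adopting LangChain or LangGraph, so the framework abstractions aren't magic. A bare-bones sketch using OpenAI-style tool calling; the model name and the toy tool are placeholders:

```python
# Hand-rolled ReAct-style loop: the model decides when to call a tool, the code executes it,
# feeds the observation back, and repeats until the model answers in plain text.
# Assumes the OpenAI Python SDK v1.x and OPENAI_API_KEY; the model name is a placeholder.
import json

from openai import OpenAI

client = OpenAI()

def get_word_length(word: str) -> str:
    """Toy tool; real agents wrap search, calculators, or internal APIs here."""
    return str(len(word))

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_word_length",
        "description": "Return the number of characters in a word.",
        "parameters": {
            "type": "object",
            "properties": {"word": {"type": "string"}},
            "required": ["word"],
        },
    },
}]

messages = [{"role": "user", "content": "How many letters are in 'orchestration'?"}]

for _ in range(5):  # hard cap on think/act iterations
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=TOOLS)
    msg = resp.choices[0].message

    if not msg.tool_calls:       # no tool requested -> this is the final answer
        print(msg.content)
        break

    messages.append(msg)         # keep the assistant's tool request in the history
    for call in msg.tool_calls:  # execute each requested tool and return the observation
        args = json.loads(call.function.arguments)
        result = get_word_length(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```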

Phase 4: Memory, Multi-Agent Collaboration & Advanced Patterns (Months 10–12) Make agents persistent, collaborative, and reliable.

Key Topics:

  • Memory: Short-term (context), long-term (vector DB + Redis), entity memory.
  • Multi-agent: Supervisor/worker patterns, agent communication, handoffs.
  • Frameworks: LangGraph (state machines), CrewAI (role-based teams), AutoGen (conversational), Microsoft Semantic Kernel.
  • Human-in-the-loop, error recovery, planning.

Milestones:

  • Deploy a 2–3 agent system with shared memory.
  • Handle real-world messiness (API failures, context drift).

Resources:

  • Advanced RAG/memory: LangChain advanced guides; vector DBs (Pinecone, Weaviate, Qdrant free tiers).
  • Multi-agents: CrewAI docs/examples; AutoGen tutorials; LangChain Academy's LangGraph course.
  • Microsoft: Semantic Kernel + Azure AI multi-service labs.

Practical Projects:

  • Persistent personal assistant with conversation history.
  • Multi-agent team (researcher → writer → reviewer) for report generation (sketched below).
  • Add approval gates for sensitive actions.
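
For the researcher → writer → reviewer project above, here is a skeleton of the shared-state handoff, sketched with LangGraph (assuming a recent langgraph release); the node bodies are stubs where LLM calls, RAG lookups, or an approval gate would go:

```python
# Researcher -> writer -> reviewer pipeline over shared state.
# Assumes `pip install langgraph`; node bodies are stubs -- in practice each calls an LLM or tools.
from typing import TypedDict

from langgraph.graph import END, StateGraph

class ReportState(TypedDict):
    topic: str
    notes: str
    draft: str
    review: str

def researcher(state: ReportState) -> dict:
    # Placeholder for a web-search or RAG step.
    return {"notes": f"Key findings about {state['topic']} ..."}

def writer(state: ReportState) -> dict:
    # Placeholder for an LLM call that turns notes into prose.
    return {"draft": f"Report draft based on: {state['notes']}"}

def reviewer(state: ReportState) -> dict:
    # Placeholder for a critique step or a human-in-the-loop approval gate.
    return {"review": "approved" if state["draft"] else "needs rework"}

graph = StateGraph(ReportState)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
graph.add_node("reviewer", reviewer)
graph.set_entry_point("researcher")
graph.add_edge("researcher", "writer")
graph.add_edge("writer", "reviewer")
graph.add_edge("reviewer", END)

app = graph.compile()
print(app.invoke({"topic": "agentic AI adoption", "notes": "", "draft": "", "review": ""}))
```

Persistence (conversation history, long-term memory) slots in via LangGraph checkpointers or an external store such as Redis plus a vector DB.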

Phase 5: Productionization, LLMOps/MLOps & Cloud Deployment (Months 13–15) Go from prototype to reliable production system.

Key Topics:

  • LLMOps: Prompt/model versioning, evaluation datasets, tracing (LangSmith, Phoenix, Braintrust).
  • Observability: Cost tracking, latency, error rates; guardrails (NeMo Guardrails, Azure Content Safety).
  • Deployment: Serverless (Azure Functions), containers (AKS), CI/CD (GitHub Actions).
  • IaC: Terraform or Bicep; multi-cloud basics (AWS Bedrock, GCP Vertex AI).
  • Security: PII redaction, auth, multi-tenancy, compliance.

Milestones:

  • Production-deploy an agent system with monitoring.
  • Implement cost/latency optimizations.

Resources:

  • LLMOps: LangSmith tutorials; "LLMOps best practices" guides (2025–2026); Braintrust/Weights & Biases.
  • Cloud: Azure AI Engineer Associate (AI-102) prep path; free tiers on AWS/GCP/Azure.
  • Security: OWASP LLM Top 10 (updated).

Practical Projects:

  • CI/CD pipeline → Azure/AWS for agent app.
  • Add tracing + alerts to catch hallucinations/cost spikes (see the sketch after this list).
  • Secure multi-tenant design doc + partial implementation.
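
As a taste of the tracing-and-alerts project above, here is a minimal, framework-free sketch of per-call latency and cost logging; the price constant and alert threshold are placeholders, and dedicated tools (LangSmith, Phoenix, Braintrust) give you this and much more:

```python
# Minimal observability sketch: wrap any LLM call to log latency and token usage,
# and warn when a single call blows past a per-call budget.
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-trace")

COST_ALERT_USD = 0.05            # illustrative per-call budget
PRICE_PER_1K_TOKENS = 0.002      # placeholder blended price -- use your model's real pricing

def traced(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        latency_ms = (time.perf_counter() - start) * 1000

        usage = getattr(result, "usage", None)  # OpenAI-style responses expose .usage
        total_tokens = getattr(usage, "total_tokens", 0) if usage else 0
        cost = total_tokens / 1000 * PRICE_PER_1K_TOKENS

        log.info("%s latency=%.0fms tokens=%d est_cost=$%.4f",
                 fn.__name__, latency_ms, total_tokens, cost)
        if cost > COST_ALERT_USD:
            log.warning("%s exceeded per-call budget", fn.__name__)  # hook: page/Slack here
        return result
    return wrapper

@traced
def ask_llm(prompt: str):
    """Stub standing in for a real client.chat.completions.create(...) call."""
    time.sleep(0.1)

    class FakeUsage:
        total_tokens = 420

    class FakeResponse:
        usage = FakeUsage()

    return FakeResponse()

ask_llm("Summarize today's incidents.")
```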

Phase 6: Capstone Projects, Portfolio & Leadership (Months 16–24) Become an architect-level specialist.

Key Topics:

  • System design: Trade-offs, scalability, hybrid (ML + rules + agents).
  • Emerging: Multimodal agents, self-improving agents, ethical scaling.
  • Leadership: Design docs, code reviews, AI strategy alignment.
  • Certifications: Azure AI Engineer Associate (AI-102), Azure Data Scientist (DP-100); optional Google Professional ML Engineer.

Milestones:

  • 4–6 strong portfolio projects.
  • Contribute to open-source (e.g., LangGraph examples) or mentor.

Resources:

  • Projects: Build on real problems (enterprise automation, personal productivity).
  • Trends: Follow arXiv, Hugging Face blog, xAI/Grok updates, Microsoft Build.
  • Leadership: "Staff Engineer" book; AI design patterns repos.

Practical Projects:

  • End-to-end platform (e.g., multi-agent customer support in Azure + M365).
  • Portfolio site + GitHub with READMEs, architecture diagrams.
  • Contribute a feature/fix to CrewAI/LangGraph or write a blog series.

Final Mastery Tips (2026 Edition)

  • Build obsessively — 60%+ hands-on. Start simple → add complexity (single tool call → full multi-agent orchestration).
  • Community — Join r/LocalLLaMA, r/AI_Agents, Discord (LangChain, CrewAI), Microsoft AI forums; attend virtual meetups.
  • Track & reflect — Weekly journal: What failed? Why? How to guardrail? Revisit earlier phases when stuck.
  • System thinking — Always zoom out: How does this agent behave at 10k users/day? Cost? Failure modes?
  • Avoid hype — Focus on solving painful, real problems (e.g., automate repetitive engineering tasks first).
  • Flexibility — Skip or merge phases where you're already strong. Many reach solid agentic roles in 12–15 months with focused effort.

This path works because it balances depth (agents, ops) with practicality (Microsoft tools for enterprise jobs). Good luck — the agentic era is here, and skilled builders are in huge demand! 🚀




 

Tuesday, November 18, 2025

AzureChatOpenAI vs AzureOpenAI

AzureChatOpenAI and AzureOpenAI are components within the LangChain framework designed to interact with the Azure OpenAI Service, but they are tailored for different types of interactions with the underlying language models.

AzureChatOpenAI:
  • Purpose: 
    This class is specifically designed for interacting with chat completion models like GPT-3.5 Turbo and GPT-4, which are optimized for conversational AI.
  • Input/Output: 
    It handles input and output in a message-based format, reflecting the turn-based nature of conversations. You provide a list of messages (user, assistant, system roles), and it returns a message response.
  • Use Cases: 
    Building chatbots, conversational agents, and applications that require maintaining context over multiple turns of dialogue.
AzureOpenAI:
  • Purpose: 
    This class is designed for interacting with text completion models (older generation models like text-davinci-003) and potentially other types of models within the Azure OpenAI Service that are not primarily chat-focused.
  • Input/Output: 
    It typically takes a single prompt string as input and returns a generated text completion.
  • Use Cases: 
    Generating code, creative writing, text summarization, or other tasks where a single prompt and a single completion are sufficient.
Key Differences and When to Use Which:
  • Model Type: 
    AzureChatOpenAI is for chat models (e.g., GPT-3.5 Turbo, GPT-4), while AzureOpenAI is for older text completion models.
  • Interaction Style: 
    AzureChatOpenAI handles message lists for conversational turns, while AzureOpenAI handles single prompt strings for direct text generation.
  • Context Handling: 
    AzureChatOpenAI inherently supports conversational context through the message history you provide, making it suitable for multi-turn interactions. AzureOpenAI is more suited for single-shot text generation tasks.
In modern LangChain applications interacting with Azure OpenAI, AzureChatOpenAI is generally the preferred choice when working with the latest and most capable models like GPT-3.5 Turbo and GPT-4, especially for any application involving conversational elements. AzureOpenAI might be used for legacy applications or specific tasks where older text completion models are still relevant.
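
Here is a hedged side-by-side sketch using the langchain-openai package; the deployment names and API version are placeholders for your own Azure OpenAI resources, and the endpoint/key are read from environment variables:

```python
# Side-by-side sketch, assuming `pip install langchain-openai` and that
# AZURE_OPENAI_API_KEY / AZURE_OPENAI_ENDPOINT are set in the environment.
# Deployment names and the API version below are placeholders.
from langchain_openai import AzureChatOpenAI, AzureOpenAI

# Chat-completion models (GPT-3.5 Turbo, GPT-4, ...): messages in, a message out.
chat = AzureChatOpenAI(
    azure_deployment="gpt-4o",     # your chat-model deployment name
    api_version="2024-06-01",      # placeholder API version
    temperature=0,
)
reply = chat.invoke([
    ("system", "You are a concise assistant."),
    ("human", "Explain vector embeddings in one sentence."),
])
print(reply.content)               # an AIMessage comes back

# Legacy text-completion models: a single prompt string in, a completion string out.
legacy = AzureOpenAI(
    azure_deployment="gpt-35-turbo-instruct",  # a completions-style deployment
    api_version="2024-06-01",
)
print(legacy.invoke("Write a haiku about refactoring."))  # a plain string comes back
```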

Tuesday, November 4, 2025

Exam AI-102: Microsoft Certified: Azure AI Engineer Associate

Another Milestone Achieved: Microsoft Certified Azure AI Engineer Associate (AI-102)

I’m happy to share that I’ve earned a new certification from Microsoft: Azure AI Engineer Associate (Exam AI-102)!

Friday, August 1, 2025

The Flower Path vs the Lion's Path (பூப்பாதை vs சிங்கப்பாதை)

In the software engineering field, the usual progression after a Senior Tech Lead or Senior Consultant role is a promotion to Junior Solutions Architect, followed later by Solutions Architect.

Many people in the field never chase these titles. They aren't the type to go around boasting "I cracked this, I conquered that, first in the whole town, and we're engineers too"; they don't have the time for it. Out of sheer devotion to the work itself, they keep their heads down: work, earn, spend, have a little fun, put the kids through school.
Coming to the point: last year, my workplace called for applications to fill two back-to-back openings, Junior Architect and Solutions Architect.
I applied as well. The application was accepted, and after several rounds of interviews and case studies the result came back: I was above the bar for Junior Architect but had not yet met the bar for Solutions Architect ("Qualified above Associate Architect and Below to be Solutions Architect").
Following that tricky verdict, two choices were placed before me: the flower path, and the lion's path!
1️⃣ The flower path: accept the Junior Solutions Architect role.
2️⃣ The lion's path: the challenging one. Continue as a Senior Digital Consultant while also operating as a Solutions Architect and meeting key KPIs (performance indicators); fail to meet them, and it's back to the old Senior Digital Consultant position.
Cocky as I am ("a lion walks alone"), I chose the second path.
For the past 8 months I worked both roles and proved I deserved to become a Solutions Architect. Alhamdulillah, I have now received the promotion to Solutions Architect as well.
What next? Time to dig into the new role in earnest, and I'm also planning to take a proper crack at Artificial Intelligence. In sha Allah.
I'm delighted to share this happy news with my dear ones, my friends, and everyone.
May many rejoice, may a few scoff, and may this post stand tall.
Starting my AI journey — not just to follow trends, but to build what’s next.




Thursday, July 31, 2025

Officially a Solutions Architect!

 Back in June 2024, I applied for two internal openings in our organization — Junior Solutions Architect and Solutions Architect. After case studies, technical and non-technical interviews, the verdict was clear:

"You’re above Junior, but not quite there for Solutions Architect yet."

I was offered two choices:
1. Step into the Junior Solutions Architect role.
2. Take the challenging path — continue as a Senior Digital Consultant, but also play the role of a Solutions Architect and prove myself by achieving key KPIs. If I couldn’t achieve them, I’d go back to Senior Digital Consultant.

I chose the second: The challenging and risky path. It wasn’t easy. Balancing both roles meant long hours, tough decisions, and constant learning. But I’m incredibly grateful to say:
I made it — officially promoted to Solutions Architect!

And I didn’t do it alone. I had the best support around me. Thanks to everyone at Gallagher who supported me.

To Abul Khalam Azad and Jon Roper from my team — thank you for giving me the space and trust to explore and grow.
To Malsha Jayamaha, our Director — I appreciate your leadership and continuous encouragement.

And a big shoutout to Maduranga Gunasekara — my partner in crime, always stepping in, backing me up, and pushing forward as a true team player.

Thanks to Harinath Krishnakumar for being my guide and compass throughout this architect journey.

Thanks to Ash Shahata and Ying Zou for trusting me with the new initiatives especially the AI initiatives from the Architecture practice at Gallagher — an area I’m deeply passionate about.

I'm deeply grateful to Ramakrishnan Vedanarayanan, our AI guru, whose mentorship has been invaluable in continuously expanding my thinking and knowledge, especially in AI. He's an exceptional companion for intellectual exchange.

Let's not forget my pillars:
Hibathul Careem sir – my mentor, a constant source of wisdom and clarity.
Prabath Fonseka – my architect, who set the foundation.
Madu Ratnayake – whose leadership early in my career shaped my mindset.
Dinusha Kumarasiri – always there as a friend and motivator.

Taking the harder path was never easy. But with a team like this behind me, it was possible. Here’s to the journey ahead! Will give my 100%.

Team Gallagher Sri Lanka & Suhail Jamaldeen