What is Docker Offload and What Problem Does It Solve? Docker Offload extends your local Docker workflow to cloud GPU infrastructure: the same commands you already use, backed by cloud-scale performance.
MCP OAuth Integration: Secure GitHub Authentication for Docker Toolkit Docker MCP Toolkit adds OAuth support and streamlined integration with GitHub and VS Code.
Running Docker MCP Gateway in a Docker container The MCP Gateway is Docker's solution for securely orchestrating and managing Model Context Protocol (MCP) servers locally and in production including enterprise environments.
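Running the gateway inside a container can be sketched roughly as follows; the image name (`docker/mcp-gateway`), port, and flags here are assumptions that may differ by release, so check the image's documentation before use:

```shell
# Hedged sketch: run the MCP Gateway as a container, mounting the Docker
# socket so it can launch MCP servers as sibling containers.
docker run -d --name mcp-gateway \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/mcp-gateway \
  --port 8080
```

Mounting the Docker socket grants the gateway broad control over the host's Docker daemon, which is why this pattern is usually reserved for trusted, locked-down environments.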
Docker MCP CLI Commands and Cheatsheet The MCP Catalog currently includes more than 120 verified, containerized tools, with hundreds more on the way. Docker's MCP Catalog now offers an improved experience, making it easier to search, discover, and identify the right MCP servers for your workflows, all from the CLI.
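A few representative `docker mcp` invocations give a feel for the workflow; exact subcommand names and the server identifier used here are assumptions that may vary across Toolkit versions:

```shell
# Hedged sketch of common MCP Toolkit CLI tasks:
docker mcp catalog ls                 # list available MCP catalogs
docker mcp server enable github-official   # enable a server from the catalog (name assumed)
docker mcp tools list                 # list tools exposed by enabled servers
docker mcp gateway run                # start the gateway for connected MCP clients
```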
How to Handle CORS Settings in Docker Model Runner Cross-Origin Resource Sharing (CORS) is a security feature implemented by web browsers to control how web applications can request resources from different domains, ports, or protocols. At its core, CORS enforces the Same-Origin Policy, a fundamental web security concept that prevents a page served from one origin from accessing resources on another.
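You can simulate a browser's CORS preflight against Model Runner's host endpoint with curl. This is a sketch: it assumes host-side TCP access is enabled on port 12434 and that a web app at `http://localhost:3000` is the origin you want to allow:

```shell
# Replay the OPTIONS preflight a browser would send before a cross-origin POST:
curl -i -X OPTIONS http://localhost:12434/engines/v1/chat/completions \
  -H "Origin: http://localhost:3000" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: content-type"
# If the origin is allowed, the response carries an Access-Control-Allow-Origin
# header; if it is absent, the browser will block the real request.
```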
Why SLMs Will Replace LLMs in Agent Architectures Discover why small language models (SLMs) are becoming the future of agentic AI systems, offering superior efficiency, cost reduction, and operational benefits over large language models (LLMs) in enterprise deployments.
How to Set Up Gemini CLI + Docker MCP Toolkit for AI-Assisted Development Learn how to set up Gemini CLI with Docker MCP Toolkit for powerful AI-assisted development. A complete guide with step-by-step instructions, benefits, and real-world examples.
Docker Desktop 4.42: llama.cpp Gets Streaming and Tool Calling Support Docker Desktop 4.42 brings real-time streaming and tool calling to Model Runner, transforming how developers build AI applications locally. No more cloud dependencies or waiting for complete responses: watch AI generate content token by token, with full GPU acceleration, on port 12434.
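Token-by-token streaming can be exercised against Model Runner's OpenAI-compatible endpoint. The model name (`ai/smollm2`) is an assumption for illustration; substitute any model you have pulled with `docker model pull`:

```shell
# Hedged sketch: request a streamed completion; -N disables curl buffering
# so server-sent chunks appear as they are generated.
curl -N http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2",
        "stream": true,
        "messages": [{"role": "user", "content": "Write a haiku about containers"}]
      }'
```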
How to Connect n8n with Docker MCP Toolkit: A Simple HTTP Bridge Solution Solving the n8n community's #1 integration challenge with a practical HTTP bridge solution
Docker Model Runner Tutorial and Cheatsheet: Mac, Windows and Linux Support Whether you're building generative AI applications, experimenting with machine learning workflows, or integrating AI into your software development lifecycle, Docker Model Runner provides a consistent, secure, and efficient way to work with AI models locally.
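The day-to-day Model Runner loop comes down to a handful of commands, which are the same on Mac, Windows, and Linux once the feature is enabled; the model name below is an assumption for illustration:

```shell
# Core docker model commands:
docker model pull ai/smollm2        # download a model from Docker Hub's ai/ namespace
docker model list                   # show locally available models
docker model run ai/smollm2 "Hi"    # send a one-shot prompt to the model
docker model rm ai/smollm2          # remove the model to reclaim disk space
```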
🔍 From Prompt Engineering to Context Engineering The shift from prompt engineering to context engineering reflects a broader evolution in how we design, manage, and optimize interactions with large language models (LLMs). While prompt engineering was once hailed as a core skill for leveraging LLMs, the future lies in how we structure and engineer the context in which these models operate.
Collabnix AI Weekly - Edition 2 Your weekly digest of Cloud-Native AI and Model Context Protocol innovations.
When Should Your GenAI App Use GPU vs CPU? A Docker Model Runner Guide The new Docker Model Runner can deliver 5-10x speed improvements for AI workloads on MacBooks. Check it out.
Docker Desktop: The Infrastructure Foundation for Agentic AI How Docker is solving the packaging and security challenges that will define the next generation of intelligent agents