[LLM] Setting Up a Local LLM Environment on Apple Silicon with MLX
Setting up a local LLM environment with MLX and Qwen 3.6 on a MacBook Pro M5 Pro and laying the groundwork for researching agent frameworks.
An overview of HyDE (Hypothetical Document Embeddings), a technique for improving retrieval quality in RAG.
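The core HyDE idea can be sketched in a few lines of plain Python. This is a toy illustration, not the post's implementation: the bag-of-words `embed` stands in for a real dense encoder, and `hypothetical_answer` stands in for text an LLM would generate from the question. The point it shows is that retrieval ranks documents against the embedding of the hypothetical answer rather than the raw question.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG stack would use a dense encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hyde_retrieve(hypothetical_answer: str, corpus: list[str]) -> str:
    # HyDE: embed an LLM-generated hypothetical answer (not the raw
    # question) and rank corpus documents against that embedding.
    q_vec = embed(hypothetical_answer)
    return max(corpus, key=lambda doc: cosine(q_vec, embed(doc)))

corpus = [
    "MLX is a machine learning framework for Apple silicon",
    "Ollama runs models locally on Windows",
]
# Pretend an LLM produced this answer to "What is MLX?"
best = hyde_retrieve(
    "MLX is an array framework for machine learning on Apple silicon",
    corpus,
)
```

Even with this crude embedding, the hypothetical answer shares far more vocabulary with the relevant document than the short question would, which is the intuition behind HyDE.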
An overview of LangChain’s LCEL, prompt templates, message classes, MessagesPlaceholder, and RunnableGenerator.
Using the Ollama Python library to connect to a remote LLM server, with generate, chat, and LangChain integration.
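A minimal sketch of the remote-connection pattern that post covers, assuming the `ollama` package is installed and a server is reachable; the host address and model name below are placeholders, not values from the post.

```python
def generate_remote(host: str, model: str, prompt: str) -> str:
    """One-shot generation against a remote Ollama server.

    Assumes the `ollama` Python package is installed and a server is
    listening at `host` (the address below is hypothetical)."""
    import ollama  # imported lazily so the sketch parses without the package
    client = ollama.Client(host=host)
    resp = client.generate(model=model, prompt=prompt)
    return resp["response"]

if __name__ == "__main__":
    # Placeholder LAN address and model; substitute your own server's values.
    print(generate_remote("http://192.168.0.10:11434", "qwen3:8b", "Hello"))
```

`Client(host=...)` is the piece that distinguishes this from the default local setup, which implicitly targets `http://localhost:11434`.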
Configuring the OLLAMA_HOST environment variable and Windows Firewall to access a local LLM from other devices on the same network.
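As a rough sketch of that setup on Windows: binding Ollama to all interfaces and opening its default port in the firewall might look like the following (run in an elevated prompt; the rule name is arbitrary and `11434` is Ollama's default port).

```shell
:: Make Ollama listen on all network interfaces, not just localhost
setx OLLAMA_HOST 0.0.0.0

:: Allow inbound traffic to Ollama's default port through Windows Firewall
netsh advfirewall firewall add rule name="Ollama" dir=in action=allow protocol=TCP localport=11434
```

After restarting Ollama, other devices on the same network can reach it at `http://<machine-ip>:11434`.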
A complete guide to installing Ollama and running the Qwen3:8B model locally on Windows 11.