## DDG Search


### **From Go Code to GitOps: A Deep Dive into the `ddg-search` API**
---
  
For a portfolio, a project must demonstrate more than just the ability to write code; it must showcase a complete, end‑to‑end engineering solution. That was my goal with **`ddg-search`**: a simple yet full‑featured API, designed, built, and automated to production‑grade standards. This article details its key features and the DevOps engine that brings it to life.  

![](https://blog.bnei.dev/assets/images/n1bm9Y877sMNmDOpwO0ac.webp)

#### **Core API Features and Professional‑grade Characteristics**  
`ddg-search` is a lightweight REST API that acts as a proxy for DuckDuckGo search requests. Beyond this basic function, it is architected with the hallmarks of a modern production‑ready service:

1. **OpenAPI Specification** – The API is fully documented via OpenAPI (Swagger), providing clear, interactive documentation for any consumer and establishing a professional contract for its use.  
2. **Unique Feature** – A query parameter `?scrape=true` extends the API’s utility, allowing it to convert search results directly into a clean Markdown document.  
3. **Built‑in Security & Control** – The service is not left open to the public; essential security features are enabled by default:  
   * **Authentication** – The endpoint is protected by basic authentication.  
   * **Rate Limiting** – Configurable request throttling prevents abuse.  
4. **Observability** – All requests pass through middleware that emits structured JSON logs, capturing status, latency, and request path so the API’s behavior is immediately observable.

#### **The DevOps Engine: From Commit to Cluster**  
A modern API is inseparable from the automation that runs it. `ddg-search` follows a complete GitOps lifecycle, where the Git repository is the single source of truth for the entire system.

1. **Containerisation** – The application is packaged with a multi‑stage `Dockerfile`, producing a minimal, secure image that contains only the compiled Go binary.  
2. **Native Kubernetes Deployment** – Deployment manifests are managed declaratively with **Kustomize**, cleanly separating base configuration from environment‑specific overlays, making deployments predictable and repeatable.  
3. **Continuous Integration (CI)** – A robust **GitHub Actions** pipeline automates the entire pre‑deployment process on every merge to `main`:  
   * Linting and tests to ensure code quality.  
   * Semantic versioning derived from commit messages.  
   * Automatic generation of `CHANGELOG.md`.  
   * Build and push of a version‑tagged Docker image to a container registry.  
4. **Continuous Deployment (CD) with GitOps** – The loop is closed with **Argo CD**. An Argo CD *Application* watches the `ddg-search` repo; when CI pushes a new manifest (with a new image tag), Argo CD detects the change and automatically synchronises the Kubernetes deployment to match the state defined in Git.
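To make the base/overlay split concrete, a production overlay might look like the following. The file layout and image tag are hypothetical; the real repository may be organised differently.

```yaml
# overlays/prod/kustomization.yaml (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: ddg-search   # image name referenced in the base Deployment
    newTag: v1.4.2     # CI rewrites this tag on each release; Argo CD syncs it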

#### **Conclusion – A Full‑stack Engineering Showcase**  
`ddg-search` is a practical demonstration of a complete software product. It combines a clean, well‑documented API with a mature, fully automated CI/CD pipeline, providing a reproducible process for delivering reliable software.

---

## Deployment Monitor Operator

### **Proactive Observability for Kubernetes Deployments: An Operator for Change Notifications** 
---

This Kubernetes Operator provides an automated, proactive monitoring solution for `Deployment` resources across all namespaces. It dramatically improves cluster observability by detecting specific changes and notifying stakeholders—a critical capability for maintaining operational integrity and reacting instantly to infrastructure events.

### **Key Features and Technical Highlights**  

1. **Declarative Monitoring via CRD** – Users define monitoring rules (target annotations/labels and recipient email addresses) through a custom `DeploymentMonitor` resource, leveraging Kubernetes’ native extensibility for clear, version‑controlled configuration.  
2. **Cluster‑wide Change Detection** – The operator continuously watches every `Deployment`. When a change in the spec or status of a monitored deployment is detected, it is immediately flagged.  
3. **Secure Email Notification** – Upon detecting a relevant change, the operator sends an email alert to predefined recipients. SMTP credentials are securely stored in Kubernetes Secrets, preserving credential confidentiality.  
4. **Built with Kubebuilder** – Developed using Kubebuilder, the operator follows standard Kubernetes best practices, delivering a robust, extensible, and maintainable monitoring solution.

### **Operational Flow**  

* **Configuration** – A `DeploymentMonitor` CR specifies which `Deployments` to watch and where to send notifications, referencing a securely stored SMTP configuration in a Kubernetes Secret.  
* **Continuous Reconciliation** – The operator’s controller constantly reconciles `DeploymentMonitor` objects with the actual state of `Deployments` throughout the cluster.  
* **Alerting** – Any deviation that matches the criteria defined in a `DeploymentMonitor` triggers a formatted email notification.
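A `DeploymentMonitor` resource tying these steps together might look roughly like this. The API group and field names are illustrative, not the operator's actual schema:

```yaml
apiVersion: monitoring.example.com/v1alpha1   # illustrative group/version
kind: DeploymentMonitor
metadata:
  name: payments-watch
spec:
  matchLabels:
    team: payments            # watch Deployments carrying this label
  recipients:
    - oncall@example.com
  smtpSecretRef:
    name: smtp-credentials    # Secret holding SMTP host, username, password
```

Because the rules live in a CR, the monitoring configuration is version-controlled and reviewable like any other manifest.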

### **Deployment & Usage (Key Steps)**  

* **Prerequisites** – Standard Kubernetes tools (kubectl, Docker), a Go runtime, and an accessible image registry.  
* **Deploy** – Build and push the operator’s Docker image, then apply the deployment manifest to your cluster.  
* **Configure** – Create a Kubernetes Secret containing your SMTP credentials and define `DeploymentMonitor` CRs for the deployments you wish to monitor.  
* **Validate** – Create or modify a `Deployment` that matches a monitor’s criteria and observe the generated email alerts and operator logs.  

This project demonstrates deep expertise in building Kubernetes Operators, defining custom resources, securely handling secrets, and delivering robust, automated solutions for cloud‑native environments.  

---

## Dreamer Journal

### **Decoding the Subconscious: A Deep Dive into the Dream Analyst Application**
---

In an era where introspection is increasingly valued, the Dream Analyst application emerges as a sophisticated tool for personal exploration. Built with SvelteKit, it streamlines the capture, analysis, and understanding of dream narratives through a modern large language model.

### **Core Functionality – The Main Feature of Dream Analyst**  
Dream Analyst serves as a robust platform for journaling and interpreting dreams. It leverages a Large Language Model (LLM) to provide multiple interpretations and to extract symbolic tags from user‑submitted dream entries. The primary goal is to give users a fluid interface for meticulous dream logging and to surface deep insights about their subconscious processes.

#### **Key Features & Architectural Strengths**  

1. **Simplified Dream Ingestion** – Users can effortlessly record their dreams via an intuitive text input or through an integrated voice‑to‑text feature (Web Speech API), ensuring accessibility and ease of use.  
2. **AI‑Powered Advanced Interpretation** – After submission, the AI engine processes the dream text, delivering a detailed interpretive reading and an indexed list of symbolic tags (comma‑separated). Users can choose between several analytical paradigms:  
   * **Jungian Analysis** – Interprets dreams through archetypes, the shadow self, anima/animus, and the collective unconscious.  
   * **Freudian Analysis** – Focuses on decoding repressed desires and underlying unconscious conflicts.  
   * **Simple Interpretation** – Provides a concise, easy‑to‑understand overview of dream themes.  
   * **Islamic Interpretation** – Offers readings grounded in the established Islamic hermeneutics of dreams.  
3. **Interactive AI Dialogue** – Beyond the initial analysis, users can engage in a dynamic conversational interface with the AI analyst, enabling deeper thematic exploration, targeted follow‑up queries, and extraction of additional nuances and insights.  
4. **Secure Data Vault** – All submitted dream data are encrypted and stored in a personalized digital vault, meticulously indexed by date, guaranteeing persistent access for longitudinal personal review.  
5. **Resource & Credit Management System** – To ensure fair resource distribution and operational stability, the app implements a credit‑based system governing AI analyses and interactive chat sessions. Users receive a predefined daily credit quota for these compute‑intensive features.  
6. **Granular Tag Management** – Extracted tags are displayed as clickable, interactive bullet points. Future iterations will add advanced capabilities such as redundancy highlighting (identifying recurring themes) and progression scoring for longitudinal tag analysis.  
7. **User Data Visualization & Indexing** – A chronological view of historic dream entries is provided, complemented by robust search and filtering capabilities for efficient retrieval.  
8. **Longitudinal Data Analysis (Roadmap)** – Upcoming enhancements include advanced visualisation tools (timelines, heatmaps) and comprehensive reporting modules to map dream patterns and track personal progress over time.  

---

## Voc On Steroid

### **Vocabulary‑Learning Platform**
*Full‑stack Go monolith (Clean Architecture)*  
---  

A web application that lets users discover, organise, and master new words through instant search, contextual examples, spaced repetition, and gamified challenges. The codebase is deliberately structured for long‑term testability and maintainability, and it runs on a production‑grade Kubernetes platform that I also built and operate.

### Technical Strengths  

| Category | Details |
|----------|---------|
| **Language & Framework** | **Go** + **Gin** for HTTP routing, **gqlgen** for a **GraphQL** API (`/graphql`). |
| **Architecture** | Clean Architecture (Domain ↔ Application ↔ Infrastructure). Business rules live in `internal/domain`; use‑case orchestration and CQRS handlers in `internal/application`; external adapters (PostgreSQL, Redis, JWT) in `internal/infrastructure`. |
| **CQRS & Event Bus** | In‑memory bus powered by **go‑dew** – commands (e.g., `CreateWordListCommand`) mutate state, queries (e.g., `GetWordListQuery`) read data. |
| **Persistence** | **PostgreSQL** via **GORM** for relational data; **Redis** for caching / session storage. |
| **Generic Repository** | `generic.Repository[T,F]` defines CRUD; `GormGenericRepository` provides type‑safe implementations for all domain entities. |
| **Dependency Injection** | Compile‑time wiring with **Google Wire**, making services easily replaceable in tests. |
| **Authentication** | **JWT** – short‑lived access token stored in an HTTP‑only cookie; long‑lived refresh token stored in the DB. Gin middleware extracts the token and injects `userID` into the request context. |
| **Logging** | Structured logging with **Zerolog** (JSON output, request IDs). |
| **Testing** | Unit & integration tests using **GoMock**, table‑driven test patterns, and in‑memory repositories for fast CI execution. |
| **Observability** | Metrics exported via **Prometheus**, traces via **OpenTelemetry**, and logs aggregated in a cluster‑wide stack (Prometheus + Grafana + Loki + Jaeger). |
| **Deployment Platform** | Runs on a self‑managed 3‑node HA **Kubernetes** cluster (MetalLB, Cilium CNI) with a **GitOps** workflow powered by **Argo CD**. Deployment time dropped from 1–2 h to < 10 min and failure rate reduced from ~40 % to < 5 %. |

### User‑Facing Core Features  

1. **Instant Word Search** – definitions, etymology, phonetic transcription, and audio playback; toggle between simple and detailed views.  
2. **Custom Lists & Tagging** – one‑click saving into user‑defined collections (e.g., *Philosophy Terms*) with label‑based tags for targeted review.  
3. **Contextual Examples** – literature, news, film excerpts, and community‑submitted quotes, filterable by genre or culture.  
4. **Gamified Daily Challenges** – adaptive quizzes (definition guess, fill‑in‑the‑blank, synonym matching) that award badges and update mastery graphs.  
5. **Spaced‑Repetition Engine** – automated review reminders at optimal intervals; users can override schedules for intensive training.  

### Operational Flow  

1. **Request → Gin Router** – Incoming HTTP (or GraphQL) request passes through JWT middleware.  
2. **Resolver → go‑dew Bus** – Light GraphQL resolvers validate input and publish a command or query onto the in‑memory bus.  
3. **Handler → Domain Logic** – Appropriate handler executes business rules in the `application` layer, invoking generic repositories as needed.  
4. **Infrastructure → DB / Cache** – GORM reads/writes PostgreSQL; Redis serves cached lookups and session data.  
5. **Observability Hooks** – Custom OpenTelemetry spans and Prometheus counters are emitted for each command/query, feeding the cluster‑wide monitoring stack.  

**Why it matters** – The project demonstrates end‑to‑end mastery of modern Go engineering (clean architecture, CQRS, DI), production‑grade Kubernetes operations (GitOps, observability, rapid deployments), and a user‑centric product that turns vocabulary learning into an engaging, data‑driven experience.  

---

## Editable blog


### **A Lightweight, Self‑Hosted, Multilingual CMS**

---

### 1. The Problem  
I wanted a personal publishing platform that would let me write a post once, have it automatically translated, share it on LinkedIn, and schedule the publication – all without juggling multiple tools or a heavyweight CMS. The solution had to run on my own Kubernetes node, be fast for visitors, and keep the editorial workflow friction‑less.

### 2. Core Architecture  
| Component | Choice & Rationale |
|-----------|--------------------|
| **Frontend / rendering** | **SvelteKit** with Server‑Side Rendering (SSR). Because most pages are static, SSR delivers sub‑second first‑byte times without a separate build step. |
| **Database** | **PostgreSQL** – stores markdown content, inline images/PDFs (as BLOBs), version metadata and LinkedIn post settings. |
| **Automation / translation** | **n8n** workflows triggered from the UI. A post is sent to n8n, which calls a translation service, stores the localized versions, and queues a LinkedIn payload. |
| **Containerisation** | **Docker** images built per commit; the image includes the SvelteKit app and the n8n runner. |
| **Orchestration** | **Kubernetes** with **Kustomize** overlays for dev / prod environments. The cluster is the same one I already operate for other side‑projects (see my personal cloud platform). |
| **CI/CD** | **GitHub Actions** + **release‑it**. A push to `main` runs lint, unit tests, creates a new semantic version, builds the Docker image and updates the Kustomize `kustomization.yaml`. Argo CD (or a `kubectl apply`) then rolls the new release to the cluster. |
| **Versioning** | Within the app each article has a `version` field (draft → review → published). The overall app version is bumped automatically by **release‑it** during the CI run. |
| **Scheduling** | A cron‑job inside the cluster calls the n8n endpoint every 48 h, publishing any queued LinkedIn posts. |
| **Image/PDF handling** | Uploaded files are streamed directly into PostgreSQL, avoiding an external object store and keeping the deployment truly self‑hosted. |
| **Image scanning (PoC)** | A small Go service (outside the scope of the blog) watches uploaded assets and runs a quick scan for prohibited content – a proof‑of‑concept for future moderation. |

### 3. How Automation Reduces Friction  
1. **Write → Translate** – After saving a draft, the UI calls `/api/translate`. n8n fetches the markdown, sends it to a translation API, and writes back the localized copies.  
2. **Translate → LinkedIn** – The same workflow creates a LinkedIn payload (title, excerpt, image URL) and stores it in the DB.  
3. **Cron → Publish** – Every two days a Kubernetes `CronJob` triggers the n8n “post to LinkedIn” workflow, pulling any ready payloads and sending them to LinkedIn’s API.  
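The cron step could be expressed as a Kubernetes `CronJob` along these lines. The namespace, webhook path, and exact schedule are illustrative, not the deployed manifest:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: linkedin-publisher
spec:
  # "*/2" on day-of-month approximates "every 48 h"; it resets at month
  # boundaries, which is acceptable for this use case.
  schedule: "0 9 */2 * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: trigger
              image: curlimages/curl:8.7.1
              args: ["-fsS", "-X", "POST", "http://n8n.blog.svc/webhook/publish-linkedin"]
          restartPolicy: OnFailure
```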

All steps happen with a single button click in the admin UI; no manual copy‑pasting or external script execution is required.

### 4. Performance & Reliability  
- **SSR speed** – Because the majority of content is static markdown, the SvelteKit server returns HTML in < 200 ms for typical pages (no client‑side hydration lag).  
- **Zero‑downtime deployments** – The CI pipeline builds a new image, updates the Kustomize manifest, and Argo CD (or `kubectl rollout`) performs a rolling update. My existing GitOps workflow on the same cluster cuts deployment time from hours to minutes.  
- **Self‑contained storage** – Images and PDFs live in PostgreSQL, eliminating network latency to an external bucket and simplifying backups.

### 5. Challenges & Solutions  
| Challenge | Solution |
|-----------|----------|
| **Concurrent edits** – Two editors could modify the same article. | Added an optimistic‑locking `version` field; the UI warns the user if the DB version changed since they loaded the draft. |
| **Large media uploads** – Storing PDFs in PostgreSQL could bloat the DB. | Implemented streaming inserts and set a 5 MB size limit per file; larger assets are rejected with a clear UI message. |
| **Translation latency** – External translation APIs can be slow. | n8n runs the translation asynchronously; the UI displays a “translation in progress” badge and updates automatically when the job finishes. |

### 6. Current Status & Next Steps  
- The prototype is fully functional on my personal Kubernetes cluster (Docker, K8s, Kustomize).  
- No production‑grade metrics yet, but early testing shows the end‑to‑end publish flow completes in under 30 seconds.  
- **Future work**: add analytics (page‑view counters), implement image‑optimisation pipelines, and expose a public demo for community feedback.

### 7. Why This Project Matters to Recruiters & Tech Leads  
- **End‑to‑end cloud‑native delivery** – From source to running pod, the whole stack (Docker → Kustomize → CI/CD) is automated, mirroring enterprise GitOps practices.  
- **Multilingual & workflow automation** – Shows ability to integrate SaaS APIs (translation, LinkedIn) via n8n, a skill in high demand for modern content pipelines.  
- **Performance‑first design** – SSR with SvelteKit delivers fast page loads without a separate static site generator, demonstrating practical front‑end optimisation.  
- **Self‑hosting mindset** – Keeps everything in‑house (PostgreSQL storage, Kubernetes cluster), aligning with security‑first organisations.    

---

## Go AI cli


### A versatile Go‑based command‑line interface for interacting with AI models (text generation, speech‑to‑text, image generation)
--- 

- **Primary purpose:** Enable developers & content creators to call AI services directly from the terminal.  
- **Core features**  
  - Text generation via OpenAI GPT‑3 (or any configured model)  
  - Speech‑to‑text conversion  
  - Image generation from prompts  
  - Extensible plug‑in architecture (e.g., web‑search agent)  
- **UI framework:** Built with **Charm Bubble Tea** to provide a smooth, interactive terminal UI (REPL, shortcut hints, dynamic updates).  
- **Tech stack** – Go 1.20, Cobra CLI framework, Viper for configuration, PromptUI, Docker (multi‑stage builds), GitHub Actions + GoReleaser for CI/CD, optional `portaudio` tag for audio support.  
- **Installation** – `go install github.com/MohammadBnei/go-ai-cli@latest` (or with `-tags portaudio`); pre‑compiled binaries available on the Releases page.  
- **Configuration** – API key & model stored in `$HOME/.go-ai-cli.yaml`; CLI helpers to list/set values.  
- **Usage example** – `go-ai-cli prompt` launches an interactive REPL; shortcuts: `Ctrl-D` (quit), `Ctrl-H` (help), `Ctrl-G` (options), `Ctrl-F` (attach file).  
- **Open‑source stats** – ★ 6, 🍴 2, 3 branches, 37 tags; latest release 0.18.8 (18 Mar 2024).  
- **License** – MIT.  

---

## Go Realtime Chat

**Realtime Chat Server (Tutorial → Full‑Featured Implementation)** – A production‑ready Go backend that provides real‑time, room‑based chat with multiple adapters (HTML + Gin, REST, gRPC).  

- **Goal** – Turn a step‑by‑step tutorial into a complete, extensible chat service that can handle thousands of simultaneous messages with low latency.  
- **Broadcast layer** – Defined a `Broadcaster` interface (`Register`, `Unregister`, `Close`, `Submit`). Implemented a thread‑safe broadcaster using channels (`input`, `reg`, `unreg`, `outputs`) and a dedicated goroutine that selects over these channels, guaranteeing race‑free message distribution.  
- **Room manager** – Created a `Manager` interface for opening/closing listeners, submitting messages, and deleting rooms. Each room gets its own `Broadcaster`; the manager coordinates actions through buffered channels (`open`, `close`, `delete`, `messages`).  
- **Singleton pattern** – Exposed the manager via `GetRoomManager()`, ensuring a single shared instance across the whole application and running its `run()` loop in the background.  
- **Adapters**  
  - **HTML + Gin adapter** – Renders a simple HTML template, registers routes for creating rooms, posting messages, deleting rooms, and streaming messages via Server‑Sent Events (SSE).  
  - **REST adapter (implemented)** – Provides `GET /stream/:roomId` (SSE) and `POST /submit` endpoints for clients that prefer a pure JSON API.  
  - **gRPC adapter (optional)** – Structured for future extension, exposing the same `Submit` and `Stream` RPCs.  
- **Concurrency model** – Pure Go channels and goroutines enable thousands of messages per second while keeping registration/unregistration race‑free; shutdown happens in a controlled order, so no goroutine ever sends on a closed channel and panics.  
- **Startup** – A single `main.go` wires everything together: initializes the singleton manager, creates the Gin HTML adapter, registers all routes, and launches the server on port 8080 (`router.Run(":8080")`).  
- **Technologies** – Go 1.20, Gin‑Gonic, HTML templates, Server‑Sent Events, gRPC (planned), channels & goroutines, singleton pattern, RESTful design.  
- **Result** – A fully functional, production‑grade real‑time chat backend that showcases best practices for Go concurrency, clean architecture, and multi‑adapter exposure, ready to be extended with additional front‑ends or scaling layers.

---

## Self‑Hosted HA Kubernetes Cluster


*Home‑grown, production‑grade Kubernetes platform built from the ground up*
---  

![ha-cluster.jpg](https://blog.bnei.dev/assets/images/Z3XlgbvxxNbtVhIrME5yc.webp)


**Project Synopsis**  
Designed, provisioned, and operated a high‑availability (HA) Kubernetes cluster on two legacy machines (an i5 desktop with 32 GB RAM and an i7 laptop with 16 GB RAM). The whole stack is managed with Infrastructure‑as‑Code (IaC), GitOps, and modern cloud‑native tooling, turning a home lab into a fully functional private cloud that supports dozens of services (Traefik, NFS storage, Argo CD, Infisical, etc.).  

**Key Highlights**  

| Area | What I Did | Tools & Techniques | Impact |
|------|------------|--------------------|--------|
| **Hardware & OS** | Repurposed old hardware, installed **Debian 12**, tuned SSH (custom port, key‑only auth, X11 forwarding) | Debian, Zsh + Oh‑My‑Zsh, Tmux | Secure, low‑overhead foundation for the cluster. |
| **Virtualisation** | Created **4 VMs** (2 per host) with **2 vCPUs** and **8 GiB RAM** each, using **bridged networking** so each VM gets its own IP. | KVM, libvirt, Vagrant (Ruby DSL) | Fast provisioning, isolated environments for control‑plane and worker nodes. |
| **Cluster Bootstrap** | Replaced manual `kubeadm` steps with **Kubespray** (Ansible) to install a **2‑node control plane** + **3 etcd replicas**. | Kubespray, Ansible, Cert‑manager, Cilium CNI, MetalLB (layer‑2) | Fully HA control plane, automatic node scaling, reliable service IP allocation. |
| **Load Balancing & Ingress** | Deployed **MetalLB** (layer‑2) as a bare‑metal load balancer, limited to the control‑plane nodes via node selectors. | MetalLB v0.13.9, custom address pool 192.168.xx.10‑50 | Provides stable external IPs for all services without cloud LB. |
| **Storage** | Set up an **NFS share** on the primary host and installed the **kubernetes‑nfs‑provisioner** as the default StorageClass. | NFS, nfs‑subdir‑external‑provisioner | Persistent volumes shared across all worker nodes. |
| **Secrets Management** | Integrated **Infisical** via its Kubernetes operator to inject secrets as env variables or files. | Infisical operator, PostgreSQL backend | Centralised, secure secret handling for all apps. |
| **GitOps & CI/CD** | Installed **Argo CD** (Helm) to sync manifests from GitHub, using **Kustomize** for versioning. Fixed sync issues caused by Infisical polling. | Argo CD, Kustomize, GitHub Actions | Zero‑touch deployments; rollout time dropped from hours to minutes. |
| **Ingress Router** | Adopted **Traefik** (Helm) with **IngressRoute CRDs** for URL‑based routing; roadmap to **Gateway API**. | Traefik, Helm | Flexible, per‑domain routing with automatic TLS (future). |
| **Observability (planned)** | Future integration of **OpenTelemetry**, **Prometheus**, **Grafana**, **Jaeger**, and **Checkov** for security scanning. | — | Will provide full telemetry and compliance. |
| **Documentation & Automation** | Authored a detailed tutorial (the source of this description) and automated repeatable builds; CI pipeline reduces cluster rebuild from > 3 h to < 2 min. | Markdown, Mermaid diagrams, CI scripts | Knowledge transfer and rapid recovery. |
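Since MetalLB v0.13, layer-2 pools are declared via CRDs rather than a ConfigMap. An illustrative configuration (the address range here is a placeholder, not my actual pool):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.10-192.168.1.50   # substitute your own LAN range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```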

**Why This Project Stands Out**  

- **HA Control Plane** – 2 control‑plane nodes + 3 etcd members guarantee quorum and resilience.  
- **Pure‑IaC Lifecycle** – Vagrant + Kubespray + Ansible allow a declarative, version‑controlled infrastructure.  
- **GitOps‑First** – Argo CD continuously reconciles the desired state, cutting deployment time dramatically.  
- **Production‑Ready Services** – Traefik, MetalLB, NFS, and Infisical give the cluster all the plumbing needed for real workloads.  
- **Self‑Hosted Philosophy** – Shows mastery of end‑to‑end cloud stack without relying on any public provider, aligning with modern “edge‑cloud” trends.  
