Capturing, storing, retrieving, reasoning over, and governing knowledge is a discipline with over 40 years of foundations in AI research. A Knowledge-Based System (KBS) has a classical architecture: a knowledge base, an inference engine, and an interface. Knowledge Stack is a modern KBS — that same proven architecture built on modern components: search engines, vector databases, analytics engines, and LLM-based agents. The knowledge base and tools are universal and shared. The reasoning and interface are per-application. One engine. Many contexts.
12–18 months. Three teams. Most never finish.
Assemble retrieval, governance, audit, sovereignty, and PDPA subject rights from scratch. Real engineering across data, ML, and security. By the time it works, the LLM landscape has shifted twice.
Polished product. Sovereignty fails on day one.
Mature retrieval and chat UX. But your data leaves your boundary, your tenant lives on a shared cluster, and your compliance review surfaces the show-stopper before procurement finishes.
Easy adoption. Locked to one vendor's ecosystem.
Adoption works only as well as your knowledge fits inside that vendor's stack. Federating to your warehouse, enforcing your classification scheme, and surviving a vendor switch each become multi-quarter projects.
Knowledge Stack is one Knowledge-Based System, applied to many domains.
Infozense ships L1 and L2 — the data layer and the tool layer. L3 and L4 are your agent and application layers: bring your own, or start from our open reference implementations.
Reference deployments stand up a working application in weeks on top of Knowledge Stack rather than the 12–18 months a custom build typically takes.
PDPA Articles 30, 33, 34, 35, and 36 each have a dedicated endpoint. Hand auditors a working API, not a slideshow.
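For illustration, a minimal sketch of what calling such endpoints could look like; the host, paths, and subject identifier below are assumptions made for this sketch, not the shipped API:

```python
# Illustrative only: the host, paths, and identifiers are assumptions
# for this sketch, not the shipped Knowledge Stack API.
import requests

BASE = "https://ks.example.com/api/v1/pdpa"  # hypothetical endpoint root
AUTH = {"Authorization": "Bearer <token>"}

# Subject access request: export everything held on one data subject.
export = requests.get(f"{BASE}/article-30/subjects/SUBJ-1234/export", headers=AUTH)

# Erasure request: delete and receive an audit receipt for the action.
erase = requests.post(f"{BASE}/article-33/subjects/SUBJ-1234/erase", headers=AUTH)
print(export.status_code, erase.status_code)
```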
Confidential and internal classifications force local LLM routing automatically. No tenant config can override.
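A minimal sketch of that invariant; the model identifiers and routing function are illustrative, not the shipped policy engine:

```python
# Sketch of the routing rule; model names are illustrative assumptions.
LOCAL_ONLY = {"confidential", "internal"}

def route_llm(classification: str, tenant_preference: str) -> str:
    # Confidential/internal content pins to the local model; the tenant's
    # configured preference is ignored, so no config can override it.
    if classification.lower() in LOCAL_ONLY:
        return "local-llm"
    return tenant_preference

assert route_llm("confidential", "external-api") == "local-llm"
assert route_llm("public", "external-api") == "external-api"
```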
Bring your existing search, vector store, or warehouse — or use the Apache 2.0 components we ship. Every layer is replaceable. No proprietary glue.
Every capability maps to a layer in the architecture. The Knowledge Base (L1+L2) is shared; reasoning and interface (L3+L4) are per-application.
Full-text keyword search and semantic vector search in a single query. Find documents by exact terms or by meaning — whichever gets the best result.
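One common way to merge the two result lists is reciprocal rank fusion; the sketch below uses placeholder document IDs and does not claim this is the exact fusion Knowledge Stack applies:

```python
# Sketch of hybrid result merging via reciprocal rank fusion (RRF);
# the fusion method actually used is not specified here.
def rrf(keyword_ranked: list[str], vector_ranked: list[str], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranked in (keyword_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked):
            # Documents near the top of either list accumulate more score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# doc-2 and doc-7 appear in both lists, so they rise to the top.
print(rrf(["doc-7", "doc-2", "doc-9"], ["doc-2", "doc-4", "doc-7"]))
```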
Reads PDFs, Office documents, and scanned images (including OCR for scans). Auto-classifies sensitivity and document type, extracts text and tables, detects PII (Thai national ID, phone, email, credit card). Schema-driven extraction for structured forms — invoices, PO documents, and similar. Field-by-field extraction is configured per document template.
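A rule-based sketch of the PII detection step, with illustrative patterns only; the shipped detectors and their coverage are not shown here:

```python
# Sketch of rule-based PII detection; patterns are illustrative only.
import re

PII_PATTERNS = {
    # Thai national ID: 13 digits, optionally dash-separated (no checksum check here).
    "thai_national_id": re.compile(r"\b\d-?\d{4}-?\d{5}-?\d{2}-?\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone_th": re.compile(r"\b0\d{1,2}[- ]?\d{3}[- ]?\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def detect_pii(text: str) -> dict[str, list[str]]:
    found = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: hits for name, hits in found.items() if hits}

print(detect_pii("Contact somchai@example.co.th or 081-234-5678"))
```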
The agent layer retrieves documents from the Knowledge Base and composes grounded answers with citations. Each domain application configures its own RAG prompts and LLM.
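A minimal sketch of that retrieve-then-compose loop, with `search` and `llm` standing in for whichever engine and model a given application configures:

```python
# Minimal RAG sketch; `search` and `llm` are placeholders for whichever
# retrieval engine and model a given domain application configures.
def answer(question: str, search, llm) -> str:
    hits = search(question, top_k=5)  # L1/L2: retrieve from the Knowledge Base
    context = "\n\n".join(f"[{i + 1}] {h['text']}" for i, h in enumerate(hits))
    prompt = (
        "Answer using only the numbered sources below, citing them like [1].\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)  # L3: compose a grounded, citable answer
```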
Track usage patterns, identify knowledge gaps, and monitor search quality metrics. Understand what your teams are looking for — and what they cannot find.
Per-tenant access isolation, classification-driven content controls (public / internal / confidential / restricted), retention policies with auto-deletion, PII redaction, right-to-erasure, and complete audit trails. Built for Thai PDPA, BoT, and banking secrecy requirements.
Approval before publication, retention reminders, automated re-classification, archive and retire. Lifecycle automation specific to knowledge assets — built on the audit + classification foundation already in place.
Native Thai and English search and retrieval — with support for additional languages. Ask in Thai, find English documents — and vice versa. Built for Southeast Asian enterprise use.
Measure retrieval precision, content freshness, and knowledge coverage. Detect gaps and stale documents automatically — so quality improves over time.
Start with built-in vector search for millions of documents. Scale to a dedicated vector database for billions — without changing your application.
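The sketch below shows why the swap is application-transparent, assuming a hypothetical retriever interface: application code depends on the interface, and the backend behind it changes in configuration, not code.

```python
# Sketch only: the VectorIndex interface and its method signature are
# assumptions made for illustration, not the shipped abstraction.
from typing import Protocol

class VectorIndex(Protocol):
    def search(self, embedding: list[float], top_k: int) -> list[str]: ...

def retrieve(index: VectorIndex, embedding: list[float]) -> list[str]:
    # Application code depends only on this interface; moving from the
    # built-in index to a dedicated vector database is a config change.
    return index.search(embedding, top_k=10)
```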
Extract entities and relationships to build a knowledge graph — then reason across connections. Answer multi-hop questions like "How do these regulations interact?" instead of just "What does this document say?"
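A toy sketch of the multi-hop step over extracted triples; the documents and relations below are fictitious placeholders, not a real corpus:

```python
# Toy sketch of multi-hop traversal; triples are fictitious placeholders.
from collections import defaultdict

triples = [
    ("Circular A", "amends", "Circular B"),
    ("Circular B", "implements", "Act X"),
]
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def two_hop(start: str):
    # Hop 1: what does the starting document touch?
    for rel1, mid in graph[start]:
        yield (start, rel1, mid)
        # Hop 2: how do the touched documents interact in turn?
        for rel2, end in graph[mid]:
            yield (mid, rel2, end)

for s, r, o in two_hop("Circular A"):
    print(s, r, o)
```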
Every retrieval, query, and LLM call is logged with actor, tenant, timestamp, prompt hash, and routing decision. Compliance-ready by default. Show auditors what happened, not what the policy says should happen.
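A sketch of what one such record could contain, with illustrative field names; note the prompt is stored as a hash, so the trail is provable without leaking content:

```python
# Sketch of a single audit record; field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def audit_event(actor: str, tenant: str, prompt: str, route: str) -> str:
    event = {
        "actor": actor,
        "tenant": tenant,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash, not the prompt itself: provable without leaking content.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "routing_decision": route,
    }
    return json.dumps(event)

print(audit_event("user:somsri", "tenant-a", "example prompt", "local-llm"))
```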
Per-tenant routing keeps confidential data on local infrastructure. Classification-driven LLM routing enforces "TH-only" and "internal-only" data residency. PII redacted before any external API call. Built for Thai regulatory requirements.
One Knowledge Stack, many domain applications. Each card below is a different context — same engines, same tool layer, different corpus.
“Which suppliers are at risk this quarter?”
POs, shipments, supplier catalogs, contracts
See supply chain application →
“What does this sensor reading mean for uptime?”
Sensor manuals, device specs, telemetry
See IoT application →
“What is happening on this parcel of land?”
Cadastral records, GIS layers, satellite imagery
See geospatial application →
“Who on staff has the skills for this project?”
Job descriptions, resumes, training materials
See skills application →
“Why did SET50 move this week?”
Filings, factor tables, news, Zignal weeklies
“How do these BoT circulars interact?”
Laws, circulars, rulings, public consultations
“Where in these textbooks is this concept covered?”
Textbooks, lesson plans, study guides, exam materials
“Have we signed something like this before?”
Contracts, agreements, case files, legal opinions
“What does the guideline say for this patient case?”
Drug databases, clinical guidelines
Engines, tool layer, schemas, tenancy, governance.
Registered assets, reference agent prompts, UI style.
Knowledge Stack runs anywhere and uses any LLM. Pick one option from each column.
Your cloud or on-premise. Full control over data and security.
Best for: regulated industries and data sovereignty requirements.
We host, operate, and optimize it for you.
Best for: teams that want results without managing infrastructure.
Inference runs on your hardware. Data never leaves your boundary.
Best for: regulated industries, PDPA / BoT scope, data residency.
Inference runs on a vendor API — your guardrail inspects every request, redacts PII, and enforces data classification rules before anything leaves your boundary.
Best for: offloading specific tasks that exceed local-model capability — frontier-model power, used selectively.
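A sketch of that guardrail gate, with illustrative names and a single redaction rule shown; the shipped policy engine is not reproduced here:

```python
# Sketch of the guardrail gate in front of a vendor LLM API;
# names and the redaction rule are illustrative assumptions.
import re

EXTERNAL_FORBIDDEN = {"internal", "confidential", "restricted"}
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def guarded_external_call(prompt: str, classification: str, send):
    # 1. Classification gate: sensitive content never leaves the boundary.
    if classification.lower() in EXTERNAL_FORBIDDEN:
        raise PermissionError("classification forbids external inference")
    # 2. PII redaction before egress (one illustrative rule shown).
    prompt = EMAIL.sub("<email>", prompt)
    # 3. Only then does the request reach the vendor API.
    return send(prompt)

print(guarded_external_call("Summarize for somchai@example.co.th", "public",
                            send=lambda p: p))
```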
Knowledge Stack is software, but a successful deployment is people. Infozense Forward-Deployed Engineers embed in your team for the first 8–16 weeks: discovery, ingestion design, policy authoring, integration with your existing IdP and observability stack, and training your operators. Hand-off includes a phased rollout plan and 90 days of post-launch support.