
Large Language Model Limitations

Large Language Models (LLMs) are essentially digital systems equipped with billions of metaphorical dials, known as parameters. Through a process called training—where the model is fed vast amounts of text, books, and webpages—these dials are precision-tuned to specific orientations. When you give an LLM a prompt, it uses these settings to calculate the most statistically plausible next word in a sequence. Most models also include a "randomness" setting (often called temperature). This allows the machine to occasionally bypass the top choice in favor of the second or third most likely word, ensuring that the AI can provide creative and varied responses to the exact same prompt.
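The next-word mechanism described above can be sketched in a few lines. This is a minimal illustration of temperature sampling over toy scores, not an actual model: the logit values are invented, and a real LLM would produce scores over tens of thousands of tokens.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample a token index from raw model scores (logits).

    Higher temperature flattens the distribution, letting less
    likely tokens through; temperature near 0 approaches greedy
    decoding (always the top choice).
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy scores for three candidate next words
logits = [2.0, 1.0, 0.1]
print(sample_next_token(logits, temperature=0.7))
```

At a very low temperature the top-scoring word is chosen almost every time; at a high temperature the second and third choices appear regularly, which is why identical prompts can yield varied responses.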
01 // KNOWLEDGE MANAGEMENT

Drafting & Regulatory Summarization

The current strength of LLMs lies in "Architectural Knowledge Management" (AKM). They are highly effective at generating Architecture Decision Records (ADRs), documenting the context and consequences of design choices with accuracy comparable to human drafts. [1]

Furthermore, LLMs excel at ingesting complex regulatory texts—such as zoning ordinances and the IBC—answering queries like "What is the maximum height in Zone R-5?" significantly faster than manual search. [2]
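A query like the one above is typically answered by retrieving the most relevant clause before the model summarizes it. The sketch below shows that retrieval step with a naive word-overlap score; the zone names, heights, and setbacks are invented for illustration, and a production system would use embeddings over the actual ordinance text.

```python
# Hypothetical zoning clauses (values invented for illustration).
clauses = {
    "R-5": "Zone R-5: maximum building height 45 ft; setback 10 ft.",
    "C-2": "Zone C-2: maximum building height 85 ft; setback 5 ft.",
}

def retrieve(query: str) -> str:
    """Return the clause sharing the most words with the query."""
    terms = set(query.lower().replace("?", "").split())

    def score(text: str) -> int:
        words = set(text.lower().replace(";", " ").replace(":", " ").split())
        return len(terms & words)

    return max(clauses.values(), key=score)

print(retrieve("What is the maximum height in Zone R-5"))
```

The retrieved clause is then passed to the LLM as context, which is what makes the lookup fast relative to manual search through a full ordinance.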

Lastly, LLMs are generally a good research method; for example, Gemini's "deep research" mode can produce an in-depth, referenced report on almost any topic, such as a location study, new technologies, or legal advice.

02 // RESEARCH FRONTIER

Multi-Agent Reasoning Systems

Text-based AI systems are moving beyond the "chatbot" toward Multi-Agent Systems (MAS), in which specialized agents collaborate. In a compliance workflow, a "Planner" agent might interpret regulations while a "Code Expert" retrieves BIM data to execute logic checks. [3]

These systems allow for parallel execution—drafting narratives and verifying citations simultaneously—significantly speeding up complex reasoning tasks. [4]
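The parallel pattern can be sketched with a thread pool. The two agent functions here are stand-ins: a real system would call an LLM API inside each, and the topic string and IBC section numbers are placeholders, not verified references.

```python
from concurrent.futures import ThreadPoolExecutor

def draft_narrative(topic: str) -> str:
    """Stand-in for a drafting agent (would call an LLM in practice)."""
    return f"Draft narrative covering {topic}."

def verify_citations(refs: list) -> dict:
    """Stand-in for a citation-checking agent; pretends all refs pass."""
    return {ref: True for ref in refs}

# Submit both agent tasks at once; neither waits on the other.
with ThreadPoolExecutor() as pool:
    draft_future = pool.submit(draft_narrative, "fire egress compliance")
    cite_future = pool.submit(verify_citations, ["IBC 1006.2", "IBC 1017.1"])
    draft = draft_future.result()
    citations = cite_future.result()

print(draft)
print(citations)
```

Because drafting and verification run concurrently, total latency is bounded by the slower task rather than the sum of both, which is where the speed-up in complex reasoning workflows comes from.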

03 // THEORETICAL HORIZON

Autonomous Law & Semantic Geometry

Theoretically, LLM-based systems could handle the contractual phase of architecture, with autonomous "Legal Agents" negotiating Owner-Architect agreements in real-time. [4]

Additionally, multimodal LLMs suggest a future where models "read" floor plans as fluently as text, allowing architects to discuss spatial qualities (e.g., "too enclosed") that the model then translates into geometric modifications. [5]

04 // OPERATIONAL FLAWS

The Hallucination Hazard

The "Hallucination" problem is the Achilles' heel of professional practice. LLMs frequently invent fictitious court cases or building codes; relying on them can lead to findings of gross negligence. [6]

They also suffer from "Data Drift": confidently providing outdated information because they do not "know" about code updates that occurred after their training cutoff. [7]
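One practical safeguard is to flag any code section amended after the model's training cutoff. This is a minimal sketch of that check; the cutoff date, section numbers, and amendment dates are all assumed values for illustration, not real records.

```python
from datetime import date

# Assumed training cutoff for a hypothetical model.
TRAINING_CUTOFF = date(2023, 4, 1)

# Hypothetical amendment dates for two code sections.
code_amendments = {
    "IBC 1006.2": date(2024, 1, 1),
    "IBC 1017.1": date(2022, 6, 15),
}

def may_be_stale(section: str) -> bool:
    """Flag sections amended after the model's training cutoff."""
    amended = code_amendments.get(section)
    return amended is not None and amended > TRAINING_CUTOFF

for section in code_amendments:
    status = "possibly outdated" if may_be_stale(section) else "within cutoff"
    print(section, status)
```

A check like this does not fix drift, but it tells the practitioner which of the model's answers must be verified against the current code.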

05 // FUNDAMENTAL LIMITS

Absence of Intent & Ethics

An AI cannot seal a drawing. Legal liability rests on the "duty of care," and courts have so far ruled that responsibility cannot be transferred to a machine. [6]

Furthermore, AI cannot navigate ethical tensions—such as gentrification vs. development. These are value judgments requiring a human conscience, not optimization problems. [8]

References
[1] GPT-4 & Flan-T5 for Architectural Decision Records (ADRs). Ref [13] in source PDF.
[2] Automated Regulatory Summarization & Code Retrieval. Ref [2] in source PDF.
[3] Multi-Agent Systems (MAS) for Code Compliance. Ref [15] in source PDF.
[4] Parallel Execution & Self-Coding Systems in Architecture. Ref [16] in source PDF.
[5] Multimodal Tokenization & LLM-Grounded Diffusion. Ref [5] in source PDF.
[6] Legal Authorship, Liability, and Hallucination Risks. Ref [12] in source PDF.
[7] Data Drift and Obsolescence in Static Models. Ref [17] in source PDF.
[8] Phenomenological Understanding & Ethical Judgment. Ref [11] in source PDF.
[9] Automated Specifications & Force Multipliers. Ref [14] in source PDF.