Software Development Lifecycle¶
Last Updated: February 28, 2026
Framework Alignment: Design for Six Sigma (DFSS) — Design for Six Sigma in Technology and Product Development (Creveling, Slutsky & Antis, 2003), Ch. 1–5 (CDOV process), Ch. 12 (DFMEA), Ch. 15 (Robust Design)
Status: Active — validated through 21 bugs across 3 build versions and 4 VM environments
1. Purpose¶
This document defines the ConstructiVision Software Development Lifecycle (SDLC) — the repeating process by which code changes move from concept through validation to deployment. It captures how we actually deliver quality, not an aspirational framework.
The SDLC is aligned with the DFSS CDOV process (Concept → Design → Optimize → Verify) from Creveling, Slutsky & Antis. Where traditional software shops bolt quality on at the end, DFSS builds it into the transfer function: every output is traceable to a requirement, every failure mode is predicted before it occurs, and every test validates against a specification — not just a developer’s intuition.
Why DFSS and not Agile/Scrum?
ConstructiVision is a safety-adjacent construction tool — miscalculated lift points or wrong weld connections can cause tilt-up panel failures. The DFSS framework treats quality as a measurable engineering property ($Y = f(X)$), not a subjective user story. This matches the domain: concrete doesn’t care about sprint velocity.
2. DFSS Alignment — The CDOV Model¶
The Creveling/Slutsky/Antis CDOV framework maps directly to our lifecycle:
| DFSS Phase | ConstructiVision Phase | Key Activities | Key Deliverables |
|---|---|---|---|
| Concept | Baseline & Discovery | Recover source code, inventory modules, establish VM reference environments, define what “working” means | Inventory & Gap Analysis, v3.60 Source Recovery — Missing Dependency Fix, VM baseline snapshots |
| Design | Feature/Bug Development | Fix bugs, implement workarounds, write deployment scripts, configure environments | Code commits, |
| Optimize | FMEA & Risk Analysis | Predict failure modes, rate severity/occurrence/detection, prioritize by RPN, refine controls | 31 — Comprehensive Workflow & Human Factors Analysis §9, Risk Register (2026) |
| Verify | Validation Testing & Bug Tracking | AutoIT automation, manual desktop testing, OCR screenshot comparison, bug logging with DFMEA traceability | Bug Tracker — Validation Campaign, |
The Transfer Function: $Y = f(X)$¶
In DFSS terms:
$Y$ (Critical-to-Quality): ConstructiVision loads, menus register, all commands execute, project drawings open, panel books generate correctly
$X$ (Critical Parameter): Registry configuration, file placement, AutoCAD profile state, OS environment, printer/plot subsystem, startup sequence timing
$f$ (Transfer Function): The deployment process — installer + configuration script + startup chain
Our validation loop measures $Y$ on multiple platforms and traces failures back to specific $X$ factors. Bugs 19–21 are textbook examples:
- $X_1$ — Menu registration (Bug 19): Missing `Group1`/`Pop13` entries in `HKCU\...\Profiles\<<Unnamed Profile>>\Menus` caused `setvars Function cancelled` on VM 103. See Bug Tracker — Validation Campaign, Bug 19 detail; 31 — Comprehensive Workflow & Human Factors Analysis, DFMEA row #19 (S=8, O=3, D=7, RPN=168).
- $X_2$ — Startup Suite timing (Bug 20): VLX loaded before printer/plot subsystem initialized → `0xC0000005` crash at `LocalizeReservedPlotStyleStrings+533` on VM 104. Crash dumps preserved at `reports/ocr-output/vm104-feb28/acadstk.dmp`. See Bug 20 detail; DFMEA row #20 (S=9, O=3, D=8, RPN=216).
- $X_3$ — Project path configuration (Bug 21): Missing `Project Settings\CV\RefSearchPath` registry key on VM 104 prevented File Open from finding `CSBsite1.dwg`. See Bug 21 detail; DFMEA row #21 (S=7, O=3, D=6, RPN=126).
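The transfer-function view can be made concrete. A minimal Python sketch — the flag names and messages are hypothetical, modeled on Bugs 19–21, not the project's actual deployment code:

```python
# Illustrative sketch: model Y = f(X) as a set of environment checks,
# so a failing Y can be traced back to specific X factors.
def evaluate_y(environment: dict) -> list[str]:
    """Return the list of X factors that fail for this environment."""
    failures = []
    if not environment.get("menu_keys_present"):
        failures.append("X1: menu registration missing -> 'setvars Function cancelled'")
    if not environment.get("plot_subsystem_initialized"):
        failures.append("X2: VLX loaded before plot subsystem -> 0xC0000005 crash")
    if not environment.get("ref_search_path_set"):
        failures.append("X3: RefSearchPath missing -> File Open cannot find drawings")
    return failures

# A VM 104-style environment before the Bug 20/21 fixes:
vm104 = {"menu_keys_present": True,
         "plot_subsystem_initialized": False,
         "ref_search_path_set": False}
print(evaluate_y(vm104))  # two X factors fail
```

An empty result list corresponds to Y passing: every Critical-to-Quality output depends on all X factors being in their known-good state.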
3. The Lifecycle — Six Phases¶
Phase 1: Baseline¶
DFSS equivalent: Concept Development — characterize the existing system before changing anything.
What we do:
Establish reference environments (VM snapshots) representing known-good states
Inventory all source files, registry keys, and configuration artifacts
Run the product end-to-end and document the expected behavior (our “specification”)
Capture screenshots of every dialog and command for comparison
Artifacts:
| Artifact | Location | Purpose |
|---|---|---|
| VM 102 snapshot | Proxmox ZFS | V11 reference — known-good XP + AutoCAD 2000 + CV (see Testing & Validation Strategy, VM table) |
| VM 103 snapshot | Proxmox ZFS | V7.0 master copy — read-only production baseline (see VM103 remote access setup (sensitive)) |
| Source inventory | | File-level diff: 134 files in |
| DCL inventory | | All 44 dialog files cataloged with field counts (see also 31 — Comprehensive Workflow & Human Factors Analysis, §3 Dialog Architecture) |
| Total Uninstall captures | | Complete registry + filesystem snapshots — used for Bug 19 registry diff (menu keys) and Bug 21 (project path comparison) |
| Source recovery | | v3.60 archive — 12 missing modules recovered Feb 16 (see v3.60 Source Recovery — Missing Dependency Fix; commits |
| Binary baselines | | Preserved acad.exe binaries from each VM for hex-level comparison (Bug 20 investigation, disproven exe-swap theory) |
Quality gate: Baseline passes when the reference VM (102) can run the full validation script (scripts/cv-menu-validation.au3, 572 lines) end-to-end with all screenshots captured to C:\CV-Validation\VM102\.
Phase 2: Develop¶
DFSS equivalent: Design Development — make changes with predictable effects.
What we do:
- Fix bugs identified in Phase 4 (Analyze)
- Implement workarounds for environmental issues
- Write deployment automation (PowerShell scripts, registry patches)
- Modify AutoLISP source in `src/x32/TB11-01x32/` (never PB11)
Process rules:
- All changes go to `TB11-01x32` (test build) — production builds are read-only
- Atomic commit-push: `git add; git commit; git pull --rebase; git push`
- Commit messages use conventional prefixes: `fix:`, `feat:`, `docs:`, `chore:`
- Every fix links to a bug number in the tracker
Recent examples:
| Bug | Change | Commit | File(s) | Type |
|---|---|---|---|---|
| Bug 4 | Removed | Session 1 | Multiple | Code fix — see Bug 4; ban added to |
| Bug 7 | Rewrote VLX detection: removed | | | Code fix — see Bug 7 |
| Bug 19 | Added | | Registry: | Deployment fix — see Bug 19 |
| Bug 20 | Created | | | Workaround — see Bug 20 |
| Bug 21 | Added | | Registry: | Config fix — see Bug 21 |
Quality gate: Code compiles/loads without error in AutoCAD 2000 on the target VM. (For AutoLISP, “compiles” means (load "file.lsp") returns without error.) Deployment fixes must be validated by ssh Administrator@<VM_IP> "reg query ..." confirming the key exists.
Phase 3: Validate¶
DFSS equivalent: Verify — measure the output ($Y$) against the specification.
Validation runs in two parallel streams:
3a. Automated Testing (AutoIT)¶
The scripts/cv-menu-validation.au3 script (572 lines) runs on XP VMs to exercise every ConstructiVision menu command, capture screenshots, and log results. Source: scripts/cv-menu-validation.au3, header lines 1–18.
AutoIt3.exe cv-menu-validation.au3 VM102
| Step | What it does | Script Reference |
|---|---|---|
| Phase 0 | Find AutoCAD window by | |
| Phase 1 | Open project drawing ( | Requires Bug 21 fix (RefSearchPath) |
| Phase 2 | Walk every menu item, capture pre/post screenshots as BMPs | Output: |
| Phase 3 | Open dialogs, verify controls render correctly | Compare vs VM 102 baseline screenshots |
| Phase 4 | Execute key commands (panel edit, tilt-up, batch) | Discovered Bugs 10, 12–17 |
Output directory: C:\CV-Validation\<VM_NAME>\ on target VM + validation-log.txt. Results copied to reports/ocr-output/ on developer workstation.
OCR pipeline: scripts/ocr-screenshots.py converts BMPs to PNGs and runs Tesseract OCR to extract dialog text. Output stored in reports/ocr-output/. Used in Bug 19 discovery: OCR of VM 103 screenshot revealed “setvars Function cancelled” error text.
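The comparison step after OCR can be as simple as a line-level text diff. A Python sketch (the dialog strings are modeled on the Bug 19 discovery, not taken from the actual OCR output):

```python
import difflib

# OCR text from the baseline VM (102) vs. the VM under test (103).
# Strings are illustrative, modeled on the Bug 19 discovery.
baseline_text = "ConstructiVision\nCV Tools loaded\nCommand:"
vm103_text = "ConstructiVision\nsetvars Function cancelled\nCommand:"

def ocr_diff(expected: str, actual: str) -> list[str]:
    """Return lines present in the test VM's dialog text but not the baseline."""
    diff = difflib.unified_diff(expected.splitlines(), actual.splitlines(),
                                lineterm="")
    return [line[1:] for line in diff
            if line.startswith("+") and not line.startswith("+++")]

print(ocr_diff(baseline_text, vm103_text))
# A line appearing only on the test VM is exactly the kind of signal
# that flagged Bug 19's "setvars Function cancelled" error.
```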
Deployment: Scripts are deployed to VMs via SSH. Triggered by the XP `at` scheduler (the only way to launch interactive processes remotely on XP without PsExec):
ssh Administrator@<VM_IP> "at 14:12 /interactive C:\run-validation.bat"
This was the actual command used to discover Bug 19 on VM 103 during Session 5 (Feb 27, 2026).
3b. Unstructured User Testing (Alpha)¶
Real users (GSCI engineers, the developer) operate ConstructiVision on alpha VMs doing actual construction estimating work. This catches issues that scripted automation cannot:
Workflow sequences the script doesn’t cover
Performance issues under real data loads
UI confusion and unexpected dialog behavior
Environmental interactions (printers, network paths, screen resolution)
| VM | Tester | Role | Bugs Found Here |
|---|---|---|---|
| 102 | Developer (Chad) | Reference validation, bug investigation | Baseline (zero bugs — this is the known-good) |
| 103 | Developer (Chad) | PB11 comparison testing | Bug 19 (menu registration) |
| 104 | Developer (Chad) | Fresh install testing, deployment validation | Bugs 20, 21 (VLX crash, project paths) |
| 108 | Developer (Chad) | Source-mode development | Bugs 1–18 (all source-mode bugs) |
| 201 | Alpha tester (Tai) | Real-world estimating workflow | See Tai access guide (sensitive) |
| 202 | Alpha tester (Dat) | Real-world estimating workflow | See Dat access guide (sensitive) |
The Bug Definition
A bug is any deviation from expected user experience — including code defects, misconfigurations, missing registry entries, incomplete installations, confusing UI behavior, automation failures, and environmental issues. If the user encounters something unexpected or suboptimal, it gets logged here regardless of root cause. The goal is to capture every friction point so the product and its deployment can be optimized.
Quality gate: Both automated and manual testing produce zero new Critical/High bugs, OR all new bugs are logged and triaged.
Phase 4: Analyze¶
DFSS equivalent: Data analysis and statistical thinking — diagnose root causes, not symptoms.
When a bug is discovered (from either testing stream), the analysis phase determines:
What failed? — The observable symptom (screenshot, error message, crash dump)
Why did it fail? — Root cause investigation (registry comparison, hex dump, crash stack analysis)
Where else could this fail? — Pattern analysis across VMs and versions
Was this predicted? — Check DFMEA (doc 31, §9) for matching failure modes
Investigation toolkit:
| Tool | Purpose | Actual Use Case |
|---|---|---|
| SSH remote registry queries | Compare registry state across VMs | Bug 21: |
| Hex dump comparison | Binary-level analysis of executables | Bug 20: compared 36 bytes at offset |
| | AutoCAD crash stack traces | Bug 20: |
| Total Uninstall snapshots | Filesystem + registry diff between environments | |
| OCR comparison | Text-level diff of dialog screenshots | Bug 19: Tesseract OCR of |
Analysis output format (from Bug Tracker — Validation Campaign):
Discovered: Session number and date
Severity: Critical / High / Medium / Low
Symptom: What the user sees
Root Cause: What actually went wrong
Fix: Exact code/registry/config change
Impact: What else is affected
DFMEA Reference: Matching failure mode ID and RPN
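The analysis output format can be captured as a record type. A Python sketch (field names follow the tracker template above; the example values paraphrase the Bug 21 entry and are illustrative):

```python
from dataclasses import dataclass

# Sketch of the analysis output format as a record type. Field names come
# from the bug tracker template; the class itself is illustrative.
@dataclass
class BugAnalysis:
    discovered: str   # session number and date
    severity: str     # Critical / High / Medium / Low
    symptom: str      # what the user sees
    root_cause: str   # what actually went wrong
    fix: str          # exact code/registry/config change
    impact: str       # what else is affected
    dfmea_ref: str    # matching failure mode ID and RPN

bug21 = BugAnalysis(
    discovered="Sessions 6-8, Feb 28 2026",
    severity="High",
    symptom="File Open cannot find CSBsite1.dwg on VM 104",
    root_cause="Missing Project Settings\\CV\\RefSearchPath registry key",
    fix="Add RefSearchPath key via remote registry write",
    impact="Any fresh manual installation that skips registry steps",
    dfmea_ref="DFMEA #21, RPN=126",
)
# 'Unknown' is not an acceptable root cause per the quality gate:
assert bug21.root_cause != "Unknown"
```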
Quality gate: Root cause is identified and documented. “Unknown” is not an acceptable root cause — investigate until you find it, or explicitly document what was ruled out.
Example: Bug 20 — ruling out a wrong theory
Bug 20’s initial theory was that a 36-byte binary patch in acad.exe caused the crash. This was disproven by: (1) hex-dumping both binaries and showing the patch at offset 0x6452A0 is registration/serial data, not executable code; (2) swapping the VM 102 binary onto VM 104 and observing the same crash at the same address. The wrong theory was documented, corrected in commit d6930ba, and the bug tracker updated. See Bug 20 investigation timeline.
Phase 5: Optimize (FMEA)¶
DFSS equivalent: Optimize — use FMEA to predict failures before they occur and reduce risk.
This is where DFSS diverges most sharply from typical software development. Instead of just fixing bugs reactively, we maintain a Design FMEA (Failure Mode & Effects Analysis) that:
Predicts failure modes based on system architecture analysis
Rates each mode on three axes:
Severity (S): How bad is the effect? (1–10, where 10 = safety risk)
Occurrence (O): How likely is the cause? (1–10)
Detection (D): How likely is the failure to escape undetected? (1–10)
Calculates Risk Priority Number: $\text{RPN} = S \times O \times D$
Prioritizes action by RPN — highest numbers get attention first
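The RPN arithmetic and the prioritization step can be sketched in a few lines of Python (the S/O/D triples are the ones this document cites for DFMEA rows #1, #2, #10 and #19–21; the dict itself is illustrative):

```python
# RPN = S * O * D, prioritized descending. The (S, O, D) values below are
# the ones cited in this document for DFMEA rows #10, #2, #1 and #19-21.
dfmea = {
    "#10 csv.hlp help system":  (5, 10, 10),
    "#2  wc_dlg weld entry":    (10, 6, 6),
    "#1  edit_box dimensional": (9, 7, 5),
    "#19 menu registration":    (8, 3, 7),
    "#20 startup suite timing": (9, 3, 8),
    "#21 project path config":  (7, 3, 6),
}

def rpn(sod: tuple) -> int:
    s, o, d = sod
    return s * o * d

# Highest RPN first - this is the action ordering the Optimize phase uses.
ranked = sorted(dfmea.items(), key=lambda kv: rpn(kv[1]), reverse=True)
for name, sod in ranked:
    print(f"RPN {rpn(sod):3d}  {name}")
```

Running this reproduces the document's ordering: 500, 360, 315, 216, 168, 126.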
Current DFMEA Summary (from doc 31)¶
| RPN Range | Count | Action Level |
|---|---|---|
| 300–500 | 3 | Immediate — redesign required |
| 200–299 | 4 | High — design improvement recommended |
| 100–199 | 7 | Medium — improvement desired |
| <100 | 5 | Low — monitor or document |
Top 3 risks by RPN (from 31 — Comprehensive Workflow & Human Factors Analysis, DFMEA table, §9):
- RPN 500 — DFMEA #10: `csv.hlp` Help system. WinHelp viewer removed from Windows 10; affects all Win10 users. S=5, O=10, D=10.
- RPN 360 — DFMEA #2: `wc_dlg` weld connection data entry. 15 identical slot patterns across 5 pages; cognitive overload on safety-critical output (incorrect hardware → potential structural failure during tilt). S=10, O=6, D=6.
- RPN 315 — DFMEA #1: `edit_box` dimensional data entry. 1,826 manual fields with no visual preview; typo → defective physical panel. S=9, O=7, D=5.
The DFMEA ↔ Bug Tracker Feedback Loop¶
Every bug updates the DFMEA — this is bidirectional traceability:
Cross-reference requirements (from Bug Tracker — Validation Campaign, DFMEA Cross-Reference section):
- Every bug entry includes a DFMEA # and match status (Yes / NEW)
- Every GitHub Issue body includes a `## DFMEA Reference` section
- New failure modes get added to doc 31 with S/O/D ratings
- After a fix, the Detection (D) rating is re-evaluated (controls improved → lower D → lower RPN)
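The post-fix re-evaluation is the same arithmetic with an improved Detection rating. A minimal sketch — the post-fix D value is hypothetical, since the document records only pre-fix ratings:

```python
def rpn(s: int, o: int, d: int) -> int:
    """Risk Priority Number: Severity x Occurrence x Detection."""
    return s * o * d

# Bug 20 before the fix (DFMEA row #20): S=9, O=3, D=8 -> RPN 216.
before = rpn(9, 3, 8)

# After the acaddoc.lsp workaround plus an added validation-script check,
# Detection might be re-rated to, say, D=3 (hypothetical value):
after = rpn(9, 3, 3)

print(before, "->", after)  # the improved control drops RPN below 100
```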
Real example — the feedback loop in action (Feb 26–28, 2026):
| Step | What happened | Evidence |
|---|---|---|
| DFMEA predicted (pre-test) | 18 failure modes cataloged from architecture analysis | 31 — Comprehensive Workflow & Human Factors Analysis §9, rows #1–18 (committed before validation campaign) |
| Bug 19 discovered (Feb 27) | AutoIT validation on VM 103: menu registration missing | Bug 19 detail: OCR extracted “setvars Function cancelled” from |
| DFMEA updated | New row #19 added to doc 31 | Commit |
| Bug 20 discovered (Feb 28) | Manual testing on VM 104: VLX crash during Startup Suite load | Bug 20 detail: crash dump at |
| DFMEA updated | New row #20 added to doc 31 | Commit |
| Bug 21 discovered (Feb 28) | Manual validation on VM 104 after Bug 20 workaround: File Open can’t find | Bug 21 detail: registry comparison via SSH revealed missing |
| DFMEA updated | New row #21 added to doc 31 | Commit |
| Pattern recognized | All three share cause class: “incomplete manual installation” (O=3 across all) | All three VMs with issues used manual installations that skipped registry steps the v3.60 InstallShield script handles ( |
| Prevention planned | | Current script validates: support path (line 36), startup suite (lines 67–109). Missing: menu registration, RefSearchPath |
This loop is what Creveling calls “closing the knowledge gap” (Ch. 5) — the DFMEA starts as a design-time prediction and evolves into a living record of what actually fails in the field.
Quality gate: All bugs with RPN ≥ 200 have documented recommended actions. No bug exists in the tracker without a DFMEA cross-reference.
Phase 6: Deploy¶
DFSS equivalent: Verify at scale — transfer from lab to production.
Deployment pipeline:
Developer commits → git push → VMs pull nightly (22:00) → Validation runs
| Component | Mechanism | Location | Documented In |
|---|---|---|---|
| Source code | Git sparse checkout ( | | |
| Runtime link | NTFS junction | | |
| Configuration | PowerShell script (109 lines: support path, startup suite) | | Script header, lines 1–10 |
| Nightly sync | Windows Scheduled Task “ConstructiVision Git Pull” | Runs daily at 22:00 on VMs 108, 109, 201, 202 | Testing & Validation Strategy, Nightly Pull section |
| Manual sync | Desktop shortcut | | Alpha testing plan (sensitive) |
| Deferred VLX loader | | | |
Environment matrix:
| VM | OS | Build | Purpose |
|---|---|---|---|
| 102 | XP SP3 | V11 reference | Baseline comparison |
| 103 | XP SP3 | V7.0 production | Read-only master — never modify |
| 104 | XP SP3 | V7.0 patch | Manual installation testing |
| 108 | Win10 x32 | TB11 source-mode | Active development + source-mode bugs |
| 109 | Win10 x64 | TB11 | 64-bit compatibility testing |
| 201 | Win10 x32 | TB11 | Alpha tester (Tai) |
| 202 | Win10 x32 | TB11 | Alpha tester (Dat) |
Quality gate: All target VMs can pull updates and run the validation script without manual intervention.
4. Metrics & Measurement¶
DFSS demands measurement. Here is what we track:
Bug Discovery Rate¶
| Period | Bugs Found | Method | Notes |
|---|---|---|---|
| Session 1 (Feb 16) | 4 | Manual source testing on VM 108 | First source-mode load |
| Session 2 (Feb 17) | 3 | Manual testing | SCR command issues |
| Session 3 (Feb 18) | 7 | AutoIT + manual | Dialog crashes, ENAME serialization |
| Session 4 (Feb 19) | 2 | AutoIT + OCR comparison | Layout/tilt-up index bugs |
| Session 5 (Feb 22) | 1 | Manual (VM 103) | Profile/deployment: menu registration |
| Sessions 6–8 (Feb 24–28) | 4 | Manual (VM 104) + AutoIT | VLX crash, startup suite, project paths |
| Total | 21 | Mixed | 17 fixed, 1 workaround verified, 1 open, 2 deployment fixes |
Defect Classification¶
| Category | Count | % | Description | Examples |
|---|---|---|---|---|
| Code defects | 14 | 67% | Bugs in | Bug 4: |
| Environment/Config | 5 | 24% | Registry, profile, installer gaps (Bugs 19–21 + related) | Bug 19: missing |
| Dead code | 1 | 5% | | |
| Workaround | 1 | 5% | Startup Suite timing (Bug 20 — root cause TBD) | |
Severity Distribution¶
| Severity | Count | % |
|---|---|---|
| Critical | 6 | 29% |
| High | 12 | 57% |
| Medium | 2 | 10% |
| Low | 1 | 5% |
DFMEA Prediction Accuracy¶
| Metric | Value | Evidence |
|---|---|---|
| Failure modes predicted (pre-test) | 18 | 31 — Comprehensive Workflow & Human Factors Analysis §9, rows #1–18 (committed before first validation session) |
| Failure modes discovered (in-test) | 3 (Bugs 19, 20, 21) | Bug Tracker — Validation Campaign, DFMEA Cross-Reference: all marked NEW |
| Prediction coverage | 86% (18 of 21 bugs fell within categories the DFMEA predicted) | 14 code bugs match DFMEA components (csv.lsp routing, inspanel, tiltup, etc.); 3 deployment bugs required new DFMEA rows |
| New failure modes added to DFMEA | 3 | Rows #19–21 in doc 31, commits |
| Total DFMEA rows | 21 | 31 — Comprehensive Workflow & Human Factors Analysis §9 DFMEA table |
5. Tools & Best Practices¶
5.1 GitHub Platform¶
GitHub (ConstructiVision/ConstructiVision, private repo) serves as the central hub for source control, CI/CD, documentation hosting, and project communication:
Repository features:
- Private repository — source code, documentation, VM deployment infrastructure all in one repo
- SSH deploy keys — ed25519 (no passphrase) on each VM for read-only sparse checkout
- `.github/copilot-instructions.md` — 300+ lines of project context, coding standards, and architectural decisions that load automatically into the AI agent on every session
- GitHub Issues — external-facing bug tracking linked to doc 32 (e.g., GH Issue #18)
GitHub Actions (9 workflows in .github/workflows/):
| Workflow | File | Trigger | Purpose |
|---|---|---|---|
| Update Changelog | | Push to | Generates |
| Docs → GitHub Pages | | After Update Changelog or Weekly Update | Runs |
| Deploy SimpleStruct | | Push to | Syncs |
| Weekly Update | | Monday 6 AM PST (cron) | Auto-generates weekly status draft in doc 11 |
| Sync Bug Tracker | | Push to | Syncs doc 32 bug entries to GitHub Issues |
| News Update | | Wednesday 1 AM PST (cron) | Updates |
| News RSS Feed | | Daily 8 AM UTC | Fetches industry RSS feeds |
| Events Update | | Monday 2 AM PST (cron) | Updates |
| Events RSS Feed | | Sundays 9 AM UTC | Fetches event feed data |
The alternating pattern in git log (manual commit → chore: auto-update) is evidence of the Update Changelog workflow running after every push.
GitHub Pages: Sphinx documentation hosted at the repo’s Pages URL. Built by the pages.yml workflow using Python 3.11, docs/requirements.txt dependencies, and sphinx-build -b html docs/source docs/_build/html.
5.2 VS Code IDE¶
VS Code is the primary development environment, configured with workspace-specific settings (.vscode/):
Workspace configuration files:
| File | Purpose |
|---|---|
| | UTF-8 encoding enforcement, C++ compiler path (MSYS2/MinGW-w64), InstallShield compiler paths, file associations ( |
| | 5 build/test tasks: Build Constructivision (g++), Compile InstallShield Script, Build InstallShield Project, Validate InstallShield Script, Clean InstallShield Output |
| | Debug configuration for C++ stub |
| | IntelliSense configuration for MinGW-w64 |
| | Custom TextMate grammar for InstallShield |
| | Code snippets for InstallShield scripting |
| | Bracket matching, comment toggling for |
Key extensions used:
- GitHub Copilot (Claude) — AI engineering agent with SSH access and full repo context
- PowerShell — scripting, terminal, debugging
- C/C++ (ms-vscode.cpptools) — IntelliSense, formatting (clangFormat)
- Prettier — Markdown formatting (`editor.defaultFormatter: esbenp.prettier-vscode`)
- Python — utility scripts (`scripts/ensure_utf8_encoding.py`, `scripts/ocr-screenshots.py`)
Build tasks (Ctrl+Shift+B): The default task is “Build Constructivision” (C++ stub via g++). InstallShield tasks use custom installshield.compilerPath and installshield.builderPath settings pointing to C:\Program Files (x86)\InstallShield\2021\System\. These tasks include custom problem matchers that parse InstallShield error output into VS Code’s Problems panel.
5.3 Proxmox Infrastructure¶
Proxmox VE hosts all VM test environments on a dedicated server. It replaced a VirtualBox-on-laptop setup in January 2026:
Tower → Proxmox migration (completed Jan 27, 2026, documented in Phase Plan (P0-P5), P0 Infrastructure Upgrade):
| Aspect | Before (VirtualBox) | After (Proxmox) |
|---|---|---|
| Location | Developer laptop (local) | Dedicated server |
| Backup | Manual, infrequent | ZFS snapshots (instant, automated) |
| Resources | Shared with dev work | Dedicated CPU/RAM per VM |
| Remote access | None | Tailscale VPN + SSH |
| Customer testing | Not possible | Alpha testers access VMs directly |
Proxmox capabilities used:
| Capability | How Used | Evidence |
|---|---|---|
| ZFS snapshots | Pre-operation rollback point | Bug 20: snapshot before exe-swap test, reverted after theory disproven |
| VM cloning | Create test environments from known-good states | VM 103 cloned from VM 102 (MASTER COPY — never modify) |
| Disk expansion | Resize VMs as needed | Disk expand for Win10 upgrade |
| Console access | Direct VM access when SSH fails | Used for initial SSH setup on new VMs |
| Resource isolation | Each VM gets dedicated CPU/RAM allocation | Multiple VMs running concurrently without contention |
Tailscale VPN: Provides secure remote access to all VMs from any network. Alpha testers connect to their assigned VMs through Tailscale. See alpha testing plan (sensitive).
LegacyUpdate.net: Integrated on XP VMs to keep legacy operating systems patched despite Microsoft end-of-life. Ensures security updates don’t disrupt test results.
NAS/Samba storage provides mapped drives for ISOs, backup media, and shared resources. ISO images for OS installation are stored at isos/ and served from the NAS.
Note
Network topology, IP addresses, and connection details have been moved to internal infrastructure documentation for security. See docs-sensitive/vm-infrastructure/ (requires GitHub login).
5.4 VM Testing Tools¶
| Tool | Version | Purpose | Installed On | Key Workflow |
|---|---|---|---|---|
| AutoIT 3 | 3.x | UI automation: menu walking, screenshot capture, dialog verification | XP/Win10 VMs | |
| Tesseract OCR | 4.x | Extract text from screenshots for automated comparison | Developer workstation | |
| Total Uninstall 6 | 6.x | Registry + filesystem snapshot/diff between environments | VMs 102, 103 | Install-monitor-snapshot-compare workflow: snapshots at |
| Bitvise SSH Server | — | Remote command execution on all VMs | All VMs | SSH deploy keys (ed25519), all investigation runs through SSH |
| Windows `at` scheduler | Built-in (XP) | Launch interactive GUI processes remotely | XP VMs only | |
| Tailscale | — | VPN for secure remote access | All VMs + developer | Alpha tester access, developer investigation |
Total Uninstall workflow:
1. Take “before” snapshot of clean OS + AutoCAD
2. Install ConstructiVision
3. Take “after” snapshot
4. Total Uninstall diffs the two → produces file list + registry changes
5. Export to `_extract/vmNNN-total-uninstall/` (XML + folder trees)
6. Use as authoritative reference for “what a correct installation looks like” → informs WiX installer payload (152 files, 5.5 MB)
Evidence: Bug 19 was found by comparing _extract/vm102-total-uninstall/ vs _extract/vm103-total-uninstall/ — the diff revealed missing Menus\Group1 and Pop13 registry entries.
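The snapshot-diff idea reduces to a set difference over registry key paths. A hedged Python sketch (key names follow the Bug 19 example; Total Uninstall's actual export format is not modeled):

```python
# Diff two registry snapshots (dicts of key path -> value) to find entries
# present on the known-good VM but missing on the VM under test.
# Key paths are illustrative, modeled on the Bug 19 evidence.
vm102_keys = {
    r"HKCU\...\Profiles\<<Unnamed Profile>>\Menus\Group1": "csv.mnu",
    r"HKCU\...\Profiles\<<Unnamed Profile>>\Menus\Pop13": "CV Tools",
    r"HKCU\...\Profiles\<<Unnamed Profile>>\General": "...",
}
vm103_keys = {
    r"HKCU\...\Profiles\<<Unnamed Profile>>\General": "...",
}

# Set difference on key paths: what the baseline has that the test VM lacks.
missing = sorted(set(vm102_keys) - set(vm103_keys))
for key in missing:
    print("MISSING:", key)
# The Group1/Pop13 menu keys surface immediately - the Bug 19 root cause.
```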
5.5 Scripts & Automation Library¶
The scripts/ directory contains the project’s executable knowledge — automation scripts that encode deployment, testing, and analysis procedures:
| Script | Language | Lines | Purpose |
|---|---|---|---|
| `Configure-ConstructiVision.ps1` | PowerShell | 115 | AutoCAD profile configuration (support path, startup suite) |
| `cv-menu-validation.au3` | AutoIT | 572 | Automated UI validation (menu walk, screenshot capture) |
| `ocr-screenshots.py` | Python | — | BMP→PNG + Tesseract OCR pipeline |
| | PowerShell | — | Alpha tester VM provisioning |
| | PowerShell | — | x64 CAD workstation setup |
| `ensure_utf8_encoding.py` | Python | — | Pre-Sphinx-build encoding verification (used in |
| | PowerShell | — | C++ stub build wrapper |
| | PowerShell | — | VLX binary analysis |
| | PowerShell | — | Auto-generate module documentation |
| | PowerShell | — | v3.60 test matrix automation |
5.6 Version Control & Configuration Management¶
The project uses Git with enforced conventions that prevent common failures:
Conventional commit prefixes — Every commit uses a structured prefix that categorizes the change. From the last 30 commits (git log --oneline -30):
| Prefix | Count | Purpose | Example |
|---|---|---|---|
| `docs:` | 12 | Documentation changes | |
| `fix:` | 7 | Code/config bug fixes | |
| `chore:` | 10 | Automated maintenance | |
| `feat:` | — | New capabilities | Used for new scripts, validation tools |
| | — | Structural changes | Used for file reorganization |
Atomic commit-push workflow — The repo’s CI automation pushes a follow-up commit after every manual push (changelog + dashboard updates), so a bare `git push` is always rejected due to remote changes. The enforced workflow (from `.github/copilot-instructions.md`):
git add <files>; git commit -m "prefix: summary" -m "details"; git pull --rebase; git push
This is executed as a single terminal command — never split across calls. Evidence: the alternating pattern in git log shows every manual commit followed by an automated chore: commit.
Sparse checkout deployment — VMs use read-only SSH deploy keys (ed25519, no passphrase) with sparse checkout patterns:
- x32 VMs: `src/x32/` + `src/Project Files/`
- x64 VMs: `src/x64/` + `src/Project Files/`
This ensures VMs only pull the files they need, reducing clone size and preventing accidental edits on deployment targets.
Binary tracking exceptions — .gitignore has explicit exceptions for src/x32/**/*.exe, *.dll, *.pdf because these compiled/binary files are intentionally tracked for VM deployment. Removing these exceptions would break the nightly sync pipeline.
5.7 Remote Infrastructure Management¶
Managing 7 VMs across two OS generations (XP SP3, Win10) requires disciplined remote patterns:
SSH-first operations — All VM interactions go through Bitvise SSH Server. Pattern: ssh Administrator@<VM_IP> "<command>". Examples from actual bug investigations:
| Operation | Command | Used In |
|---|---|---|
| Registry query | | Bug 21: identified missing |
| Registry write | | Bug 21: deployed fix |
| File check | | Bug 20: verified VLX presence |
| Crash dump check | | Bug 20: verify crash resolved |
XP interactive process limitation — Windows XP’s SSH cannot launch interactive GUI processes directly. The only way to trigger AutoIT validation scripts remotely is via the at scheduler:
ssh Administrator@<VM_IP> "at 14:12 /interactive C:\run-validation.bat"
This was the actual command used to discover Bug 19 on VM 103 (Feb 27, 2026). This constraint is documented because it affects test automation scheduling.
Proxmox snapshot discipline — Before any risky operation (driver install, registry patch, binary swap), a ZFS snapshot is taken on the Proxmox host. This provides instant rollback. Used during Bug 20 investigation when testing the exe-swap theory — snapshot was reverted after the theory was disproven.
NTFS junction deployment — The deployment model uses NTFS directory junctions to map the Git sparse checkout to AutoCAD’s expected path:
C:\Program Files\ConstructiVision → C:\Repos\Constructivision\src\x32\TB11-01x32
git pull updates the actual files; the junction makes AutoCAD see them at the expected location. Documented in ConstructiVision TB11-01x32 — Architecture & Deployment.
5.8 AutoLISP Development Standards¶
The coding standards are enforced via .github/copilot-instructions.md (which the AI agent reads automatically) and human review:
Comment hierarchy (from copilot-instructions.md):
- `;;;` at column 0 — file headers and top-level documentation
- `;;--` — separator lines between major sections
- `;;` — block comments aligned with code
- `;` — end-of-line remarks or commented-out code
- Never `#| |#` — AutoCAD 2000’s `(load)` function doesn’t parse block comments. Bug 4 was caused by this exact issue; the ban was added to copilot-instructions.md after discovery.
Indentation: 2-space indent, no tabs. Closing ) on its own line, aligned with the opening form — never stacked like )))))).
Naming: C: prefix for user-callable commands (e.g., C:CSVMENU), csv_ or descriptive prefix for internal functions.
File pairing: Every .dcl dialog file has a corresponding .lsp loader file. The startup chain is: csvmenu.lsp → csv.vlx (compiled bundle containing all modules).
Testing protocol: “Compiles” means (load "file.lsp") returns without error in AutoCAD 2000. There is no unit test framework for AutoLISP — validation is done through the AutoIT UI automation pipeline and manual testing.
5.9 Deployment Automation Patterns¶
Deployment follows a validate-before-apply pattern, implemented in PowerShell:
Configure-ConstructiVision.ps1 (115 lines, scripts/Configure-ConstructiVision.ps1):
1. Validate prerequisites — checks `$CVPath` exists, required files (`csvmenu.lsp`, `csv.vlx`, `csv.mnu`) are present, and the AutoCAD registry profile exists (lines 19–40)
2. Apply settings — adds CV to the support file search path (lines 43–53), configures Startup Suite auto-load entries (lines 56–115)
3. Verify — confirms each setting was written correctly
This pattern is being extended to cover Bug 19 (menu registration) and Bug 21 (RefSearchPath) — currently pending implementation.
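The validate-apply-verify shape generalizes beyond PowerShell. A minimal Python sketch of the pattern (the check names and environment dict are hypothetical, not the script's actual logic):

```python
# Validate-before-apply: refuse to change anything until prerequisites pass,
# then confirm each setting after writing it. Checks are illustrative.
def configure(env: dict) -> dict:
    # 1. Validate prerequisites - fail fast, touch nothing on failure.
    required = ("csvmenu.lsp", "csv.vlx", "csv.mnu")
    missing = [f for f in required if f not in env.get("files", [])]
    if missing or not env.get("profile_exists"):
        raise RuntimeError(f"prerequisites failed: missing={missing}")

    # 2. Apply settings.
    env["support_path_includes_cv"] = True
    env["startup_suite_configured"] = True

    # 3. Verify - confirm each setting was actually written.
    assert env["support_path_includes_cv"] and env["startup_suite_configured"]
    return env

good = {"files": ["csvmenu.lsp", "csv.vlx", "csv.mnu"], "profile_exists": True}
print(configure(good)["startup_suite_configured"])  # True
```

The key design choice is that validation failures abort before any state is mutated, so a half-configured profile can never result.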
acaddoc.lsp deferred loading (Bug 20 workaround, 749 bytes):
- Problem: VLX loaded via Startup Suite crashes during `S::STARTUP` on VMs with certain printer configurations
- Solution: `acaddoc.lsp` runs on every document open (after AutoCAD is fully initialized), checks whether `csv.vlx` is loaded, and loads it if not
- Pattern: deferred initialization — shift loading from startup to first use to avoid timing-dependent crashes
Nightly sync pipeline:
Scheduled Task (22:00 daily) → git-pull.bat → git pull origin main → Files updated via junction
Running on VMs 108, 109, 201, 202. Manual override: git-pull.bat desktop shortcut on each VM.
5.10 Documentation-as-Code¶
All project documentation lives in docs/source/ as Sphinx + MyST Markdown, built with sphinx-build:
Numbered naming convention — Modernization docs follow NN-descriptive-name.md (00 through 35+). Every new doc must be added to the appropriate toctree section in docs/source/modernization-2026/index.md.
Cross-reference discipline — Every claim in documentation must link to a specific artifact: commit hash, file path with line numbers, registry key, VM IP, or bug tracker section. This standard was established after a review pass on this document (commit ecbc452).
MyST directive syntax — The project uses MyST Markdown, not reStructuredText. Admonitions use ```{note} / ```{warning} / ```{tip} fenced directives. Cross-refs use {doc} and {ref} roles.
Living documents — Documents like the bug tracker (doc 32), risk register (doc 05), and this SDLC document are updated in the same commit as the change they describe. The documentation IS the process — not a report written after the fact.
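In practice those MyST conventions look like this (the `{doc}` target and `{ref}` label below are placeholders for illustration, not real files in the repo):

````markdown
```{warning}
PB11-00x32, v3_60, and VM 103 are read-only references; never modify them.
```

See {doc}`some-doc-name` for details, or jump to {ref}`some-section-label`.
````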
5.11 AI-Assisted Engineering¶
GitHub Copilot (Claude) operates as an AI engineering agent with direct SSH access to all VMs and full repository context:
Capability model:
| Capability | How It Works | Example |
|---|---|---|
| Remote investigation | SSH into VMs, run diagnostic commands, analyze output | Bug 20: hex-dumped |
| Code analysis | Read AutoLISP source, identify patterns, suggest fixes | Bug 7: identified |
| Documentation generation | Create comprehensive docs with cross-references | This document: 550+ lines with concrete evidence throughout |
| DFMEA maintenance | Read doc 31, add failure modes, calculate RPNs, update doc 32 | Bugs 19–21: added 3 new DFMEA rows with S/O/D ratings in same session as fixes |
| Deployment | Write files to VMs via SSH, set registry keys, verify | Bug 21: |
Guardrails (from `.github/copilot-instructions.md`):

- Never modify `PB11-00x32`, `v3_60`, or VM 103 (read-only references)
- Never remove `.gitignore` exceptions for tracked binaries
- Always use the atomic commit-push workflow
- Treat AutoLISP as the real product, not `main.cpp`
Session model: The AI agent works in interactive sessions with the developer. Each session produces commits, documentation updates, and DFMEA entries. The Feb 26–28 sprint produced 3 bug fixes, 3 DFMEA rows, and this SDLC document across approximately 6 sessions.
5.12 SimpleStruct Website & Cloud Infrastructure¶
The webpage/ directory contains the SimpleStruct marketing/information site, deployed via GitHub Actions:
| Component | Location | Purpose |
|---|---|---|
| Static site | | Product website (HTML, JS, CSS) |
| News data | | Auto-updated by |
| Events data | | Auto-updated by |
| Terraform | | AWS infrastructure-as-code (S3 bucket, CloudFront distribution, IAM OIDC role) |
| S3 bucket | | Static hosting via AWS |
| CloudFront | Distribution | CDN with cache invalidation on deploy |
Deployment uses AWS OIDC federation (no static credentials) — GitHub’s identity provider is trusted by the IAM role `github-actions-deploy-role`.
6. Build Versioning & Compatibility¶
6.1 Build Version Scheme¶
The product source lives in a structured directory tree that encodes architecture, build type, and version:
src/
├── x32/ # 32-bit builds
│ ├── PB11-00x32/ # Production Build v11.00 (134 files) — READ-ONLY
│ └── TB11-01x32/ # Test Build v11.01 (195 files) — ACTIVE DEVELOPMENT
├── x64/ # 64-bit builds
│ ├── PB11-00x64/ # Production Build v11.00 placeholder — READ-ONLY
│ └── TB11-01x64/ # Test Build v11.01 (synced from x32)
├── x86/ # Legacy archives
│ ├── v3_60/ # v3.60 InstallShield source (134 files) — READ-ONLY
│ └── v7.0(patch)/ # v7.0 patch files
└── Project Files/ # Shared project drawings (3,254 files, junctioned into builds)
Naming convention: {Type}{Version}-{Revision}{Architecture}
PB = Production Build (frozen reference, never modify)
TB = Test Build (active development target)
11 = Major version 11
00/01 = Revision number
x32/x64 = Target architecture
Legacy lock: PB11-00x32, PB11-00x64, and src/x86/v3_60/ are read-only reference archives. This is enforced via .github/copilot-instructions.md Critical Rules §2. All active work goes to TB11-01x32. The v3_60 archive was the source for recovering 12 missing modules in commit 944c3cb (see v3.60 Source Recovery — Missing Dependency Fix).
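The convention is regular enough to parse mechanically. A hypothetical Python helper (not part of the repository) that decodes a build directory name into its parts:

```python
import re

# {Type}{Version}-{Revision}{Architecture}, e.g. "TB11-01x32" or "PB11-00x64".
BUILD_NAME = re.compile(r"^(PB|TB)(\d{2})-(\d{2})(x32|x64)$")

def parse_build_name(name: str) -> dict:
    m = BUILD_NAME.match(name)
    if not m:
        raise ValueError(f"not a valid build name: {name!r}")
    kind, version, revision, arch = m.groups()
    return {
        "type": "Production Build" if kind == "PB" else "Test Build",
        "major_version": int(version),   # e.g. 11
        "revision": int(revision),       # e.g. 0 or 1
        "architecture": arch,            # x32 or x64
        "read_only": kind == "PB",       # PB = frozen reference, never modify
    }
```

So `parse_build_name("TB11-01x32")` yields a writable Test Build, major version 11, revision 1, 32-bit, while any `PB*` name comes back flagged read-only.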
Product version lineage:
| Version | Era | Files | Status |
|---|---|---|---|
| v3.60 | ~2001 | 134 | Archived in |
| v7.0 | ~2008 | — | VM 103 = production master, VM 104 = patch version |
| v11.00 (PB) | ~2024 | 134 files | Frozen baseline in |
| v11.01 (TB) | 2026 | 195 files | Active test build in |
6.2 Platform Compatibility Matrix¶
Validated through multi-OS testing (Feb 10–17, 2026). Full details in Windows 10 Upgrade Study - Constructivision Compatibility and Phase Plan (P0-P5), §P1 Platform Compatibility:
| Platform | AutoCAD 2000 | ConstructiVision | Status | Test VM |
|---|---|---|---|---|
| Windows XP SP3 | ✅ Works | ✅ Works | Fully supported | 102, 103, 104 |
| Windows Vista | ✅ Works | ✅ Works | Fully supported | Historical (P0) |
| Windows 7 | ✅ Works | ⚠️ BHF bug | Limited | 107 (Win7→Win10 upgrade source) |
| Windows 10 x32 | ✅ Works | ✅ Works | Fully supported | 108 (source-mode dev), 201, 202 |
| Windows 10 x64 | ✅ Runs | ✅ Works (registry fix) | Supported — requires Wow6432Node COM fix | 109 |
| Windows 11 x64 | AutoCAD SETUP.EXE blocked (16-bit NE) | CV SETUP.EXE runs (32-bit PE) | Blocked — AC2000 installer is 16-bit | Not tested |
Key discovery: AutoCAD 2000’s SETUP.EXE uses a 16-bit stub → MSETUP.EXE → 16-bit _ISDEL.EXE chain. 64-bit Windows cannot run 16-bit executables. The WiX installer (P1 deliverable) will bypass this by packaging the already-installed files.
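The 16-bit/32-bit distinction is visible in the executable headers themselves. A hedged Python sketch that classifies a binary by following the `e_lfanew` pointer at offset `0x3C` of the DOS (MZ) header to the `PE`/`NE` signature — this is the standard PE/NE layout, not project code:

```python
import struct

def exe_format(data: bytes) -> str:
    """Classify an executable image by its header signatures.

    PE ("PE\\0\\0") = 32/64-bit Portable Executable; NE = 16-bit New
    Executable, which 64-bit Windows refuses to run.
    """
    if len(data) < 0x40 or data[:2] != b"MZ":
        return "not an MZ executable"
    # e_lfanew: little-endian offset of the "new" header, stored at 0x3C.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    sig = data[e_lfanew:e_lfanew + 4]
    if sig[:4] == b"PE\x00\x00":
        return "PE (32/64-bit)"
    if sig[:2] == b"NE":
        return "NE (16-bit)"
    return "DOS or unknown"
```

Run against the two installers, this check would flag AutoCAD 2000's `SETUP.EXE` as NE (blocked on x64) and the CV installer as PE (runs).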
6.3 Architecture Documentation¶
System architecture is documented across several specialized documents:
| Document | What It Covers | Key Content |
|---|---|---|
| | TB11-01x32 build structure | NTFS junction layout, file inventory (195 files: 126 |
| | Application workflow & dialog architecture | §3: Dialog architecture (44 DCL files, 1,826 input fields), §8: Human factors analysis, §9: DFMEA (21 failure modes) |
| | Module inventory & gaps | A–Q depth mapping, file counts (134 PB11 → 195 TB11), gap analysis |
| | Source code archaeology | Recovery of 12 missing modules from v3.60 archive, binary format analysis |
| | File-level diff between PB and TB | Line-by-line comparison of 134 baseline vs 195 test build files |
| | Complete DCL dialog catalog | All 44 dialog files with field counts, control types |
Startup chain architecture: csvmenu.lsp → loads csv.vlx (compiled Visual LISP eXecutable containing all modules) → registers menus → initializes global variables → ready for user commands.
6.4 Definition of Done¶
Phase-level “done” — Each phase gate has explicit exit criteria (from Phase Plan (P0-P5) and Milestones Dashboard):
| Phase | Definition of Done |
|---|---|
| P0 | CV runs in VM; dry run successful; demo recording captured |
| P1 | WiX installer GA release; >95% success rate across platform compatibility matrix; all Critical bugs resolved |
| P2 | UI/UX review complete; code refactoring done; documentation suite complete |
| P3 | AutoCAD 2026 compatibility validated; ADN membership active; .bundle format migration complete |
| P4 | Autodesk App Store submission approved; dual distribution live (direct + App Store); EV code signing |
| P5 | AI auto-fill forms working; “EZ Button” panel book generation with integrated tests |
Bug-level “done” — A bug is resolved when ALL of these are true:

1. Root cause is identified and documented (not “unknown”)
2. Fix is implemented and committed with a conventional prefix (`fix:`, `docs:`, etc.)
3. Fix is verified on the target VM (via SSH query, manual test, or AutoIT)
4. Bug tracker entry (doc 32) is updated with status, fix description, and commit hash
5. DFMEA cross-reference is added (matching an existing failure mode, or a new row with S/O/D)
6. If RPN ≥ 200, a recommended action is documented
Session-level “done” — A coding session produces a committable artifact or explicitly documents why it didn’t (e.g., investigation that ruled out a theory).
Project success criteria (from 2026 Timeline (Week-by-Week), macro timeline):
March 2026: WiX installer GA release — distributable without custom manual installation
December 2026: Dual distribution live — Autodesk App Store + direct download
December 2026: Demo-ready for World of Concrete 2027 (Jan 19–21)
7. Roles & Responsibilities¶
| Role | Person | Responsibilities |
|---|---|---|
| Developer / Owner | Chad (Weidercx) | Code fixes, deployment scripts, VM management, FMEA, documentation, project management |
| Alpha Tester 1 | Tai (GSCI) | Unstructured user testing on VM 201 — real-world estimating workflows |
| Alpha Tester 2 | Dat (GSCI) | Unstructured user testing on VM 202 — real-world estimating workflows |
| AI Engineering Agent | GitHub Copilot (Claude) | Remote investigation, automated analysis, documentation generation, DFMEA maintenance |
The AI Agent Role
A significant innovation in this SDLC is the AI engineering agent (GitHub Copilot / Claude) acting as a remote investigation and documentation partner. Concrete examples from the Feb 26–28 sprint:
- Bug 20 investigation: Agent SSH-ed into VM 104, hex-dumped `acad.exe` at offset `0x6452A0`, compared it against the VM 102 binary, proved the 36-byte patch was registration data (not code), deployed the `acaddoc.lsp` workaround, and committed corrected documentation (commit `d6930ba`).
- Bug 21 discovery: Agent ran `reg query` on both VM 102 and VM 104 to compare `Project Settings` keys, identified the missing `CV\RefSearchPath` subkey, deployed the fix via `reg add`, and logged Bug 21 with a full DFMEA cross-reference (commit `3b5f238`).
- DFMEA maintenance: Agent reads the doc 31 DFMEA table, adds new failure mode rows with S/O/D ratings, updates summary counts, and cross-references in doc 32 — all in the same session as the bug fix.
This accelerates the Analyze → Optimize loop from days to minutes.
8. Project Management Methods¶
8.1 Planning Framework¶
The project uses a phase-gated plan with 6 phases (P0–P5) and 12 milestones:
| Document | What It Captures | Update Cadence |
|---|---|---|
| | Phase definitions, entry/exit criteria, deliverables per phase | Updated at phase transitions |
| | Week-by-week schedule, quarterly macro plan, budget milestones | Updated weekly |
| | 12 milestones (M1–M8) with target dates, exit criteria, status, blockers | Updated at each milestone change |
| | Weekly status: delivered, changed, next, decisions | Updated weekly |
Phase gate model (from Phase Plan (P0-P5)):
| Phase | Name | Status | Key Exit Criteria |
|---|---|---|---|
| P0 | VM “Run as-is” | ✅ Complete (Jan 13) | CV runs in VM, demo recording captured |
| P1 | Installer Modernization | 🔧 Active (Feb–Mar) | WiX installer GA, >95% success rate across Win matrix |
| P2 | Design Improvement | ⏳ Not Started (Q2) | UI/UX review, code refactoring, documentation suite |
| P3 | Security & Modern AutoCAD | ⏳ Not Started (Q3) | AutoCAD 2026 compatibility, ADN membership, .bundle format |
| P4 | Release & Distribution | ⏳ Not Started (Q4) | App Store submission, dual distribution, EV code signing |
| P5 | AI Enhancements | ⏳ Not Started (2027) | Auto-fill forms, “EZ Button” panel book generation |
Schedule philosophy (from 2026 Timeline (Week-by-Week)): Weekly cadence with 1 weekly update, 1 build artifact per meaningful progress, explicit phase gates, and Q4 dedicated to stabilization. The timeline is a plan, not a promise — the biggest swing factor is P2 parity closure.
8.2 Work Tracking & Prioritization¶
There is no formal sprint structure. Work is organized around four backlogs with different prioritization methods:
| Backlog | Document | Prioritization | Current Size |
|---|---|---|---|
| Bug backlog | | Severity (Critical → Low) | 21 bugs: 17 fixed, 1 workaround, 1 open, 2 deployment |
| Risk backlog | | RPN ($S \times O \times D$) | 21 failure modes, 3 with RPN ≥ 300 |
| Project risk | | Impact × Likelihood | R1–R20, top-3 reviewed weekly |
| Feature backlog | | Phase gate sequence | P0 ✅, P1 active, P2–P5 queued |
Prioritization rules:
Critical bugs block all other work — the system must not crash
High-RPN DFMEA items (≥300) drive the test plan — focus validation where risk is highest
Phase gate exit criteria determine what “done” means — no moving forward with open blockers
Within a session, the developer and AI agent triage reactively: investigate → document → fix → verify → next
8.3 Decision-Making Process¶
Decisions follow an evidence-over-opinion discipline. Key patterns:
Investigate before concluding: Bug 20’s initial theory (exe-swap caused the crash) was tested by (1) hex-dumping both binaries at offset 0x6452A0, (2) swapping the VM 102 binary onto VM 104, and (3) observing the same crash. The wrong theory was documented and corrected (commit d6930ba). No decision stands without evidence.
Document decisions as they are made: The InstallShield-to-WiX pivot (Feb 13) was captured immediately in Phase Plan (P0-P5) with rationale: low ROI on .rul recompilation, Total Uninstall payload validated on Win10 x32, Configure-ConstructiVision.ps1 handles AutoCAD integration. The decision was documented before WiX development began.
Decision records in code: Team norms and architectural decisions are encoded in .github/copilot-instructions.md (the AI agent’s instructions file). This serves as a living decision record: what directories are read-only, what commit format to use, what the product actually is (AutoLISP, not C++), what VMs must never be modified. Every coding session starts with these constraints loaded.
Formal decision documents: Major technical decisions get dedicated docs:
Installer technology selection → installer-modernization-decision.md (archived)
Win10 compatibility approach → Windows 10 Upgrade Study - Constructivision Compatibility
Source code recovery strategy → v3.60 Source Recovery — Missing Dependency Fix
8.4 Communication Model¶
This is a documentation-first project. All communication is captured in version-controlled artifacts:
| Channel | Purpose | Frequency | Example Artifact |
|---|---|---|---|
| Sphinx docs | Permanent record of decisions, analysis, procedures | Per-change | This document (35-software-development-lifecycle.md) |
| Git commit messages | Change-level communication | Per-commit | |
| Weekly updates | Status to stakeholders | Weekly | Weekly Updates: deliverables, changes, next steps |
| Bug tracker | Technical communication about defects | Per-bug | Bug Tracker — Validation Campaign: symptom, root cause, fix, DFMEA link |
| GitHub Issues | External-facing bug tracking | As needed | GH Issue #18: |
| copilot-instructions | Team norms for AI agent | Updated as norms evolve | |
There are no formal team meetings. Coordination happens through documentation: the developer writes context into docs, the AI agent reads context from docs, alpha testers receive access guides (Tai, Dat) (sensitive).
8.5 Resource Model¶
| Resource | Allocation | Capacity |
|---|---|---|
| Developer (Chad) | Part-time, session-based | ~10–20 hrs/week during active sprints |
| AI Agent (GitHub Copilot) | On-demand during sessions | Unlimited within session, stateless between sessions |
| Alpha Tester 1 (Tai) | Unstructured testing when available | ~2–4 hrs/week on VM 201 |
| Alpha Tester 2 (Dat) | Unstructured testing when available | ~2–4 hrs/week on VM 202 |
| CI Automation | Continuous (changelog, dashboard) | Every push triggers |
| VM Infrastructure | 24/7, nightly sync at 22:00 | Multiple VMs on Proxmox |
Solo developer with AI force multiplication: The AI agent compensates for limited human resources by:
Performing SSH-based investigation across multiple VMs in minutes (vs hours manually)
Generating comprehensive documentation with cross-references in-session
Maintaining DFMEA bidirectional traceability that would otherwise be too tedious for one person
Reading and correlating information across 35+ modernization documents simultaneously
8.6 Session-Based Development¶
Work happens in interactive sessions between the developer and AI agent, not fixed-length sprints:
Session structure:
Developer opens VS Code with copilot-instructions loaded
States a goal or reports a problem
AI agent investigates: reads files, SSHs into VMs, runs diagnostics
Together they fix, document, and verify
Session ends with atomic commit-push + DFMEA update
Example: The 48-hour sprint (Feb 26–28, 2026):
| Session | Hours | Goal | Output |
|---|---|---|---|
| 1 | ~2 | Investigate Bug 20 (VLX crash on VM 104) | Hex dump analysis, exe-swap theory tested and disproven |
| 2 | ~2 | Deploy Bug 20 workaround | |
| 3 | ~1 | Fix Bug 21 (discovered during Bug 20 verification) | |
| 4 | ~2 | Create SDLC document (doc 35) | Initial 550-line document with DFSS framework |
| 5 | ~2 | Cross-reference pass | 16 replacements adding commit hashes, file paths, line numbers |
| 6 | ~2 | Elaborate Tools & Best Practices + PM Methods | This content |
| Total | ~11 | 3 bugs fixed + SDLC document | 4 commits, 3 DFMEA rows, 1 major document |
This session-based model works because:
The AI agent’s context loads instantly via copilot-instructions.md + conversation history
Documentation captures session state — if a session is interrupted, the next one can resume from documented artifacts
Every session produces a committable artifact — no work-in-progress languishes unrecorded
9. DFSS Principles in Practice¶
9.1 Voice of the Customer (VOC) → Critical-to-Quality (CTQ)¶
| VOC | CTQ | Measurable Spec | Validated By | Bug That Proved It Matters |
|---|---|---|---|---|
| “It should just work when I install it” | Application loads cleanly on startup | | Manual test: user typed | Bug 20 — VLX crash made |
| “I need to open my project drawings” | Project file navigation works | File Open dialog shows | Bug 21 fix verified: | Bug 21 — missing |
| “The menus should look like they always did” | Menu registration correct | | AutoIT screenshot comparison: VM 103 post-fix matches VM 102 baseline | Bug 19 — missing menu registration |
| “Don’t crash” | Zero unhandled exceptions | No | SSH check: | Bug 20 — 3 crash dumps before fix ( |
| “It works on my new computer” | Win10 compatibility | AutoIT validation passes on both XP (VM 102) and Win10 x32 (VM 108) | | Bugs 1–18 all found/fixed on Win10 VM 108; Windows 10 Upgrade Study - Constructivision Compatibility |
9.2 Robust Design — Parameter Diagram¶
Noise Factors (N): Signal (M):
├── OS version (XP/Vista/7/10) └── User commands
│ [Bug 20: XP plot config varies] (csv, panel edit,
├── AutoCAD build/patch level tilt-up, batch, etc.)
│ [Bug 20: R15.0 Startup Suite timing]
├── Printer/plot configuration ┌──────────────────┐
│ [Bug 20: 1 printer vs 7 → crash] M ──►│ ConstructiVision │──► Y (Outputs)
├── Installation method (full/manual) N ──►│ Transfer Function│ ├── Correct drawings
│ [Bugs 19-21: manual install skips] X ──►│ f(M, X, N) │ ├── Accurate calculations
├── Registry state (clean/migrated) └──────────────────┘ ├── Proper file I/O
│ [Bug 19: missing Menus\Group1] └── Clean UI experience
├── Screen resolution
└── Network paths / mapped drives
Control Factors (X):
├── Configure-ConstructiVision.ps1 [scripts/Configure-ConstructiVision.ps1, 109 lines]
│ ├── Support path (line 36)
│ └── Startup Suite (lines 67-109)
├── acaddoc.lsp deferred loading [Bug 20 workaround, 749 bytes on VM 104]
├── Startup Suite configuration [HKCU\...\Dialogs\Appload\Startup]
├── Project path registration [Bug 21: HKCU\...\Project Settings\CV\RefSearchPath]
└── Menu group registration [Bug 19: HKCU\...\Menus\Group1, Pop13]
The goal of the SDLC is to make $Y$ robust — insensitive to noise factors ($N$) — by identifying and controlling the critical $X$ parameters. Each bug we fix adds a control factor. Each DFMEA row predicts where noise might break through. Concrete evidence: VM 104 had 3 noise-induced failures (Bugs 19–21) that VM 102 did not, because VM 102’s InstallShield-created environment had all control factors already set.
9.3 Knowledge-Based Development¶
Creveling emphasizes that DFSS organizations build reusable knowledge — not just working code. Our knowledge artifacts:
| Artifact | Knowledge Captured | Size/Scope |
|---|---|---|
| Bug Tracker (Bug Tracker — Validation Campaign) | 21 bugs, each with root cause, fix, DFMEA cross-reference | 609 lines, 21 detailed reports |
| DFMEA (31 — Comprehensive Workflow & Human Factors Analysis, §9) | 21 failure modes with S/O/D ratings | Table rows #1–21, RPN range 30–500 |
| | Deployment parameters encoded as executable validation | 109 lines: support path, startup suite. Menu reg + RefSearchPath pending |
| Total Uninstall snapshots | Authoritative reference for “correctly installed” state | |
| AutoIT validation script | Automated UI test procedure for regression testing | |
| Crash dump archive | Binary evidence of historical failures | |
| Binary baselines | Preserved executables for forensic comparison | |
| This document | The process itself — how we turn chaos into quality | Cross-references every claim to a specific artifact |
9.4 Statistical Thinking¶
Even with a small sample (7 VMs, 21 bugs), we apply statistical reasoning:
- Occurrence (O) ratings are based on observed frequency across VMs: Bugs 19–21 each appeared on 1 of 7 VMs (the one with a manual installation), so O=3. DFMEA #10 (Help system) has O=10 because it affects 100% of Win10 users.
- Detection (D) ratings drop when we add automated checks: Bug 19 was caught by AutoIT (OCR extracted the error text) → D=7 could drop to D=3 if we add a menu-check step to the script. Bug 20 required manual desktop testing (SSH can’t launch interactive AutoCAD startup scripts) → D=8.
- RPN trending tells us whether our controls are improving: when `Configure-ConstructiVision.ps1` gains menu-registration and RefSearchPath checks, Bugs 19 and 21 get D≈2 (automated prevention), dropping their RPNs from 168→48 and 126→42 respectively.
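The RPN arithmetic behind those numbers can be worked out explicitly. In this sketch the `rpn` helper is illustrative; the S values are not stated in the text but are fully determined by the quoted RPNs together with the O=3 and D ratings above (168 = S×3×7 gives S=8 for Bug 19; 126→42 with D→2 implies D was 6, so S=7 for Bug 21):

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: each factor rated 1-10, product ranges 1-1000."""
    for v in (severity, occurrence, detection):
        assert 1 <= v <= 10, "FMEA ratings are 1-10"
    return severity * occurrence * detection

# Bug 19: S=8, O=3, D=7 -> 168; automated prevention (D=2) drops it to 48.
assert rpn(8, 3, 7) == 168
assert rpn(8, 3, 2) == 48

# Bug 21: S=7, O=3, D=6 -> 126; D=2 drops it to 42.
assert rpn(7, 3, 6) == 126
assert rpn(7, 3, 2) == 42
```

Because S and O are unchanged by better tooling, the entire 168→48 and 126→42 improvement comes from the detection factor alone.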
10. Continuous Improvement — The PDCA Within CDOV¶
Each cycle through the lifecycle is a Plan-Do-Check-Act (PDCA) subcycle:
| PDCA | Maps to | What we do |
|---|---|---|
| Plan | Baseline + FMEA priorities | Identify which bugs to fix, which VMs to test, which DFMEA items to verify |
| Do | Develop + Deploy | Write the fix, commit, push, wait for VM sync |
| Check | Validate + Analyze | Run AutoIT, do manual testing, compare results to specification |
| Act | Optimize FMEA + Deploy | Update DFMEA ratings, expand |
Cycle time: In the last 48 hours (Feb 26–28, 2026), we completed 3 full PDCA cycles:
| Cycle | Bug | Plan | Do | Check | Act | Commits |
|---|---|---|---|---|---|---|
| 1 | Bug 19 | AutoIT screenshots showed error on VM 103 | Added | Re-ran AutoIT → all 11 screenshots match baseline | Added DFMEA row #19 (RPN=168) | |
| 2 | Bug 20 | Manual test: | Created | User typed | Added DFMEA row #20 (RPN=216); disproved exe-swap theory | |
| 3 | Bug 21 | File Open couldn’t find | Added | User confirmed dialog navigates to project subdirectory | Added DFMEA row #21 (RPN=126); identified pattern: all manual installs | |
11. Document Cross-References¶
| Document | Relationship to SDLC | Specific Sections Referenced |
|---|---|---|
| | Phase 1 (Baseline) — what exists | Module inventory, file counts (134 PB11 → 195 TB11) |
| | Phase 5 (Optimize) — project-level risk tracking | R1–R20, top-3 weekly risks |
| | Phase 3 (Validate) — VM infrastructure and test procedures | VM table (102–202), SSH config, security hardening |
| | Phase 3 (Validate) — platform compatibility | Win10 x32 validation, x64 COM failure analysis |
| | Phase 1 (Baseline) — source code recovery | 12 missing modules recovered, commit |
| | Phase 1 (Baseline) — build architecture reference | Junction layout, file inventory, startup chain |
| | Phase 5 (Optimize) — DFMEA table (§9), workflow CTQs | DFMEA rows #1–21, dialog architecture (§3), human factors (§8) |
| | Phase 4 (Analyze) — every bug with DFMEA traceability | 21 detailed reports, DFMEA cross-reference table, Patterns & Lessons |
| Alpha testing plan (sensitive) | Phase 3 (Validate) — alpha tester deployment | VM setup, Tailscale access, test procedures |
| Tai access guide (sensitive) | Phase 3 (Validate) — alpha tester 1 docs | VM 201 access for Tai |
| Dat access guide (sensitive) | Phase 3 (Validate) — alpha tester 2 docs | VM 202 access for Dat |
| | §6 (Versioning) + §8 (PM) — phase gates, compatibility matrix | P0–P5 phases, exit criteria, platform compatibility table, InstallShield→WiX pivot |
| | §8 (PM) — schedule and cadence | Weekly cadence philosophy, quarterly macro plan, budget milestones |
| | §8 (PM) — milestone tracking | M1–M8 milestones with target dates, exit criteria, dependency chain |
| | §8 (PM) — status communication | Weekly deliverables, changes, next steps, decisions |
12. Summary¶
The ConstructiVision SDLC is a closed-loop quality system built on DFSS principles:
Baseline — Know what “working” looks like before you change anything
Develop — Make traceable changes to the test build
Validate — Measure with both automation (AutoIT) and human testing (alpha users)
Analyze — Diagnose root causes, not symptoms; document everything
Optimize — Predict failures with FMEA; close the loop between prediction and observation
Deploy — Push to all environments; verify at scale
The framework produces bidirectional traceability: requirements → design → FMEA predictions → test plans → bugs → FMEA updates → improved controls → deployment. Every bug makes the next deployment more robust.
“The quality of a product is determined by how well it performs its intended function under various conditions of use.” — C.M. Creveling, Design for Six Sigma