VigilGuard Enterprise v1.5 Released
Improved Detection Accuracy
The semantic engine received its most significant overhaul since the platform launched. The objective was to reduce false blocks while improving detection of real threats.
Context Consistency introduces a new ONNX model that evaluates the relationship between user content and system context. Previously, prompts were analyzed in isolation, meaning a legitimate business message containing keywords associated with attacks could be blocked. The system now understands conversational context and can distinguish safe instructions from manipulation attempts.
Trust-Tier Remediation classifies safe message patterns into three trust levels. Verified patterns (from BIPIA and business scenarios) receive full protection against false blocks. Unverified patterns are automatically downgraded in contexts where they could mask a real attack. This mechanism particularly improved accuracy for Polish-language prompts.
The semantic pattern database was enriched with new categories: financial transactions, operational notifications, technical responses, and structured templates. On the attack side, 29 new patterns were added, covering jailbreaking (from Babelscape/ALERT), injection, and multi-shot attacks (from garak).
| Metric | Change |
|---|---|
| False blocks | -28% |
| Missed attacks | -24% |
| F1 Score | +2.4 pp |
Injection Detection Model v14
The prompt injection classifier advanced two model generations (v12 to v14).
Polish language coverage. Nine new pattern sets covering Polish business scenarios: tool use, data access, security messages, bot greetings, and conversational exchanges. Business-related prompts are now significantly less likely to trigger false blocks.
New attack types. Expanded coverage of jailbreaking, injection, and multi-shot attacks that could previously go undetected.
Both model versions were validated against various benchmarks across four standardized test profiles.
Automatic Resource Scaling
Previously, the system ran with identical settings regardless of the underlying hardware. On smaller servers, services competed for CPU. On larger ones, resources sat idle.
During installation, the system now detects host parameters (CPU, RAM, disk). It automatically configures each service: thread allocation for the injection classifier, concurrent API request limits, and parallelism for PII and semantic analysis. No manual configuration is required.
When the system reaches capacity, the API returns HTTP 503 with a Retry-After header instead of accepting additional requests and degrading response times for everyone. The Python SDK retries automatically after the indicated interval.
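The retry behavior can be sketched as follows. This is a minimal illustration of honoring 503 backpressure, not the actual SDK code: the function and parameter names are hypothetical, and the transport is injected as a callable so the sketch stays self-contained.

```python
import time


def retry_delay(headers, default=1.0):
    """Parse a Retry-After header (delay-seconds form) into a float delay."""
    try:
        return max(0.0, float(headers.get("Retry-After", default)))
    except (TypeError, ValueError):
        return default


def detect_with_retry(post, payload, max_retries=3):
    """Call a detection endpoint via `post`, honoring 503 backpressure.

    `post` is any callable returning an object with .status_code,
    .headers, and .json() -- e.g. functools.partial(requests.post, url).
    Retries up to `max_retries` times, sleeping the server-indicated
    interval between attempts.
    """
    for _ in range(max_retries + 1):
        resp = post(json=payload)
        if resp.status_code != 503:
            return resp.json()
        # Server is at capacity: wait the interval it indicated, then retry.
        time.sleep(retry_delay(resp.headers))
    raise RuntimeError("service still at capacity after retries")
```

Respecting the indicated interval instead of retrying immediately keeps clients from amplifying load on a server that has already signaled it is saturated.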
| Profile | Server | API in-flight | PII parallel | Semantic parallel |
|---|---|---|---|---|
| prod-32 | 16 CPU / 30 GB | 8 | 4 | 2 |
| prod-64 | 24 CPU / 64 GB | 10 | 5 | 2 |
| prod-128 | 32 CPU / 128 GB | 16 | 8 | 4 |
| prod-256 | 48 CPU / 256 GB | 24 | 8 | 4 |
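The selection logic implied by the table above can be sketched like this. The threshold values are taken from the table, but treating them as minimums and falling back to the smallest profile are assumptions; the installer's exact cutoff rules are not documented here.

```python
# Profiles from the table above, largest first:
# (name, min_cpu, min_ram_gb, api_in_flight, pii_parallel, semantic_parallel)
PROFILES = [
    ("prod-256", 48, 256, 24, 8, 4),
    ("prod-128", 32, 128, 16, 8, 4),
    ("prod-64",  24, 64,  10, 5, 2),
    ("prod-32",  16, 30,  8,  4, 2),
]


def select_profile(cpus, ram_gb):
    """Pick the largest profile the host satisfies (assumed fallback: prod-32)."""
    for name, min_cpu, min_ram, api, pii, sem in PROFILES:
        if cpus >= min_cpu and ram_gb >= min_ram:
            return {"profile": name, "api_in_flight": api,
                    "pii_parallel": pii, "semantic_parallel": sem}
    # Hosts below the smallest tier get the most conservative settings.
    return {"profile": "prod-32", "api_in_flight": 8,
            "pii_parallel": 4, "semantic_parallel": 2}
```

For example, a 24-CPU / 64 GB host lands on prod-64 and gets 10 concurrent API requests.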
Bielik AI Chat Protection
The browser extension now protects conversations on the Bielik AI platform, a leading Polish language model.
Supported platforms (9): ChatGPT, Claude, Gemini, Copilot, Perplexity, Mistral, Grok, AnonChatGPT, Bielik.
The extension intercepts messages before they are sent. When a threat is detected, content is either sanitized or blocked with a notification. After submission, the chat view is synchronized so the user sees what was actually sent. Chat sessions are not interrupted.
In-Panel Documentation
API Reference. Complete documentation for all five endpoints (Detect Input, Detect Output, Analyze, Batch, License Status) with code examples, response schemas, and error descriptions. Accessible directly from the management panel.
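A call to the Detect Input endpoint might be assembled as sketched below. The base URL, endpoint path, payload fields, and the `verdict` response field are all illustrative assumptions; the authoritative schemas are in the in-panel API Reference.

```python
import json

API_BASE = "https://vigilguard.example/api/v1"  # hypothetical base URL


def build_detect_input_request(text, api_key):
    """Assemble a Detect Input call (path and fields are illustrative)."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/detect/input",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text}),
    }


def is_blocked(response_body):
    """Interpret a hypothetical {"verdict": "allow"|"sanitize"|"block"} reply."""
    return response_body.get("verdict") == "block"
```

Separating request construction from transport keeps the example testable and mirrors how an SDK typically layers serialization beneath an HTTP client.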
Scaling Profiles. Explains automatic scaling: available profiles, selection logic, and per-service memory usage (31.5 GB RAM total for the full stack).
Upgrading. Step-by-step upgrade guide covering what happens at each stage, which data is preserved, how rollback works, and air-gapped mode for offline environments.