Where is that analysis from last month?
20–40 minutes per day lost to folder search.
CAE operations product
Local-first run tracking for simulation teams. Monitor job health, parse logs, and recover engineering context without digging through folders.
1 view
for queued, running, completed, and failed jobs
< 5 min
to find previous runs with metadata filters
Local-first
processing path with optional offline AI summary
CAE Job Manager
| Job Name | Summary | What Changed | Status | Suggestions | Created At |
|---|---|---|---|---|---|
| Bracket_Analysis_001 | Linear static analysis completed successfully. | Updated mesh, added SPC. | COMPLETED | Review stress distribution and deformations. | Apr 25, 2024, 10:12 AM |
| Housing_Optimization_003 | Topology optimization, iteration 15. | Increased load, modified constraints. | RUNNING | Monitor convergence and update objective weights. | Apr 25, 2024, 09:45 AM |
| Beam_Structure_002 | Nodal singularity detected during solve. | No geometry changes in this run. | ERROR | Check constraints and review mesh continuity. | Apr 25, 2024, 08:30 AM |
| Panel_Test_004 | Nonlinear buckling setup staged for queue. | Input pack pending compute node. | QUEUED | Confirm material cards before launch. | Apr 25, 2024, 08:15 AM |
Run Detail: Beam_Structure_002
Inputs
Run index: 2
Input: /path/to/Beam_002.fem
Output: /path/to/Beam_002.out
Diagnostics
Error: Nodal singularity at node 1054.
Warnings: none captured in latest pass.
Log Preview
ERROR: Nodal singularity detected at grid 1054
ERROR #153
Abort triggered due to fatal solver error
Most CAE teams already have strong solvers but weak run operations. CAE Job Manager addresses traceability, visibility, and failure triage in one lightweight layer.
Where is that analysis from last month?
20–40 minutes per day lost to folder search.
Was this result from model version A1 or A2?
Duplicate reruns of already-completed work.
Did my overnight job finish or stall?
Manual checking interrupts deep engineering work.
What solver settings produced this success case?
Missing run metadata slows design iteration.
Can the team see status without pinging me?
Low visibility across analysts and reviewers.
I need lightweight tracking without a heavy PLM rollout.
Too much process overhead for small CAE teams.
Watch selected folders and auto-register jobs from solver input files. No launcher lock-in, no workflow rewrite.
Parse solver logs continuously for progress, run stage, warnings, and error signatures as they happen.
Index name, date, model revision, solver, tags, and status so prior runs can be retrieved in seconds.
Compare two runs to identify what changed in input deck, constraints, mesh, and outcome summary.
Optional local LLM mode summarizes logs and flags likely root causes without sending data to the cloud.
Shared status board and standardized notes reduce handoff friction between analysts, leads, and test teams.
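The folder-watch-and-register idea above can be sketched with the standard library alone, using polling rather than OS notifications. The extension set and the `RunRecord` fields are assumptions for illustration, not the shipped pipeline:

```python
import os
import time
from dataclasses import dataclass

# Extensions treated as solver input decks (assumed set for illustration).
INPUT_EXTS = {".fem", ".inp", ".bdf", ".k"}

@dataclass
class RunRecord:
    path: str
    name: str
    registered_at: float

def scan_for_new_runs(folder, known):
    """Poll a watched folder and register any unseen solver input files."""
    new_runs = []
    for entry in sorted(os.listdir(folder)):
        full = os.path.join(folder, entry)
        ext = os.path.splitext(entry)[1].lower()
        if ext in INPUT_EXTS and full not in known:
            known.add(full)
            new_runs.append(RunRecord(full, os.path.splitext(entry)[0], time.time()))
    return new_runs
```

Calling `scan_for_new_runs` on a timer gives the "no launcher lock-in" property: engineers keep submitting jobs however they do today, and registration happens as a side effect of files appearing.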
Deterministic parser logic remains the source of truth. AI assistance, when enabled, acts as a sidecar for summarization and suggestions.
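The sidecar split can be read as follows: a deterministic regex pass produces the record of truth, and the optional local model only annotates it. The signature patterns below are illustrative, not the real per-solver adapter set:

```python
import re

# Illustrative error signatures; a real deployment would ship per-solver adapters.
SIGNATURES = [
    (re.compile(r"Nodal singularity detected at grid (\d+)", re.I),
     "nodal_singularity"),
    (re.compile(r"ERROR\s*#\s*(\d+)"), "solver_error_code"),
]

def classify_log(lines):
    """Deterministic pass: return (signature, detail) hits in log order."""
    hits = []
    for line in lines:
        for pattern, label in SIGNATURES:
            m = pattern.search(line)
            if m:
                hits.append((label, m.group(1)))
    return hits

def summarize(lines, llm=None):
    """Parser output is the source of truth; the optional LLM only adds a note."""
    hits = classify_log(lines)
    summary = {"signatures": hits, "ai_note": None}
    if llm is not None:  # sidecar: enabled per team, runs fully local
        summary["ai_note"] = llm("\n".join(lines))
    return summary
```

With the sidecar disabled, `summarize` still yields the full classification, so nothing operational depends on the AI path being present.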
Watcher and Ingestion
File watchers detect new runs, collect basic metadata, and attach project context.
Parser and Classification
Log adapters parse solver output, map state transitions, and classify warnings/errors.
Storage and Search
Local SQLite by default with extensible API path for team-level persistence.
UI and Collaboration
Dashboard, filters, run detail pane, and suggestion stream for operational decisions.
Optional AI Copilot
Local model endpoint provides explainability hints; deterministic parsing remains source of truth.
Current technical scope for MVP discussions with engineering teams.
| Area | Support | Notes |
|---|---|---|
| Input detection | .fem, .inp, .bdf, .k | File watch and registration pipeline. |
| Progress sources | .out, .sta, d3hsp, solver stdout | Parser adapters map solver stage and iteration. |
| Status model | Queued, Running, Completed, Error | Unified state machine across solver families. |
| Data location | Local workstation paths | Designed for confidentiality-first engineering teams. |
| Export package | Run summary, log snippet, change notes | Ready for design-review attachments. |
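The unified status row in the table can be read as a small state machine. The transition table below is an assumed sketch of the Queued → Running → Completed/Error flow, not a documented spec:

```python
# Assumed transition table for the unified status model.
TRANSITIONS = {
    "QUEUED": {"RUNNING"},
    "RUNNING": {"COMPLETED", "ERROR"},
    "COMPLETED": set(),   # terminal
    "ERROR": set(),       # terminal
}

def advance(current, event):
    """Validate a solver-reported state change against the unified model."""
    if event not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {event}")
    return event
```

Mapping every solver family onto this one machine is what lets a single dashboard view cover queued, running, completed, and failed jobs without per-solver status logic in the UI.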
A practical 4-week deployment path for a first CAE team, focused on run visibility and error triage before broader automation.
STEP 1
Week 1: map active solver workflows and watched directories.
STEP 2
Week 2: configure parser adapters and state mapping for your team.
STEP 3
Week 3: stand up dashboard views and run-history indexing.
STEP 4
Week 4: validate error triage flow and finalize rollout checklist.