work on well of souls and yolo detection

This commit is contained in:
Boki 2026-02-20 16:40:50 -05:00
parent 3456e0d62a
commit 40d30115bf
41 changed files with 3031 additions and 148 deletions

View file: README.md

@@ -0,0 +1,123 @@
# YOLO Detection Pipeline
Capture frames → annotate → train → detect live in-game.
## Setup
```bash
cd tools/python-detect
python -m venv .venv
source .venv/Scripts/activate   # Windows (Git Bash); use .venv\Scripts\activate in cmd
pip install ultralytics opencv-python
# GPU training (NVIDIA):
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128 --force-reinstall
```
## 1. Capture Frames
In the app Debug tab, click **Burst Capture** during a boss fight. Frames save to `training-data/raw/` every 200ms. Click **Stop Capture** when done.
## 2. Annotate
```bash
python annotate.py # default dir is ../../training-data/kulemak/raw
python annotate.py ../../training-data/raw
tools\python-detect\.venv\Scripts\python.exe tools\python-detect\annotate.py training-data\raw
```
### Toolbar
Clickable buttons at the top of the window:
| Button | Hotkey | Action |
|---|---|---|
| Predict | **P** | Run YOLO model on current image to auto-generate boxes |
| Save+Next | **Space** | Save labels and advance to next image |
| Filter | **F** | Cycle filter: All → Unlabeled → Empty → Labeled → per-class |
| Undo | **Z** | Undo last change |
| Del Image | **X** | Delete current image file and its label |
### Controls
| Action | Input |
|---|---|
| Draw new box | Left-drag on empty area |
| Select box | Left-click on box |
| Move box | Left-drag box body |
| Resize box | Left-drag corner handle |
| Cycle class | Right-click on box |
| Set class | 1-9 keys (selected box or new-box default) |
| Delete box | Delete / Backspace |
| Navigate | Left / Right arrow |
| Deselect | E |
| Quit | Q / Escape (auto-saves) |
### Predict (P)
Press **P** or click the Predict button to run the current YOLO model on the image; the annotator picks the highest-versioned `boss-v*.pt` in `models/` (falling back to any `.pt`). It replaces existing boxes with model predictions (undoable with Z). The model lazy-loads on first use.
Workflow: press P to auto-predict → adjust/delete bad boxes → Space to save+next.
### Classes
Edit the `CLASSES` list at the top of `annotate.py` to add new boss types:
```python
CLASSES = ["kulemak", "arbiter"]
```
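Each saved label file uses the standard YOLO text format written by `annotate.py`: one `class_id cx cy w h` line per box, with centre and size normalised to 0..1. A minimal reader sketch (the function name is illustrative):

```python
# Parse a YOLO-format label file: one "class_id cx cy w h" line per box,
# with centre/size coordinates normalised to the image dimensions.
def read_labels(path):
    boxes = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 5:
                boxes.append((int(parts[0]), *map(float, parts[1:5])))
    return boxes
```

An empty `.txt` file is still meaningful: it marks the image as a labeled negative example (no boxes).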
## 3. Split Dataset
Create a train/valid split (85/15) for training. Example structure:
```
training-data/boss-dataset/
├── data.yaml
├── train/
│ ├── images/
│ └── labels/
└── valid/
├── images/
└── labels/
```
`data.yaml`:
```yaml
train: C:/Users/boki/repos/poe2trade/training-data/boss-dataset/train/images
val: C:/Users/boki/repos/poe2trade/training-data/boss-dataset/valid/images
nc: 2
names: ['kulemak', 'arbiter']
```
Copy images and their `.txt` label files into the appropriate split directories.
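The copy step can be scripted. A minimal sketch, assuming labels sit next to their images in the raw directory (the helper name and the 85/15 default are illustrative, and empty negative labels get no special handling here):

```python
# Sketch: shuffle labeled images, copy ~85% to train/ and the rest to valid/.
# Only images that have a sibling .txt label file are included.
import glob
import os
import random
import shutil

def split_dataset(raw_dir, out_dir, ratio=0.85, seed=42):
    images = sorted(glob.glob(os.path.join(raw_dir, "*.jpg")))
    images = [p for p in images
              if os.path.exists(os.path.splitext(p)[0] + ".txt")]
    random.Random(seed).shuffle(images)
    n_train = round(len(images) * ratio)
    for split, files in (("train", images[:n_train]),
                         ("valid", images[n_train:])):
        img_out = os.path.join(out_dir, split, "images")
        lbl_out = os.path.join(out_dir, split, "labels")
        os.makedirs(img_out, exist_ok=True)
        os.makedirs(lbl_out, exist_ok=True)
        for img in files:
            shutil.copy2(img, img_out)
            shutil.copy2(os.path.splitext(img)[0] + ".txt", lbl_out)
```

A fixed seed keeps the split reproducible across rebuilds, so validation metrics stay comparable between training runs.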
## 4. Train
```bash
python train.py --data C:/Users/boki/repos/poe2trade/training-data/boss-dataset/data.yaml --name boss-v1 --epochs 100
```
Options: `--batch 16`, `--imgsz 640`, `--device 0` (GPU) or `--device cpu`.
Best weights are auto-copied to `models/boss-v1.pt`. If ultralytics auto-increments the run name (e.g. `boss-v12`), copy manually:
```bash
cp runs/detect/boss-v1*/weights/best.pt models/boss-v1.pt
```
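When several auto-incremented runs exist, the highest version can also be located programmatically. A sketch following the `boss-v<N>` naming convention (the helper name is illustrative):

```python
# Find the highest-versioned "boss-v<N>" run directory under runs_dir.
import os
import re

def latest_run(runs_dir, prefix="boss-v"):
    best, best_ver = None, -1
    for d in os.listdir(runs_dir):
        m = re.match(re.escape(prefix) + r"(\d+)$", d)
        if m and int(m.group(1)) > best_ver:
            best_ver, best = int(m.group(1)), d
    return best  # None when no directory matches the prefix
```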
## 5. Live Detection
The C# app loads `models/boss-v1.pt` via `daemon.py` (JSON stdin/stdout protocol). Enable detection in the Mapping tab. Boss bounding boxes render on the Direct2D overlay in cyan.
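For debugging the bridge, a detect request can be assembled by hand. In this sketch the `image` field name and one-JSON-object-per-line framing are assumptions; `conf`, `iou`, `imgsz`, and `model` mirror the defaults `daemon.py` reads via `req.get(...)`:

```python
# Sketch of a detect request for the inference daemon.
# The "image" key name is an assumption; the numeric defaults match
# the req.get(...) fallbacks in daemon.py's handle_detect.
import base64
import json

def build_detect_request(frame_path, model="enemy-v1"):
    with open(frame_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    req = {
        "image": b64,    # base64-encoded PNG/JPEG frame (assumed key)
        "conf": 0.3,
        "iou": 0.45,
        "imgsz": 640,
        "model": model,
    }
    return json.dumps(req)  # one JSON object per line on the daemon's stdin
```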
## Files
| File | Purpose |
|---|---|
| `annotate.py` | Interactive annotator with predict, select/move/resize, tag, filter |
| `train.py` | Train YOLO11n, copies best weights to `models/` |
| `daemon.py` | Persistent inference daemon (managed by C# `PythonDetectBridge`) |
| `fix_labels.py` | One-time fix for labels generated with the old buggy annotator |
| `models/` | Trained `.pt` weight files |

View file: annotate.py

@@ -0,0 +1,760 @@
"""
Bounding-box annotator with select / move / resize / tag / filter / predict.
Controls
--------
Left drag (empty area) : draw new box
Left click (on box) : select it
Left drag (box body) : move it
Left drag (corner) : resize it
Right-click (on box) : cycle class
1-9 : set class of selected box (or default new-box class)
Delete : remove selected box
Space / Enter : save + next image
Left / Right arrow : prev / next image
P : run YOLO model on current image (auto-predict)
F : cycle filter (All > Unlabeled > Empty > Labeled > per-class)
Z : undo
X : delete image file + next
E : deselect
Q / Escape : quit (auto-saves current)
Toolbar buttons at the top are also clickable.
Aspect ratio is always preserved (letterboxed).
Saves YOLO-format .txt labels alongside images.
"""
import cv2
import numpy as np
import os
import sys
import glob
# ── Classes ──────────────────────────────────────────────────────
# Passed in by manage.py via run_annotator(img_dir, classes).
# Standalone fallback: single-class kulemak.
DEFAULT_CLASSES = ["kulemak"]
COLORS = [ # BGR, as used by OpenCV drawing calls
(0, 255, 255), # yellow (kulemak)
(255, 0, 255), # magenta (arbiter)
(0, 255, 0), # green
(255, 128, 0), # azure
(128, 0, 255), # pink
]
HANDLE_R = 7 # corner handle radius (px)
MIN_BOX = 0.01 # min normalised box dimension
PREDICT_CONF = 0.20 # confidence threshold for auto-predict
# Layout
TOOLBAR_Y = 32 # top of toolbar row (below info line)
TOOLBAR_H = 30 # toolbar row height
IMG_TOP = TOOLBAR_Y + TOOLBAR_H + 4 # image area starts here
HELP_H = 22 # reserved at bottom for help text
# Windows arrow / special key codes from cv2.waitKeyEx
K_LEFT = 2424832
K_RIGHT = 2555904
K_DEL = 3014656
# ── Box dataclass ─────────────────────────────────────────────────
class Box:
__slots__ = ("cx", "cy", "w", "h", "cls_id")
def __init__(self, cx, cy, w, h, cls_id=0):
self.cx, self.cy, self.w, self.h, self.cls_id = cx, cy, w, h, cls_id
@property
def x1(self): return self.cx - self.w / 2
@property
def y1(self): return self.cy - self.h / 2
@property
def x2(self): return self.cx + self.w / 2
@property
def y2(self): return self.cy + self.h / 2
def set_corners(self, x1, y1, x2, y2):
self.cx = (x1 + x2) / 2
self.cy = (y1 + y2) / 2
self.w = abs(x2 - x1)
self.h = abs(y2 - y1)
def contains(self, nx, ny):
return self.x1 <= nx <= self.x2 and self.y1 <= ny <= self.y2
def corner_at(self, nx, ny, thr):
for hx, hy, tag in [
(self.x1, self.y1, "tl"), (self.x2, self.y1, "tr"),
(self.x1, self.y2, "bl"), (self.x2, self.y2, "br"),
]:
if abs(nx - hx) < thr and abs(ny - hy) < thr:
return tag
return None
def copy(self):
return Box(self.cx, self.cy, self.w, self.h, self.cls_id)
# ── Toolbar button ────────────────────────────────────────────────
class Button:
__slots__ = ("label", "action", "x1", "y1", "x2", "y2")
def __init__(self, label, action):
self.label = label
self.action = action
self.x1 = self.y1 = self.x2 = self.y2 = 0
def hit(self, wx, wy):
return self.x1 <= wx <= self.x2 and self.y1 <= wy <= self.y2
# ── Main tool ─────────────────────────────────────────────────────
class Annotator:
def __init__(self, img_dir, classes=None):
self.classes = classes or DEFAULT_CLASSES
self.img_dir = os.path.abspath(img_dir)
self.all_files = self._scan()
# filter
self.FILTERS = ["all", "unlabeled", "empty", "labeled"] + \
[f"class:{i}" for i in range(len(self.classes))]
self.filt_idx = 0
self.files = list(self.all_files)
self.pos = 0
# image state
self.img = None
self.iw = 0
self.ih = 0
self.boxes = []
self.sel = -1
self.cur_cls = 0
self.dirty = False
self.undo_stack = []
# drag state
self.mode = None
self.d_start = None
self.d_anchor = None
self.d_orig = None
self.mouse_n = None
# display
self.WIN = "Annotator"
self.ww = 1600
self.wh = 900
self._cache = None
self._cache_key = None
# toolbar buttons (laid out during _draw)
self.buttons = [
Button("[P] Predict", "predict"),
Button("[Space] Save+Next", "save_next"),
Button("[F] Filter", "filter"),
Button("[Z] Undo", "undo"),
Button("[X] Del Image", "del_img"),
]
# YOLO model (lazy-loaded)
self._model = None
self._model_tried = False
# stats
self.n_saved = 0
self.n_deleted = 0
# ── file scanning ─────────────────────────────────────────────
def _scan(self):
files = []
for ext in ("*.jpg", "*.jpeg", "*.png"):
files.extend(glob.glob(os.path.join(self.img_dir, ext)))
files.sort()
return files
@staticmethod
def _lbl(fp):
return os.path.splitext(fp)[0] + ".txt"
@staticmethod
def _is_empty_label(lp):
"""Label file exists but has no boxes (negative example)."""
if not os.path.exists(lp):
return False
with open(lp) as f:
return f.read().strip() == ""
@staticmethod
def _has_labels(lp):
"""Label file exists and contains at least one box."""
if not os.path.exists(lp):
return False
with open(lp) as f:
return f.read().strip() != ""
def _refilter(self):
mode = self.FILTERS[self.filt_idx]
if mode == "all":
self.files = [f for f in self.all_files if os.path.exists(f)]
elif mode == "unlabeled":
self.files = [f for f in self.all_files
if os.path.exists(f) and not os.path.exists(self._lbl(f))]
elif mode == "empty":
self.files = [f for f in self.all_files
if os.path.exists(f) and self._is_empty_label(self._lbl(f))]
elif mode == "labeled":
self.files = [f for f in self.all_files
if os.path.exists(f) and self._has_labels(self._lbl(f))]
elif mode.startswith("class:"):
cid = int(mode.split(":")[1])
self.files = []
for f in self.all_files:
if not os.path.exists(f):
continue
lp = self._lbl(f)
if os.path.exists(lp):
with open(lp) as fh:
if any(l.strip().startswith(f"{cid} ") for l in fh):
self.files.append(f)
self.pos = max(0, min(self.pos, len(self.files) - 1))
# ── I/O ───────────────────────────────────────────────────────
def _load(self):
if not self.files:
return False
self.img = cv2.imread(self.files[self.pos])
if self.img is None:
return False
self.ih, self.iw = self.img.shape[:2]
self._cache = None
self._load_boxes()
self.sel = -1
self.dirty = False
self.undo_stack.clear()
return True
def _load_boxes(self):
self.boxes = []
lp = self._lbl(self.files[self.pos])
if not os.path.exists(lp):
return
with open(lp) as f:
for line in f:
p = line.strip().split()
if len(p) >= 5:
self.boxes.append(
Box(float(p[1]), float(p[2]),
float(p[3]), float(p[4]), int(p[0])))
def _save(self):
if not self.files:
return
lp = self._lbl(self.files[self.pos])
with open(lp, "w") as f:
for b in self.boxes:
f.write(f"{b.cls_id} {b.cx:.6f} {b.cy:.6f} "
f"{b.w:.6f} {b.h:.6f}\n")
self.n_saved += 1
self.dirty = False
def _push_undo(self):
self.undo_stack.append([b.copy() for b in self.boxes])
if len(self.undo_stack) > 50:
self.undo_stack.pop(0)
def _pop_undo(self):
if not self.undo_stack:
return
self.boxes = self.undo_stack.pop()
self.sel = -1
self.dirty = True
# ── YOLO predict ──────────────────────────────────────────────
def _ensure_model(self):
if self._model_tried:
return self._model is not None
self._model_tried = True
model_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "models")
# Find the latest boss-v*.pt by version number, fallback to any .pt
import re
best_name, best_ver = None, -1
if os.path.isdir(model_dir):
for name in os.listdir(model_dir):
if not name.endswith(".pt"):
continue
m = re.match(r"boss-v(\d+)\.pt$", name)
if m and int(m.group(1)) > best_ver:
best_ver = int(m.group(1))
best_name = name
elif best_name is None:
best_name = name
if best_name:
path = os.path.join(model_dir, best_name)
print(f" Loading model: {best_name} ...")
from ultralytics import YOLO
self._model = YOLO(path)
print(f" Model loaded.")
return True
print(f" No .pt model found in {model_dir}")
return False
def _predict(self):
if self.img is None or not self.files:
return
if not self._ensure_model():
return
self._push_undo()
results = self._model(self.files[self.pos], conf=PREDICT_CONF, verbose=False)
det = results[0].boxes
self.boxes = []
for box in det:
cls_id = int(box.cls[0])
cx, cy, w, h = box.xywhn[0].tolist()
conf = box.conf[0].item()
self.boxes.append(Box(cx, cy, w, h, cls_id))
self.sel = -1
self.dirty = True
print(f" Predicted {len(self.boxes)} box(es)")
# ── coordinate transforms (letterbox) ─────────────────────────
def _xform(self):
"""Returns (scale, offset_x, offset_y) for letterbox display."""
avail_h = max(1, self.wh - IMG_TOP - HELP_H)
s = min(self.ww / self.iw, avail_h / self.ih)
dw = int(self.iw * s)
dh = int(self.ih * s)
ox = (self.ww - dw) // 2
oy = IMG_TOP + (avail_h - dh) // 2
return s, ox, oy
def _to_norm(self, wx, wy):
s, ox, oy = self._xform()
return (wx - ox) / (self.iw * s), (wy - oy) / (self.ih * s)
def _to_win(self, nx, ny):
s, ox, oy = self._xform()
return int(nx * self.iw * s + ox), int(ny * self.ih * s + oy)
def _corner_thr(self):
s, _, _ = self._xform()
return (HANDLE_R + 4) / (min(self.iw, self.ih) * s)
# ── hit-test ──────────────────────────────────────────────────
def _hit(self, nx, ny):
thr = self._corner_thr()
if 0 <= self.sel < len(self.boxes):
b = self.boxes[self.sel]
c = b.corner_at(nx, ny, thr)
if c:
return self.sel, c
if b.contains(nx, ny):
return self.sel, "inside"
for i, b in enumerate(self.boxes):
c = b.corner_at(nx, ny, thr)
if c:
return i, c
for i, b in enumerate(self.boxes):
if b.contains(nx, ny):
return i, "inside"
return -1, None
# ── drawing ───────────────────────────────────────────────────
def _scaled_base(self):
s, ox, oy = self._xform()
sz = (int(self.iw * s), int(self.ih * s))
key = (sz, self.ww, self.wh)
if self._cache is not None and self._cache_key == key:
return self._cache.copy(), s, ox, oy
canvas = np.zeros((self.wh, self.ww, 3), np.uint8)
resized = cv2.resize(self.img, sz, interpolation=cv2.INTER_AREA)
canvas[oy:oy + sz[1], ox:ox + sz[0]] = resized
self._cache = canvas
self._cache_key = key
return canvas.copy(), s, ox, oy
def _draw(self):
if self.img is None:
canvas = np.zeros((self.wh, self.ww, 3), np.uint8)
cv2.putText(canvas, "No images", (self.ww // 2 - 60, self.wh // 2),
cv2.FONT_HERSHEY_SIMPLEX, 0.7, (128, 128, 128), 1)
cv2.imshow(self.WIN, canvas)
return
canvas, s, ox, oy = self._scaled_base()
# ── Annotation boxes ──
for i, b in enumerate(self.boxes):
col = COLORS[b.cls_id % len(COLORS)]
is_sel = i == self.sel
p1 = self._to_win(b.x1, b.y1)
p2 = self._to_win(b.x2, b.y2)
cv2.rectangle(canvas, p1, p2, col, 3 if is_sel else 2)
name = self.classes[b.cls_id] if b.cls_id < len(self.classes) else f"c{b.cls_id}"
(tw, th), _ = cv2.getTextSize(name, cv2.FONT_HERSHEY_SIMPLEX, 0.55, 1)
cv2.rectangle(canvas, (p1[0], p1[1] - th - 8),
(p1[0] + tw + 6, p1[1]), col, -1)
cv2.putText(canvas, name, (p1[0] + 3, p1[1] - 5),
cv2.FONT_HERSHEY_SIMPLEX, 0.55, (0, 0, 0), 1)
if is_sel:
for hx, hy in [p1, (p2[0], p1[1]), (p1[0], p2[1]), p2]:
cv2.circle(canvas, (hx, hy), HANDLE_R, (255, 255, 255), -1)
cv2.circle(canvas, (hx, hy), HANDLE_R, col, 2)
# rubber-band
if self.mode == "draw" and self.d_start and self.mouse_n:
col = COLORS[self.cur_cls % len(COLORS)]
cv2.rectangle(canvas,
self._to_win(*self.d_start),
self._to_win(*self.mouse_n), col, 2)
# ── HUD info line ──
if self.files:
fname = os.path.basename(self.files[self.pos])
n = len(self.files)
filt = self.FILTERS[self.filt_idx]
cname = self.classes[self.cur_cls] if self.cur_cls < len(self.classes) \
else f"c{self.cur_cls}"
info = (f"[{self.pos + 1}/{n}] {fname} | "
f"filter: {filt} | new class: {cname} | "
f"boxes: {len(self.boxes)}")
if self.dirty:
info += " *"
cv2.putText(canvas, info, (10, 22),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (220, 220, 220), 1)
# class legend (top-right)
for i, c in enumerate(self.classes):
txt = f"{i + 1}: {c}"
(tw, _), _ = cv2.getTextSize(txt, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1)
cv2.putText(canvas, txt,
(self.ww - tw - 12, 22 + i * 20),
cv2.FONT_HERSHEY_SIMPLEX, 0.5,
COLORS[i % len(COLORS)], 1)
# ── Toolbar buttons ──
bx = 10
for btn in self.buttons:
(tw, th), _ = cv2.getTextSize(btn.label, cv2.FONT_HERSHEY_SIMPLEX, 0.45, 1)
bw = tw + 16
bh = TOOLBAR_H - 4
btn.x1 = bx
btn.y1 = TOOLBAR_Y
btn.x2 = bx + bw
btn.y2 = TOOLBAR_Y + bh
# button bg
cv2.rectangle(canvas, (btn.x1, btn.y1), (btn.x2, btn.y2),
(60, 60, 60), -1)
cv2.rectangle(canvas, (btn.x1, btn.y1), (btn.x2, btn.y2),
(140, 140, 140), 1)
# button text
cv2.putText(canvas, btn.label,
(bx + 8, TOOLBAR_Y + bh - 7),
cv2.FONT_HERSHEY_SIMPLEX, 0.45, (220, 220, 220), 1)
bx = btn.x2 + 6
# ── Help bar (bottom) ──
cv2.putText(
canvas,
"drag=draw | click=select | drag=move/resize | RClick=cycle class"
" | 1-9=class | Del=remove box | E=deselect",
(10, self.wh - 6),
cv2.FONT_HERSHEY_SIMPLEX, 0.36, (120, 120, 120), 1)
cv2.imshow(self.WIN, canvas)
# ── mouse ─────────────────────────────────────────────────────
def _on_mouse(self, ev, wx, wy, flags, _):
nx, ny = self._to_norm(wx, wy)
self.mouse_n = (nx, ny)
if ev == cv2.EVENT_LBUTTONDOWN:
# check toolbar buttons first
for btn in self.buttons:
if btn.hit(wx, wy):
self._do_action(btn.action)
return
# only interact with image area below toolbar
if wy < IMG_TOP:
return
idx, what = self._hit(nx, ny)
if what in ("tl", "tr", "bl", "br"):
self._push_undo()
self.sel = idx
self.mode = "resize"
b = self.boxes[idx]
opp = {"tl": (b.x2, b.y2), "tr": (b.x1, b.y2),
"bl": (b.x2, b.y1), "br": (b.x1, b.y1)}
self.d_anchor = opp[what]
self.d_start = (nx, ny)
elif what == "inside":
self._push_undo()
self.sel = idx
self.mode = "move"
self.d_start = (nx, ny)
b = self.boxes[idx]
self.d_orig = (b.cx, b.cy)
else:
self.sel = -1
self.mode = "draw"
self.d_start = (nx, ny)
self._draw()
elif ev == cv2.EVENT_MOUSEMOVE:
if self.mode == "draw":
self._draw()
elif self.mode == "move" and self.d_start and \
0 <= self.sel < len(self.boxes):
b = self.boxes[self.sel]
b.cx = self.d_orig[0] + (nx - self.d_start[0])
b.cy = self.d_orig[1] + (ny - self.d_start[1])
self.dirty = True
self._draw()
elif self.mode == "resize" and self.d_anchor:
b = self.boxes[self.sel]
ax, ay = self.d_anchor
b.set_corners(min(ax, nx), min(ay, ny),
max(ax, nx), max(ay, ny))
self.dirty = True
self._draw()
elif ev == cv2.EVENT_LBUTTONUP:
if self.mode == "draw" and self.d_start:
x1, y1 = min(self.d_start[0], nx), min(self.d_start[1], ny)
x2, y2 = max(self.d_start[0], nx), max(self.d_start[1], ny)
if (x2 - x1) > MIN_BOX and (y2 - y1) > MIN_BOX:
self._push_undo()
b = Box(0, 0, 0, 0, self.cur_cls)
b.set_corners(x1, y1, x2, y2)
self.boxes.append(b)
self.sel = len(self.boxes) - 1
self.dirty = True
self.mode = None
self.d_start = self.d_anchor = self.d_orig = None
self._draw()
elif ev == cv2.EVENT_RBUTTONDOWN:
idx, _ = self._hit(nx, ny)
if idx >= 0:
self._push_undo()
self.sel = idx
self.boxes[idx].cls_id = \
(self.boxes[idx].cls_id + 1) % len(self.classes)
self.dirty = True
self._draw()
# ── actions (shared by keys + buttons) ────────────────────────
def _do_action(self, action):
if action == "predict":
self._predict()
self._draw()
elif action == "save_next":
self._do_save_next()
elif action == "filter":
self._do_filter()
elif action == "undo":
self._pop_undo()
self._draw()
elif action == "del_img":
self._do_del_img()
def _do_save_next(self):
if not self.files:
return
self._save()
fname = os.path.basename(self.files[self.pos])
print(f" Saved {fname} ({len(self.boxes)} box(es))")
self._goto(self.pos + 1)
def _do_filter(self):
self.filt_idx = (self.filt_idx + 1) % len(self.FILTERS)
if self.dirty:
self._save()
self._refilter()
if self.files:
self._load()
self._draw()
print(f" Filter: {self.FILTERS[self.filt_idx]}"
f" ({len(self.files)} images)")
else:
self.img = None
self._draw()
print(f" Filter: {self.FILTERS[self.filt_idx]} (0 images)")
def _do_del_img(self):
if not self.files:
return
fp = self.files[self.pos]
lp = self._lbl(fp)
if os.path.exists(fp):
os.remove(fp)
if os.path.exists(lp):
os.remove(lp)
self.n_deleted += 1
print(f" Deleted {os.path.basename(fp)}")
self.all_files = [f for f in self.all_files if f != fp]
self.dirty = False
self._refilter()
if not self.files:
self.img = None
self._draw()
return
self.pos = min(self.pos, len(self.files) - 1)
self._load()
self._draw()
# ── navigation ────────────────────────────────────────────────
def _goto(self, new_pos):
if self.dirty:
self._save()
new_pos = max(0, min(new_pos, len(self.files) - 1))
if new_pos == self.pos and self.img is not None:
return
self.pos = new_pos
self._load()
self._draw()
# ── main loop ─────────────────────────────────────────────────
def run(self):
if not self.all_files:
print(f"No images in {self.img_dir}")
return
cv2.namedWindow(self.WIN, cv2.WINDOW_NORMAL)
cv2.resizeWindow(self.WIN, self.ww, self.wh)
cv2.setMouseCallback(self.WIN, self._on_mouse)
self._refilter()
if not self.files:
print("No images match current filter")
return
self._load()
self._draw()
while True:
key = cv2.waitKeyEx(30)
# detect window close (user clicked X)
if cv2.getWindowProperty(self.WIN, cv2.WND_PROP_VISIBLE) < 1:
if self.dirty:
self._save()
break
# detect window resize
try:
r = cv2.getWindowImageRect(self.WIN)
if r[2] > 0 and r[3] > 0 and \
(r[2] != self.ww or r[3] != self.wh):
self.ww, self.wh = r[2], r[3]
self._cache = None
self._draw()
except cv2.error:
pass
if key == -1:
continue
# Quit
if key in (ord("q"), 27):
if self.dirty:
self._save()
break
# Save + next
if key in (32, 13):
self._do_save_next()
continue
# Navigation
if key == K_LEFT:
self._goto(self.pos - 1)
continue
if key == K_RIGHT:
self._goto(self.pos + 1)
continue
# Predict
if key == ord("p"):
self._predict()
self._draw()
continue
# Delete selected box
if key == K_DEL or key == 8:
if 0 <= self.sel < len(self.boxes):
self._push_undo()
self.boxes.pop(self.sel)
self.sel = -1
self.dirty = True
self._draw()
continue
# Delete image
if key == ord("x"):
self._do_del_img()
continue
# Undo
if key == ord("z"):
self._pop_undo()
self._draw()
continue
# Filter
if key == ord("f"):
self._do_filter()
continue
# Number keys -> set class
if ord("1") <= key <= ord("9"):
cls_id = key - ord("1")
if cls_id < len(self.classes):
if 0 <= self.sel < len(self.boxes):
self._push_undo()
self.boxes[self.sel].cls_id = cls_id
self.dirty = True
self.cur_cls = cls_id
self._draw()
continue
# Deselect
if key == ord("e"):
self.sel = -1
self._draw()
continue
cv2.destroyAllWindows()
total = len(self.all_files)
labeled = sum(1 for f in self.all_files if self._has_labels(self._lbl(f)))
empty = sum(1 for f in self.all_files if self._is_empty_label(self._lbl(f)))
unlabeled = total - labeled - empty
print(f"\nDone. Saved: {self.n_saved}, Deleted: {self.n_deleted}")
print(f"Dataset: {total} images, {labeled} labeled, "
f"{empty} empty, {unlabeled} unlabeled")
def run_annotator(img_dir, classes=None):
"""Entry point callable from manage.py or standalone."""
tool = Annotator(img_dir, classes)
tool.run()
def main():
img_dir = sys.argv[1] if len(sys.argv) > 1 else "../../training-data/kulemak/raw"
run_annotator(img_dir)
if __name__ == "__main__":
main()

View file: daemon.py

@@ -12,7 +12,7 @@ import sys
import json
import time
_model = None
_models = {}
def _redirect_stdout_to_stderr():
@@ -26,36 +26,40 @@ def _restore_stdout(real_stdout):
sys.stdout = real_stdout
def load_model():
global _model
if _model is not None:
return _model
def load_model(name="enemy-v1"):
if name in _models:
return _models[name]
import os
from ultralytics import YOLO
model_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "models")
model_path = os.path.join(model_dir, "enemy-v1.pt")
# Prefer TensorRT engine if available, fall back to .pt
engine_path = os.path.join(model_dir, f"{name}.engine")
pt_path = os.path.join(model_dir, f"{name}.pt")
model_path = engine_path if os.path.exists(engine_path) else pt_path
if not os.path.exists(model_path):
raise FileNotFoundError(f"Model not found: {model_path}")
raise FileNotFoundError(f"Model not found: {pt_path} (also checked {engine_path})")
sys.stderr.write(f"Loading YOLO model from {model_path}...\n")
sys.stderr.write(f"Loading YOLO model '{name}' from {model_path}...\n")
sys.stderr.flush()
real_stdout = _redirect_stdout_to_stderr()
try:
_model = YOLO(model_path)
model = YOLO(model_path)
# Warmup with dummy inference (triggers CUDA init)
import numpy as np
dummy = np.zeros((640, 640, 3), dtype=np.uint8)
_model.predict(dummy, verbose=False)
model.predict(dummy, verbose=False)
finally:
_restore_stdout(real_stdout)
sys.stderr.write("YOLO model loaded and warmed up.\n")
_models[name] = model
sys.stderr.write(f"YOLO model '{name}' loaded and warmed up.\n")
sys.stderr.flush()
return _model
return model
def handle_detect(req):
@@ -70,12 +74,15 @@ def handle_detect(req):
img_bytes = base64.b64decode(image_base64)
img = np.array(Image.open(io.BytesIO(img_bytes)))
# PIL gives RGB, but ultralytics model.predict() assumes numpy arrays are BGR
img = img[:, :, ::-1]
conf = req.get("conf", 0.3)
iou = req.get("iou", 0.45)
imgsz = req.get("imgsz", 640)
model_name = req.get("model", "enemy-v1")
model = load_model()
model = load_model(model_name)
real_stdout = _redirect_stdout_to_stderr()
try:

View file: fix_labels.py

@@ -0,0 +1,77 @@
"""Fix labels generated by buggy annotate.py that multiplied by scale."""
import glob
import os
import cv2
import sys
def compute_scale(img_path):
img = cv2.imread(img_path)
if img is None:
return None
h, w = img.shape[:2]
max_h, max_w = 900, 1600
if h > max_h or w > max_w:
return min(max_w / w, max_h / h)
return 1.0
def fix_label(label_path, scale):
if scale == 1.0:
return False
with open(label_path) as f:
lines = f.readlines()
fixed = []
for line in lines:
parts = line.strip().split()
if len(parts) != 5:
fixed.append(line)
continue
cls = parts[0]
cx = float(parts[1]) / scale
cy = float(parts[2]) / scale
w = float(parts[3]) / scale
h = float(parts[4]) / scale
cx = min(cx, 1.0)
cy = min(cy, 1.0)
w = min(w, 1.0)
h = min(h, 1.0)
fixed.append(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")
with open(label_path, "w") as f:
f.writelines(fixed)
return True
def main():
dataset_dir = sys.argv[1] if len(sys.argv) > 1 else "../../training-data/boss-dataset"
dataset_dir = os.path.abspath(dataset_dir)
count = 0
for split in ["train", "valid"]:
label_dir = os.path.join(dataset_dir, split, "labels")
img_dir = os.path.join(dataset_dir, split, "images")
if not os.path.isdir(label_dir):
continue
for label_path in glob.glob(os.path.join(label_dir, "*.txt")):
base = os.path.splitext(os.path.basename(label_path))[0]
img_path = None
for ext in (".jpg", ".jpeg", ".png"):
candidate = os.path.join(img_dir, base + ext)
if os.path.exists(candidate):
img_path = candidate
break
if img_path is None:
print(f" WARNING: no image for {label_path}")
continue
scale = compute_scale(img_path)
if scale is None:
print(f" WARNING: can't read {img_path}")
continue
if fix_label(label_path, scale):
count += 1
# Show first few for verification
if count <= 3:
with open(label_path) as f:
print(f" Fixed {os.path.basename(label_path)}: {f.read().strip()}")
print(f"\nFixed {count} label files")
if __name__ == "__main__":
main()

View file: manage.py

@@ -0,0 +1,353 @@
"""
Unified CLI for the YOLO detection pipeline.
Subcommands (all take a positional boss name):
build kulemak [--ratio 0.85] [--seed 42] Split raw/ -> dataset/
train kulemak [--epochs 200] [--name X] Train model (auto-increments name)
runs kulemak List training runs + metrics table
annotate kulemak [dir] Launch annotation GUI
prelabel kulemak [dir] [--model boss-kulemak] Auto-label unlabeled images
"""
import argparse
import csv
import os
import random
import re
import shutil
import sys
# ── Shared constants ─────────────────────────────────────────────
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
REPO_ROOT = os.path.abspath(os.path.join(SCRIPT_DIR, "..", ".."))
RUNS_DIR = os.path.join(REPO_ROOT, "runs", "detect")
MODELS_DIR = os.path.join(SCRIPT_DIR, "models")
def raw_dir(boss):
return os.path.join(REPO_ROOT, "training-data", boss, "raw")
def dataset_dir(boss):
return os.path.join(REPO_ROOT, "training-data", boss, "dataset")
def boss_classes(boss):
"""Single-class list for the given boss."""
return [boss]
# ── build ────────────────────────────────────────────────────────
def cmd_build(args):
"""Scan raw/, split labeled+empty images into dataset train/valid."""
import glob as g
boss = args.boss
raw = raw_dir(boss)
dataset = dataset_dir(boss)
classes = boss_classes(boss)
images = sorted(g.glob(os.path.join(raw, "*.jpg")))
labeled, empty = [], []
skipped = 0
for img in images:
txt = os.path.splitext(img)[0] + ".txt"
if not os.path.exists(txt):
skipped += 1
continue
with open(txt) as f:
content = f.read().strip()
if content:
labeled.append(img)
else:
empty.append(img)
print(f"Raw: {len(images)} images -- {len(labeled)} labeled, "
f"{len(empty)} empty (negative), {skipped} unlabeled (skipped)")
if not labeled and not empty:
print("Nothing to build.")
return
rng = random.Random(args.seed)
rng.shuffle(labeled)
rng.shuffle(empty)
def split(lst, ratio):
n = max(1, round(len(lst) * ratio)) if lst else 0
return lst[:n], lst[n:]
train_labeled, valid_labeled = split(labeled, args.ratio)
train_empty, valid_empty = split(empty, args.ratio)
train_files = train_labeled + train_empty
valid_files = valid_labeled + valid_empty
# Wipe and recreate
for sub in ("train/images", "train/labels", "valid/images", "valid/labels"):
d = os.path.join(dataset, sub)
if os.path.exists(d):
shutil.rmtree(d)
os.makedirs(d)
def copy_files(file_list, split_name):
for img in file_list:
txt = os.path.splitext(img)[0] + ".txt"
base = os.path.basename(img)
base_txt = os.path.splitext(base)[0] + ".txt"
shutil.copy2(img, os.path.join(dataset, split_name, "images", base))
shutil.copy2(txt, os.path.join(dataset, split_name, "labels", base_txt))
copy_files(train_files, "train")
copy_files(valid_files, "valid")
# Write data.yaml
yaml_path = os.path.join(dataset, "data.yaml")
with open(yaml_path, "w") as f:
f.write(f"train: {os.path.join(dataset, 'train', 'images')}\n")
f.write(f"val: {os.path.join(dataset, 'valid', 'images')}\n\n")
f.write(f"nc: {len(classes)}\n")
f.write(f"names: {classes}\n")
# Delete stale label caches
for root, dirs, files in os.walk(dataset):
for fn in files:
if fn == "labels.cache":
os.remove(os.path.join(root, fn))
print(f"\nTrain: {len(train_files)} ({len(train_labeled)} labeled + {len(train_empty)} empty)")
print(f"Valid: {len(valid_files)} ({len(valid_labeled)} labeled + {len(valid_empty)} empty)")
print(f"data.yaml: {yaml_path}")
# ── runs ─────────────────────────────────────────────────────────
def _parse_simple_yaml(path):
"""Parse flat key: value YAML without requiring PyYAML."""
result = {}
if not os.path.exists(path):
return result
with open(path) as f:
for line in f:
line = line.strip()
if not line or line.startswith("#"):
continue
if ": " in line:
key, val = line.split(": ", 1)
# Try to cast to int/float/bool/null
val = val.strip()
if val == "null" or val == "~":
val = None
elif val == "true":
val = True
elif val == "false":
val = False
else:
try:
val = int(val)
except ValueError:
try:
val = float(val)
except ValueError:
pass
result[key.strip()] = val
return result
def cmd_runs(args):
"""List training runs with metrics for the given boss."""
boss = args.boss
prefix = f"{boss}-v"
if not os.path.isdir(RUNS_DIR):
print(f"No runs directory: {RUNS_DIR}")
return
run_dirs = sorted(
[d for d in os.listdir(RUNS_DIR)
if os.path.isdir(os.path.join(RUNS_DIR, d)) and d.startswith(prefix)],
key=lambda d: _run_sort_key(d)
)
if not run_dirs:
print(f"No {prefix}* runs found.")
return
rows = []
for name in run_dirs:
run_path = os.path.join(RUNS_DIR, name)
args_file = os.path.join(run_path, "args.yaml")
csv_file = os.path.join(run_path, "results.csv")
model = epochs_cfg = imgsz = "?"
if os.path.exists(args_file):
cfg = _parse_simple_yaml(args_file)
model = os.path.splitext(os.path.basename(str(cfg.get("model", "?"))))[0]
epochs_cfg = str(cfg.get("epochs", "?"))
imgsz = str(cfg.get("imgsz", "?"))
mAP50 = mAP50_95 = prec = rec = "-"
actual_epochs = "?"
status = "unknown"
if os.path.exists(csv_file):
with open(csv_file) as f:
reader = csv.DictReader(f)
best_map = -1
best_row = None
last_epoch = 0
for row in reader:
row = {k.strip(): v.strip() for k, v in row.items()}
ep = int(row.get("epoch", 0))
last_epoch = max(last_epoch, ep)
val = float(row.get("metrics/mAP50(B)", 0))
if val > best_map:
best_map = val
best_row = row
if best_row:
mAP50 = f"{float(best_row.get('metrics/mAP50(B)', 0)):.3f}"
mAP50_95 = f"{float(best_row.get('metrics/mAP50-95(B)', 0)):.3f}"
prec = f"{float(best_row.get('metrics/precision(B)', 0)):.3f}"
rec = f"{float(best_row.get('metrics/recall(B)', 0)):.3f}"
actual_epochs = str(last_epoch)
try:
if int(epochs_cfg) > last_epoch + 1:
status = "early-stop"
else:
status = "done"
except ValueError:
status = "?"
epoch_str = f"{actual_epochs}/{epochs_cfg}"
rows.append((name, model, epoch_str, imgsz, mAP50, mAP50_95, prec, rec, status))
headers = ("Run", "Model", "Epochs", "ImgSz", "mAP50", "mAP50-95", "P", "R", "Status")
widths = [max(len(h), max(len(r[i]) for r in rows)) for i, h in enumerate(headers)]
header_line = " ".join(h.ljust(w) for h, w in zip(headers, widths))
print(header_line)
print(" ".join("-" * w for w in widths))
for row in rows:
print(" ".join(val.ljust(w) for val, w in zip(row, widths)))
def _run_sort_key(name):
m = re.search(r"(\d+)", name)
return int(m.group(1)) if m else 0
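The numeric sort key keeps run names in version order rather than lexicographic order, where `v10` would sort before `v2`. A standalone sketch with hypothetical run names:

```python
import re

def run_sort_key(name: str) -> int:
    # The first integer in the run name decides ordering, so v2 < v10.
    m = re.search(r"(\d+)", name)
    return int(m.group(1)) if m else 0

runs = ["kulemak-v10", "kulemak-v2", "kulemak-v1"]
print(sorted(runs))                    # lexicographic: v1, v10, v2
print(sorted(runs, key=run_sort_key))  # numeric: v1, v2, v10
```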
# ── train ────────────────────────────────────────────────────────
def cmd_train(args):
"""Train a YOLO model, auto-incrementing run name per boss."""
boss = args.boss
# Auto-increment name: {boss}-v1, {boss}-v2, ...
if args.name is None:
prefix = f"{boss}-v"
highest = 0
if os.path.isdir(RUNS_DIR):
for d in os.listdir(RUNS_DIR):
m = re.match(re.escape(prefix) + r"(\d+)", d)
if m:
highest = max(highest, int(m.group(1)))
args.name = f"{prefix}{highest + 1}"
print(f"Auto-assigned run name: {args.name}")
if args.data is None:
args.data = os.path.join(dataset_dir(boss), "data.yaml")
if not os.path.exists(args.data):
print(f"data.yaml not found: {args.data}")
print(f"Run 'python manage.py build {boss}' first.")
return
# Pass boss name so train.py can name the output model
args.boss = boss
from train import run_training
run_training(args)
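The auto-increment naming above can be factored as a pure function for illustration; the directory names here are hypothetical:

```python
import re

def next_run_name(existing_dirs, boss):
    """Return {boss}-v{N+1}, where N is the highest existing version for this boss."""
    prefix = f"{boss}-v"
    highest = 0
    for d in existing_dirs:
        m = re.match(re.escape(prefix) + r"(\d+)", d)
        if m:
            highest = max(highest, int(m.group(1)))
    return f"{prefix}{highest + 1}"

print(next_run_name(["kulemak-v1", "kulemak-v3"], "kulemak"))  # → kulemak-v4
print(next_run_name([], "azriel"))                             # → azriel-v1
```

Gaps in the sequence are fine: the next name is always one past the highest, so deleting an old run never causes a name collision.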
# ── annotate ─────────────────────────────────────────────────────
def cmd_annotate(args):
"""Launch annotation GUI for the given boss."""
boss = args.boss
img_dir = args.dir or raw_dir(boss)
classes = boss_classes(boss)
from annotate import run_annotator
run_annotator(img_dir, classes)
# ── prelabel ─────────────────────────────────────────────────────
_PRELABEL_MODEL_DEFAULT = "__auto__"  # sentinel: resolved to boss-{boss} at runtime

def cmd_prelabel(args):
    """Auto-label unlabeled images."""
    boss = args.boss
    if args.img_dir is None:
        args.img_dir = raw_dir(boss)
    if args.model == _PRELABEL_MODEL_DEFAULT:
        args.model = f"boss-{boss}"
    from prelabel import run_prelabel
    run_prelabel(args)
# ── CLI ──────────────────────────────────────────────────────────
def main():
parser = argparse.ArgumentParser(
description="YOLO detection pipeline manager",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=__doc__,
)
sub = parser.add_subparsers(dest="command")
# annotate
p = sub.add_parser("annotate", help="Launch annotation GUI")
p.add_argument("boss", help="Boss name (e.g. kulemak)")
p.add_argument("dir", nargs="?", default=None, help="Image directory (default: training-data/{boss}/raw)")
# build
p = sub.add_parser("build", help="Build dataset from raw/")
p.add_argument("boss", help="Boss name (e.g. kulemak)")
p.add_argument("--ratio", type=float, default=0.85, help="Train ratio (default 0.85)")
p.add_argument("--seed", type=int, default=42, help="Random seed")
# train
p = sub.add_parser("train", help="Train YOLO model")
p.add_argument("boss", help="Boss name (e.g. kulemak)")
p.add_argument("--data", default=None, help="Path to data.yaml")
p.add_argument("--model", default="yolo11s", help="YOLO model variant")
p.add_argument("--epochs", type=int, default=200, help="Training epochs")
p.add_argument("--imgsz", type=int, default=1280, help="Image size")
p.add_argument("--batch", type=int, default=8, help="Batch size")
p.add_argument("--device", default="0", help="CUDA device")
p.add_argument("--name", default=None, help="Run name (auto-increments if omitted)")
# runs
p = sub.add_parser("runs", help="List training runs with metrics")
p.add_argument("boss", help="Boss name (e.g. kulemak)")
# prelabel
p = sub.add_parser("prelabel", help="Pre-label unlabeled images")
p.add_argument("boss", help="Boss name (e.g. kulemak)")
p.add_argument("img_dir", nargs="?", default=None, help="Image directory")
p.add_argument("--model", default=_PRELABEL_MODEL_DEFAULT, help="Model name in models/ (default: boss-{boss})")
p.add_argument("--conf", type=float, default=0.20, help="Confidence threshold")
args = parser.parse_args()
if args.command is None:
parser.print_help()
return
commands = {
"annotate": cmd_annotate,
"build": cmd_build,
"train": cmd_train,
"runs": cmd_runs,
"prelabel": cmd_prelabel,
}
commands[args.command](args)
if __name__ == "__main__":
main()


@ -0,0 +1,78 @@
"""
Pre-label images using an existing YOLO model.
Generates .txt labels for unlabeled images so the annotator can load them for review.
Usage:
python prelabel.py [image_dir] [--model boss-v1] [--conf 0.20]
"""
import argparse
import glob
import os
def run_prelabel(args):
"""Run pre-labeling. Called from main() or manage.py."""
img_dir = os.path.abspath(args.img_dir)
model_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "models", f"{args.model}.pt")
if not os.path.exists(model_path):
print(f"Model not found: {model_path}")
return
from ultralytics import YOLO
model = YOLO(model_path)
extensions = ("*.jpg", "*.jpeg", "*.png")
files = []
for ext in extensions:
files.extend(glob.glob(os.path.join(img_dir, ext)))
files.sort()
# Only process unlabeled images
unlabeled = []
for f in files:
label_path = os.path.splitext(f)[0] + ".txt"
if not os.path.exists(label_path):
unlabeled.append(f)
print(f"Found {len(files)} images, {len(files) - len(unlabeled)} already labeled, {len(unlabeled)} to pre-label")
if not unlabeled:
print("All images already have labels!")
return
labeled = 0
skipped = 0
for filepath in unlabeled:
results = model(filepath, conf=args.conf, verbose=False)
boxes = results[0].boxes
if len(boxes) == 0:
skipped += 1
continue
label_path = os.path.splitext(filepath)[0] + ".txt"
with open(label_path, "w") as f:
for box in boxes:
cls = int(box.cls[0])
xywhn = box.xywhn[0] # normalized center x, y, w, h
cx, cy, w, h = xywhn.tolist()
f.write(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}\n")
labeled += 1
fname = os.path.basename(filepath)
        conf = boxes.conf.max().item()  # report the highest-confidence box
print(f" {fname}: {len(boxes)} box(es), best conf={conf:.2f}")
print(f"\nPre-labeled {labeled} images, skipped {skipped} (no detections)")
def main():
parser = argparse.ArgumentParser(description="Pre-label images with YOLO model")
parser.add_argument("img_dir", nargs="?", default="../../training-data/kulemak/raw")
parser.add_argument("--model", default="boss-kulemak", help="Model name in models/")
parser.add_argument("--conf", type=float, default=0.20, help="Confidence threshold")
args = parser.parse_args()
run_prelabel(args)
if __name__ == "__main__":
main()
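The label lines written by `run_prelabel` follow the standard YOLO format: class index plus box center and size, normalized to image dimensions. A minimal sketch of the pixel-to-normalized conversion (the box coordinates and image size are made up):

```python
def yolo_label_line(cls_id, x1, y1, x2, y2, img_w, img_h):
    # Convert a pixel-space box to "cls cx cy w h" with coordinates in [0, 1].
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

print(yolo_label_line(0, 320, 180, 960, 540, 1280, 720))
# → 0 0.500000 0.500000 0.500000 0.500000
```

This is the same encoding `box.xywhn` already returns from ultralytics, which is why the prelabel loop can write its values directly.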


@ -1,30 +1,24 @@
"""
Training script for YOLO enemy/boss detection model.
Usage:
python train.py --data path/to/data.yaml --epochs 200
python train.py --data path/to/data.yaml --model yolo11m --imgsz 1280 --epochs 300
Expects YOLO-format dataset with data.yaml pointing to train/val image directories.
Export from Roboflow in "YOLOv11" format.
"""
import argparse
import glob
import os
def run_training(args):
"""Run YOLO training. Called from main() or manage.py."""
from ultralytics import YOLO
model = YOLO(f"{args.model}.pt")
model.train(
data=args.data,
@ -33,25 +27,71 @@ def main():
batch=args.batch,
device=args.device,
name=args.name,
patience=30,
# Learning rate (fine-tuning pretrained, not from scratch)
lr0=0.001,
lrf=0.01,
cos_lr=True,
warmup_epochs=5,
weight_decay=0.001,
# Augmentation tuned for boss glow/morph effects
hsv_h=0.03,
hsv_s=0.8,
hsv_v=0.6,
scale=0.7,
translate=0.2,
degrees=5.0,
mixup=0.15,
close_mosaic=15,
erasing=0.3,
workers=0, # avoid multiprocessing paging file issues on Windows
save=True,
save_period=10,
plots=True,
verbose=True,
)
# Copy best weights to models directory
# Find best.pt — try the trainer's save_dir first, then scan runs/detect/
best_path = None
save_dir = getattr(model.trainer, "save_dir", None)
if save_dir:
candidate = os.path.join(str(save_dir), "weights", "best.pt")
if os.path.exists(candidate):
best_path = candidate
if not best_path:
run_base = os.path.join("runs", "detect")
candidates = sorted(glob.glob(os.path.join(run_base, f"{args.name}*", "weights", "best.pt")))
best_path = candidates[-1] if candidates else os.path.join(run_base, args.name, "weights", "best.pt")
output_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "models")
os.makedirs(output_dir, exist_ok=True)
# If boss is set (from manage.py), deploy as boss-{boss}.pt; otherwise use run name
boss = getattr(args, "boss", None)
model_filename = f"boss-{boss}.pt" if boss else f"{args.name}.pt"
output_path = os.path.join(output_dir, model_filename)
if os.path.exists(best_path):
import shutil
shutil.copy2(best_path, output_path)
print(f"\nBest model copied to: {output_path}")
else:
        print(f"\nWarning: {best_path} not found -- check training output")
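The deployment naming used when copying `best.pt` can be summarized in a small sketch (`boss` is set by `manage.py`; direct `train.py` invocations leave it unset):

```python
def deployed_model_filename(run_name, boss=None):
    # manage.py sets boss, giving a stable per-boss filename the app can load;
    # a bare train.py run falls back to the run name.
    return f"boss-{boss}.pt" if boss else f"{run_name}.pt"

print(deployed_model_filename("kulemak-v3", boss="kulemak"))  # → boss-kulemak.pt
print(deployed_model_filename("enemy-v1"))                    # → enemy-v1.pt
```

The stable per-boss name means the in-game detector never needs to know which training run produced the current weights.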
def main():
parser = argparse.ArgumentParser(description="Train YOLO enemy/boss detector")
parser.add_argument("--data", required=True, help="Path to data.yaml")
parser.add_argument("--model", default="yolo11s", help="YOLO model variant (yolo11n, yolo11s, yolo11m)")
parser.add_argument("--epochs", type=int, default=200, help="Training epochs")
parser.add_argument("--imgsz", type=int, default=1280, help="Image size")
parser.add_argument("--batch", type=int, default=8, help="Batch size")
parser.add_argument("--device", default="0", help="CUDA device (0, cpu)")
parser.add_argument("--name", default="enemy-v1", help="Run name")
args = parser.parse_args()
run_training(args)
if __name__ == "__main__":
