PACE Compass

Private dashboard. Enter name + PIN.


Pierre's data/insights department + career dashboard

Department at a glance

Five overlapping workstreams. Not one thing.

PACE certification on track

101 ✓ · 201 ✓ · 301 Phase 13 prep

5/6 modules · 83%

Data-Workbench split

3 locations: skill, pace-local, standalone repo. Consolidation candidate.

Insight pipelines 1/4 warm

panthera-titration warm · email-analysis, genai-naep, session-analytics cold

Verified research spec'd

Perplexity pipeline designed, working, not yet wired to brain DB

Knowledge / brain live

27 learnings · 2 CF deploys · brain DB v2.sessions streaming

Overall milestone progress

0 / 0 done · 0%
Live deploys: pace-study.pages.dev · data-workbench.pages.dev · pace-compass.pages.dev (this one) — shared PIN via pace_pin_auth_v1.

Milestones — tick as you go

Persists per-device via localStorage. Each group shows its own %.
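The per-group percentage each section shows can be sketched as a small function over the tick state. A minimal sketch — the `"group/milestone"` key structure here is a hypothetical mirror of what the dashboard keeps in localStorage, not its actual schema:

```python
# Hypothetical tick state keyed by "group/milestone", mirroring the
# shape the dashboard persists in localStorage.
ticks = {
    "pace-certification/CAM_DS_101": True,
    "pace-certification/CAM_DS_201": True,
    "pace-certification/CAM_DS_301-rubric": False,
}

def group_percent(ticks: dict[str, bool], group: str) -> int:
    """Percent of milestones ticked within one group (0 if the group is empty)."""
    items = [done for key, done in ticks.items() if key.startswith(group + "/")]
    return round(100 * sum(items) / len(items)) if items else 0

print(group_percent(ticks, "pace-certification"))  # → 67
```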

PACE certification

  • CAM_DS_101 Applied Statistics
    Complete · course scraped to canvas-export/
  • CAM_DS_201 Supervised ML
    Complete · pace-course2 portfolio + LinkedIn post drafted
  • CAM_DS_301 NLP — rubric 48/48
    All rubric items covered in v3 pipeline
  • CAM_DS_301 walkthrough deployed
    pace-study.pages.dev live
  • CAM_DS_301 submission notebook (basic)
    Run patched notebook top-to-bottom on A100 · sync back to repo
  • CAM_DS_301 submitted
    Deadline 2026-04-27 17:00 UK

8-project portfolio (DeepMind roadmap)

  • 1. Education Data Pipeline
    in_progress per yaml · but: data-workbench infra already ships this — rename?
  • 2. Student Success Predictor
    Needs Sparx task-attempt data access
  • 3. Student Error Type Classifier
    Multi-class; random-forest / XGBoost
  • 4. Learning Trajectory Model
    Time-series / LSTM — DeepMind-shaped
  • 5. ML Evaluation Pipeline
    Cross-validation, metrics for educational ML
  • 6. Adaptive Learning Recommender
    RL / recommender systems
  • 7. Interpretable ML for Teachers
    SHAP / LIME explanations
  • 8. Education ML Research Paper
    Combine PACE techniques with Sparx data

Vertex AI learning NEW

Detailed roadmap on the Vertex AI tab. Checked items sync.

  • GCP + Vertex AI account / free tier
    Billing alerts set ≤ £5/month
  • Vertex AI Studio walkthrough
    Prompt design + model comparison in console
  • Model Garden: deploy an endpoint
    Pick one model · deploy · call from Python SDK
  • Gemini via Vertex (vs direct API)
    Grounding, function calling, system instructions
  • Vector Search (formerly Matching Engine)
    Embed · index · query — brain DB doc_chunks equivalent
  • Vertex AI Pipelines (KFP)
    One real DAG — e.g. the PACE emotion pipeline
  • Custom training job
    BYO container · TPU/GPU pricing sanity-check
  • Agent Builder: one agent with tool use
    Compare to current brain/persona architecture
  • Portfolio project: port PACE → Vertex
    Batch prediction job for emotion classifier · publishable

Department hygiene

  • Commit uncommitted work
    pace-nlp-project + brain-vault · workbench tooling + patched notebook + learnings
  • Wire data-workbench guard hook
    One line into acebuddy/scripts/file-guardrail-hook.sh
  • Consolidate workbench (pace-local → standalone)
    After 2026-04-27 submission
  • Wire perplexity-api-test → brain DB
    Tier-classified writes to learnings + research_outputs
  • Email-analysis pipeline — first pass
    127k records in brain DB; attention pattern signal
  • Reconcile career.yaml vs sabbatical state
    See Career tab
  • Scrape Career Toolkits + Owning Your Career courses
    Canvas IDs 519 + 520 · already paid for

Career plan — current yaml

Target

DeepMind Research Engineer — AI for Education

Timeline: 2030 · Angle: PhD in learning mechanisms + Sparx data at scale

Career state — answered 2026-04-18: 0.8 FTE Sparx since Jan 2023 (both realities live)

career.yaml confirmed accurate: current_role: "School Success Coach at Sparx Learning (0.8 FTE)". Start date added (current_role_start: "2023-01") + last_reviewed: "2026-04-18". Not transitioning — both Sparx (present) and DeepMind 2030 (long-range) coexist.

Saves locally. Next session I'll read this and update yaml accordingly.

DeepMind 2030 — answered 2026-04-18: still live

Portfolio — from yaml

# · Project · Status · Career score
1 · Education Data Pipeline · in_progress · —
2 · Student Success Predictor · planned · logistic-regression: 9, nn: 10
3 · Student Error Type Classifier · planned · random-forest: 8, xgboost: 9
4 · Learning Trajectory Model · planned · neural-networks: 10
5 · ML Evaluation Pipeline · planned · evaluation-metrics: 8
6 · Adaptive Learning Recommender · planned · RL / recommender systems
7 · Interpretable ML for Teachers · planned · explainability
8 · Education ML Research Paper · planned · publication

Honest observation

Your actual current IP: cross-environment data tooling, safe-execution discipline for LLM workflows, verified-research pipelines, brain+persona architecture. The DeepMind-shaped yaml reads like a 2024 pre-LLM-era portfolio. Not wrong — but the frontier moved while you were building. Worth a fresh pass on whether "DeepMind 2030" or "AI-infra principal / staff at a well-funded research-adjacent team" is the better-aimed shot. Park or pick.

LinkedIn templates in yaml

Three post templates exist: technique_insight, sparx_application, portfolio_update. Hashtags include #DataScience #MachineLearning #EdTech #AIinEducation #CambridgeDataScience.

Vertex AI learning track

GCP's managed ML platform. Gap in current stack (AWS/local/HF). Cloud-neutral pedigree + Google-adjacent for DeepMind line. Budget-first approach.

Cost guardrail: set billing alert at £5/month before any experimentation. Vertex compute can burn fast (TPU/A100 per-minute). Default to Model Garden + Gemini API (token-priced) for most learning. Only spin up custom training jobs when you know the run duration.
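The guardrail above is easy to sanity-check with back-of-envelope arithmetic. The rates below are illustrative assumptions, not quoted Vertex prices — check current pricing before trusting the numbers:

```python
# Back-of-envelope budget check. All rates are ILLUSTRATIVE ASSUMPTIONS,
# not real Vertex pricing.
GBP_BUDGET = 5.00  # monthly alert threshold from the guardrail

def hours_within_budget(cost_per_hour_gbp: float, budget_gbp: float = GBP_BUDGET) -> float:
    """How long an always-on endpoint fits inside the monthly budget."""
    return budget_gbp / cost_per_hour_gbp

def tokens_within_budget(cost_per_1k_tokens_gbp: float, budget_gbp: float = GBP_BUDGET) -> int:
    """How many tokens a pay-per-token model allows for the same money."""
    return int(budget_gbp / cost_per_1k_tokens_gbp * 1000)

# Assumed rates: a small endpoint machine at £0.50/h vs a token-priced
# model at £0.0002 per 1k tokens.
print(hours_within_budget(0.50))     # → 10.0 (an idle endpoint burns the budget in ~10h)
print(tokens_within_budget(0.0002))  # → 25000000
```

Which is the point: at these assumed rates, the same £5 buys ten endpoint-hours or tens of millions of tokens — hence "default to token-priced".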

Roadmap

1. Foundations ~2h

  • Create GCP project, enable Vertex AI API
    Free tier: $300 credit 90 days · set hard budget cap
  • Install + authenticate gcloud CLI
    gcloud auth application-default login
  • Vertex AI Studio — prompt design console
    Compare Gemini 2.5 Flash vs Pro on a PACE review classification task

2. Model Garden ~3h

  • Deploy one endpoint from Model Garden
    e.g. Llama-3.1-8B or Gemma-7B · note cost/hr of the smallest machine
  • Call endpoint from Python SDK
    google-cloud-aiplatform
  • Undeploy endpoint (don't forget — billing)
    Endpoints accrue cost while running, even idle

3. Gemini via Vertex ~2h

  • Gemini 2.5 basic call (pay-per-token)
    Cheaper than endpoint deploys for learning
  • Grounding with Google Search
    Built-in retrieval for freshness
  • Function calling / tool use
    Structured output via schema

4. Vector Search ~4h

  • Generate embeddings (text-embedding model)
    Compare to OpenAI ada for cost + quality
  • Create Vector Search index
    Brain DB doc_chunks equivalent · ~1k docs to start
  • Query index from Python
    Nearest neighbour + metadata filter
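The embed · index · query loop above has a simple shape worth internalising before touching the managed service. This is a pure-Python stand-in, not the Vector Search API — brute-force cosine similarity over an in-memory "index", with the metadata filter applied first; all names are invented:

```python
import math

# Toy stand-in for a Vector Search query: metadata filter, then
# nearest neighbours by cosine similarity. The managed service does
# this at scale behind an index; nothing here is its real API.
index = [
    {"id": "doc1", "vec": [1.0, 0.0], "meta": {"source": "brain"}},
    {"id": "doc2", "vec": [0.9, 0.1], "meta": {"source": "email"}},
    {"id": "doc3", "vec": [0.0, 1.0], "meta": {"source": "brain"}},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def query(index, vec, source=None, k=1):
    """Nearest neighbours by cosine similarity, optionally metadata-filtered."""
    pool = [d for d in index if source is None or d["meta"]["source"] == source]
    return sorted(pool, key=lambda d: cosine(d["vec"], vec), reverse=True)[:k]

print([d["id"] for d in query(index, [1.0, 0.05], source="brain")])  # → ['doc1']
```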

5. Pipelines (Kubeflow) ~6h

  • Hello-world pipeline (2 components)
    KFP SDK basics
  • Port PACE emotion pipeline as a DAG
    load → clean → classify → report · portfolio-publishable
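Before porting to KFP, the DAG above is worth writing as plain functions — each step then maps one-to-one onto a KFP component. A sketch with placeholder bodies (the real logic lives in the PACE v3 pipeline; the heuristic classifier here is purely illustrative):

```python
# load → clean → classify → report as plain functions; each maps to one
# KFP component when ported. Step bodies are placeholders.
def load() -> list[str]:
    return ["Great course!!", "  too fast  "]

def clean(reviews: list[str]) -> list[str]:
    return [r.strip().lower() for r in reviews]

def classify(reviews: list[str]) -> list[str]:
    # Toy heuristic standing in for the real emotion classifier.
    return ["positive" if "great" in r else "negative" for r in reviews]

def report(labels: list[str]) -> dict[str, int]:
    return {label: labels.count(label) for label in set(labels)}

print(report(classify(clean(load()))))
```

Keeping the steps pure (data in, data out) is what makes the later KFP wrapping mechanical.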

6. Custom training ~4h

  • BYO training container
    Dockerfile + training script
  • Hyperparameter tuning job
    Small search, single region, time-capped

7. Agent Builder ~3h

  • Build one agent with two tools
    Compare UX + cost to your brain/persona system

8. Portfolio project capstone

  • PACE emotion classifier → Vertex batch prediction
    Deploy · schedule · cost-report · write-up for LinkedIn
  • Full PACE v3 pipeline as a Vertex Pipeline
    Rubric-to-cloud port · DeepMind-relevant narrative

Good resources (unopened)
  • cloud.google.com/vertex-ai/docs — official
  • Google Cloud Skills Boost (formerly Qwiklabs) — hands-on with sandbox projects
  • github.com/GoogleCloudPlatform/vertex-ai-samples — copyable notebooks
  • "Prompt design with Gemini" + "Vertex Vector Search" learning paths on Skills Boost

Open threads — set priority

Edit priority numbers → auto-sort. 1 = ship next. Saves locally. Colour: 1-3 red, 4-6 amber, 7+ green.
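The sort-and-band behaviour described above can be sketched in a few lines — thread names here are invented examples, not the live hold-list:

```python
# Sketch of the hold-list: sort threads by priority number, then
# colour-band them (1-3 red, 4-6 amber, 7+ green).
threads = [
    {"name": "wire perplexity → brain DB", "priority": 3},
    {"name": "email-analysis first pass", "priority": 7},
    {"name": "PACE submission", "priority": 1},
]

def colour(priority: int) -> str:
    if priority <= 3:
        return "red"
    if priority <= 6:
        return "amber"
    return "green"

ordered = sorted(threads, key=lambda t: t["priority"])
print([(t["name"], colour(t["priority"])) for t in ordered])
```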

Miss a thread? Priorities are yours — I'll pick up whichever sits at #1 next session.

Panel — next block of time

Fork: 9 days to PACE · ambiguous career state · uncommitted work · new Vertex track.

Architect

Consolidate workbench into standalone repo now. One night's work. Commit everything, wire guard hook, lift to ~/projects/data-workbench/. Pays forward into every future insight pipeline — including Vertex.

Risk: not on the rubric, not before deadline.

Cognitive Load Specialist

Commit everything today (30 min — protects state). Defer consolidation until after 2026-04-27. Start Vertex AI only after submission. You have a known ADHD pattern of starting rebuilds mid-project. Submit PACE clean, then refactor, then reopen career.

Risk: uncommitted work lingers 9 more days.

Behavioural Product Designer

Radios answered 2026-04-18: career state = "other/complicated" (both realities live — 0.8 FTE Sparx since Jan 2023 and DeepMind 2030 as long-range aim), DeepMind 2030 = live (north star). career.yaml updated with start_date + last_reviewed. No transition underway; the complexity is holding both simultaneously.

Risk resolved: career.yaml was accurate, not stale.

My read (subtractive)

Order:

  1. Finish PACE submission (9 days to 2026-04-27 17:00 — A100 run complete, notebook at basic/basic_notebook.ipynb; submit via Canvas assignment 3354 at fourthrev.instructure.com as .ipynb + report.md or zip)
  2. Then Vertex AI track · start with Foundations (£0 budget, 2h) + Gemini-via-Vertex (token-priced, small)
  3. Then wire perplexity → brain DB
  4. Status check on cold pipelines: email-analysis / genai-naep / session-analytics — alive or parked?

Retire: the urge to restructure the department before submitting. The career.yaml update is all the restructuring this month needs.

Retired 2026-04-18: workbench-lift merge, guard-hook wiring, pace-compass git init — all shipped.

Quick actions

What I'm tracking for you between sessions

Tick state, hold-list priorities, career radio answers all live in localStorage on this device. Use Export state to back up. In next session I'll read this before planning.